
Federated Learning for the Healthcare Metaverse

The emerging “metaverse” concept is bringing together multiple technologies like virtual reality, augmented reality, and artificial intelligence to create persistent, shared 3D virtual spaces. The healthcare industry aims to harness these technologies to build the “healthcare metaverse” – immersive virtual worlds for medical education, collaboration, real-time patient monitoring, and more.

A key challenge in constructing these futuristic healthcare environments is enabling cross-institutional data sharing and collaboration while preserving data privacy. This is where federated learning comes in – it is a distributed machine learning approach that enables model training on decentralized data located on user devices without exchanging actual patient data.

How Federated Learning Works

Federated learning allows training machine learning models directly on remote devices (like wearables, medical devices, hospital servers etc.) without direct access to raw patient data. The devices compute updates locally after processing data on-device. Only these derived updates are shared for aggregated model improvement on a central server. This preserves data privacy and reduces security risks during collaboration.
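The round trip described above (local training on-device, sharing only derived updates, server-side aggregation) can be sketched in a few lines. The following is a minimal, illustrative FedAvg-style loop on a toy linear model; the three simulated "hospitals" and their synthetic data are assumptions for the sketch, and real deployments apply the same pattern to neural-network parameters with secure aggregation on top:

```python
# Minimal sketch of federated averaging (FedAvg). The "model" is a plain
# weight vector; real systems use the same loop with network parameters.
import random

def local_update(weights, local_data, lr=0.1):
    """Train locally on-device; only the updated weights leave the device."""
    w = list(weights)
    for x, y in local_data:  # toy SGD for a linear model y ~ w . x
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def aggregate(updates, sizes):
    """Server-side weighted average of client updates (the FedAvg step)."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(dim)]

# Three simulated "hospitals", each holding private data that never leaves
# the site; only weight vectors travel to the aggregation server.
random.seed(0)
true_w = [2.0, -1.0]

def make_data(n):
    data = []
    for _ in range(n):
        x = [random.random(), random.random()]
        y = sum(wi * xi for wi, xi in zip(true_w, x))
        data.append((x, y))
    return data

clients = [make_data(20), make_data(30), make_data(50)]
global_w = [0.0, 0.0]
for _ in range(50):  # one federated round per iteration
    updates = [local_update(global_w, d) for d in clients]
    global_w = aggregate(updates, [len(d) for d in clients])

print([round(w, 2) for w in global_w])  # converges toward true_w
```

Only the weight vectors returned by `local_update` ever leave a client; the raw (x, y) pairs stay on-device, which is exactly the privacy property described above.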

Healthcare Metaverse Applications

Several potential healthcare metaverse use cases can benefit from privacy-preserving federated learning:

  • Multi-center observational studies and clinical trials leveraging patient data across organizations without violating regulations.
  • Early warning systems tapping into devices monitoring patient vital signs at scale to detect emerging health threats.
  • Logistical systems predicting equipment and medicine demand across healthcare networks to optimize real-time provisioning.
  • Diagnostic aids allowing medical AI models to learn from diverse patient populations and scanning devices while preventing data leakage.

Addressing Key Challenges

To fully realize the promises of federated learning for the healthcare metaverse, several important challenges need resolution:

  • Developing robust data security and encryption standards to prevent rare data leaks during model updates.
  • Careful benchmarking with simulation studies to account for system heterogeneity and unreliable devices in healthcare settings.
  • Designing incentive mechanisms and cost models to justify participation of small clinics and independent providers in collaborative networks.
  • Building regulatory sandboxes and data transparency tools to verify proper use of sensitive patient data.

The Path Forward

In the near future, a healthcare metaverse built on federated learning principles could enable various stakeholders – patients, providers, payers, researchers etc. – to interact seamlessly at global scale while maintaining data privacy and sovereignty. More research and testing are critical to address open problems before these systems handle sensitive medical data. If current limitations can be resolved carefully, the healthcare metaverse powered by federated learning will be transformative.

This synopsis provides an overview of key concepts, potential use cases, current challenges, and future outlook for leveraging federated learning to enable privacy-preserving data collaboration within futuristic healthcare metaverse environments.


Navigating the Future: How the EU’s AI Regulations Impact Healthcare and Life Sciences

The European Commission has proposed a regulation on artificial intelligence (AI) with key repercussions for the healthcare and life sciences industry. The proposed regulation, called the “Artificial Intelligence Act,” aims to ensure that AI systems are trustworthy, human-centric, and respect EU values and rules, while also fostering innovation and competitiveness in the EU’s AI sector.

The proposal introduces a risk-based approach to AI regulation: depending on the potential harm an AI system can cause, it will be subject to different levels of obligations and oversight. Healthcare and life sciences are among the sectors most affected by the AI Act. Medical devices (including Medical Device Software) and in vitro diagnostic medical devices are explicitly captured and regulated by the proposal, with significant impact on the MedTech industry. AI systems that diagnose diseases, monitor patients, or assist doctors will also be subject to the new rules. However, the proposal also recognizes the potential benefits of AI in healthcare, such as improving diagnosis, treatment, and patient outcomes, and therefore aims to strike a balance between innovation and safety.

The EU AI Act: A Balancing Act for Responsible Innovation

The AI Act addresses these concerns by classifying AI systems based on the level of risk they pose to individuals and society. Low-risk systems, such as recommendation algorithms, require minimal scrutiny, while medium-risk systems, such as medical imaging analysis tools, necessitate transparency reports and conformity assessments. High-risk systems, such as AI systems that make autonomous decisions about medical treatment, face the most stringent regulations, including human oversight, audits, and certifications.

Key Principles: Safety, Transparency, Non-Discrimination

At the heart of the AI Act lies a commitment to fostering safe, transparent, and non-discriminatory AI practices. Healthcare and life sciences organizations must ensure that their AI systems adhere to these principles, demonstrating that they are reliable, free from biases, and operate with clear justifications for their decisions.

Human Oversight and Due Diligence

The AI Act emphasizes human oversight as a crucial safeguard against the potential misuse of AI in healthcare settings. Organizations must establish processes that ensure human experts can intervene in the operation of high-risk AI systems, preventing autonomous decisions that could harm patients.

Prohibiting Unacceptable AI Applications

To safeguard individuals and society, the AI Act explicitly prohibits the development and deployment of AI systems that pose an unacceptable risk. This includes systems that engage in cognitive behavioral manipulation, social scoring, or real-time or remote biometric identification without explicit consent.

Ethical Considerations: Privacy and Fundamental Rights

The AI Act addresses ethical concerns that have arisen in the context of AI, such as the potential for discrimination, privacy violations, and misuse of personal data. Healthcare and life sciences organizations must comply with data protection regulations and ensure that their AI systems respect fundamental rights and freedoms.

Navigating the Road Ahead: Preparation and Collaboration

As the AI Act enters into force, healthcare and life sciences organizations must proactively prepare for compliance. This involves assessing current AI practices, identifying potential risks, and developing strategies to meet regulatory requirements. Training programs for staff, risk assessments, and oversight procedures are essential steps in this process.

Collaboration among stakeholders, including healthcare professionals, AI developers, regulators, and policymakers, is paramount to ensuring the successful implementation of the AI Act. Open dialogue and shared expertise will be instrumental in navigating the complexities of the legislation and fostering responsible AI practices in the healthcare sector.

A Brighter Future: AI Empowering Healthcare

The EU AI Act marks a significant step towards a future where AI is used responsibly and ethically in healthcare and life sciences. By promoting safety, transparency, non-discrimination, and human oversight, the legislation provides a framework for harnessing the power of AI to improve patient care, advance medical knowledge, and optimize healthcare delivery. As AI continues to evolve, the AI Act will serve as a guiding principle, ensuring that this transformative technology benefits all without compromising the core values of healthcare.


Time to Install a Chief AI Officer in Healthcare

The intersection of healthcare and technology has always been a dynamic one. Recent developments, particularly the rise of Generative AI (Gen AI), underscore the urgency for a specialised role in healthcare organisations – the Chief AI Officer (CAIO). Let’s explore the compelling reasons for this timely addition and the unique challenges presented by AI and Gen AI in the healthcare domain.
AI is already being used to develop new drugs and treatments, improve clinical decision-making, and personalise care for patients. Gen AI is a subset of AI that can create new data and content, such as medical images, synthetic patient data, and new drug candidates. Gen AI is still in its early stages of development, but it has the potential to revolutionise healthcare in many ways.

Why now?
There are several reasons why it is time to install a Chief AI Officer (CAIO) in healthcare organisations:
AI is becoming increasingly powerful and sophisticated. As AI algorithms continue to improve, they can perform more complex tasks and achieve better results. This means that AI can be used to solve a wider range of problems in healthcare.
Gen AI is opening new possibilities. Gen AI has the potential to revolutionise healthcare by creating new data and content that can be used to develop new drugs and treatments, improve clinical decision-making, and personalise care for patients.
Healthcare organisations need to be prepared for the future. AI is already having a significant impact on the healthcare industry, and this impact is only going to grow in the coming years. Healthcare organisations need to have a plan in place for how they will implement and use AI to improve patient care.

What has Gen AI expedited?
Gen AI has expedited the need for a CAIO in healthcare by opening new possibilities for using AI to improve patient care. For example, Gen AI can be used to create synthetic patient data that can be used to train AI algorithms and to develop new treatments without having to use real patient data. Gen AI can also be used to create new drug candidates and to design clinical trials.

Challenges of AI / Gen AI in healthcare organisations
There are several challenges that healthcare organisations need to be aware of when implementing and using AI and Gen AI:

Data privacy and security. AI and Gen AI systems need to be designed to protect patient data privacy and security.

Bias. AI and Gen AI systems can be biased, which can lead to unfair and inaccurate results. It is important to carefully evaluate AI and Gen AI systems for bias before using them in clinical practice.

Trust. Healthcare professionals need to trust AI and Gen AI systems before they will be willing to use them in clinical practice. It is important to develop educational and training programs to help healthcare professionals understand and trust AI and Gen AI systems.

Conclusion
AI and Gen AI are rapidly transforming the healthcare industry, and healthcare organisations need to be prepared for the future. One way to do this is to install a CAIO who can lead the organisation’s AI strategy.
The CAIO would be responsible for developing and implementing the organisation’s AI strategy, overseeing the development and deployment of AI systems, and ensuring that AI systems are used in a safe, ethical, and responsible manner. The CAIO would also be responsible for educating and training healthcare professionals on AI and Gen AI.
The time is now for healthcare organisations to install a CAIO. AI and Gen AI have the potential to revolutionise healthcare, and healthcare organisations need to have a plan in place for how they will use these technologies to improve patient care.

Additional benefits of having a CAIO in healthcare:
Strategic leadership: A CAIO can provide strategic leadership for the organisation’s AI initiatives. This includes developing and implementing an AI strategy, identifying and evaluating AI technologies, and managing the organisation’s AI investments.
Collaboration: A CAIO can help to foster collaboration between different departments and stakeholders within the organisation on AI initiatives. This is important because AI projects often require input and expertise from a variety of different people.
Advocacy: A CAIO can advocate for the use of AI within the organisation and to the public. This can help to raise awareness of the benefits of AI and to build trust in AI technologies.

Overall, having a CAIO in healthcare can help organisations to accelerate their adoption of AI and to realise the full potential of AI to improve patient care.


Is Shadow AI Going to Be Worse Than Shadow IT in Healthcare?

The evolution of technology in healthcare has always been a double-edged sword. On one hand, innovations like electronic health records, telemedicine, and AI diagnostics promise to revolutionize patient care. On the other hand, they introduce new challenges related to security, privacy, and management. A prominent challenge of the past decade has been “Shadow IT,” where healthcare professionals adopt unauthorized tech solutions. But now, there’s a new player on the horizon: Shadow AI. And it might be an even bigger concern.

What is Shadow IT?

Before diving into Shadow AI, let’s recap Shadow IT. It refers to any information technology adopted without explicit organizational approval. In healthcare, this could be as simple as a department using a non-approved messaging app, or more complex like a cloud-based patient data storage solution. The intentions might be good—better communication, more efficient data access—but the risks can be high. Non-approved software might not be HIPAA compliant or could introduce vulnerabilities into an organization’s IT ecosystem.

Enter Shadow AI

Shadow AI is a logical progression from Shadow IT. It includes any artificial intelligence solutions implemented without the proper oversight or integration into the broader IT infrastructure. This could be a chatbot introduced to a clinic’s website for patient queries or an AI algorithm a radiologist uses to assist with diagnoses.

The adoption of Shadow AI might be driven by:

– A desire to enhance patient care.

– Pressure to speed up diagnosis processes.

– The need to reduce the workload on medical staff.

But the implications of Shadow AI can be more severe than Shadow IT.

Why Might Shadow AI Be Worse?

1. Complexity: Unlike traditional software, AI models evolve and learn. If not appropriately managed, they can start producing inaccurate or biased results, directly impacting patient care.

2. Data Sensitivity: AI models, especially in healthcare, operate on sensitive data. Shadow AI might not adhere to the same data protection standards as approved AI solutions, putting patient data at risk.

3. Dependency: Once integrated into daily operations, medical professionals might grow dependent on AI models for decision-making. If these AI models aren’t validated or appropriately maintained, it can lead to persistent medical inaccuracies.

4. Ethical Concerns: The unauthorized use of AI models, especially on patient data, raises significant ethical concerns. Patients might be unaware that AI is being used in their care, violating their rights to transparency.

Navigating the Challenges

Preventing the rise of Shadow AI requires a multi-faceted approach:

  • Update policies to cover AI/ML development, procurement, and monitoring. Communicate to all departments.
  • Establish centralized AI Ethics Review Boards for objective risk-benefit assessments of use cases.
  • Require transparency for all models including model cards detailing testing, performance, data sources, and ethics reviews.
  • Develop internal AI/ML platforms enabling collaboration and knowledge sharing across functions.
  • Nurture data science translators, AI coaches, and other emerging roles to responsibly embed AI through the organization.
  • Promote an ethical AI culture valuing patient well-being over efficiency gains.
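As a concrete illustration of the model-card transparency point in the list above, here is a minimal sketch of the kind of record an AI review board might require before clearance. The field names and the clearance rule are hypothetical, not a standard schema:

```python
# Hypothetical minimal "model card" record for internal AI governance.
# Field names and thresholds are illustrative, not an established standard.
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    intended_use: str
    data_sources: list
    test_auc: float             # performance on a held-out test set
    bias_audit_passed: bool
    ethics_review_id: str = ""  # empty until the review board signs off

    def cleared_for_deployment(self, min_auc=0.85):
        """Cleared only with adequate performance, a passing bias audit,
        and a completed ethics review."""
        return (self.test_auc >= min_auc
                and self.bias_audit_passed
                and bool(self.ethics_review_id))

card = ModelCard(
    name="sepsis-early-warning-v2",
    intended_use="Flag inpatients at elevated sepsis risk for clinician review",
    data_sources=["ICU vitals 2019-2023 (de-identified)"],
    test_auc=0.91,
    bias_audit_passed=True,
)
print(card.cleared_for_deployment())  # False until the ethics review is logged
card.ethics_review_id = "ERB-2024-017"
print(card.cleared_for_deployment())  # True
```

Forcing every model through a record like this is one way to make Shadow AI visible: a model with no card simply cannot be cleared.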

The rise of Shadow AI in healthcare is a looming challenge. While its intentions of improving patient care and streamlining operations are noble, the potential risks are significant. But through assertive policies, multi-disciplinary governance, and cultural stewardship, healthcare innovators can ethically unleash AI’s full potential. Confronting Shadow AI head-on is key to realizing that positive future.


Unveiling the Hype: Large Language Models Revolutionizing Healthcare

In recent years, the field of healthcare has witnessed an unprecedented surge in technological advancements, with one remarkable innovation leading the charge: Large Language Models (LLMs). These sophisticated AI systems, driven by machine learning algorithms, have captured the imagination of researchers, practitioners, and the public alike. The allure of LLMs in healthcare stems from their unparalleled ability to process and understand human language, paving the way for revolutionary breakthroughs across various medical domains. From improving clinical documentation to accelerating drug discovery, these models are drawing attention across the industry. But what exactly is driving this hype? This blog will delve into the driving forces behind the hype of Large Language Models in healthcare, exploring the transformative potential and the challenges they bring.

Before I go into healthcare specifics, it may help to define the term. LLM stands for large language model: cutting-edge natural language processing models that have shown new capabilities in text generation, comprehension, summarization, and more. Their massive scale and contextual understanding enable many applications beyond what previous NLP systems could do, and they are driving major advancements in conversational AI, content creation, and other language-related tasks. Examples include GPT-3, Google’s LaMDA, and Meta’s OPT.

Now let’s look at some of the key areas where LLMs are gaining significant traction in healthcare.

Natural Language Understanding and Processing

One of the primary reasons behind the buzz surrounding Large Language Models in healthcare is their remarkable natural language understanding and processing capabilities. These models, like OpenAI’s GPT-3, can comprehend and generate human-like text, making them invaluable tools for handling vast amounts of medical data, patient records, clinical notes, and research articles. This ability enables LLMs to assist medical professionals in extracting meaningful insights, identifying patterns, and making informed decisions, thereby enhancing diagnostic accuracy and patient care.

Clinical Decision Support

Large Language Models hold immense potential as clinical decision support systems. By analyzing a patient’s medical history, symptoms, and test results, LLMs can generate personalized recommendations for treatment plans, medication options, and potential interventions. These AI-driven suggestions offer healthcare providers an additional layer of expertise, helping them make well-informed choices that consider the latest research, best practices, and patient-specific factors.

Medical Literature Analysis

The sheer volume of medical literature and research makes it challenging for healthcare professionals to stay updated. Large Language Models excel at rapidly analyzing and summarizing scientific papers, clinical trials, and journal articles. By extracting key information and trends from an ever-expanding body of knowledge, LLMs empower healthcare practitioners to stay current with the latest advancements, ultimately contributing to evidence-based decision-making.
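A sketch of how such a literature-summarization step might be wired up. The prompt structure is illustrative, and `call_llm` is a deliberately unimplemented placeholder for whichever model endpoint an organization has approved (it is not a real API):

```python
# Sketch of the prompt-assembly step for LLM-based literature summarization.
# The prompt template is illustrative; call_llm is a placeholder, not a
# real library call.
def build_summary_prompt(title, abstract, audience="practicing clinicians"):
    """Assemble a structured summarization prompt for one paper."""
    return (
        f"Summarize the following paper for {audience} in 3 bullet points, "
        "highlighting clinical relevance and limitations.\n\n"
        f"Title: {title}\n\nAbstract: {abstract}"
    )

def call_llm(prompt):
    # Placeholder: substitute your organization's approved LLM endpoint here.
    raise NotImplementedError

prompt = build_summary_prompt(
    "Machine learning for early sepsis detection",
    "We evaluate gradient-boosted models on ICU vitals...",
)
print(prompt.splitlines()[0])
```

Keeping the prompt assembly separate from the model call makes it easier to audit what is actually sent to the model, which matters when clinical text is involved.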

Telemedicine and Patient Communication

The rise of telemedicine and virtual healthcare interactions has highlighted the need for effective patient communication. Large Language Models offer a bridge between patients and healthcare providers, enabling seamless and natural language interactions. These models can answer patient queries, provide medication information, and offer general medical advice, enhancing patient engagement and accessibility to healthcare services.

Faster Drug Discovery

Processing massive volumes of pharmacological data using large language models may help predict effective new drug compounds. Models can scan research papers, clinical trial results, and chemical databases to highlight promising candidates. This more efficient drug discovery process could accelerate the development of life-saving treatments.

Language Barriers and Global Healthcare

In a world marked by linguistic diversity, Large Language Models can help break down language barriers in healthcare. LLMs can automatically translate medical information and instructions into multiple languages, ensuring that patients around the world receive accurate and comprehensible guidance. This global reach can potentially improve healthcare outcomes and access in underserved regions.

Cost Reduction

Automating workflows like coding medical records, synthesizing reports, or answering patient queries can potentially save huge costs in healthcare administration. Large language models can enable accurate voice transcription, assist report generation, and even suggest medical codes – relieving staff workload. Their scalability also unlocks cost benefits.

Ethical Considerations and Challenges

Despite the remarkable potential of Large Language Models in healthcare, their adoption is not without challenges. Privacy concerns, data security, and biases within the models are critical issues that must be addressed. Ensuring patient data confidentiality and mitigating biases that may lead to inequitable healthcare outcomes are essential steps in responsibly harnessing LLMs’ power.

Let me conclude with this: the hype surrounding Large Language Models in healthcare is not merely a fleeting trend; it represents a transformative shift in how medical professionals approach diagnostics, treatment planning, and patient communication. These AI-driven systems have the capacity to revolutionize healthcare by leveraging their natural language understanding and processing capabilities, aiding in clinical decision-making, analyzing medical literature, and overcoming language barriers. As the healthcare industry continues to embrace this technological evolution, it is imperative to strike a balance between the promise of LLMs and the ethical considerations they entail, ultimately paving the way for a more accessible, efficient, and patient-centric healthcare ecosystem.


Healthcare Inequalities Due to the Digital Divide in EMEA

Healthcare inequality is a persistent issue across countries in Europe, Middle East, and Africa (EMEA), but the advancement of technology presents an opportunity to bridge the gap and improve healthcare access and outcomes.

In Europe, countries with more developed healthcare systems have embraced technological advancements, such as electronic health records, telemedicine, and artificial intelligence (AI) to improve healthcare access and delivery. For example, in Denmark, electronic health records have allowed healthcare providers to share patient data and provide more personalized care. Similarly, in the United Kingdom, telemedicine has been used to connect patients in rural areas with healthcare providers, reducing barriers to healthcare access.

In the Middle East, countries like the United Arab Emirates have invested heavily in technological advancements in healthcare, such as telemedicine, electronic health records, and AI. These advancements have improved access to healthcare services and allowed for more personalized and efficient care delivery. However, some countries in the region, like Yemen and Syria, have limited access to healthcare technology due to ongoing conflicts.

In Africa, technology has the potential to transform healthcare delivery and bridge the gap in healthcare access and outcomes. Mobile health (mHealth) applications have been developed to provide healthcare information and services to remote and underserved communities. Additionally, telemedicine and electronic health records are being used to improve healthcare access and delivery in countries like Kenya and South Africa.

Despite the potential benefits of healthcare technology, the digital divide remains a significant challenge in many parts of the EMEA region. Limited access to the internet and technology infrastructure in rural and marginalized communities prevents the adoption of healthcare technology, exacerbating healthcare inequality.

To address healthcare inequality through technological advancement in the EMEA region, governments and healthcare providers must prioritize investment in technology infrastructure and the development of healthcare technology that is accessible and affordable to all communities. Additionally, digital literacy training must be provided to healthcare providers and patients to maximize the potential benefits of healthcare technology.

In conclusion, technology presents a significant opportunity to bridge the gap in healthcare access and outcomes across countries in Europe, Middle East, and Africa. While technological advancements have been embraced by more developed healthcare systems, efforts must be made to ensure that all communities have access to affordable and accessible healthcare technology to reduce healthcare inequality in the region.


Can I Trust the Data I See? Clinicians’ and Patients’ Views on Medical Data from IoT Devices and Sensors

One question I get asked repeatedly, in every talk about healthcare IoT and sensors, is about trust and data quality: can I trust the data I see? The use of IoT in the health sector has driven the development of new applications: wearables for patient monitoring, drug delivery systems, personalized treatments based on activity, and tele-based healthcare solutions. Data can be noisy and erroneous, as it comes mostly from heterogeneous devices that suffer from battery and accuracy issues, and the privacy of users is a significant concern. I have previously written a few articles on IoMT, its adoption, opportunities, and security, so I won’t repeat myself, but you can find them in my other articles. Let’s start with the questions I usually come across:

Will a clinician trust data from IoT sensors or devices?

What are the factors that influence human trust in IoT data?

Can trust in medical IoT be optimized to improve decision-making processes?

The adoption of medical IoT in healthcare depends on data being collected and securely transmitted for processing. Because of the nature of the healthcare vertical, clinicians are always concerned about the trustworthiness of data, which, in fairness, has failed to prove itself on some occasions in the past. By “data trustworthiness”, I mean that the data used as the basis for decisions is the right data from the right patient. Hence, assessing data trustworthiness is an important consideration not only for health experts but also for the technology experts who believe in future IoT-health innovation. Data quality is the primary concern and possibly the ‘make or break’ factor for any IoT-based medical solution.

Let’s dive a little deeper into the problem statement: it is essential to determine the quality of data shared between IoT domains to facilitate the best decisions or actions. The importance is compounded in IoT environments where data is derived from low-cost sensors, which may be unreliable. It is widely accepted that each application requires its own description of data quality; for data to be shared, these descriptions must be standardized and advertised. Data quality can be described and evaluated using data quality dimensions (DQDs), which provide an accepted, standardized, flexible, and measurable set of quality metrics. Before I go further into DQDs, I will explain data quality using four properties: intrinsic, contextual, representational, and accessibility. For each of these properties, appropriate DQDs can be given.

Clinical decisions can be negatively impacted by poor data quality. Several factors can degrade data quality in an IoT context, including deployment scale, resource constraints, fail-dirty sensors, security vulnerabilities, and privacy preservation processing. These manifest differently at different stages of the AI models applied for clinical decision support. For example, fail-dirty behaviour, sensor faults, and deployment scale are more predominant during data generation, whereas privacy preservation processing manifests mostly during data use and storage; together, they deteriorate the overall accuracy of clinical decisions and, eventually, trust!

Generally, the quality of data is highly dependent on its intended use, and data quality is a multidimensional concept that is difficult to assess because each use case defines its own quality properties. There are four categories of data quality properties that I believe any assessment system should implement collectively rather than in isolation:

Intrinsic: This category examines quality properties in the data itself. For example, data quality may be looked at in terms of how a sensed point deviates from an actual point (anomaly detection) or how a particular data point differs from the rest of the data.

Contextual: This looks at quality properties that must be considered within the context of the task at hand. For example, it must be relevant, timely, and appropriate in terms of quantity. This property of data quality has previously been neglected.

Representational: This looks at computer systems that store the information. They must ensure that the data is easy to manipulate and understand.

Accessibility: This looks at data quality challenges that arise from the way users access data systems. For example, data may arrive over an insecure, unregistered network where new packets can be injected into the stream, affecting its quality.
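The intrinsic property above (how a sensed point deviates from the rest of the data) can be illustrated with a simple z-score check; the threshold and the heart-rate stream below are illustrative, and production systems would use more robust detectors:

```python
# Minimal sketch of an intrinsic data-quality check: flag sensor readings
# that deviate strongly from the rest of the stream (z-score outliers).
from statistics import mean, stdev

def flag_outliers(readings, z_threshold=3.0):
    """Return indices of readings more than z_threshold std devs from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > z_threshold]

# Heart-rate stream with one implausible spike (e.g. a fail-dirty sensor).
hr = [72, 75, 71, 74, 73, 240, 76, 72]
print(flag_outliers(hr, z_threshold=2.0))  # -> [5]
```

A flagged index can then lower the intrinsic quality score for that batch rather than silently feeding the spike into a clinical model.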

Each of these data quality properties defines quality metrics that can be used to assess data quality; these are collectively known as data quality dimensions. Examples include, but are not limited to, accuracy, accessibility, timeliness, believability, and relevancy.

Factors that affect data quality in healthcare IoT exist throughout the pipeline. For example, during data generation, quality may be affected by sensor faults or environmental factors; during data transfer and pre-processing, network outages may have an impact; and factors such as privacy preservation processing affect quality during storage and use. There is a need to evaluate data quality at each stage, store such scores or add them as metadata, and combine them into a single metric that can be advertised to data consumers in real time.
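The per-stage scoring and combination just described might look like the following sketch, where the stage names, weights, and metadata layout are assumptions for illustration rather than a standard:

```python
# Sketch of combining per-stage data-quality scores into one metric that
# can travel with the data as metadata. Stage names and weights are
# illustrative, not a standard.
def combined_quality(stage_scores, weights=None):
    """Weighted average of per-stage quality scores, each in [0, 1]."""
    if weights is None:
        weights = {stage: 1.0 for stage in stage_scores}
    total = sum(weights[s] for s in stage_scores)
    return sum(stage_scores[s] * weights[s] for s in stage_scores) / total

record_metadata = {
    "patient_id": "anon-0042",
    "quality": {
        "generation": 0.9,    # sensor calibration / fault checks
        "transfer": 0.8,      # packet loss, outages
        "storage_use": 0.95,  # impact of privacy-preserving processing
    },
}
score = combined_quality(record_metadata["quality"])
print(round(score, 3))  # -> 0.883
```

A consumer seeing the single advertised score can decide whether the data is trustworthy enough for its use case, or drill into the per-stage scores to see where quality degraded.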

Understanding how data quality manifests at each stage helps us not only to solve data quality challenges but can also inform the usability of the tools used at those stages. For example, a data quality score at the generation stage could be used to automatically recalibrate sensors (a decreasing quality score over time may correlate with calibration errors), and quality scores during modeling can be used to automatically tune machine learning models. I will finish this episode on this note: introducing IoT systems that involve AI advice to support clinical decisions requires more than just functionality. There is a need to increase users’ trust in the reliability and accuracy of the data, especially as AI moves from today’s narrow intelligence, directed by clinician-determined action plans, to a future in which advice is generated by the IoT system itself. Even technologically adept users are not yet ready for this step; research is needed to ensure that technological capability does not outstrip the trust and data quality concerns of the individuals using it.

AI, AI, blockchain, data, Federated Learning, General, Healtcare, Healthcare, Healthcare, ML, ML, NHS, Pandemic and Data, Personsal, swarm, Swarm Learning, Technology

Swarm Learning as a privacy-preserving machine learning approach in Healthcare

Swarm Learning could be the answer to fears over AI

We have recently seen the role played by Big Data, analytics and AI during the last 16 months of the pandemic. We were warned of the terrible consequences of the growth of Covid-19 in early March last year, and yet lockdown was delayed for two weeks because the models did not indicate urgency. According to Richard Self, the current wisdom is that ‘Data, analytics and models are how we understand the world’, that ‘the data and models do not lie’, and that ‘We must follow The Science’.

As you all know, I have an interest in federated learning for healthcare data. Recently, I was advised to look into swarm learning, which is said to be similar to federated learning but slightly different in how it protects the data. This article briefly explains the differences between federated and swarm learning and how the latter can be applied to protect sensitive data.

The challenge of sharing data for machine learning is a particular concern with medical data. According to the International Data Corporation, global data will grow from 33 zettabytes in 2018 to 175 zettabytes in 2025. Fast and reliable detection of patients with severe and heterogeneous illnesses is a major goal of precision medicine; patients with leukaemia, for example, can be identified using machine learning on the basis of their blood transcriptomes. However, there is an increasing divide between what is technically possible and what is allowed, because of privacy legislation.

To enable the integration of medical data from any data owner worldwide without breaking privacy laws, Swarm Learning – a decentralized machine-learning approach that unites edge computing with blockchain-based peer-to-peer networking and coordination, maintaining confidentiality without the need for a central coordinator and thereby going beyond federated learning – could be the answer!

This article focuses on what swarm learning is and how it differs from federated learning.
 
Swarm intelligence is often seen in nature – in birds in flight, in ant colonies – and in humans, as participants in a market economy. Swarm Learning is a decentralised machine learning framework that enables organisations to use distributed data to build ML models, leveraging blockchain technology to facilitate the sharing of insights captured from the data rather than the raw data itself.

Traditional machine learning makes use of a data pipeline and a central server that hosts the trained model. The disadvantage is that all datasets are sent to and from the central server for processing. This is time-consuming, expensive and requires a lot of computing power. The communication can also hurt user experience due to network latency, connectivity issues and so on. In addition, huge datasets need to be sent to one centralised server, raising privacy concerns.

In swarm learning, the ML method is applied locally at the data source. The approach is decentralised, making use of edge computing and blockchain-based peer-to-peer networking and coordination without any need for a central server to process data. AI modelling is done by the devices locally at the edge (the source of the data), with each node building an independent AI model of its own. The network amplifies intelligence through interconnected real-time feedback loops.

On the other hand, in federated learning the model is trained on multiple devices. Each participating device has its own local data record that is not exchanged with other participants. In contrast, conventional machine learning uses a central data set.

The main difference between federated and swarm learning is that with federated learning there is a central authority that updates the model(s), whereas with swarm learning that processing is replaced by a smart contract executing within the blockchain. Each node updates the blockchain with its shared learnings and, once all updates are in, this activates a smart contract – Ethereum code that collects all the learnings and builds a new model (or updates the existing one). Thus no single node is responsible for updating the model – it is all embedded in a smart contract within the Ethereum blockchain.
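The update flow described above can be sketched in a few lines of Python. This is illustrative only: a plain `Ledger` class stands in for the blockchain, and its `merge` method plays the role of the Ethereum smart contract; none of these names come from a real swarm-learning library.

```python
# Sketch of the swarm-learning aggregation step: nodes post local
# updates to a shared ledger; once all updates are in, a merge step
# (the "smart contract") builds the new model with no central server.

def local_update(weights, grads, lr=0.1):
    """Each node trains on its own data and derives a weight update."""
    return [w - lr * g for w, g in zip(weights, grads)]

class Ledger:
    """Stand-in for the blockchain: nodes append updates; once every
    node has posted, the merge fires automatically."""
    def __init__(self, n_nodes):
        self.n_nodes = n_nodes
        self.updates = []

    def post(self, update):
        self.updates.append(update)
        if len(self.updates) == self.n_nodes:
            return self.merge()  # "smart contract" executes
        return None

    def merge(self):
        # No single node owns this step: average all posted updates.
        return [sum(ws) / self.n_nodes for ws in zip(*self.updates)]

ledger = Ledger(n_nodes=3)
start = [1.0, 2.0]
for grads in ([0.5, 0.5], [1.0, 0.0], [0.0, 1.0]):
    merged = ledger.post(local_update(start, grads))
print([round(w, 2) for w in merged])  # prints [0.95, 1.95]
```

A real deployment would replace `Ledger` with blockchain transactions and run `merge` as contract code, so that the aggregation itself is tamper-evident and has no single point of failure.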

Some companies have begun leveraging Swarm intelligence. For example, Italian startup Cubbit has developed a distributed technology for cloud storage that uses swarm intelligence to deliver speed and privacy, with each Cubbit Cell acting like a node in a swarm. Moreover, the maintenance of these systems costs much less as compared to traditional data centres. 

Dutch company DoBots specialises in swarm robotics. The company’s project FireSwarm consists of a group of UAVs that specialise in finding dune fires. German start-up Brainanalyzed enables fintech customers to scale profits and predict market movements, combining swarm intelligence with data analytics to improve financial decision making.

Swarm learning is still in its early days, but developing the technology today is recognition of a set of trends that make a new way of thinking vital.

Edge computing, AI and blockchain create a process by which we move from a data flood to a kind of hydroelectric power, where vast amounts of data are directed and put to work, adding to, not subtracting from, our lives and what we can do with them.

“I am convinced that swarm learning can give a huge boost to medical research and other data-driven disciplines. The current state is just a start. In the future, I see this use of technology in the realms of Alzheimer’s and other neurodegenerative diseases,” Schultze said. “Swarm Learning has the potential to be a real game changer and could help make the wealth of experience in medicine more accessible worldwide. Not only research institutions but also hospitals, for example, could join together to form such swarms and thus share information for mutual benefit.”

AI, blockchain, data, Dell EMC, Federated Learning, General, Healtcare, Healthcare, Healthcare, healthcare emerging technology, Isilon, ML, ML, NHS, Personsal, Solutions, Technology, Uncategorised

Federated Learning for future of Digital Healthcare Informatics

Belated New Year wishes! I hope you and your family are keeping safe. For the first article of the New Year, I am not going to write about trends or top five healthcare tech etc., which you may already have read and seen on a variety of platforms.

Instead, I am going to write about something which, I believe, has great potential to shape the future of healthcare data analytics. For that reason, I am personally going to spend 6-7 years (joys of being a part-time learner) of my life researching Federated Learning (FL)!

We have seen significant developments in healthcare data analytics over the last few years. In digital healthcare, the introduction of powerful machine learning (ML) and, particularly, deep learning-based models has led to innovations in radiology, pathology, genomics and many other fields. But unlike other verticals such as finance and automotive, existing medical data is not fully exploited by ML, primarily because it sits in data silos and privacy concerns restrict access to it. For example, different hospitals may only be able to access the clinical records of their own patient populations. And while regulations such as HIPAA are great at protecting sensitive data like PHI, they also pose a bigger challenge for modern data mining and ML techniques such as deep learning, which rely on large amounts of training data.

Federated Learning (FL) is a learning paradigm that addresses the problem of data governance and privacy by training algorithms collaboratively without exchanging the underlying datasets. In simple words, it holds great promise for learning from fragmented, sensitive data: instead of aggregating data from different places, or relying on the traditional discovery-then-replication design, it trains a shared model via central coordination while keeping the data in the local repositories where it originates. Simple, but not quite!

The below diagram is a good depiction of a simple FL workflow.

Let’s dig a little deeper. Data-driven healthcare, the foundation of precision medicine, requires models to be trained and evaluated on sufficiently large and diverse datasets. There is no denying that medical datasets are hard to curate, for the reasons mentioned previously. The need for sufficiently large databases for AI training has prompted many initiatives seeking to pool data from multiple institutions. Large initiatives have so far primarily focused on the idea of creating data lakes. Examples include NHS Scotland’s National Safe Haven, the French Health Data Hub and Health Data Research UK. Centralising the data, however, poses not only regulatory and legal challenges related to ethics, privacy and data protection, but also technical ones – safely anonymising the data and controlling access. Transferring healthcare data is therefore a non-trivial, and often impossible, task.

On the other hand, Federated Learning (FL) promises a solution to the above challenges. In an FL setting, each data controller (hospital or other care facility) not only defines its own governance processes and associated privacy considerations but also, by not allowing data to move or to be copied, controls data access and retains the ability to revoke it. So, the potential of FL is to provide controlled, indirect access to the large and comprehensive datasets needed for the development of ML algorithms, whilst respecting patient privacy and data governance. Moving the to-be-trained model to the data, instead of collecting the data in a central location, has another major advantage: high-dimensional, storage-intense medical data does not have to be duplicated from local institutions into a centralised pool and then duplicated again by every user who trains a model on it.
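As a rough illustration of this training loop, here is a minimal FedAvg-style round in Python, assuming a toy one-parameter model y ≈ w·x. The function names, the toy objective, and the weighting-by-dataset-size scheme are illustrative assumptions, not a reference implementation.

```python
# Minimal federated averaging sketch: each hospital trains locally on
# its private data; only updated weights travel to the coordinator.

def train_locally(global_weights, local_data, lr=0.01):
    """One gradient-descent step on a mean-squared-error objective
    for the toy model y ≈ w * x, using only this site's data."""
    w = global_weights[0]
    grad = sum(2 * x * (w * x - y) for x, y in local_data) / len(local_data)
    return [w - lr * grad]

def federated_round(global_weights, hospitals):
    # Raw patient data never leaves each hospital; only the updated
    # weights are aggregated, weighted here by local dataset size.
    updates = [(train_locally(global_weights, d), len(d)) for d in hospitals]
    n = sum(size for _, size in updates)
    return [sum(u[0] * size for u, size in updates) / n]

hospitals = [
    [(1.0, 2.1), (2.0, 3.9)],               # hospital A's private data
    [(1.5, 3.0), (3.0, 6.2), (2.5, 5.0)],   # hospital B's private data
]
w = [0.0]
for _ in range(50):
    w = federated_round(w, hospitals)
print(round(w[0], 1))  # prints 2.0 – converges toward the slope ~2
```

Real FL systems add secure aggregation, differential privacy and partial participation on top of this basic loop, but the core pattern – local training, shared updates, central averaging – is the same.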

I am concluding this episode on the note that ML, and particularly DL, has led to a wide range of innovations in the area of digital healthcare. As all ML methods benefit greatly from the ability to access data that approximates the true global distribution, FL is a promising approach to obtain powerful, accurate, safe, robust and unbiased models.

All in all, a successful implementation of FL will represent a shift from centralised data warehouses or lakes, with a significant impact on the various stakeholders in the healthcare domain. One important thing to remember is that the medical FL use-case is fundamentally different from other domains!

To be continued ……

AI, AI, Covid and Data, data, General, Healtcare, Healthcare, Healthcare, healthcare emerging technology, Isilon, ML, ML, NHS

Small Data​, Solution for Healthcare Challenges ?

As somebody has said, “Big Data has a little brother. And together, Big and Little Data are far more powerful than Big Data alone”

Over the past five to ten years, most attention has been focused on “big data”, especially as fuel for data science and machine learning. All the energy spent on big data obscured something we used to know: “good things come in small packages”. Small data, however, represents its own revolution in how information is collected, analysed and used.

In healthcare, there is great interest in and excitement about the concept of personalized or precision medicine, and about advancing this vision via various ‘big data’ efforts. While these methods are necessary, there is evidence that they are insufficient to achieve the full promise of personalized medicine. A rigorous, complementary ‘small data’ model that can function both autonomously from and in collaboration with big data is also needed.

What are Big Data and Small Data?

Small Data: It can be defined as small datasets that are capable of informing decisions in the present. Small data helps with decisions over a short span rather than aiming to impact the business to a great extent.
In a nutshell, data that is simple enough for human understanding, in a volume and structure that make it accessible, concise and workable, is known as small data.

Big Data: It can be represented as large chunks of structured and unstructured data. The amount of data stored is immense, so analysts must dig thoroughly into it to make it relevant and useful for proper business decisions.
In short, datasets so huge and complex that conventional data processing techniques cannot manage them are known as big data.

Here is a comparison to help differentiate between the two:

  • Volume: small data is compact and concise; big data is immense.
  • Structure: small data is simple and workable; big data mixes structured and unstructured sources.
  • Processing: small data is accessible to human understanding; big data exceeds conventional data processing techniques.
  • Decision scope: small data informs immediate, short-span decisions; big data supports large-scale business analysis.

The rise of small data in healthcare: Big data has transformed healthcare in many areas. Analysts extract detailed statistics from a population or an individual to help to reduce costs, carry out new research and identify the early onset of disease.

But recently, we saw a gradual shift in emphasis towards “small data” analytics as hospitals examine their existing data to improve clinical and operational processes and identify cost savings. For many clinicians and front-line healthcare professionals, small data offers the most value to their organizations as it can have a direct impact on patient care. Typical examples of small data include information relating to OT turnover times and missed clinical appointments. Small data provides detailed information on how many times a patient has been admitted to the A&E within the last month, for example.

Small data can also provide insight into significant trends affecting your hospital, especially in the areas of cost reduction.

The Value of Small Data in Healthcare: These individual, real-time snapshots of longitudinal patient experience aren’t being captured by EHRs, and so have largely remained out of the reach of healthcare providers. By leveraging the enterprise cloud and intuitive mobile interfaces, small data can be captured, shared, and acted on in a collaborative fashion by the entire post-acute care team. For example, any of a patient’s providers, regardless of their specific role, can note changes in condition that may signal a worsening clinical state (like an increase in weight for a heart failure patient, or new home hazards that might trigger a fall) and send alerts to the appropriate team member for quick action.

It is this data, generated at the “Point-of-Living”, that has been shown to drive so much of clinical outcomes for the medically complex elderly population who account for the largest spend. Activities of daily living, psychosocial issues, caregiver and environmental factors, and measurements of a patient’s understanding of her care plan and medications are some of the key dimensions to capture and analyse.

That’s what small data is all about: granular, patient specific information along all dimensions of care that reflect all aspects of health. Health with a capital ‘H’ if you will.

The Four P’s of Small Data : Big data has been defined as encompassing the four V’s: volume, velocity, variety, and veracity of information. We see small data in the healthcare setting as encompassing “four P’s”: punctual, purposeful, prognostic, and at the point-of-living.

  1. Punctual: small data is timely and transactional in nature. There is very little “lag” in being able to interpret and act on the information. E.g. My patient’s blood pressure is very high today.
  2. Purposeful: small data represents pertinent information that directly affects patient satisfaction and goals of therapy. E.g. Does my patient understand her plan of care?
  3. Prognostic: small data can be predictive of impending risk and poor outcomes. E.g. are there hazardous environmental factors in the home that increase the risk of falling?
  4. Point-of-Living: small data occurs everywhere the patient interacts with a multi-disciplinary team, across the care continuum. E.g. My patient’s home health aide noticed she was short of breath when delivering her groceries.
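The point-of-living capture-and-alert loop described above might look something like this in code; the record fields and the weight-gain threshold are illustrative assumptions only, echoing the heart-failure example mentioned earlier.

```python
# Sketch: capturing a "point-of-living" observation from any member
# of the care team and deciding whether to alert the clinical team.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    patient_id: str
    recorded_by: str   # any care-team member: aide, nurse, family
    metric: str
    value: float
    timestamp: datetime

def weight_alert(history, gain_kg=2.0, days=3):
    """Flag rapid weight gain, a possible sign of worsening heart
    failure (threshold is an illustrative assumption)."""
    if len(history) < 2:
        return False
    first, last = history[0], history[-1]
    elapsed = (last.timestamp - first.timestamp).days
    return elapsed <= days and (last.value - first.value) >= gain_kg

obs = [
    Observation("p1", "home health aide", "weight_kg", 80.0, datetime(2021, 3, 1)),
    Observation("p1", "home health aide", "weight_kg", 82.5, datetime(2021, 3, 3)),
]
print(weight_alert(obs))  # prints True – alert the appropriate team member
```

The point is the shape of the data, not the threshold: each record is punctual (timestamped), purposeful (tied to a metric), prognostic (feeds a risk rule) and captured at the point of living (recorded by whoever is with the patient).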

Small + Big= Best of All: Taken together, small and big data sets will create a rich data tapestry across the entire care continuum, and will allow for truly remarkable predictive analytics, cost transparency insights, and comparative product performance analysis.

AI, blockchain, data, General, Healtcare, Healthcare, Healthcare, healthcare emerging technology, ML, NHS, Personsal, Solutions, Technology

Blockchain of Medical Things (BCoMT) – Securing Internet of Medical Things

Firstly, and most importantly, I hope you are in good health and coping well with Covid 19, physically and psychologically. It has been well over a month since I posted my last article. This has been due to some personal change of circumstance, in a good way. Hopefully you all will see more regular posts from me.  

With the explosion of IoT sensors and the data captured through these devices, user privacy and security remain a major challenge in IoT (especially in healthcare – IoMT). Recently, there has been some research on a possible solution to these overwhelming security concerns using Blockchain. So, this article is based on my recent talk at an event where I discussed the Blockchain of Medical Things (BCoMT).

And if you have missed my previous article on Blockchain and healthcare, please visit https://wordpress.com/read/feeds/88857668/posts/2259347247

Blockchain of Medical Things aims to solve the Internet of Medical Things (IoMT) security problem using Blockchain. Blockchain and IoT – as standalone technologies – have already proved themselves to be highly disruptive.

The Internet of Medical Things (IoMT), which is derived from the Internet of Things, is a collection of devices connected to the internet to provide health-related services. Basically, IoMT is a connected infrastructure of health systems such as medical devices, software applications and services.

In November 2017, the Food and Drug Administration (FDA) approved the first pill with a sensor inside it (aripiprazole tablets with sensor) that can track whether a patient has swallowed it. The pill’s sensor sends messages to a wearable patch, and the patch itself transmits the message to a mobile application on the smartphone. This technology could be a game changer for chronic disease and mental health disorders.

Blockchain is a peer-to-peer technology for distributed data sharing and computing which provides unalterable and irremovable transactions. By definition, Blockchain (BC) is a type of Distributed Ledger – a tamper-proof digital ledger with timestamps.

Since IoMT relies heavily on existing wireless sensor network (WSN) technologies and the open internet, it remains intrinsically vulnerable to privacy as well as security threats. The main concern in IoMT is the secure and efficient transmission of medical data. Healthcare data is a lucrative target for hackers, and therefore securing protected health information (PHI) is a primary motivation for healthcare providers. The inability to delete or change information in blocks makes blockchain technology a suitable alternative for the healthcare system and could prevent these issues. Blockchain, by its design and architecture – consensus methods and cryptographic techniques – is considered a trust machine. Thus, it has the potential to address a major share of the security issues found in IoMT.
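The “inability to delete or change information in blocks” comes from hash chaining: each block stores the hash of its predecessor, so any alteration is immediately detectable. Here is a minimal sketch of that property – a toy ledger only, with no consensus mechanism or peer-to-peer layer.

```python
# Sketch of blockchain's tamper-evidence: each block stores the hash
# of the previous block, so altering any record breaks the chain.
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    for prev, curr in zip(chain, chain[1:]):
        # Recompute each block's hash and check its back-link.
        body = {k: v for k, v in curr.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if curr["prev_hash"] != prev["hash"]:
            return False
        if hashlib.sha256(payload).hexdigest() != curr["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block({"device": "patch-01", "reading": 98.6}, chain[-1]["hash"]))
chain.append(make_block({"device": "patch-01", "reading": 99.1}, chain[-1]["hash"]))
print(chain_is_valid(chain))       # prints True

chain[1]["data"]["reading"] = 120  # attempt to tamper with a record
print(chain_is_valid(chain))       # prints False: the stored hash no longer matches
```

In a real BCoMT deployment, the consensus protocol among many nodes is what makes rewriting the whole chain impractical; this sketch only shows why a single altered IoMT reading is detectable.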

Scientists argue that they are complementary technologies: BC requires participating nodes for its consensus approach, which can be supplied by IoMT devices, while IoMT requires security features which BC can provide, such as transparency, privacy, immutability, operational resilience and so forth.

Applying blockchain technology in IoMT could provide many benefits. Most current IoMT ecosystems depend on centralized communication and control models. This model has connected generic computing devices and continues to support small-scale IoMT networks. However, the growing need for large-scale, distributed, open IoMT systems cannot be satisfied by the centralized communication model and will fully benefit from a decentralised approach, which is the core of a blockchain-based solution.

Key benefits of deploying blockchain are:

  • Medical records are secured without the involvement of a trusted intermediary, avoiding a performance bottleneck and a single point of failure.
  • Patients can access and control their own data, and family members can also view the details of the patient’s condition.
  • Distribution of data is accurate, consistent and timely in blockchain.
  • Any change in the blockchain is easily visible to all members of the patient network.

Final remarks: IoMT privacy and security is one of the most significant challenges to its adoption. On the other hand, Blockchain is evolving as one of the most promising and creative technologies for security, and it holds promise for privacy and security in IoMT. Merging blockchain with the Internet of Medical Things provides a decentralized way to manage the rapidly increasing number of IoMT devices.

AI, Covid 19, Covid and Data, Healtcare, Healthcare, Healthcare, healthcare emerging technology, ML, ML, NHS, Pandemic and Data, Pandemic and Data, Technology

Healthcare IT 1.9 – The impact of Covid-19 on Health IT & Tech adoption

We are living through an experience the vast majority of us have never had before, and I’m sure we will remember these times even decades from now. I hope that you and your loved ones are safe. I have had quite a few requests recently asking about current trends in healthcare and the technologies that will assist the next phase of the Covid-19 pandemic. So, here are my thoughts, based upon personal experience, feedback from the field, and working closely with other key healthcare solution providers.

Let me explain the nomenclature of this blog’s title, ‘Healthcare IT 1.9’ – and why I am not calling it ‘Healthcare 2.0’ or ‘Healthcare 1.1’ or anything in between. The reason is simple and based on my personal opinion: the paradigm shift and adoption of different technologies in healthcare seen so far during the different phases of this pandemic is hugely significant, and I would consider it a big leap. Perhaps healthcare organizations have travelled 5 years ahead within a matter of months, but essentially it is the pace that has changed, as we have not invented any new concepts. The ideas and solutions have remained the same – remote consultation, remote diagnostics, self-service, AI, Big Data and Analytics – we have been talking about these things for years. Hence I am not calling the current wave in healthcare informatics 2.0, but rather 1.9 (almost 2.0, but not exactly). I hope that clarifies it. I plan to write further on the 2.0 topic (virtual reality, 3D printing and prosthetics, augmented reality, and even robotic healthcare workers) in the near future – stay tuned!

There is no question about the positive role that technology has played and will continue to play in combating the COVID-19 pandemic. The 1918 pandemic (Spanish flu) resulted in 50+ million deaths (https://en.wikipedia.org/wiki/Spanish_flu), whereas this time the deaths are comparatively fewer, although it isn’t over yet. According to the WHO, IT and data have made a huge difference that was lacking in 1918.

It is now accepted that these changes are here to stay. Not only is COVID-19 a pandemic that is unlikely to vanish with the summer, but there is a real urgency to develop the insights we need to deploy AI and telemedicine, alongside improving and personalising the care of people who will be affected by the second, third or maybe fourth wave. This means that the reliance on medical technological solutions will increase at a pace and scale we have not witnessed before.

The post-COVID world is likely to be remembered as the time when the care of other medical interactions like the provision of primary care or the management of non-communicable diseases shifted to digital modalities as the default rather than the exception. This new post COVID-19 age is also likely to then enable all the other technologies we have been celebrating, like insights associated with AI, and the potential that 5G gives us in terms of the Internet of things to all converge in a whole variety of ways. We are seeing this happen in real-time and at a pace we could never have imagined. In England, primary care at scale has now finally started to embrace telehealth and has deployed a new digital first pathway as a route to managing streaming of care to the appropriate place. This would have been beyond the limits of the possible only a few weeks ago. 

The other significant change we can already see accelerating is the adoption of precision health, both in more personalised and predictive public health, but also in utilising digital technology in empowering people to better self-manage in non-communicable disease. 

Here are my top 5 digital health technology adoptions in this recent pandemic era.

  1. Data Analytics – we all have seen and heard a lot about flattening the curve. Data Analytics have helped medical professionals to understand and manage the pandemic better.
  2. 3D Printing – To meet these demands in medical supplies, the maker community banded together with their 3D-printers. From garage hobbyists to established companies, people are 3D-printing equipment from face shields through swabs to ventilator parts. 
  3. Telemedicine (Remote Healthcare) – Given the virulence and ease of transmission of SARS-CoV-2, social distancing measures are a must. But what to do if you feel sick or are following an ongoing treatment? Should you risk going to the hospital and putting your health at further risk? As for hospitals, they are already overloaded with critical cases; isn’t there a way to reduce unnecessary visits? For these issues, telemedicine is a ready-made option that saw a boom during the pandemic and is likely to remain in use even after the outbreak subsides.
  4. Artificial Intelligence (AI) – A.I. is not only being used to track the disease but also to diagnose it. Hospitals in China and in the UK are employing software to detect signs of COVID-19 pneumonia on CT scans. A new A.I. algorithm developed by the Chinese tech giant Alibaba can detect COVID-19 infections from CT scans of patients’ chests with 96% accuracy in a matter of seconds! Looking beyond diagnosis, A.I. is being deployed to mine the mass of research papers on COVID-19 that has accumulated since the outbreak. While we must be cautious about unrealistic expectations of A.I.’s potential, we cannot ignore that potential in such a crisis. As such, instead of fearing a Skynet-controlled future, we should see A.I. as an aide to augment the skills of healthcare professionals.
  5. Genomics – Gene sequencing: helping in the race to find a vaccine. Sequencing the virus’s genome has been speeding up the effort to get a viable vaccine out. All of this sequencing, and the related research and study, requires AI and digital capacity. Since the first sequencing was done, nearly two dozen more studies have been completed, and all of the combined studies are needed for the crucial development of vaccines. The progress, algorithms, combined global knowledge and their accuracy can move faster than ever before because of the digital component of the work.

Closing remarks: as you may have noticed, my top 5 picks aren’t anything out of the blue; they have all been topics of discussion for the past 5 years, if not longer – hence my reason not to class the current IT shift as a new major release. I believe that, from now on, there will be renewed and repositioned investment by governments across the globe in their healthcare systems. These progressions are the beginning of a new future. The COVID-19 pandemic, which is probably not going to disappear anytime soon, is also an awakening for healthcare systems. It is time we prepare ourselves with well-equipped AI, analytics, remote healthcare and other technologies to improve and customise treatments for individual patients. In other words, the dependence on healthcare technological solutions will keep increasing at a pace and scale we have not seen so far. This has numerous potential outcomes. From diagnosing illness at home to the customised care of patients after discharge from hospital and giving advice and reminders over video conferencing, change is likely to happen in every sphere of a patient’s journey.

What healthcare will look like in the future is anyone’s guess, but there will undoubtedly be changes because of COVID-19. There will be struggles, and there will be advances, but in the end, whatever ends up being the new normal will significantly impact healthcare organisations and consumers. Stay safe!

AI, Covid 19, Healthcare, healthcare emerging technology, ML, NHS, Pandemic and Data, Technology

Critical Role of Data to Combat Pandemic

Firstly, and most importantly, I sincerely hope you and your family are keeping well. Secondly, I want to apologise to all my readers for slightly deviating from the original plan – I had stated at the end of my previous article that the next one would be about ‘Cloud Native Cloud Platform Infrastructure and Healthcare Cloud’. Instead, I have decided to share my views on the role of data in a situation like the one the planet is currently facing – the Covid-19 pandemic.

A pandemic is a special case of a disaster, but more widespread than other types. Disaster management plans typically consist of four stages: response, recovery, mitigation, and preparedness. With COVID-19, we are currently focused on the response stage and the response strategy centers on containment and suppression via social distancing. All four stages require decision-makers at all levels to gather and process enormous amounts of data and this is where machine learning (ML) and artificial intelligence (AI) technologies can help.

Over the last couple of weeks, I have been involved in IT projects directly and closely associated with the pandemic, such as the temporary (Nightingale) hospitals in the UK, among a few others. I feel there is so much more that data can do to help combat these pandemics. Data is necessary for our understanding of the world, particularly in unique, unprecedented circumstances such as these. I strongly feel that we in the IT industry should stand alongside our NHS heroes and support them in the best possible way, as we are well placed to do so.

While technologies such as artificial intelligence (AI) and machine learning (ML) have been instrumental in generating insights, obtaining reliable data at scale continues to be a priority. Without a single place (or a few data hubs with links between them) to gather and analyse this data, decision-makers are unable to move as quickly as the response demands; information in spreadsheets held by disparate organisations will be duplicated and rapidly become outdated, leading to an inaccurate or incomplete understanding of the situation.

Let’s take a look at what we can get out of this data. Without diving too deep, AI, which relies heavily on data, could be used in three different ways in the current crisis:

  1. Rapidly develop antibodies and vaccines for the Covid-19 virus
  2. Scan through existing drugs to see if any could be repurposed
  3. Design a drug to fight both the current and future coronavirus outbreaks

Another example of how the UK government intends to use data is to consolidate information from across the NHS (hospitals) and partner organisations, to give decision-makers more accurate visibility into the status of the response. This includes metrics such as:

  1. Current occupancy levels at hospitals, broken down by general beds and specialist and/or critical care beds
  2. Current capacity of A&E departments and current waiting times
  3. Statistics about the lengths of stay for Covid-19 patients
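As a toy illustration of what rolling these metrics up into one view could look like, here is a minimal Python sketch that aggregates hypothetical per-hospital reports into a single regional summary. The field names and figures are invented for illustration; a real NHS data hub would of course use its own schemas and pipelines.

```python
from statistics import mean

# Illustrative reports from three hospitals (hypothetical field names and values).
reports = [
    {"hospital": "A", "general_beds_free": 40, "critical_beds_free": 5, "ae_wait_mins": 120},
    {"hospital": "B", "general_beds_free": 12, "critical_beds_free": 0, "ae_wait_mins": 240},
    {"hospital": "C", "general_beds_free": 75, "critical_beds_free": 9, "ae_wait_mins": 45},
]

def rollup(reports):
    """Roll disparate per-hospital figures up into one regional view."""
    return {
        "general_beds_free": sum(r["general_beds_free"] for r in reports),
        "critical_beds_free": sum(r["critical_beds_free"] for r in reports),
        "avg_ae_wait_mins": mean(r["ae_wait_mins"] for r in reports),
        "hospitals_at_critical_capacity": [
            r["hospital"] for r in reports if r["critical_beds_free"] == 0
        ],
    }

summary = rollup(reports)
print(summary)
```

The point of the sketch is the shape of the problem: once the figures live in one structured place rather than disparate spreadsheets, questions like "which hospitals have no critical beds left?" become one-liners.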

We are still struggling to find the best way to combat the virus, but on the bright side, there have been new innovations that collect data to protect citizens from the pandemic. New solutions powered by data, including electronic surveillance solutions and machine learning models, are now in place to help prevent the spread of the disease. To summarise, the following are a few applications where I see data-driven approaches making a huge difference:

  • AI/ML to identify, track and forecast outbreaks
  • AI/ML to help diagnose the virus
  • Drones delivering medical supplies
  • Robots sterilising wards, delivering food and supplies, and performing other tasks
  • Accelerating the drug development cycle
  • Chatbots to share information
  • Supercomputers working on a coronavirus vaccine, among many others
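To give a flavour of the outbreak-forecasting side, here is a minimal sketch of a classic SIR (susceptible-infected-recovered) epidemic model in pure Python. The parameters are illustrative, not fitted to real COVID-19 data; real forecasting models are far richer, but they build on this kind of compartmental dynamics.

```python
def sir_step(s, i, r, beta, gamma):
    """Advance a discrete-time SIR model by one day (s, i, r are population fractions)."""
    new_infections = beta * s * i   # transmission proportional to S-I mixing
    new_recoveries = gamma * i      # constant per-day recovery rate
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def simulate(days, s=0.999, i=0.001, r=0.0, beta=0.3, gamma=0.1):
    """Return the (susceptible, infected, recovered) trajectory over `days` days."""
    history = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
        history.append((s, i, r))
    return history

history = simulate(120)
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"infections peak on day {peak_day}")
```

Even this toy version shows why data matters: the forecast is only as good as the estimates of the transmission rate (`beta`) and recovery rate (`gamma`), and those can only come from timely, reliable case data.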

In the middle of the mayhem we are in, I believe a secure data storage system should be prioritised to ensure that vulnerable people suffering from the pandemic do not fall victim to cyber-enabled or cyber-dependent crime. I am sure everybody is taking the necessary measures to minimise the risks.

Finally, technology is an important enabler in fighting disease, and the collection and analysis of data is a crucial part of that fight. This crisis has reinforced the value of quick, secure collaboration. It has already triggered a chain of innovation, powered by huge volumes of data. All this data needs to be rolled up, ingested, analysed, and shared in a timely way so we can take action to help prevent the spread of the disease. By using data to track the disease and model its behaviour, we can hopefully make the coronavirus pandemic a thing of the past.

Stay safe, stay strong and listen to your regional/country’s guidance. We will beat this!!


Innovating Healthcare App using Containers – Cloud Native

What a week it has been! If I had to describe it in one word, I would say ‘CLOUDY’ – no, not because of the weather (well, that too, if you spent last week in England) – but because of my engagements last week: it was all about cloud – meetings, a keynote at an event, and what better way to finish it than by writing about it.

OK, so another episode on cloud native, but this one is a spin on healthcare applications and the use of containers. For my last write-up, please visit (Demystifying-cloud-native-applications)

A quick recap – what are containers? Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run.

Why Containers: Instead of virtualising the hardware stack, as with the virtual machine approach, containers virtualise at the operating system level, with multiple containers running atop the OS kernel directly. This means that containers are far more lightweight: they share the OS kernel, start much faster, and use a fraction of the memory compared to booting an entire OS.

Healthcare organizations are demanding more applications as they increase mobility and add flexibility to their IT infrastructure. Healthcare app containers are emerging as a key way to manage and deploy applications at scale. Also, as they search for tools to enable their digital transformation, they increasingly land on containers as a technology to enable that shift to cloud native application architectures.

Driving Innovation in Healthcare with Containers and Docker: the healthcare industry is rapidly embracing electronic health records (EHRs), enabling providers to improve patient engagement and deliver better patient outcomes. Providers have more opportunities than ever to leverage the big data generated by advances in healthcare research, the adoption of wearable technologies, and mobile health applications.

While developing new applications, healthcare providers can identify business domains around which to build a microservices-based architecture. In a typical setting, microservices could be developed for managing workflows around patients, encounters, appointments, and scheduling. FHIR is the next-generation healthcare interoperability standard for digitising the exchange of patient health information across heterogeneous systems. By building microservices based on the FHIR resource specifications, an organisation can easily establish interoperability with a growing list of healthcare applications. In the UK, NHS Digital also makes extensive use of the HL7 FHIR standard. (https://digital.nhs.uk/services/fhir-apis)
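To make the FHIR idea a little more concrete, here is a hedged sketch that builds a minimal FHIR R4 Patient resource as JSON. The helper names (`make_patient`, `validate`) and the sample data are invented for illustration; a real microservice would use a full FHIR library and a proper profile validator rather than this toy check.

```python
import json

def make_patient(patient_id, family, given, birth_date):
    """Build a minimal FHIR R4 Patient resource as a JSON-serialisable dict."""
    return {
        "resourceType": "Patient",          # every FHIR resource declares its type
        "id": patient_id,
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,            # FHIR date format: YYYY-MM-DD
    }

def validate(resource):
    """Toy validation: check only the fields this sketch relies on."""
    return resource.get("resourceType") == "Patient" and "id" in resource

patient = make_patient("example-001", "Smith", "Jane", "1980-04-12")
print(json.dumps(patient, indent=2))
```

The value of standardising on resources like this is that any two services speaking FHIR can exchange patient records without bespoke field mapping.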

In the multi-cloud era, healthcare organisations can also become cloud-ready by moving their existing applications to a container framework, allowing them to leverage the benefits offered by the cloud. Containers act much like they do in the physical world, i.e. separating data based on predetermined characteristics. When migrating from one cloud storage model to another, it is much easier to move data if it is contained in one place or separated from data that does not need to be moved. For example, Docker simplifies the process of creating and managing containers on premises, in the cloud, or in hybrid environments.

Multi-tenant SaaS applications: as healthcare organisations develop innovative use cases to provide better patient care based on big data, predictive analytics, and machine learning, Software as a Service (SaaS) becomes a preferred approach for offering services to providers and patients.

While security is paramount for healthcare data, containers can heighten security by design by separating data. Restricting access to PHI by clearance level or department protects the data held in other containers. While the data in a breached container is still compromised, the other containers are virtually separated and unaware of each other, making cross-penetration far more difficult.

OK, so far, all sounds great, right? The benefits of containers are clear, but there are deployment challenges organisations need to be aware of before committing to widespread adoption.

Containers can sometimes be too complex to integrate into existing environments, or require too many skilled staff to manage. Integration into legacy IT infrastructure is a valid concern. Certain infrastructure solutions can be in place for years and still function as needed, but containers and virtual machines may not perform as desired due to bandwidth restrictions, potential incompatibility of physical servers, or the lack of a cloud platform deployment.

OK, concluding this episode: I believe there is definite value in container deployment in healthcare, even with the challenges that prevent organisations from getting there. It is no longer a question of if healthcare organisations should fully embrace cloud technology, but when. And to clarify, when I say cloud, I mean multi cloud (read the previous blog – multi-cloud-starts-with-2-x-w-why-what). Moving to cloud- and container-based applications provides the flexibility and resources to address healthcare providers’ most pressing current challenges. To take advantage of every benefit of multi-cloud technology, make sure you go forward with a true cloud-native solution.

Coming next — >> Cloud Native Cloud Platform Infrastructure and Healthcare Cloud

Thank You !


Demystifying Cloud-Native Applications

I am sure you have already guessed what this blog is about, but before I start, I must state that application development is not my thing. Good job I realised it quite early on: for an undergraduate assignment solving a 4 x 4 matrix equation in C, I wrote 16 lines of code instead of using a for loop (probably one line of code). That was embarrassing 😊. After that, I vowed to read about programming only to pass my exams!

The cloud revolution has been part of our lives for over a decade now. Cloud native can be described as a path to innovation, established to take full advantage of cloud computing, and it is currently one of the biggest trends in the software industry.

Before going further into cloud native, one question that I get quite a lot from customers concerns cloud-based vs cloud-native application development. Cloud-based development refers to application development executed by means of a browser that points to a cloud-based infrastructure, whereas cloud-native development refers more specifically to application development grounded in containers, microservices, and dynamic orchestration. I will explain containers, microservices, and orchestration in the next few paragraphs.

So, to simplify, some definitions:

Cloud-Native Development

Cloud-native development is designed with the goal of utilising the cloud to its maximum potential. When we deploy our applications to the cloud, we need services that enable better execution. Cloud native helps in designing, building, and running applications on the cloud, and rests on continuous integration, container engines, and orchestrators.

Cloud-Based Development

A cloud platform is a compelling mixture of cloud computing, networking, storage, and business services that promotes both IT and consumer satisfaction. Microsoft has enabled fast access to resources with its Azure platform, and Google addresses the same challenges with its Google Cloud Platform.

Cloud-Enabled Development

A cloud-enabled application is an application that was moved to the cloud but was originally developed for deployment in a traditional data centre. Some characteristics of the application had to be changed or customised for the cloud. When developing an application that will be deployed in the cloud, you must keep the cloud principles in mind and take them into account as part of the application design.

What Is Cloud Native?

Cloud native is all about how applications are created and deployed, not where they are developed. There are tons of definitions, but here are the two that I really like:

A) Cloud native is an approach to building and running applications that fully exploit the advantages of the cloud computing model. (extract from Pivotal)

B) According to the Cloud Native Computing Foundation, cloud native computing uses an open source software stack to be:

  1. Containerised. Each part (applications, processes, etc.) is packaged in its own container. This facilitates reproducibility, transparency, and resource isolation.
  2. Dynamically orchestrated. Containers are actively scheduled and managed to optimise resource utilisation.
  3. Microservices-oriented. Applications are segmented into microservices. This significantly increases the overall agility and maintainability of applications.

Before we go any further, let’s understand containers, orchestration, and microservices.

Container

The basic idea of containers is to package software together with everything it needs to execute into one executable package, e.g., a Java VM, an application server, and the application itself. You then run this container in a virtualised environment, isolating the contained application from its surroundings.

The main benefit of this approach is that the application becomes independent of its environment and the container is highly portable. The same container can run on a development, test, or production system. And if the application design supports horizontal scaling, multiple instances of a container can be started or stopped to add or remove instances of the application based on current user demand.

The Docker project is currently the most popular container implementation. It’s so popular that the terms Docker and container are often used interchangeably. But keep in mind that the Docker project is just one implementation of the container concept and could be replaced in the future.

Orchestration

Deploying an application with all its dependencies into a container is just the first step. It solves the deployment problem, but fully benefiting from a cloud platform brings new challenges.

Starting additional application nodes, or shutting down running ones, based on the current system load isn’t that easy. It requires:

  • monitoring,
  • triggering the startup or shutdown of a container,
  • making sure that all required configuration parameters are in place,
  • balancing the load between the active application instances
  • sharing authentication secrets between your containers.

Doing all of that manually requires a lot of effort and is too slow to react to unexpected changes in system load. You need the right tools in place to do all of this automatically, and this is what the different orchestration solutions are built for. A few popular ones are Docker Swarm, Kubernetes, Apache Mesos, and Amazon’s ECS.
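As a sketch of the scaling decision such tools automate, the proportional rule below mirrors the shape of the formula used by Kubernetes’ Horizontal Pod Autoscaler, desired = ceil(current × currentMetric / targetMetric), clamped to configured bounds. The function name and the thresholds are illustrative, not any specific orchestrator’s API.

```python
import math

def desired_replicas(current_replicas, current_cpu, target_cpu=0.6,
                     min_replicas=1, max_replicas=10):
    """Proportional scaling rule: replicas grow or shrink with observed load,
    clamped between the configured minimum and maximum."""
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(4, 0.9))   # overloaded at 90% CPU: scale out to 6
print(desired_replicas(4, 0.3))   # underloaded at 30% CPU: scale in to 2
```

Real orchestrators add a tolerance band and cooldown periods around this rule so that small metric fluctuations do not cause constant scaling churn.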

Microservices

Microservices is the approach of building a cloud-native application as a collection of small, independent services, each running in its own process and implementing a single business capability. Together, these services deliver the overall functionality of the system; cloud-native applications are developed as systems of microservices. With this approach, scaling becomes highly efficient, as each service has one function, a well-defined boundary, and an API.
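As a toy illustration of a microservice with one capability behind a well-defined API, the sketch below runs a minimal in-process “appointments” service using only the Python standard library and queries it over HTTP. The endpoint and data are invented for illustration; a production service would add persistence, authentication, and a real framework.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The service's entire data domain: appointments, nothing else.
APPOINTMENTS = [{"id": 1, "patient": "example-001", "slot": "2024-05-01T09:00"}]

class AppointmentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/appointments":          # the service's one API endpoint
            body = json.dumps(APPOINTMENTS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):                 # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), AppointmentHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or a test) consumes the API over plain HTTP.
url = f"http://127.0.0.1:{server.server_port}/appointments"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)
```

Because callers only depend on the HTTP contract, the appointments service can be scaled, rewritten, or redeployed independently of the services that consume it, which is exactly the agility argument made above.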

Why Use Cloud Native? Benefits of Cloud Native

Below are some of the benefits of using cloud-native applications.

Competitive Advantage – Stay Ahead of the Competition: cloud-native applications offer a competitive advantage because they let businesses build and deliver applications quickly in response to customer requirements.

Great Flexibility – Works Everywhere: cloud-native applications aren’t just meant for public clouds; they can run on both public and private clouds without modification.

Auto-Scalability – Reduce Expenditure: cloud-native applications come with auto-scaling to handle fluctuating business demand. Enterprises use a pay-as-you-go model, paying only for the computing resources they actually consume, and downtime with cloud-native applications is minimal or non-existent.

Auto-Redundancy – Avoid Failures: being resilient to failures, cloud-native applications handle outages and enable corrective actions. In the event of a failure, processing moves quickly from one data centre to another without interruption to the service.

Speed – Be as Fast as a Flash: quick deliverables are the aim of every company in the software industry. Whether an SME or a large-scale enterprise, they can achieve this aim with cloud native: the time needed to develop an application can shrink from months to days or hours. Ultimately, cloud native makes organisations more responsive to their clients’ requirements.

Margin – Always a Concern: with cloud native, highly efficient applications usually cost less. You pay for additional resources only when they are required; you do not pay for idle resources when you don’t have a large number of visitors.

Summary

The ideas and concepts of cloud-native computing introduced a new way to implement complex, scalable systems. Even if you’re not hosting your application on a cloud platform, these new ideas will influence how you develop applications in the future.

Containers make it a lot easier to distribute an application. You can use them during your development process to share applications between team members, or to run them in different environments. And after all tests have been executed, you can easily deploy the same container to production.

Microservices provide a new way to structure your system. They introduce new challenges, but they also shift the attention to the design of each component. That improves encapsulation and allows you to implement maintainable components that you can quickly adapt to new requirements.

And if you decide to use containers to run a system of microservices in production, you need an orchestration solution that helps you to manage the system.

Thanks for reading !