Originally published on 30 June 2022.

Digital technologies and artificial intelligence (AI) are increasingly transforming medicine, medical research, and public health around the world.

The United Nations Secretary-General has stated that the safe use of new technologies, including AI, can help the world achieve the UN Sustainable Development Goals, including the health-related objectives under Goal 3. AI could also help meet global commitments to achieve universal health coverage.

Nevertheless, the use of AI for health raises ethical, legal, and societal concerns, most of which are not unique to AI. These challenges must be adequately addressed if we want to leverage AI to improve human health, preserve human autonomy, and ensure equitable access to healthcare.

By reading this article, you’ll get a better understanding of:

  • How AI is used in healthcare
  • What general concerns AI raises in the health sector
  • The role health AI startups and SMEs play in responsible AI use

AI has numerous applications in healthcare.

AI has already driven great advances in the health sector, for instance in health research and drug development, health systems management and planning, and public health.

Examples of AI applications include:

  • Clinical decision support systems
  • Remote-controlled robotic surgery
  • Diagnostic imaging
  • Personalised patient treatment
  • … and more!

Undoubtedly, these innovative solutions have improved diagnosis and clinical care and contributed to better resource allocation and prioritisation in healthcare. However, they are not without risks.

Data privacy: With great power comes great responsibility.

Computing in healthcare has long raised ethical challenges that extend beyond the domain of traditional regulators and participants in healthcare systems.

In particular, the collection, processing, analysis, use, and sharing of health data raise concerns related to the highly sensitive nature of the data. These numerous types of data, known as ‘biomedical big data’, form a health data ecosystem that includes data from:

  • Standard sources such as health services, public health, and research;
  • Additional sources like environmental, lifestyle, socioeconomic, behavioural, and social databases.

Such an ecosystem can carry systemic biases due to the underrepresentation of certain groups (for example, by gender, race, or age), and these biases can be echoed by the machine learning models trained on the data.
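To make that risk concrete, here is a minimal, synthetic sketch (not from the article; all names and numbers are hypothetical) of how a model can echo underrepresentation: two simulated patient groups differ in how features relate to the outcome, the minority group is scarce in the training data, and a standard classifier ends up markedly less accurate for it.

```python
# Illustrative only: synthetic data showing how underrepresentation
# in training data can surface as unequal model performance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Simulate one patient group; `flip` reverses the feature-outcome
    link, standing in for group-specific patterns the model must learn."""
    X = rng.normal(size=(n, 5))
    y = (flip * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

# The majority group dominates the training set; the minority is scarce.
X_maj, y_maj = make_group(5000, flip=1.0)
X_min, y_min = make_group(250, flip=-1.0)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh samples from each group separately.
for name, flip in [("majority", 1.0), ("minority", -1.0)]:
    X_test, y_test = make_group(2000, flip)
    print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))
# Typical result: high accuracy for the majority group, near-chance or
# worse for the minority group, because the model has mostly learned
# the majority pattern.
```

Auditing performance per subgroup, as in the final loop above, is one simple first check before any model touches real patients.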

Excessive health data collection combined with privacy violations can result in the secondary use of data, creating significant ethics and human rights risks. Alarming cases already exist, including profiling for targeted advertising, discrimination based on one’s health conditions, and unfair or unequal insurance coverage and pricing, among other possible harms.

Health AI startups have been rapidly gaining ground in the market.

The OECD’s 2021 Survey on National Health Data Infrastructure and Governance points out that “appropriate reconciliation of the risks and benefits associated with AI is necessary to ensure that the technology equally promotes health and quality of life for all”.

This requires common understanding and coordinated action on how to:

  1. Best ensure the protection and privacy of health data
  2. Maximise the benefits that AI brings to individuals and societies through health data availability and use.

AI health startups and SMEs play a crucial role in achieving these goals. The Joint Research Centre (JRC) of the European Commission shows that venture capital (VC) investment in health-related startups has grown rapidly, especially steeply after 2014: “The percentage of such investments directed towards AI health-related startups has also increased, passing from ~7% in 2014 to around 14% in 2017”.

According to the market research company Grand View Research, the global AI-in-healthcare market was already valued at USD 10.4 billion in 2021 and is expected to grow at a compound annual growth rate (CAGR) of 38.4% from 2022 to 2030.
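As a quick sanity check of what that growth rate implies (my own back-of-the-envelope arithmetic, not a figure from the report), compounding the 2021 base at 38.4% per year over the nine years from 2022 through 2030 points to a market on the order of USD 190 billion:

```python
# Implied projection from the cited figures: USD 10.4B in 2021,
# compounding at a 38.4% CAGR over 2022-2030 (nine periods).
base_2021_usd_bn = 10.4
cagr = 0.384
years = 2030 - 2021

projected_2030 = base_2021_usd_bn * (1 + cagr) ** years
print(f"Implied 2030 market size: ~USD {projected_2030:.0f} billion")
# -> ~USD 194 billion, roughly a 19x expansion of the 2021 market
```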

The potential benefits of AI don’t come without challenges.

Multiple stakeholders in the health industry have been promoting and fostering best practices, codes of conduct, AI ethics guidelines, and corporate policies, supported by large enterprises, intergovernmental organisations, and civil society organisations.

On top of that, draft regulation has emerged at the EU and national levels, such as the EU AI Act. It has already stirred up public debate on the topic in both the private and public sectors, and it has shaped the practices of the AI industry even before its entry into force.

A key regulation that also governs and sets new requirements for AI-based medical device software is the European Medical Device Regulation (EU) 2017/745 (MDR), which became applicable on 26 May 2021.

The EU MDR aligns EU legislation with technical advances, changes in medical science, and progress in law-making. It applies to medical devices for human use and their accessories in the Union, including software intended to provide information for diagnosing, preventing, monitoring, treating, or alleviating disease.

The regulation applies not only to software that provides a diagnosis or therapy by itself, but also to software that merely provides information to a medical professional. [1]

The EU MDR states that devices falling within its scope shall meet the general safety and performance requirements set out in Annex I that apply to them, taking into account their intended purpose.

Medical devices must perform as intended by their manufacturers. In addition, they must be designed and manufactured to be suitable for their intended purpose under normal conditions of use. They must be safe and effective and must not compromise patients’ clinical condition or safety, or the safety and health of users.

Saidot is a pioneer in AI governance and alignment.

Saidot has developed a tailored AI governance model that specifically considers ethical challenges from the perspective of AI SMEs. A significant recent example of our work is the ELISE Ethics Self-Assessment Process.

ELISE, the European Learning and Intelligent Systems Excellence network, is a European network of AI excellence centres: artificial intelligence research hubs where the best European researchers in machine learning and AI work together to attract talent, foster research through collaboration, and inspire and be inspired by industry and society.

In its first open call, ELISE selected 16 SMEs and startups that develop AI services or applications. The selected companies will take part in a six-month programme and receive up to 60,000 euros in funding.

To best assist the SMEs in assessing their potential AI ethics risks, we developed a concept for industry-specific support focused on the industries of the ELISE applicants. The concept comprises:

  1. Guided AI ethics self-assessment on Saidot’s platform;
  2. Coaching and training on organisational structures, helping existing teams implement and monitor AI ethics;
  3. Feedback from AI ethics experts reviewing applicants’ AI products; and, finally, follow-up mechanisms.

The results of this comprehensive approach were highly positive: SMEs across Europe experienced and appreciated the value of the process.

“The AI Ethics assessment has been a great chance for self-reflection and learning. The assessment process through the Saidot platform felt very intuitive right from the start, while at the same time providing interesting insights that normally get lost when evaluating the technical or financial feasibility of an AI project. Furthermore, the experts’ feedback was a real eye-opener: they provided new perspectives on familiar AI issues such as explainability or generalisability, focusing more on ethical and human aspects.”

Alessandro Grillini, Founder & Managing Director of Reperio B.V.


We are confident that supporting SMEs on their responsible AI journey and building the conditions for a trustworthy digital transition are essential to leveraging AI to improve human health.

Saidot contributes to more coordinated action on reskilling SME managers and workers and on ensuring an ethical and participatory approach to redesigning work processes and training AI models in the health sector.

[1] Beckers, R., Kwade, Z., & Zanca, F. (2021). The EU medical device regulation: Implications for artificial intelligence-based medical device software in medical physics. Physica Medica, 83, 1–8. https://doi.org/10.1016/j.ejmp.2021.02.011