Artificial Intelligence: Promising Landscape for Health Start-Ups and SMEs

June 30, 2022

Digital technologies and artificial intelligence (AI) are increasingly transforming medicine, medical research, and public health. AI is now used in health services around the world. The United Nations Secretary-General has stated that the safe deployment of new technologies, including AI, can help the world achieve the Sustainable Development Goals, including the health-related objectives under Goal 3. AI could also help to meet global commitments to achieve universal health coverage.

The use of AI for health nevertheless raises ethical, legal, and societal concerns. Most of these concerns are not unique to AI: the use of computing in healthcare has long challenged developers, governments, and providers. But AI poses additional and novel ethical challenges that extend beyond the domain of traditional regulators and participants in healthcare systems. These challenges must be adequately addressed if we want to leverage the potential of AI to improve human health, preserve human autonomy, and ensure equitable access to health care.

Great advances have been made, for instance, in health research and drug development, health systems management and planning, and public health. Examples of AI applications include clinical decision support systems, remote-controlled robotic surgery, diagnostic imaging, and personalized patient treatment. These innovative solutions have been steadily improving diagnosis and clinical care and contributing to better resource allocation and prioritization in health care; they are not, however, without potential risks.

The collection, processing, analysis, use, and sharing of health data raise particular ethical concerns related to the highly sensitive nature of the data. The many types of data involved, known as ‘biomedical big data’, form a health data ecosystem that includes data from standard sources (e.g., health services, public health, research) and additional sources (environmental, lifestyle, socioeconomic, behavioural, and social). These data can carry systemic biases due to the underrepresentation of certain groups (by gender, race, age, and so on), which machine learning models trained on them can then echo.
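To make that echo effect concrete, here is a minimal, purely illustrative Python sketch. It uses synthetic data and hypothetical groups "A" and "B" (not Saidot tooling or any real dataset) to show how a model fitted on a cohort dominated by one group can perform far worse on the underrepresented one:

```python
# Illustrative only: how underrepresentation in training data can be
# echoed as unequal model performance across groups. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: group A is heavily overrepresented (90% vs 10%).
n = 5000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])
x = rng.normal(size=(n, 5))

# The outcome depends on the features differently in each group, so a
# model fitted mostly on group A generalizes poorly to group B.
coef = np.where(group[:, None] == "A", 1.0, -1.0) * np.array([1.5, -1.0, 0.5, 0.0, 0.0])
y = ((x * coef).sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)

x_tr, x_te, y_tr, y_te, g_tr, g_te = train_test_split(x, y, group, random_state=0)
model = LogisticRegression().fit(x_tr, y_tr)

# Per-group accuracy: the underrepresented group typically scores worse.
for g in ["A", "B"]:
    mask = g_te == g
    print(f"group {g}: n={mask.sum()}, accuracy={model.score(x_te[mask], y_te[mask]):.2f}")
```

In this toy setup the model typically scores well on the majority group and markedly worse on the minority group; a simple per-group evaluation of this kind is one of the cheapest audits an SME can run before deploying a health model.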

Excessive health data collection combined with privacy violations can lead to secondary uses of data that create major ethical and human rights risks. Alarming cases already exist, including profiling for targeted advertising, discrimination based on health conditions, and unfair or unequal insurance coverage and pricing, among other possible harms.

“The use of AI for health nevertheless raises ethical, legal, and societal concerns.”

Last year's OECD Survey on National Health Data Infrastructure and Governance points out that “appropriate reconciliation of the risks and benefits associated with AI is necessary to ensure that the technology equally promotes health and quality of life for all”. This requires a common understanding of, and coordinated action on, how best to protect health data privacy while maximizing the benefits to individuals and societies from health data availability and use.

AI health start-ups and SMEs play a crucial role in achieving these goals. The Joint Research Centre (JRC) of the European Commission shows that venture capital (VC) investment in health-related start-up companies has grown rapidly, and especially steeply after 2014: “The percentage of such investments directed towards AI health-related start-ups has also increased, passing from ~7% in 2014 to around 14% in 2017”. And according to market research company Grand View Research, the global AI-in-healthcare market was already valued at USD 10.4 billion in 2021 and is expected to grow at a compound annual growth rate (CAGR) of 38.4% from 2022 to 2030.
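For readers who want to see what that growth rate implies, the short sketch below simply compounds the reported 2021 base value year by year. This is plain compound-growth arithmetic on the cited figures, not a reproduction of Grand View Research's own forecasting model:

```python
# Compound-growth arithmetic on the cited figures: a USD 10.4 billion
# market in 2021 growing at a 38.4% CAGR through 2030.
base_2021_usd_bn = 10.4  # Grand View Research's 2021 valuation
cagr = 0.384             # reported compound annual growth rate

for year in range(2022, 2031):
    value = base_2021_usd_bn * (1 + cagr) ** (year - 2021)
    print(f"{year}: ~USD {value:.1f} billion")
```

Compounded over nine years, the 2021 base implies a market on the order of USD 190 billion by 2030, which helps explain why so many start-ups are entering the space.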

The potential of AI promises great advantages and benefits for SMEs, but not without challenges. Multiple stakeholders in the field have been promoting best practices, codes of conduct, AI ethics guidelines, and corporate policies, supported by large enterprises, intergovernmental organizations, and civil society organizations. On top of that, draft regulation has emerged at the EU and national levels, such as the European Commission's proposed AI Act, which is already fuelling a broad public debate in both the private and public sectors and shaping the practices of the AI industry even before its entry into force.

A key regulation also governing and setting new requirements for AI-based medical device software is the European Medical Device Regulation (EU) 2017/745 (MDR), which became applicable on 26 May 2021. The EU MDR aligns EU legislation with technical advances, changes in medical science, and progress in law-making. It applies to medical devices for human use and their accessories in the Union, including software intended to provide information for the diagnosis, prevention, monitoring, treatment, or alleviation of disease. The regulation applies not only to software that provides a diagnosis or therapy by itself, but also to software that merely provides information to inform a medical professional.[1]

The EU MDR states that devices falling within its scope shall meet the general safety and performance requirements set out in Annex I that apply to them, taking into account their intended purpose. Medical devices must achieve the performance intended by their manufacturer and must be designed and manufactured in such a way that, under normal conditions of use, they are suitable for their intended purpose. They must be safe and effective and must not compromise the clinical condition or safety of patients, or the safety and health of users.

“Saidot has pioneered a tailored AI Ethics governance model that specifically considers ethical challenges from the perspective of AI SMEs.”

Saidot has pioneered a tailored AI Ethics governance model that specifically considers ethical challenges from the perspective of AI SMEs. A recent significant example of our work is the ELISE Ethics Self-Assessment Process. ELISE is a European network of AI excellence centers: artificial intelligence research hubs where the best European researchers in machine learning and AI work together to attract talent, foster research through collaboration, and inspire and be inspired by industry and society. In its first open call, ELISE selected 16 SMEs and start-ups that develop AI services or applications. The selected companies take part in a six-month program and receive up to 60,000 euros in funding.

To help these SMEs assess their potential AI ethics risks, we developed a concept for industry-specific support focused on the industries of the ELISE applicants. The concept comprises a guided AI ethics self-assessment on Saidot's platform; coaching and training on organizational structures, helping existing teams implement and monitor the ethics of their AI; feedback from AI ethics experts who review the applicants' AI products; and, finally, follow-up mechanisms. This comprehensive approach proved highly successful: SMEs across Europe experienced and appreciated the value of the process.

“The AI Ethics assessment has been a great chance for self-reflection and learning. The assessment process through the Saidot platform felt very intuitive right from the start, while at the same time providing interesting insights that normally get lost when evaluating the technical or financial feasibility of an AI project. Furthermore, the experts' feedback was a real eye-opener: they provided new perspectives on familiar AI issues such as explainability or generalizability, focusing more on ethical and human aspects.” Alessandro Grillini, Founder & Managing Director of Reperio B.V.

We are confident that supporting SMEs on their responsible AI journey and building the conditions for a trustworthy digital transition are essential to leveraging AI to improve human health. Saidot contributes to more coordinated action on reskilling SME managers and workers and on ensuring an ethical, participatory approach to redesigning work processes and training AI models in the health sector.


[1] Beckers, R., Kwade, Z., & Zanca, F. (2021). The EU medical device regulation: Implications for artificial intelligence-based medical device software in medical physics. Physica Medica, 83, 1–8. https://doi.org/10.1016/j.ejmp.2021.02.011
