Trust the process? Healthcare AI is battling security and regulatory complexity
The EU AI Act holds promise for safer, smarter healthcare, but fragmentation, workforce strain, and fragile trust still threaten its success
Europe’s healthcare systems stand at a pivotal moment. Long regarded as pillars of solidarity, they now face a dual challenge: enduring fiscal pressures and environmental shocks, alongside emerging risks from the rapid advance of artificial intelligence. At the European Health Forum Gastein (EHFG), AI dominated conversations, even in sessions not formally devoted to the topic, underscoring its disruptive potential.
The mood oscillated between cautious optimism and informed scepticism.
Digital technologies promise breakthroughs in science, efficiency gains in care delivery, and relief for chronic labour shortages. Yet the EU’s landmark AI Act, designed to regulate the technology across sectors, will be critical in determining whether these benefits materialise without compromising safety and trust. For healthcare, success is not optional – it is existential.
EHFG President Clemens Martin Auer warned against adopting AI heedlessly. “The biggest political and social challenge for solidarity systems still lies ahead of us: rapid digitalisation and AI, especially their potential impact on the labour market,” he said during the first plenary, adding the concern that “AI will drastically change or might even obliterate the very foundation of the social contract as we know it today.”
On the other hand, Lucilla Sioli, director for AI and digital industry at the European Commission’s communications and technology directorate, noted during the discussion on the AI Act, “artificial intelligence has the potential to completely revolutionise the health sector, and maybe even faster in principle than other economic sectors.”
For Steffen Thirstrup, chief medical officer of the European Medicines Agency (EMA), it is essential to ensure AI can be trusted. “That the output is trustworthy and it is used ethically and correctly,” he told Euractiv on the sidelines of the EHFG.
Bridging gaps
During a discussion on the EU’s AI Act, Ricardo Baptista Leite, CEO of Health AI (the global agency for responsible AI in health), also noted that costs have risen over the decades without improving health outcomes.
At the same time, the burden of disease continues to grow, undermining the objective of achieving universal health coverage.
“Those who have money in their pocket will always find a way to get access to care. It is always those in the most vulnerable conditions who will be left behind. And so inevitably, there is a risk of creating a vicious cycle of rising inequities and poverty if we don’t break that cycle,” he added.
Baptista Leite argued that physicians typically look at only about 10% of what truly impacts a patient’s health: the clinical factors. “Genomic and biological factors play a role, but around 60% of what affects our health lies outside the healthcare system; the air we breathe, the environments we live in, and the decisions we make each day,” he remarked.
“Each of these factors generates vast amounts of data we’ve yet to harness and translate into better clinical decisions. That’s where I believe one of the greatest opportunities of artificial intelligence lies,” he said.
However, he explained that it’s not technology, per se, that will solve all the problems. “We should not use AI to retrofit healthcare, but rather reimagine how health should be delivered.”
For patients, trust and transparency are essential.
“In this case, stronger regulation would not be a burden but a safeguard, helping build the confidence needed for real uptake,” remarked Valentina Strammiello, interim executive director of the European Patients’ Forum.
Barriers to innovation
However, the diffusion of the technology has been extremely slow in health systems, Baptista Leite remarked, with many innovators hitting a brick wall, unable to scale.
Barriers slowing progress range from fragmented legislation and rapid technological change to limited institutional capacity to keep pace. Data governance, cybersecurity, and postmarket surveillance are also critical concerns. “Ensuring we can detect potential risks and harms early on will be essential,” he said.
“Beyond that, we need clear frameworks for health technology assessment and reimbursement, but above all, we need to build trust. Without it, healthcare professionals won’t use these tools, and patients will hesitate to adopt them,” he added.
Circling back to trust, Thirstrup also discussed the challenge of handling commercially confidential information. “There needs to be some security walls around it. We need to develop and deploy AI solutions that are safe for confidential information.”
Transparency may be another challenge, according to Thirstrup. “Today, we are very transparent. We publish our assessment report. We will continue doing so,” he said, adding that concerns will arise when, at some stage, parts of that assessment report are constructed with the aid of AI.
“However, I cannot foresee us completely abandoning the human brain for this,” he remarked.
At the end of the day, Thirstrup sees AI primarily as a tool for handling workload and routine activities. Whatever it produces must be checked by someone with expertise, as his biggest fear remains that AI will produce something inaccurate that would undermine the Agency’s credibility.
Afua van Haasteren, director for health policy and external affairs at Roche, spoke of Europe’s “regulatory lasagna”, the layers of overlapping rules. In this case, they span from the AI Act and the Medical Device Regulation (MDR) to the European Health Data Space (EHDS), the Data Act, and so on.
“While each aims to improve patient care, together they create a complex landscape that’s especially challenging for small and medium-sized enterprises to navigate,” she observed, adding that these frameworks need to align and take existing products into account, allowing practical arrangements that support innovation rather than stifle it.
Diana McGhie, healthcare policy lead at the World Economic Forum, added that, alongside the overlap in existing regulations, there is concern about compliance costs that could “squeeze out those innovators and those small and medium enterprises.”
Possible benefits
Digital tools could also become a “bitter pill”, according to Stefan Eichwalder, director of the health systems division at Austria’s health and social affairs ministry. He explained his concerns about the potential for burnout. “Too often, tech meant to support health professionals ends up undermining them. A recent German survey found that the use of electronic health records was linked to higher stress and burnout among GPs,” he said.
However, studies show that AI tools, like speech recognition, can save clinicians up to an hour a day, time that can be reinvested in patient care.
Marco Marsella, director of digital, EU4Health and health systems modernisation at the European Commission’s health and food safety directorate, discussed responsible AI in health systems that could yield a double dividend.
He gave the example of prevention and early detection, where deploying AI can yield high returns, adding that AI could attract investment and also strengthen European technological sovereignty.
Trust in the Act
Citing trust as a must, Sioli said the AI Act is designed to provide it.
“It introduces specific requirements in terms of transparency, human oversight, data governance, and postmarket monitoring. It creates the condition for minimising the risk that artificial intelligence brings.”
For Sioli, the big advantage of the Act is that it is “a single act in the single market,” in contrast with the often-cited United States, where more than 90 pieces of AI legislation are spread across the different states.
In that sense, the Commission is working to align the medical device regulation and the AI Act so companies go through a single certification process for a medical device, whether it includes AI or not. Additionally, the digital omnibus will aim to streamline regulations and procedures.
Strammiello warned that failing to include provisions for low-risk AI systems in the legal framework could leave citizens unprotected. “What is deemed ‘low risk’ today could pose greater risks tomorrow,” she explained.
She underlined the importance of supporting patient communities and citizens through education and digital health skills, to ensure they can engage with AI solutions safely and meaningfully.
Transformation should be pursued inclusively, “bridging gaps and ensuring that no group is left behind,” Eichwalder pointed out.
Virginia Mahieu, neurotechnology director at the Centre for Future Generations, noted during the last plenary of the EHFG that “change is coming”, but how the future of healthcare may look remains unknown.
She said, “That uncertainty is exactly why we need to think outside the box to stress test our health systems, our welfare state, and our social contract against very different possible futures.”