Perspectives

European Research Has an AI Problem — And It's Not the One You Think

European research institutes have AI strategies but no AI workflows. Here's what the implementation gap means — and what needs to change.

05 Apr 2026 · 7 min read · Pranoti Kshirsagar
AI strategy · EU AI Act · research organisations · knowledge work · opinion

Understanding where an industry stands on AI adoption is useful. Being able to map it systematically, with verified sources, structured synthesis, and actionable conclusions, is a different skill. This piece draws on a deep research sprint into AI integration across European research institutes: what the question was, how it was investigated using AI-assisted research tools, what the evidence shows, and what it means strategically.

AI integration in European research institutes is accelerating — at least on paper. Strategies are being approved. Committees are being formed. Press releases are going out. And yet, if you ask most researchers and research managers what AI tools they are actually using in their day-to-day work — and how — the honest answer is usually a shrug, a mention of ChatGPT for emails, and a vague sense that the institution is “working on it.”

This is the AI problem that nobody in the sector is naming directly: it is not a technology problem, it is an implementation gap. And based on what I have seen researching how Europe’s flagship institutes are approaching AI adoption — and working with research organisations on exactly this — it is a gap that is widening rather than closing.

Everyone has a strategy. Almost nobody has a workflow.

In November 2025, CERN formally approved an organisation-wide AI strategy. Helmholtz has a dedicated AI platform with an internal consultant team spanning 18 research centres. The European Commission has published a European Strategy for AI in Science, with a virtual institute — RAISE — to pool AI resources across the EU. These are genuinely significant developments.

But here is what struck me when I dug into the evidence: the organisations that are furthest ahead are not the ones with the most ambitious strategies. They are the ones that have built operational infrastructure around implementation. Helmholtz did not just publish a strategy — they hired an internal AI consultant team to actually help researchers use AI methods. That distinction matters enormously.

For the vast majority of European research organisations, the strategy layer is largely in place. The workflow layer is almost entirely missing. And that gap is now a legal problem, not just an operational one.

Article 4 of the EU AI Act has applied since 2 February 2025. It requires every organisation that deploys AI systems to ensure a sufficient level of AI literacy among the staff dealing with those systems. Enforcement sits with national market surveillance authorities.

I want to be specific about what “deployer” means here, because I think most research organisations are underestimating their exposure. If your team uses Microsoft Copilot, any AI-assisted writing tool, an AI chatbot for literature search, or any automated tool that uses machine learning to process work-related data — your organisation is a deployer. The obligation is on the institution, not the individual researcher.

Most research institutions in Europe are not currently meeting this requirement. Not because they are resistant to AI — the opposite is often true. But because the practical implementation of AI literacy training requires someone to actually design it, deliver it, and make it fit the context of research work. That is not something a policy document does on its own.

The European Commission has published a living repository of AI literacy practices to help. An EU AI Skills Academy is launching in 2026. These are useful signals. But they do not solve the immediate challenge for your team today.

Fraunhofer is automating research writing. What does that mean for research professionals?

This is the finding from my research that I think deserves the most attention — and gets the least.

Fraunhofer FIT, one of Europe’s most applied AI research institutes, has a project called LIKE with a stated objective of “the complete automation of knowledge work processes through LLM-based agents.” The primary documented use case is the automation of research paper and white paper creation. The agents are designed to interact autonomously, make decisions, and execute complex knowledge tasks.

This is not futurist speculation. It is active applied research at an institute whose entire purpose is to transfer technology into operational reality.

I am not saying that research writing will disappear, or that the intellectual work of science will be automated. I do not believe that. What I am saying is that the structural parts of knowledge work — the formatting of reports to funder templates, the drafting of literature review sections from a corpus of papers, the preparation of progress reports that follow a predictable structure — are exactly the tasks that LLM agents are being built to handle. And research organisations that have not thought about what this means for their staff, their workflows, and their quality controls are going to find themselves in a difficult position.
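To make "structural" concrete, here is a minimal sketch of the shape of one such task: assembling a progress report that follows a fixed funder template. This is emphatically not Fraunhofer's LIKE system, whose internals are not public; call_llm is a placeholder for whatever approved model endpoint an organisation uses, and every project name and field below is hypothetical.

```python
"""Illustrative sketch of one 'structural' knowledge-work task: assembling a
progress report that follows a fixed funder template. Not Fraunhofer's LIKE
system; call_llm is a stand-in for an approved model API, and every name
below is hypothetical."""

TEMPLATE = """Project: {project}
Reporting period: {period}

1. Progress against objectives
{progress}

2. Deviations and risks
{risks}
"""

def call_llm(prompt: str) -> str:
    # Placeholder so the sketch runs offline; a real version would call the
    # organisation's approved model API and route the output to human review.
    return f"[draft for human review: {prompt[:60]}...]"

def draft_report(project: str, period: str, notes: list[str]) -> str:
    """Turn loose project notes into a template-conforming draft."""
    joined = "; ".join(notes)
    return TEMPLATE.format(
        project=project,
        period=period,
        progress=call_llm("Summarise progress from these notes: " + joined),
        risks=call_llm("List deviations and risks from these notes: " + joined),
    )

print(draft_report("HYPOTHETICAL-EU-101", "Jan-Jun 2026",
                   ["milestone M2 delivered", "postdoc hiring delayed by 3 months"]))
```

The point of the sketch is the division of labour: the template conformance and assembly are mechanical, while the judgement about what the notes actually mean stays with a person.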

The parallel development I keep pointing to when I talk with research teams about this: Helmholtz is explicitly hiring for candidates with experience in “agentic systems, LLM-based tool use, and workflow orchestration frameworks.” This is not a data science role. It is a role building AI-integrated workflows for scientists. The fact that it is an institutional hiring criterion tells you that Helmholtz considers this a core operational need they cannot meet through existing capacity.

Most organisations cannot afford a team like that. But they can afford to work out, systematically, which workflows would benefit most from AI integration — and build the governance and training to do it responsibly.

The grant writing situation is more urgent than most people realise

One area where the implementation gap has immediate, high-stakes consequences is grant writing. I have been working with research teams on AI-assisted proposal workflows long enough to see both what works and what goes wrong.

Here is the current situation based on verified data: one in six scientists already use generative AI to help write grant proposals (Nature, 2023). Horizon Europe applications have jumped 80% year-on-year. And Horizon Europe’s Standard Application Form now requires explicit disclosure of AI tool use on page 32 — failure to disclose can render a proposal ineligible.

At the same time, AI hallucination rates in generated citations ranged from 14% to 95% across 13 models tested in 2025. A single fabricated reference, caught by an expert reviewer who knows the field, ends the proposal’s credibility immediately. I have seen this happen.

The response to this is not to stop using AI in grant writing. It is to use it in a structured, disclosed way, with manual citation verification as a non-negotiable step. The researchers and teams who get this right will have a genuine productivity advantage. The ones who use AI carelessly — generating text without reviewing it, including citations without checking them — are taking on risk that is entirely avoidable.
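One way to operationalise that verification step is to triage AI-suggested references programmatically before the manual pass. The sketch below is a minimal example, not a production tool: it assumes every reference already carries a DOI, queries the public Crossref REST API, and flags anything that does not resolve or whose title does not match. The example references are hypothetical stand-ins for entries pulled from an AI-assisted draft. A script like this shortens the manual check; it does not replace reading the papers.

```python
"""Triage AI-suggested references against Crossref before the manual check.
Assumes every reference has a DOI; the draft_references entries below are
hypothetical examples."""
import requests

CROSSREF_API = "https://api.crossref.org/works/"

def check_reference(doi: str, expected_title: str) -> str:
    """Return 'ok', 'title-mismatch', or 'not-found' for one DOI."""
    resp = requests.get(CROSSREF_API + doi, timeout=10)
    if resp.status_code != 200:
        return "not-found"  # DOI does not resolve: possibly fabricated
    record = resp.json()["message"]
    title = (record.get("title") or [""])[0].lower()
    # Crude check: every word of the cited title should appear in the record.
    if all(word in title for word in expected_title.lower().split()):
        return "ok"
    return "title-mismatch"  # real DOI, but not the paper the draft cites

draft_references = [
    ("10.1038/s41586-021-03819-2",
     "Highly accurate protein structure prediction with AlphaFold"),
    ("10.1000/example-possibly-fabricated", "A study that may not exist"),
]

for doi, title in draft_references:
    print(doi, "->", check_reference(doi, title))
```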

What I think needs to happen

I have spent the past two years building AI-integrated workflows for research communication, grant writing, and knowledge management: first in my own consulting practice, then by documenting and systematising those approaches for the organisations I work with. What I keep seeing is that the barrier is almost never technical. It is structural.

Research organisations need three things, in this order:

First, an honest audit of current AI exposure. Which tools are staff already using? What data are those tools processing? This does not need to be exhaustive; it needs to be honest. You cannot build AI literacy training around tools you have not acknowledged. (A minimal sketch of what such an inventory can look like follows the third step below.)

Second, a prioritised list of knowledge work workflows where AI adds the most value. Grant writing. Literature synthesis. Progress reporting. Communications drafting. These are not peripheral activities — they consume enormous amounts of researcher time, and they are precisely the workflows where LLM tools deliver the most immediate, measurable gains with the lowest implementation complexity.

Third, governance before scale. A simple internal policy covering which tools are permitted, how AI-assisted outputs must be reviewed, and how to disclose AI use in funded outputs. This is not bureaucracy — it is what makes AI adoption sustainable. EMBL published a formal AI policy for its communications team. CERN built an AI Steering Committee. These are not large organisations doing large things — they are organisations that understood that governance enables adoption rather than slowing it down.
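To make the first two steps concrete, here is a minimal sketch of what an exposure inventory and a value-versus-effort ranking can look like. Every tool name, team, and score below is a hypothetical placeholder; the structure, not the data, is the point.

```python
"""Sketch of steps one and two: an AI-exposure inventory and a simple
value-versus-effort ranking of workflows. All entries are hypothetical."""
from dataclasses import dataclass

@dataclass
class ToolRecord:
    name: str
    used_by: str         # team or role actually using the tool
    data_processed: str  # what flows through it
    acknowledged: bool   # formally acknowledged by the organisation?

@dataclass
class Workflow:
    name: str
    value: int   # 1-5: time saved, quality gained
    effort: int  # 1-5: implementation and governance burden

# Step one: the honest audit. Unacknowledged entries are your Article 4 exposure.
inventory = [
    ToolRecord("ChatGPT", "communications team", "draft emails, press texts", acknowledged=False),
    ToolRecord("Microsoft Copilot", "research managers", "internal reports", acknowledged=True),
]
for tool in inventory:
    if not tool.acknowledged:
        print(f"Unacknowledged tool in use: {tool.name} ({tool.used_by})")

# Step two: rank candidate workflows by value per unit of effort, highest first.
workflows = [
    Workflow("grant writing support", value=5, effort=2),
    Workflow("literature synthesis", value=4, effort=3),
    Workflow("progress reporting", value=4, effort=2),
]
for wf in sorted(workflows, key=lambda w: w.value / w.effort, reverse=True):
    print(f"{wf.name}: {wf.value / wf.effort:.2f}")
```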


The EU AI Act has created a compliance floor. The RAISE initiative is building shared infrastructure. The Fraunhofer automation projects are demonstrating what is technically feasible. But the work that actually matters — translating all of this into workflows that fit how researchers and research managers operate day-to-day — is still largely undone.

That is the problem I find genuinely interesting. And it is the problem I am working on.

If you are working through this at your organisation — or thinking about what it means for a role or project you are involved in — I am happy to talk through it. Get in touch →

