
Is AI made in Europe automatically responsible AI?

Author: Marianne Schoenmakers
The use of digital technologies, and of AI in particular, is increasing rapidly in both the public sector and wider society. As various reports show, dependence on non-European technology companies is significant. The debate on digital sovereignty is therefore more urgent than ever. The arguments vary, from strengthening Europe’s competitiveness to safeguarding resilience and ensuring technology rooted in ‘European values’. Yet how those values should actually be embedded in technology often remains vague.
Digital sovereignty: the need for control over AI
Digital sovereignty concerns the ability of states, organisations and other collectives to exercise control over digital resources, from infrastructure to data and AI systems. This ability is under pressure because large non-European technology companies have access to virtually unlimited computing power, datasets and distribution platforms. As a result, they exert considerable influence on public processes, ranging from information provision to critical infrastructure such as education and healthcare.
Because the power of these companies lacks democratic legitimacy, calls for transparency and accountability mechanisms, such as audits and disclosure requirements, are increasing. At the same time, there is a growing desire to strengthen ‘home-grown’ European AI innovation as a counterbalance.
Regulation versus innovation: an overly simplistic debate
In discussions about the perceived lack of European AI innovation, strict regulation is often cited as a factor that slows innovation down and makes it costly. A more permissive regulatory framework, such as that in the US, is believed to give companies more room to innovate. But this perspective is too simplistic. As recent research shows, many more factors are at play: stagnating technology transfer, low R&D investment, fragmented capital markets and bottlenecks in digital infrastructure. Regulation is therefore only one factor in a complex ecosystem.
For a long time, the EU focused on limiting risks through regulation, such as the GDPR and the AI Act. Through the intended ‘Brussels effect’, the EU also hoped to set international standards. Since 2024, however, we’ve seen a clear shift. Partly in response to the Draghi report, there is a call not only to regulate but also to invest actively in European innovation capacity. In parallel, there are calls to ease and simplify European digital regulation, including through the digital omnibus proposals. Against this background, several initiatives have emerged over the past year to stimulate AI innovation, such as the EuroStack initiative and, closer to home, the recently published National AI Delta Plan. While these plans stress that AI must be developed ‘according to European values’, they remain unclear on how those values should be concretely safeguarded in design and development processes.
The implicit assumption seems to be that AI made in Europe automatically produces responsible AI. But here lies the problem: how do you embed European values in digital technology while simultaneously calling for the relaxation of the very rules intended to safeguard those values? Responsible AI innovation and adoption require a multilayered approach: clear policies and legislation, explicit methods, mature AI governance structures and organisational embedding. None of this arises automatically from geographical origin.
Towards sovereign and responsible AI innovation
Digital sovereignty is not an end in itself, but a means to strengthen competitiveness, resilience and values-driven technology. Recent incidents, such as the case at the International Criminal Court, where the chief prosecutor was cut off from Microsoft services as a result of US sanctions, fuel the fear that organisations may suddenly be disconnected from essential digital infrastructure. Organisations struggle with their dependence on large American AI providers, particularly in generative AI, where market dominance is strong. In practice, I see organisations thinking more and more about reducing digital dependency, but in strategic decisions it often remains unclear which concrete levers they can pull. Choosing a European or open-source model is appealing, but the quality of such models still lags in many areas. This calls for sharper decision-making by organisations, and by the EU, especially in domains and processes critical to society.
Conclusion
AI made in Europe is not a self-evident route to responsible AI. Digital sovereignty only delivers value when linked to concrete AI governance principles, transparent design practices and structural organisational embedding. This requires more than promoting European technology: it also demands making explicit which public values we want to protect, how those values are embedded in design and decision-making, and where we are willing to make trade-offs.