
Experimenting to regulate: how can innovation and governance of AI go hand in hand?

Author: Anne Fleur van Veenstra
Artificial intelligence (AI) is developing at an extremely rapid pace, and regulation of the technology therefore often lags behind. Entrepreneurs see opportunities: new AI services and the use of AI to improve or accelerate existing processes. But there are also risks, such as privacy violations, discrimination and misinformation. For AI, both innovation and regulation are necessary; the key question is whether the two can reinforce each other. One way to make that happen is to experiment: searching for innovative solutions while simultaneously testing regulation.
AI is developing at an extremely rapid pace
You can hardly open a newspaper today without encountering reports about developments in AI and their implications. Recent headlines include a pilot with a Dutch AI language model, civil rights organisations taking legal action against a ‘nudify app’, and a US AI company stating that it does not want its technology to be used for surveillance or autonomous weapons systems. Both the use of the technology and calls for its regulation are therefore regularly in the news.
Opportunities for AI include the development of new AI services, such as the AI assistants ChatGPT, Claude and Copilot. AI is also being applied in healthcare, agriculture, manufacturing and other sectors, with the aim of making processes more efficient and effective. Risks include privacy violations, discrimination or exclusion, and the spread of misinformation.
There is also a further category of risk: so-called lock-in effects, which arise when organisations become dependent on particular suppliers. This is especially relevant because most digital services originate in the United States and there are few European alternatives. A recent debate in the Dutch House of Representatives, for example, concerned whether it was responsible to sell Solvinity, the company that provides the platform on which the Dutch electronic identity system DigiD runs, to a US firm. Because European alternatives are still scarce, this type of risk also applies to AI services.
Perspectives on governance
Governance and regulation of AI, often referred to collectively as AI governance, aim to enable desired effects, such as making innovation possible and encouraging adoption of the technology, while also preventing or mitigating negative effects such as privacy violations, discrimination and lock-in effects.
In practice, we see three perspectives in relation to AI governance:
1. Innovation perspective: this focuses on stimulating new AI innovations, for example by funding AI research and applying AI across various sectors. Worldwide, enormous investments are being made in AI. In the Netherlands, for instance, investments are being made in innovation labs for specific sectors and in an AI factory in the Groningen region.
2. Values perspective: this focuses on developing AI based on desired values, such as respect for fundamental rights including privacy, fairness and non-discrimination. At European level, legislation has been developed for this purpose. In addition, guidelines and practical tools are being created to help organisations develop or apply AI, often accompanied by specific methodologies that support the embedding of values in technology.
3. Transition perspective: this steers the development of Dutch or European AI systems to contribute to digital autonomy and sovereignty. Although there are attempts to coordinate this at European level, governance in this area mainly takes place at national and local level through policy measures, standard-setting and investments by government and industry.

‘Because perspectives on AI governance differ, the governance instruments developed from these perspectives do not always align. There is even a risk that they work against each other.’
Anne Fleur van Veenstra, Director of Science at TNO Vector and Professor by special appointment of Governance of data and algorithms for urban policy at Leiden University
Hand in hand
All three perspectives are relevant for AI governance and therefore coexist. Innovation plays a role in all three: it is central to the innovation perspective, but it is also part of the other two, where innovation is either values-driven or focused on autonomy. Because perspectives on AI governance differ, the governance instruments developed from these perspectives do not always align. There is even a risk that they work against each other: AI developed in Europe (the transition perspective), for example, is not automatically responsible AI (the values perspective).
Experimenting to regulate
In addition to developing laws, regulations, policies and guidelines, there is another important way to continuously learn about the impact of technology and its intended and unintended effects: experimenting with AI in order to learn for regulatory purposes as well. One example is the use of regulatory sandboxes, as included in the AI Act. This also happens on a smaller scale, for instance in pilots with generative AI chatbots for municipalities.
One such experiment involved the application of machine learning (a form of AI) for youth policy, which TNO carried out in collaboration with the municipality of Rotterdam and the Ministry of the Interior and Kingdom Relations. Together, we explored how machine learning could contribute to a policy challenge faced by the municipality of Rotterdam. At the same time, we learned about the potential positive and negative effects of using AI. These insights were subsequently used by the Ministry for the development of AI policy.
By explicitly learning through experiments with the aim of improving governance and regulation, innovation and the governance of AI can go hand in hand in a way that fits the rapid development of AI.
This reflection is based on the inaugural lecture by Anne Fleur van Veenstra, entitled ‘On experimenting and regulating: perspectives on the governance of data and algorithms’, which took place on 20 March 2026 at Leiden University.