In February, the EU endorsed the AI Act and launched the new European Artificial Intelligence Office. This legislation governs how AI is developed, deployed, and used in Europe, with obligations becoming mandatory incrementally until 2030. While the Act was primarily designed to protect humans from the risks of manipulation and illicit use of AI, it will not be without impact on the biodiversity community. AI in our domain will mostly be ranked as no to low risk. However, we will have to adhere to the transparency requirements and be particularly attentive when combining AI with Citizen Science initiatives that potentially affect the general public. A further point of attention, given our worldwide scope, is that non-European AI initiatives or tools, to be allowed deployment or usage in Europe, will have to commit to following both the AI Act and the GDPR (General Data Protection Regulation) and to have a reference person or institution with a legal address in Europe.
Future EU projects can be audited at any time during their execution period on how they comply with both the legal and ethical requirements of AI, with the risk of being put on hold or even stopped.
Current discussions at the EU level concern how Europe can remain competitive in AI compared to countries where fewer legal or ethical barriers exist; while these safeguards are judged essential, there is no doubt that they slow down development and implementation. The balance between open collaboration and the free sharing of data and knowledge is challenged by concerns about so-called strategic autonomy and competitiveness.
As an introduction to this session, this talk will review, to the best of our knowledge, the requirements of this new EU AI Act and how they may affect future AI-related activities in our natural sciences domain.