EU Artificial Intelligence Act: Comprehensive Regulations for AI Industry and Impact on Generative AI Systems like ChatGPT

There are strong calls in the market for regulation to manage and control Artificial Intelligence (AI). Many countries are still debating whether to frame regulations governing the development of AI. The European Parliament has come forward with a draft, the EU Artificial Intelligence Act, and adopted its negotiating position on the Act on 14 June 2023.

The EU's 27 member states began discussing the regulation of AI in 2021, but the process accelerated after the launch of ChatGPT last year. The EU Commission wants to ensure that AI developed and used in Europe fully complies with EU rights and values, including safety, privacy, transparency, non-discrimination, and social and environmental rules.

The EU Artificial Intelligence Act is the first comprehensive set of regulations for the AI industry, requiring generative AI systems, such as ChatGPT, to be reviewed before commercial release. The regulation aims to define the following:

  • Harmonized rules for the placing on the market, the putting into service, and the use of AI systems in the EU
  • Prohibitions of certain artificial intelligence practices
  • Specific requirements for high-risk AI systems and obligations for operators of such systems
  • Standardized transparency rules for certain AI systems
  • A framework for market monitoring, market surveillance, and governance
  • Measures in support of innovation

The Act will follow a risk-based approach, establishing obligations for organizations building and/or using AI systems depending on the level of risk the AI can generate.

Prohibited AI practices

Unacceptable-risk AI systems are those considered a threat to people and will be banned. They include:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorization;
  • Biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location, or past criminal behavior);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

High-risk AI practices

AI systems that pose significant harm to people’s health, safety, fundamental rights, or the environment are categorized as high-risk. AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms with over 45 million users, were added to the high-risk list. Further, AI systems used in products falling under the EU’s product safety legislation, as well as those falling into eight specific areas, will have to be registered in an EU database:

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law

Limited risk

There will be no special requirements for limited-risk AI systems beyond minimal transparency requirements that allow users to make informed decisions. Users should be aware that they are interacting with an AI system and can then decide whether they want to continue using it. This mainly covers AI systems such as deepfake generators, which create or manipulate image, audio, or video content.
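
The tiered structure described so far can be summarized schematically. The sketch below is illustrative only and not part of the Act: the tier names follow the categories covered above, and the obligation strings are paraphrased summaries of those sections.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act, as covered above (illustrative)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"

# Paraphrased obligations per tier, based on the sections above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: banned from the EU market.",
    RiskTier.HIGH: "Assessed before release and registered in the EU database.",
    RiskTier.LIMITED: "Transparency: users must know they are interacting with AI.",
}

def obligation_for(tier: RiskTier) -> str:
    """Look up the paraphrased obligation attached to a risk tier."""
    return OBLIGATIONS[tier]

for tier in RiskTier:
    print(f"{tier.value}: {obligation_for(tier)}")
```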

Obligations for general-purpose AI

Companies building foundation models will be required to assess and mitigate possible risks to health, safety, fundamental rights, the environment, democracy, and the rule of law. Foundation models, a term coined by the Stanford Institute for Human-Centered Artificial Intelligence, are models trained on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning. Further, organizations will be required to register their models in the EU database before releasing them on the EU market.
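
To make the “minimal fine-tuning” property concrete, here is a minimal sketch using the Hugging Face transformers library (assuming `transformers` and `torch` are installed; the model choice and example sentence are arbitrary, and a real fine-tuning run would add a training loop):

```python
# Sketch: adapting a pretrained general-purpose model to a new task by
# attaching a small task-specific head -- the "minimal fine-tuning" that
# characterizes foundation models. Model choice is arbitrary.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Reuse the pretrained weights and add an untrained 2-class head.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# One example stands in for a small task dataset; in practice a few
# epochs of gradient updates on labeled data would follow.
inputs = tokenizer("The new tram line cut my commute in half.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```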

Generative AI systems, including ChatGPT, must adhere to transparency regulations by clearly indicating that their content is AI-generated. This requirement assists in distinguishing deep-fake images from authentic ones, providing safeguards against the production of illegal content. Additionally, comprehensive summaries of copyrighted data utilized for training these models should be publicly accessible.
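
The Act does not prescribe a labeling mechanism, but as a hedged illustration of what a machine-readable disclosure might look like, a generator could attach a provenance record to each output. All field names below are hypothetical.

```python
import json
from datetime import datetime, timezone

def label_generated_content(content: bytes, model_name: str) -> dict:
    """Attach a hypothetical machine-readable disclosure to AI output.

    The EU AI Act requires disclosing that content is AI-generated; it
    does not mandate this format. Field names are illustrative only.
    """
    return {
        "disclosure": "This content was generated by an AI system.",
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_bytes": len(content),
    }

record = label_generated_content(b"<image bytes>", model_name="example-gen-model")
print(json.dumps(record, indent=2))
```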

About The Author

Jaspal Singh is the Founder of Mobility Innovation Lab (MIL) and Host of the Mobility Innovators Podcast. If you are working on innovative ideas and solving mobility and transportation issues, please feel free to reach out. He loves to talk about startups, mobility, and technology.