The Time to (AI) Act is Now: A Practical Guide to Prohibited AI Systems Compliance

The EU’s Artificial Intelligence Act (the AI Act) marks a pivotal step in regulating artificial intelligence (AI) by establishing a framework to ensure ethical AI use while safeguarding fundamental rights.

The AI Act, published in the Official Journal on 12 July 2024, introduces strict rules on the deployment and use of certain AI systems. This article provides a detailed and practical guide for businesses to navigate these regulations, focusing on AI systems compliance requirements and practical steps to ensure adherence.

A. Overview of Prohibited AI Practices

Under Article 5 of the AI Act, the following AI practices are prohibited:

1. Manipulative or Deceptive AI Systems:

  • AI systems that use subliminal techniques or purposefully manipulative methods to distort human behaviour and impair decision-making, resulting in significant harm.
  • Examples include AI systems that employ imperceptible audio or visual stimuli to influence consumer choices unknowingly.

2. Exploitation of Vulnerabilities:

  • AI systems that exploit vulnerabilities due to age, disability, or specific social or economic situations to cause significant harm.
  • This includes AI systems that target children, elderly individuals, or economically disadvantaged groups.

3. Social Scoring:

  • AI systems that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to unjustified or disproportionate treatment.
  • Social scoring by public or private entities can result in discrimination and exclusion, violating fundamental rights.

4. Predictive Policing Based on Profiling:

  • AI systems that assess or predict the risk of a person committing a criminal offence based solely on profiling or on assessing their personality traits and characteristics.
  • This prohibition does not apply to AI systems that support a human assessment already based on objective and verifiable facts directly linked to a criminal activity.

5. Untargeted Facial Recognition Databases:

  • AI systems that create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage.

6. Emotion Recognition in Workplaces and Educational Institutions:

  • AI systems designed to infer emotions in workplaces and educational settings, unless used for medical or safety reasons.

7. Biometric Categorisation:

  • AI systems that categorise individuals based on biometric data to infer sensitive characteristics such as race, political opinions, or sexual orientation.

8. Real-Time Remote Biometric Identification for Law Enforcement:

  • While this is not a primary focus for most businesses, it is important to note that real-time remote biometric identification systems are heavily restricted for law enforcement purposes, with strict conditions and safeguards.

B. Key Dates

  • 12 July 2024: The AI Act was published in the Official Journal of the European Union.
  • 1 August 2024: The AI Act enters into force.
  • 2 February 2025: The rules on Prohibited AI Systems begin to apply.

C. Enforcement and Penalties

Non-compliance with the rules on Prohibited AI Systems will attract substantial administrative fines of up to €35,000,000 or, where the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliant AI systems can also be withdrawn from the EU market.
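
To illustrate the "whichever is higher" mechanism, the short sketch below computes the upper limit of a fine for a hypothetical undertaking. The turnover figure and the maximum_prohibited_practice_fine function are illustrative assumptions only and do not form part of the AI Act.

```python
# Illustrative sketch only: the ceiling for fines for prohibited AI practices is
# EUR 35,000,000 or, for an undertaking, 7% of total worldwide annual turnover
# for the preceding financial year, whichever is higher.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07  # 7% of total worldwide annual turnover


def maximum_prohibited_practice_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper limit of the fine: the higher of the two caps."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)


if __name__ == "__main__":
    # Hypothetical undertaking with EUR 1 billion worldwide annual turnover:
    # 7% of 1,000,000,000 = 70,000,000, which exceeds the fixed EUR 35m cap.
    print(f"Maximum fine: EUR {maximum_prohibited_practice_fine(1_000_000_000):,.0f}")
```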

D. Steps to Compliance

1. Conduct an AI Inventory:

  • Begin by creating a comprehensive inventory of all AI systems currently in use within your organisation.
  • Categorise these systems based on their purpose, functionality, and the data they process (a simple record structure is sketched below).
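
Purely by way of illustration, a minimal inventory entry might record the attributes listed above. The AISystemRecord class and its field names are hypothetical assumptions and should be adapted to your organisation's existing asset registers.

```python
# Illustrative sketch of a minimal AI inventory record; the class and field
# names are hypothetical and should be adapted to existing asset registers.
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    name: str                 # internal name of the AI system
    purpose: str              # business purpose, e.g. "customer support triage"
    functionality: str        # what the system actually does
    data_categories: list[str] = field(default_factory=list)  # e.g. ["biometric"]
    owner: str = ""           # accountable business owner


# Example entry in the inventory
inventory = [
    AISystemRecord(
        name="ChatAssist",
        purpose="Customer support triage",
        functionality="Classifies and routes incoming customer queries",
        data_categories=["customer correspondence"],
        owner="Customer Operations",
    ),
]
```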

2. Assess AI Systems Against Prohibited Practices:

  • Review each AI system to determine if it falls under any of the prohibited categories outlined in Article 5.
  • Pay particular attention to systems designed for customer interaction, marketing, decision-making, and those processing sensitive data (a simple screening aid is sketched below).
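
As a first-pass triage aid only, the sketch below flags inventory descriptions that mention terms associated with the Article 5 categories. The keyword list and the screen_against_article_5 function are illustrative assumptions; keyword matching is no substitute for a case-by-case legal assessment.

```python
# Illustrative first-pass screen; keyword matching is only a triage aid and is
# no substitute for a case-by-case legal assessment under Article 5.
ARTICLE_5_KEYWORDS = {
    "subliminal or manipulative techniques": ["subliminal", "manipulative"],
    "exploitation of vulnerabilities": ["children", "elderly", "vulnerable"],
    "social scoring": ["social score", "social scoring"],
    "predictive policing based solely on profiling": ["predict offence", "profiling"],
    "untargeted facial recognition scraping": ["untargeted scraping", "face scraping"],
    "emotion recognition at work or in education": ["emotion recognition"],
    "biometric categorisation of sensitive traits": ["biometric categorisation"],
    "real-time remote biometric identification": ["real-time biometric identification"],
}


def screen_against_article_5(description: str) -> list[str]:
    """Return the Article 5 categories whose keywords appear in a system description."""
    text = description.lower()
    return [
        category
        for category, keywords in ARTICLE_5_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]


# Example: flag a system described in the inventory for closer legal review
print(screen_against_article_5("Emotion recognition for monitoring employee engagement"))
# -> ['emotion recognition at work or in education']
```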

3. Implement Compliance Measures:

  • If any AI systems are identified as potentially prohibited, develop a plan to either discontinue their use or modify them to ensure compliance.
  • Establish internal policies and procedures for ongoing monitoring and assessment of AI systems to prevent non-compliance.

4. Training and Awareness:

  • Educate employees, especially those involved in AI development and deployment, about the new regulations and the importance of compliance.
  • Provide specific training on identifying and mitigating risks associated with prohibited AI practices.

5. Documentation and Reporting:

  • Maintain detailed records of all AI systems, assessments, and compliance measures undertaken.
  • Be prepared to provide documentation to regulatory authorities if required (an illustrative record format is sketched below).
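
As an illustration of record-keeping in practice, the sketch below writes dated assessment records to a machine-readable log that could be produced on request. The record structure and file name are hypothetical assumptions.

```python
# Illustrative sketch: keep a dated, machine-readable record of each assessment
# so it can be produced to a regulator on request. The structure is hypothetical.
import json
from datetime import date

assessment_records = [
    {
        "system": "ChatAssist",
        "assessed_on": date.today().isoformat(),
        "article_5_flags": [],          # categories flagged during screening
        "outcome": "No prohibited practice identified",
        "measures": ["Annual re-assessment scheduled"],
    },
]

# Write the records to a compliance log that can be shared with authorities
with open("ai_act_assessment_log.json", "w", encoding="utf-8") as log_file:
    json.dump(assessment_records, log_file, indent=2)
```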

The AI Act represents a comprehensive effort by the EU to regulate AI technologies and protect fundamental rights. Compliance is not just about avoiding penalties but also about fostering trust and ensuring ethical AI practices. By taking proactive steps to assess and modify AI systems, your company can navigate these regulations effectively and maintain a competitive edge in a rapidly evolving technological landscape.

For further guidance and support on AI compliance, please contact Barry Scannell, Leo Moore, or any member of the William Fry Technology Department.