
Guide for U.S. Businesses: Global Reach of the EU AI Act


The European Union's Artificial Intelligence Act (EU AI Act), arguably the world's first comprehensive AI law, was officially approved by the European Parliament on March 13, 2024, setting new precedents in AI governance on the global stage. The act is particularly crucial for U.S. businesses that use or create AI technologies to understand, as its impact extends beyond the EU, influencing global AI practices and standards. Before businesses feel the effects of this new law, the Act must be published in the Official Journal of the EU, which will likely happen later in 2024. Most provisions will then apply roughly two years after entry into force, meaning the law likely will not be fully implemented until late 2025 or 2026.


Below, I explain what's included in this new law, why U.S. businesses should care (including sample use cases that create risk), how to assess your current compliance posture, and recommended actions.

Originally published 12/9/23, when political agreement on general terms was reached. Updated 3/17/24, when the final legislation was approved.


Critical Warning to U.S. Businesses NOT Operating in the EU


U.S. business leaders, take note: The EU AI Act extends well beyond the EU's borders. It governs not only AI systems within the EU but also those from foreign entities whose outputs affect the EU market. This means any U.S.-based provider whose AI system impacts EU individuals or businesses, particularly through high-risk or prohibited systems, could face stringent penalties. Fines can reach up to €35 million or 7% of global turnover, whichever is higher. Thus, it's crucial for any non-EU company, including those in the U.S., to understand and comply with the EU AI Act to avoid potential legal and financial repercussions.

The Core Elements of the EU AI Act


The EU AI Act marks a significant milestone in global AI regulation, echoing the wide-reaching impact of the GDPR before it. This comprehensive legislation introduces a risk-based framework to regulate AI, encompassing prohibitions on certain AI uses, including sensitive biometric categorization and emotion recognition, and stringent controls over biometric identification in public spaces. It categorizes AI systems based on potential risks, with particular focus on high-risk applications, and implements a two-tier system for general AI systems. Non-compliance carries hefty penalties, underscoring the Act's seriousness in shaping the future of AI usage globally. Here are 9 key highlights to understand:


Risk-Based Regulatory Framework: AI systems are categorized by the risk they pose, and systems classified as high-risk require a mandatory fundamental rights impact assessment.


Foundation Models Regulated: Additional obligations apply to models whose training required more than 10^25 floating-point operations (FLOPs) of compute, currently only the largest of the large language models (see the first sketch after this list).


Banned Systems: The following systems will be prohibited, with companies given 6 months to comply:


  • biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)

  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases

  • emotion recognition in the workplace and educational institutions

  • social scoring based on social behavior or personal characteristics

  • AI systems that manipulate human behavior to circumvent people's free will

  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).


Transparency: Providers and users of high-risk AI systems must be able to explain how their systems work and make decisions. This includes providing clear and comprehensible information about the system's capabilities, limitations, data sources, and human oversight.


Bias Mitigation: High-risk AI systems must be designed and developed to manage biases effectively, ensuring that they are non-discriminatory and respect fundamental rights.


Documentation Maintenance: Providers of high-risk AI systems must maintain thorough documentation to demonstrate their compliance. This includes records of programming and training methodologies, data sets used, and measures taken for oversight and control.


Human Oversight: Human oversight is required for high-risk systems, ensuring that human discretion to minimize risks remains part of the AI system's deployment.


Penalties for Non-Compliance: Non-compliance with the EU AI Act can lead to substantial fines, ranging from €7.5 million or 1.5% of turnover up to €35 million or 7% of global turnover, depending on the infringement and company size. For context, consider the possible impact on Microsoft: its global revenue was approximately $211.92 billion in fiscal year 2023, and 7% of that figure is roughly $14.8 billion (see the fine calculation sketch after this list).


Exclusions for Open-Source Systems: AI systems that are open-source and freely available are generally exempted, unless they are foundation models. This supports the intent that smaller developers, or those developing AI in a more open, collaborative environment, face fewer regulatory hurdles. However, if these systems become foundational (i.e., extensively used as a base for other AI applications), they may still fall under the purview of the Act. Further, the exemptions are not automatic but depend on whether the systems meet criteria such as being transparent, accountable, and respectful of human dignity.
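
To make the 10^25 FLOPs threshold concrete, here is a minimal back-of-the-envelope sketch. It assumes the common heuristic that dense transformer training costs roughly 6 FLOPs per parameter per training token; both the heuristic and the example model size are illustrative assumptions, not the Act's official measurement method.

```python
# Rough check of whether a model's estimated training compute crosses the
# EU AI Act's 10^25 FLOPs systemic-risk threshold. Uses the common heuristic
# that dense transformer training costs ~6 FLOPs per parameter per token;
# this is an illustrative estimate, not the Act's measurement methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

def crosses_threshold(parameters: float, training_tokens: float) -> bool:
    return estimate_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs, crosses threshold: {crosses_threshold(70e9, 2e12)}")
# Prints ~8.40e+23 FLOPs, crosses threshold: False (roughly an order of magnitude below).
```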
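
The fine math itself is simple: the penalty is the higher of the fixed amount or the percentage of global turnover. A minimal sketch of the top-tier calculation, using the Microsoft figure above and treating the dollar revenue as if it were euros purely for illustration:

```python
# Top-tier maximum fine under the EU AI Act: the HIGHER of a fixed amount
# (35 million) or 7% of global annual turnover. The currency handling and
# revenue figure below are illustrative simplifications.

def max_fine(global_turnover: float, fixed_cap: float = 35e6, pct: float = 0.07) -> float:
    """Return the greater of the fixed cap or pct of turnover."""
    return max(fixed_cap, pct * global_turnover)

print(f"{max_fine(211.92e9) / 1e9:.2f} billion")  # 14.83 billion: 7% dwarfs the 35M cap
```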


Why U.S. Businesses Care - Examples of High-Risk Business Use Cases


If you think that this doesn't impact your business, I encourage you to reflect more deeply. Even if your business doesn't operate in the EU, you'll likely be impacted by this legislation, just as the GDPR set a global standard. Also take note that if you're using vendor products that have AI embedded in them, then your organization is at risk. Here are some examples of high-risk use cases that are most likely to drive fines (this is not an exhaustive list; a minimal triage sketch follows the list):


  1. Healthcare Diagnostics and Treatment Recommendations: AI systems used in healthcare for diagnosing diseases or recommending treatments could be considered high-risk due to their potential impact on health and safety.

  2. Credit Scoring and Lending Decisions: AI used in financial services for credit scoring or loan approvals could be classified as high-risk, given their significant influence on individuals' financial situations.

  3. Employment and Recruitment Tools: AI-driven tools used for screening job applicants or making employment decisions could be high-risk due to their potential impact on employment opportunities and livelihoods.

  4. Education and Exam Grading Systems: AI applications used for grading exams or educational assessments could be considered high-risk, impacting students' educational outcomes.

  5. Law Enforcement and Judicial Systems: AI systems used in predictive policing, risk assessment in sentencing, or parole decisions could be high-risk, given their profound implications for fundamental rights and freedoms.

  6. Transportation and Autonomous Vehicles: AI in autonomous vehicles or traffic management systems could be high-risk due to safety implications.

  7. Audit and Financial Review Services: AI used in auditing financial statements or assessing financial health of companies could be high-risk, given their critical role in financial accuracy and integrity.

  8. Real Estate and Property Management: AI used for predictive analytics in real estate investments or property valuations could be high-risk due to their significant financial implications.

  9. Energy and Utilities: AI applications in energy distribution, like load forecasting or grid management, could be considered high-risk due to their impact on critical infrastructure.

  10. Retail and E-commerce: AI used for personalized pricing or creditworthiness assessment in e-commerce could fall under high-risk, affecting consumer rights and financial fairness.

  11. Manufacturing and Production: AI systems for quality control or predictive maintenance in manufacturing could be high-risk, given their impact on product safety and workplace hazards.

  12. Agriculture and Food Production: AI applications in precision agriculture affecting crop treatment or food safety could be high-risk due to potential health and environmental impacts.

  13. Media and Entertainment: AI systems used for deepfake detection or content recommendation could be high-risk, especially if they influence public opinion or infringe on intellectual property rights.

  14. Telecommunications: AI applications for network optimization or customer data analysis might be considered high-risk due to privacy and security concerns.

  15. Public Sector Services: AI used in public service delivery, such as social welfare or public resource allocation, could be high-risk, given their impact on citizen rights and public trust.

  16. Travel and Hospitality: AI-driven personalization in travel services or risk assessment in hospitality management might fall under high-risk, affecting consumer rights and safety.

  17. Legal: AI systems used for legal analysis, predicting case outcomes, or assisting in litigation strategy might be considered high-risk due to their potential impact on legal rights and the justice system.

  18. Consulting and Advisory Services: AI applications that provide strategic business recommendations or risk assessments could be high-risk, especially if they significantly influence business decisions or financial planning.

  19. Technology

    1. Cybersecurity Applications: AI systems used for threat detection or security-breach prediction might be high-risk if they form safety components of products or services already regulated under EU law (such as electronic communications networks and services), or if they are used in critical infrastructure such as energy, transport, or health.

    2. Cloud Computing Services: AI that manages or optimizes cloud resource allocation could be high-risk under the same conditions, or if it impacts data privacy or the operational reliability of critical infrastructure.

    3. Software Development Tools: AI-driven code generation or testing tools could be high-risk if they form safety components of regulated products such as medical devices, machinery, or vehicles, or if they significantly influence the quality and security of software in critical infrastructure.
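
If you want to start triaging your own AI portfolio against categories like those above, a minimal sketch of an internal inventory record might look like the following. The tier names mirror the Act's risk-based framework, but the example systems and the hardcoded classifications are hypothetical illustrations, not legal determinations.

```python
# Minimal sketch of an internal AI-system triage record against the EU AI Act's
# risk tiers. The example systems are hypothetical; real classification
# requires legal analysis, not a hardcoded label.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. social scoring, workplace emotion recognition
    HIGH = "high"              # e.g. hiring tools, credit scoring, medical diagnostics
    LIMITED = "limited"        # lighter transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"        # everything else

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    output_reaches_eu: bool  # do outputs affect EU individuals or businesses?
    tier: RiskTier

    def needs_rights_assessment(self) -> bool:
        """High-risk systems require a fundamental rights impact assessment."""
        return self.tier is RiskTier.HIGH and self.output_reaches_eu

inventory = [
    AISystemRecord("resume-screener", "employment screening", True, RiskTier.HIGH),
    AISystemRecord("support-chatbot", "customer service", True, RiskTier.LIMITED),
]
for system in inventory:
    print(system.name, system.tier.value, "assessment needed:", system.needs_rights_assessment())
```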


Preparing for Compliance - Key Questions for U.S. Businesses


Understanding the massive potential negative financial impact, businesses must effectively navigate the EU AI Act's complexities. To help you do so, I recommend that you consider these crucial questions. Note: Even if you don't operate in the EU, assume that this will become the global de facto standard.


  1. Compliance Assessment: Are our AI systems in line with EU high-risk regulations?

  2. Risk Management: What are our risks under the EU AI Act, particularly for systems that might fall under high-risk or prohibited categories?

  3. Strategic Alignment: How do our strategies align with the new regulatory environment?

  4. Operational Readiness: What operational changes are needed for EU AI Act compliance?

  5. Technology Audit: Are we using AI technologies now prohibited by the EU AI Act?

  6. Data Management: How do we ensure our data management complies with the EU AI Act's fundamental rights impact assessments?

  7. Human Oversight: What systems do we have for human oversight of high-risk AI systems?

  8. Financial Planning: Are we prepared for the financial implications of compliance and potential fines?

  9. Innovation vs Compliance: How do we balance these needs?

  10. Employee Training and Awareness: What training is needed for effective compliance?

  11. Ethical AI Commitment: How do our practices align with the EU AI Act's ethical standards?

  12. Market Strategy: How should our EU strategy adapt under the AI Act? If we don't operate there, do we produce AI system outputs used by EU individuals or businesses downstream?

  13. Stakeholder Engagement: How are we communicating our compliance strategy?


Strategies for Effective Adaptation


To effectively adapt to the EU AI Act, U.S. businesses can follow the Plan-Do-Check-Act (PDCA) framework, a widely recognized business management method:


  1. Plan: Develop a strategic plan for AI Act compliance. This includes identifying affected AI systems, assessing current compliance levels, setting clear objectives, and allocating resources.

  2. Do: Implement the plan by adapting AI systems, conducting employee training, establishing legal and regulatory consultation, and embedding ethical AI practices.

  3. Check: Regularly monitor and review the effectiveness of the compliance measures. This involves auditing AI systems, assessing employee understanding, and evaluating legal advice.

  4. Act: Based on the review, take corrective actions to improve compliance strategies. Continuously update policies, training programs, and AI system designs to meet evolving regulations and standards.


Consider taking these specific actions:


  1. Situation Assessment: Conduct a thorough review of how current AI systems and practices align with the EU AI Act. This should include an audit of AI applications, data usage, and existing compliance measures.

  2. Regulatory Change Management: Establish a dedicated team or assign roles for monitoring, interpreting, and implementing regulatory changes.

  3. Training: Develop comprehensive training programs to educate employees on the AI Act's requirements and ethical AI practices.

  4. Consultation and Collaboration: Engage with legal experts, AI ethics specialists, and industry peers to share best practices and navigate complexities.

  5. Technology Adaptation and Auditing: Regularly review and update AI systems to ensure ongoing compliance. This includes auditing algorithms for bias (a minimal bias-audit sketch follows this list) and ensuring transparency in AI decision-making processes.

  6. Ethical AI Practices: Foster a culture that prioritizes ethical AI development, going beyond mere compliance to embrace responsible AI usage as a core business value.

  7. Communication and Transparency: Regularly communicate with stakeholders, including customers, employees, and regulators, about AI practices and compliance efforts.

  8. Risk Management and Contingency Planning: Develop risk management strategies and contingency plans to address potential compliance failures or technological challenges.

  9. Organizational Roles: Reevaluate and adapt organizational roles and structures to support EU AI Act compliance. I will provide more specifics on this in an upcoming blog.

  10. Use AI to Ensure AI Compliance: Utilize AI tools and solutions to assist in compliance efforts. This could include using AI for monitoring compliance, managing data more effectively, and ensuring transparency in AI operations.
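
To make item 5 concrete, here is a minimal sketch of one simple bias-audit check: the demographic-parity gap between approval rates across groups. The groups and decisions below are made-up illustrations, and a real audit would use multiple fairness metrics plus legal review.

```python
# Minimal demographic-parity check for a high-risk system's decisions
# (e.g. loan approvals). Illustrative only: real bias audits examine many
# metrics, sample sizes, and intersectional groups.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs. Returns approval rate per group."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved  # bool counts as 0/1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Largest difference in approval rates between any two groups."""
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates, "gap:", round(parity_gap(rates), 2))  # gap 0.33 flags a disparity to investigate
```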


This approach ensures that U.S. businesses not only comply with the EU's AI Act but also use it as a catalyst for innovation and ethical leadership in AI.


Conclusion


The EU's AI Act is a high-impact development, setting a new global benchmark in AI governance that impacts U.S. businesses across various industries. From healthcare to technology, the implications are vast, necessitating a strategic path from awareness to action. By embracing the Plan-Do-Check-Act framework, businesses can effectively navigate these regulatory waters, ensuring compliance while fostering innovation. The quick and thoughtful transition from strategy to action is critical: it's not just about understanding the EU AI Act but actively adapting to its requirements.


Upcoming blogs will dig deeper into specific roles and responsibilities reshaped by this legislation (hint: many roles were altered or created by this act), offering a roadmap for navigating this new era of AI regulation. Stay tuned as we explore how to transform these challenges into opportunities for growth and leadership in AI practices.


Feel free to contact me at lisa@drlisa.ai if your organization needs guidance.
