
Navigating the Proposed US AI Act: Guiding Businesses on Regulatory Compliance and Innovation

Many business leaders grew anxious hearing about recent happenings in Washington as the US Senate convened its full contingent to learn about AI. Big tech leaders and others were offered the opportunity to influence policy thinking. As part of my ongoing commitment to helping executives and professionals navigate AI disruption, I've addressed below the proposed Bipartisan Framework for U.S. AI Act.



This proposed act would significantly impact businesses across all industries. As this legislation aims to regulate AI development, accountability, transparency, and national security, it is crucial for businesses to understand its potential consequences and take proactive measures to adapt. Below, I explore the expected impacts on businesses, offer recommendations on how to prepare for the forthcoming changes, and draw lessons from historical examples of successful regulatory compliance.


I. Licensing Regime and Oversight

Under the proposed legislation, companies developing advanced general-purpose AI models (e.g., GPT-4) or using AI in high-risk scenarios would be required to register with an independent oversight body. This licensing regime entails maintaining risk management, testing, data governance, and incident reporting programs.
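
To make the incident-reporting obligation concrete, here is a minimal sketch of the kind of record such a program might maintain. It is purely illustrative: the Framework does not prescribe any format, field names, or filing mechanism, so everything below is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for an internal AI incident log. The proposed Framework
# does not define a schema; these fields only illustrate the kind of
# information a risk-management and incident-reporting program could capture.
@dataclass
class AIIncidentReport:
    system_name: str    # internal identifier of the AI system
    risk_category: str  # e.g., "high-risk deployment" or "general-purpose model"
    description: str    # what happened and who was affected
    mitigation: str     # corrective action taken or planned
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def to_record(self) -> dict:
        """Serialize the incident for an audit trail or a regulator filing."""
        return {
            "system": self.system_name,
            "risk_category": self.risk_category,
            "description": self.description,
            "mitigation": self.mitigation,
            "reported_at": self.reported_at.isoformat(),
        }

# Example: logging a model regression discovered in production.
report = AIIncidentReport(
    system_name="fraud-detector-v3",
    risk_category="high-risk deployment",
    description="Elevated false-positive rate blocked legitimate transactions.",
    mitigation="Model rolled back; thresholds recalibrated and retested.",
)
print(report.to_record())
```

The point of such a structure is less the code than the discipline: a consistent, timestamped record makes audits and regulator filings routine rather than scrambles.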


The impacts on businesses could include:


  • Compliance Costs: Businesses would need to allocate resources to establish and maintain risk management, testing, data governance, and incident reporting programs, as required by the licensing regime. These activities may require additional staff, technology infrastructure, and ongoing investments.

Business Example: A FinTech startup specializing in AI-powered fraud detection for online transactions would need to budget for new compliance staff, testing and documentation infrastructure, and ongoing investment to keep its algorithms aligned with the latest regulatory standards, a process that demands meticulous attention and sustained effort.

  • Regulatory Burden: The registration process and ongoing compliance requirements would introduce new administrative burdens for businesses. They would need to navigate the regulatory landscape, stay updated on evolving guidelines, and ensure continuous adherence to the established standards.

Business Example: A fintech startup offering AI-driven financial advisory services would need to establish a dedicated compliance team to navigate the complex regulatory requirements. This team would be responsible for ensuring that the company's AI models align with the evolving guidelines, which can be a time-consuming process.

  • Increased Accountability: The legislation would enhance accountability by imposing regulatory oversight on AI development and usage. Businesses would be held responsible for ensuring the safety, ethicality, and transparency of their AI systems and practices. This could involve conducting audits, disclosing information, and demonstrating compliance to regulatory authorities.

Business Example: A healthcare institution implementing AI for patient diagnosis would face increased accountability. They would need to conduct regular audits of their AI systems to ensure accurate diagnoses, disclose information about how AI influences medical decisions to patients, and demonstrate compliance with medical ethics and privacy regulations.

  • Impact on Innovation: Although many worry that regulation will dampen innovation, our current environment is a patchwork that makes innovation high risk. Reasonable and clear laws and regulations would foster innovation, preempt increasingly confusing and contradictory state-level legislation, and attract investment by reducing the risk of unknowns, giving businesses the confidence to explore new opportunities and develop advanced AI models.

*Caution: Over-regulation would limit innovation, stifle or kill startups and small businesses, and benefit the already dominant technology behemoths.


Business Example: A startup specializing in AI-driven autonomous drones could benefit from reasonable and clear regulations. With well-defined rules on drone operation, safety, and privacy, it can confidently invest in research and development to create innovative drone applications, knowing that compliance will be easier to achieve and that there is a clear path to revenue generation.

  • Competitive Landscape: The legislation will affect the competitive landscape by imposing additional requirements on businesses. Compliance with the licensing regime will become a differentiating factor, with companies that can demonstrate strong risk management, data governance, and incident reporting programs gaining a competitive advantage.

*Note: It will be important to support smaller players, since added regulatory burden favors resource-rich incumbents.


Business Example: In the autonomous vehicle industry, companies that excel in compliance with safety and data privacy regulations will gain a competitive edge. They can market their vehicles as not only innovative but also safe and privacy-conscious, attracting more customers.

  • Legal and Reputational Risks: Non-compliance with the legislation could result in legal penalties, fines, or other enforcement actions. Moreover, businesses that fail to meet the required standards may face reputational damage, loss of trust from customers and stakeholders, and negative public perception.

Business Example: A social media platform that mishandles user data and doesn't adhere to AI content moderation regulations could face hefty fines and legal actions. Additionally, the negative publicity from privacy breaches could lead to a significant loss of trust among users and the public.


To prepare for these changes:


In the 1980s-90s, industries like pharmaceuticals and medical devices faced increased FDA regulation of product approval processes. Others can learn from how savvy organizations responded to these regulatory changes.


Pfizer's commitment to quality and safety is codified in a comprehensive Corporate Quality Policy describing how the company ensures the delivery of safe and effective products while meeting regulatory requirements and other relevant standards and guidelines. The policy provides a systematic approach to honoring Pfizer's commitment to patients and defines the elements of its Quality Management System for regulated activities. Pfizer's culture of quality and integrity includes a clear governance and organizational structure, a risk-based management process, and a rigorous training and qualification program for employees. Pfizer has also engaged in proactive dialogue with regulators and other stakeholders to comply with quality laws and regulations such as the Prescription Drug User Fee Act (PDUFA), which requires pharmaceutical companies to pay fees to support the FDA's review of new drug applications.


Following Pfizer's example of establishing governance roles focused on quality and risk programs and engaging regulators and experts, companies can:

  • Develop compliance frameworks that embed documentation from the innovation stage

  • Streamline registration processes through transparent, data-driven assurance approaches

  • Provide feedback that informs the evolution of guidelines and keeps obligations balanced

Learning from Pfizer, proactive investment in quality systems guided by regulatory collaboration can smooth the introduction of AI applications while building trust through transparent oversight. Clear rules also reduce risk, which fosters innovation overall.


II. Legal Accountability and Consumer Protection

The proposed Framework emphasizes legal accountability for companies using AI, enabling enforcement and private rights of action when AI systems breach privacy, violate civil rights, or cause harm.


The impacts on businesses include:


  • Legal Exposure: The proposed Act would increase legal exposure for businesses using AI systems. They would be held legally accountable for any privacy breaches, civil rights violations, or harm caused by their AI systems. This could result in legal disputes, lawsuits, and potential financial liabilities.

Business Example: A social media platform uses AI algorithms to recommend content to users. If the AI inadvertently promotes hate speech or harmful content, resulting in civil rights violations, the platform could be legally accountable. This could lead to legal disputes, users filing lawsuits, and potential financial liabilities.

  • Compliance Requirements: Businesses would need to implement measures to ensure the safety, accuracy, and transparency of their AI systems, as well as to protect privacy and civil rights. This may involve conducting thorough risk assessments, implementing robust data protection measures, and establishing mechanisms for addressing user complaints and concerns.

Business Example: An autonomous vehicle manufacturer must ensure compliance with the Act. They use AI for self-driving cars. To meet compliance requirements, they conduct rigorous risk assessments to identify and mitigate potential safety issues. They implement robust data protection measures to safeguard user information, and they establish a user feedback system to address concerns promptly. This not only ensures compliance but also enhances the safety and transparency of their AI-driven vehicles.

  • Reputational Risks: Non-compliance or instances of AI systems breaching privacy, violating civil rights, or causing harm could lead to significant reputational damage. Negative publicity, loss of consumer trust, and public backlash could impact a business's brand image and market reputation.

Business Example: Picture an e-commerce retailer employing AI-powered product recommendations. If the AI algorithm starts recommending inappropriate or offensive products to customers, and this issue gains public attention, it could result in reputational risks. Negative publicity on social media platforms, coupled with customers expressing dissatisfaction and sharing their experiences, may erode consumer trust. This negative perception can lead to a decline in customer loyalty, affecting the retailer’s brand image and potentially causing a drop in sales and market share.

  • Increased Oversight and Reporting: The Act may require businesses to enhance their oversight mechanisms and reporting practices. They would need to demonstrate compliance, provide transparency regarding AI capabilities and limitations, and promptly address any issues or concerns raised by users or regulatory authorities.

Business Example: A financial institution utilizing AI for credit scoring must provide transparent reports on how their AI models determine creditworthiness. They may need to establish channels for users to report concerns about the fairness of the AI-driven decisions. Compliance with these reporting requirements ensures transparency and accountability but may also entail additional administrative efforts.
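
As a concrete illustration of the transparent reporting described above, the sketch below shows how a lender might attach a per-feature breakdown to each automated credit decision. The model, feature names, and weights are invented for illustration; real credit models and disclosure formats will differ, and nothing here reflects requirements in the proposed Act.

```python
# Hypothetical linear scoring model with invented features and weights,
# used only to illustrate per-decision transparency reporting.
WEIGHTS = {
    "payment_history": 0.45,      # fraction of on-time payments (0 to 1)
    "account_age_years": 0.05,    # contribution per year of credit history
    "credit_utilization": -0.30,  # fraction of available credit in use (0 to 1)
    "recent_inquiries": -0.05,    # contribution per hard inquiry in the last year
}

def score_with_explanation(applicant: dict) -> dict:
    """Return a score plus each feature's contribution, for disclosure."""
    contributions = {
        feature: round(weight * applicant[feature], 3)
        for feature, weight in WEIGHTS.items()
    }
    return {
        "score": round(sum(contributions.values()), 3),
        "contributions": contributions,
    }

print(score_with_explanation({
    "payment_history": 0.92,
    "account_age_years": 6,
    "credit_utilization": 0.40,
    "recent_inquiries": 2,
}))
```

A simple additive breakdown like this will not capture a complex model, but the pattern (every automated decision ships with a machine-readable explanation) is what oversight and reporting obligations tend to reward.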

  • Consumer Perception and Trust: Consumer perception of AI systems and the trust placed in businesses using AI could be influenced by the Act. Customers may expect higher levels of accountability, transparency, and ethical practices from AI companies. Meeting these expectations and building and maintaining consumer trust would become crucial for businesses.

Business Example: A retail company implementing AI-driven inventory management may find that customers expect transparency about how AI influences inventory decisions. Ensuring customers feel confident that AI is used responsibly and to their benefit becomes essential for maintaining consumer trust.

  • Impact on Innovation and Development: The increased legal accountability and compliance requirements may affect the pace of innovation and development of AI technologies. Businesses will need to invest additional resources in research, testing, and compliance, which could slow the introduction of new AI products or services to the market. However, this slowdown may be offset by a greater appetite for innovation investment, since clearer rules reduce risk.

Business Example: Imagine a tech startup specializing in AI-driven healthcare diagnostics. The proposed Act introduces heightened legal accountability and compliance requirements, necessitating additional investments in research, rigorous testing, and stringent compliance measures. This might initially slow the introduction of their AI diagnostic tool to the market, but the clarity of the rules could also strengthen investor and customer confidence, offsetting some of that delay.


To prepare for these changes:


In the 1990s, increased privacy regulations like the Children's Online Privacy Protection Act (COPPA) and Health Insurance Portability and Accountability Act (HIPAA) raised compliance obligations for companies. Lessons can be drawn from how businesses adapted responsibly.


One example of a business adapting and behaving responsibly while driving their business imperatives is Scholastic, a leading publisher. They diligently comply with COPPA and HIPAA, securing parental consent for children's data and protecting health information. Recognized by the FTC for their commitment, they balance innovation with privacy adherence. Through proactive engagement with regulators and initiatives like the Platform for Privacy Preferences project, Scholastic maintains trust, reduces legal risks, and remains focused on their core mission of advancing literacy.


As the proposed AI Act increases obligations, businesses can likewise view this as a chance to innovate responsibly, approaching compliance holistically through measures such as:

  • Appointing a new governance role focused on AI transparency, accountability and oversight

  • Engaging with policymakers and experts to understand requirements

  • Developing robust processes and documentation around AI uses and associated risks

Learning from how others successfully navigated new rules in the past offers a model to balance innovation, legal standards, and business needs. Doing so helps to safeguard operations legally while strengthening transparency and trust with consumers.


III. National Security and International Competition

The legislation emphasizes protecting national security by restricting the transfer of advanced AI models and related technologies to adversary nations or those involved in gross human rights violations.


The impacts on businesses include:


  • Export Restrictions: Businesses engaged in the development and deployment of advanced AI models and related technologies would need to comply with export control regulations. They may face restrictions on selling, transferring, or sharing these technologies with countries designated as adversaries or involved in human rights violations. This could impact international collaborations, partnerships, and market opportunities.

Business Example: Imagine a semiconductor manufacturer specializing in AI chips and hardware. They have a global presence and supply AI hardware to various industries, including telecommunications. With the new export restrictions, they are faced with limitations on selling their advanced AI chips to certain countries designated as adversaries. These chips are vital for improving the performance of AI-driven telecom equipment. This added complexity and limitation on market access impact the manufacturer’s ability to meet the growing global demand for AI hardware, potentially affecting their competitive position and market share in the AI technology industry.

  • Compliance and Due Diligence: Businesses would need to establish rigorous compliance processes and due diligence measures to ensure that their AI technologies are not being exported to restricted entities. This may involve screening customers, conducting risk assessments, and implementing robust export control frameworks to prevent inadvertent technology transfers.

Business Example: Consider an aerospace manufacturer specializing in AI-enhanced navigation systems for commercial aircraft. Their AI technology significantly improves flight safety and efficiency. With the introduction of export restrictions, the manufacturer must establish rigorous compliance processes and due diligence measures to prevent inadvertent exports to restricted entities or countries involved in human rights violations. To achieve compliance, the company implements a stringent screening process for their customers and partners. They conduct thorough risk assessments, evaluating the end-use of their AI-enhanced navigation systems and the affiliations of potential buyers. They also develop and enforce strict export control frameworks to ensure that their technology remains within authorized channels. These compliance efforts, though resource-intensive, are essential for national security and the responsible use of their AI technology in aviation. Ensuring that their systems do not end up in the wrong hands safeguards their reputation, preserves customer trust, and helps maintain the integrity of their products in the global aerospace market.
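
For a concrete picture of the screening step this example describes, here is a minimal sketch of a denied-party check. The list entries and matching logic are placeholders; real export-compliance programs screen against official sources such as the BIS Entity List using vetted tooling and legal review, so treat this purely as an illustration.

```python
# Placeholder restricted-party list; real programs pull from official sources
# (e.g., the BIS Entity List) and keep them continuously updated.
RESTRICTED_PARTIES = {
    "example adversary labs",
    "sample sanctioned trading co",
}

def screen_counterparty(name: str, stated_end_use: str) -> dict:
    """Flag a proposed transaction for export-control review."""
    normalized = name.strip().lower()
    matches = [party for party in RESTRICTED_PARTIES if party in normalized]
    # Crude end-use heuristic for illustration; real reviews are far richer.
    sensitive_end_use = "military" in stated_end_use.lower()
    hold = bool(matches) or sensitive_end_use
    return {
        "counterparty": name,
        "list_matches": matches,
        "decision": "hold for compliance review" if hold else "clear to proceed",
    }

print(screen_counterparty("Sample Sanctioned Trading Co. Ltd", "civil aviation"))
```

The value of even a crude automated screen is that it forces every transaction through a documented checkpoint before a human compliance decision, which is the pattern the aerospace example above relies on.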

  • Supply Chain Complexity: The legislation's focus on national security will lead to increased scrutiny of supply chains associated with AI technologies. Businesses will need to assess and monitor their supply chains to ensure that components, software, or services sourced from external vendors or partners do not violate export control regulations or compromise national security interests.

Business Example: Imagine an electronics manufacturer specializing in AI-powered consumer devices, such as smartphones and smart home appliances. They rely on a global network of suppliers for various components and software. In light of the legislation’s focus on national security, the manufacturer faces increased scrutiny of their supply chain. They need to ensure that components, software, and services sourced from external vendors or partners comply with export control regulations and do not compromise national security interests. To address this, the company initiates a comprehensive assessment of its supply chain. They identify critical components and software that incorporate AI technology and assess the origins of these components. This includes a close examination of suppliers’ compliance with export regulations and their adherence to ethical sourcing practices. This scrutiny introduces added complexity and demands proactive monitoring of the supply chain. The manufacturer must establish mechanisms for ongoing assessment and verification, ensuring that their products maintain compliance and do not inadvertently contribute to national security risks. While challenging, this diligence is essential for both compliance and safeguarding their reputation as a responsible and secure AI technology provider.

  • Impact on International Collaborations: The legislation will impact international collaborations and partnerships, particularly those involving countries or organizations that fall under the restricted categories. Businesses will need to navigate complex legal and regulatory frameworks to ensure compliance while continuing to engage in global research, development, and innovation initiatives.

Business Example: Consider a global research consortium focused on advancing AI for healthcare. This consortium involves researchers, universities, and technology companies from various countries, including some in regions designated as restricted under the legislation. With the introduction of export restrictions, the consortium faces challenges in maintaining its international collaborations. Some member organizations are located in restricted countries, and their involvement in collaborative AI research initiatives becomes subject to complex legal and regulatory frameworks. To navigate this, the consortium must establish a comprehensive compliance strategy. They engage legal experts to interpret the legislation’s implications on their collaborations. This involves identifying specific projects or activities that may be affected and developing mitigation plans. While the legislation adds layers of complexity, the consortium recognizes the importance of responsible global research and innovation in healthcare AI. They are determined to find ways to continue their collaborations, even if it means adapting to the evolving regulatory landscape. This illustrates the need for businesses and organizations to proactively address compliance challenges while pursuing cross-border initiatives in AI research and development.

  • Ethical Considerations: The legislation's emphasis on protecting national security will raise ethical considerations for businesses involved in AI development. They would need to assess the potential implications of their technologies being used in contexts that contradict human rights or pose risks to national security. Ethical guidelines and responsible AI practices would become paramount in ensuring compliance and avoiding reputational damage.

Business Example: Imagine a software vendor specializing in AI-driven surveillance systems. Their technology is used for public safety and security, but the legislation introduces a heightened emphasis on protecting national security. With this new focus, the company faces ethical dilemmas. They need to assess the potential implications of their surveillance technology being used in ways that could infringe on human rights or civil liberties, even if it’s intended for national security purposes. In response, the company takes proactive steps to align with ethical guidelines and responsible AI practices. They engage with human rights organizations and experts to conduct impact assessments. This involves evaluating the potential risks and benefits of their technology, as well as considering safeguards to prevent misuse. While this adds complexity to their business operations, the company recognizes that responsible AI development is essential not only for compliance but also for safeguarding their reputation and maintaining the trust of their customers and stakeholders. These ethical considerations shape their approach to AI development, ensuring that their technology respects human rights and avoids contributing to potential abuses of power.

  • Industry Collaboration and Standards Development: To prepare for the legislation, businesses can engage in industry-wide collaborations and initiatives aimed at shaping the legislation, sharing best practices, and contributing to the development of ethical and responsible AI standards.

Business Example: Imagine an alliance of renewable energy companies that heavily rely on AI for optimizing energy production and distribution. With the impending legislation, they recognize the need for industry-wide collaboration. The alliance brings together companies from various sectors, including solar, wind, and grid management. They join forces to shape the legislation, sharing their unique insights on AI’s role in sustainable energy practices. By presenting a unified front, they aim to influence the legislation in a way that encourages responsible AI adoption while supporting clean energy goals. Emulating the successful public-private collaboration seen in initiatives like the Cybersecurity Information Sharing Act (CISA), the alliance establishes working groups focused on developing industry-specific AI standards. They engage with government agencies to ensure these standards align with the legislation’s objectives. This collaborative effort not only helps these businesses navigate the regulatory complexities but also contributes to the development of ethical and responsible AI standards within the renewable energy sector. By working together, they ensure that AI technologies are harnessed for sustainable, environmentally friendly practices while adhering to legal and ethical guidelines.


To prepare for these changes:


To effectively navigate the implications for supply chains and collaborations while upholding national security, businesses can look to lessons from past policy changes. Following the September 11th attacks, the US government tightened export controls through the Patriot Act, building on earlier measures such as the Enhanced Proliferation Control Initiative (EPCI). Semiconductor manufacturers faced challenges, as chip exports now required licenses for a broader range of countries.


After the 9/11 attacks, Texas Instruments (TI), a global leader in the semiconductor industry, showcased exemplary leadership. They swiftly developed a comprehensive export compliance program in line with U.S. regulations. By establishing a dedicated team, implementing rigorous training, and leveraging technology, TI ensured thorough export transaction monitoring. They actively engaged with the U.S. government and stakeholders, advocating for balanced export controls that considered both national security and economic interests. Furthermore, TI collaborated with business partners to ensure supply chain compliance, fostering trust and transparency. Their proactive approach not only helped them navigate post-9/11 challenges but also solidified their reputation in the global market.


Businesses preparing for changes under the US Artificial Intelligence Act can similarly engage to shape pragmatic implementation approaches. They can conduct due diligence on existing relationships through a national security lens, exploring options like:

  • Appointing oversight committees to periodically evaluate high-risk supply chains and collaborations.

  • Establishing an industry-wide pre-screening process for new partnerships in restricted areas, allowing expert guidance upfront.

Drawing on TI's example, active policy engagement, voluntary self-regulation, and proactive partnership efforts can help balance national security imperatives with continued innovation and global economic leadership in the era of artificial intelligence.


Conclusion

The proposed bipartisan US Artificial Intelligence Act has the potential to reshape the AI landscape and significantly impact businesses across various sectors. By proactively preparing for the forthcoming changes, staying informed about regulatory developments, conducting compliance assessments, enhancing transparency and accountability, prioritizing consumer protection, and engaging in public-private collaborations, businesses can navigate this evolving regulatory landscape while building trust, ensuring accountability, and embracing responsible AI practices. Historical examples of successful regulatory compliance, from FDA product approval regulation to the Patriot Act and Enhanced Proliferation Control Initiative, the Children's Online Privacy Protection Act, the Health Insurance Portability and Accountability Act, and public-private collaboration initiatives, offer businesses inspiration for their preparedness efforts.


Disclaimer: The information provided in this blog is based on the proposed legislation as of the date of writing and may be subject to changes or amendments. Examples of businesses that have learned to succeed within regulatory and legal constraints are intended to be representative and do not serve as an endorsement of their ethics. Consult legal professionals and regulatory authorities for the most up-to-date and accurate guidance.
