
AI Permeates Our Lives: 2023 Expected to Continue Legislative Ramp Up

How do we both protect people from unintended harms AND ensure a strong national innovation environment for Artificial Intelligence?

Opinions abound on this topic! My viewpoint is that of a pragmatist: let's start with the desired end in mind. Check out the data I've assembled and read my thoughts 👇

Data source: IAPP Global Information Privacy Community. Data presentation: Lisa C. Palmer.

Though most people are unaware of it, Artificial Intelligence (AI) surrounds us daily, with over 91% of leading companies continually investing in it (Watters, 2022). Examples are everywhere: Siri on our phones, online shopping recommendations that have learned what we want, social media advertisements targeted to our browsing habits, automatic lane assist and braking in our cars, and much more (Heilweil, 2021; Lee & Yoon, 2021). Even less obvious, and more dangerous, are the AI systems that decide who is approved for a home loan, who is offered a job, who receives the best healthcare, and who receives bail versus being held in jail (Burt, 2020). Since this technology is so widespread and invasive in our lives, we might assume that it is heavily legislated and regulated, right? Definitely not.
Most people believe that AI, like any technology, should not be used to discriminate against people, whether purposefully or unintentionally. The Pew Research Center has documented Americans' concerns over the increasing use of AI in daily life (Nadeem, 2022). With 72% of Americans neutral about AI or more concerned than excited about it, it is politically and ethically prudent to protect constituents. In my view, the US can move forward more quickly to protect people, without stifling innovation, by focusing not on immediately requiring AI transparency but rather on the results (outcomes) that AI creates.

Pew Research Center: Americans lean toward concern over excitement about daily use of AI. Source: survey conducted November 1–7, 2021, "AI and Human Enhancement: Americans' Openness Is Tempered by a Range of Concerns."

As a committed pro-business supporter, I applaud the US approach that creates an environment where America can be the global AI innovation leader. With Russian President Vladimir Putin stating that the country with the best AI will be "the ruler of the world" and Chinese leaders aggressively pursuing AI dominance, it is clear that AI leadership is necessary for national competitiveness and security (Maggio, 2017; Marr, 2021). Thus, we must move quickly and thoughtfully to maximize the capabilities of AI while also protecting people against systemically embedded actions that our society does not condone. It is time for specific legislative and regulatory action that both supports innovation and ensures that this critical technology serves humanity well (The White House, 2019).

The US Federal Government has made progress through an Executive Order issued by President Trump, later codified by Congress in the National Artificial Intelligence Initiative Act as part of the National Defense Authorization Act for Fiscal Year 2021 (NAIIA of 2020; The White House, 2020). Earlier this year, additional foundational progress was made with the introduction of a national framework, the Blueprint for an AI Bill of Rights (OSTP, 2022). Further, national regulatory agencies have created a patchwork of situation-specific regulations. The Federal Trade Commission, for example, has established regulations and strongly advised businesses regarding expected behavior in specific situations (Jillson, 2021).
For context from my home state: Oklahoma, like many others, faces the same structural challenges found at the federal level. No single agency has accountability, and no law focuses on businesses and how they use AI. Arguably, the problem is worse in Oklahoma, where little governmental attention is given to commercially created technology issues. In 2022, the first legislative attempts were made to address AI (NCSL, 2022). Three bills were introduced, but all failed. This is unfortunate, as H.B. 3011 attempted to address AI algorithmic harms (Walke, 2022). Ideally, a unified state-level approach to addressing digital issues and ensuring opportunities is needed.

Data source: National Conference of State Legislatures. Data analysis & presentation: Lisa C. Palmer.

On the surface, it could seem that enough is being done, but the activity is misleading. Currently, there is so much debate about explaining how these systems work, among other concerns, that progress on protecting people from present-day harm is languishing (Barredo Arrieta et al., 2020). While the valuable explainability and transparency debate continues (Jacovi et al., 2021), legislation should be crafted that embeds expectations of fair and equitable outcomes across all human-impacting systems. Let's start by establishing situationally agnostic, legally auditable expectations for equitable outcomes, with associated penalties for failure to comply. The onus will then be on those who benefit financially from this powerful technology to ensure that they are creating "fair" outcomes, without stifling critical AI innovation.


Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.

Burt, A. (2020, August 28). How to fight discrimination in AI. Harvard Business Review.

Heilweil, R. (2021, May 12). Tesla is casting a spotlight on the government’s struggle to keep up with self-driving cars. Vox.

Jacovi, A., Marasovic, A., Miller, T., & Goldberg, Y. (2021, March 3–10). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI [Paper presentation]. FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, NY, United States.

Jillson, E. (2021, April 19). Aiming for truth, fairness, and equity in your company's use of AI. Federal Trade Commission Business Blog.

Lee, D., & Yoon, S. (2021). Application of artificial intelligence-based technologies in the healthcare industry: Opportunities and challenges. International Journal of Environmental Research and Public Health, 18(1), 271.

Maggio, E. (2017, September 4). Putin believes that whatever country has the best AI will be the ruler of the world. Business Insider.

Marr, B. (2021, March 15). China poised to dominate the artificial intelligence (AI) market. Forbes.

Nadeem, R. (2022, March 17). How Americans think about artificial intelligence. Pew Research Center: Internet, Science & Tech.

National Artificial Intelligence Initiative Act of 2020, Pub. L. No. 116-283, div. E, § 5001 et seq. (2021).

National Artificial Intelligence Initiative Office (NAIIO). (2021). Legislation and executive orders. National Artificial Intelligence Initiative.

National Science Foundation (NSF). (2020, August 26). NSF advances artificial intelligence research with new nationwide institutes. Special reports announcements.

NCSL. (2022, August 26). Legislation related to artificial intelligence. National Conference of State Legislatures.

Office of Science and Technology Policy. (2022, October 4). Blueprint for an AI Bill of Rights. The White House.

Roberts, D. (2021, May). FTC orders destruction of algorithms derived from privacy violations. Business Law Today, (1).

Select Committee on Artificial Intelligence. (2022).

The White House. (2019, February 11). Accelerating America’s leadership in artificial intelligence.

The White House. (2020a). American artificial intelligence initiative: Year one annual report [PDF].

The White House. (2020b, December 3). Executive order on promoting the use of trustworthy artificial intelligence in the federal government.


