The lack of formal AI governance has created a quick-to-market ecosystem that is powerful for innovation but leaves unintended harms largely unidentified, with no one accountable when those harms are discovered. 2023 will bring leaps forward in areas such as generative AI.
We will see shifting workforce impacts, ownership disputes, regulatory and policy advances, and business leaders seeking refuge in cyber insurance while embracing more AI in their internal operations, revenue generation, and risk-avoidance pursuits. Read on for a deeper dive with Lisa Palmer, Chief AI Strategist & Ethicist with AI Leaders, a boutique consulting firm helping executives and boards to maximize opportunities and minimize risks in this environment.
Why I believe we need a heavy focus on AI outcomes.
Because “doing the right thing” is failing us. Let me explain.
In my experience working within top-tier technology companies, and with thousands of business executives across Fortune 500 companies, AI systems are not purposely being built to cause harm. Instead, they are often governed by a misguided philosophy of “doing the right thing.”
It’s important to recognize that, despite positive aims, well-intentioned, business-oriented AI is creating harm today. The real danger lies in subtlety: these harms go undetected and unchecked. So let’s talk about where doing the right thing goes horribly wrong.
Arguably, it’s good business practice to extend credit only to individuals who are assessed as good credit risks. But what about when credit scores limit access to loans because of someone’s known associates, the people closest to them, so that they are never able to own a home or build generational wealth?
How about intending to lower crime and keep people safe? This is certainly a “doing the right thing” initiative. So what happens when predictive policing identifies people deemed likely to commit crimes, and their constitutional rights are undermined in the process?
What about employers seeking the best job applicants to ensure high-quality customer service? Again, this is a positive goal. And to this end, their job application system eliminates candidates based on their home zip code. Imagine being screened out as a candidate without ever knowing you were eliminated simply because of where you live.
The examples go on and on, where “doing the right thing” is NOT the right thing to do.
In 2023, sales of cyber liability and errors-and-omissions insurance policies will spike as leaders seek to balance growing regulatory pressures against the reality that their “do the right thing” governance is not working.
How will generative AI impact the workforce in 2023?
Everyone, employees and employers alike, needs to embrace lifelong learning or be left behind!
Jobs are changing as AI is added to them. Yes, some jobs are being replaced, but it’s far more common to see people now using AI as a core part of their job. We must understand that entire jobs aren’t disappearing overnight. Rather, tasks are iteratively becoming automated, causing a shift toward human + AI roles.
For business leaders, the focus will need to be on constantly revamping job descriptions as individual tasks become automated. Employers then need to invest in on-the-job training to pivot employees from the tasks that are automated into the new AI-augmented versions of their roles.
My two workforce predictions for 2023 are:
The proliferation of roles such as the “AI Whisperer.” This is someone so skilled at using words to create art that they become highly sought-after creative talent.
And an accelerated evolution of people all around the globe working in partnership with AI.
What are the legal implications of generative AI? Who owns works created by it? Who is legally responsible for its results?
I always encourage people to do three things:
Focus on the outcomes that are created by AI to identify opportunities and risks,
Grant ownership to creators, and
Follow the money to assign responsibility.
Let’s consider generative art. The first AI-generated art piece sold by the famous auction house Christie’s went for $432,500, more than 43 times its high pre-auction estimate. Clearly, there’s significant money involved here, which means that business interest and legal attention are destined to escalate.
As for the legalities, this is a sticky question around the world today! Many courts globally have already ruled that AI cannot be an inventor. So, back to generative art as the example: who owns art that is generated by a computer? In the US, an artist recently won an art contest using generative AI, and the other artists were livid. Should that artist get to claim ownership of artwork that was simply spawned by his key phrase? Or should he be required to manipulate the result in some way before he can own the outcome? Or does the engineer or company who built the algorithm own it? And who is responsible for the outcome it creates, should that outcome be offensive or harmful?
In my opinion, as we move into 2023, there are three rules to live by:
Companies must ensure that the AI they develop, test, deploy, and sell creates positive and equitable outcomes.
The creator using the AI tool owns what they make.
And the company that profits from the technology needs to be held accountable when that technology fails to meet positive and equitable outcome expectations.
What’s your forecast for AI policy in the US?
Bottom line: US AI policy will continue to be fragmented under the control of siloed regulatory agencies and differing state laws, and American companies will continue to be held accountable to standards set outside of the US.
I’m encouraged that a Republican president issued an executive order that was later codified by a Democrat-controlled legislature, and that the Blueprint for an AI Bill of Rights was released this year. This indicates that we have bipartisan momentum, which is desperately needed.
However, because the US lags behind other parts of the world, we are likely to continue seeing American companies design their operations to accommodate standards established elsewhere. This has already happened with the General Data Protection Regulation (GDPR) on data protection and privacy. GDPR and similar laws passed by five US states undermine the big-data training approach currently used in AI, because their core premise is to limit the amount of data that is captured. The stage is set for standards established outside the US federal government to dictate our future.
For 2023, policy needs to be laser-focused on protecting people from unintended harm while supporting the culture of innovation needed to ensure national security and competitiveness.
2023 promises more AI, a shifting workforce, ownership disputes, regulatory advances, and leaders buying cyber insurance.
About the Author:
Lisa C. Palmer, former big tech executive and AI doctoral candidate, is the Chief AI Strategist & Ethicist for AI Leaders based in Broken Arrow, OK, where she uses outside-in thinking to help Fortune 500 executives and board directors drive growth, improve profitability, and turn ethics and reputation risks into opportunities.