Is AI Safety and Ethics Progress Just Political Showmanship?
Public concern about the ethical implications of AI has exploded, along with expansive media coverage, since ChatGPT took the world by storm in November 2022, prompting politicians and public figures to take a stand. However, along with many others, I remain skeptical of their intentions and see much of the hype as political grandstanding.
It is true that public companies are primarily driven by financial returns, and politicians are often motivated by re-election. Let's delve deeper into some of the recent actions taken by those in power and see what they mean for the future of AI.
Politics
The $140M investment in seven research centers in the US is a direct result of a law passed years ago that specifically designated these seven sites. The National Artificial Intelligence Initiative Act (NAIIA) of 2020 was designed to bolster AI activities at over a dozen agencies, including the expansion of a network of research institutes launched in the summer of 2020. It is the closest thing to a US national AI strategy that has been endorsed by Congress.
Many believe that this law was meant to ensure that the US remains at the forefront of global AI research in the face of growing investments by other countries. So, while there's certainly a political showmanship element, there is an underlying positive motivation to keep the US ahead in the AI race.
Big Tech
Tech companies have long faced concerns about AI systems' privacy violations, bias, and the proliferation of scams and misinformation. While these issues still exist today, it is important to understand that tech executives are accountable for financial returns. It is not fair to villainize them for being responsible fiscal leaders.
It is vital to keep these financial motivations front of mind when we see tech executives' calls for regulation and ongoing doomsday predictions. The sooner their agile startup competitors are burdened by expensive compliance requirements, the better for the incumbents. They are behemoths, intrinsically slow to adapt, and their entire business models are now at risk. Thus, these calls for government intervention are NOT altruistic!
What does this mean for the future of AI?
Overall, if we want to see progress in AI safety and ethics, we must recognize that the system is designed to reward certain behaviors from all involved players. If we want different results, the system must change.
True progress on AI safety and ethics will only come about through a combination of government oversight, industry collaboration, and public pressure. It is up to us to hold our politicians and companies accountable and to work together to ensure both safety and innovation.