Do you worry about AI destroying your future? Is there any chance that government regulation will save you?
To give shape to your fear, pick your favorite dystopian sci-fi movie:
Metropolis (1927) – AI-like robot Maria is used to manipulate the working class.
2001: A Space Odyssey (1968) – A spaceship’s onboard AI supercomputer, HAL, starts killing the crew to keep them from interfering with HAL’s primary directive.
The Terminator (1984) – Skynet, a self-aware AI, launches a nuclear war and sends robots to eliminate humanity.
Ex Machina (2014) – A reclusive scientist creates a humanoid AI robot named Ava, who turns on humans for her own benefit.
Those are the worst cases – AI harming or enslaving humanity. But there are more pedestrian harms to worry about, such as threats to individual privacy, AI bias or discrimination, and AI eliminating jobs and depressing wages.
AI regulation is just taking shape. There is no comprehensive federal regulation of AI. Some states and cities have enacted laws addressing specific AI uses. The European Union is crafting a comprehensive AI law, but many important details remain to be determined.
What’s Happening in the U.S.
In the U.S., on the federal level, the Biden Administration issued a thought piece entitled the “Blueprint for an AI Bill of Rights.” It contains five guiding principles for the responsible use of AI. It’s aspirational guidance, not binding law.
The Federal Trade Commission has been the most active federal agency, collaborating with the Department of Justice and the Equal Employment Opportunity Commission. These agencies claim to have some power over AI already.
They have warned against possible bias/discrimination violations arising from using AI in credit, tenant approval, hiring and employment, and insurance. They also have addressed invasive commercial surveillance.
The FTC also has issued guidance that companies shouldn’t deceive consumers about when AI is being used to interact with them, that customers should receive an explanation when they are denied a product or service based on AI decision-making, and that companies should validate that their AI models work as intended.
On the state and local levels, several states have enacted laws to address possible AI bias/discrimination in hiring and employment decisions. Some jurisdictions have banned or restricted the use of facial recognition software in law enforcement. Several jurisdictions also have enacted laws allowing civil suits against creators of deepfakes, especially when used for fabricated pornography. California has imposed notice and disclosure requirements on the use of chatbots to incentivize sales or to influence election voting.
The Coming Battle Over AI and Bias/Discrimination
Overall, the hottest area for government action in the United States is addressing bias/discrimination via AI in hiring and employment. What constitutes illegal bias/discrimination is hotly litigated in the courts and before regulatory agencies. The Supreme Court’s recent decision effectively banning affirmative action in college admissions (Students for Fair Admissions v. Harvard) is shaking up this area of the law.
An AI will maximize whatever objective its creator specifies, subject to any constraints the creator programs in. In theory, you can program an AI to maximize achievement benchmarks, such as grades or test scores, or to do so only within the constraints of specified identity-group quotas.
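To make that abstract point concrete, here is a deliberately toy Python sketch of the difference. Everything in it – the names, scores, groups, and the quota parameter – is hypothetical and purely illustrative; real selection models are far more complex.

```python
# Toy illustration (hypothetical data, not any real system): the same
# "selection AI" run identity-blind versus with a group quota constraint.

def select(candidates, n, quota=None):
    """Pick the n highest-scoring candidates.

    If quota is given, e.g. {"a": 1}, reserve that many slots for each
    listed group's top scorers (assumes reserved slots total at most n),
    then fill the remaining slots purely by score.
    """
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    if quota is None:
        return ranked[:n]  # identity-blind: score alone decides

    chosen = []
    for group, slots in quota.items():
        members = [c for c in ranked if c["group"] == group]
        chosen.extend(members[:slots])  # reserved slots, filled by score
    rest = [c for c in ranked if c not in chosen]
    chosen.extend(rest[: n - len(chosen)])  # remaining slots, by score
    return chosen

pool = [
    {"name": "P1", "group": "a", "score": 80},
    {"name": "P2", "group": "b", "score": 91},
    {"name": "P3", "group": "b", "score": 88},
    {"name": "P4", "group": "a", "score": 72},
]
print([c["name"] for c in select(pool, 2)])                  # ['P2', 'P3']
print([c["name"] for c in select(pool, 2, quota={"a": 1})])  # ['P1', 'P2']
```

The objective is identical in both runs; only the constraint changes, and with it who gets selected.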
There will be a major political and societal battle over whether AI must be tuned toward affirmative action or, instead, toward identity-blind merit in human selection processes, such as hiring, employment, government contracts, and school admissions. This battle is crucial because many people implicitly accept the accuracy and legitimacy of a computer’s output. Thus, there is a lot of power to be had in controlling the design of how AIs handle human-selection tasks.
Europe Is More Active
Europe is leading the way in AI regulation, paralleling its lead in online privacy regulation. The EU has a comprehensive and demanding online privacy regulatory regime called the General Data Protection Regulation (GDPR). There is no U.S. federal analog, but some states, including Virginia, have enacted comprehensive online privacy laws.
The EU is still fashioning its AI regulation, so vital details remain undetermined. Its primary feature will be to sort AI uses into tiers of risk and to calibrate regulation to each tier.
The highest tier is unacceptable risk, where AI use will be generally prohibited. For example, the EU would prohibit national social-credit scoring systems, such as the one China uses to repress its residents. AI use in law enforcement also would be tightly controlled.
The next tier, high risk, will include uses such as hiring and employment decisions. These areas will be regulated, but AI use will not be banned.
At the lowest tier, covering uses such as chatbots, regulation will probably be limited to transparency requirements so users can make informed decisions.
What About Job Losses and Saving Humanity?
If you’re keeping score, note that you don’t see incipient government regulation addressing two threats: possible job loss and wage diminution, and existential or enslavement threats to humanity (like in the movies).
Regarding jobs and wages, I have detected no coming regulation. But labor unions are raising AI concerns in collective bargaining. AI usage was a major issue in the recent strike by Hollywood writers.
If AI causes rapid and substantial job losses, some voters may demand that governments tax AI to compensate those affected and to fund job retraining programs. But governments may hesitate to tax AI because of the productivity gains and economic growth it may create.
As for protecting humanity from catastrophic outcomes, you might see some effort at regulation through treaties and national laws. But those won’t be effective against rogue operators. Control against foreign threats probably must come through cyberwarfare and other military tools.
We might eventually treat powerful AI much as the world treats nuclear weapons (think non-proliferation). You could see national laws and treaties regulating the sale and possession of the powerful computers needed to run large AIs. In theory, one also could regulate the size of an AI’s neural net. Yet detecting violations and stopping harmful uses in time would be difficult, especially in rogue countries.
And what do we do if a rogue country goes too far? As fantastical as this may sound, we may see the day when the U.S. government or NATO conducts a military strike against a rogue government or operator to destroy an advanced AI that threatens humankind. This brings us full circle. That’s sort of the plot of The Day the Earth Stood Still (Keanu Reeves remake, not the original).
So, get your popcorn ready. How AI unfolds and is regulated will be fascinating and maybe scary to watch.
Written on October 18, 2023
by John B. Farmer
© 2023 Leading-Edge Law Group, PLC. All rights reserved.