Here’s how the US and China can see eye-to-eye on AI regulation

Earlier this month, OpenAI released its most advanced models yet, saying they had the ability to “reason” and solve complex math and coding problems. The leading AI startup, valued at some $150 billion, also acknowledged that the models raise the risk that artificial intelligence (AI) could be misused to create biological weapons.

You would think such a consequential risk would set off alarm bells that stricter oversight of AI is critical. But despite almost two years of existential warnings from industry leaders, academics and other experts about the technology’s potential to wreak havoc, the US hasn’t enacted any federal regulation.

A chorus of voices inside and outside the tech industry dismisses doomsday warnings as distractions from AI’s more near-term harms, such as potential copyright infringement, deepfakes and misinformation, or job displacement. But lawmakers have done little to address those harms either.

One core argument levelled against regulation is that it will impede innovation and could result in the US losing the AI race to China. But China has been rapidly advancing in spite of heavy oversight—and all-out US efforts to block it from accessing critical components and equipment. 

US export controls have hampered China’s progress, but one area where China leads the US is in setting standards for how the most sweeping technology of our time can be created and used.

China’s autocratic regime finds it easy to impose tough rules, however suffocating they may seem for its tech industry. Beijing has different motives, including preserving social stability and party power, but it also sees AI as a priority and is working with the private sector to boost innovation under its supervision.

Despite political differences, there are lessons the US can learn. For starters, China is tackling near-term concerns through a combination of new laws and court precedents.

Its cyber regulators rolled out rules on deepfakes in 2022, protecting victims whose likenesses were used without consent and requiring labels on digitally altered content.

Chinese courts have also set standards on how AI tools can be used, issuing rulings that protect artists from copyright infringement and voice actors from exploitation. 

Broader interim rules on generative AI require developers to share details with the government on how their algorithms are trained and to pass stringent safety tests, including checks for alignment with socialist values. But regulators have also shown balance, rolling back some daunting requirements after feedback from China’s AI industry.

This is in stark contrast to efforts in the US. Lawsuits over current AI harms are making their way through the courts, but the absence of federal action is glaring. The lack of guidelines also creates uncertainty for business leaders.

US regulators could take a leaf out of China’s playbook and enact narrowly targeted laws focused on known risks while working more closely with the industry on guardrails for far-off threats.

In the absence of federal regulation, some states are taking matters into their own hands. California lawmakers okayed an AI safety bill that would hold companies liable if their tools are used to cause “severe harm,” such as unleashing a biological weapon.

Many tech companies, including OpenAI, have opposed the bill, arguing that such legislation should be left to the US Congress. An open letter from AI entrepreneurs and researchers also said that the bill would be “catastrophic” for innovation and would let “places like China take the lead in development of this powerful tool.”

Such loud voices have long used this line of argument to fend off regulation. And it’s worrying that the US can’t seem to agree on laws to prevent worst-case AI scenarios, let alone address the more immediate harms.

China should not be cited as an excuse to avoid meaningful oversight. Approaching AI safety as a zero-sum game between the US and China leaves no winners. Mutual suspicion and geopolitical tensions mean we won’t likely see the two working together to mitigate AI risks anytime soon. But it doesn’t have to be this way.

Some of the most vocal proponents of regulation are pioneers who helped create AI. A few so-called AI godfathers, including Turing Award winners Yoshua Bengio, Geoffrey Hinton and Andrew Yao, sat down earlier this month in Italy and called for global cooperation across jurisdictions.

They acknowledged the geopolitical climate, but also warned that loss of control over AI or its malicious use could “lead to catastrophic outcomes for all of humanity.” They offered a framework for a global system of governance.

Skeptics argue that they may be wrong, but the risks seem too high to write off. Policymakers from Washington to Beijing should learn from scientists who have at least shown it is possible to find common ground.
