Why AI-Driven Chatbots Must Be Regulated

Part III

Regulation Can Stand Between Anarchy and Order

Surprisingly, the clamour for regulation is led not so much by consumers as by the tech companies that are the creators and purveyors of these chatbots.

When Senator Richard Blumenthal opened the hearing of the Senate Judiciary Subcommittee, he began by playing an audio clip of "introductory remarks" delivered in a computer-generated voice that sounded just like his own. He revealed that the audio had been made with voice-cloning software, with words written by ChatGPT, the AI chatbot created by Sam Altman's company, OpenAI.

"That voice was not mine. The words were not mine," Blumenthal said.

Blumenthal asked ChatGPT why it picked the themes in the introductory remarks. ChatGPT is supposed to have replied: "Blumenthal has a strong record in advocating for consumer protection and civil rights. He has been vocal about issues such as data privacy and the potential for discrimination in algorithmic decision making. Therefore, the statement emphasizes these aspects."

Blumenthal jokingly thanked Altman for the "endorsement" of his stand, but then warned in earnest about the dangers of generative AI technology.

A more fitting example of the methodology and power of ChatGPT, and of the deceptions it can create, would be difficult to find.

Please Regulate Us!

As Altman began his testimony, he surprised everyone by urging the government to regulate. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said.

Altman is not alone in voicing this concern. Many of his peers share this apprehension of misuse and support regulating AI in a way that would maximise the benefits of this transformative technology while minimising its harms. One suggestion was for the government to consider licensing and testing requirements for the development and release of AI models. The government might also mandate independent audits of companies like OpenAI.

“For a very new technology we need a new framework,” Altman said.

Altman's sentiments, whether contrived or spontaneous, found an echo in a statement by Microsoft's president, Brad Smith, which followed soon after.

Smith wants governments to move quickly, and proposes measures such as requiring an emergency brake for A.I. systems used in critical infrastructure and licences for creating "highly capable" A.I. models.

He also acknowledged that A.I. developers need to show restraint in creating new products with potentially broad, and negative, social consequences, and said that Microsoft was not trying to pass the buck to government regulators. "There is not an iota of abdication of responsibility," he said.

Today almost all top AI executives, including Altman, Smith and Sundar Pichai, have asked governments to work with them to create effective rules; in other words, some kind of government-mandated regulation.

This may be the latest sign that the tech industry is betting on outreach to regulators as the best way to head off more onerous regulation, though it is unclear whether governments will craft the rules these companies would like.

Safeguards: Inadequate, if Not Absent

Who should be liable when AI-generated applications cause destruction? Legally, the creators, the platforms and the consumers constitute distinct entities, and no one of them alone can be held accountable for adverse consequences arising out of AI-generated content.

In the US, the template for most regulations, a string of challenges to Section 230, the law that shields online platforms from liability for user-generated content, has failed over the last several weeks. Most recently, the Supreme Court declined to review a suit about exploitative content on Reddit. But the debate over what responsibility tech companies bear for harmful content is far from settled, and generative artificial-intelligence tools like the ChatGPT chatbot could open a new line of questions.

Should Section 230 apply to generative A.I.? Opinions are divided. The law's own authors, Senator Ron Wyden, a Democrat, and Chris Cox, a former Republican representative, together representing a broad bipartisan view, argue that it should not. "We set out to protect hosting. Platforms are immune only to suits about material created by others, not their own work," says Wyden. Cox echoes: "If you are partly complicit in content creation, you don't get the shield." Yet these distinctions, which once seemed simple, are already becoming more difficult to make.

And what of search engines, which also create content? Typically, search engines are considered vehicles for information rather than content creators, and search companies have benefited from Section 230 protection. Chatbots generate content, and they most likely fall outside that protection. But tech giants like Microsoft and Google are integrating chat and search, complicating matters. "If some search engines start to look more like chat output, the lines will be blurred," Wyden said.

The legal position is just as nebulous elsewhere. Europe is no clearer; and in India, another huge consumer of these applications and one extremely vulnerable to their pernicious after-effects, even the contours of the debate are hazy and confused.

A Deadly Recipe

Things do not seem to improve with time. Generative A.I. tools have already been used to make intentionally harmful content. And hallucinations, the falsehoods that generative A.I. tools create (like court cases that never existed), are a significant problem. If a user prompts an A.I. for cocktail instructions and it offers a poisonous concoction, the algorithm operator's liability is obvious, said Eric Goldman, a law professor at Santa Clara University and a Section 230 expert.

Goldman fears that anger over immunity for social media platforms threatens nuanced debate about the next generation of tech development. The challenge is complex, even intractable, as most emerging situations are not clear-cut.

"The blossoming of A.I. comes at one of the most precarious times, amid a maturing tech backlash," Goldman said. "We need some kind of immunity for people who make the tools," he added. "Without it, we're never going to see the full potential of A.I."

Regulation Is Not a Bad Word

Regulation is made out to be a bad word. But life cannot continue without regulation. We exist because we regulate ourselves. Nature regulates, and so does organised life. One may prefer to call it discipline, but that is merely semantics. Without discipline, the resultant chaos and confusion will imperil our very existence, and certainly impede all progress.

The choice before humanity is clear. Experimentation is an essential part of human nature. The instinct to breach new frontiers defines the human mind. Yet repeating mistakes and inviting disasters are aspects of that same mind. Over time, humanity has always retraced its steps from the brink of complete annihilation. That same mind must now heed the reasoning that compels it to learn from past mistakes.

We must curb and control the occasional impulses and tendencies that may lead us down the path of existential destruction and into disquieting, chaotic disorder.

AI is value neutral. It is up to us to harness its potential in creative ways and not permit it to be usurped by a demonic intelligentsia or by demagogues for short-term gains and perniciously contrived ends. The liberalism that drives us to creativity and freedom must be overwhelmingly positive and broadly beneficial. And it must be disciplined.

Regulations can never be perfect. They shall always be a compromise, a well-intentioned compromise between the intrinsic human urge to create and the need to rein in that urge when it goes wayward. Regulation is a necessary evil, if you please. And it is always a work in progress, ever evolving and responding to the transient developments affecting us.

The logic that humans do not know what is good for them is delusional and self-defeating. These new technologies must work for the common good. A technological marvel that tends to destroy cannot seek validation and justification through arguments of generous liberalism. It must be contained.

AI is one such technology.

Published by udaykumarvarma9834

Uday Kumar Varma, a Harvard-educated civil servant and former Secretary to Government of India, with over forty years of public service at the highest levels of government, has extensive knowledge, experience and expertise in the fields of media and entertainment, corporate affairs, administrative law and industrial and labour reform. He has served on the Central Administrative Tribunal and also briefly as Secretary General of ASSOCHAM.
