Opinion | How to Protect Americans From the Many Growing Threats of AI

Chuck Schumer has promised major action on what could be the most important issue of our lives. He needs to go big. Here’s how.


Just over 10 months ago, the U.S. Senate started considering what the nation should do about the rise of artificial intelligence. I was there, testifying at the Senate hearing that kicked off months of frenzied AI focus on Capitol Hill, alongside OpenAI CEO Sam Altman and IBM’s Christina Montgomery. We answered question after question on how to regulate AI and what was at stake.

The overwhelming, bipartisan sense of the room was that the United States needed to address AI policy, urgently. Yet so far not one major piece of AI legislation has reached the floor.

At the time, Democrats and Republicans agreed that the Senate had been too slow to act on the explosive rise of social media, and the clear consensus was that no one wanted to repeat the same mistake with AI. The desire to do something united politicians from left, right and center: Democrats including Sens. Richard Blumenthal, Amy Klobuchar, Dick Durbin and Cory Booker agreed with Republicans such as Sens. Josh Hawley, Lindsey Graham and John Kennedy.

Everyone also agreed that America needs a legislative approach that fosters innovation rather than stifling it, because the long-term potential of AI for good — by advancing science, medicine and technology — remains strong.

Fast forward 10 months: The House has been consumed by internal struggles and saber-rattling about TikTok; several members have put proposals forward, but nothing has come to a vote. In the Senate, Majority Leader Chuck Schumer identified AI as a priority and spent much of the fall holding closed-door information sessions with an impressively wide range of thinkers, advocates and executives. But as yet, nothing has been brought to the floor.
[Photo: Senate Majority Leader Chuck Schumer takes a question from a reporter outside the West Wing following an event about government regulations on artificial intelligence systems at the White House on Oct. 30, 2023.]

In many ways the situation with AI has gotten worse. Many of the things that I merely speculated about in the hearing last May have already come to pass. Just a week after I warned that AI might be used to manipulate markets, a deepfaked photo of the Pentagon on fire circulated on X, briefly driving down the stock market. In September, a deepfaked video may have been the first to influence the outcome of an election, in Slovakia. Nonconsensual deepfake porn wasn’t even mentioned at the hearing, but by fall, teen girls in New Jersey were targeted with deepfake pornography made by classmates. More recently, Taylor Swift was subjected to the same.

Cybercrime has become more brazen, too. A Hong Kong bank was recently scammed out of $25 million in a series of transactions after a key employee sought approval for the transfers on a video call; in hindsight, every other participant on the call appears to have been a deepfake. Reports of AI-generated voice impersonation for kidnapping scams are increasing, too.

A flood of bogus AI content also threatens to turn whole sections of the Internet into the equivalent of your email spam folder. Amazon is now flooded with fake books. And it has become clear that generative AI has a plagiarism problem. As The New York Times v. OpenAI lawsuit demonstrated, GenAI systems (such as ChatGPT) sometimes reproduce large chunks of someone else’s text and present them as their own work. And as my own research with the artist Reid Southen has shown, image programs produce trademarked characters unbidden, threatening the livelihoods of artists, actors, movie studios and video game studios. Other jobs could be next.

New flaws have been exposed. In one case, AI-generated code routinely hallucinated references to nonexistent software packages, giving bad actors a new vector for introducing malware: anyone who registers one of those invented names can fill it with malicious code that unsuspecting developers then install.
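
The mechanics are simple enough to sketch. Below is a minimal, hypothetical Python example (not from the article; the helper name and the made-up package “fastjsonx” are my own illustrations) of the kind of check a developer could run before trusting a dependency an AI assistant suggests: if the name is not registered on PyPI, anyone could claim it and ship malware under it.

```python
# Minimal, hypothetical sketch: before installing a dependency suggested by an
# AI coding assistant, check whether that name is actually registered on PyPI.
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"  # PyPI's public JSON endpoint
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 means the package does not exist: an attacker could register
        # the hallucinated name and fill it with malicious code.
        return False


if __name__ == "__main__":
    print(package_exists_on_pypi("requests"))   # True: a real, widely used package
    print(package_exists_on_pypi("fastjsonx"))  # hypothetical name an assistant might invent
```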

Other research shows that GenAI models can be covertly racist, suggesting greater prison sentences for people who speak in nonstandard dialects.

Many longer-term risks of AI, though potentially dire, still seem very hypothetical — we have yet to see an AI so clever it can outwit a human and take over a military installation or launch a war. But there can be no doubt that AI is already being used to take advantage of Americans literally every day, and that we have too little in place to defend against these growing threats.

The most ambitious American effort to date is President Joe Biden’s executive order, which has sweep and many interesting ideas, but lacks funding and teeth; it could also easily be cast aside by the next administration. Only Congress can enact permanent, properly funded new AI policy.

So what should Congress do? Schumer has promised an AI report, rumored to be appearing very soon, to set guidelines for new laws, but as the shape of the risk becomes clearer, the bar is actually getting higher.

Perhaps the single greatest policy challenge is that no single law could possibly suffice to contain the risks of AI, because AI itself is not one thing but many, with a vast range of potential uses, many of which probably haven’t been envisioned yet.

We need many different types of laws to combat human criminals — and in a similar way, we will need a layered approach to deal with the broad array of potential misuses of AI. In the course of researching a forthcoming book on channeling AI’s impact in a positive direction, Taming Silicon Valley, I have come to the conclusion that the bare minimum our national AI policy should address is the following. It’s a lot, but anything less will leave our citizens vulnerable.

Data Rights

Generative AI is built on vast oceans of data — essentially everything on the internet, written by you, me and everyone else — almost none of it used with explicit permission from those who created it.

The big AI companies want freedom to use all that data without anyone’s consent, and without compensation, any time. It’s in effect a massive land grab of intellectual property, and we should no more allow this than we should allow realtors to seize physical property. Training data should require consent, compensation and licensing. Nobody should be allowed to use your data for free, without your consent.

We also need strict rules around privacy and personal data. Educational AI companies, for example, should not be allowed to gather data from schoolchildren only to sell it to marketers for microtargeted advertisements. Currently, everything a consumer types into ChatGPT is fair game for OpenAI to use as it pleases. That should not be so.

Privacy rights might sound boring, wonky and very 2010. But the U.S. government has never seriously locked them down, and with AI snarfing up everything you ever did, the need is more urgent than ever.
[Photo: The OpenAI logo is seen on a mobile phone in front of a computer screen displaying the ChatGPT home screen, on March 17, 2023, in Boston.]

Transparency

Big AI companies are within their rights to keep the algorithms behind their most powerful models secret, just as any software maker might, but the public has a right to know far more about the data those models use and the safety testing their systems have been subjected to. There is a healthy debate about whether it’s safer for AI models to be “open” (freely available, as some of the software that powers the internet is), but the least we should insist on, given how much AI models can influence our lives, is the following:

  • Openness about training data and how it has been sourced. What data those models have been trained on matters enormously, for bias, potential copyright infringement and other reasons.
  • Safety testing. Scientists and regulators should be able to determine what internal testing has been done for safety. Ford famously knew that its Pinto gas tanks might explode and kept that secret; dozens of lives were lost.
  • Labeling. Anything that is generated by AI should be labeled as such, in part to address the growing crime wave of impersonation and disinformation.

Liability

The potential negative consequences of Generative AI are immense, from cybercrime to defamation to contaminated elections.

It’s fair to assume that no matter what harms its AI products may cause, the tech industry’s first move will be to try to hide behind Section 230 — the part of the Communications Decency Act that has given social media carte blanche to post almost anything, however toxic or untrue. We can’t repeat that disaster with AI. Any new regulation must make clear that Generative AI developers will be held responsible for the content they produce.

AI Literacy

AI has moved fast; education has not kept pace. Hardly anyone in the general public has been trained on either the risks or the benefits of AI. If we don’t fix that quickly, Americans will be left behind, exploited rather than forearmed. We need to mandate — and fund — AI literacy in schools, colleges and among adults.

This includes requiring AI developers to contribute to the development of AI-related curricula and even to pay for public service ads that address the risks of their products.

Layered Oversight

Mile for mile, commercial airlines are exceptionally safe, in large part because the regulation of airlines happens at multiple layers — from the design of aircraft to rules about maintenance to regulations about how crashes are to be investigated, with coordination between multiple bodies.

Strong governance of AI will require something analogous: coordinated, layered oversight. We could wait for dozens of accidents to unfold before building the same structures around AI, but waiting is dangerous: Given the speed of automation, incidents may spread rapidly and cause considerable damage. Anything we can do to anticipate harms proactively rather than only retroactively would be greatly valuable.

A layered oversight system would include:

Independent Oversight. Companies obviously cannot be trusted to govern themselves. Congress should have the wisdom to realize the degree to which big companies try to set the rules for everyone else. To avoid regulatory capture, we need independent scientists in the loop at every stage of decision making about what to do with AI.

A U.S. Agency for AI. We have cabinet-level agencies for major sectors, like Agriculture, Commerce and Defense, and numerous independent agencies to tackle more specialized realms, like the FCC and CFPB. Technology has never had an agency of its own, of either kind: For better or worse, it has always been treated like a tool, not an industry. That hasn’t worked out well with social media. And AI is a tipping point — a potentially revolutionary new force that could eventually occupy a huge chunk of our economy and have an enormous impact.

We need a full-time agency with appropriate expertise dedicated to understanding both risks and opportunities. And it has to be dynamic and empowered, to move as quickly as technology moves. Congress is too slow to legislate GPT-5 differently from GPT-4, but if GPT-5 raises a new set of issues, the agency needs to be able to act decisively. Such an agency would bring together many of the themes above, convening outside scientists and ethicists, setting up processes for licensing and auditing and so on.

A Global AI Agency. In the long run, the only safe AI world is one in which all countries work together to ensure the safety of AI. Other countries and the EU have been moving forward with their own AI policies, but Europe should not become the sole regulator by default. The U.S., too, should play a leading role in developing global coordination around AI policy. Forthcoming U.S. law should take symbolic steps, allocating funding toward building bridges with other nations, with the long-term goal of a treaty in which the U.S. has a firm hand.

Incentivizing AI for Good

 AI has a lot of positive potential for helping humanity leverage its talents and solve problems, but also great potential for causing harm. A smart AI policy should tilt the scales toward the former.

As Stanford economist Erik Brynjolfsson has argued, tax incentives should support companies that create jobs rather than companies that destroy them, and should discourage companies from seizing intellectual property without compensation. The incentives for AI need to tilt the same way.

Looking further ahead, if a large part of the economy flows to a few companies and leaves many citizens without jobs, we will eventually need to consider wealth-redistribution approaches such as a Universal Basic Income, perhaps in the not-too-distant future.

Research into Trustworthy AI

The hardest thing for many to realize is that as good as current AI is, it’s not nearly good enough, and perhaps not even on the right track.

Generative AI systems have become notorious for their “hallucinations” and unreliability; we may need entirely new approaches to get to AI that we can trust. But today, our collective research agenda is being set almost entirely by big companies that seem perfectly content to ship AI solutions that are far from trustworthy. (Whether their customers will stay satisfied with that remains to be seen.)

More broadly, we should strive toward making AI a public good, with trustworthiness as the highest goal. A federally funded Manhattan Project focused on research in trustworthy AI for the public good, available to all and aligned with human interests, could be transformative — and ultimately have immense payoff.

All of this is a tall order. Without a fully articulated and appropriately funded federal AI policy from Congress, ever more power will accrue to a small number of massive tech companies, and citizens will be sitting ducks. Governments themselves will lose power to the AI companies, just as they lost power to social media platforms; we will wind up in what Ian Bremmer has called a “technopolar world” in which unelected tech companies rather than governments hold much of the real power.

The choices Congress makes (or fails to make) around AI policy in the coming months will likely have a massive impact on the coming decades. If Schumer and others shy away from what needs to be done, the sad history of social media may repeat itself, but this time at an even larger scale, with even greater consequences.

