President Joe Biden, in a visit Tuesday to the epicenter of the technology industry, pointed to the dangers of unrestrained social media as he called for federal laws to protect Americans from fast-accelerating new artificial intelligence.
“Social media is already showing us the harm … powerful technology can do without the right safeguards in place,” Biden said before a meeting with AI experts and researchers at the Fairmont Hotel in San Francisco. “That’s why I said at the State of the Union that Congress needs to pass bipartisan privacy legislation to impose strict limits on personal data collection, ban targeted advertising to our children, and require companies to put health and safety first.”
Significant downsides to social media have emerged in recent years, as researchers and news media documented shadowy sales of personal data, online-addiction maladies including eating disorders among children, the spread of divisive and politicized misinformation, and foreign interference in U.S. elections. Critics have pointed to purported failures by government and major technology companies to impose guardrails around social media to prevent damaging fallout. Worries have arisen that government and tech firms will again move too slowly, this time on generative AI, as the cutting-edge technology makes lightning-fast inroads into many aspects of life and commerce.
The San Francisco event — scheduled to include leading AI scientist and Stanford University professor Fei-Fei Li; Tristan Harris, executive director of the Center for Humane Technology; Sal Khan, CEO of free-education non-profit Khan Academy; Jim Steyer, CEO of media-education non-profit Common Sense Media; and others — was to focus on both the opportunities and risks of artificial intelligence, and give the president a chance to speak on his administration’s tech-related policies since he took office. Biden said the meeting was also a chance for him to learn.
“I want to hear directly from the experts — and these are some award-winning experts — on this issue, and the intersection of technology and society, who can provide a range of the broad range of perspectives for us,” Biden said before the meeting, arranged as a panel discussion.
Sitting next to Gov. Gavin Newsom and speaking to media in the hotel’s ornate Gold Room, Biden highlighted the blistering speed of AI advancement. “We’re going to see more technological change in the next 10 years than we’ve seen in the last 50 years, and maybe beyond that,” Biden said. “AI is already driving that change in every part of American life, and often in ways we don’t notice. AI is already making it easier to search the internet, helping us drive to our destinations while avoiding traffic. AI is changing the way we teach, learn and help solve challenges like disease and climate change.”
The event was part of a three-day campaign trip across the state, one dotted with private fundraisers and speaking engagements in the Bay Area.
The White House this week called addressing the effects and future of AI a “top priority” for Biden and pledged “decisive actions we can take over the coming weeks.” The administration’s “Blueprint for an AI Bill of Rights” highlights both “deeply harmful” potential effects and potential “extraordinary benefits.” The document focuses on promoting safe and effective applications of the technology, data privacy, protection from discrimination, and transparency to the public.
Artificial intelligence, for many years a key ingredient in Silicon Valley products and technology development, made an abrupt and startling leap forward late last year with the release of ChatGPT by San Francisco’s OpenAI. The chatbot uses generative AI, a type of AI trained on massive amounts of data scraped from the internet that can produce answers to questions, as well as imagery and sound, in response to a user’s prompts.
The technology has fueled a Silicon Valley gold rush, with investors pouring billions of dollars into startups that build products and provide services for industries from auto-making and software engineering to marketing and customer service — and virtually everything in between. Consulting giant McKinsey in a report last week said that worldwide, generative AI could produce $2.6 trillion to $4.4 trillion in economic benefits annually, when built into 63 industries.
But the speedy technological advance has raised alarms over the potential for the tool to dramatically escalate the effectiveness and scale of misinformation, online scams and computer viruses; to spread biases from its training material into society and the economy; or to wipe out vast numbers of jobs. Some major players in the tech industry have raised the prospect of a so-called super-intelligence, smarter than people, that could threaten humanity’s existence.
Many researchers focus on the more immediate potential harms, while politicians fret that applying the brakes too hard through laws, regulations and policies could allow other countries, particularly China, to jump ahead of the U.S.
Biden on Tuesday said executive actions and funding strategies for responsible AI development would help the nation “lead the way and drive breakthroughs in this critical area … from cybersecurity, to public health, to agriculture, to education, and frankly so much more.” But already the training of generative AI on huge volumes of copyrighted data has spawned lawsuits, drawn scrutiny from Congress, and sparked fears that tech companies could use art, photos, news articles, music, screenplays, books, research papers and code to create lucrative products without compensation for the creators.
The White House said this week that the Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure the development, procurement and use of AI systems are centered on safeguarding the American people’s rights and safety.
“We need to manage the risks to our society, our economy and our national security,” Biden said Tuesday. “My administration is committed to safeguarding American rights and safety, to protecting privacy, to addressing bias and misinformation, to making sure AI systems are safe before they are released.”