The aggressive effort by major players aims to reshape the narrative as polls show increasing public disapproval of AI
OpenAI made a surprise announcement this week – not an update to ChatGPT or another multibillion-dollar datacenter – but a policy paper that called for a reimagining of the social contract based around “a slate of people-first ideas”. It’s the latest move in an aggressive effort by the major AI players to reshape the narrative around their industry, as polls show public disapproval of AI increasing.
OpenAI’s 13-page paper, titled Industrial Policy for the Intelligence Age, follows its surprise acquisition of tech-friendly podcast TBPN and its announcement of plans to open a Washington DC office that will feature a dedicated space called the OpenAI workshop for non-profits and policymakers to learn about and discuss the company’s technology.
OpenAI’s rival Anthropic has meanwhile announced its own thinktank, the Anthropic Institute, which similarly proclaimed an intention to explore how the growth of AI would disrupt society.
As disruptions from AI become more tangible and calls for greater scrutiny of big tech companies grow louder, the industry appears to be both recognizing the widespread discontent and looking for ways to reframe the debate.
Sam Altman, OpenAI’s CEO, talked about the public perception problems facing AI firms at investment firm BlackRock’s conference in Washington DC last month: “You can see a bunch of potential headwinds. AI is not very popular in the US right now. Datacenters are getting blamed for electricity price hikes, almost every company that does layoffs is blaming AI whether or not it really is about AI,” he said.
Still, the company’s marketing push is not only about burnishing its image. Some experts see the AI firms’ creation of thinktanks and research institutes, paired with millions of dollars in lobbying, as an attempt to undercut independent efforts to regulate the industry.
“The OpenAI paper has a lot of the sounds of wanting more regulatory oversight,” said Sarah Myers West, co-executive director at the non-profit AI Now Institute, which advocates for more public accountability over the AI industry. “But then when you look under the hood, they have lobbied very successfully for an administration that has taken a very aggressive deregulatory stance toward AI.”
OpenAI and Anthropic did not respond to a request for comment.
OpenAI’s paper marks a shift in tone that appears to reflect worries within the company about how its technology is being publicly received. Rather than focusing on how workers can adapt to the new technology to avoid falling out of the labor market, the document talks about “building a resilient society” and asks policymakers to create guardrails for safe AI.