This week, OpenAI released an unexpected policy document advocating a revised social framework centered on human-focused initiatives. The move is part of a broader push by leading AI companies to reshape perceptions of their sector amid rising public skepticism reflected in surveys. The 13-page report, titled "Industrial Policy for the Intelligence Age," follows OpenAI's acquisition of a technology-oriented podcast and its plans to open a Washington, D.C. office with a dedicated space where nonprofits and officials can explore the firm's innovations. Competitor Anthropic has launched its own research organization, the Anthropic Institute, to examine AI's societal effects.

As AI's disruptions become more visible and calls for stricter oversight of tech giants intensify, the industry appears to be acknowledging public unease and trying to redirect the conversation. At a BlackRock event in Washington, D.C. last month, OpenAI CEO Sam Altman addressed these concerns, citing potential obstacles such as AI's low popularity in the U.S., data centers being blamed for rising energy costs, and companies attributing job cuts to AI regardless of accuracy.

Beyond image enhancement, these efforts, including think tanks and substantial lobbying investments, may be intended to blunt independent regulatory pushes, according to analysts. Sarah Myers West, co-executive director of the AI Now Institute, which advocates for greater AI accountability, observed that while the OpenAI document appears to support more oversight, the company has in practice pushed a deregulatory agenda. Neither OpenAI nor Anthropic provided comment.

The report marks a change in approach, emphasizing societal resilience and urging safeguards for safe AI development over mere worker adaptation. It floats ideas such as a reduced workweek and a fund to distribute profits to the public, akin to universal basic income.
The document frames these proposals as conversation starters for ensuring AI benefits everyone, warning that without updated policies, support systems may fall behind technological advances. Critics view it as a publicity tactic that shifts responsibility to governments and society, portraying AI dominance as inevitable rather than something to be regulated. Myers West noted that the paper outlines welfare objectives without committing any company resources.

Meanwhile, the firm lobbies for lighter rules and opposes state-level controls. Caitriona Fitzgerald of the Electronic Privacy Information Center warned that delaying federal action allows unchecked growth, which aligns with industry preferences. OpenAI spent nearly $3 million on lobbying in 2025, and its president co-founded a pro-AI political action committee that raised over $125 million last year. The group has run advertisements in New York opposing a congressional candidate who supports AI regulations.


