Public Comment Invited on Artificial Intelligence Action Plan
The White House, February 25, 2025
WASHINGTON, D.C. – President Trump’s recent Artificial Intelligence (AI) Executive Order shows that this Administration is dedicated to America’s global leadership in AI technology innovation. This Order directed the development of an AI Action Plan to sustain and enhance America’s global AI dominance. Today, the American people are encouraged to share their policy ideas for the AI Action Plan by responding to a Request for Information (RFI), available on the Federal Register’s website through March 15.
“The Trump Administration is committed to ensuring the United States is the undeniable leader in AI technology. This AI Action Plan is the first step in securing and advancing American AI dominance, and we look forward to incorporating the public’s comments and innovative ideas,” said Lynne Parker, Principal Deputy Director of the Office of Science and Technology Policy (OSTP).
The AI Action Plan will define priority policy actions to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation. With the right governmental policies, continued U.S. AI leadership will promote human flourishing, economic competitiveness, and national security.
Today’s RFI from OSTP seeks input from interested public parties, including academia, industry groups, private sector organizations, state, local and tribal governments, and others on actions that should be included in the AI Action Plan.
Comments can be submitted online and will be accepted until 11:59PM on March 15, 2025.
Please click here for submission information.
March 13, 2025
To: Faisal D'Souza, NCO
Office of Science and Technology Policy
2415 Eisenhower Avenue
Alexandria, VA 22314

From: Christopher Lehane
OpenAI
1455 3rd Street
San Francisco, CA 94158
This document is approved for public dissemination. The document contains no business-proprietary or confidential
information. Document contents may be reused by the government in developing the AI Action Plan and associated
documents without attribution.
“It is the policy of the United States to sustain and enhance America’s global AI dominance in
order to promote human flourishing, economic competitiveness, and national security”
– President Donald J. Trump, Executive Order 14179, January 23, 2025
OpenAI respectfully submits the enclosed proposals to the Office of Science and
Technology Policy as it weighs a new AI Action Plan that will, as Vice President Vance
stated recently at the Paris AI Action Summit, maintain American leadership in AI and
“make people more productive, more prosperous, and more free.” As America’s
world-leading AI sector approaches artificial general intelligence (AGI), with a Chinese
Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s
new AI Action Plan can ensure that American-led AI built on democratic principles
continues to prevail over CCP-built autocratic, authoritarian AI.
OpenAI agrees with the Trump Administration that AI creates prosperity and freedom worth
fighting for—especially for younger generations whose future will be shaped by how this
Administration approaches AI. Globally, most ChatGPT users are under age 35; in the US,
about one third are ages 18 to 24.[1] Both young people and their parents recognize the
economic opportunities AI presents:
● More than seven in 10 parents in the US believe children today will be worse off
financially than they are.[2]
● Nine in 10 US parents think it's important that their kids learn how to use artificial
intelligence for their future jobs—and eight in 10 say either that isn’t happening
today, or they don’t know if it is.[3]
● Three in four college-age AI users want to use AI in their education and careers.
Many are teaching themselves and their friends about AI without waiting for their
schools to provide formal AI education.[4]
[1] Self-reported among logged-in users
[2] Pew Research Center: Views of children’s financial future, Jan. 2025
[3] Morning Consult survey commissioned by Samsung: 88% of US Parents of Gen Alpha & Gen Z Students Say AI Will Be Crucial to Their Child’s Future Success, Sept. 2024
[4] OpenAI: Building an AI-Ready Workforce: A Look at ChatGPT Adoption in the US, Feb. 2025
In particular, AI could drive significant increased productivity over the next decade. Here’s
how we can realize this heightened prosperity and greater freedom together.
From walking to the domesticated horse, the wheel, steam power, the car, the plane—we scaled the
freedom of mobility. From daylight to candle and lamplight, to electricity providing light and
power at all hours—we scaled the freedom to produce, think and create. From word of
mouth to the stylus and tablet, to the printing press, telegraph, phone, computer,
smartphone—we scaled freedom of learning and knowledge. Now, as we approach AGI,
innovation is poised to scale human ingenuity itself—the sum of our freedoms to learn and
know, think, create and produce.
As our CEO Sam Altman has written, we are at the doorstep of the next leap in prosperity:
the Intelligence Age. But we must ensure that people have freedom of intelligence, by
which we mean the freedom to access and benefit from AGI, protected from both autocratic
powers that would take people’s freedoms away, and layers of laws and bureaucracy that
would prevent our realizing them.
More than 400 million people around the world are using ChatGPT to ideate, discover, and
break through beyond what we’re currently capable of doing on our own. Just two weeks
ago, we partnered with the Department of Energy’s national labs to bring together 1,500
scientists to use our tools to take scientific discovery farther, faster.
Our work at OpenAI also suggests that as AI advances, progress accelerates and becomes
increasingly affordable, as reflected in these three scaling principles:
1. The intelligence of an AI model roughly equals the log of the resources used to train and
run it. Until recently, scaling progress has primarily come from training compute and data,
but we have shown how to make intelligence scale from inference compute, as well. The
scaling laws that predict these gains are incredibly precise over many orders of magnitude,
so investing more in AI will continue to make it better and more capable. We believe that
the socioeconomic value of linearly increasing intelligence is super-exponential in nature.
2. The cost to use a given level of AI capability falls by about 10x every 12 months, and
lower prices lead to much more use. We saw this in the change in token cost between
GPT-4 in early 2023 and GPT-4o in mid-2024, where the price per token dropped about
150x in that time period. Moore’s Law predicted that the number of transistors on a
microchip would double roughly every two years; the decrease in the cost of using AI is
even more dramatic.
3. The amount of calendar time it takes to improve an AI model keeps decreasing. AI
models are catching up with human intelligence at an increasing rate. The typical time it
takes for a computer to beat humans at a given benchmark has fallen from 20 years after
the benchmark was introduced, to five years, and now to one to two years—and we see no
reason why those advancements will stop in the near future.
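The cost-decline claim in principle 2 can be checked with simple arithmetic. The following is an illustrative sketch: the 150x drop and the rough early-2023 to mid-2024 window are the figures quoted above; the annualization math and the Moore's Law comparison rate are our own working.

```python
# Back-of-the-envelope check of the cost-decline claim (illustrative only; the
# 150x figure and ~18-month window come from the text above, the annualization
# is ours).
total_drop = 150.0   # claimed fall in price per token, GPT-4 -> GPT-4o
years = 1.5          # roughly early 2023 to mid-2024

# Implied per-year cost reduction factor, assuming a steady exponential decline.
annual_drop = total_drop ** (1 / years)

# Moore's Law for comparison: transistor counts double roughly every 2 years,
# i.e. about a 1.41x improvement per year.
moore_annual = 2 ** (1 / 2)

print(f"Implied annual AI cost reduction: ~{annual_drop:.0f}x per year")
print(f"Moore's Law pace:                 ~{moore_annual:.2f}x per year")
```

At roughly 28x per year, the quoted GPT-4 to GPT-4o drop outpaces even the 10x-per-12-months rule of thumb stated above, which is the contrast the text draws with Moore's Law.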
By scaling human ingenuity ever faster and more affordably, AGI will create a flywheel of
more freedom leading to more productivity, prosperity, and yet more innovation—letting us
once again focus on positive-sum growth.
We want this future to be shaped by the democratic principles America has always stood for. As OpenAI
recently laid out in our Economic Blueprint, we believe these principles include:
● A free market promoting free and fair competition that drives innovation.
● Freedom for developers and users to work with and direct our tools as they see fit,
in exchange for following clear, common-sense technical standards that help keep
AI safe for everyone, and being held accountable when they don’t.
● Preventing governments from using AI tools to amass power and control their citizens, or to threaten or coerce other states.
In advancing democratic AI, America is competing with a CCP determined to become the
global leader by 2030. That’s why the recent release of DeepSeek’s R1 model is so
noteworthy—not because of its capabilities (R1’s reasoning capabilities, albeit impressive,
are at best on par with several US models), but as a gauge of the state of this competition.
As with Huawei, there is significant risk in building on top of DeepSeek models in critical
infrastructure and other high-risk use cases given the potential that DeepSeek could be
compelled by the CCP to manipulate its models to cause harm. And because DeepSeek is
simultaneously state-subsidized, state-controlled, and freely available, the cost to its users
is their privacy and security, as DeepSeek faces requirements under Chinese law to
comply with demands for user data and uses it to train more capable systems for the CCP’s
use. Their models also more willingly generate how-to’s for illicit and harmful activities such
as identity fraud and intellectual property theft, a reflection of how the CCP views violations
of American IP rights as a feature, not a flaw.
Today, CCP-controlled China has a number of strategic advantages, including:
● As an authoritarian state, its ability to quickly marshal resources—data, energy,
technical talent, and the enormous sums needed to build out its own domestic chip
development capacity.
● Its preexisting Belt and Road initiative. As with Huawei, the PRC will scale the
adoption of PRC-based AI systems like DeepSeek’s by coercing countries needing
AI tools and nation-building infrastructure funds.
● Its ability to benefit from regulatory arbitrage being created by individual American
states seeking to pass their own industry-wide laws, some of which are modeled
on the European Union’s regulation of AI. These laws are easier to enforce with
domestic AI companies than PRC-based companies and could impose
burdensome compliance requirements that may hinder our economic
competitiveness and undermine our national security. They also may weaken the
quality and level of training data available to American entrepreneurs and the
usefulness for downstream consumers and businesses.
● Its ability to benefit from copyright arbitrage being created by democratic nations
that do not clearly protect AI training by statute, like the US, or that reduce the
amount of training data through an opt-out regime for copyright holders, like the
EU. The PRC is unlikely to respect the IP regimes of any such nations for the
training of its AI systems, but already likely has access to all the same data. This puts
American AI labs at a comparative disadvantage while providing little in the way of
protection for the original IP creators.
While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and
is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led
AI, securing both American leadership on AI and a brighter future for all Americans.
This submission proposes ways to build on America’s lead on AI and, in so doing, unlock economic growth, lock in American competitiveness, and
protect our national security. Specifically, we detail:
A regulatory strategy that ensures the freedom to innovate: For innovation to truly create
new freedoms, America’s builders, developers, and entrepreneurs—our nation’s greatest
competitive advantage—must first have the freedom to innovate in the national interest. We
propose a holistic approach that enables voluntary partnership between the federal
government and the private sector, and neutralizes potential PRC benefit from American AI
companies having to comply with overly burdensome state laws.
An export control strategy that exports democratic AI: For countries seeking access to
American AI, we propose a strategy that would apply a commercial growth lens—both Total
and Serviceable Addressable Markets—to proactively promote the global adoption of
American AI systems and with them, the freedoms they create. At the same time, the
strategy would use export controls to protect America’s AI lead, including by making
updates to the AI diffusion rule.
A copyright strategy that promotes the freedom to learn: America’s robust, balanced
intellectual property system has long been key to our global leadership on innovation. We
propose a copyright strategy that would extend the system’s role into the Intelligence Age
by protecting the rights and interests of content creators while also protecting America’s AI
leadership and national security. The federal government can both secure Americans’
freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving
American AI models’ ability to learn from copyrighted material.
A strategy to seize the infrastructure opportunity to drive growth: Sustaining America’s lead
on AI means building the necessary infrastructure to compete with the PRC and its
commandeered resources. We propose policies to seize this unmissable opportunity to
catalyze a reindustrialization across our country, creating and supporting hundreds of
thousands of jobs, boosting local economies, modernizing our energy grid, and preparing
an AI-ready workforce—the key pillar of any country’s AI infrastructure.
An ambitious government adoption strategy: Advancing democratic AI around the world
starts with ensuring that the US government itself sets an example of governments using AI
to keep their people safe, prosperous, and free. With the PRC progressing toward
ambitious targets for AI adoption across its public administration, security, and military, the
US government should modernize its processes to safely deploy frontier AI tools at the
pace of the private sector and with the efficiency Americans deserve.
America always succeeds when it bets on American ingenuity. The enclosed policy
proposals are either derived from, or in the case of copyright represent updates to OpenAI's
Economic Blueprint, and we look forward to discussing them with you.
Chris Lehane
Vice President, Global Affairs
We propose a framework for voluntary partnership between the federal government and the private sector to protect and strengthen American national
security. This framework would extend the tradition of government receiving learnings
and access, where appropriate, in exchange for providing the private sector relief from
the 781 and counting proposed AI-related bills already introduced this year in US states.
This patchwork of regulations risks bogging down innovation and, in the case of AI,
undermining America’s leadership position.
Overseen by the US Department of Commerce and in coordination with the AI Czar,
perhaps by reimagining the US AI Safety Institute, this effort would provide domestic AI
companies with a single, efficient “front door” to the federal government that would
coordinate expertise across the entire national security and economic competitiveness
communities.[5]
This targeted framework would empower the federal government to:
● Work with both large AI companies and start-ups on a purely voluntary and
optional basis to stay informed about AI risks as well as cutting-edge capabilities
that support US national interests, including by establishing sandbox and testing
capabilities on the secure premises of federal agencies.
● Evaluate the state of American AI technology against the technology of
competitors and adversaries, including evaluating foreign models for the potential
for back doors or malign influence.
● Coordinate the development of technical standards for evaluating and
safeguarding frontier models from national security risks.
● Provide American AI companies with the tools and classified threat intelligence to
mitigate national security risks that are exacerbated by frontier models (e.g.,
cyber, CBRN) and posed by nation-state actors (e.g., economic espionage by
China).
● Incentivize companies to take part in this voluntary initiative by creating glide
paths for them to contract with the government, including on national security
projects; creating strong protections for any company information shared during
these partnerships; and reducing barriers to companies' internal work related to
national security domains.
● Guarantee that state-based legislation does not undermine America’s innovation
lead on AI. Create a sandbox for American start-ups, and provide participating
companies with liability protections including preemption from state-based
regulations that focus on frontier model security (e.g., CA SB 1047). This will help
keep the US public and private sectors competitive by allowing AI companies of
all sizes to pursue bleeding-edge AI technology free from the regulatory
uncertainty created by some state-based liability regimes.
[5] Federal preemption over existing or prospective state laws will require an act of Congress.
America’s export control strategy should not only restrict the flow of advanced technologies to the PRC—it should ensure that America is “winning diffusion”, i.e., that
as much of the world as possible is aligned to democratic values and building on
democratic infrastructure. To that end, we propose that the US government consider the
Total Addressable Market (TAM), i.e., the entire world less the PRC and its few allies,
against the Serviceable Addressable Market (SAM), i.e., those countries who prefer to
build AI on democratic rails, and help as many of the latter as possible commit to doing
so, including by formally committing to deploy AI in line with democratic principles set
out by the US government.
In particular, we propose maintaining the AI diffusion rule’s three-tiered framework to
differentiate among countries in the global AI market, but with some key modifications
that expand the number of countries in Tier I:
Tier I: Countries that commit to democratic AI principles by deploying AI systems in ways
that promote more freedoms for their citizens could be considered Tier I countries.
Tier II: Limited to those countries that have a history of failing to prevent
export-controlled chips and other US-developed IP from being diverted into, or used by,
Tier III countries. These countries would be encouraged and supported to obtain Tier I
status over time; and would be subject to more stringent security requirements in the
interim.
Tier III: CCP-led China, along with a small cohort of countries aligned with the CCP,
would represent its own category that is prohibited from accessing democratic AI
systems.
This strategy would encourage global adoption of democratic AI principles, promoting the
use of democratic AI systems while protecting US advantage. Making sure that
open-sourced models are readily available to developers in these countries also will
strengthen our advantage. We believe the question of whether AI should be open or
closed source is a false choice—we need both, and they can work in a complementary
way that encourages the building of AI on American rails.
Tier I countries should include American allies, as well as those countries that are
committed to democratic AI principles and that present a relatively low risk that American
AI infrastructure (e.g., chips) will be diverted to non-Tier I countries. The commercial
diplomacy strategy in Tier I should recognize these countries’ strong history of export
and customs control compliance and seek to maximally expand democratic AI systems’
market share, while at the same time protecting those systems from IP theft by the PRC
and other malign actors (e.g., the theft of model weights and/or chip designs,
unauthorized influence or access to data center operations).
To expand market share in Tier I countries, American commercial diplomacy policy
should:
● Encourage cross-border capital flows and promote software frameworks that are
optimized for domestic chip design.
● Coordinate global bans on CCP-aligned AI infrastructure, including Huawei chips.
● Continue to represent American company interests in safety and security
standards bodies, and encourage global regulators to adopt pro-growth safety and
security policies.
● Revise the existing export control rules to eliminate country caps on compute.
● Maintain existing export license exceptions (e.g., license exception ACM) that
enable exports of technology and software for technical collaboration with allies
and preservation of economically critical supply chains.
To protect the US-developed IP needed to operate data centers in Tier I countries,
security requirements could include:
● Prohibiting relationships with Tier III countries’ foreign military and intelligence
services, and the use of data centers to support military/intelligence missions for
Tier III nations or human rights violators.
● Banning the use of PRC-produced equipment (e.g., Huawei Ascend chips) and
models that violate user privacy and create security risks such as the risk of IP
theft.
● Maintaining corporate control by entities headquartered in Tier I countries.
● Implementing—and constantly modernizing—cybersecurity, model weight security,
and personnel security controls that ideally are globally synced and coordinated
among Tier I governments.
Controls on model weights—if any—should strike a balance between protecting
American-developed IP and promoting the deployment of American-developed models
over those developed by Tier III countries, including the PRC.
Tier II countries should include those with a history of failing to prevent
export-controlled chips and other US-developed IP from being diverted into, or used by
Tier III countries. Here, the commercial diplomacy strategy should still seek to expand
US market share, but should do so more carefully, including by levying stronger controls
on the export of US-developed AI infrastructure. At the same time, the strategy should
provide transparent pathways for Tier II countries to reach Tier I status by adopting
democratic AI principles and more effectively managing risks of chip diversion.
To expand American market share in Tier II countries, in addition to the steps above, the
commercial diplomacy policy could be designed to leverage commercial interest in
American-led AI to encourage investment in the US, strengthen in-country security
procedures, and encourage more countries to build on American rails:
● Establish a transparent process to evaluate countries’ readiness to transition from
Tier II to Tier I.
● Support countries’ transition from Tier II to Tier I by helping Tier II governments
strengthen their in-country security programs.
● Encourage greater economic interdependence between the US and Tier II
countries.
● Incentivize public-private partnerships to rapidly mature, scale, and commercialize
hardware-enabled mechanisms that could enhance in-country security controls in
the future.
To protect the American-developed IP needed to operate data centers in Tier II countries,
and to manage both the heightened risk of IP theft and the additional risk that
export-controlled chips might be diverted from Tier II into Tier III countries, the
commercial diplomacy policy also could:
● Allow the export of advanced AI chips to an end-user located in a Tier II country
that meets Tier I security requirements, and that puts in place additional corporate
governance controls as well as technology-enhanced protections (e.g.,
hardware-enabled mechanisms) against the diversion of export-controlled chips.
Tier III countries—including the PRC and any other country subject to a US arms
embargo—should continue to be subject to strict export controls of AI systems, including
existing export controls on advanced chips. The strategy could also expand established
controls, for example, to include advanced chips that are required for large-scale
inference and RL training and the components used to manufacture advanced AI chips
and data centers.
America’s longstanding fair use doctrine protects transformative uses of existing works, ensuring that innovators have a balanced and
predictable framework for experimentation and entrepreneurship. This approach has
underpinned American success through earlier phases of technological progress and is
even more critical to continued American leadership on AI in the wake of recent events in
the PRC. OpenAI’s models are trained to not replicate works for consumption by the
public. Instead, they learn from the works and extract patterns, linguistic structures, and
contextual insights. This means our AI model training aligns with the core objectives of
copyright and the fair use doctrine, using existing works to create something wholly new
and different without eroding the commercial value of those existing works.
America has so many AI startups, attracts so much investment, and has made so many
research breakthroughs largely because the fair use doctrine promotes AI development.
In other markets, rigid copyright rules are repressing innovation and investment.
The European Union, for one, has created “text and data mining exceptions” with broadly
applicable “opt-outs” for any rights holder—meaning access to important AI inputs is less
predictable and likely to become more difficult as the EU’s regulations take shape.
Unpredictable availability of inputs hinders AI innovation, particularly for smaller, newer
entrants with limited budgets.
The UK government is currently considering changes to its copyright regime. It has
indicated that it prefers creating a data mining exception that allows rights holders to
“reserve their rights,” creating the same regulatory barriers to AI development that we
see in the EU.
Applying the fair use doctrine to AI is not only a matter of American competitiveness
—it’s a matter of national security. The rapid advances seen with the PRC’s DeepSeek,
among other recent developments, show that America’s lead on frontier AI is far from
guaranteed. Given concerted state support for critical industries and infrastructure
projects, there’s little doubt that the PRC’s AI developers will enjoy unfettered access to
data—including copyrighted data—that will improve their models. If the PRC’s
developers have unfettered access to data and American companies are left without fair
use access, the race for AI is effectively over. America loses, as does the success of
democratic AI. Ultimately, access to more data from the widest possible range of sources
will ensure more powerful innovations that deliver even more knowledge.
We propose that the US government take steps to ensure that our copyright system
continues to support American AI leadership and American economic and national
security, including by:
● Shaping international policy discussions around copyright and AI, and working to
prevent less innovative countries from imposing their legal regimes on American
AI firms and slowing our rate of progress.
● Actively assessing the overall level of data available to American AI firms and
determining whether other countries are restricting American companies’ access
to data and other critical inputs.
● Encouraging more access to government-held or government-supported data.
This would boost AI development in any case, but would be particularly important
if shifting copyright rules restrict American companies’ access to training data.
● Monitoring domestic policy debates and ongoing litigation, and weighing in where
fundamental, pro-innovation principles are at risk.
Generative AI models represent the next frontier of innovation, poised to revolutionize
the private and public sectors, improving healthcare, education, scientific research, and
so much more. If AI innovation remains protected under longstanding copyright
principles, America will maintain and strengthen its role as the world leader in
cutting-edge technologies and remain positioned to continue championing AI based on
democratic principles with countries around the world.
Around the world, substantial capital is waiting to be invested in AI infrastructure. If the US doesn't move fast to channel these resources into projects that
support democratic AI ecosystems around the world, the funds will flow to projects
backed and shaped by the CCP.
We propose a foundational strategy to ensure that investment in infrastructure drives
economic growth that benefits all Americans; maximizes access to AI; and protects
national security interests by keeping sensitive American data on American soil. This
includes policies and initiatives that encourage rather than stifle developers; support a
thriving AI-ready workforce and ecosystems of labs, start-ups and larger companies; and
secure America’s leadership on AI into the future.
First and foremost, building data centers is capital-intensive, particularly for newcomers
seeking to compete against established hyperscalers with vast resources. We support
the solutions already proposed by this Administration to ensure that sufficient capital
flows to building AI infrastructure in the US:
● Investment vehicles like a Sovereign Wealth Fund.
● Government offtake and guarantees that both provide the government with the
compute it needs and signal to markets that the demand will be there for
American-developed AI.
● Tax credits, loans, and other vehicles the US government can direct to provide
credit enhancement.
We also have proposed:
A National Transmission Highway Act, as ambitious as the 1956 National Interstate and
Defense Highways Act, to expand transmission, fiber connectivity, and natural gas
pipeline construction. The process for obtaining the “three Ps”—planning, permitting, and
paying for approvals from federal, state, local, and tribal authorities—disadvantages
America’s AI industry. Transmission lines can take 10 years or more to complete. When
lines are built, parties must agree on which customers pay higher electrical bills to bear
the cost of construction. In this process, delays often affect the build-out of transmission
lines. Streamlining these processes and eliminating redundancies would significantly
speed up infrastructure projects, keeping America’s AI sector globally competitive and
securing a future of reliable, affordable energy.
Digitizing government data currently in analog form. A lot of government data is in the
public domain. Making it more accessible or machine-readable could help American AI
developers of all sizes, especially those working in fields where vital data is often
government-held. In exchange, developers using this data could work with governments to unlock new insights that help them develop better public policies. For example,
government agencies can build on the work of the US National Archives and Records
Administration in using Optical Character Recognition for text searchability and AI-driven
metadata tagging.
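As a purely illustrative sketch of what machine-readability buys (the record IDs and text below are hypothetical), even a minimal inverted index over OCR’d text makes a digitized corpus keyword-searchable:

```python
from collections import defaultdict

def build_index(documents):
    """Map each word to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, word):
    """Return sorted IDs of documents containing the word."""
    return sorted(index.get(word.lower(), set()))

# Hypothetical OCR output from digitized analog records
docs = {
    "rec-001": "Treaty of commerce and navigation",
    "rec-002": "Annual report on navigation safety",
    "rec-003": "Census summary for 1950",
}
index = build_index(docs)
print(search(index, "navigation"))  # → ['rec-001', 'rec-002']
```

Real pipelines layer OCR correction and metadata tagging on top, but the core benefit is the same: once text is machine-readable, retrieval becomes trivial.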
A Compact for AI among US allies and partner nations that streamlines access to capital
and supply chains in ways that support AI infrastructure and a robust AI ecosystem.
Participating countries would also agree to some common standards to safeguard data
centers and the technology. Over time, this collaboration could expand to a global
network of US allies and partners that would compete with the PRC’s AI infrastructure
alliances while also strengthening security through shared standards.
AI Economic Zones, created by local, state, and federal governments together with
industry, that speed up the permitting for building AI infrastructure like new solar arrays,
wind farms, and nuclear reactors. This could include creating categorical exclusions to
the National Environmental Policy Act, such as a national security waiver given the
global competition for AI leadership. These zones could also build on the first Trump
Administration’s “Opportunity Zones” through tax incentives or credit enhancements in
order to encourage private capital investment.
A nationwide AI Readiness Strategy—rooted in local communities in partnership with
American companies—to help our current workforce and students become AI-ready,
bolster the economy, and secure America’s continued leadership on innovation.
Maintaining American leadership in AI means ensuring we have an experienced, trained
professional workforce working across the AI supply chain, including construction workers, HVAC technicians, and electricians. Government should ensure this
training is accessible and affordable, such as by:
● At the federal level, expanding 529 savings plans to cover more AI supply chain-related training programs—including for construction workers, HVAC technicians, and electricians, as well as AI researchers and developers—by amending Section 529 of the Internal Revenue Code or broadening the SECURE Act’s provisions.
● At the federal or state level, incentivizing AI supply chain companies to work with
a backbone organization to understand the workforce needs of AI supply chain
companies, develop a pipeline of training programs that help companies meet
those needs, and coordinate with labor unions, community colleges, and trade
associations to build and operate that training pipeline.
Creation of AI research labs and workforces aligned with key local industries by requiring
AI companies to provide meaningful amounts of compute to public universities to
equitably scale the training of a homegrown AI-skilled workforce. For example, one state
could establish a hub dedicated to applying AI in agriculture while another develops
centers focused on integrating AI into power production and grid resilience.
Using the Defense Production Act (DPA) Title I to manage supply chain risk by
designating gas turbines, Rankine cycle turbines, high-voltage transformers, or
switchgear for data centers as “rated orders.” This prioritization could significantly
shorten timelines for data center power infrastructure projects.
employees, and especially national security sector employees, largely unable to harness
the benefits of the technology.
The government should encourage public-private partnerships to enhance government
AI adoption by removing known blockers to the adoption of AI tools, including outdated
and lengthy accreditation processes, restrictive testing authorities, and inflexible
procurement pathways. Specifically, we recommend:
● Modernizing cybersecurity rules for cloud-based applications. The government’s current processes for AI providers to comply with federal security regulations—primarily through the Federal Risk and Authorization Management Program (FedRAMP)—take 12 to 18 months, compared to the one- to
three-month commercial standard, with no clear evidence of additional protection
for government data. The government should modernize FedRAMP by
establishing a faster, criteria-based path for approval of AI tools. Criteria could
include Foreign Ownership, Control, or Influence (FOCI) approval; Facility Clearance (FCL) status; US incorporation; a first-party AI model that ranks in the
top 20 of a recognized evaluation framework (for example, MMLU, or Massive
Multitask Language Understanding); SOC 2 (System and Organization Controls 2)
accreditation; and a recent third-party penetration test with all findings addressed.
● Accelerating AI testing and experimentation. The government should allow federal
agencies to test and experiment with real data using commercial-standard
practices—such as SOC 2 or International Organization for Standardization (ISO)
audit reports—and potentially grant a temporary waiver for FedRAMP. AI vendors
would still be required to meet FedRAMP continuous monitoring requirements
while awaiting full accreditation. Combined with standard due diligence before
actual use, this approach could allow agencies to access new AI services roughly
12 months earlier while maintaining compliance with federal security
requirements.
● Enabling rapid procurement mechanisms. Once new security and testing
approaches are in place, agencies must also have quicker, more direct routes to
procure and deploy frontier AI tools. The government should continue to evaluate Other Transaction Authorities (OTAs), Commercial Solutions Openings (CSOs), and other procurement paths in order to access technology from frontier AI labs, not
just their legacy IT providers. We are encouraged by the Department of Defense’s
recent efforts to Modernize Software Acquisition.
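In spirit, the criteria-based approval path in the first recommendation above reduces to an all-criteria-met gate. This is a hypothetical sketch of that logic, not an actual FedRAMP rule or process; the criterion names simply mirror the bullet above:

```python
# Illustrative only: criteria mirror the proposal above, not a real FedRAMP process.
REQUIRED_CRITERIA = {
    "foci_approved",     # Foreign Ownership, Control, or Influence (FOCI) approval
    "fcl_status",        # Facility Clearance (FCL) status
    "us_incorporated",   # Incorporated in the US
    "top20_model",       # First-party model in top 20 of a recognized eval (e.g., MMLU)
    "soc2_accredited",   # SOC 2 accreditation
    "pentest_clean",     # Recent third-party penetration test, all findings addressed
}

def fast_track_eligible(vendor: dict) -> bool:
    """A vendor qualifies for the faster path only if every criterion is met."""
    return all(vendor.get(criterion, False) for criterion in REQUIRED_CRITERIA)

vendor = {criterion: True for criterion in REQUIRED_CRITERIA}
print(fast_track_eligible(vendor))  # → True
vendor["pentest_clean"] = False
print(fast_track_eligible(vendor))  # → False
```

The appeal of such a gate is predictability: vendors know exactly which evidence to assemble, and reviewers apply the same objective checklist to every applicant.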
Enabling federal agencies to quickly acquire consumer-focused models is not enough,
however. The government also needs to pursue and fund bespoke national security pilot
projects for which there may be no commercial market by:
● Partnering with industry to develop custom models for national security. The
government needs models trained on classified datasets that are fine-tuned to be
exceptional at national security tasks for which there is no commercial
market—such as geospatial intelligence or classified nuclear tasks. This will likely
require on-premises deployment of model weights and access to significant
compute, given the security requirements of many national security agencies.
● Acting now to fund these projects and secure this compute—enabling industry
partners to secure chips, transformers, and begin construction, and ensuring that
this compute comes online at the pace that innovation and geopolitical
competition require.
Lastly, frontier AI labs need Facility Clearances (FCL) to work directly with the national
security enterprise on these pilot projects and custom models. The government should:
● Expedite FCL for frontier AI labs committed to supporting national security. The
process for obtaining an FCL can take a year or longer. Given the rapid pace of AI
development, the government should start prioritizing deeper collaboration with
frontier AI labs as soon as possible.
We look forward to discussing the above proposals with the Office of Science and
Technology Policy as we continue to build on our relationship with the US government
and work toward AI that benefits everyone.
people solve hard problems because by helping with the hard problems, AI can benefit the most
people possible—through more scientific discoveries, better healthcare and education, and
improved productivity. We’re off to a strong start, creating freely available intelligence being used
by more than 400 million people around the world, including 3 million developers. We believe AI
will scale human ingenuity and drive unprecedented economic growth and new freedoms that
help people accomplish what we can't even imagine today.
March 13, 2025
To: Faisal D'Souza, NCO
Office of Science and Technology Policy
2415 Eisenhower Avenue
Alexandria, VA 22314

From: Christopher Lehane
OpenAI
1455 3rd Street
San Francisco, CA 94158
This document is approved for public dissemination. The document contains no business-proprietary or confidential
information. Document contents may be reused by the government in developing the AI Action Plan and associated
documents without attribution.
“It is the policy of the United States to sustain and enhance America’s global dominance in
order to promote human flourishing, economic competitiveness, and national security”
– President Donald J. Trump, Executive Order 14179, January 23, 2025
OpenAI respectfully submits the enclosed proposals to the Office of Science and
Technology Policy as it weighs a new AI Action Plan that will, as Vice President Vance
stated recently at the Paris AI Action Summit, maintain American leadership in AI and
“make people more productive, more prosperous, and more free.” As America’s
world-leading AI sector approaches artificial general intelligence (AGI), with a Chinese
Communist Party (CCP) determined to overtake us by 2030, the Trump Administration’s
new AI Action Plan can ensure that American-led AI built on democratic principles
continues to prevail over CCP-built autocratic, authoritarian AI.
OpenAI agrees with the Trump Administration that AI creates prosperity and freedom worth
fighting for—especially for younger generations whose future will be shaped by how this
Administration approaches AI. Globally, most ChatGPT users are under age 35; in the US,
about one third are ages 18 to 24.[1] Both young people and their parents recognize the
economic opportunities AI presents:
● More than seven in 10 parents in the US believe children today will be worse off
financially than they are.[2]
● Nine in 10 US parents think it's important that their kids learn how to use artificial
intelligence for their future jobs—and eight in 10 say either that it isn’t happening today or that they don’t know if it is.[3]
● Three in four college-age AI users want to use AI in their education and careers.
Many are teaching themselves and their friends about AI without waiting for their
schools to provide formal AI education.[4]
[1] Self-reported among logged-in users
[2] Pew Research Center: Views of children’s financial future, Jan. 2025
[3] Morning Consult survey commissioned by Samsung: 88% of US Parents of Gen Alpha & Gen Z Students Say AI Will Be Crucial to Their Child’s Future Success, Sept. 2024
[4] OpenAI: Building an AI-Ready Workforce: A Look at ChatGPT Adoption in the US, Feb. 2025
In particular, AI could drive significantly increased productivity over the next decade. Here’s
how we can realize this heightened prosperity and greater freedom together.
Scaling human ingenuity
Innovation creates and scales our ability to push beyond our current limits. From foot travel to the domesticated horse, the wheel, steam power, the car, the plane—we scaled the
freedom of mobility. From daylight to candle and lamplight, to electricity providing light and
power at all hours—we scaled the freedom to produce, think and create. From word of
mouth to the stylus and tablet, to the printing press, telegraph, phone, computer,
smartphone—we scaled freedom of learning and knowledge. Now, as we approach AGI,
innovation is poised to scale human ingenuity itself—the sum of our freedoms to learn and
know, think, create and produce.
As our CEO Sam Altman has written, we are at the doorstep of the next leap in prosperity:
the Intelligence Age. But we must ensure that people have freedom of intelligence, by
which we mean the freedom to access and benefit from AGI, protected from both autocratic
powers that would take people’s freedoms away, and layers of laws and bureaucracy that
would prevent our realizing them.
More than 400 million people around the world are using ChatGPT to ideate, discover, and
break through beyond what we’re currently capable of doing on our own. Just two weeks
ago, we partnered with the Department of Energy’s national labs to bring together 1,500
scientists to use our tools to take scientific discovery farther, faster.
Our work at OpenAI also suggests that as AI advances, progress accelerates and becomes
increasingly affordable, as reflected in these three scaling principles:
1. The intelligence of an AI model roughly equals the log of the resources used to train and
run it. Until recently, scaling progress has primarily come from training compute and data,
but we have shown how to make intelligence scale from inference compute, as well. The
scaling laws that predict these gains are incredibly precise over many orders of magnitude,
so investing more in AI will continue to make it better and more capable. We believe that
the socioeconomic value of linearly increasing intelligence is super-exponential in nature.
2. The cost to use a given level of AI capability falls by about 10x every 12 months, and
lower prices lead to much more use. We saw this in the change in token cost between
GPT-4 in early 2023 and GPT-4o in mid-2024, where the price per token dropped about
150x in that time period. Moore’s Law predicted that the number of transistors on a
microchip would double roughly every two years; the decrease in the cost of using AI is
even more dramatic.
3. The amount of calendar time it takes to improve an AI model keeps decreasing. AI
models are catching up with human intelligence at an increasing rate. The typical time it
takes for a computer to beat humans at a given benchmark has fallen from 20 years after
the benchmark was introduced, to five years, and now to one to two years—and we see no
reason why those advancements will stop in the near future.
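The cost trend in the second principle can be sanity-checked with quick arithmetic. Assuming a roughly 150x per-token price drop over the roughly 18 months between GPT-4 and GPT-4o, the annualized rate of decline works out well above the 10x-per-year baseline:

```python
# Illustrative arithmetic: annualize a 150x price drop over ~18 months.
total_drop = 150   # GPT-4 (early 2023) -> GPT-4o (mid-2024) per-token price ratio
months = 18        # approximate elapsed time (assumption for this sketch)

annual_factor = total_drop ** (12 / months)
print(round(annual_factor, 1))  # → 28.2, i.e., roughly a 28x decline per year

# For comparison, Moore's Law's 2x every 24 months is only ~1.41x per year.
moore_annual = 2 ** (12 / 24)
print(round(moore_annual, 2))  # → 1.41
```

Even against the stated 10x-per-year trend, the GPT-4 to GPT-4o interval was an unusually steep stretch of the cost curve.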
By scaling human ingenuity ever faster and more affordably, AGI will create a flywheel of
more freedom leading to more productivity, prosperity, and yet more innovation—letting us
once again focus on positive-sum growth.
Advancing democratic AI
OpenAI believes the best future is one in which we move forward with democratic AI—AI that is shaped by the democratic principles America has always stood for. As OpenAI
recently laid out in our Economic Blueprint, we believe these principles include:
● A free market promoting free and fair competition that drives innovation.
● Freedom for developers and users to work with and direct our tools as they see fit,
in exchange for following clear, common-sense technical standards that help keep
AI safe for everyone, and being held accountable when they don’t.
● Preventing government use of AI tools to amass power and control their citizens,
or to threaten or coerce other states.
In advancing democratic AI, America is competing with a CCP determined to become the
global leader by 2030. That’s why the recent release of DeepSeek’s R1 model is so
noteworthy—not because of its capabilities (R1’s reasoning capabilities, albeit impressive,
are at best on par with several US models), but as a gauge of the state of this competition.
As with Huawei, there is significant risk in building on top of DeepSeek models in critical
infrastructure and other high-risk use cases given the potential that DeepSeek could be
compelled by the CCP to manipulate its models to cause harm. And because DeepSeek is
simultaneously state-subsidized, state-controlled, and freely available, the cost to its users
is their privacy and security, as DeepSeek faces requirements under Chinese law to
comply with demands for user data and uses it to train more capable systems for the CCP’s
use. Their models also more willingly generate how-to’s for illicit and harmful activities such
as identity fraud and intellectual property theft, a reflection of how the CCP views violations
of American IP rights as a feature, not a flaw.
Today, CCP-controlled China has a number of strategic advantages, including:
● As an authoritarian state, its ability to quickly marshal resources—data, energy,
technical talent, and the enormous sums needed to build out its own domestic chip
development capacity.
● Its preexisting Belt and Road initiative. As with Huawei, the PRC will scale the
adoption of PRC-based AI systems like DeepSeek’s by coercing countries needing
AI tools and nation-building infrastructure funds.
● Its ability to benefit from regulatory arbitrage being created by individual American
states seeking to pass their own industry-wide laws, some of which are modeled
on the European Union’s regulation of AI. These laws are easier to enforce with
domestic AI companies than PRC-based companies and could impose
burdensome compliance requirements that may hinder our economic
competitiveness and undermine our national security. They also may weaken the
quality and level of training data available to American entrepreneurs and the
usefulness for downstream consumers and businesses.
● Its ability to benefit from copyright arbitrage being created by democratic nations
that do not clearly protect AI training by statute, like the US, or that reduce the
amount of training data through an opt-out regime for copyright holders, like the
EU. The PRC is unlikely to respect the IP regimes of any such nations for the
training of its AI systems, but already likely has access to all the same data, putting
American AI labs at a comparative disadvantage while gaining little in the way of
protections for the original IP creators.
While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and
is narrowing. The AI Action Plan should ensure that American-led AI prevails over CCP-led
AI, securing both American leadership on AI and a brighter future for all Americans.
What we propose
OpenAI’s freedom-focused policy proposals, taken together, can strengthen America’s lead on AI and in so doing, unlock economic growth, lock in American competitiveness, and
protect our national security. Specifically, we detail:
A regulatory strategy that ensures the freedom to innovate: For innovation to truly create
new freedoms, America’s builders, developers, and entrepreneurs—our nation’s greatest
competitive advantage—must first have the freedom to innovate in the national interest. We
propose a holistic approach that enables voluntary partnership between the federal
government and the private sector, and neutralizes potential PRC benefit from American AI
companies having to comply with overly burdensome state laws.
An export control strategy that exports democratic AI: For countries seeking access to
American AI, we propose a strategy that would apply a commercial growth lens—both Total
and Serviceable Addressable Markets—to proactively promote the global adoption of
American AI systems and with them, the freedoms they create. At the same time, the
strategy would use export controls to protect America’s AI lead, including by making
updates to the AI diffusion rule.
A copyright strategy that promotes the freedom to learn: America’s robust, balanced
intellectual property system has long been key to our global leadership on innovation. We
propose a copyright strategy that would extend the system’s role into the Intelligence Age
by protecting the rights and interests of content creators while also protecting America’s AI
leadership and national security. The federal government can both secure Americans’
freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving
American AI models’ ability to learn from copyrighted material.
A strategy to seize the infrastructure opportunity to drive growth: Sustaining America’s lead
on AI means building the necessary infrastructure to compete with the PRC and its
commandeered resources. We propose policies to seize this unmissable opportunity to
catalyze a reindustrialization across our country, creating and supporting hundreds of
thousands of jobs, boosting local economies, modernizing our energy grid, and preparing
an AI-ready workforce—the key pillar of any country’s AI infrastructure.
An ambitious government adoption strategy: Advancing democratic AI around the world
starts with ensuring that the US government itself sets an example of governments using AI
to keep their people safe, prosperous, and free. With the PRC progressing toward
ambitious targets for AI adoption across its public administration, security, and military, the
US government should modernize its processes to safely deploy frontier AI tools at the
pace of the private sector and with the efficiency Americans deserve.
America always succeeds when it bets on American ingenuity. The enclosed policy
proposals are either derived from, or in the case of copyright represent updates to OpenAI's
Economic Blueprint, and we look forward to discussing them with you.
Chris Lehane
Vice President, Global Affairs
1. Preemption: Ensuring the Freedom to Innovate
We propose creating a tightly-scoped framework for voluntary partnership between the federal government and the private sector to protect and strengthen American national
security. This framework would extend the tradition of government receiving learnings
and access, where appropriate, in exchange for providing the private sector relief from
the 781 (and counting) proposed AI-related bills already introduced this year in US states.
This patchwork of regulations risks bogging down innovation and, in the case of AI,
undermining America’s leadership position.
Overseen by the US Department of Commerce and in coordination with the AI Czar,
perhaps by reimagining the US AI Safety Institute, this effort would provide domestic AI
companies with a single, efficient “front door” to the federal government that would
coordinate expertise across the entire national security and economic competitiveness
communities.[5]
This targeted framework would empower the federal government to:
● Work with both large AI companies and start-ups on a purely voluntary and
optional basis to stay informed about AI risks as well as cutting-edge capabilities
that support US national interests, including by establishing sandbox and testing
capabilities on the secure premises of federal agencies.
● Evaluate the state of American AI technology against the technology of
competitors and adversaries, including evaluating foreign models for the potential
for back doors or malign influence.
● Coordinate the development of technical standards for evaluating and
safeguarding frontier models from national security risks.
● Provide American AI companies with the tools and classified threat intelligence to
mitigate national security risks that are exacerbated by frontier models (e.g.,
cyber, CBRN) and posed by nation-state actors (e.g., economic espionage by
China).
● Incentivize companies to take part in this voluntary initiative by creating glide
paths for them to contract with the government, including on national security
projects; creating strong protections for any company information shared during
these partnerships; and reducing barriers to companies' internal work related to
national security domains.
● Guarantee that state-based legislation does not undermine America’s innovation
lead on AI. Create a sandbox for American start-ups, and provide participating
companies with liability protections including preemption from state-based
regulations that focus on frontier model security (e.g., CA SB 1047). This will help
keep the US public and private sectors competitive by allowing AI companies of
all sizes to pursue bleeding-edge AI technology free from the regulatory
uncertainty created by some state-based liability regimes.
[5] Federal preemption over existing or prospective state laws will require an act of Congress
2. Export Controls: Exporting Democratic AI
A comprehensive export control strategy should do more than restrict the flow of AI technologies to the PRC—it should ensure that America is “winning diffusion”, i.e., that
as much of the world as possible is aligned to democratic values and building on
democratic infrastructure. To that end, we propose that the US government consider the
Total Addressable Market (TAM), i.e., the entire world less the PRC and its few allies,
against the Serviceable Addressable Market (SAM), i.e., those countries that prefer to build AI on democratic rails, and help as many of the latter as possible make that commitment, including by deploying AI in line with democratic principles set out by the US government.
In particular, we propose maintaining the AI diffusion rule’s three-tiered framework to
differentiate among countries in the global AI market, but with some key modifications
that expand the number of countries in Tier I:
Tier I: Countries that commit to democratic AI principles by deploying AI systems in ways
that promote more freedoms for their citizens could be considered Tier I countries.
Tier II: Limited to those countries that have a history of failing to prevent export-controlled chips and other US-developed IP from being diverted into, or used by, Tier III countries. These countries would be encouraged and supported to obtain Tier I
status over time; and would be subject to more stringent security requirements in the
interim.
Tier III: CCP-led China, along with a small cohort of countries aligned with the CCP,
would represent its own category that is prohibited from accessing democratic AI
systems.
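In spirit, the three-tier assignment above reduces to a short decision sequence. The attributes below are hypothetical inputs for illustration, not assessments of any actual country:

```python
def classify_tier(country: dict) -> str:
    """Assign a tier under the framework sketched above (illustrative only)."""
    if country.get("ccp_aligned", False):
        return "Tier III"  # prohibited from accessing democratic AI systems
    if country.get("diversion_history", False):
        return "Tier II"   # stricter controls; supported toward Tier I over time
    if country.get("democratic_ai_commitment", False):
        return "Tier I"    # committed to democratic AI principles
    # No stated commitment yet: treated cautiously in this sketch (assumption).
    return "Tier II"

print(classify_tier({"democratic_ai_commitment": True}))  # → Tier I
print(classify_tier({"diversion_history": True}))         # → Tier II
print(classify_tier({"ccp_aligned": True}))               # → Tier III
```

The ordering matters: alignment with the PRC dominates, diversion history overrides a stated commitment, and the default is the more restrictive tier pending a demonstrated commitment.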
This strategy would encourage global adoption of democratic AI principles, promoting the
use of democratic AI systems while protecting US advantage. Making sure that
open-sourced models are readily available to developers in these countries also will
strengthen our advantage. We believe the question of whether AI should be open or
closed source is a false choice—we need both, and they can work in a complementary
way that encourages the building of AI on American rails.
Tier I countries should include American allies, as well as those countries that are
committed to democratic AI principles and that present a relatively low risk that American
AI infrastructure (e.g., chips) will be diverted to non-Tier I countries. The commercial
diplomacy strategy in Tier I should recognize these countries’ strong history of export
and customs control compliance and seek to maximally expand democratic AI systems’
market share, while at the same time protecting those systems from IP theft by the PRC
and other malign actors (e.g., the theft of model weights and/or chip designs,
unauthorized influence or access to data center operations).
To expand market share in Tier I countries, American commercial diplomacy policy
should:
● Encourage cross-border capital flows and promote software frameworks that are
optimized for domestic chip design.
● Coordinate global bans on CCP-aligned AI infrastructure, including Huawei chips.
● Continue to represent American company interests in safety and security
standards bodies, and encourage global regulators to adopt pro-growth safety and
security policies.
● Revise the existing export control rules to eliminate country caps on compute.
● Maintain existing export license exceptions (e.g., license exception ACM) that
enable exports of technology and software for technical collaboration with allies
and preservation of economically critical supply chains.
To protect the US-developed IP needed to operate data centers in Tier I countries,
security requirements could include:
● Prohibiting relationships with Tier III countries’ foreign military and intelligence
services, and the use of data centers to support military/intelligence missions for
Tier III nations or human rights violators.
● Banning the use of PRC-produced equipment (e.g., Huawei Ascend chips) and
models that violate user privacy and create security risks such as the risk of IP
theft.
● Maintaining corporate control by entities headquartered in Tier I countries.
● Implementing—and constantly modernizing—cybersecurity, model weight security,
and personnel security controls that ideally are globally synced and coordinated
among Tier I governments.
Controls on model weights—if any—should strike a balance between protecting
American-developed IP and promoting the deployment of American-developed models
over those developed by Tier III countries, including the PRC.
Tier II countries should include those with a history of failing to prevent
export-controlled chips and other US-developed IP from being diverted into, or used by, Tier III countries. Here, the commercial diplomacy strategy should still seek to expand
US market share, but should do so more carefully, including by levying stronger controls
on the export of US-developed AI infrastructure. At the same time, the strategy should
provide transparent pathways for Tier II countries to reach Tier I status by adopting
democratic AI principles and more effectively managing risks of chip diversion.
To expand American market share in Tier II countries, in addition to the steps above, the
commercial diplomacy policy could be designed to leverage commercial interest in American-led AI to encourage investment in the US, strengthen security procedures, and persuade more countries to build on American rails:
● Establish a transparent process to evaluate countries’ readiness to transition from
Tier II to Tier I.
● Support countries’ transition from Tier II to Tier I by helping Tier II governments
strengthen their in-country security programs.
● Encourage greater economic interdependence between the US and Tier II
countries.
● Incentivize public-private partnerships to rapidly mature, scale, and commercialize
hardware-enabled mechanisms that could enhance in-country security controls in
the future.
To protect the American-developed IP needed to operate data centers in Tier II countries,
and to manage both the heightened risk of IP theft and the additional risk that
export-controlled chips might be diverted from Tier II into Tier III countries, the
commercial diplomacy policy also could:
● Allow the export of advanced AI chips to an end-user located in a Tier II country
that meets Tier I security requirements, and that puts in place additional corporate
governance controls as well as technology-enhanced protections (e.g.,
hardware-enabled mechanisms) against the diversion of export-controlled chips.
Tier III countries—including the PRC and any other country subject to a US arms
embargo—should continue to be subject to strict export controls of AI systems, including
existing export controls on advanced chips. The strategy could also expand established controls, for example, to include advanced chips required for large-scale inference and reinforcement learning (RL) training, and the components used to manufacture advanced AI chips and data centers.
3. Copyright: Promoting the Freedom to Learn
American copyright law, including the longstanding fair use doctrine, protects the transformative uses of existing works, ensuring that innovators have a balanced and
predictable framework for experimentation and entrepreneurship. This approach has
underpinned American success through earlier phases of technological progress and is
even more critical to continued American leadership on AI in the wake of recent events in
the PRC. OpenAI’s models are trained to not replicate works for consumption by the
public. Instead, they learn from the works and extract patterns, linguistic structures, and
contextual insights. This means our AI model training aligns with the core objectives of
copyright and the fair use doctrine, using existing works to create something wholly new
and different without eroding the commercial value of those existing works.
America has so many AI startups, attracts so much investment, and has made so many
research breakthroughs largely because the fair use doctrine promotes AI development.
In other markets, rigid copyright rules are repressing innovation and investment.
The European Union, for one, has created “text and data mining exceptions” with broadly
applicable “opt-outs” for any rights holder—meaning access to important AI inputs is less
predictable and likely to become more difficult as the EU’s regulations take shape.
Unpredictable availability of inputs hinders AI innovation, particularly for smaller, newer
entrants with limited budgets.
The UK government is currently considering changes to its copyright regime. It has
indicated that it prefers creating a data mining exception that allows rights holders to
“reserve their rights,” creating the same regulatory barriers to AI development that we
see in the EU.
Applying the fair use doctrine to AI is not only a matter of American
competitiveness—it’s a matter of national security. The rapid advances seen with the PRC’s DeepSeek,
among other recent developments, show that America’s lead on frontier AI is far from
guaranteed. Given concerted state support for critical industries and infrastructure
projects, there’s little doubt that the PRC’s AI developers will enjoy unfettered access to
data—including copyrighted data—that will improve their models. If the PRC’s
developers have unfettered access to data and American companies are left without fair
use access, the race for AI is effectively over. America loses, as does the success of
democratic AI. Ultimately, access to more data from the widest possible range of sources
will yield more powerful innovations that deliver even more knowledge.
We propose that the US government take steps to ensure that our copyright system
continues to support American AI leadership and American economic and national
security, including by:
● Shaping international policy discussions around copyright and AI, and working to
prevent less innovative countries from imposing their legal regimes on American
AI firms and slowing our rate of progress.
● Actively assessing the overall level of data available to American AI firms and
determining whether other countries are restricting American companies’ access
to data and other critical inputs.
● Encouraging more access to government-held or government-supported data.
This would boost AI development in any case, but would be particularly important
if shifting copyright rules restrict American companies’ access to training data.
● Monitoring domestic policy debates and ongoing litigation, and weighing in where
fundamental, pro-innovation principles are at risk.
Generative AI models represent the next frontier of innovation, poised to revolutionize
the private and public sectors, improving healthcare, education, scientific research, and
so much more. If AI innovation remains protected under longstanding copyright
principles, America will maintain and strengthen its role as the world leader in
cutting-edge technologies and remain positioned to continue championing AI based on
democratic principles with countries around the world.
4. Infrastructure: Seizing the Opportunity to Drive Growth
Today, hundreds of billions of dollars in global funds are waiting to be invested in AI
infrastructure. If the US doesn’t move fast to channel these resources into projects that
support democratic AI ecosystems around the world, the funds will flow to projects
backed and shaped by the CCP.
We propose a foundational strategy to ensure that investment in infrastructure drives
economic growth that benefits all Americans; maximizes access to AI; and protects
national security interests by keeping sensitive American data on American soil. This
includes policies and initiatives that encourage rather than stifle developers; support a
thriving AI-ready workforce and ecosystems of labs, start-ups and larger companies; and
secure America’s leadership on AI into the future.
First and foremost, building data centers is capital-intensive, particularly for newcomers
seeking to compete against established hyperscalers with vast resources. We support
the solutions already proposed by this Administration to ensure that sufficient capital
flows to building AI infrastructure in the US:
● Investment vehicles like a Sovereign Wealth Fund.
● Government offtake and guarantees that both provide the government with the
compute it needs and signal to markets that the demand will be there for
American-developed AI.
● Tax credits, loans, and other vehicles the US government can direct to provide
credit enhancement.
We also have proposed:
A National Transmission Highway Act, as ambitious as the 1956 National Interstate and
Defense Highways Act, to expand transmission, fiber connectivity, and natural gas
pipeline construction. The process for obtaining the “three Ps”—planning, permitting, and
paying for approvals from federal, state, local, and tribal authorities—disadvantages
America’s AI industry. Transmission lines can take 10 years or more to complete. Before
lines are built, parties must agree on which customers will bear the cost of construction
through higher electrical bills, and disputes in that process often delay the build-out of
transmission lines. Streamlining these processes and eliminating redundancies would significantly
speed up infrastructure projects, keeping America’s AI sector globally competitive and
securing a future of reliable, affordable energy.
Digitizing government data currently in analog form. A lot of government data is in the
public domain. Making it more accessible or machine-readable could help American AI
developers of all sizes, especially those working in fields where vital data is often
government-held. In exchange, developers using this data could work with governments
to unlock new insights that help them develop better public policies. For example,
government agencies can build on the work of the US National Archives and Records
Administration in using Optical Character Recognition for text searchability and AI-driven
metadata tagging.
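The digitization pipeline described above—OCR for text searchability plus automated metadata tagging—can be sketched in a few lines. This is an illustrative assumption of what such a pipeline might look like, not an actual government system; the `ocr` step is a stub standing in for a real OCR engine such as Tesseract.

```python
# Illustrative sketch of a digitization pipeline: OCR plus simple metadata
# tagging for searchability. The ocr() function is a placeholder; a real
# pipeline would call an OCR engine on scanned page images.
import re

def ocr(scanned_page: bytes) -> str:
    """Placeholder for an OCR engine; returns recognized text."""
    return scanned_page.decode("utf-8")  # stand-in for real OCR output

def tag_metadata(text: str) -> dict:
    """Derive simple, searchable metadata from recognized text."""
    years = sorted(set(re.findall(r"\b(?:18|19|20)\d{2}\b", text)))
    return {"word_count": len(text.split()), "years_mentioned": years}

page = b"Records of the 1956 Interstate Highway program, digitized 2024."
print(tag_metadata(ocr(page)))
# {'word_count': 9, 'years_mentioned': ['1956', '2024']}
```

Even metadata this simple (dates, word counts, named entities in a fuller version) is what makes an analog archive queryable by AI developers.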
A Compact for AI among US allies and partner nations that streamlines access to capital
and supply chains in ways that support AI infrastructure and a robust AI ecosystem.
Participating countries would also agree to some common standards to safeguard data
centers and the technology. Over time, this collaboration could expand to a global
network of US allies and partners that would compete with the PRC’s AI infrastructure
alliances while also strengthening security through shared standards.
AI Economic Zones, created by local, state, and federal governments together with
industry, that speed up the permitting for building AI infrastructure like new solar arrays,
wind farms, and nuclear reactors. This could include creating categorical exclusions
under the National Environmental Policy Act, such as a national security waiver given the
global competition for AI leadership. These zones could also build on the first Trump
Administration’s “Opportunity Zones” through tax incentives or credit enhancements in
order to encourage private capital investment.
A nationwide AI Readiness Strategy—rooted in local communities in partnership with
American companies—to help our current workforce and students become AI-ready,
bolster the economy, and secure America’s continued leadership on innovation.
Maintaining American leadership in AI means ensuring we have an experienced, trained
professional workforce working across the AI supply chain, including construction
workers, HVAC technicians, and electricians. Government should ensure this
training is accessible and affordable, such as by:
● At the federal level, expanding 529 savings plans to cover more AI supply
chain-related training programs—including for construction workers, HVAC
technicians, and electricians, as well as AI researchers and developers—by amending
Section 529 of the Internal Revenue Code or broadening the SECURE Act’s provisions.
● At the federal or state level, incentivizing AI supply chain companies to work with
a backbone organization to understand the workforce needs of AI supply chain
companies, develop a pipeline of training programs that help companies meet
those needs, and coordinate with labor unions, community colleges, and trade
associations to build and operate that training pipeline.
Creation of AI research labs and workforces aligned with key local industries by requiring
AI companies to provide meaningful amounts of compute to public universities to
equitably scale the training of a homegrown AI-skilled workforce. For example, one state
could establish a hub dedicated to applying AI in agriculture while another develops
centers focused on integrating AI into power production and grid resilience.
Using the Defense Production Act (DPA) Title I to manage supply chain risk by
designating gas turbines, Rankine cycle turbines, high-voltage transformers, or
switchgear for data centers as “rated orders.” This prioritization could significantly
shorten timelines for data center power infrastructure projects.
5. Government Adoption of AI: Leading by Example
AI adoption in federal departments and agencies remains unacceptably low, with federal
employees, and especially national security sector employees, largely unable to harness
the benefits of the technology.
The government should encourage public-private partnerships to enhance government
AI adoption by removing known blockers to the adoption of AI tools, including outdated
and lengthy accreditation processes, restrictive testing authorities, and inflexible
procurement pathways. Specifically, we recommend:
● Modernizing cybersecurity rules for cloud-based applications. The government’s
current processes for AI providers to comply with federal security
regulations—primarily through the Federal Risk and Authorization Management
Program (FedRAMP)—take 12 to 18 months, compared to the one- to
three-month commercial standard, with no clear evidence of additional protection
for government data. The government should modernize FedRAMP by
establishing a faster, criteria-based path for approval of AI tools. Criteria could
include Foreign Ownership, Control, or Influence (FOCI) approval; Facilities
Clearance (FCL) status; US incorporation; a first-party AI model that ranks in the
top 20 of a recognized evaluation framework (for example, MMLU, or Massive
Multitask Language Understanding); SOC 2 (System and Organization Controls 2)
accreditation; and a recent third-party penetration test with all findings addressed.
● Accelerating AI testing and experimentation. The government should allow federal
agencies to test and experiment with real data using commercial-standard
practices—such as SOC 2 or International Organization for Standardization (ISO)
audit reports—and potentially grant a temporary waiver for FedRAMP. AI vendors
would still be required to meet FedRAMP continuous monitoring requirements
while awaiting full accreditation. Combined with standard due diligence before
actual use, this approach could allow agencies to access new AI services roughly
12 months earlier while maintaining compliance with federal security
requirements.
● Enabling rapid procurement mechanisms. Once new security and testing
approaches are in place, agencies must also have quicker, more direct routes to
procure and deploy frontier AI tools. The government should continue to evaluate
Other Transaction Authorities (OTAs), Commercial Solutions Openings (CSOs), or
other procurement paths in order to access technology from frontier AI labs, not
just their legacy IT providers. We are encouraged by the Department of Defense’s
recent efforts to Modernize Software Acquisition.
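The criteria-based approval path recommended above amounts to an all-criteria checklist: a vendor qualifies for the fast path only if every criterion is met. The sketch below illustrates that logic; the field names and data shape are our own illustrative assumptions, not an existing FedRAMP process.

```python
# Illustrative all-criteria checklist for a hypothetical fast-path approval,
# modeled on the criteria listed above. Not an actual government system.
REQUIRED_CRITERIA = {
    "foci_approved",            # Foreign Ownership, Control, or Influence approval
    "fcl_current",              # Facilities Clearance (FCL) status
    "us_incorporated",          # US incorporation
    "top20_eval_ranking",       # first-party model in top 20 of a recognized benchmark
    "soc2_accredited",          # SOC 2 accreditation
    "pentest_findings_closed",  # recent third-party pen test, all findings addressed
}

def fast_path_eligible(vendor: dict) -> bool:
    """A vendor qualifies only if every required criterion is satisfied."""
    return all(vendor.get(criterion, False) for criterion in REQUIRED_CRITERIA)

vendor = {criterion: True for criterion in REQUIRED_CRITERIA}
print(fast_path_eligible(vendor))   # True
vendor["soc2_accredited"] = False
print(fast_path_eligible(vendor))   # False
```

The point of a criteria-based path is exactly this determinism: approval turns on a short, published checklist rather than a bespoke 12-to-18-month review.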
Enabling federal agencies to quickly acquire consumer-focused models is not enough,
however. The government also needs to pursue and fund bespoke national security pilot
projects for which there may be no commercial market by:
● Partnering with industry to develop custom models for national security. The
government needs models trained on classified datasets that are fine-tuned to be
exceptional at national security tasks for which there is no commercial
market—such as geospatial intelligence or classified nuclear tasks. This will likely
require on-premises deployment of model weights and access to significant
compute, given the security requirements of many national security agencies.
● Acting now to fund these projects and secure this compute—enabling industry
partners to secure chips, transformers, and begin construction, and ensuring that
this compute comes online at the pace that innovation and geopolitical
competition require.
Lastly, frontier AI labs need Facility Clearances (FCL) to work directly with the national
security enterprise on these pilot projects and custom models. The government should:
● Expedite FCL for frontier AI labs committed to supporting national security. The
process for obtaining a FCL can take a year or longer. Given the rapid pace of AI
development, the government should start prioritizing deeper collaboration with
frontier AI labs as soon as possible.
We look forward to discussing the above proposals with the Office of Science and
Technology Policy as we continue to build on our relationship with the US government
and work toward AI that benefits everyone.
About OpenAI
OpenAI’s mission is to ensure that as AI advances, it benefits everyone. We’re building AI to help
people solve hard problems because by helping with the hard problems, AI can benefit the most
people possible—through more scientific discoveries, better healthcare and education, and
improved productivity. We’re off to a strong start, creating freely available intelligence being used
by more than 400 million people around the world, including 3 million developers. We believe AI
will scale human ingenuity and drive unprecedented economic growth and new freedoms that
help people accomplish what we can’t even imagine today.
Response to the National Science Foundation’s and
Office of Science & Technology Policy’s Request for Information
on the Development of an Artificial Intelligence (AI) Action Plan
90 Fed. Reg. 9088 (Feb. 6, 2025)
Docket No. NSF_FRDOC_0001
March 13, 2025
Executive Summary
The potential of artificial intelligence is nearly unlimited, and we’re already seeing how it
can revolutionize healthcare, accelerate scientific discovery, and transform our
economy for the better.1 But a nation’s ability to harness AI’s enormous benefits
requires the right policy frameworks.
Google welcomes the Trump Administration’s goal of developing a plan to “sustain and
enhance America’s global AI dominance.”2 While America currently leads the world in
AI—and is home to the most capable and widely adopted AI models and tools—our
lead is not assured. As Vice President Vance urged, we must “catch lightning in a bottle”
and unlock AI’s potential.3 To do that, we recommend focusing on three key areas to
secure America’s position as an AI powerhouse and support a golden era of
opportunity:
1. Invest in AI
Like any multi-use technology, AI can be misused by bad actors, but it also promises to
greatly improve our lives. For too long, AI policymaking has paid disproportionate
attention to the risks, often ignoring the costs that misguided regulation can have on
innovation, national competitiveness, and scientific leadership—a dynamic that is
beginning to shift under the new Administration. Sustaining this momentum will require
action in four areas:
A. Advance energy policies needed to power domestic data centers.
A potential lack of new energy supply is the core constraint to expanding AI
infrastructure in the near term. Both training and inference computational needs for AI
are growing rapidly. Compute requirements for training have historically doubled every
six months, and inference compute needs are expected to increase by orders of
magnitude in the coming years. While we are seeing significant efficiency
improvements, widespread AI adoption may still result in large increases in electricity
requirements, with projections of AI datacenter power demand rising by nearly 40 GW
globally from 2024 to 2026.4 Current U.S. energy infrastructure and permitting
processes appear inadequate to meet these escalating needs.
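A doubling of training compute every six months compounds quickly. As a rough arithmetic illustration of the trend stated above (the time horizons below are chosen only for the calculation, not sourced projections):

```python
# Compound-growth arithmetic for the compute trend described above:
# a quantity that doubles every 6 months grows by 2**(months / 6).

def growth_factor(months: float, doubling_period_months: float = 6.0) -> float:
    """Multiplicative growth after `months`, doubling every `doubling_period_months`."""
    return 2.0 ** (months / doubling_period_months)

print(growth_factor(12))  # 4.0  -> 4x per year
print(growth_factor(24))  # 16.0 -> 16x over two years
```

Sustained growth at that rate is why energy supply, rather than chips alone, becomes the binding constraint on AI infrastructure.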
The U.S. government should adopt policies that ensure the availability of energy for
data centers and other growing business applications that are powering the growth of
the American economy. This includes transmission and permitting reform to ensure
adequate electricity for data centers coupled with federal and state tools for de-risking
investments in advanced energy-generation and grid-enhancing technologies. Other
key actions to meet new electricity load growth include improvements in electricity
system planning, incentives for utilities to use existing infrastructure more efficiently,
greater integration of regional electricity grids, and workforce development in building
trades underpinning energy infrastructure.
B. Adopt balanced export control policies.
Export controls can play an important role in supporting national security, but only if
they are carefully crafted to support legitimate market access for U.S. businesses while
targeting the most pertinent risks. AI export rules imposed under the previous
Administration (including the recent Interim Final Rule on AI Diffusion)5 may undermine
economic competitiveness goals the current Administration has set by imposing
disproportionate burdens on U.S. cloud service providers. While we support the
national security goals at stake, we are concerned that the impacts may be
counterproductive and plan to submit a more detailed analysis of the AI Diffusion rule
by the May 15 comment deadline.
4 Dylan Patel et al., AI Datacenter Energy Dilemma – Race for AI Datacenter Space,
Semianalysis (Mar. 13, 2024).
The government will need to craft export controls carefully to avoid creating undue
competitive disadvantages for U.S. companies. The U.S. government should adequately
resource and modernize the Bureau of Industry and Security (BIS), including through
BIS’s own adoption of cutting-edge AI tools for supply chain monitoring and
counter-smuggling efforts, alongside efforts to streamline export licensing processes
and consideration of wider ecosystem issues beyond limits on hardware exports.
Effective enforcement requires robust international engagement to maximize global
compliance. And export controls are most impactful when coupled with a proactive
strategy of domestic energy and infrastructure development to maintain a durable
competitive advantage.
C. Accelerate AI R&D, streamline access to computational resources for
researchers, and incentivize public-private partnerships with
national labs.
Long-term, sustained investments in foundational domestic R&D and AI-driven
scientific discovery have given the U.S. a crucial advantage in the race for global AI
leadership. Policymakers should significantly bolster these efforts—with a focus on
speeding funding allocations to early-market R&D and ensuring essential compute,
high-quality datasets, and advanced AI models are widely available to scientists and
institutions.6 Lowering barriers to entry will ensure that the American research
community remains keenly focused on innovation rather than struggling with resource
acquisition. The government should also continue investments to identify and prioritize
the most important unsolved challenges in the physical and life sciences (e.g., via
federal prize challenges and competitions), focusing on how AI-driven approaches can
help fuel scientific breakthroughs in areas of critical national interest.
5 See Framework for Artificial Intelligence Diffusion, 90 Fed. Reg. 4544 (Jan. 15, 2025).
6 Google, A Policy Framework for Building the Future of Science with AI (Feb. 2025).
Policymakers should move quickly to further incentivize partnerships with national labs
to advance research in science, cybersecurity, and chemical, biological, radiological,
and nuclear (CBRN) risks. The U.S. government should make it easier for national
security agencies and their partners to use commercial, unclassified storage and
compute capabilities, and should take steps to release government datasets, which can
be helpful for commercial training.
D. Craft a pro-innovation federal framework for AI.
(i) Support federal legislation that prevents a patchwork of laws
at the state level, especially for frontier AI development.
The Administration should ensure that the U.S. avoids a fragmented regulatory
environment that would slow the development of AI, including by supporting federal
preemption of state-level laws that affect frontier AI models. Such action is properly a
federal prerogative and would ensure a unified national framework for frontier AI
models focused on protecting national security while fostering an environment where
American AI innovation can thrive. Similarly, the Administration should support a
national approach to privacy, as state-level fragmentation is creating compliance
uncertainties for companies and can slow innovation in AI and other sectors.
(ii) Ensure industry has access to openly available data that
enable fair learning.
Three areas of law can impede appropriate access to data necessary for training
leading models: copyright, privacy, and patents.
Copyright. Balanced copyright rules, such as fair use and text-and-data mining
exceptions, have been critical to enabling AI systems to learn from prior knowledge
and publicly available data, unlocking scientific and social advances. These exceptions
allow for the use of copyrighted, publicly available material for AI training without
significantly impacting rightsholders and avoid often highly unpredictable, imbalanced,
and lengthy negotiations with data holders during model development or scientific
experimentation. Balanced copyright laws that ensure access to publicly available
scientific papers, for example, are essential for accelerating AI in science, particularly
for applications that sift through scientific literature for insights or new hypotheses.
Privacy. Balanced privacy laws that recognize exemptions for publicly available
information will avoid inadvertent conflicts with AI or copyright standards, or other
impediments to the development of AI systems. A federal privacy regulatory
framework should define categories of publicly available data and anonymous data
that are treated differently than personally identifying data. Federal regulations can also
encourage the use of AI-powered privacy-enhancing technologies to help protect
Americans’ data from malicious actors.
Patents. The Administration should improve and maintain access to the U.S. Patent and
Trademark Office’s Inter Partes Review program to permit efficient review of AI patents
granted in error. The U.S. has seen tremendous growth in the patenting of AI in recent
years.7 Many of these patents are held by American companies like Google, but a
growing percentage are held by entities based outside of the U.S., including in China.8
In the last year, China’s overall U.S. patent grants grew by over 30%, more than any
other country.9 With the increasing number of patent applications filed at the Patent
and Trademark Office and the limited time available for reviewing those patent
applications, mistakes are inevitable. According to one study, the agency’s error rate
may be nearly 40% for software-related technologies.10 The rise of the first computers
and then the internet saw a flood of patent applications for traditional functions simply
performed “on a computer” or “via the internet.” To avoid a similar phenomenon around
functions performed “with AI,” businesses need to be able to request agency
assessments of a patent’s validity through the Inter Partes Review process (when the
high statutory bar is met). The agency should not reject meritorious requests based
merely on agency-developed discretionary rules (such as those set out in the Fintiv
case), and it needs continued staffing of its user-fee-funded Patent Trial and Appeal
Board.11 Otherwise,
patents that were granted in error can be used by foreign entities to block and
bottleneck American AI innovation, taking time and resources away from R&D, and
subjecting highly sensitive technical information to discovery.
7 Ayana Marshall, AI Titans: Who’s Dominating the Patent Universe, Harrity (Mar. 11, 2024).
8 Jack Caporal, The Companies With the Most Generative AI Patents - and Why
Investors Should Care, Motley Fool (updated Mar. 9, 2025).
9 IFI Claims, 2024 Trends and Insights (last visited Mar. 12, 2025).
10 Shawn P. Miller, Where’s the Innovation: An Analysis of the Quantity and Qualities of
Anticipated and Obvious Patents, 18 Va. J.L. & Tech. 1, 23 (2013).
11 See Apple Inc. v. Fintiv, Inc., IPR2020-00019 (Mar. 20, 2020).
(iii) Emphasize focused, sector-specific, and risk-based AI
governance and standards.
Any regulation of AI applications should be proportional to relevant risks. Determining
when, or if, to regulate requires context and a recognition of the unique challenges and
opportunities in the specific domains where AI is used. Autocorrect features don’t pose
the same risks (or benefits) as healthcare applications deployed in an emergency
room. To account for AI’s context-dependent impacts, government regulation should
be focused on specific applications, building upon existing sectoral rules and
intervening directly only where demonstrably necessary.
Consensus technical standards and protocols can also play a critical role. As a baseline,
regulations should align with recognized standards and support the development of
standards and recommended practices; in many instances, establishing standards may
be better than defining specific terms or thresholds in law or policy because they
better keep pace with the technical state of the art. For example, standards and
protocols can help ensure that privacy-enhancing technologies are implemented
responsibly and in ways that make them accessible to businesses of all types and sizes,
enable benchmarking, build trust, and protect Americans and their data.
(iv) Support workforce initiatives to develop AI skills and ensure
American companies can hire and retain top AI talent.
AI is likely to contribute to important shifts in the future of work. While it can be easy to
learn to use AI tools (since they can often teach the user how to use them), and the
tools often benefit the least-skilled the most, the evolution of AI tools and deployment
may still require a lifelong approach to education that gives all students and workers
foundational AI skills.
This moment offers an opportunity to ensure that AI can be integrated as a core
component of U.S. education and professional development systems. The
Administration and agency stakeholders have an opportunity to ensure that technical
skilling and career support programs (including investments in K-12 STEM education
and retraining for workers) are broadly accessible to U.S. communities, supporting a
resilient labor force.
In addition to workforce training and development, the ability of U.S. companies to
access and retain top AI talent and expertise globally is essential and poses a known
challenge. Where practicable, U.S. agencies should use existing immigration authorities
to facilitate recruiting and retention of experts in occupations requiring AI-related skills,
such as AI development, robotics and automation, and quantum computing.
2. Accelerate and Modernize Government AI Adoption
To enable public sector organizations to fully benefit from the potential of cloud
computing and AI, the government needs effective public procurement rules that
foster innovation, ensure value for taxpayers, and promote a competitive and open
market. The U.S. government, including the defense and intelligence communities,
should pursue improved interoperability and data portability between cloud solutions;12
streamline outdated accreditation, authorization, and procurement practices to enable
quicker adoption of AI and cloud solutions; and accelerate digital transformation via
greater adoption of machine-readable documents and data. We also encourage
modernization of existing contracting processes to align with commercial procurement
practices.
The federal government can also take advantage of opportunities to modernize
procurement of emerging technology while reducing reliance on insecure legacy
vendors. We propose lowering barriers to entry and growth through measures such as:
(1) establishing reciprocity and harmonization for industry-approved certifications; (2)
mandating re-use of existing authorizations and related materials to prevent
duplication of effort; (3) facilitating investment in advanced threat detection; (4)
instituting automated continuous monitoring methodologies; and (5) prioritizing open
and market-based competition. Further, federal agencies should avoid implementing
unique compliance or procurement requirements just because a system includes AI
components. To the extent they are needed, any agency-specific guidelines should
focus on unique risks or concerns related to the deployment of the AI for the procured
purpose. U.S. decisionmakers might also consider policies to mandate interoperability
throughout the entire technical stack and combat anticompetitive licensing and
bundling practices. Doing so could also help ensure that government systems are not
encumbered by known concentration risks of legacy technologies—many of which
pose an unacceptable national security risk and cost more for the taxpayer.
12 The Office of Management and Budget’s (OMB’s) 2024 AI Procurement Guidance
outlined the importance of implementing multi-vendor, interoperable AI solutions. See
Off. of Mgmt. & Budget, Exec. Off. of the President, OMB Memorandum M-24-18,
Advancing the Responsible Acquisition of Artificial Intelligence in Government (2024).
Response to the National Science Foundation’s and
Office of Science & Technology Policy’s Request for Information
on the Development of an Artificial Intelligence (AI) Action Plan
90 Fed. Reg. 9088 (Feb. 6, 2025)
Docket No. NSF_FRDOC_0001
March 13, 2025
Executive Summary
The potential of artificial intelligence is nearly unlimited, and we’re already seeing how it
can revolutionize healthcare, accelerate scientific discovery, and transform our
economy for the better.1 But a nation’s ability to harness AI’s enormous benefits
requires the right policy frameworks.
Google welcomes the Trump Administration’s goal of developing a plan to “sustain and
enhance America’s global AI dominance.”2 While America currently leads the world in
AI—and is home to the most capable and widely adopted AI models and tools—our
lead is not assured. As Vice President Vance urged, we must “catch lightning in a bottle”
and unlock AI’s potential.3 To do that, we recommend focusing on three key areas to
secure America’s position as an AI powerhouse and support a golden era of
opportunity:
1. Invest in AI
Like any multi-use technology, AI can be misused by bad actors, but it also promises to
greatly improve our lives. For too long, AI policymaking has paid disproportionate
attention to the risks, often ignoring the costs that misguided regulation can have on
innovation, national competitiveness, and scientific leadership—a dynamic that is
beginning to shift under the new Administration. Sustaining this momentum will require
action in four areas:
A. Advance energy policies needed to power domestic data centers.
A potential shortfall in new energy supply is the core near-term constraint on
expanding AI infrastructure. Both training and inference computational needs for AI
are growing rapidly. Compute requirements for training have historically doubled every
six months, and inference compute needs are expected to increase by orders of
magnitude in the coming years. While we are seeing significant efficiency
improvements, widespread AI adoption may still result in large increases in electricity
requirements, with projections of AI datacenter power demand rising by nearly 40 GW
globally from 2024 to 2026.4 Current U.S. energy infrastructure and permitting
processes appear inadequate to meet these escalating needs.
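The compounding implied by these growth rates is easy to understate; a few lines of arithmetic make it concrete. The six-month doubling period comes from the paragraph above, but the horizons chosen are illustrative assumptions, not projections from this comment:

```python
# Illustrative arithmetic only: how a six-month doubling period in
# training compute compounds over time. The horizons below are
# assumptions for the example, not figures from this comment.
def growth_factor(years: float, doubling_months: float = 6.0) -> float:
    """Multiplier on compute after `years`, given a doubling period."""
    return 2 ** (years * 12 / doubling_months)

print(growth_factor(1))  # 4.0: four-fold growth per year
print(growth_factor(3))  # 64.0: sixty-four-fold over three years
```

Growth at this rate quickly absorbs even substantial efficiency gains, which is consistent with the text's point that efficiency improvements alone may not offset rising electricity demand.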
The U.S. government should adopt policies that ensure the availability of energy for
data centers and other growing business applications that are powering the growth of
the American economy. This includes transmission and permitting reform to ensure
adequate electricity for data centers coupled with federal and state tools for de-risking
investments in advanced energy-generation and grid-enhancing technologies. Other
key actions to meet new electricity load growth include improvements in electricity
system planning, incentives for utilities to use existing infrastructure more efficiently,
greater integration of regional electricity grids, and workforce development in building
trades underpinning energy infrastructure.
B. Adopt balanced export control policies.
Export controls can play an important role in supporting national security, but only if
they are carefully crafted to support legitimate market access for U.S. businesses while
targeting the most pertinent risks. AI export rules imposed under the previous
Administration (including the recent Interim Final Rule on AI Diffusion)5 may undermine
economic competitiveness goals the current Administration has set by imposing
disproportionate burdens on U.S. cloud service providers. While we support the
national security goals at stake, we are concerned that the impacts may be
counterproductive and plan to submit a more detailed analysis of the AI Diffusion rule
by the May 15 comment deadline.
4 Dylan Patel et al., AI Datacenter Energy Dilemma – Race for AI Datacenter Space,
Semianalysis (Mar. 13, 2024).
The government will need to craft export controls carefully to avoid creating undue
competitive disadvantages for U.S. companies. The U.S. government should adequately
resource and modernize the Bureau of Industry and Security (BIS), including through
BIS’s own adoption of cutting-edge AI tools for supply chain monitoring and
counter-smuggling efforts, alongside efforts to streamline export licensing processes
and consideration of wider ecosystem issues beyond limits on hardware exports.
Effective enforcement requires robust international engagement to maximize global
compliance. And export controls are most impactful when coupled with a proactive
strategy of domestic energy and infrastructure development to maintain a durable
competitive advantage.
C. Accelerate AI R&D, streamline access to computational resources for
researchers, and incentivize public-private partnerships with
national labs.
Long-term, sustained investments in foundational domestic R&D and AI-driven
scientific discovery have given the U.S. a crucial advantage in the race for global AI
leadership. Policymakers should significantly bolster these efforts—with a focus on
speeding funding allocations to early-market R&D and ensuring essential compute,
high-quality datasets, and advanced AI models are widely available to scientists and
institutions.6 Lowering barriers to entry will ensure that the American research
community remains keenly focused on innovation rather than struggling with resource
acquisition. The government should also continue investments to identify and prioritize
the most important unsolved challenges in the physical and life sciences (e.g., via
federal prize challenges and competitions), focusing on how AI-driven approaches can
help fuel scientific breakthroughs in areas of critical national interest.
5 See Framework for Artificial Intelligence Diffusion, 90 Fed. Reg. 4544 (Jan. 15, 2025).
6 Google, A Policy Framework for Building the Future of Science with AI (Feb. 2025).
Policymakers should move quickly to further incentivize partnerships with national labs
to advance research in science, cybersecurity, and chemical, biological, radiological,
and nuclear (CBRN) risks. The U.S. government should make it easier for national
security agencies and their partners to use commercial, unclassified storage and
compute capabilities, and should take steps to release government datasets, which can
be helpful for commercial training.
D. Craft a pro-innovation federal framework for AI.
(i) Support federal legislation that prevents a patchwork of laws
at the state level, especially for frontier AI development.
The Administration should ensure that the U.S. avoids a fragmented regulatory
environment that would slow the development of AI, including by supporting federal
preemption of state-level laws that affect frontier AI models. Such action is properly a
federal prerogative and would ensure a unified national framework for frontier AI
models focused on protecting national security while fostering an environment where
American AI innovation can thrive. Similarly, the Administration should support a
national approach to privacy, as state-level fragmentation is creating compliance
uncertainties for companies and can slow innovation in AI and other sectors.
(ii) Ensure industry has access to openly available data that
enable fair learning.
Three areas of law can impede appropriate access to data necessary for training
leading models: copyright, privacy, and patents.
Copyright. Balanced copyright rules, such as fair use and text-and-data mining
exceptions, have been critical to enabling AI systems to learn from prior knowledge
and publicly available data, unlocking scientific and social advances. These exceptions
allow the use of copyrighted, publicly available material for AI training without
significantly impacting rightsholders, and they avoid the often unpredictable,
imbalanced, and lengthy negotiations with data holders that can stall model
development or scientific experimentation. Balanced copyright laws that ensure access to publicly available
scientific papers, for example, are essential for accelerating AI in science, particularly
for applications that sift through scientific literature for insights or new hypotheses.
Privacy. Balanced privacy laws that recognize exemptions for publicly available
information will avoid inadvertent conflicts with AI or copyright standards, or other
impediments to the development of AI systems. A federal privacy regulatory
framework should define categories of publicly available data and anonymous data
that are treated differently than personally identifying data. Federal regulations can also
encourage the use of AI-powered privacy-enhancing technologies to help protect
Americans’ data from malicious actors.
Patents. The Administration should improve and maintain access to the U.S. Patent and
Trademark Office’s Inter Partes Review program to permit efficient review of AI patents
granted in error. The U.S. has seen tremendous growth in the patenting of AI in recent
years.7 Many of these patents are held by American companies like Google, but a
growing percentage are held by entities based outside of the U.S., including in China.8
In the last year, China’s overall U.S. patent grants grew by over 30%, more than any
other country.9 With the increasing number of patent applications filed at the Patent
and Trademark Office and the limited time available for reviewing those patent
applications, mistakes are inevitable. According to one study, the agency’s error rate
may be nearly 40% for software-related technologies.10 The rise of the first computers
and then the internet saw a flood of patent applications for traditional functions simply
performed “on a computer” or “via the internet.” To avoid a similar phenomenon around
functions performed “with AI,” businesses need to be able to request agency
assessments of a patent’s validity through the Inter Partes Review process (when the
high statutory bar is met). The agency should not reject meritorious requests based
merely on agency-developed discretionary doctrines (such as Fintiv), and it needs
continued staffing of its user-fee-funded Patent Trial and Appeal Board.11 Otherwise,
patents that were granted in error can be used by foreign entities to block and
bottleneck American AI innovation, taking time and resources away from R&D, and
subjecting highly sensitive technical information to discovery.
7 Ayana Marshall, AI Titans: Who’s Dominating the Patent Universe, Harrity (Mar. 11,
2024).
8 Jack Caporal, The Companies With the Most Generative AI Patents - and Why
Investors Should Care, Motley Fool (updated Mar. 9, 2025).
9 IFI Claims, 2024 Trends and Insights (last visited Mar. 12, 2025).
10 Shawn P. Miller, Where’s the Innovation: An Analysis of the Quantity and Qualities of
Anticipated and Obvious Patents, 18 Va. J.L. & Tech. 1, 23 (2013).
11 See Apple Inc. v. Fintiv, Inc., IPR2020-00019 (Mar. 20, 2020).
(iii) Emphasize focused, sector-specific, and risk-based AI
governance and standards.
Any regulation of AI applications should be proportional to relevant risks. Determining
when, or if, to regulate requires context and a recognition of the unique challenges and
opportunities in the specific domains where AI is used. Autocorrect features don’t pose
the same risks (or benefits) as healthcare applications deployed in an emergency
room. To account for AI’s context-dependent impacts, government regulation should
be focused on specific applications, building upon existing sectoral rules and
intervening directly only where demonstrably necessary.
Consensus technical standards and protocols can also play a critical role. As a baseline,
regulations should align with recognized standards and support the development of
standards and recommended practices; in many instances, establishing standards may
be better than defining specific terms or thresholds in law or policy because they
better keep pace with the technical state of the art. For example, standards and
protocols can help ensure that privacy-enhancing technologies are implemented
responsibly and in ways that make them accessible to businesses of all types and sizes,
enable benchmarking, build trust, and protect Americans and their data.
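To make "privacy-enhancing technologies" concrete, here is a minimal sketch of one canonical example: a differentially private count using the Laplace mechanism. The choice of technique, the epsilon value, and the query are all illustrative assumptions, not anything prescribed in this comment:

```python
import random

# Purely illustrative privacy-enhancing technique: a differentially
# private count via the Laplace mechanism. Epsilon and the query are
# assumptions for this sketch.
def dp_count(records, predicate, epsilon=1.0):
    true_count = sum(1 for r in records if predicate(r))
    # Counting queries have sensitivity 1: adding or removing one
    # record changes the true count by at most 1, so Laplace noise
    # with scale 1/epsilon yields epsilon-differential privacy.
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [34, 29, 41, 58, 23, 47]
print(dp_count(ages, lambda a: a >= 40))  # true count is 3, plus bounded noise
```

Standardizing parameters like epsilon and noise calibration is exactly the kind of benchmarking and trust-building role the paragraph above envisions for technical standards.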
(iv) Support workforce initiatives to develop AI skills and ensure
American companies can hire and retain top AI talent.
AI is likely to contribute to important shifts in the future of work. While AI tools can be
easy to learn (they often teach users how to use them) and frequently benefit the
least-skilled workers the most, the continued evolution and deployment of these tools
may still require a lifelong approach to education that gives all students and workers
foundational AI skills.
This moment offers an opportunity to make AI a core component of U.S. education and
professional development systems. The Administration and agency stakeholders can
ensure that technical skilling and career support programs (including investments in
K-12 STEM education and retraining for workers) are broadly accessible to U.S.
communities, supporting a resilient labor force.
In addition to workforce training and development, the ability of U.S. companies to
attract and retain top AI talent and expertise globally is essential, and remains a known
challenge. Where practicable, U.S. agencies should use existing immigration authorities
to facilitate recruiting and retention of experts in occupations requiring AI-related skills,
such as AI development, robotics and automation, and quantum computing.
2. Accelerate and Modernize Government AI Adoption
To enable public sector organizations to fully benefit from the potential of cloud
computing and AI, the government needs effective public procurement rules that
foster innovation, ensure value for taxpayers, and promote a competitive and open
market. The U.S. government, including the defense and intelligence communities,
should pursue improved interoperability and data portability between cloud solutions;12
streamline outdated accreditation, authorization, and procurement practices to enable
quicker adoption of AI and cloud solutions; and accelerate digital transformation via
greater adoption of machine-readable documents and data. We also encourage
modernization of existing contracting processes to align with commercial procurement
practices.
The federal government can also take advantage of opportunities to modernize
procurement of emerging technology while reducing reliance on insecure legacy
vendors. We propose lowering barriers to entry and growth through measures such as:
(1) establishing reciprocity and harmonization for industry-approved certifications; (2)
mandating re-use of existing authorizations and related materials to prevent
duplication of effort; (3) facilitating investment in advanced threat detection; (4)
instituting automated continuous monitoring methodologies; and (5) prioritizing open
and market-based competition. Further, federal agencies should avoid implementing
unique compliance or procurement requirements just because a system includes AI
components. To the extent they are needed, any agency-specific guidelines should
focus on unique risks or concerns related to the deployment of the AI for the procured
purpose. U.S. decisionmakers might also consider policies to mandate interoperability
throughout the entire technical stack and combat anticompetitive licensing and
bundling practices. Doing so could also help ensure that government systems are not
encumbered by the known concentration risks of legacy technologies, many of which
pose unacceptable national security risks and cost taxpayers more.
12 The Office of Management and Budget’s (OMB’s) 2024 AI Procurement Guidance
outlined the importance of implementing multi-vendor, interoperable AI solutions. See
Off. of Mgmt. & Budget, Exec. Off. of the President, OMB Memorandum M-24-18,
Advancing the Responsible Acquisition of Artificial Intelligence in Government (2024).
Separately, policymakers should mandate open, non-proprietary data standards and
APIs across all government cloud deployments, ensuring seamless interoperability and
data portability to break down silos and enable AI-driven insights. As a part of this
process, the current accreditation and procurement labyrinth should be replaced with
a more agile, risk-based authorization process, drawing inspiration from commercial
sector best practices to increase speed and accelerate the adoption of frontier AI and
cloud solutions.
The Office of Science and Technology Policy (OSTP) and OMB can also issue guidance
detailing more streamlined, automated, and responsive authorization processes for
cloud services (including AI) under the Federal Risk and Authorization Management
Program (FedRAMP); policies to advance greater reciprocity between agencies and
their components; and a renewed approach to faster authorizations for AI services,
which can have a transformative impact on federal agencies.
Policymakers should also consider measures to safeguard critical infrastructure and
cybersecurity, including by partnering with the private sector. For example, pilots that
build on the Defense Advanced Research Projects Agency’s AI Cyber Challenge and
joint R&D activities can help develop breakthroughs in areas such as data center
security, chip security, confidential computing, and more. Expanded threat sharing with
industry will similarly help identify and disrupt both security threats to AI and threat
actor use of AI.
We recommend that the government continue its implementation of a multi-cloud and
multi-model approach to national security use cases, which matches the most
appropriate infrastructure and models to the agency, mission owner, and use case. We
also recommend preserving existing risk-management guidelines covering AI use
restrictions, minimum risk management practices for high-impact and federal
personnel-impacting AI uses, and cataloging and monitoring AI use in the national
security context.
3. Promote Pro-Innovation Approaches Internationally
To advance the widespread adoption of AI technologies both domestically and abroad,
it is crucial to establish consistent, coherent, and interoperable frameworks and norms
for AI development and deployment that reflect American values and interests.
Champion market-driven and widely adopted technical standards. Strong U.S.
government support for standards based on American values will help keep foreign
governments from imposing protectionist requirements that could stifle innovation,
such as requiring duplicative pre-deployment testing to gain market access.
We encourage the Department of Commerce, and the National Institute of Standards
and Technology (NIST) in particular, to continue their engagement on standards and
critical frontier security work. Aligning policy with existing, globally recognized
standards, such as ISO 42001, will help ensure consistency and predictability across
industry.13
At the same time, rapid advances in frontier AI capabilities, including progress toward
Artificial General Intelligence, highlight the need for the federal government to drive
new efforts to ensure American leadership and national security. For the most capable
frontier AI systems, the Administration should identify potential capabilities that could
raise national security risks and work with industry to develop and promote
standardized industry protocols, secure data-sharing standards, and safeguards.
It is particularly valuable for the U.S. government to develop and maintain an ability to
evaluate the capabilities of frontier models in areas where it has unique expertise, such
as national security, CBRN issues, and cybersecurity threats. The Department of
Commerce and NIST can lead on: (1) creating voluntary technical evaluations for major
AI risks; (2) developing guidelines for responsible scaling and security protocols; (3)
researching and developing safety benchmarks and mitigations (like tamper-proofing);
and (4) assisting in building a private-sector AI evaluation ecosystem.
Building on the robust domestic approach outlined above, the U.S. government should
work with aligned countries to develop the international standards needed for
advanced model capabilities and to drive global alignment around risk thresholds and
appropriate security protocols for frontier models. This includes promulgating an
international norm of “home government” testing—wherein providers of AI with
national security-critical capabilities are able to demonstrate collaboration with their
home government on narrowly targeted, scientifically rigorous assessments that
provide “test once, run everywhere” assurance. Reciprocity arrangements would
enable other nations to acknowledge and accept home governments’ evaluations,
providing AI developers with appropriate market access without the need for
additional government evaluations in those jurisdictions.
13 See ISO/IEC 42001 - Compliance | Google Cloud.
Articulate clear and differentiated obligations—where necessary—for the
respective actors in the AI ecosystem. To the extent a government imposes specific
legal obligations around high-risk AI systems, it should clearly delineate the roles and
responsibilities of AI developers, deployers, and end users. The actor with the most
control over a specific step in the AI lifecycle should bear responsibility (and any
associated liability) for that step. In many instances, the original developer of an AI
model has little to no visibility or control over how it is being used by a deployer and
may not interact with end users. Even in cases where a developer provides a model
directly to deployers, deployers will often be best placed to understand the risks of
downstream uses, implement effective risk management, and conduct post-market
monitoring and logging. Developers likewise should not bear responsibility for misuse
by customers or end users. Rather, developers should provide deployers with
information and documentation, such as how models were trained or what mechanisms
support human oversight, as needed to allow deployers to comply with regulatory
requirements.
Avoid overbroad disclosure requirements. Policymakers should consider urging the
use of model cards and technical reports—already an industry norm—in national and
international fora to ensure that deployers and end users receive relevant information.
The U.S. government should oppose mandated disclosures that require divulging trade
secrets, allow competitors to duplicate products, or compromise national security by
providing a roadmap to adversaries on how to circumvent protections or jailbreak
models. Overly broad disclosure requirements (as contemplated in the EU and other
jurisdictions) harm both security and innovation while providing little public benefit.
Notify users of AI-generated content in appropriate contexts. The U.S. government
should support the further development and broad uptake of evolving
multistakeholder standards and best practices around disclosure of synthetic
media—such as the use of C2PA protocols, Google’s industry-leading SynthID
watermarking, and other watermarking/provenance technologies, including best
practices around when to apply watermarks and when to notify users that they are
interacting with AI-generated content. At the same time, the government should
understand the limitations of such solutions—including the extent to which motivated
actors can strip out this information—and the need for cooperation among all players in
the AI ecosystem to make progress on this issue.
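The provenance approach described above can be sketched in miniature. This is a toy illustration, not the actual C2PA or SynthID mechanism: real C2PA manifests use certificate-based signatures embedded in the media file, and SynthID watermarks the content itself rather than attaching metadata. All names and the HMAC "signature" below are stand-ins:

```python
import hashlib
import hmac

# Toy stand-in for a C2PA-style provenance manifest: a signed record
# binding a claim ("ai-generated") to a hash of the content bytes.
# The shared HMAC key is purely illustrative; real systems use
# certificate-chain signatures.
SIGNING_KEY = b"demo-key"

def attach_manifest(content: bytes, claim: str) -> dict:
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{claim}:{digest}".encode()
    return {
        "claim": claim,
        "content_sha256": digest,
        "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_manifest(content: bytes, manifest) -> str:
    # A stripped manifest is indistinguishable from content that never
    # had one -- the limitation the text describes.
    if manifest is None:
        return "no provenance (absent or stripped)"
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{manifest['claim']}:{digest}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if digest != manifest["content_sha256"] or not hmac.compare_digest(
        expected, manifest["signature"]
    ):
        return "invalid provenance"
    return f"verified: {manifest['claim']}"

image = b"...synthetic image bytes..."
manifest = attach_manifest(image, "ai-generated")
print(verify_manifest(image, manifest))  # verified: ai-generated
print(verify_manifest(image, None))      # no provenance (absent or stripped)
```

The second check illustrates the limitation the text flags: once metadata is stripped, the content is indistinguishable from media that never carried provenance, which is why in-content watermarking complements metadata-based approaches.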
Combat restrictive foreign AI barriers that hinder American businesses and
innovation. Foreign regulatory regimes should foster the development of AI
technology rather than stifle it. Governments should generally not impose regulatory
checkpoints on the development of underlying AI models or AI innovation. Some
governments are seeking to impose undue bureaucratic burdens on AI development
and deployment, often in ways that would primarily affect U.S. companies. The U.S.
government has a significant role to play in strengthening AI governance efforts and
best practices by supporting innovation-friendly approaches and engaging foreign
governments to deter efforts to impose measures that restrict AI development and
deployment by U.S. and local companies. For example, OSTP and other federal
stakeholders can consider bolstering and further resourcing interagency initiatives
(including those undertaken by the State and Commerce Departments) that target and
strengthen commercial diplomacy and promote exports of U.S. digital goods and
services, including American AI. And the U.S. should advocate at the Organisation for
Economic Co-operation and Development (OECD) and other fora for international AI
frameworks that reflect U.S. values and approaches.
******
As a longstanding leader in AI research and development, Google is committed to
responsibly realizing the immense benefits of AI and supporting America’s role as the
world champion in AI innovation. Our mission is to organize the world’s information and
make it universally accessible and useful, and our work on AI lies at the heart of that
mission. We welcome the Administration’s focus on this issue, and we agree that with
the right policy frameworks, America can look forward to an AI-powered golden era of
opportunity.14
14 This document is approved for public dissemination. The document contains no
business-proprietary or confidential information. Document contents may be reused
by the government in developing the AI Action Plan and associated documents without
attribution.