
Every few decades, there’s a new frontier that we humans are chasing as hard as we can. In the 1960s, it was space. In the 80s, it was computers. And today, it’s an evolution of that: artificial intelligence. And we’re spending trillions of dollars chasing it.
And it’s gotten so big already that the government is stepping in. We haven’t seen a moment like this in decades. The last real parallel was the space race, a brand new technology with global stakes.
So how do you regulate AI? How do you decide who gets to control it?
We often don’t notice history while we’re living through it. But right now, inside federal offices and late-night Google Docs, a handful of people are drafting the rules for how artificial intelligence will shape the next century.
So we talked to experts who are at the leading edge of policy-making for AI:
- [Suresh Venkatasubramanian](https://x.com/geomblog) is a Brown University professor and director of the [Center for Technological Responsibility, Reimagination, and Redesign (CNTR)](https://cntr.brown.edu/), and a former White House OSTP official who co-authored the _Blueprint for an AI Bill of Rights._
- [Dean Ball](https://x.com/deanwball) is a senior fellow at the [Foundation for American Innovation](https://www.thefai.org/) and formerly served in the White House OSTP as the senior policy advisor who led the drafting of _America’s AI Action Plan_.
- [Mackenzie Arnold](https://x.com/MackenZ_arnold) is the Director of U.S. Policy at [LawAI](https://law-ai.org/), focused on making sure advances in AI serve the public good.
We wanted to find out how the United States government has thought about AI policy in the past, and how it continues to think about it today. Specifically, we investigated two key documents: the [_Blueprint for an AI Bill of Rights_](https://bidenwhitehouse.archives.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf), published in October 2022, and, most recently, [_America’s AI Action Plan_](https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf), published in July 2025.
> _But first! A word from our sponsor:_
>
> [SBS Comms](https://sbscomms.com/) is the PR team built for frontier technology — the people who dive into messy, high-stakes ideas and tell the story with clarity and courage.
>
> SBS Comms works with teams pushing into the unknown, transforming complex breakthroughs into stories that resonate with the world. If you’re chasing a frontier as big as AI, they’re the ones you want in your corner.

### What Do the Documents Actually Say?
These documents are defining the way the American government is thinking about AI.
Most of us don’t even know what they say.
**First, let’s talk about the _Blueprint for an AI Bill of Rights_.**
I spoke to Suresh, a Brown University computer science professor who helped shape the White House’s _Blueprint for an AI Bill of Rights_, and asked him how the document came to be.
> There was a team of folks led by Alondra Nelson at OSTP, the Office of Science and Technology Policy, at the time. We’d been thinking about how to set the groundwork for AI policy.
>
> Around the time the Biden administration came in, there began to be a lot of interest from the administration as to how to structure some rules of the road for what people were beginning to see as an increasing deployment of machine learning in ways that were affecting our lives, whether it’s, you know, jobs or housing or rentals or credit. People had already been talking about this for many years. And so the government said, look, we need to set some standards, some rules, some guidelines for how we should be using this technology, so that people’s concerns are kept at the forefront.
>
> And so, you know, we quickly realized that we needed to articulate principles, rights that we should have, kind of hearkening back to the original Bill of Rights, which was a way of protecting the U.S. public from the government, to make sure the government could not go beyond its powers and responsibilities to oppress citizens. We then sprinted for a year to put this together, holding consultations with all the relevant stakeholders and producing something that the whole of the U.S. government really participated in. If you’ve ever had to edit a Google Doc, imagine editing a document with 2,000 comments from every part of the U.S. government. That’s what it was like.
>
> _Suresh Venkatasubramanian_
So what IS the Blueprint for an AI Bill of Rights?
> So it’s a document that a bunch of us put together in the early part of the Biden administration to acknowledge and name rights that we as people should have to protect us as we move into a world that increasingly is managed by technology. And the first thing is the most basic. Systems we put out in the world that affect people’s lives should be safe and they should be effective.
>
> They should do what they claim to do. The next one is that they should not discriminate. They should not incorporate biases, and we should understand how they’re coming to conclusions. And then the final one, the fifth one, is what I call “dial zero for operator”: there should always be a way to reach a person, so you’re not reliant solely on technology to get what you want. One of the cool things about the blueprint itself is that the principles themselves, the five of them, take about a couple of pages, but the document itself is 60 pages long.
>
> So what’s in the other 55 pages or so? A lot of it is laying out in great detail why we needed those principles in the first place, with, you know, examples, citations to stories and articles in the news, and also a detailed list of how you would go about achieving them.
>
> _Suresh Venkatasubramanian_
Okay, so that’s the _Blueprint for an AI Bill of Rights_, published in October 2022. But AI has come a long way since then, not to mention we’re under a completely different administration.
So how is the government making AI policy today? **For that, I talked to one of the key authors of _America’s AI Action Plan_, released in July 2025.**
I spoke with Dean Ball, a senior fellow at the Foundation for American Innovation, who served as senior policy advisor for AI at the Office of Science and Technology Policy, where he led the creation of this document, the national roadmap for U.S. AI strategy.
> You’re immediately thrust into things where you’re negotiating with the UAE. But then for a really long time, the action plan itself was like a Word doc on my computer, formatted in Calibri, the default Microsoft Word font, size 11. And so it became quite normal.
>
> It was open on my computer all day long. I remember the moment there was an all-day event organized by All In and Hill and Valley, which is where the president announced the plan and the executive orders. That was truly one of the most incredible moments of my life.
>
> You’re sitting there, and it’s like, wow, this is really America’s AI strategy. In many, many important ways, I wrote it.
>
> _Dean Ball_
He explained the key pillars of _America’s AI Action Plan_:
> The first one is innovation. And that is really the core of getting rid of red tape on the adoption side. There are a lot of laws that regulate AI development in America right now.
>
> So we don’t want to over-regulate the development of AI. And that’s part of what we talk about in that section. But there’s also so many ways in which the current laws that we have were designed for a different world.
>
> Then the next pillar is infrastructure. And that’s, like, pretty familiar to most people at this point who have been paying attention to AI at all. This is about the huge industrial build-out that we’re going to have to do to power AI, and also making sure, importantly, that we have the workforce, the skilled workforce that can build all that stuff.
>
> _Dean Ball_
To get a broader sense of how the U.S. is approaching AI policy, I talked to Mackenzie Arnold.
> I direct U.S. policy work at the Institute for Law and AI. We’re a nonpartisan think tank that does research and analysis related to AI policy. We do work at both ends of the policy development funnel.
>
> So at one end: how do you actually operationalize the thing that you want to do in policy? Then, at the totally other end of the spectrum, we’re doing some very future-looking work. If you wanted agents to be deployed across the government, how would you govern that within the U.S. government? People often talk about technological innovation, but you also see policy innovation.
>
> _Mackenzie Arnold_

### How Do We Stay Flexible as AI Evolves?
I asked Mackenzie how agencies strike the balance between flexibility for industry and accountability for safety.
> Right now, we just simply don’t have enough information to write really prescriptive rules. We don’t know what the best practices are. We don’t know what direction the technology is going to take. We don’t know which players are even going to be relevant. There’s a possibility of a few major companies guiding the technology, or it could be diffused among many different players.
>
> We just don’t know. Staying flexible means developing a robust evaluations ecosystem, with evaluations both at the companies and within the government. It means building out secure infrastructure so that third-party evaluations, serious evaluations, can be done without undermining companies’ trade secrets.
>
> You need to build out your ability to actually understand the technology and to make informed decisions. It’s going to be really important to make sure that our regulatory apparatus is flexible.
>
> By default, Congress can only move so fast. And it’s very hard for it to adjust its decisions once they’re made.
>
> _Mackenzie Arnold_
Regulating AI isn’t just about writing rules. **It’s about building the ability to understand and evaluate the technology as it changes.**
That means flexible oversight, giving agencies room to adapt without losing accountability. _But that flexibility matters most when it comes to the physical side of artificial intelligence._
The infrastructure. Because these data centers aren’t built in Washington, D.C. **They’re built in real places by real people.**
> To actually build things, you need individual people and individual companies to go out there and build something. You need someone to literally identify the land where you’re going to put your data center, and you need to de-risk it, and you need to hook it up to the relevant energy that you need. And that takes individual people doing specific things. It requires people going through specific permitting processes and working around specific zoning laws.
>
> And a lot of that is governed at the local or regional level, right? So on some level, the government’s hands are quite tied, and it’s quite difficult to really turn the dial up. What you can do is put money in and invest. You can help coordinate, as a sort of communal entity, to try to convince many of these local areas to make it easier for that investment to be made.
>
> You can coordinate with companies to try to incentivize people to enter the market. You can offer more favorable loans.
>
> _Mackenzie Arnold_
Laying the physical groundwork is one challenge.
**Once that’s in place, the next question is what we do with it.** _How we turn that infrastructure into real innovation and social benefit._
> We want to develop technology that is really useful, that increases our productivity, that creates a bunch of social benefits. We want a diversity of different companies developing different models with very different applications, and actually figuring out how to deploy them in the economy. The dilemma is uncertainty, for good and for ill: this could be a really transformative technology, so it’s about setting ourselves up for success.
>
> _Mackenzie Arnold_
So if innovation is the goal, the real challenge is how we set ourselves up for it.
How we make sure our institutions, our talent, and our rules can actually keep up with the pace of AI development.
> For all of us lawyers who dream of, you know, creative administrative mechanisms and other things to help agencies update, this might be the context where we actually see some of those things go into practice. There’s just such a demand for being able to adjust your rules over time, to adjust the scope of which systems and which companies are regulated, and to adjust what the best practices are. How do you make government work well? How do you get the right trade-off: the most societal benefit at the least cost to innovation?
>
> _Mackenzie Arnold_

### The Trade-off: Speed Versus Safety
Over the three years since the Blueprint was first introduced, the world of AI and AI policy has only ramped up.
GPT-4’s release in March 2023 kicked off the foundation model era. With Gemini 1 in December 2023 and Claude 3 in 2024, AI went from narrow tools to general-purpose multimodal systems capable of text, image, audio, and even video reasoning.
These models became the platform of the new AI economy, the base layer on which startups, apps, and governments are now building everything. As some analysts argue, AI isn’t just another tech competition. _It’s a strategic contest with implications for every part of our lives._
The gap between U.S. model performance and China’s has closed significantly. One visualization shows the difference dropping from over 100 points in 2024 to just 23 points in 2025. **The risk of falling behind is real.**
But if we move fast without safety, we’re setting ourselves up for disaster. So maybe it’s not speed versus safety. _Maybe it’s about finding a balance between innovating fast but building with a plan._
We’ve done this before. When the space race kicked off, we built NASA. When new frontiers in biology and energy opened, we created public research programs to guide them.
Major breakthroughs in America have often had a public institution helping steer the technology forward for public good. So why not AI?

### How Do We Get That Right with AI?
> I think open source and open scientific research is going to remain an important part of the future of AI. And it’s not just for science. That’s a really important part of it that I care a lot about, because I value discovery.
>
> _Dean Ball_

> Part of government’s important job is providing safety. Especially as we commit, or have committed, to a market-driven way of structuring the economy, government’s job is to help steer the economy in ways that make sure all of us benefit from these transformations. It should also make sure we all have access to that.
>
> _Suresh Venkatasubramanian_
Look, AI is obviously a big topic.
This is one of those big things where, yes, the regulation and the way policy gets passed may largely be decided for you by the government and your representatives. But despite that, I think it’s still something worth paying attention to.
I know the letters “A” and “I” are overplayed and overhyped and bubbly at the moment, but these are things that I think we do need to be talking about and exploring.
_Until next time, keep on building the future!_
_–Jason_