The AI Act and Executive Orders

On January 20th, 2025, President Trump revoked Executive Order 14110, also known as the Executive Order on Artificial Intelligence. This order was the most comprehensive piece of governance on AI in the USA.

The order directed various government agencies and departments to, among other things, implement guidelines for the purchase and use of AI in the American government, uphold labor laws, and create Chief AI Officer positions within those agencies and departments.

So why am I writing this? Well, firstly, it’s because it’s interesting to see how the EU and the United States diverge on this critical issue. Second, because I work with and on AI as a developer, and constantly navigate complicated ethical questions on exactly what we’re using it for.

AI has great technological potency and potential. Potential to revolutionize all industries, and change the way we interact with our world in very fundamental ways.

There is danger as well. The most costly accidents happen when something we take for granted stops working. It’s the everyday things — the brakes on your car, the food you put in your mouth, the airplane you took to a meeting — that turn into life-ending disasters when they fail.

We know this, or at least our legislators do. That’s why we have regulators and laws governing how each of those things should be tested and monitored before we — the public — come into contact with them. The reason we can take these things for granted is because laws and regulators made them safe and continue to do so.


Executive Order 14110 required developers of AI to share the results of their safety tests and their testing procedures with the government. It specified that the order covered AI posing risks to U.S. national security, the economy, public health, or safety. This put the requirements for such AI on the same level as products produced for the Department of Defense, which are governed by the Defense Production Act.

It also required the development of watermarking for AI-generated content. This would specifically mean that if you provide a service for generative video, image, audio, or text, you must make its output identifiable as AI-generated.
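To make the idea concrete, here is a minimal sketch of one way a provider could label its output as AI-generated: attach a provenance record and a tamper-evident signature. This is purely illustrative — the key, model name, and record format are invented for this example, and real provenance schemes (such as C2PA content credentials) are far more elaborate.

```python
import hashlib
import hmac
import json

# Hypothetical provider-side secret; a real service would manage
# keys properly rather than hard-coding them.
SECRET_KEY = b"example-provider-key"

def label_as_ai_generated(content: str, model: str) -> dict:
    """Wrap generated content in a provenance record and sign it,
    marking it as AI-generated in a tamper-evident way."""
    record = {"content": content, "generator": model, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(record: dict) -> bool:
    """Check that the provenance record was signed by the provider
    and has not been altered since."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))

labeled = label_as_ai_generated("A generated paragraph...", "example-model-v1")
print(labeled["ai_generated"], verify_label(labeled))
```

Editing the content after signing makes verification fail, which is what lets downstream consumers — platforms, courts, the public — check whether a piece of content carries a valid AI-generated label.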

The applications of this for protecting against AI-generated pornography, false imagery, automated call center bots, chatbots, and much more cannot be overstated. In a time when we’ve already seen the first convictions for possession of AI-generated child exploitation material, what could be more important than making it easier to find out how it was made?

The order also aimed to stem the tide of intellectual property theft. It’s no secret that lawsuits involving copyright law and generated images are on the rise, and rightfully so. As generative AI becomes better, hordes of artists, photographers, writers, and content creators are being forced out — not by fair competition, but by AI built on their own creations, with no compensation and no ability to say no.

Revoking Executive Order 14110 is — in my opinion — a huge mistake when there is nothing to replace it. The White House calls it a legislative reset. Republicans in Congress call it protecting free speech. I don’t see how it protects anything. If there is nothing to replace it, it’s simply a step backwards.

When the order was issued by President Biden, it was hailed by Democrats in Congress as a “comprehensive strategy for responsible innovation”, while acknowledging that the initiative to make lasting legislation was now in their hands. Polling showed that 69% of all voters (from both parties) supported the executive order, and yet it has now been revoked.


In the EU, the majority of the AI Act comes into effect in 2025. The Act addresses the safety concerns of AI by dividing AI systems into risk categories and applying different rulesets based on those categories. Many AI systems pose little, if any, risk, but all will need to be categorized under this system. The most groundbreaking part of the Act — in my opinion — is the Unacceptable Risk category and how it addresses the issue of AI watching AI.

In the Unacceptable Risk category, we find things like:

  • Cognitive behavioral manipulation of people
  • Social scoring
  • Biometric identification or categorization of people

It should be clear to all why these things pose an unacceptable risk to our societies. If it’s not, ask yourself what a free society looks like, and then try fitting these things into that society.

In the High Risk category, we find things like AI used in toys, aviation, cars, medical devices, and lifts. These products fall under EU product safety legislation and are already highly regulated for public safety, so AI in these areas naturally poses a high risk to our society.

We also find AI systems used in specific sensitive domains, such as law enforcement, education, and asylum applications, along with assistance tools in areas like legal or medical understanding.

Most importantly, all High Risk systems must be assessed before being put on the market, and throughout their availability. People will also have the right to file complaints against these systems with the relevant authorities.
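The tiering described above can be sketched as a small classifier. To be clear, this is a toy illustration of the scheme, not the Act itself — the keyword lists below are lifted from the examples in this post, while the Act defines the categories in far more legal detail.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "must be assessed before and throughout market availability"
    MINIMAL = "few or no obligations"

# Illustrative keyword lists drawn from the categories discussed above.
UNACCEPTABLE_USES = {
    "behavioral manipulation", "social scoring", "biometric categorization",
}
HIGH_RISK_AREAS = {
    "toys", "aviation", "cars", "medical devices", "lifts",
    "law enforcement", "education", "asylum",
}

def classify(use_case: str) -> RiskTier:
    """Assign a use case to a risk tier by simple keyword matching."""
    text = use_case.lower()
    if any(use in text for use in UNACCEPTABLE_USES):
        return RiskTier.UNACCEPTABLE
    if any(area in text for area in HIGH_RISK_AREAS):
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("social scoring of citizens"))
print(classify("AI triage in medical devices"))
```

The point of the structure — a hard-prohibited tier above a heavily audited tier above everything else — is that obligations scale with the harm a system can do, rather than one ruleset applying to all AI equally.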

In — what I would call — a groundbreaking move, the Act actually specifies that people, not automation, should monitor all AI. Think back to that sentence earlier: “It’s the everyday things …” This addresses exactly that. It specifically reminds us not to take AI for granted.

The AI Act also contains a lot of the same things Executive Order 14110 did, with the important caveat that this is legislation, and won’t be easily overturned by the next President of the European Commission. This includes things like watermarking and identification of AI-generated content, so important to law enforcement and the public interest.


So what happens from here?

This isn’t the biggest divide in the Western world, and I can honestly understand why people are more concerned about the U.S. withdrawal from the Paris Agreement on Climate Change than about 14110. That withdrawal shouldn’t have come as a surprise, but I understand the concern. It concerns me too.

Revoking 14110 shouldn’t come as a surprise either, but it should be cause for concern. AI is entering every part of our society, and it will only continue to do so. What was a fun gimmick a few years ago is now becoming frighteningly realistic, to the point where most people can no longer tell whether an image or video is AI-generated.

“Believe nothing you hear, and only one half that you see” — Edgar Allan Poe

The “Age of AI” might as well be called the “Age of Disbelief”. When you can’t trust what you see, what you hear, or what you read, then what is left but disbelief? This is why AI must be regulated, made transparent, and its use made clear to all. Otherwise, the Age of Disbelief is exactly where we’re headed.

In 2025, (most of) the AI Act comes into effect. With the current cast of characters in the White House, I expect it will come under pressure. My hope is that our leaders and legislators will face that pressure head-on, and make sure we — the public — know who is applying it.
