
I’ve recently started tracking the IETF’s work on AI content preferences, and I’m honestly surprised it hasn’t drawn more attention. Amid the noise of regulation proposals and courtroom battles, this is one of the few efforts aimed at a practical, everyday problem, tackled at the infrastructure level of the web.
It isn’t flashy. It won’t fix everything. Still, it’s worth understanding what’s actually on the table here, and just as important, what isn’t.
What the IETF is, and why it matters
A bit of context. The IETF, or Internet Engineering Task Force, is one of the organizations responsible for the technical standards that keep the internet running. Protocols like HTTP and SMTP, along with much of the web’s basic plumbing, come out of its working groups.
The IETF doesn’t pass laws or enforce rules. What it does have is influence. When it publishes a standard that gains traction, companies tend to follow it because interoperability depends on it.
That’s the backdrop for a working group called AIPREF, short for AI Preferences. You can read its official charter here: AI Preferences Working Group (IETF datatracker).
The problem AIPREF is trying to solve
If you publish content online and want to set boundaries on how automated systems use it, your options today are pretty crude.
There’s robots.txt, which was built to guide search engine crawlers. Some AI systems appear to respect it. Others interpret it loosely, or treat it as a polite suggestion. Even when systems try to comply, there’s no shared understanding of what a directive like “don’t crawl” means when the crawler isn’t a search engine.
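To see just how crude, here’s roughly the state of the art today. GPTBot is OpenAI’s published crawler token, but the pattern is the same for any bot that documents one:

```
# robots.txt -- all or nothing
User-agent: GPTBot
Disallow: /
```

That’s the whole expressive range: block the fetch or allow it. There’s no way to let a page be fetched for one purpose but not another.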
The deeper issue is the lack of a common language. A publisher might want to say something precise, such as “index this page, but don’t use it to train models.” Right now, there’s no standard, machine-readable way to express that intent.
That gap is exactly what AIPREF is meant to address.
What the working group is actually building
The effort is split into two deliberately separate pieces.
A shared vocabulary
First, there’s a standardized vocabulary for content usage preferences. This is more concrete than it sounds.
Instead of fuzzy language, the draft defines specific terms and values, along with rules for how automated systems should interpret them. Preferences can clearly state whether something is allowed, disallowed, or left unspecified. Ambiguity isn’t ignored. It’s accounted for.
The goal isn’t to force anyone’s hand. It’s to ensure that when a site expresses a preference, systems that choose to honor it all read it the same way.
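To make the tri-state idea concrete, here’s a minimal sketch in Python. The category names and the default-allow rule are my own placeholders, not the draft’s actual vocabulary; the point is only that “nothing stated” is a distinct value, not the same thing as “allowed.”

```python
from enum import Enum

class Preference(Enum):
    """Tri-state signal: explicitly allowed, explicitly disallowed,
    or simply not stated by the publisher."""
    ALLOWED = "allowed"
    DISALLOWED = "disallowed"
    UNSPECIFIED = "unspecified"

# Hypothetical categories for illustration; the real vocabulary may differ.
page_preferences = {
    "search-indexing": Preference.ALLOWED,   # fine to index for search
    "ai-training": Preference.DISALLOWED,    # please don't train on this
}

def may_use(category: str) -> bool:
    """Decision rule for a consumer that chooses to comply.
    Whether UNSPECIFIED means permission (default-allow, used here)
    or refusal is exactly the kind of detail still under debate."""
    pref = page_preferences.get(category, Preference.UNSPECIFIED)
    return pref is not Preference.DISALLOWED

print(may_use("ai-training"))    # False: explicitly disallowed
print(may_use("ai-inference"))   # True under default-allow: unspecified
```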
Ways to attach preferences to content
The second piece focuses on how those preferences get communicated.
The drafts describe a few mechanisms, including:
- HTTP headers that carry preference signals alongside normal web requests and responses
- Robots-style directives, similar in spirit to robots.txt, but designed to express AI-specific usage preferences rather than crawl paths
In both cases, the idea is simple. When an automated system fetches content, the preferences are available in a predictable, machine-readable place, assuming the system is built to look for them.
That assumption matters. None of this works unless systems decide to implement it.
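Here’s a toy version of what “looking for them” might mean for the HTTP-header mechanism. I’m assuming a response header named Content-Usage carrying simple key/value pairs; treat the field name, the keys, and the value syntax as placeholders, since the drafts are still in flux.

```python
import urllib.request

def fetch_with_preferences(url: str):
    """Fetch a page and pull out any usage-preference signal.
    'Content-Usage' is an assumed header name for illustration."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        signal = resp.headers.get("Content-Usage")  # e.g. "train-ai=n, search=y"
    return body, signal

body, signal = fetch_with_preferences("https://example.com/")
if signal and "train-ai=n" in signal:
    # A compliant pipeline would exclude this document from training here.
    print("Publisher opted this page out of AI training.")
```

Nothing in the protocol forces a fetcher to run that check, which is exactly the assumption the paragraph above flags.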
How far along this is
Everything here currently lives as Internet-Drafts. These are public, evolving documents, not finished standards. The working group is still debating key details, including how to handle conflicting signals, how preferences interact across layers, and how this all fits into existing web infrastructure.
If the group reaches consensus, the drafts could eventually become RFCs on the standards track. That process often takes years, and there are no guarantees. Even after publication, adoption would depend entirely on whether major platforms and AI developers see enough value to support it.
Why this still matters
By itself, AIPREF doesn’t create new legal rights. It doesn’t block scraping. It doesn’t enforce copyright. It doesn’t compel compliance.
What it does provide is a shared reference point.
Right now, discussions about AI training and consent are muddy partly because there’s no agreed-upon way to express preferences at the protocol level. AIPREF offers a common language that creators, platforms, and tool builders can point to when talking about expectations.
That doesn’t resolve policy disputes or legal fights. But it removes one convenient excuse, namely that there’s no clear signal to respect.
What it explicitly does not do
It’s important to be clear about the limits:
- It doesn’t make preferences legally binding
- It doesn’t guarantee that AI systems will comply
- It doesn’t replace copyright law or licensing agreements
This is a signaling system, not an enforcement tool. Any real consequences for ignoring those signals would still come from contracts, regulation, or courts.
Where things stand now
As of late 2025, the AIPREF working group has two core drafts in progress. One defines the vocabulary for expressing preferences. The other describes how those preferences can be attached to web content.
Both are still under active discussion. Details may change. Adoption could be uneven, and it’s entirely possible that some large players will opt out.