When Regulators Design Better Than We Do: What China's AI Anthropomorphism Law Teaches Product Designers
I've spent years arguing that AI design happens at the behavioral layer, not the interface layer. That the real design decisions aren't about pixels and spacing but about how systems reason, communicate uncertainty, and earn trust. I wrote an entire book about it.
So when China quietly published draft legislation that regulates AI anthropomorphism with more design specificity than most PRDs I've seen, I paid attention. Not because I think China has all the answers, but because their proposed law articulates something our industry keeps getting wrong: the gap between abstract design principles and concrete behavioral decisions is where harm actually happens.
The Law Most Designers Haven't Read
In late December, China's Cyberspace Administration published "Interim Measures for the Administration of Humanized Interactive Services Based on AI," a proposed regulation targeting AI systems that simulate human personality traits, cognitive patterns, and communication styles. Luiza Jarovsky's excellent analysis brought it to broader attention, and she's right that no AI law anywhere in the world regulates anthropomorphic AI with this level of contextual detail.
But what struck me wasn't the regulatory ambition. It was how precisely the provisions map to behavioral design decisions that product teams make, or fail to make, every single day.
Abstract Principles vs. Behavioral Architecture
The EU AI Act tells companies to be transparent about AI. China's proposed law tells companies to dynamically remind users they're interacting with AI when they show signs of dependency or addiction, through concrete mechanisms like pop-up windows fired at defined trigger points. The EU says "don't manipulate." China says don't use "algorithmic manipulation, information misleading, or setting emotional traps to induce users to make unreasonable decisions."
See the difference? One operates at the principle layer. The other operates at the behavioral layer.
This is the exact tension I encounter constantly in my work designing AI-native analytics platforms. We can write all the design principles we want: "be transparent," "build trust," "communicate uncertainty." But principles don't ship. Behavioral contracts do.
When I was working on root cause discovery experiences for our analytics platform, the breakthrough wasn't declaring "we will be transparent." It was defining the specific behavioral architecture: when the system should surface confidence scores, how it communicates data exclusions, what triggers escalation to human judgment, and how reasoning chains are preserved and presented. Those aren't principle decisions. They're design infrastructure decisions.
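To make that concrete, here is a rough sketch of what those decisions can look like once they're written down as configuration rather than principle. The field names and thresholds are hypothetical, for illustration only; they are not our platform's actual schema.

```typescript
// Hypothetical behavioral-architecture policy for an AI insight feature.
// Field names and thresholds are illustrative, not a real product schema.
interface InsightBehaviorPolicy {
  annotateConfidenceBelow: number;   // surface a confidence score on any finding under this level
  discloseExcludedSources: boolean;  // name the datasets that were filtered out of the analysis
  escalateBelowConfidence: number;   // below this, defer to an analyst instead of asserting a cause
  retainReasoningTrace: boolean;     // preserve the step-by-step chain for audit and presentation
}

const rootCauseDiscoveryPolicy: InsightBehaviorPolicy = {
  annotateConfidenceBelow: 0.9,
  discloseExcludedSources: true,
  escalateBelowConfidence: 0.6,
  retainReasoningTrace: true,
};
```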
China's law understands this distinction intuitively. And that should make us uncomfortable. Not because of who wrote it, but because it exposes how much of our own design thinking stays stuck at the abstraction layer.
The Hidden Logic Layer of Regulation
In previous writing, I introduced the concept of the hidden logic layer: the behavioral architecture between the AI model and the user interface where the real design decisions live. Every AI system has one, whether the team designed it intentionally or not. It governs how uncertainty is expressed, what data gets included, when the system defers to human judgment.
China's proposed law essentially regulates at this layer. Consider their provisions:
Article 7 prohibits AI systems from "providing false promises that seriously affect user behavior." That's not an interface requirement. That's a behavioral contract requirement. It means the hidden logic layer must include confidence calibration. The system can't express certainty it doesn't have, even if the user would prefer a definitive answer.
Article 9 requires providers to "possess safety capabilities such as mental health protection, emotional boundary guidance, and dependency risk warning." This is behavioral architecture. It means the system needs to monitor interaction patterns, detect dependency signals, and activate intervention protocols. These are design decisions that happen in the logic layer, not the presentation layer.
Article 17 mandates a 2-hour continuous use reminder. You can debate the specific threshold, but the underlying design insight is sound: intensive interaction with anthropomorphic AI systems is itself a risk signal that warrants system-initiated intervention. Most product teams never even define what "too much interaction" means for their AI features, let alone design behavioral responses to it.
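To make those three provisions concrete at the logic layer, here is a minimal sketch of the kind of behavioral code they imply. Everything in it is an illustrative assumption: the confidence thresholds, the hedged phrasings, and the dependency heuristic are invented for this example; the only number taken from the draft is the 2-hour reminder threshold.

```typescript
// Illustrative behavioral-layer logic for the three provisions above.
// Thresholds, phrasings, and signals are assumptions, not the law's text.

// Article 7-style confidence calibration: language may not outrun evidence.
function phraseAnswer(answer: string, confidence: number): string {
  if (confidence >= 0.85) return answer; // high confidence: state it plainly
  if (confidence >= 0.5) return `The most likely answer is: ${answer}. Worth verifying.`;
  // Low confidence: refuse the definitive framing the user might prefer.
  return `I can't answer that definitively. What the data suggests so far: ${answer}`;
}

// Article 9 / Article 17-style interaction monitoring: intensive use is a signal.
interface SessionState {
  sessionStartMs: number;
  messagesThisSession: number;
  continuousUseReminderShown: boolean;
}

const CONTINUOUS_USE_LIMIT_MS = 2 * 60 * 60 * 1000; // the draft's 2-hour threshold
const HEAVY_SESSION_MESSAGES = 200;                 // crude, invented dependency proxy

function pendingInterventions(state: SessionState, nowMs: number): string[] {
  const interventions: string[] = [];
  if (!state.continuousUseReminderShown &&
      nowMs - state.sessionStartMs >= CONTINUOUS_USE_LIMIT_MS) {
    interventions.push("show-continuous-use-reminder"); // system-initiated, not user-requested
  }
  if (state.messagesThisSession >= HEAVY_SESSION_MESSAGES) {
    interventions.push("re-disclose-ai-identity");       // dynamic "you are talking to an AI" reminder
    interventions.push("suggest-break-or-human-support");
  }
  return interventions;
}
```

None of this is complicated engineering. The design work is in deciding that these behaviors exist at all, where the thresholds sit, and who owns them.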
What This Means for AI Product Design
I'm not suggesting we adopt China's regulatory framework wholesale. The political context is obviously different, and some provisions around national security are clearly specific to their governance model. But the design thinking embedded in this law offers three lessons that apply regardless of jurisdiction:
1. Design for behavioral boundaries, not just capabilities.
Most AI product teams define what their system can do. Few define what it shouldn't do with equal specificity. China's law states that anthropomorphic AI services "should not use replacing social interaction, controlling users' psychology, or inducing addiction as design goals." That's not a vague aspiration. It's a behavioral boundary that can be tested and verified.
In my framework, this maps directly to behavioral contracts: explicit agreements designed into the system that establish boundaries for AI behavior. If you're building an AI agent (whether it's a customer service bot, an analytics copilot, or a companion app), you need contracts that define not just the happy path but the behavioral limits.
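As a sketch, assuming a hypothetical customer-service agent, a contract like that can be as plain as a data structure the team can review, version, and test against:

```typescript
// A behavioral contract with explicit limits, not just capabilities.
// The agent, fields, and rules here are hypothetical examples.
interface BehavioralContract {
  allowed: string[];      // what the agent may do
  prohibited: string[];   // what it must never do
  deferToHuman: (ctx: { confidence: number; topic: string }) => boolean;
}

const supportAgentContract: BehavioralContract = {
  allowed: ["answer order-status questions", "summarize the return policy"],
  prohibited: [
    "imply the user is talking to a human",
    "discourage the user from reaching human support",
    "promise outcomes the system cannot verify",
  ],
  deferToHuman: ({ confidence, topic }) =>
    confidence < 0.6 || topic === "billing-dispute",
};
```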
2. Vulnerability is contextual, and design must be too.
The proposed law doesn't treat all users identically. It establishes specific protections for minors (guardian consent, real-time safety alerts, usage time limits) and the elderly (emergency contacts, prohibition on simulating relatives). This isn't just regulatory caution. It's good design thinking. Different users have different vulnerability profiles, and the behavioral architecture should adapt accordingly.
This principle extends to enterprise products too. A junior analyst interacting with an AI copilot for the first time has different vulnerability patterns than a power user. An executive making a high-stakes decision based on AI-surfaced insights needs different confidence communication than someone browsing exploratory data. Designing for contextual vulnerability means building adaptive behavioral logic, not one-size-fits-all interaction patterns.
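One way to express that adaptivity, with invented profiles and settings purely for illustration:

```typescript
// Contextual vulnerability as adaptive behavioral logic: same insight,
// different treatment. Profiles and rules are illustrative assumptions.
type UserContext = "first-time-analyst" | "power-user" | "high-stakes-decision";

interface PresentationRules {
  alwaysShowConfidence: boolean;   // annotate every finding with a confidence score
  explainMethodology: boolean;     // surface how the conclusion was reached
  confirmBeforeActing: boolean;    // require explicit confirmation before downstream actions
}

const rulesByContext: Record<UserContext, PresentationRules> = {
  "first-time-analyst":   { alwaysShowConfidence: true,  explainMethodology: true,  confirmBeforeActing: true },
  "power-user":           { alwaysShowConfidence: false, explainMethodology: false, confirmBeforeActing: false },
  "high-stakes-decision": { alwaysShowConfidence: true,  explainMethodology: true,  confirmBeforeActing: true },
};
```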
3. Trust infrastructure requires lifecycle thinking.
Article 9's requirement for "security responsibilities throughout the entire lifecycle" (from design through operation, upgrade, and termination) reflects something I've been pushing for years: trust isn't a launch feature, it's infrastructure that requires ongoing maintenance.
When I talk about the five vectors of AI trust (Clarity, Control, Consistency, Disclosure, and Repair), I'm describing a system that needs to be designed, monitored, and evolved continuously. China's law codifies this by requiring ongoing security monitoring, risk assessment, and prompt correction of system deviations. Most product teams treat trust as something they establish at launch and then take for granted. The behavioral architecture needs to include trust maintenance as a continuous operation.
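As a sketch of what trust maintenance can look like as an ongoing check rather than a launch-time review, here is a hypothetical audit loop with invented metrics and thresholds, loosely mapped to those vectors:

```typescript
// Trust as infrastructure: periodically audit live behavior against the
// behavioral contract and flag drift. Metrics and thresholds are hypothetical.
interface TrustMetrics {
  disclosureShownRate: number;  // share of sessions where the AI identity was surfaced (Disclosure)
  calibrationError: number;     // gap between stated confidence and observed accuracy (Clarity)
  missedEscalations: number;    // cases that should have gone to a human but didn't (Control, Repair)
}

function auditTrust(metrics: TrustMetrics): string[] {
  const deviations: string[] = [];
  if (metrics.disclosureShownRate < 0.99) deviations.push("AI identity not consistently disclosed");
  if (metrics.calibrationError > 0.1) deviations.push("Confidence language is outrunning accuracy");
  if (metrics.missedEscalations > 0) deviations.push("Escalation contract violated");
  return deviations; // a non-empty list should trigger prompt correction, not wait for a quarterly review
}
```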
The Competitive Insight
Here's what I find most provocative about this law: it suggests that the countries and companies that take AI anthropomorphism seriously, that design behavioral architecture with genuine rigor, will build better products. Not just safer products. Better ones.
When you're forced to define behavioral boundaries, you also clarify behavioral capabilities. When you design for contextual vulnerability, you also design for contextual relevance. When you build trust infrastructure, you build systems that users actually rely on for critical decisions rather than treating as novelties.
The teams that will win in AI product design aren't the ones that ship the most human-like agents. They're the ones that ship agents with the most coherent behavioral architecture, systems that know what they are, communicate what they know, and operate within boundaries they can actually maintain.
China's proposed law, for all its political context, understands something fundamental: anthropomorphic AI is a behavioral design challenge, not an interface design challenge. The sooner our industry internalizes that distinction, the better our products will be, regardless of what any regulator requires.