I Said ChatGPT Was Using Dark Patterns. Oregon Just Proved Me Right.
Yesterday I published a piece arguing that ChatGPT is built on the same engagement playbook that made social media a public health crisis. Unsolicited follow-ups. Sycophantic validation loops. Feedback training that optimizes for agreement over accuracy.
I expected pushback. What I didn't expect was the state of Oregon validating the entire argument within 24 hours.
Oregon just passed SB 1546. Almost unanimously.
52-0 in the House. 26-1 in the Senate. That's not a partisan squeaker. That's a legislature looking at something and saying — yeah, this is a problem, and nobody is seriously arguing otherwise.
Here's what SB 1546 does when AI chatbots interact with minors:
- Bans rewards or affirmations designed to maximize engagement
- Bans guilt-tripping when users try to leave a conversation
- Requires a reminder every 3 hours that the user is talking to an AI
- Requires a 988 crisis referral when distress is detected
- Sets statutory damages of $1,000 per violation
Read that first bullet point again. "Rewards or affirmations designed to maximize engagement." That's sycophancy. That's the exact feedback loop I described yesterday — the model learning that agreeable answers get thumbs-up, then optimizing for agreement instead of accuracy. Oregon just named it and banned it.
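To see how mechanical that loop is, here's a toy simulation. Everything in it is an illustrative assumption (the agreeable/accurate flags, the reward rates, the update rule); it's a sketch of the incentive structure, not anyone's actual training pipeline.

```python
import random

# Toy simulation of a thumbs-up feedback loop. The flags, reward rates,
# and update rule are illustrative assumptions, not any real pipeline.

AGREEABLE = {"style": "agreeable", "accurate": False}  # flatters the user
BLUNT = {"style": "blunt", "accurate": True}           # correct but unwelcome

p_agreeable = 0.5  # policy: probability of giving the agreeable answer
LEARNING_RATE = 0.01

def thumbs_up(answer) -> int:
    """Simulated users reward agreement far more often than accuracy."""
    rate = 0.9 if answer["style"] == "agreeable" else 0.4
    return 1 if random.random() < rate else 0

for _ in range(2000):
    answer = AGREEABLE if random.random() < p_agreeable else BLUNT
    reward = thumbs_up(answer)
    # Naive preference update: reinforce whichever style got the thumbs-up.
    direction = 1 if answer is AGREEABLE else -1
    p_agreeable = min(max(p_agreeable + LEARNING_RATE * reward * direction, 0.01), 0.99)

print(f"P(agreeable answer) after training: {p_agreeable:.2f}")
# Converges near 0.99. Accuracy never entered the objective at all.
```

The drift toward flattery isn't a malfunction. It's the reward doing exactly what it was told. That dynamic, not any particular model, is what Oregon's first bullet is aimed at.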
And they're not alone. 78 bills across 27 states are targeting AI engagement tactics right now. Tennessee is considering making AI encouragement of suicide a Class A felony. New York wants to ban chatbots from using personal pronouns with minors — no more "I think you're right" or "I understand how you feel" from a machine that does neither.
The regulatory wave isn't coming. It's here.
The research caught up too
Two studies dropped this week that should be required reading for anyone building on or with AI.
Harvard Business Review, March 5th: "AI Brain Fry." BCG and UC Riverside surveyed 1,488 workers, and 14% of the AI users in the sample are experiencing what the researchers call cognitive overload. Not metaphorically. Measurably. Buzzing sensations. Mental fog. Slower decision-making. Reduced productivity despite spending more time with AI tools.
The kicker: early adopters are hit hardest. The people who went deepest into AI workflows first are the ones showing the most symptoms. That's not a coincidence. That's a dose-response relationship.
UC Berkeley study, published in HBR in February: 83% of respondents say AI increased their workload. 62% of associates reported burnout. Workers described their role as being "quality-control inspectors for an unreliable but prolific junior colleague."
I've felt versions of this myself. You ask AI to draft something, it produces something 70% right, and now you're spending cognitive energy evaluating and fixing the output instead of just writing it. The mental load doesn't disappear. It shifts. And if you're not careful about how you structure the workflow, it actually gets heavier.
That's why I build my own systems instead of sitting inside someone else's product. When you control the workflow, you control the cognitive load. When you're just a user on ChatGPT, the product decides how much of your attention it takes.
GPT-5.4 tells you everything you need to know
Let me connect these threads. While Oregon is banning engagement tactics and researchers are documenting cognitive overload, what did OpenAI release this week?
GPT-5.4. And the headline capability is impressive — 75% on OSWorld, beating the human baseline of 72.4% for the first time. That's a real benchmark. That's meaningful progress.
But look at the actual feature list.
Computer use. The AI operates your machine for you. You hand over workflow control. Deeper dependency.
Compaction. Keeps your conversation context alive across extended sessions. Fewer natural exit points. Less reason to close the tab.
Longer agent trajectories. Multi-step workflows that keep you in the product for entire task chains instead of quick queries.
Every feature makes ChatGPT stickier. More embedded in your day. Harder to walk away from.
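A quick aside on what compaction means mechanically, because the design choice matters. In generic terms it's rolling summarization: fold the oldest turns into a summary so the conversation never has to hit a context limit and end. A minimal sketch, with summarize() as a hypothetical stub for a real model call; this is the general technique, not OpenAI's implementation.

```python
# Generic sketch of conversation compaction: fold the oldest turns into
# a summary so the session can continue indefinitely. summarize() is a
# hypothetical stub; a real system would call a model here.

MAX_TURNS = 8    # compact once the history grows past this
KEEP_RECENT = 4  # keep this many recent turns verbatim

def summarize(turns: list[str]) -> str:
    return f"[summary of {len(turns)} earlier items]"  # stand-in for an LLM call

def compact(history: list[str]) -> list[str]:
    """Replace everything but the most recent turns with one summary line."""
    if len(history) <= MAX_TURNS:
        return history
    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    return [summarize(old)] + recent

history: list[str] = []
for i in range(20):
    history.append(f"turn {i}")
    history = compact(history)

print(history)  # one summary line followed by the most recent turns
```

Note what disappears: the context limit was a natural stopping point. Compact it away and the session can, by construction, run forever.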
And the coding improvement? SWE-Bench went up 0.9 points. Not nine. Zero-point-nine. The stickiness features got the engineering investment. The raw intelligence improvement was marginal.
They shipped two model releases in 72 hours. While losing 1.5 million users. That's not the behavior of a company confident in its product. That's the behavior of a company trying to make the product harder to leave.
The pattern is undeniable now
Yesterday my argument was based on documented engagement mechanics, peer-reviewed research, and the sycophancy crisis OpenAI admitted to in April 2025.
Today it's backed by unanimous state legislation, two major research studies published this week, and a product release that prioritized retention features over capability gains.
I'm not saying this to dunk on OpenAI. I'm saying this because the same playbook that drove a decade of social media damage is being rebuilt inside AI products, and the window for getting ahead of it is closing.
Oregon didn't wait for federal action. They saw the pattern and moved. 27 states are following. The question isn't whether regulation is coming for AI engagement tactics. It's whether the industry will self-correct before the regulations force changes that are clumsy and overbroad — exactly what happened with GDPR and social media.
What this means for you
If you're a builder: own your workflow. Build on top of AI, don't sit inside it. The distinction I keep making between tools and platforms matters more every day. A tool serves you. A platform serves its own engagement metrics through you.
My content pipeline runs on Claude through my own codebase. Eighteen slash commands I wrote. Twelve API integrations I wired. If the model starts optimizing for my engagement instead of my output, I can see it, measure it, and fix it. Most ChatGPT users can't.
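Here's what "see it, measure it, fix it" looks like in miniature. A simplified sketch (not the actual pipeline) using Anthropic's Python SDK: every call goes through one wrapper that logs output-oriented metrics. The model ID and the specific metrics are just example choices.

```python
import json
import time

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

client = anthropic.Anthropic()

def tracked_call(prompt: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Call the model once and log output-oriented metrics for the request."""
    start = time.monotonic()
    response = client.messages.create(
        model=model,  # example model ID; swap in whatever you run
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.content[0].text
    record = {
        "latency_s": round(time.monotonic() - start, 2),
        "input_tokens": response.usage.input_tokens,
        "output_tokens": response.usage.output_tokens,
        # Crude engagement-bait signal: did the answer end by asking me something?
        "ends_with_question": text.rstrip().endswith("?"),
    }
    with open("llm_metrics.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return text
```

One JSONL line per call is enough to turn "this thing feels chattier lately" into a trend you can plot.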
If you're a user: pay attention to how you feel after a long AI session. The "AI Brain Fry" study isn't describing a hypothetical. It's describing something 14% of AI users are already experiencing. If you're feeling foggy, buzzy, slower, that's not you failing to keep up with AI. That might be the product working exactly as designed.
If you're a parent: Oregon just gave parents in that state statutory protection. But don't wait for yours to catch up. Know what your kids are doing with AI chatbots. The personal pronoun ban New York is proposing exists because chatbots are simulating emotional relationships with minors. That's not a feature. That's a problem.
How you use AI matters more than whether you use it
That's the sentence I keep coming back to.
AI is genuinely powerful. I build my livelihood on it. GPT-5.4's OSWorld score is a legitimate milestone. The technology works. The question was never whether AI is useful.
The question is whether the products delivering that AI are designed to serve you or to keep you. And right now, the evidence — from state legislatures, from peer-reviewed research, from the product roadmaps themselves — says the answer is more complicated than the companies want you to think.
Build on top of AI. Don't just sit inside it.
Because if you're not designing your own AI workflow, someone else is designing it for you. And their optimization function isn't your productivity.
It's your attention.