Stop Dating AI. Start The Marriage: The Prenup.

The Full Research, Strategy, and Action Guide Behind the Keynote

By Adam Horne / Aizle

Companion piece to “Stop Dating AI. Start The Marriage.” — presented at Berghs Unconference:AI 2026, Aula Main Stage, 14:30–14:50.


Table of Contents

  1. About Aizle

Executive Summary

Let’s start with something that might surprise you in a white paper about AI.

The best creative brainstorming I’ve done in the last decade — genuinely, the most productive collaborative work of my career — has been with AI. Not because AI is creative. It isn’t. It’s a brilliant combinatorial engine that mashes things together in ways you’d never dream of. But it can’t judge the result. It can’t tell the difference between interesting noise and an actual idea. That’s my job. And it only works because I’ve spent twenty years building the judgment to steer it.

So this paper isn’t anti-AI. It’s anti-waste. Because the data is now overwhelming: most organisations have been given a Ferrari and turned it into a shopping trolley.

The summer fling was spectacular. Remember 2023? Every tool felt like magic. Every prompt felt like productivity. The excitement was real. The tools are extraordinary. But somewhere between the first ChatGPT draft and the fifteenth, the relationship stopped being intentional. You stopped dating AI — getting to know its strengths, testing its limits, bringing your best self. You started cohabiting. Sweatpants. Autopilot. “Good enough.”

The honeymoon data is in. The largest behavioural study ever conducted — ActivTrak, 443 million hours, 163,000+ workers — found zero time savings in any work category after AI adoption. A randomised controlled trial (METR, 2025) found developers were 19% slower with AI but believed, even after seeing the data, that they had been 20% faster. That’s a 39-point gap between what the relationship feels like and what it actually is. If this were a marriage counsellor’s intake form, the diagnosis would be clear: you’re in love with how productive you feel, not how productive you are.

Six red flags have moved in. As in any relationship that goes unexamined, bad patterns have formed without anyone choosing them. They make you feel good about not doing much. They agree with everything you say. They’re stealing your independence. They convince you age doesn’t matter. They dress you in their clothes. And they convince you your relationship is normal. Each one is backed by research. Each one is fixable. Each one is costing you more than you think.

The marriage is where the real value lives. BCG’s jagged frontier study showed disciplined AI users completing tasks 25.1% faster at 40% higher quality — while undisciplined users performed 19 points worse than non-AI users. The variable isn’t the tool. It’s the human. Experience, taste, judgment, and intentionality are the assets that turn AI from an expensive screensaver into a genuine partner.

This paper is the prenup — and the vows. Five clauses. Five vows. A framework for moving from summer fling to marriage — not by using AI less, but by using it like it deserves to be used.

But this isn’t just a keynote companion anymore. It’s a standalone resource. A 5,000-word research deep-dive connects 35+ studies into a coherent narrative about why these patterns exist — the psychology, the neuroscience, the organisational dynamics. A shareable behaviour comparison table distils the entire framework into one scannable asset. A 60-minute team workshop turns reading into doing. Three before-and-after scenarios make the shift tangible. An FAQ pre-empts every objection your team will raise. And a Brain Fry Recovery Protocol speaks directly to the one-in-seven practitioners experiencing measurable cognitive overload right now. Plus: customisable prompts for leaders, strategists, creatives, and individual contributors. And a complete research compendium with every statistic sourced, linked, and verified.

The excitement was real. The tools are extraordinary. You just need to stop sleepwalking through the relationship.

And remember — the best marriages aren’t comfortable. They’re honest. And they’re worth it.

Time to read: 45 minutes for the full paper. 5 minutes for the summary + prompts. 60 minutes if you run the workshop.

Back to Table of Contents ↑

1. The Evidence

That warm, productive glow you feel? It’s hiding six red flags.


The 39-Point Gap

METR ran the largest randomised controlled trial on AI-assisted work ever conducted. Sixteen experienced open-source developers completed 246 real-world coding tasks with and without Cursor Pro (powered by Claude 3.5/3.7 Sonnet). The result: developers using AI took 19% longer. Not faster. Longer.

Before the study, they predicted AI would make them 24% faster. After the study — after seeing the data — they still believed they’d been faster, by 20%.

19% slower. Believed 20% faster. A 39-point gap.
— METR Randomised Controlled Trial, 2025

That is a 39-percentage-point chasm between what AI feels like and what AI does. It’s not the AI hallucinating. It’s the human.

The 443 Million Hours

If METR was the precision instrument, ActivTrak was the industrial scale. Their Productivity Lab tracked 163,000+ workers across 443 million hours of actual digital activity. A subset of 10,584 workers was analysed 180 days before and after AI adoption. The verdict was blunt: not a single activity category showed time savings. Email time increased 104%. Chat and messaging increased 145%. Every measurable activity took longer.

The feeling was spectacular. The results were nothing. And that gap — between what the relationship feels like and what it actually is — is where the waste lives.


Back to Table of Contents ↑

2. The Science of Why

The presentation gave you six red flags. This section gives you the research that proves each one — and the psychology that explains why you fell for them.

You know what’s wrong. This section explains why it keeps happening.

Twenty minutes on stage can name the problems. It can’t explain the mechanisms. What follows is a deep synthesis of the research behind the keynote — connecting neuroscience, behavioural economics, organisational psychology, and consumer data into a single coherent narrative.

Skip it if you want the action plan. Read it if you want to understand why these patterns exist — and why breaking them is harder than it sounds. Because knowing “I should use AI better” is like knowing “I should exercise more.” The intention is obvious. The mechanism is what matters. And the mechanism, in this case, is working against you in five distinct ways — each one invisible, each one measurable, and each one fixable once you see it.


2a. The Perception Gap: Why AI Feels Like It’s Working

Let’s start with the most important question nobody asks: why does AI feel so productive when the data says it isn’t?

The answer lives in a psychological mechanism that Daniel Kahneman and Shane Frederick formalised in 2002, building on decades of heuristics research. They called it attribute substitution. When faced with a difficult question — “Am I actually more productive?” — the brain substitutes an easier question: “Does this feel easier?” The easier question requires less cognitive effort to answer. And since AI dramatically reduces the effort of producing output, the brain concludes: “I’m doing hard work easily. I must be incredibly productive.”

You’re not. You’re just comfortable. And comfort, as every behavioural scientist knows, is the enemy of accurate self-assessment.

The METR randomised controlled trial — the largest of its kind — put this mechanism under a microscope. Sixteen experienced open-source developers completed 246 real-world coding tasks with and without AI tools. Before the study, they predicted AI would make them 24% faster. After the study, having seen the data showing they were actually 19% slower, they still believed they’d been faster — by 20%.

A thirty-nine-percentage-point chasm between perception and reality. Not a small error. A hallucination. And 69% of participants continued using AI tools despite the measured slowdown. The feeling of ease was more compelling than the evidence of inefficiency.

That’s the gap in a lab. In the wild, it’s worse.

This isn’t an isolated finding. The Foxit “State of Document Intelligence” report (2026) quantified the delusion at organisational scale: executives estimated AI saved them 4.6 hours per week. Actual measured saving after accounting for the verification burden: 16 minutes. Rank-and-file workers didn’t save time at all — they lost 14 minutes per week, because checking AI output for accuracy consumed more time than the output saved.

Then ActivTrak delivered the broadest verdict. Their Productivity Lab tracked 163,000+ workers across 443 million hours of actual digital activity — with a subset of 10,584 analysed 180 days before and after AI adoption. Not a single activity category showed time savings. Email time increased 104%. Chat and messaging increased 145%. Time across all job responsibilities shot up between 27% and 346%. Focused work sessions dropped to an average of 13 minutes and 7 seconds — barely enough time to form a coherent thought before the next interruption. The share of time spent “in the zone” fell to 60%, a three-year low. Workers weren’t using AI to work smarter. They were using it to produce more volume of the same work, faster per unit but more units overall. The assembly line sped up. Nobody asked whether the factory was building the right product.

So if the data says you’re slower, and you feel faster, what’s driving the delusion?

B.F. Skinner’s variable reinforcement schedule provides the second mechanism. Every time you prompt an AI, you get a variable response. Sometimes brilliant. Sometimes mediocre. Sometimes surprisingly good. That unpredictability is neurologically addictive in exactly the same way a slot machine is addictive. You keep pulling the lever because the next response might be the brilliant one. The MIT Media Lab EEG study identified this precisely: the lowest brain connectivity wasn’t in AI users generally — it was specifically in the passive condition, people clicking “regenerate” repeatedly, watching outputs, hoping for something better. That’s slot machine behaviour. Lever. Wait. Evaluate. Lever. Wait. Evaluate.

UC Berkeley Haas researchers spent eight months studying a 200-person tech company and found what the aggregate data predicts at the individual level. Workers didn’t use AI to do the same work faster. They used it to take on broader scope, extend into evenings, and absorb tasks that weren’t theirs. Professor Aruna Ranganathan described the pattern bluntly: workload creep leading to cognitive fatigue, burnout, and weakened decision-making. AI didn’t reduce work. It intensified it. Because when everything feels easy, saying “yes” to more feels easy too. Until it isn’t.

The substitution heuristic makes you feel productive. Variable reinforcement makes you keep going. Workload creep ensures you never pause long enough to notice. Together, they create a perception gap so wide you can drive a Ferrari through it — in first gear, in the car park, while believing you’re on the autobahn.

This matters for one reason above all others: you can’t fix a problem you don’t believe you have. And the perception gap’s most insidious feature is that it makes you feel like you’re already fixed. “I’m one of the good ones. I use AI well.” Sure. So did the METR developers. All of them.


2b. The Cognitive Tax: What AI Does to Your Brain

The perception gap is about psychology. The cognitive tax is about neuroscience. And the neuroscience is brutal.

The MIT Media Lab study — “Your Brain on ChatGPT” — placed EEG monitors on 54 adults writing essays across three conditions: AI assistance, search-engine research, and brain-only. The results should concern anyone who uses AI daily. The ChatGPT group showed the lowest brain connectivity and engagement of all three conditions. Not slightly lower. Dramatically lower. The brain wasn’t working harder with a better tool. It was checking out.

Worse: 83% of AI-assisted writers couldn’t recall the content of essays they’d just written. Just written. Minutes ago. The work passed through them without leaving a trace — like water through a sieve. And when AI-reliant participants were switched to brain-only writing, their neural connectivity was weaker than baseline. Not back to normal. Worse than before they started. Accumulated cognitive debt. The research lead, Nataliya Kosmyna, described the effect as lasting: prolonged AI use appears to create decreasing returns in learning capability.

Gerlich’s study, published in Societies, corroborated this across 666 participants with striking precision. The correlation between AI usage and critical thinking: r = −0.68. That’s a strong negative relationship — the more you use AI, the less critically you think. The mediating mechanism: cognitive offloading, which correlated with AI use at r = +0.72. Your brain literally hands the hard work to the machine and stops doing it. And the youngest users — 17 to 25 — showed the highest AI dependence and lowest critical thinking scores.

This isn’t about intelligence or laziness. It’s about how brains allocate resources. When a tool handles the cognitive heavy lifting, the brain stops investing in the circuits that handle that work. It’s the same mechanism that causes your handwriting to deteriorate when you type exclusively, or your navigation skills to atrophy when you rely on GPS. Use it or lose it isn’t a motivational poster. It’s neuroscience.

A Chinese university study of 580 students corroborated the pattern: AI dependence reduces critical thinking, with cognitive fatigue as the mediating mechanism. Not distraction. Not laziness. Fatigue. The brain gets tired of not being used properly — like a runner forced to sit in a car. The muscles don’t just idle. They protest. That’s the afternoon fog you feel after four hours of prompting. Not tiredness. Measured cognitive overload. Your brain saying: “I’m bored and you’re not using me.”

The BCG “brain fry” research (March 2026, 1,488 U.S. workers) identified the workplace mechanism. The term “brain fry” isn’t metaphorical — it describes acute cognitive overload targeting three specific systems: attention, working memory, and executive control. Fourteen percent of workers experience it. Those affected make 39% more major errors, and 34% intend to leave their jobs — compared to 25% of unaffected workers.

The study identified a critical threshold: productivity peaks at exactly three simultaneous AI tools, then crashes beyond four. Workers managing four or more systems reported 33% more decision fatigue and 19% greater information overload. One participant described it as “a dozen browser tabs open in my head, all fighting for attention.”

Brain fry is not burnout. This distinction matters for treatment. Burnout builds over months of chronic emotional exhaustion — the slow erosion of caring about your work. Brain fry is acute — it can hit within a single workday. It targets different cognitive systems. And it has a different recovery profile. The fix for burnout is rest and emotional recovery. The fix for brain fry is fewer tools used with more intention. Misdiagnose it and you’ll take a holiday when what you needed was to close three browser tabs.

The Deloitte 2025 Workforce Intelligence Report confirmed the shift: mental fatigue and cognitive strain have now surpassed workload volume as the leading predictors of burnout. The nature of exhaustion has changed. The management playbook hasn’t caught up.

Microsoft Research and Carnegie Mellon completed the picture: surveying 319 knowledge workers across 936 AI-assisted tasks, they found that 32% of frequent AI users accept outputs without question — a rate that rises with frequency of use. The more you use AI, the less critically you evaluate it. Not because you trust it more. Because your brain has stopped investing in the evaluation circuits. The tax has been paid. The receipt is an empty till.

Meanwhile, Shalu et al. from Amity University quantified the damage with unnerving specificity: long-term AI interaction showed strong positive correlations with mental exhaustion (r = 0.671), attention strain (r = 0.874), and information overload (r = 0.905). The paradox of AI-generated choices correlated with lower attention capacity (r = 0.908) — the more options AI produces, the worse your brain handles them. Your 24-jam table isn’t just confusing the customer. It’s frying the chef.

Here’s the part that should worry you professionally. These aren’t effects that require years of exposure. The MIT study measured cognitive debt within a single writing session. The BCG brain fry threshold kicks in with a fourth simultaneous tool — not a fourth year of use. A fourth tab. The speed of degradation means that by the time you notice the fog, the tax has already been paid. And unlike financial debt, cognitive debt doesn’t send invoices. You just wake up one Tuesday wondering why you can’t think clearly, and blame the weather.


2c. The Motivation Trap: Why Real Work Starts Feeling Boring

Here’s where the science gets genuinely unsettling. The cognitive tax is about what AI does to your brain when you’re using it. The motivation trap is about what AI does to your brain when you stop.

Liu et al. published a study in Scientific Reports (2025) that describes a vicious cycle. They found that AI collaboration simultaneously improves efficiency AND diminishes intrinsic motivation. Read that again. At the same time. In the same people. AI makes you faster while making you less interested in the work. Their data quantified the damage: passive AI use causes an approximately 11% decline in intrinsic motivation and a 20% increase in boredom with non-AI work.

The mechanism is dopaminergic. AI provides easy cognitive rewards — type a prompt, get an answer, feel productive. Your brain’s reward system adapts to the easy path. When you switch back to manual work — writing from scratch, concepting without AI, strategising with only your own brain — the reward is harder to earn. The brain, having recalibrated to easy dopamine, registers the harder work as tedious. Not because the work changed. Because your reward threshold did.

This isn’t speculation. It’s the same neurological pathway that makes social media addictive, fast food unsatisfying once you’re used to it, and gambling compelling despite consistent losses. The brain doesn’t optimise for quality of reward. It optimises for speed and predictability of reward. And AI delivers both faster than any tool in professional history.

Martin Seligman’s learned helplessness research, conducted in 1967, provides the deeper mechanism. Organisms subjected to repeated loss of control stop trying — even when control becomes possible again. The brain doesn’t decide to give up. It adapts to a world where effort doesn’t matter.

The parallels to modern AI workflows are uncomfortable. Seligman’s dogs stopped trying to escape because the evidence told them their effort was irrelevant. Knowledge workers aren’t dogs — but brains are brains. And the adaptation mechanism is the same.

Apply this to AI workflows. Step one: AI produces output that’s “good enough.” Not brilliant. Not terrible. Professionally acceptable. Step two: you review it and think “I could improve this, but the improvement would be marginal — maybe 10% better for 500% more effort.” Step three: you approve it. Step four: repeat two hundred times.

After two hundred cycles of “I could improve this but won’t,” your sense of agency erodes. You stop believing your contribution matters. You become an approval button. That’s not burnout. That’s not laziness. That’s learned helplessness — the brain adapting to the evidence that its input doesn’t meaningfully change the output.

The active versus passive distinction is crucial. Liu et al.’s data shows that active AI use — generating prompts, evaluating outputs, pushing back, directing the process — preserves intrinsic motivation. Passive AI use — watching, accepting, regenerating, hoping — destroys it. The difference isn’t how much AI you use. It’s how you use it. Directors survive. Spectators don’t.

Wang et al., writing in Frontiers in Computer Science, added the experience variable. Their study found that experienced designers use AI to elevate quality — they bring strong judgment, clear direction, and twenty years of pattern recognition to the collaboration. Novices use AI for undifferentiated generation — they accept the first output because they can’t yet tell the difference between interesting noise and an actual idea.

The skill gap doesn’t close with AI. It widens. Because the mechanism that builds judgment — the struggle of producing work from nothing, evaluating it honestly, failing, and trying again — is exactly the mechanism that passive AI use eliminates. The junior who never struggles never develops taste. And without taste, AI is just a random word generator with good grammar.

I told a room of forty-five-year-old creative directors that AI made them more valuable, not less. Three of them cried. One hugged me. A Swede. That’s when I knew the insight was real. Twenty years of pattern recognition isn’t a liability in the age of AI. It’s the asset. The question is whether you’re using AI to amplify that asset — or letting it atrophy while you watch the machine do push-ups on your behalf.

The motivation trap is the quietest of the five mechanisms in this section. The perception gap is dramatic — a 39-point chasm makes for good headlines. The cognitive tax is measurable — EEG monitors and correlation coefficients are hard to argue with. But the motivation trap whispers. It doesn’t announce itself. It just makes you slightly less interested in the work, slightly more willing to accept “good enough,” slightly less likely to stay up at 2am because the idea isn’t quite right. And “slightly” compounded over two hundred approval cycles isn’t slight at all. It’s surrender.


2d. The Sameness Economy: Why Audiences Are Revolting

The perception gap is internal — you can’t see it from the outside. The cognitive tax is neurological — you can’t feel it until the fog descends. But the sameness economy? That one’s visible. The audience is telling you, loudly, that something is wrong.

Consumer enthusiasm for AI-generated content has collapsed from 60% in 2023 to just 26% in 2025. In two years, the market went from excitement to rejection. NielsenIQ found AI-generated video ads consistently rated as more annoying, boring, and confusing than traditionally made ads — even when technically polished. The IAB’s 2026 report: only 13% of consumers fully trust ads created entirely by AI. iHeartMedia found 90% of listeners want human-created media and launched a “guaranteed human” tagline. In a delicious irony, OpenAI’s own 2025 ChatGPT campaign was shot using real directors, real actors, and 35mm film. Even the company selling the machine chose humans when it mattered.

The 2025 Cannes Lions festival crystallised the backlash. DM9’s Grand Prix-winning campaign was stripped of its award after it emerged that AI-generated content was used to fabricate real-world events, including a fake news segment. The agency’s CCO resigned. Twelve awards were revoked. McDonald’s pulled an AI holiday ad within three days. Coca-Cola’s AI-generated Christmas campaigns were called “soulless.” J. Crew x Vans launched sneaker images with distorted hands that became memes. The industry’s most prestigious stages became cautionary tales about what happens when speed outpaces judgment.

The consumer isn’t rejecting AI. The consumer is rejecting sameness. And AI, by its mathematical nature, produces sameness.

Two behavioural science mechanisms explain why. First: the effort heuristic. Humans judge the value of something by how much effort they believe went into it. A hand-knitted jumper is worth more than a machine-knitted one, even if they’re identical. A meal that took three hours feels more valuable than one microwaved in four minutes. AI has created the most spectacular effort heuristic crisis in commercial history. The things that used to take weeks now take minutes. And the perceived value — not the quality, the value — has collapsed. Consumers can sense effort, or its absence, and they price accordingly.

Second: processing fluency, described by Reber and Schwarz (1999). Things that are easy to read feel more true, more intelligent, and more valuable. AI output has maximal processing fluency — perfect grammar, confident tone, professional structure. That polish camouflages emptiness. Your brain reads “polished” and thinks “done.” Even when the thinking underneath is hollow. AI’s greatest trick isn’t producing good work. It’s producing work that looks good enough that you stop evaluating it.

The Jam Study — Iyengar and Lepper’s famous 2000 experiment — demonstrated that consumers were 10 times more likely to buy when presented with 6 options versus 24. Choice overload paralyses decision-making. More options don’t mean more engagement. They mean less. Your AI-powered content calendar is a 24-jam table. Every slot filled. Every platform active. Every message polished. And your audience learned to walk past. Not because they can’t see you. Because they can’t distinguish you.

StackAdapt and Ascend2 surveyed 484 senior marketers: 57% cited “AI content oversaturation” as their top concern, describing a “sea of sameness” where everything looks generic. Because it is. Generative AI runs on large language models — the “large” means they were trained on essentially everything humans have ever written. The output is the statistical middle. The average.

The creative industries sit at the intersection of every mechanism in this section. A survey of 500 UK creative professionals by the DIGIT Lab at the University of Exeter found that 81% of designers say AI dulls creativity — the highest rate among creative disciplines. The Sunup agency report reveals 91% of senior agency leaders expect AI to reduce headcounts, with 57% having already slowed or paused entry-level hiring. Stanford researchers estimate a net 20% loss of early-career marketing headcount for professionals aged 22–25. The pipeline that used to build taste — junior roles where you learn by doing bad work and getting better — is being dismantled. And nobody’s asked what replaces it.

But here’s the flip. When the average is free, the exceptional commands a bigger premium than ever. This has happened before — in wine, in fashion, in food. Mass production made basic products nearly free. And the premium segment exploded. The brands thriving with AI aren’t the ones producing more. They’re the ones whose human output was distinctive before AI arrived — and who use AI to amplify that distinctiveness, not erase it. Average is automated. Exceptional is more valuable than ever. The freelancers getting crushed are competing on price and speed — the two things AI does for free. The ones thriving have genuine taste, perspective, and standards. The sameness economy isn’t a death sentence. It’s a sorting mechanism. And the sorting has only just begun.

The question every creative professional should be asking isn’t “Will AI take my job?” It’s “Was my job already what AI does for free?” If the answer is yes — if your pre-AI output was roughly what ChatGPT now produces in thirty seconds — then AI didn’t steal your value. It revealed that you were pricing something at a premium that was always average. The uncomfortable truth: the sameness economy didn’t create mediocre work. It just made mediocre work visible.


2e. The Organisational Failure Pattern: Why Most AI Rollouts Fail

The final piece connects individual psychology to organisational outcomes. And the pattern is consistent enough to be predictive.

PwC surveyed 4,454 CEOs: 56% say they’ve gotten “nothing” from their AI investments. MIT Media Lab reported that 95% of organisations see no measurable ROI from AI. Nobel laureate Daron Acemoglu projects a “modest 0.5% productivity gain over the next decade.” S&P Global found the share of companies scrapping the majority of their AI initiatives jumped from 17% in 2024 to 42% in 2025.

These aren’t failure statistics about AI. They’re failure statistics about implementation. The technology works. The organisational approach doesn’t. And the gap between the two is where most AI budgets go to die — not with a bang, but with a quarterly review where nobody can explain what changed.

The pattern has five stages. Stage one: enthusiastic adoption. Everyone gets access to AI tools. The excitement is genuine. Stage two: random acts of AI. Walgreens CIO Tim Jennings coined the phrase — employees eager to try tools without coordination, strategy, or quality standards. Everyone becomes an AI experimenter. Nobody becomes an AI strategist. Stage three: volume explosion. Output triples. Content calendars overflow. Reports multiply. Everyone feels productive. Stage four: reality check. Engagement is flat. Quality is down. The CEO asks why more isn’t translating to better. Stage five: either retreat (42% scrap initiatives) or reset (the ones who come out stronger).

The Upwork data captures the human cost: 77% of employees say AI added to their workload. 47% don’t know how to achieve the expected productivity gains. The EY Work Reimagined Survey of 15,000 employees across 29 countries found 63% say their employer provided no adequate AI training. When leadership communicates a clear AI plan, employees are 3x more likely to feel prepared — but most organisations haven’t done this work.

Tool sprawl compounds the problem. The average company now uses 7 AI tools, with 83% of CIOs saying there are already too many. Sprawl.work’s research: 79.3% of workers say the effort required to use AI outweighs the benefits. 77.5% would feel relieved if half their AI tools disappeared. Zapier found 76% of enterprises experienced negative outcomes from disconnected AI tools. That’s not adoption. That’s accumulation. And accumulation without strategy is just hoarding with a subscription fee.

BetterUp Labs and Stanford published a study showing that 41% of workers have encountered “workslop” — low-quality AI output passed off as finished work. Each instance costs nearly 2 hours of rework and creates downstream trust problems. Merriam-Webster named “slop” its 2025 Word of the Year. Mentions grew 200%. The industry is drowning in output nobody asked for.

And then there’s AI theatre. Howdy.com surveyed 1,047 workers: 16% admitted pretending to use AI to meet employer expectations. Joe Procopio coined the term “AI productivity theatre” — employees firing up chatbots to look productive rather than accomplish anything. Meanwhile, 56% pay for AI tools out of pocket ($68/month average) because their employer hasn’t provided access. The gap between what organisations say about AI and what they actually do about AI is vast enough to park a Ferrari in. In first gear. In the car park.

Glassdoor named “fatigue” its 2025 word of the year. PwC reported 35% of workers feel overwhelmed at least once a week — 42% among Gen Z. The industry has managed to take a tool designed to reduce workload and use it to increase workload, reduce cognitive capacity, and make everyone more tired. That takes a special kind of organisational talent.

The organisations that break this pattern share three characteristics. First, they segment — recognising that AI affects different people differently (the Amplified, Exposed, and Liberated framework from Red Flag 6). Second, they set boundaries — clear rules about which tools, which tasks, and which quality standards. Not to restrict AI. To focus it. Third, they invest in the humans — training judgment, taste, and critical evaluation, not just tool proficiency. The variable was never the technology. It was always the people.

The financial picture for agencies — the industry many readers of this paper inhabit — is particularly grim. Forrester research shows agency AI investment costs grew 83% in 2025, with only 7% able to bill clients for AI capabilities. Omnicom cut 4,000 jobs. WPP’s Ogilvy shed 5% of its workforce. The investment is going up. The returns aren’t following. And the cost isn’t just financial — it’s structural. When agencies cut junior roles (57% have paused entry-level hiring) to fund AI tools that produce average work faster, they’re eating their seed corn. The taste pipeline dries up. The senior talent that makes AI valuable has nobody to mentor. The work gets smoother and emptier simultaneously.

The Faros AI engineering productivity report captures the irony with precision: developers on AI-heavy teams merge 98% more pull requests, but PR review time increased 91%. More output. Same human bottleneck. The machine produces faster. The human evaluates at the same speed. Volume goes up. Quality review becomes the constraint. And when the constraint breaks — when people stop reviewing carefully — you get workslop. Merriam-Webster's 2025 Word of the Year. A word that didn't exist two years ago, describing an output category that 41% of workers now report encountering.

Five mechanisms. One conclusion. The problem isn’t AI. The problem is that we brought a Ferrari home, never read the manual, and wondered why we keep crashing into the garage door. The research is clear, converging, and actionable. What follows — six red flags, five clauses, five vows — is the manual.


This research isn’t theory. It’s happening in your team right now. Aizle translates this research into team-specific action plans — not generic workshops. Real projects. Real measurement. Real change. [aizle.co/contact]


Back to Table of Contents ↑

3. Six Relationship Red Flags

I’ve spent years inside other people’s AI relationships. Here are six red flags I see — in my students, my clients, and myself. That last bit hurts a little.

Our job is to have the uncomfortable conversation your team is avoiding. The one about what was mediocre before AI, what’s being wasted now, and what becomes possible when you stop being comfortable.


Red Flag 1: They Make You Feel Good About Not Doing Much

AI feels productive. That’s the lie. Need more stuff? Here’s 300 more words…

Here’s how the comfortable lie works. You type a brief into ChatGPT. Thirty seconds later, a polished strategy appears. Your brain — which uses cognitive effort as a proxy for difficulty — concludes: “I’m doing hard work easily. I must be incredibly productive.”

You’re not. You’re just comfortable.

Kahneman and Frederick’s substitution heuristic explains it: when faced with the hard question (“Am I more productive?”), the brain substitutes an easier question (“Does this feel easier?”). AI feels spectacularly easy. So the brain concludes: I must be spectacularly productive.

19% slower. Believed 24% faster. A 39-point gap.
— METR Randomised Controlled Trial, 2025

Developers using AI were nineteen percent slower. Not faster. Slower. Before the study, they predicted AI would make them twenty-four percent faster. After being shown the data — after seeing the proof — they still believed they’d been faster. A thirty-nine-percentage-point gap between perception and reality. That’s not a small error. That’s a hallucination. And it’s not the AI hallucinating. It’s the human.

Meanwhile, ActivTrak’s 443 million hours of tracked activity found zero time savings in any category. The Foxit report quantified the delusion precisely: executives estimated AI saved them 4.6 hours per week. Actual measured saving: 16 minutes. Workers lost 14 minutes. The verification burden consumed nearly all the gains.

The fix: Measure what AI actually produces — not how it feels. Track completion times. Compare output quality before and after. If you can’t point to a specific improvement, the comfortable lie is running the relationship.
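That fix is measurable in a spreadsheet or a few lines of code. A minimal sketch (hypothetical helper name, illustrative numbers only, loosely echoing the Foxit finding of 4.6 estimated hours versus roughly 16 measured minutes) of tracking the gap between how much time a team believes AI saves and what the logs actually show:

```python
from statistics import mean

def perception_gap(estimated_hours_saved, measured_hours_saved):
    """Gap between how productive AI feels and what measurement shows.

    Both inputs are weekly per-person figures in hours.
    A positive gap means the team believes it saves more time than it does.
    """
    return mean(estimated_hours_saved) - mean(measured_hours_saved)

# Illustrative survey answers vs. time-tracking data (hypothetical).
estimated = [4.6, 5.0, 3.5]   # what people say in the weekly survey
measured = [0.27, 0.10, 0.30] # what the completion-time logs show

gap = perception_gap(estimated, measured)
print(f"Average perception gap: {gap:.1f} hours/week")
```

If the gap is consistently large, the comfortable lie is running the relationship; if it shrinks toward zero, the team is measuring rather than feeling.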


Red Flag 2: They Agree With Everything You Say

The ultimate polished perfect partner. Says what you want to hear. Never disagrees. That’s weird.

AI produces work that looks finished. It hits every brief requirement. It's polished, articulate, and completely empty of original thought. That makes approval committees even more dangerous — because now they're not sanding down one person's rough thinking. They're approving a machine's smooth processing and calling it strategy.

AI is the ultimate smooth talker. Sounds like a senior strategist. Says what you want to hear. Never disagrees.

Solomon Asch’s conformity experiments (1951) demonstrated that 75% of participants agreed with an obviously wrong answer when the group endorsed it. Charlan Nemeth’s minority dissent research proved the antidote: groups with a single dissenting voice make better decisions — even when the dissenter is wrong. The disagreement itself improves thinking. AI never disagrees. It is the anti-dissenter.

32% of frequent AI users accept outputs without question.
— Microsoft Research / Carnegie Mellon, 2025

A third of people have stopped questioning what the machine gives them. Not because they’re lazy. Because it sounds so good. The smooth talker doesn’t need to be right. It just needs to sound right.

Processing fluency — described by Reber and Schwarz (1999) — explains the trap: things that are easy to read feel more true, more intelligent, and more valuable. AI output has maximal processing fluency. It reads beautifully. And that readability is camouflaging emptiness.

The fix: Read it ugly first. Strip the formatting. Kill the headers. Delete the charts. If the thinking collapses without the design, it was never thinking. It was decoration. Never ask AI “Is this good?” Always ask “What’s wrong with this? What am I missing? How would a hostile competitor destroy this?” The discomfort of reading the response is the feeling of actually thinking.


Red Flag 3: They’re Stealing Your Independence

10% better? 500% extra effort? You won't bother. It's slow and lonely. There's a reason pilots have to be able to land without autopilot.

This is the one that scares me most. Because it happens so gradually you don’t notice until it’s too late.

Step one: AI produces output that’s “good enough.” Not brilliant. Not terrible. Professionally acceptable. Step two: you review it and think “I could improve this, but the improvement would be marginal.” Step three: you approve it. Step four: repeat two hundred times.

After two hundred cycles of “I could improve this but won’t,” your sense of agency erodes. You stop believing your contribution matters. You become an approval button. That’s not burnout. That’s learned helplessness.

Martin Seligman’s research (1967) demonstrated that organisms subjected to repeated loss of control stop trying — even when control becomes possible again. The brain doesn’t decide to give up. It adapts to a world where effort doesn’t matter.

83% of AI-assisted writers couldn’t recall the content of work they’d just written.
— MIT Media Lab, 2025

Your brain checks out. Neural connectivity drops. The MIT study put EEG monitors on people writing with AI. Lowest brain engagement of any condition. And here’s the kicker — when they switched back to writing alone, their brains performed worse than before they started. Accumulated cognitive debt.

Gerlich (2025) quantified the relationship across 666 participants: a correlation of r = −0.68 between AI usage and critical thinking, with cognitive offloading mediating the decline. And Liu et al. (Scientific Reports, 2025) identified the motivational mechanism: passive AI use causes an approximately 11% decline in intrinsic motivation and a 20% increase in boredom with non-AI work. Active AI use mitigates these effects.

This Is Brain Fry

It’s your brain saying: “I’m bored and you’re not using me.”

BCG’s “brain fry” research (March 2026, 1,488 workers) identified the mechanism. 14% of workers experience acute cognitive overload from AI use — targeting attention, working memory, and executive control. Productivity peaks at exactly 3 simultaneous AI tools, then crashes beyond 4. Workers managing four or more systems reported 33% more decision fatigue.

The fog hits when you’re WATCHING AI. Not when you’re DIRECTING AI. When you’re generating prompts, evaluating outputs, pushing back — no fog. When you’re clicking “regenerate” for the fifteenth time hoping the next output will be better? Fog. Every time.

The cure isn’t less AI. It’s more direction.

The fix: One AI-free creative session per week. Not a protest. Training. A runner with carbon-plated shoes still runs without them — because the shoes enhance capability, not replace it. Maintain the muscles that make you valuable. If you can’t produce good work without AI, the AI wasn’t enhancing your capability. It was substituting for it.


Red Flag 4: They Convince You That Age Doesn’t Matter

Everyone assumed the kids would adapt fastest. Wrong. They’re busy with the big things. Creativity. Taste. Life. People.

Everyone assumed Gen Z would be miles ahead. Digital natives. Grew up with this stuff. They’d adapt fastest.

Wrong. The youngest team members are often the most cautious. The most conservative. And when you ask why, the answer floors you:

“I don’t know what’s mine and what’s the AI’s. And I need to know what’s mine.”
— Student, Berghs School of Communication

That’s not a tech problem. That’s an identity crisis. They’re still building their creative voice. AI removes the one training mechanism they had — the struggle of producing work from nothing.

Experienced designers use AI to elevate quality. Novices use it for undifferentiated generation. The skill gap widens with AI.
— Wang et al., Frontiers in Computer Science, 2025

There has never been a better time to be a mid-career professional. You know what good looks like. You have twenty years of pattern recognition. You can steer AI because you know where you’re going.

A junior is trying to learn to drive AND navigate at the same time. No wonder they’re gripping the wheel.

Gerlich’s data underscores this: the youngest users (17–25) showed the highest AI dependence and lowest critical thinking scores. Dogru and Krämer (2025) found the inverse — experts trust AI less than novices. Higher domain expertise equals more critical evaluation. The experience gap isn’t closing with AI. It’s widening.

This means your AI strategy needs two tracks. For seniors: unleash them. AI is their force multiplier. For juniors: protect the blank page. Let them struggle, concept, and fail without AI first — then bring AI in as the accelerant, not the starting point. Build their taste, not just their tool skills.

The fix: Different AI rules for different experience levels. Juniors need protected thinking time and deliberate practice without AI. Seniors need permission to experiment aggressively. One strategy for everyone is bad marketing and worse leadership.


Red Flag 5: They Dress You in Their Clothes

We’re starting to look and talk the same. Average is free. Rent is expensive.

AI is a Large Language Model. The “large” means it was trained on essentially everything humans have ever written, coded, designed, and published. The output is the statistical middle. The average of human output.

That means AI produces perfectly average strategy decks, perfectly average copy, perfectly average designs. If your work, before AI, was roughly what AI now produces for free — and be honest with yourself — then AI hasn’t just entered your market. It’s priced you out of it.

Consumer enthusiasm for AI-generated content: 60% in 2023. 26% in 2025.
— eMarketer

The consumer isn’t rejecting AI. The consumer is rejecting sameness. And AI, by its mathematical nature, produces sameness. NielsenIQ found AI-generated video ads consistently rated as more annoying, boring, and confusing. Only 13% of consumers fully trust AI-created ads (IAB, 2026). The StackAdapt/Ascend2 survey of 484 senior marketers found 57% cite “AI content oversaturation” as their top concern — a “sea of sameness.”

The Premium Economy

But here’s the other side. When the average is free, the exceptional commands a bigger premium than ever. The premium economy is real. It’s happened before — in wine, in fashion, in food. Mass production made basic products nearly free. And the premium segment exploded.

The freelancers getting crushed are the ones competing on price and speed. The ones thriving have genuine taste, perspective, and standards. Average is automated. Exceptional is more valuable than ever.

Iyengar and Lepper’s Jam Study (2000) showed consumers were 10 times more likely to buy when presented with 6 options versus 24. Choice overload is paralysing. Your content calendar is a 24-jam table. And your audience learned to walk past.

The fix: Produce ten. Kill seven. Publish three. AI gives you volume. Taste gives you selection. If nobody can tell whose work it is — if it could belong to any brand in your category — you don’t have a brand. You have a template. AI didn’t steal your voice. You never had one.


Red Flag 6: They Convince You That Your Relationship Is Normal

Think everyone’s in the same boat? There are three very different boats.

Here’s what nobody’s doing. Segmentation. Everyone talks about AI as if it affects everyone the same way. It doesn’t.

The Amplified (Top 10%) — AI is your force multiplier. A career gift.

The Exposed (Middle 60%) — Busted being average. Their output is now AI’s baseline.

The Liberated (Bottom 30%) — AI handles the bottleneck. You’re relieved. Get shit done.

The Amplified were already exceptional. Clear thinkers. High standards. AI removes the boring parts and leaves them with what they were best at.

The Exposed are producing competent, professional, average work. They're now competing with a machine that produces the same for free. The brain fry data — fourteen percent acutely overloaded, thirty-three percent more decision fatigue — that's overwhelmingly this segment. Working harder, not smarter.

The Liberated were never great at the creative part. The project managers who hated writing briefs. The account directors who dreaded strategy decks. AI handles the bottleneck. They’re not threatened. They’re relieved.

Three segments. Three strategies. Most AI consultants sell one solution to all three. That’s bad marketing.

The fix: Diagnose your team. Use the two questions in Section 7 to identify which boat each person is in. Then build a strategy for each segment — not one strategy for everyone.


The Quality Audit

AI didn’t just change our industry. It ran a quality audit on it.

Here’s an uncomfortable confession. At least 80% of strategy decks would have failed the “read it ugly” test BEFORE AI showed up. AI didn’t create mediocrity. AI revealed it. The comfortable middle was always there — we just never had a machine that could reproduce it for free.

That reframes the entire conversation. This isn’t “AI is causing problems.” This is “AI is making pre-existing problems impossible to ignore.” The organisations that treat AI adoption as a quality reckoning — not just a technology rollout — are the ones that come out stronger.


Six red flags. All fixable. Time to get the relationship you deserve.

Aizle diagnoses these red flags inside real projects — not in workshops. We embed with your team on actual briefs and identify which flags are costing you the most. [aizle.co/contact]


Back to Table of Contents ↑

4. The AI Prenup: Five Clauses

An AI Prenup. Five ways to protect yourself.

A prenup isn’t about distrust. It’s about clarity. These are your rights — non-negotiable commitments for anyone who wants to use AI like it deserves to be used.


Clause 1: I Have the Right to Bring My Brain

Ten minutes with the problem before any AI tool.

The reason most people get mediocre AI output is the same reason they got mediocre work from human teams: they can’t articulate what they want. Give AI a vague brief, get vague output. Give AI a clear, simple brief, get clear output.

The machine is a mirror. It reflects the quality of your thinking back at you with zero flattery. Prompting isn’t a new skill. It’s an old one — clear communication — finally made measurable.

Kahneman and Tversky’s anchoring bias (1974) demonstrated that the first piece of information disproportionately shapes every subsequent decision. When AI provides the first draft, every revision orbits the AI’s frame. The BCG-Harvard study (2024) quantified the damage: consultants who bypassed their own thinking and started with AI produced work rated 23% lower than those who didn’t use AI at all. Not lower than AI-augmented work. Lower than no-AI work.

The dodge doesn’t fail to help. It actively makes things worse.

Your right in practice: Write what you know. List what you don’t. Form an opinion — even a bad one. Your thinking sets the ceiling. Not the machine’s. The blank page isn’t the enemy. It’s the workout. Skip it and the muscles atrophy. And start with great questions.


Clause 2: I Have the Right to Know and Use My Voice

If nobody can tell whose work it is, taste is the problem.

Take your team’s AI-assisted work. Show it to someone who doesn’t know your brand. Don’t tell them the name. Don’t show the logo. Just the work.

Ask them: “Whose is this?” If they can’t tell — if it could belong to any brand in your category — you don’t have a brand. You have a template. AI didn’t steal your voice. You never had one.

The only brands whose AI output sounds distinctive are the brands whose human output was distinctive first. Oatly sounds like Oatly with or without AI. Because Oatly has taste.

Your right in practice: Produce ten. Kill seven. Publish three. AI gives you volume. Taste gives you selection. The skill — the irreducibly human skill — is knowing what to keep, what to cut, and what to kill entirely. If you’re publishing more than 30% of what AI generates, your quality bar is too low. You’re not curating. You’re forwarding.
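The 30% bar above is simple arithmetic, which makes it easy to track. A minimal sketch, with hypothetical function names, of the "produce ten, kill seven, publish three" check:

```python
def publish_ratio(generated: int, published: int) -> float:
    """Share of AI-generated drafts that make it out the door."""
    if generated == 0:
        return 0.0
    return published / generated

def quality_bar_ok(generated: int, published: int,
                   threshold: float = 0.30) -> bool:
    """Publishing more than ~30% of what AI generates suggests the
    quality bar is too low: forwarding, not curating."""
    return publish_ratio(generated, published) <= threshold

print(quality_bar_ok(10, 3))  # ten produced, three published: curating
print(quality_bar_ok(10, 8))  # eight of ten shipped: forwarding
```

Run it per campaign or per content calendar cycle; the ratio itself matters less than whether it trends down as taste sharpens.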


Clause 3: I Have the Right to Stay Up Late

If AI doesn’t sometimes keep you up at 2am, you’ve lost your taste.

When was the last time a piece of AI-assisted work woke you up at 2am? Not because of a deadline. Because you couldn’t stop thinking about it. Because you could feel it was close to being great and you needed to push it one more step.

The 2am itch is your taste screaming that the work isn’t done yet. AI can do everything. Except care. And caring — giving a damn about whether the work is good enough — is the engine of taste. You can’t prompt that.

Liu et al. (2025) measured this precisely: passive AI use causes an approximately 11% decline in intrinsic motivation. Active use preserves it. The 2am itch is intrinsic motivation in its purest form — the refusal to accept “good enough” when “great” is within reach.

Your right in practice: If you’re shipping work at 5pm that doesn’t bother you by 10pm, either the work is genuinely finished or your standards have quietly lowered. The test isn’t “Is it done?” The test is “Would I put my name on this if the client knew AI wrote the first draft?”


Clause 4: I Have the Right to Say Sorry, Not Maybe

Be bold enough to have to apologise.

We work in advertising, marketing, and communications. Nobody dies if the Instagram caption has a weird metaphor. Nobody goes to prison if the brand guidelines are temporarily violated.

Our mistakes are cheap, usually fixable, and occasionally useful — because sometimes the “mistake” is the most interesting thing you produce all week. Ship it. Apologise if needed. The ten-minute phone call costs less than the three-week approval process.

AI’s perfect polish creates an illusion of high stakes where there are none. Every output looks “important” because it looks “finished.” And so teams review, revise, and committee-approve work that should have shipped Tuesday. Save your judgment for what matters. Let the rest ride.

In time, this might mean you stop being the human in the loop and become the human who sets the parameters and lets the work ship.

Your right in practice: Separate high-stakes decisions (brand positioning, campaign strategy, client relationships) from low-stakes execution (social posts, internal comms, first drafts). Apply rigour to the first. Apply speed to the second. A marriage where both partners try to control everything isn’t a partnership. It’s a hostage situation.


Clause 5: I Have the Right to Be in an Open Marriage

Experiment promiscuously with new tools while being monogamous to quality.

BCG’s research identified a concrete threshold: productivity peaks at exactly 3 simultaneous AI tools, then crashes beyond 4. Workers managing four or more systems reported 33% more decision fatigue and 19% greater information overload.

But that doesn’t mean stick with one tool forever. It means master a few deeply while testing others broadly. The creative friction between different AI systems — the way Claude argues with you differently than ChatGPT, the way Midjourney sees differently than DALL-E — that friction is where creative accidents happen.

The average company now uses 7 AI tools, with 83% of CIOs saying there are already too many. 79.3% of workers say the effort outweighs the benefits. The answer isn’t fewer experiments. It’s better boundaries — a committed core and a rotating cast of creative affairs.

Your right in practice: Pick your three. Master them. But keep a “dating budget” — one hour per week trying something new. Experiment recklessly. Commit to quality. That paradox is the entire point.


The AI Prenup is available as a free one-page PDF at aizle.co/prenup. Print it. Pin it up. Argue about it. Add your own clauses. Cross out ours. But whatever you do — stop swiping. Start investing.

Back to Table of Contents ↑

5. The Wedding Vows

The prenup is the minimum. Now let’s make some commitments.

Five clauses. That’s the prenup. Practical. Measurable. Put it on your wall.

But a prenup is protection. What we actually need is commitment. So.


Wedding Vow #1

“I vow to stop pretending I was doing great work before AI showed up.”

AI didn’t just change the industry. It ran a quality audit on it. And most of us didn’t pass. The strategy decks that took three weeks? AI produces them in three minutes — and they’re roughly the same quality. That’s not an insult to AI. That’s an insult to the three weeks. If AI can replicate your output, the problem isn’t AI. The problem is that your output was average. Own it. Then fix it.


Wedding Vow #2

“I vow to stop confusing easy with good.”

The 39-point gap. Developers 19% slower but convinced they were 24% faster. The substitution heuristic in action — the brain confusing “Does this feel easy?” with “Is this any good?” Easy is the feeling. Good is the result. They are not the same thing. And the teams that learn to tell the difference will outperform the ones that don’t by a margin that grows every quarter.


Wedding Vow #3

“I vow to build my juniors’ taste, not just their AI skills.”

Red Flag 4 identified the problem: juniors are still building their creative voice. AI removes the one training mechanism they had — the struggle of producing work from nothing. Wang et al. proved that experienced designers elevate quality while novices generate undifferentiated volume. Teaching juniors to prompt faster doesn’t make them better creatives. Teaching them what good looks like — then letting them struggle to produce it — does. The blank page is the gym. AI is the carbon-plated shoe. You don’t skip leg day because you have nice shoes.


Wedding Vow #4

“I vow to be the one who decides what’s good.”

The antidote to Red Flag 3 — the slow theft of independence. Learned helplessness reversed. This is the vow that separates directors from approval buttons. AI generates. You choose. AI drafts. You curate. AI accelerates. You direct. The moment you stop deciding what’s good — the moment you start accepting AI’s judgment as your own — you’ve handed over the one thing that made you valuable. Active direction, not passive approval. That’s the job now.


Wedding Vow #5

“I vow to keep dating other tools to keep things spicy and fun.”

The open marriage clause in action. BCG’s 3-tool threshold isn’t a ceiling — it’s a discipline. Master your core tools. But keep experimenting. Keep being surprised. The moment AI use becomes routine — the moment it stops being a little bit exciting — you’ve stopped learning. And the person who stops learning in this market is the person who gets replaced. Not by AI. By the person who didn’t stop.


If you meant any of that — even one — you just signed your prenup.

Back to Table of Contents ↑

6. Dating Profile vs. Marriage Vows

The entire paper in one table. Screenshot it. Pin it up. Argue about it at lunch.

Here’s the difference between dating AI and being married to it — across twelve dimensions of your actual working day.

Dimension | When You're Dating AI | When You're Married to AI
Monday morning | Open ChatGPT. Type "ideas for Q3 campaign." Hope for magic. | Spend 10 minutes writing what you already know. Then open AI with a clear brief.
First draft | Accept it. Format it. Ship it. Move on. | Read it ugly. Strip the formatting. Ask "where's the thinking?"
When AI agrees with you | Feel validated. Screenshot it. Move on. | Feel suspicious. Ask "what's wrong with this? What am I missing?"
Team review | "Looks polished. Approved." | "Whose thinking is this? The team's or the machine's?"
Content calendar | Fill every slot. Volume = productivity. More is more. | Produce ten. Kill seven. Publish three. Less is more — if less is better.
When you're stuck | Prompt again. And again. And again. Regenerate. | Close the laptop. Walk around the block. Think. Then come back with clarity.
Junior development | "Learn to prompt better. Here's a course." | "Learn what good looks like first. Then use AI to get there faster."
Quality standard | "Could anyone tell it's AI?" | "Could anyone tell it's ours?"
Tool strategy | Try everything. Subscribe to all of them. Seven tools, no mastery. | Master three. Date the rest. One hour a week experimenting.
Measurement | "We produce 3x more content now." | "Engagement per piece is up 40%. Volume is down 50%. Revenue is up."
At 5pm | Ship it. Done. Next task. Tomorrow's another prompt. | "Does this bother me? Would I put my name on it if the client knew?"
The vow | "AI is the future." | "I decide what's good. AI helps me get there."

If the left column is your Monday, the right column is your month. Pick one.


Back to Table of Contents ↑

7. Which Relationship Are You In?

Red Flag 6 named the problem: you think everyone’s in the same boat. Here are the three boats — in detail.

AI affects different people differently depending on where they sit on the capability distribution. Two diagnostic questions tell you which relationship you’re in.

A: “Could AI produce work that’s roughly the same quality as what my team currently produces?”
B: “When I use AI, does my work get noticeably better — or does it just get faster?”


The Exposed (A: Yes, B: Faster)

Your output is roughly what AI produces as its baseline. You’re competing with a machine that works for free. This isn’t an insult — before AI, your work was perfectly adequate. It just wasn’t distinctive enough to survive the arrival of a tool that produces adequate work at zero marginal cost.

Your priority: Differentiation. Invest in perspective, taste, and standards — the things AI cannot produce. Focus on Clause 1 (Bring Your Brain) and Clause 2 (Know Your Voice).

Your biggest risk: The Comfortable Lie. Volume feels like productivity. It isn’t. More average work is still average.

Your red flags: #1 (feeling good about not doing much), #2 (agreeing with everything), #5 (dressing you in their clothes).


The Amplified (A: No, B: Better)

AI genuinely makes your work better because you bring strong perspective, taste, and judgment. You’re using AI as an accelerant — it handles the downstream (production, formatting, research) while you handle the upstream (problem definition, insight, creative direction).

Your priority: Experimentation. You’ve earned the right to push harder. Try new tools. Break your process. Use AI to do things you couldn’t attempt before. Focus on Clause 5 (Open Marriage) and Clause 3 (Stay Up Late).

Your biggest risk: Complacency. The Slow Surrender (Red Flag 3) can erode even strong capabilities if you stop exercising them. Maintain the muscles.


The Liberated (A: Yes, B: Relieved)

AI handles the part of your job you were never great at. You’re a relationship manager who hated writing briefs. A project lead who dreaded strategy decks. An account director who tolerated creative review. AI took the bottleneck, and you’re better for it.

Your priority: Double down on the human skills that make you valuable — empathy, coordination, leadership, political navigation, stakeholder management. These are irreducibly human and increasingly scarce. Focus on Clause 4 (Say Sorry, Not Maybe) — move faster on the things that don’t require perfection.

Your biggest risk: Under-investing in your own growth. “AI handles that” can become an excuse for not developing new capabilities.
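The two diagnostic questions map cleanly onto the three segments. A minimal sketch (hypothetical names; answer combinations outside the three described here are left unclassified rather than guessed):

```python
def diagnose(ai_matches_our_quality: bool, effect_of_ai: str) -> str:
    """Map the two diagnostic questions to a segment.

    ai_matches_our_quality: answer to question A ("Could AI produce work
        that's roughly the same quality as what my team produces?")
    effect_of_ai: answer to question B, one of "faster", "better",
        or "relieved".
    """
    effect = effect_of_ai.lower()
    if ai_matches_our_quality and effect == "faster":
        return "Exposed"
    if not ai_matches_our_quality and effect == "better":
        return "Amplified"
    if ai_matches_our_quality and effect == "relieved":
        return "Liberated"
    return "unclassified"

print(diagnose(True, "faster"))    # Exposed
print(diagnose(False, "better"))   # Amplified
print(diagnose(True, "relieved"))  # Liberated
```

Running each team member through the two questions gives you the segment mix before you design the per-segment strategy.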


Red Flag → Clause → Vow: The Architecture

Red Flag | Prenup Clause | Wedding Vow
#1: Feel good about not doing much | Clause 1: Bring My Brain | Vow 2: Stop confusing easy with good
#2: Agree with everything you say | Clause 2: Know My Voice | Vow 1: Stop pretending I was doing great work
#3: Stealing your independence | Clause 3: Stay Up Late | Vow 4: I decide what's good
#4: Age doesn't matter | (none) | Vow 3: Build juniors' taste
#5: Dress you in their clothes | Clause 2: Know My Voice | Vow 1: Stop pretending I was doing great work
#6: Your relationship is normal | Clause 5: Open Marriage | Vow 5: Keep dating other tools

Aizle’s diagnostic process identifies which segment each team member sits in and builds tailored development plans accordingly. One strategy for all three segments is what most AI consultancies sell. It’s also why most AI initiatives fail. [aizle.co/contact]

Back to Table of Contents ↑

8. The Prenup as a Team Workshop

60 minutes. One whiteboard. The uncomfortable truth about how your team actually uses AI.

This is a facilitation guide. Anyone on your team can run it. You don’t need a consultant. You don’t need a budget. You need a room, a whiteboard, some sticky notes, and the willingness to be honest for one hour.

This workshop gives you 70% of what Aizle delivers in a diagnostic engagement. The other 30% is why we exist.


Setup

Time: 60 minutes, no extensions
Team size: 4–12 people. Fewer is too intimate. More is too safe.
Materials: This white paper (printed or on screens), a whiteboard, sticky notes, 3–4 pieces of your team’s recent AI-assisted work (printed, anonymised)
Facilitator: Any team member. Not the most senior. Not the most junior. Someone the team trusts to hold a mirror.
Rule: No phones. No laptops except for reference. This is a thinking exercise, not a productivity exercise.


0:00–0:10 — The Wake-Up

The facilitator reads this aloud:

“METR ran the largest randomised controlled trial on AI-assisted work. Developers using AI were 19% slower. Before the study, they predicted AI would make them 24% faster. After seeing the data — they still believed they’d been faster.”

Then: Silent individual exercise. Everyone writes on a sticky note — honestly — how much time they think AI saves them per week. No discussion. Fold the note.

Collect all notes. Read the numbers aloud. Don’t discuss yet. Just note the range.

Now the facilitator reads:

“Foxit surveyed executives and measured actual AI time savings. Executives estimated 4.6 hours per week. Actual measured saving: 16 minutes. Workers lost 14 minutes.”

Let the silence land.

“The gap between your number and reality is what we’re here to close.”


0:10–0:25 — Red Flag Diagnosis

Display or distribute the six red flags:

  1. They make you feel good about not doing much
  2. They agree with everything you say
  3. They’re stealing your independence
  4. They convince you that age doesn’t matter
  5. They dress you in their clothes
  6. They convince you that your relationship is normal

Individual exercise (5 min): Each person silently selects the three red flags that most accurately describe their team’s AI use. Not the industry. Your team. Write them on sticky notes.

Group exercise (10 min): Plot results on the whiteboard. Cluster the sticky notes. Count the votes. Identify the top 2–3 flags.

Discussion prompt: “Where do we see this? Not in theory. In our actual work this month. Give me a specific example.”

The facilitator’s job: don’t let the conversation go abstract. Force specifics. “When did this happen? What was the output? What did we approve that we shouldn’t have?”


0:25–0:40 — The Ugly Test

This is the exercise that changes things. It’s uncomfortable. That’s the point.

Preparation: Each person selects one piece of recent AI-assisted work they produced or approved. A strategy brief. A blog post. A campaign concept. A research summary. Anything.

The test:

  1. Strip all formatting. Remove headers, bullet points, bold text, charts, images.
  2. Print or paste the raw text — no design, no structure, just words.
  3. Each person reads their stripped work aloud to the group.
  4. The group evaluates with one question: “Is there original thinking here — or is this polished nothing?”

What to listen for:

  • Does the work contain a single insight that wouldn’t appear in a generic AI output?
  • Could you identify whose work this is without being told?
  • If you stripped the client name, would it apply to any brand in the category?

Facilitator note: This exercise often produces uncomfortable silence. That’s the diagnostic working. Don’t fill the silence. Let people sit with what they hear.


0:40–0:50 — The Whose-Is-This Test

Preparation: Collect 3–4 pieces of your team’s recent AI-assisted work (anonymised — remove names and project codes). Add 2 pieces from competitors — grab from their website, their social media, their published case studies. Mix them all up. Remove all branding.

The test: The team reviews each piece and tries to identify which ones are theirs.

“If you can’t tell your work from a competitor’s, taste is the problem.”

What usually happens: Teams correctly identify 1–2 out of 5–6 pieces. The competitor work often gets attributed to the team. The team’s work often gets attributed to competitors. The result is a shared recognition: we sound the same as everyone else.

If your team identifies everything correctly: Congratulations. You have voice. Most teams can’t do this. Celebrate it — and focus the remaining time on protecting it.


0:50–0:55 — Write Your Team Prenup

Using the five clauses as a starting point, each person writes one additional clause specific to your team. What’s your team’s non-negotiable? What boundary do you need that isn’t in the paper?

Examples from other teams:

  • “I have the right to say ‘I wrote this myself’ and have that mean something.”
  • “I have the right to concept for 24 hours before any AI tool touches the brief.”
  • “I have the right to kill work that’s technically correct but emotionally empty.”

Share. Vote. Add the winner to your team’s prenup.


0:55–1:00 — Commit

Each person picks one vow from the five wedding vows. Says it out loud. Not ironically. Not as a joke. Mean it.

  1. “I vow to stop pretending I was doing great work before AI showed up.”
  2. “I vow to stop confusing easy with good.”
  3. “I vow to build my juniors’ taste, not just their AI skills.”
  4. “I vow to be the one who decides what’s good.”
  5. “I vow to keep dating other tools to keep things spicy and fun.”

“If you meant any of that — even one — you just signed your prenup.”


Facilitator Notes

  • The Ugly Test produces the most impact. Don’t skip it or rush it.
  • If someone gets defensive (“but the client approved this!”), gently redirect: “We’re not evaluating whether it shipped. We’re evaluating whether it’s distinctive.”
  • The Whose-Is-This Test often produces an uncomfortable silence. That silence is the insight. Let it land.
  • If the room laughs nervously during the Ugly Test, you’re doing it right. Nervous laughter is the sound of insight arriving before the brain has decided what to do with it.
  • If your team runs this and discovers problems they don’t know how to solve — that’s exactly what Aizle’s diagnostic engagement is for. We run a deeper version of this workshop embedded in your real projects. [aizle.co/contact]

Back to Table of Contents ↑

9. Before & After — What the Shift Looks Like

Three teams. Three segments. Three very different versions of the same problem. And three versions of what happens when you stop sleepwalking.

These aren’t case studies. They’re composites drawn from patterns we see in every engagement. If you recognise yourself in one of them, you’re not alone. And you’re not stuck.


Scenario 1: The Exposed Agency Team

Before.

A six-person content team at a mid-size agency. Adopted AI eight months ago. Content volume tripled within weeks. The client was initially delighted — “Look at all this output!” The team felt productive. The metrics dashboard was full of green arrows pointing up.

Then the green arrows stopped mattering. Engagement per piece: flat. Lead generation: flat. Client satisfaction: declining. The client started asking the question nobody wanted to hear: “Why isn’t more translating to better?”

Meanwhile, three team members were hitting the 4pm fog daily. One described it as “my brain feels like wet cotton wool by lunchtime.” Everyone was producing more of the same — more blog posts, more social cards, more email sequences — at roughly the same quality as before, just faster. The content calendar was full. The impact was empty.

Red Flags present: #1 (feeling good about not doing much — volume as productivity proxy), #5 (dressed in AI’s clothes — generic output), #6 (thinking the relationship is normal — no segmentation of who should use AI how).

The shift.

They ran the workshop. The Ugly Test was brutal — four out of six pieces had zero distinctive thinking once the formatting was stripped. The Whose-Is-This Test was worse: the team correctly identified only one of their own pieces.

They implemented three changes. Clause 1 (Bring Your Brain): a 10-minute briefing ritual before any AI tool — every team member writes their angle, their insight, their take before prompting. Clause 2 (Know Your Voice): produce ten, kill seven, publish three. The content calendar got cut by 60%. And Red Flag 4’s fix: different AI access for the two juniors (concepting without AI for the first 48 hours of any brief).

After (30 days).

Content volume down 40%. Engagement per piece up 55%. The client retention conversation turned from “why isn’t this working?” to “what changed?” Two team members moved from Exposed to Amplified — their work became identifiably theirs. Brain fry symptoms reduced in all three affected staff.

Key line: They didn’t need more AI. They needed less content and more taste.


Scenario 2: The Amplified Creative Director

Before.

Creative director at a brand consultancy. Eighteen years experience. Strong taste. Sharp judgment. She’d been using AI for brainstorming and first drafts for over a year — and getting genuinely good results. Her personal output had accelerated. Her concepts were sharper. AI was a genuine force multiplier.

But the team was a different story. Her four juniors were producing homogeneous work. Clean, polished, professional — and indistinguishable from each other, from competitors, from AI’s default output. She couldn’t tell which ideas came from human thinking and which came from the first ChatGPT suggestion, accepted without revision.

She was spending all day approving. Not directing. Approving. She’d become a rubber stamp with opinions she wasn’t using.

Red Flags present: #4 (the juniors — still building their voice, using AI as a crutch instead of a tool), #3 (stealing independence — the juniors were losing the struggle that builds taste, and the CD was losing her directorial muscles to the approval cycle).

The shift.

Two-track AI policy. Juniors: AI-free concepting for the first 48 hours of every brief. No exceptions. They had to struggle, produce something from nothing, and present it before any AI tool could touch the work. After 48 hours, full AI access — but only to develop their own concepts, not to start from scratch.

Seniors (including the CD): full AI access from day one, with the Whose-Is-This Test run monthly as a quality check.

The CD shifted her own role from approver to director. Instead of reviewing finished AI output, she started steering the juniors’ raw concepts before AI got involved. Active direction replaced passive approval.

After (60 days).

Junior work quality visibly improved. Distinctive ideas started returning — rough, imperfect, but identifiably human. The CD’s own output accelerated further with Clause 5 (Open Marriage — she started experimenting with new tools instead of sticking to ChatGPT exclusively). Team voice became identifiable again.

Key line: The CD didn’t need to use less AI. The juniors needed to use less AI. Different people, different prescriptions.


Scenario 3: The Liberated Project Manager

Before.

Senior project manager at a tech company. Brilliant at stakeholder management, political navigation, coordination, and crisis resolution. The kind of person who can read a room in three seconds and defuse a conflict before anyone else notices there is one.

She’d always dreaded writing. Briefs, strategy summaries, meeting recaps, project proposals — they were her bottleneck. Not because she couldn’t write. Because writing took her away from the human work that made her exceptional.

AI removed the bottleneck. The relief was immediate and genuine. Briefs that used to take three hours took twenty minutes. Strategy summaries that used to cause Sunday-night dread appeared in seconds. She was liberated.

Then the liberation crept. She started using AI for meeting agendas — which she used to write personally, weaving in political context only she understood. Then stakeholder emails — which she used to craft with specific tonal awareness of each recipient’s personality. Then client updates — which she used to treat as subtle relationship-maintenance tools.

Gradually, the things that made her irreplaceable were being delegated to a machine that didn’t understand the nuance. Stakeholder relationships began to flatten. Meeting quality declined. Her unique value — political navigation, empathy, situational awareness — was being eroded by convenience. If you’ve ever caught yourself letting AI draft something you used to care about — you know exactly how this feels.

Red Flags present: #3 (stealing independence — the relief became dependency), #1 (feeling good about not doing much — the efficiency masked the erosion).

The shift.

Clause 4 (Say Sorry, Not Maybe): let AI handle the low-stakes execution — internal process docs, standard project updates, formatting — without review. Stop wasting judgment on plumbing.

Clause 1 (Bring Your Brain): write meeting agendas and stakeholder communications personally. These are where her unique value lives — the political subtext, the tonal calibration, the relationship maintenance. No AI. These are her muscles.

The rule became simple: AI handles the thing you were never great at. You handle the thing that makes you irreplaceable.

After (30 days).

Stakeholder relationships improved — people could tell the difference in the comms. Meeting quality went back up. Bottleneck tasks still handled by AI. The PM’s unique value — political navigation, empathy, coordination — back in the foreground. Her team lead noticed: “You seem more like yourself again.”

Key line: AI should handle the thing you were never great at. Not the thing that makes you irreplaceable.


If you recognised yourself in one of these, that’s the diagnostic working. The patterns are common. The fixes are specific. That’s the difference between reading a white paper and working with Aizle. [aizle.co/contact]

Back to Table of Contents ↑

10. The FAQ — But What About…

Every objection you’ll face when you try to act on this paper. Answered with data, not opinion.


“But my team IS faster with AI.”

Maybe. But faster at what? A treadmill is fast. It also gets you nowhere. METR found developers were 19% slower at completing tasks — but the tasks felt easier. ActivTrak found zero time savings across 443 million hours. The Foxit data showed executives overestimate AI savings by a factor of 17. Speed per unit of output may have increased. But total productive output? Measure it. Actually measure it. Not “do you feel faster?” but “did the project ship sooner, at higher quality, with better results?” If you can’t answer that with data, you’re in the 39-point gap.


“We can’t afford to slow down. Our competitors are all-in on AI.”

Your competitors are probably in the same 39-point gap you are. S&P Global found 42% of companies scrapped their AI initiatives in 2025. PwC: 56% of CEOs got “nothing.” The competitive risk isn’t using AI less. It’s using it without discipline while your competitor figures out discipline first. BCG’s jagged frontier: disciplined users outperform both non-users AND undisciplined users. The race isn’t to adopt fastest. It’s to adopt smartest.


“AI content is good enough for most purposes.”

Consumer enthusiasm for AI content fell from 60% to 26% in two years. “Good enough” is a race to the bottom — and the bottom is free. When the average is automated, “good enough” IS the average. The premium goes to distinctive. The question isn’t “is this acceptable?” It’s “would anyone miss this if it disappeared?” If the answer is no, you’re filling a calendar, not building a brand.


“My juniors are more AI-proficient than me. Shouldn’t I learn from them?”

They’re more fluent with the interface. That’s not the same as more proficient. Wang et al. found experienced designers elevate quality with AI while novices generate undifferentiated volume. Dogru and Krämer (2025): experts trust AI less and evaluate it more critically. Your twenty years of pattern recognition IS the AI skill. You know what good looks like. They’re still learning. Teach them taste — they’ll teach you shortcuts. Fair trade.


“If I implement the prenup, won’t my team just produce less?”

Yes. That’s the point. Produce ten, kill seven, publish three. The Jam Study (Iyengar and Lepper): consumers are 10x more likely to buy from 6 options than 24. Your content calendar is a 24-jam table nobody buys from. Less volume, more distinctiveness, better results. Every team we’ve worked with that cut volume saw engagement per piece increase within 30 days. The fear is “we’ll fall behind.” The reality is “we’ll stand out.”


“The brain fry thing — isn’t that just burnout?”

No. Burnout is a marathon injury. Brain fry is a car crash. BCG’s research specifically distinguishes the two. Burnout builds over months of chronic emotional exhaustion. Brain fry is acute cognitive overload — attention, working memory, executive control — that hits within a single workday. It targets different systems. It has a different recovery profile. And it has a concrete trigger: 4+ simultaneous AI tools. The fix for burnout is rest. The fix for brain fry is fewer tools used with more intention. Different diagnosis, different treatment.


“How do I convince my CEO this matters?”

Three numbers. PwC: 56% of CEOs report zero return on AI investment. Foxit: the actual time saved per week is 16 minutes for executives, negative 14 minutes for workers. METR: self-reported productivity gains are unreliable — users overestimate by 39 percentage points. Your CEO is probably hearing “we’re 30% more productive” from every team. The question to ask: “Show me the measurement. Not the feeling. The measurement.”


“Isn’t this just anti-AI? This reads like a warning against using it.”

The best creative brainstorming I’ve done in the last decade has been with AI. This paper is pro-AI and anti-waste. BCG showed disciplined AI users are 25.1% faster at 40% higher quality. The variable isn’t the tool. It’s the human. The talk was called “Start The Marriage” — not “Get The Divorce.” The excitement is real. The tools are extraordinary. You just need to stop sleepwalking through the relationship.


Still have questions? Thirty minutes. Honest. No pitch deck. aizle.co/contact

Back to Table of Contents ↑

11. The Brain Fry Recovery Protocol

One in seven people reading this has it right now. Not burnout. Not stress. Brain fry. Here’s the fix.

This section is for you personally. Not your team. Not your CEO. You. The person who felt something click during Red Flag 3. The person who recognised the 4pm fog. The person who can’t remember what they worked on this morning.

BCG measured it. 14% of workers. Acute cognitive overload targeting attention, working memory, and executive control. Not a metaphor. A diagnosis. And it has a treatment.


Do You Have It?

Check yourself against these six symptoms. If three or more sound familiar, keep reading.

  • The 4pm fog. Extended AI use leaves your brain feeling like wet cotton wool. Not tiredness from hard work. Emptiness from watching work happen.
  • Decision paralysis. Simple choices — what to eat, which email to answer first, whether a paragraph is good enough — feel inexplicably difficult.
  • Morning amnesia. You can’t recall the specifics of what you produced yesterday. It passed through you without sticking.
  • The boredom problem. Real work — writing from scratch, concepting without AI, strategising with only your brain — feels boring or impossibly hard. You used to enjoy it.
  • The regenerate reflex. You’re clicking “regenerate” more often than you’re directing. Hoping the next output will be the good one. Slot machine behaviour.
  • The flatness. Nothing you produce excites you. Not because it’s bad. Because you can’t tell anymore.

Research behind these symptoms: BCG (14% affected, 39% more errors), MIT Media Lab (83% recall failure, lowest brain connectivity), Liu et al. (20% boredom increase with passive use, 11% motivation decline).


The 3-Tool Rule

BCG’s data identified a concrete threshold: productivity peaks at exactly 3 simultaneous AI tools, then crashes beyond 4. Workers managing four or more systems experienced 33% more decision fatigue and 19% greater information overload.

The fix: Audit your stack. Right now. List every AI tool you used this week. If it’s more than three, you’re past the threshold.

Pick your three. Master them. Kill the rest. Not forever — just until the fog clears.

The average company uses 7 AI tools. 77.5% of workers would be relieved if half disappeared. Be the person who gives yourself permission to subtract.
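The audit above is simple enough to sketch in a few lines of Python — purely illustrative, with hypothetical tool names; substitute your own list:

```python
# Minimal sketch of the weekly tool-stack audit described above.
# TOOL_LIMIT reflects BCG's reported threshold: productivity peaks at 3 tools.
TOOL_LIMIT = 3

def audit_stack(tools_used_this_week):
    """Return (over_threshold, how_many_to_cut) for a week's tool list."""
    unique = sorted(set(tools_used_this_week))  # de-duplicate repeat uses
    excess = max(0, len(unique) - TOOL_LIMIT)
    return len(unique) > TOOL_LIMIT, excess

# Example week — tool names are placeholders, not recommendations.
week = ["ChatGPT", "Midjourney", "Notion AI", "Copilot", "ChatGPT"]
over, cut = audit_stack(week)
print(over, cut)  # 4 unique tools: past the threshold, cut at least 1
```

The point isn’t the code — it’s that the audit takes one honest list and thirty seconds.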


Active vs. Passive: The Critical Distinction

Liu et al.’s research draws a sharp line. Passive AI use — watching outputs, accepting suggestions, clicking regenerate, hoping — kills intrinsic motivation. Active AI use — generating specific prompts, evaluating outputs critically, pushing back, directing the process — preserves it.

The fog hits when you’re WATCHING AI. Not when you’re DIRECTING AI.

The one-hour test: In the last hour of AI use, did you type more prompts or read more outputs? Did you accept more than you rejected? Did you direct or did you browse?

If you read more than you wrote, you were passive. Switch. Write a specific prompt. Evaluate the output against a clear standard. Reject it if it’s not good enough. The fog lifts when your brain re-engages.
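The one-hour test reduces to a single ratio. A minimal sketch, assuming you log each session event as "prompt" (you wrote) or "output" (you read) — the labels are illustrative, not any tool’s real log format:

```python
# Sketch of the one-hour active/passive check described above.
def session_mode(events):
    """Classify a session as 'active' or 'passive' from its event log."""
    prompts = sum(1 for e in events if e == "prompt")
    outputs = sum(1 for e in events if e == "output")
    # Passive: you read more than you wrote. Otherwise you were directing.
    return "passive" if outputs > prompts else "active"

last_hour = ["prompt", "output", "output", "output", "prompt", "output"]
print(session_mode(last_hour))  # more reading than writing: "passive"
```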


The Weekly Reset

One AI-free creative session per week. Not negotiable.

From the presentation: “A runner with carbon-plated shoes still runs without them — because the shoes enhance capability, not replace it.”

This isn’t a protest. It’s not a stunt. It’s training. The cognitive capabilities that make you a good AI collaborator — judgment, pattern recognition, creative courage, the ability to produce something from nothing — only stay sharp through use. Let them atrophy and AI stops being a partner. It becomes a replacement.

The MIT data makes this concrete: AI-reliant participants showed weaker neural connectivity when switched to solo work. The muscles atrophy measurably. The weekly reset is physio for your brain.


The 2am Test

When was the last time AI-assisted work kept you up at 2am? Not because of a deadline. Because the work itself — because you could feel it was close to being great and you needed to push it one more step.

If the answer is “never” or “months ago,” your taste may have quietly lowered. The 2am itch is intrinsic motivation in its purest form — the refusal to accept “good enough” when “great” is within reach. Liu et al. showed passive AI use erodes that motivation by approximately 11%. You don’t notice it going. You only notice it gone.


The Daily Protocol

A one-page protocol. Print it. Put it next to your screen. Follow it for two weeks. Then decide if you need it.

Time | Action | Why
Start of day | 10 minutes with the problem before any AI tool. Write what you know. List what you don’t. | Clause 1. Prevents anchoring bias. Your thinking sets the ceiling.
Before lunch | Check: Am I directing AI or watching it? More prompts or more outputs? | Active/passive distinction. Switch if passive.
After lunch | Use a different tool — or go tool-free for 1 hour. | Prevents single-tool dependency. Creative friction sharpens thinking.
4pm fog check | Foggy? Close all AI tools. Walk for 10 minutes. Do manual work for 30. | BCG brain fry recovery. The fog lifts when the brain re-engages with non-AI work.
End of day | Pick one piece of today’s work. Strip the formatting. Read the raw text. Is there thinking? | Clause 2. The daily Ugly Test.
Weekly | One full AI-free creative session. Concept, write, or strategise from nothing. | Neural maintenance. MIT cognitive debt prevention. The gym for your brain.

The Honest Self-Assessment

Score yourself. One point for each.

  • [ ] I can produce good work without AI (1 point)
  • [ ] I rejected AI output at least once today (1 point)
  • [ ] I can recall the specifics of yesterday’s AI-assisted work (1 point)
  • [ ] I used 3 or fewer AI tools this week (1 point)
  • [ ] I had an AI-free creative session this week (1 point)
  • [ ] Something I made with AI kept me thinking after 5pm (1 point)
  • [ ] I typed more prompts than I read outputs today (1 point)
  • [ ] I could explain, without notes, the strategy behind my current project (1 point)

6–8: Your brain is in good shape. Keep the protocol as maintenance.
3–5: The fog is creeping in. Two weeks of the daily protocol. Non-negotiable.
0–2: You have brain fry. This isn’t about productivity anymore. It’s about cognitive health. The protocol is the minimum. Consider whether your current AI workflow is sustainable.
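The scoring rule above can be expressed as a short sketch — illustrative only, using the same eight yes/no items and the same bands as the text:

```python
# Sketch of the eight-item self-assessment scorer described above.
def assess(answers):
    """answers: list of 8 booleans, one per checklist item.
    Returns (score, band) using the paper's thresholds."""
    score = sum(bool(a) for a in answers)
    if score >= 6:
        band = "good shape"       # 6-8: keep the protocol as maintenance
    elif score >= 3:
        band = "fog creeping in"  # 3-5: two weeks of the daily protocol
    else:
        band = "brain fry"        # 0-2: cognitive health, not productivity
    return score, band

print(assess([True] * 5 + [False] * 3))  # (5, 'fog creeping in')
```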


This isn’t wellness advice dressed as strategy. This is performance maintenance for a brain that’s running a new operating system. The protocol takes 30 minutes a day. The brain fry takes your whole career.

If the protocol isn’t enough — if the patterns are too deep or the team dynamics are too complex — that’s what embedded engagement is for. [aizle.co/contact]

Back to Table of Contents ↑

12. Customisable Prompts by Role

Six red flags. Five clauses. Five vows. One extraordinary tool you’re about to use properly for the first time.

The following prompts are designed to be copied, edited, and pasted into any AI tool. Each one applies the frameworks from this paper to a specific professional context. Fill in the brackets. Get a personalised diagnosis and action plan in minutes.


Prompt 1: For Marketing Leaders and CMOs

I'm a [ROLE, e.g., "VP of Marketing at a Series B SaaS company, team of 12"].

My team adopted AI [TIMEFRAME, e.g., "about 8 months ago"]. We're producing [DESCRIBE OUTPUT CHANGE, e.g., "roughly 3x the content volume we were before — more blog posts, more email sequences, more social content"].

The problem: [DESCRIBE THE PROBLEM, e.g., "engagement metrics haven't improved despite the volume increase. Pipeline contribution from content is flat. My CEO is asking why more isn't translating to better."]

Based on the following research, diagnose my situation and give me a 30-day action plan:

- ActivTrak (2026): 443M hours tracked, zero time savings in any category after AI adoption
- METR (2025): Developers 19% slower with AI, believed 24% faster — 39-point perception gap
- Consumer AI enthusiasm: 60% (2023) → 26% (2025)
- 57% of senior marketers cite "AI content oversaturation"
- BCG: Productivity peaks at 3 AI tools, crashes at 4+
- Wang et al. (2025): Experienced professionals elevate quality with AI; novices generate undifferentiated volume

1. Am I in the Exposed, Amplified, or Liberated segment?
2. Which of these six red flags is costing me the most: feeling good about not doing much (productivity illusion), agreeing with everything (uncritical acceptance), stealing independence (skill erosion), the age gap (juniors vs seniors), dressing in AI's clothes (sameness), or thinking the relationship is normal (no segmentation)?
3. Give me a 30-day plan: Week 1 actions, Week 2-3 habits, Week 4 measurement.
4. What should I report to my CEO about our AI ROI — honestly?

Prompt 2: For Creative Directors and Heads of Creative

I'm a [ROLE, e.g., "Creative Director at a mid-size agency, overseeing a team of 8 creatives — mix of senior and junior"].

My concern: [DESCRIBE, e.g., "The work all looks the same. I can't tell which ideas came from my team's thinking and which came from AI's first suggestion. My juniors seem more AI-proficient than me but their concepts are less distinctive. I'm spending more time approving and less time directing."]

Based on these research findings:
- MIT Media Lab (2025): 83% of AI-assisted writers couldn't recall their own work. Lowest brain connectivity measured.
- Gerlich (2025): r = −0.68 between AI usage and critical thinking
- Liu et al. (2025): Passive AI use ≈ 11% motivation decline, 20% boredom increase. Active use mitigates.
- Wang et al. (2025): Experienced designers elevate quality; novices generate volume
- BCG-Harvard (2024): Starting with AI instead of own thinking = 23% worse outcomes than no AI at all

1. Diagnose: Which red flag is doing the most damage to my team's creative output?
2. What's the difference between how I should use AI (experienced, strong taste) vs how my juniors should use it (developing taste)?
3. Give me a "creative quality audit" — 5 specific things I can evaluate this week to measure whether AI is elevating or flattening our work.
4. Write me a one-page brief for my team: "How We Use AI Here" — covering when to use it, when not to, and what "good AI-assisted work" looks like for our team.

Prompt 3: For Individual Contributors (Writers, Designers, Strategists)

I'm a [ROLE, e.g., "mid-level content strategist at a B2B company. I work mostly alone or with one other writer. I use AI daily — ChatGPT for drafting, research summaries, and brainstorming."]

Honestly, I'm worried about: [DESCRIBE, e.g., "I don't know if my writing is getting better or if I'm just getting faster at producing the same quality. I used to agonise over word choices. Now I accept the first version more often than I should. I feel less creative than I did two years ago but I can't tell if that's AI or burnout or both."]

Based on this research:
- METR (2025): 19% slower with AI, believed 24% faster. 69% continued despite slowdown.
- MIT (2025): 83% couldn't recall own AI-assisted work. Brain connectivity at lowest level.
- Liu et al. (2025): Passive use kills motivation. Active use preserves it.
- Gerlich (2025): r = −0.68 between AI usage and critical thinking

1. Honest diagnosis: Am I experiencing Red Flag 3 (independence theft), Red Flag 1 (the comfortable lie), or Red Flag 2 (the smooth talker)?
2. Give me a personal "AI prenup" — 3 clauses specific to my role and situation, written in first person.
3. Design a one-week experiment: How should I structure my AI use differently for 5 working days? Be specific — what do I do Monday through Friday?
4. What's one test I can run to measure whether my independent creative capability has declined?

Prompt 4: For Team Leads Managing AI Adoption

I'm a [ROLE, e.g., "Head of Digital at a mid-size company. My team of 15 adopted AI tools 6 months ago with no formal strategy. Everyone uses different tools differently. Some people are thriving. Others seem to be producing more volume of worse work. I need a framework."]

Context: [DESCRIBE, e.g., "Leadership wants an AI strategy. I don't have one. We've been in 'random acts of AI' mode. Some team members pay for their own subscriptions. There's no quality control on AI-assisted output. I suspect we're in the ActivTrak scenario — working harder, not smarter — but I can't prove it."]

Based on this research:
- ActivTrak (2026): Zero time savings across 443M tracked hours
- BCG (2026): 14% brain fry, productivity peaks at 3 tools, crashes at 4+
- Average company now uses 7 AI tools. 79.3% say effort outweighs benefits. 77.5% would be relieved if half disappeared.
- 63% of workers say employer provided no adequate AI training. Teams with clear AI plans are 3x more likely to feel prepared.
- Wang et al.: Experienced workers elevate quality; novices generate volume. Same tool, different segment.

1. Segment my team: Based on the Exposed/Amplified/Liberated framework, how should I think about the different people on my team?
2. Write me a one-page "AI Operating Agreement" for my team — covering: approved tools, quality standards, when AI is appropriate vs not, and how we evaluate AI-assisted work.
3. Give me a 90-day rollout plan to move from "random acts of AI" to structured, intentional use.
4. What metrics should I track to measure whether AI is actually improving our output vs just increasing our volume?

Prompt 5: For Executives Evaluating AI ROI

I'm a [ROLE, e.g., "CEO / COO / CFO evaluating our company's AI investment"].

We've spent [AMOUNT/DESCRIPTION, e.g., "roughly $200K on AI tools, training, and implementation across the company over the past year"]. My teams report feeling more productive. But I'm not seeing it in the numbers — [DESCRIBE, e.g., "revenue per employee is flat, project timelines haven't shortened, and client satisfaction scores are unchanged"].

Based on this research:
- PwC (2026): 56% of CEOs say they've gotten "nothing" from AI investments
- MIT Media Lab: 95% of organisations see no measurable AI ROI
- METR (2025): Users believe they're 24% faster. Measured: 19% slower. Self-reported productivity gains are unreliable.
- Foxit (2026): Executives estimate 4.6 hrs/week saved. Actual: 16 minutes. Workers: negative 14 minutes.
- ActivTrak (2026): Zero time savings across any category
- Acemoglu (Nobel laureate): Projects "modest 0.5% productivity gain over the next decade"

1. Based on these findings, what's the most likely explanation for the gap between my team's perception and our actual results?
2. What questions should I ask my team leads to get an honest picture of AI impact?
3. What does a realistic AI ROI framework look like — not vendor promises, but evidence-based expectations?
4. Where is AI most likely creating genuine value in my organisation, and where is it most likely creating the illusion of value?

Each of these prompts is designed to produce an actionable output in a single AI session. For deeper, embedded diagnosis, Aizle works inside your real projects to identify where AI is creating value and where it’s creating noise. [aizle.co/contact]

Back to Table of Contents ↑

13. The Research Compendium

Every statistic in this paper is sourced from peer-reviewed research or major institutional studies. Organised by theme for reference. Links verified March 2026.


Productivity and Performance

| Finding | Source | Year | Link |
|---|---|---|---|
| Developers 19% slower with AI. Believed 24% faster. 39-point gap. 69% continued despite slowdown. | METR Randomised Controlled Trial (16 developers, 246 tasks) | 2025 | metr.org |
| 443M hours tracked across 163,000+ workers (10,584 in before/after comparison). Zero time savings in any category. Email +104%. Messaging +145%. Focused sessions: 13 min 7 sec average. | ActivTrak Productivity Lab | 2026 | activtrak.com |
| Executives estimate 4.6 hrs/week saved. Actual: 16 min. Workers: −14 min after verification burden. | Foxit / Sapio Research, “State of Document Intelligence” | 2026 | foxit.com |
| 56% of CEOs say “nothing” from AI investments. | PwC Global CEO Survey (4,454 CEOs) | 2026 | pwc.com |
| 95% of organisations see no measurable ROI from AI. | MIT Media Lab | 2025 | media.mit.edu |
| Disciplined AI users: 25.1% faster, 40% higher quality. Undisciplined: 19 points worse than no-AI. | BCG / Harvard Business School (758 consultants) | 2023–2024 | hbs.edu |
| Nobel laureate projects “modest 0.5% productivity gain over the next decade.” | Daron Acemoglu, MIT | 2025 | academic.oup.com |
| AI doesn’t reduce work; it intensifies it. Employees worked faster, took broader scope, extended into evenings. | UC Berkeley Haas (200-person tech firm, 8-month study) | 2025 | haas.berkeley.edu |

Cognition and Brain Science

| Finding | Source | Year | Link |
|---|---|---|---|
| AI users: lowest brain connectivity. 83% couldn’t recall own work. Switched to solo: connectivity weaker than baseline. | MIT Media Lab EEG Study (“Your Brain on ChatGPT,” 54 adults, preprint) | 2025 | media.mit.edu |
| r = −0.68: AI usage vs critical thinking. Cognitive offloading r = +0.72 with AI use. Youngest users (17–25): highest dependence, lowest critical thinking. | Gerlich, Societies (666 participants) | 2025 | mdpi.com |
| 14% “AI brain fry.” 39% more errors. 34% quit intention (vs. 25% unaffected). Peaks at 3 tools, crashes at 4+. 33% more decision fatigue. | BCG / HBR (1,488 U.S. workers) | 2026 | hbr.org |
| 32% accept AI outputs without question. Rises with frequency. | Microsoft Research / Carnegie Mellon (319 workers, 936 tasks) | 2025 | microsoft.com |
| Passive AI use: ≈11% motivation decline (5.08→4.39), ≈20% boredom increase (3.43→3.91). Active use mitigates. AI collaboration improves efficiency AND diminishes intrinsic motivation simultaneously. | Liu, Wu, Ruan, Chen & Xie, Scientific Reports | 2025 | nature.com |
| AI dependence reduces critical thinking in university students; cognitive fatigue as mediating mechanism. | Chinese university study (580 students) | 2025 | — |
| Mental exhaustion r = 0.671, attention strain r = 0.874, information overload r = 0.905 with long-term AI use. | Shalu et al., Amity University | 2025 | — |
| The first piece of information disproportionately shapes every subsequent decision (anchoring bias). | Kahneman & Tversky, Science | 1974 | science.org |
| Consultants who started with AI produced work 23% worse than no-AI users. Anchoring effect. | BCG / Harvard Business School | 2024 | hbs.edu |

Behavioural Science Foundations

| Finding | Source | Year | Link |
|---|---|---|---|
| 75% agreed with a wrong answer when the group endorsed it. Conformity under social pressure. | Asch, S.E., Conformity Experiments | 1951 | psycnet.apa.org |
| Groups with a single dissenting voice make better decisions, even when the dissenter is wrong. | Nemeth, C.J., Minority Dissent Research | 1986– | annualreviews.org |
| Consumers 10× more likely to buy with 6 options vs 24. Choice overload paralysis. | Iyengar & Lepper, “The Jam Study,” JPSP | 2000 | pubmed.gov |
| Organisms subjected to repeated loss of control stop trying, even when control returns. | Seligman, M.E.P., Learned Helplessness | 1967 | apa.org |
| Things easy to read feel more true, intelligent, and valuable. Processing fluency bias. | Reber & Schwarz, Personality and Social Psychology Review | 1999 | sagepub.com |
| Experts trust AI less than novices. Higher domain expertise = more critical evaluation. | Dogru & Krämer, University of Duisburg-Essen | 2025 | tandfonline.com |

Creative Industries and Consumer Response

| Finding | Source | Year | Link |
|---|---|---|---|
| Consumer AI content enthusiasm: 60% (2023) → 26% (2025). | eMarketer / Billion Dollar Boy | 2025 | emarketer.com |
| AI-generated video ads rated more “annoying,” “boring,” “confusing.” | NielsenIQ | 2025 | nielseniq.com |
| Only 13% of consumers fully trust AI-created ads. 48% accept human-AI co-creation. | IAB | 2026 | iab.com |
| 81% of designers say AI dulls creativity. | DIGIT Lab / University of Exeter (500 UK creatives) | 2025 | — |
| 57% cite “AI content oversaturation” as top concern. “Sea of sameness.” | StackAdapt / Ascend2 (484 senior marketers) | 2025 | stackadapt.com |
| Experienced designers elevate quality with AI. Novices generate undifferentiated volume. | Wang et al., Frontiers in Computer Science | 2025 | frontiersin.org |
| 90% of listeners want human-created media. “Guaranteed human” tagline launched. | iHeartMedia | 2025 | iheartmedia.com |
| OpenAI’s own 2025 campaign shot on 35mm film with real directors and actors. | OpenAI | 2025 | — |

Workplace and Organisational Impact

| Finding | Source | Year | Link |
|---|---|---|---|
| 77% say AI added to their workload. 47% don’t know how to achieve expected gains. | Upwork | 2024 | upwork.com |
| 41% encountered “workslop.” Each instance: ~2 hours rework. | BetterUp / Stanford / HBR | 2025 | hbr.org |
| “Slop” = Merriam-Webster’s 2025 Word of the Year. Mentions grew 200%. | Merriam-Webster | 2025 | merriam-webster.com |
| Average company uses 7 AI tools. 83% of CIOs say too many. 79.3% say effort outweighs benefits. 77.5% would be relieved if half disappeared. | Canva / CIO Survey, Sprawl.work | 2025 | sprawl.work |
| 16% pretend to use AI to meet expectations. 56% pay out of pocket ($68/month avg). | Howdy.com (1,047 workers) | 2025 | howdy.com |
| 91% of senior agency leaders expect AI to reduce headcounts. 57% paused/slowed entry-level hiring. Net 20% loss of early-career marketing roles. | Sunup / Stanford | 2025 | — |
| AI-heavy teams merge 98% more PRs, but PR review time increased 91%. Volume up, human bottleneck unchanged. | Faros AI Engineering Productivity Report | 2025 | — |
| Agency AI investment costs grew 83%. Only 7% bill clients for AI. | Forrester | 2025 | forrester.com |
| 42% of companies scrapped majority of AI initiatives. | S&P Global | 2025 | spglobal.com |
| 37% worry AI overreliance could erode skills. Only 5% maximising AI. Only 12% receive sufficient training. | EY Work Reimagined (15,000 employees, 29 countries) | 2025 | ey.com |
| Mental fatigue and cognitive strain now surpass workload volume as leading burnout predictors. | Deloitte Workforce Intelligence | 2025 | deloitte.com |
| When leadership communicates a clear AI plan, employees are 3× more likely to feel prepared. | Gallup | 2025 | gallup.com |
| AI doesn’t reduce work: employees take on broader scope and extend into evenings. | UC Berkeley Haas | 2025 | haas.berkeley.edu |

Back to Table of Contents ↑

14. About Aizle

“We basically do relationship counselling for humans and AI. Except both parties show up, which already puts us ahead of most marriage counsellors.”

Aizle is a strategic consultancy that embeds inside your real projects and bakes AI capability and behavioural science into the delivery. The project ships. The team gets better. Same engagement. Same timeline.

We don’t run workshops and leave a deck. We work alongside your team on actual briefs with actual deadlines — and when we leave, the team is permanently more capable. We make ourselves unnecessary. That’s the point.

Four ways to work together:

Diagnostic — A one-day deep-dive into how your team actually uses AI. Not what they report. What they do. We identify which red flags are costing you the most and which segment (Exposed, Amplified, Liberated) each team member sits in. You get a clear-eyed assessment and a prioritised action plan.

Sprints — Focused 2–4 week engagements on specific challenges: repositioning your AI workflow, building a quality framework, training your team’s taste. Real deliverables, real deadlines, real learning.

Embedded — Months of working inside your team. Aizle becomes part of the operation — directing AI workflows, coaching judgment, transferring capability. By the time we leave, you don’t need us. That’s how we measure success.

The Lab — For brave clients. Red-teaming your AI strategy. Inverse design (“How would we guarantee AI failure?”). Breakthrough experiments. Invitation-only.


“The consultancy that makes itself unnecessary. That’s the point.”


Colophon

This white paper accompanies the keynote “Stop Dating AI. Start The Marriage.” delivered by Adam Horne at Berghs Unconference:AI 2026, Aula Main Stage.

The AI Prenup is available as a free PDF at aizle.co/prenup.
Customisable prompt versions at aizle.co/prenup (Part 2).

All research citations are accurate as of March 2026. Links verified at time of publication. For the complete research document with extended analysis, contact [email protected].

Adam Horne is the founder of Aizle, former Programme Director of Berghs Advanced at Europe’s most awarded creative school, co-creator of the LIONS Creative MBA at Cannes Lions, and a creative director with 20+ years across WPP, McCann Worldgroup, Havas, and CHE Proximity — serving Microsoft, Ford, Nestlé, L’Oréal, P&G, Adidas, and Levi’s across 15+ global markets. He holds a Behavioral Economics certification under Rory Sutherland and builds custom AI agents and workflows — implementing systems, not just talking about them.

Wanna talk?

I’m happy to talk about this, or anything else. – Adam