“Did we approve this?”
The CMO’s question arrived with a LinkedIn screenshot attached: a competitive comparison Maya’s team had published through their AI content workflow, now live in a thread with 200 reactions and a tagged competitor.
The claims had been accurate when the workflow was built. They were not accurate when the content was published. No one flagged them for re-review because no one owned that decision.
What her CMO was really asking was whether Maya had a system for knowing which decisions needed a human. And she didn’t have a clear answer.
THE SITUATION
What was working and what it was hiding
Maya is VP of Marketing at a 200-person B2B SaaS company, 18 months into an AI transformation championed by her CMO. By every visible measure, it was working: her team delivered, adoption metrics were strong, workflow documentation was complete, and AI usage across content, campaigns, and competitive intelligence had become routine.
Maya’s team ran twelve AI-assisted workflows, and competitive-comparison content was among them. The asset had cleared the standard review process, and no one questioned it: the workflow said it had cleared review, and clearing review had always been enough. That’s what made it a structural problem, not a human one.
WHY THIS MATTERS NOW
Adoption moved fast. Governance didn’t follow.
The pattern Maya walked into isn’t unusual. In a May–June 2025 Gartner survey of 360 IT leaders involved in generative AI rollouts, only 23% reported being very confident in their organization’s ability to manage security and governance when deploying GenAI tools. Over 70% cited regulatory compliance as a top-three challenge.
Adoption outpaced governance. That gap is where the exposure lives.
Gartner projects that by 2028, enterprises will spend more than $30 billion battling misinformation and disinformation, cannibalizing 10% of marketing and cybersecurity budgets combined. That figure reflects what happens when internal content governance doesn’t keep pace with the volume of content being published. Bad actors are part of the equation. Ungoverned internal workflows are too.
The accountability gap is documented: when AI-generated content causes legal or reputational damage, the liability belongs to the humans and organizations that published it. The tool doesn’t get sued, and the vendor doesn’t answer to the board. The marketing team does.
THE GAP
Designed to fail
Maya’s team didn’t make a careless mistake; they followed the process. What the process didn’t include was a defined point where a human was required to step in and own the judgment call. Research on human-AI decision making published in Scientific Reports in early 2026 identifies exactly this failure mode: when AI workflows lack explicit intervention triggers, humans shift from active control to passive monitoring and systematically fail to intervene when systems err. The team wasn’t negligent. They were operating exactly as humans do inside workflows that never told them when to stop and decide.
Competitive positioning, factual claims about named competitors, content that could attract legal scrutiny or go viral in a negative way: these are decisions, not tasks. Maya’s workflow treated them as tasks.
The boundary was never defined, so no one crossed it. It simply didn’t exist.
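To make “explicit intervention trigger” concrete: in code, a judgment boundary can be as small as a gate that runs before anything publishes. The sketch below is a minimal illustration, not Maya’s actual stack; the ContentItem fields, the judgment_blockers check, and the routing messages are all assumptions I’m using to show the shape of the idea.

```python
# Minimal sketch of an "intervention trigger" for a content workflow.
# Illustrative only: field names, triggers, and routing are assumptions,
# not a real pipeline.
from dataclasses import dataclass


@dataclass
class ContentItem:
    title: str
    names_competitors: bool    # makes factual claims about named competitors
    claims_can_go_stale: bool  # accuracy can decay after the workflow is built
    legal_sensitive: bool      # could attract legal scrutiny


def judgment_blockers(item: ContentItem) -> list[str]:
    """Return the reasons this item must stop at a human. Empty list = proceed."""
    reasons = []
    if item.names_competitors:
        reasons.append("competitive claims: a named owner re-verifies before publish")
    if item.claims_can_go_stale:
        reasons.append("time-sensitive claims: re-review on a set schedule")
    if item.legal_sensitive:
        reasons.append("legal exposure: route to a counsel-approved reviewer")
    return reasons


def publish(item: ContentItem) -> bool:
    blockers = judgment_blockers(item)
    if blockers:
        # The workflow stops here. A human owns the next step, by name.
        print(f"HOLD '{item.title}' for human judgment:")
        for reason in blockers:
            print(f"  - {reason}")
        return False
    print(f"Published: {item.title}")
    return True


# The kind of asset that burned Maya's team would have stopped at the gate:
publish(ContentItem("Us vs. Competitor X", names_competitors=True,
                    claims_can_go_stale=True, legal_sensitive=False))
```

The point isn’t the code. It’s that the boundary exists as a named, explicit check the workflow cannot route around, which is exactly what Maya’s workflows were missing.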
WHERE WE LEAVE MAYA
The unanswered question
Maya knows the asset was wrong. She knows how it got published. What she doesn’t have is an explanation for how a governed workflow produced an ungoverned decision, or a framework for ensuring it doesn’t happen again across the eleven other AI-assisted workflows her team runs.
Her CMO asked, “Did we approve this?”
What he’s really asking is: “Do we have a system for knowing what needs human approval and what doesn’t?”
Maya doesn’t have that answer yet, but she will by the end of this month.
ONE STORY. FOUR CHAPTERS.
This month, I’m trying something new. One narrative arc across four “chapters”—one published each week.
Here’s what you can expect:
Chapter 1 (this post): The protagonist’s situation (Maya) and the judgment boundary failure that created it.
Chapter 2: The tool Maya uses to drive change. The Judgment Boundary Matrix maps decisions by impact severity and context complexity, and comes with a downloadable decision tool (for paid subscribers).
Chapter 3: What Maya learns from other marketing leaders who are drawing judgment boundaries, including what they got wrong before they got it right.
Chapter 4: Maya revisits her solution. What shifted, what she’d do differently, and the one boundary she and her CMO still disagree on.
I’d love to hear your feedback on this new format. Share it in a comment!
SOURCES
Gartner: AI Regulatory Violations Will Result in a 30% Increase in Legal Disputes for Tech Companies by 2028 (October 2025; reports the May–June 2025 survey of 360 IT leaders cited above) — gartner.com/en/newsroom/press-releases/2025-10-06-gartner-predicts-ai-regulatory-violations-will-result-in-a-30-percent-increase-in-legal-disputes-for-tech-companies-by-2028
Gartner: Enterprise Spending on Battling Misinformation and Disinformation Will Surpass $30 Billion by 2028 (October 2025) — gartner.com/en/newsroom/press-releases/2025-10-21-gartner-predicts-enterprise-spending-on-battling-misinformation-and-disinformation-will-surpass-30-billion-dollars-by-2028
Gartner: 50% of Enterprises Will Invest in Disinformation Security and TrustOps by 2027 (November 2025) — gartner.com/en/newsroom/press-releases/2025-11-20-gartner-predicts-50-percent-of-enterprises-will-invest-in-disinformation-security-and-trustops-by-2027
Cummings & Cummings Law: Legal Issues in Using AI-Generated Content for Business Marketing (January 2026) — cummings.law/legal-issues-in-using-ai-generated-content-for-business-marketing/
Scientific Reports (2026): Examining human reliance on artificial intelligence in decision making — nature.com/articles/s41598-026-34983-y