January Debrief: Five Leadership Practices for When Your Team Is Using AI But Not Trusting It
This month we explored why 2026 demands a different kind of leadership—and what you've discovered about your own transformation
What We Covered This Month
January was about naming what’s changing, and why the old playbook isn’t just insufficient but actively in the way.
We started with The 2026 Leadership Inflection Point: the recognition that four forces are converging to make this year fundamentally different:
AI capability surpassing human comprehension.
Job displacement anxiety hitting new highs.
A widening gap between what AI can do and the value organizations capture.
Complexity that’s outpacing our capacity to manage it.
Then we gave you The Discipline of Staying Human framework: five interconnected practices that redefine what leadership looks like when decisions are increasingly shared between humans and systems:
Establish judgment boundaries
Lead with cultural honesty
Protect human strengths
Operationalize trust
Exercise strategic restraint
And we closed with a case study showing what happened when Alysa, a B2B SaaS CMO, discovered that 87% AI adoption meant nothing if no one actually trusted the output. Her team was complying: using the agents, generating content, hitting the metrics. But they were redoing everything manually because they were terrified to rely on recommendations they couldn’t defend.
The breakthrough came when she stopped optimizing for adoption metrics and started building trust systems. She:
Set boundaries that made it clear when AI leads and when humans decide.
Practiced cultural honesty that named the fear instead of spinning past it.
Built trust that became operational, not aspirational.
What’s Landing with You
I’ve been reading your comments, DMs, and emails. Here’s what’s resonating:
“I thought it was just me.”
Multiple marketing leaders have told me they’re seeing the same pattern Alysa saw: high adoption rates masking low confidence, and teams performing AI adoption for dashboards while working the old way behind the scenes.
One VP of Marketing wrote: “I’ve been celebrating our ‘AI-first’ metrics in team meetings while employees are silently burning out from AI adoption. Reading the case study felt like someone finally named what I’ve been too afraid to admit.”
You’re not alone. This pattern is everywhere right now.
“The ambiguity is killing us.”
The biggest pain point isn’t the technology. It’s the uncertainty about accountability.
When AI recommends something, and you approve it, and it fails—who’s responsible? The person who approved it? The person who didn’t catch the flaw? The system? Your team doesn’t know, so they treat every decision as high-stakes, creating paralysis.
Several of you asked: “How do I draw those boundaries when I don’t even know what AI can reliably do yet?”
Fair question. Here’s what I’m seeing work: Start by defining what’s too risky to get wrong (customer promises, proprietary content, brand positioning). Those are human-owned, full stop. Then work backward from there.
“My team thinks I’m chasing AI for the sake of AI.”
This one hit hard.
One comment: “I thought my best people were resistant to change. After reading the case study, I realize they think I don’t value their expertise anymore.”
Strategic restraint (Practice 5) is what changes this: saying no to automation that adds complexity without value, pausing use cases that erode confidence, and keeping humans in the loop where judgment matters most.
When you demonstrate discernment (not just enthusiasm), your team starts trusting that you understand what they actually do.
What I’m Hearing You Struggle With
A few themes keep coming up in conversations:
1. “I can’t get executive buy-in to slow down.”
You’re framing it wrong.
Don’t ask for permission to slow down. Ask for permission to build a trust infrastructure that accelerates sustainable adoption.
Show them the cost of the compliance charade: teams working double time, high performers disengaging, decisions taking longer despite “high adoption,” and retention risk among key people.
Then show them what operationalized trust looks like: clear boundaries, explicit accountability, and escalation paths that remove ambiguity. That’s not slowing down. That’s removing the invisible drag.
2. “My team wants more certainty than I can give them.”
Good. That means they’re being honest about the uncertainty.
The mistake leaders make is thinking they need to provide certainty. You don’t. You need to make it safe to operate under uncertainty.
That’s what cultural honesty does (Practice 2). You name the uncertainty openly. You make three commitments explicit: raising concerns isn’t resistance, mistakes surfaced early are learning, and silence is riskier than slowing down.
You’re making uncertainty manageable.
3. “How do I protect human strengths when I don’t know what AI will be capable of next year?”
You need to name what’s irreplaceable right now in your business context.
Not generic “creativity” or “strategic thinking.” Specific capabilities:
Understanding the anxiety a CFO feels when evaluating a $300K decision during a budget freeze
Sensing hesitation in a customer’s voice when they ask about your product roadmap
Making ethical calls in gray areas where the data doesn’t tell you what’s right
Translating between what Product built and what Sales needs to message
Those capabilities aren’t going anywhere. And your team needs to hear you name them specifically.
Where We’re Going in February
Next month, we’re deep-diving into Practice 4: Operationalize Trust.
Why start with #4? Because this is where most AI transformations stall. Leaders talk about trust like it’s a feeling to cultivate. It’s not. It’s a system to build.
Here’s what we’ll cover:
Week 1: The Four Trust Pillars diagnostic
Competence Trust: Can we rely on this without risking credibility?
Integrity Trust: Are the rules clear and fair?
Agency Trust: Are we in control, or are we being replaced?
Care Trust: Does leadership have our backs?
Week 2: Building trust systems (not trust vibes)
Templates for AI policies that actually answer the questions your team is afraid to ask
Ownership models that make accountability explicit
Escalation paths that remove the guessing
Week 3: A new case study
How multiple marketing leaders operationalized trust in different contexts
What works across B2B SaaS, and what’s industry-specific
The artifacts they built (and you can adapt)
Week 4: Trust at scale
How trust infrastructure evolves as AI capability increases
When to revisit boundaries
How to know if trust is eroding before it shows up in retention data
Your Turn: What’s Your Biggest Question Right Now?
I want to make sure February’s content addresses what you’re actually wrestling with.
Reply in the comments or hit reply to this email:
What’s your biggest challenge with building trust around AI right now?
Which of the Four Trust Pillars (Competence, Integrity, Agency, Care) feels most fragile in your organization?
What would make the February content immediately useful for you?
One Last Thing
Several of you have asked if I offer coaching or advisory work.
I do both. I work with marketing executives (CMOs, VPs of Marketing, Founders) navigating AI transformation—particularly in B2B SaaS contexts where trust, long sales cycles, and brand credibility make “move fast and break things” a challenging strategy.
If you’re interested in exploring that, reply to this email.
And stay tuned for an upcoming announcement about quarterly group coaching calls that will be free for my paid subscribers.
See You Thursday
Next week: The Leadership Brief, a roundup of the latest research, stats, and trends shaping how marketing leaders are building trust systems today, and how those systems will evolve in 2026. It kicks off our February focus: Operationalizing Trust.
Until then,
Kim