This year ushers in the Year of the Horse, symbolizing moving forward and building momentum. This will be the year marketing leaders get their mojo back. Instead of holding on to what worked in the past and resisting the change occurring around them, modern leaders will lean into the opportunity AI delivers and guide their teams toward an optimistic future.
But there’s a critical discipline leaders must develop (or redevelop) to thrive in this new reality. Recent research reveals that in 2026 and beyond, CMOs and marketing leaders will succeed not by becoming more technical, but by becoming executives who operationalize AI without surrendering humanity. This means anchoring growth in judgment, trust, and meaning while turning AI into a disciplined, value-creating system rather than an existential threat.
This week’s brief includes key takeaways from research that inform five practices to help you stay human in a world where people and technology collaborate.
Key Findings
Human judgment is a design requirement
MIT Sloan finds that high-performing AI organizations explicitly design “judgment layers” that specify when humans must intervene, especially in customer-facing, reputational, or ethical decisions.
What this means: Staying human isn’t about resisting AI. It’s about architecting where humanity is non-negotiable. Human-led governance isn’t a nice-to-have. Every marketing leader must build human judgment into critical workflows.
Emotional leadership drives adoption more than technical training
McKinsey Global Institute shows that workforce adoption accelerates when leaders openly acknowledge anxiety and reframe AI as a role evolution, not a role elimination. The research confirms employees trust AI systems more when leaders emphasize augmentation narratives over replacement narratives.
What this means: Leaders who skip the emotional work create silent resistance that tanks technical investments. Transparency and empathy are mandatory for ensuring your team doesn’t fill the white space with their own doomsday narrative.
AI value depends on protecting (not just deploying) human strengths
The World Economic Forum identifies analytical judgment, emotional intelligence, and creative synthesis as increasingly critical in AI-enabled roles. And the OECD emphasizes that productivity gains depend on complementary human skills, not substitution.
What this means: The discipline isn’t asking “What can AI do?” It’s asking “What must humans do that AI can’t?” This requires strategic protection of irreplaceable human contributions and active curation of human vs. AI tasks—knowing what to amplify vs. what to automate.
Trust depends more on responsibility than innovation
Edelman Trust Institute reports trust in AI-using brands depends more on perceived responsibility and governance than innovation leadership. Forrester warns that brands deploying AI without sufficient CX safeguards will see measurable declines in loyalty and advocacy.
What this means: Staying human at scale means engineering trust into operations—not as brand messaging, but as system accountability. This requires discipline, transparency, explainability, and recourse mechanisms.
Strategic restraint builds more value than indiscriminate scaling
Boston Consulting Group finds that companies capturing the most AI value tie initiatives to a small number of clearly owned business outcomes and stop projects quickly when value isn’t demonstrated. The research emphasizes that leaders who model restraint—deciding what not to automate—build more trust than those who chase scale indiscriminately.
What this means: The discipline of staying human includes saying “no” to projects that automate judgment, erode trust, or strip meaning from work. Discipline means knowing when to stop AI, not just when to start it.
Your Five Practices for 2026
The research converges on this: the discipline of staying human requires five practices every marketing leader should follow.
Establish decision boundaries: Define where humans—not AI—must make the call.
Practice cultural honesty: Be transparent about how decisions are made, where accountability sits, and what happens when AI is wrong.
Protect human strengths: Identify what AI can’t replace and protect it; automate the rest deliberately.
Operationalize trust: Build in transparency about how AI works, clear ownership when it fails, and paths for customers to get human help.
Exercise strategic restraint: Know what not to automate. Leaders who show restraint build more trust than those who chase scale indiscriminately.
Which practice will you prioritize this month? Hit reply—I read every response.
Next week: I’ll share the AI Marketing Leadership Framework that operationalizes these five practices. Paid subscribers will get the downloadable framework to use with their teams.