Kevin McCann
Culture, Change, Strategy, Working Life, Leadership, Communications, AI

AI and Communications: Practice, Leadership, Careers, Adaptation

People don’t adopt AI. They adopt new habits when those habits help them do better work, build their careers, earn trust, and stay relevant. Tools matter, but habits matter more. What sticks isn’t “adoption” in the abstract, but practices people are motivated to sustain.

I’m giving a talk this week with the Canadian Public Relations Society on AI and communications: practice, leadership, and careers.

Most conversations about AI in communications still begin with tools. Faster drafting. Better research. Fewer blank pages. That framing is understandable, and not wrong. But it is incomplete.

AI is no longer just a capability you decide to adopt. It is becoming part of the environment in which work happens. Increasingly assumed. Increasingly invisible. When the environment changes, techniques follow. When techniques change, roles follow. And when roles shift, careers get transformed.

This essay is an attempt to think clearly about what that means for people who communicate for a living, right now.

Where we are now

“Now” is a short‑lived word. Specific claims age quickly. Still, some patterns are clear enough to describe.

Individual AI adoption is widespread. Organizational adaptation is not.

Many people are already working alongside AI every day. They draft, summarize, analyze, test ideas, and think with tools like ChatGPT or Claude. What remains rare in many communications teams we work with is shared, normalized use. AI is not yet a routine part of team workflows. It is uncommon to pause a meeting and ask whether a problem has been worked through with an AI system the team trusts. In most organizations, AI lives at the level of the individual, not the institution.

That distinction matters. It creates unevenness that compounds over time.

Some people now work with AI as a constant cognitive partner. Others use it occasionally. Some avoid it almost entirely. Unless leadership intervenes deliberately, these differences harden. They show up in pace, confidence, and range of output. Over time, this begins to resemble a stratification that has little to do with title or tenure and a great deal to do with habit and fluency.

There is an irony here. The most deliberate AI laggards often (not always) have the strongest instincts about quality, judgment, and what good work actually looks like. When that judgment is absent from AI‑enabled workflows, mediocrity fills the gap. The result is a feedback loop: poor outputs reinforce skepticism, which keeps people away and stalls adaptation.

The audience has already changed

While many organizations remain focused inward on adoption, the external world has moved on.

The people communicators are trying to reach are already navigating AI‑mediated systems. Search behaviour is changing rapidly. Long‑standing assumptions about Google, PageRank, keywords, and discovery are eroding. Information is increasingly filtered, summarized, and prioritized by AI systems before a human ever encounters a source.

This reshapes fundamentals that communications has relied on for years, at least since the rise of mobile and social. What people see first. What they trust. What they remember. What persuades them to act. These dynamics are no longer governed primarily by the mechanics of the open web we learned to optimize for.

For communications professionals, this creates a difficult obligation: unlearning. Many of our instincts about audiences, channels, and influence were formed in a world that is already receding. Public behaviour is changing faster than many of our models, playbooks, and assumptions.

This feels personal for a reason

There is another layer to this shift that is easy to understate, but essential to acknowledge.

For people who communicate for a living—writers, designers, strategists, researchers, lobbyists—work is rarely just functional. It is often how value is expressed and recognized. In the knowledge economy, many of us expect work to provide not only income, but meaning, identity, and a sense of contribution.

AI unsettles this because it does not merely automate execution. It imitates judgment. It produces language, structure, and ideas that sit uncomfortably close to what many professionals have spent years getting good at.

The familiar reassurance that “AI won’t replace people” is comforting, but incomplete. AI will replace some jobs, and parts of many jobs, just as previous waves of technology have. What is new—and more destabilizing—is how directly AI overlaps with work that has traditionally defined professional identity. The deeper disruption is not wholesale replacement, but the erosion of human distinctness: the feeling that what once reflected hard‑won skill and experience can now feel partially reproducible.

For some, this is energizing. For others, it produces a simmering unease about relevance and value. That tension is not resolved by better prompts or clearer policies. It is addressed through leadership, culture, and how work itself is redesigned.

What works

When you look closely at individuals and teams who are navigating this transition well, two things tend to show up together. Rarely one without the other.

First, they are willing and able to decompose work.

They break work into steps, patterns, and decisions. They see tasks not as monoliths, but as sequences. They ask where AI can accelerate effort, where it can support thinking, and where it should stay out of the way. Research, synthesis, drafting, testing, analysis, content production—anything that can be pattern‑recognized will be, starting with components rather than entire roles.

This does not diminish the importance of human judgment. It relocates judgment to different points in the process of work. Decisions about framing, intent, strategy, audience, and consequence move closer to the centre of the workflow. Judgment migrates upstream, where the work is harder and more valuable.

Second, they approach this decomposition with a bias toward possibility.

Not enthusiasm. Not evangelism. A practical willingness to ask “What is possible now?” rather than “Why won’t this work here?” This question shapes how people experiment, how quickly they learn, and whether AI is treated as a threat to be contained or a tool to be understood.

Neither element is sufficient on its own; they are most effective together. Decomposing work without a sense of possibility leads to brittle automation, mediocre output, and fear. A bias toward possibility without the discipline of decomposition leads to shallow experimentation and bursts of activity that never change how work actually gets done.

Together, they form a powerful combination: the ability to take work apart and the willingness to imagine how it might be reassembled with AI in the loop.

In an agency or professional‑services context, this combination has another important effect. When teams begin to decompose work consistently, patterns emerge. The same judgments, frameworks, prompts, and decision rules show up again and again. Over time, those patterns can be codified—into methods, tools, diagnostics, or repeatable offerings.

This is not about turning everything into software or abandoning bespoke work. It is about making professional judgment more repeatable and more widely applicable. AI lowers the cost of externalizing how we think, testing it, and refining it. Done well, this turns individual expertise into shared practice: something that can be taught, improved, and carried forward by more than one person.

Leadership matters most here. Not in declaring policies or chasing tools, but in modelling how work is examined, questioned, codified, and redesigned. The organizations that get this right are not eliminating craft or judgment. They are making room for those human capabilities to do more important work, while being honest about what AI can and will do more of.

What to do

If all of this is true, the work ahead is less about announcing more AI strategies and more about building the conditions for change to stick.

  1. If you haven’t done it yet, start with a clear, balanced case for why AI matters in your organization, now. Not everyone gets it yet. Frame it neither as vague inevitability nor as a cost‑cutting story. Tie it to outcomes people recognize and believe: better work, better service, fewer hours spent on tasks few people want to do. It can take time, but focus on what is possible now.
  2. Then make it real where change actually happens: one person, one workflow, one week at a time. People do not adopt AI. They adopt new habits when they see how those habits help them do better work, sustain their careers, earn trust and recognition, and stay relevant. They adopt new sequences for how they research, draft, review, decide, and deliver—because those changes make their own work easier, better, or more resilient.
  3. This is also where leadership presence matters in a very practical sense. The most useful signal is not a slide deck or a slogan. It is time spent doing the work with the tools, learning their strengths and limits, and showing teams that this is not a side project delegated to enthusiasts.
  4. If you lead an organization or a team, you’ve got to decide your tack: what you want and need to become in an AI economy, both in how you work and what you produce. Once you do, you’ve got to dedicate the time and focus to make that vision a reality in process design, accountability, fluency, and transformation. This takes focus, new roles, and investment in both time and dollars.
  5. Finally, make experimentation part of regular work. Lower the stakes and make the work small enough to try. Carve out time that is protected and part of the work week. Share what works, but also what didn’t—and why. Normalize small failures as part of learning, not as something to explain away.

Whether you are leading a large team or operating as a practitioner on your own, the underlying task is the same: shaping your practice so it can participate meaningfully in an AI‑inflected economy. That does not require certainty or mastery. It requires openness to adoption, a willingness to keep adjusting how you work, and the discipline to turn small experiments into new ways of working and thinking over time.