I’ve been working with large language models since 2021, and in various forms of AI tooling for most of my career before that (including graph databases, for anyone who remembers when those were the interesting frontier). So when I wrote a piece early last year about AI tools in marketing, it wasn’t my first time thinking about the question. It was, in retrospect, still too optimistic about some specific bets.
I highlighted RB2B as a tool worth watching; it identifies individual visitors to your website, surfaces them in Slack, and lets your sales team follow up while the signal is warm. My enthusiasm was genuine. What I underestimated was fit. Visitor identification tools turn out to work best for the organizations that need them least: teams with high traffic, tight sales processes, and reps who act fast. For everyone else, you get a notification and an awkward non-conversation. I’ve moved from recommending it broadly to recommending it selectively. That’s not a criticism of the product. It’s the kind of calibration that only happens after you’ve watched something in practice.
That recalibration is a small example of something larger. The question in early 2025 was still mostly evaluative: which tools are worth trying, what are the risks, how do you avoid getting burned. It made sense then. The landscape was genuinely new and very uncertain. That frame is less useful now, because most organizations have tried things, formed opinions, and started to see where the returns are real and where they aren’t. The more interesting question is where AI has actually changed the work, as opposed to where it’s been inserted into the work without meaningfully changing it. There’s quite a bit of the latter.
Some of the most durable changes are happening at the operational edges, in places that don’t generate much coverage. AI tools that connect to calendars, email, and file systems have gotten quietly good at surface-level relationship management: flagging follow-ups that have gone cold, surfacing context before a call, noting when a client hasn’t heard from you in a while. I use Copilot for this daily. Working across multiple client relationships, the question “did this person ever get back to you on that thing you asked three weeks ago?” is a real problem, and AI solves it in a way that a well-configured CRM never quite did (and I am, for the record, a genuine advocate for CRMs). It doesn’t require disciplined prompting or careful setup. It just works.
More interesting to me is AI’s usefulness as a thinking partner at the senior level: not for producing outputs, but for stress-testing ideas when the right person isn’t available. If you want to know how a skeptical CFO might receive a pricing proposal, or what a competitor would likely say about your positioning, or whether a strategy has obvious holes you’ve stopped seeing from too close, a well-framed conversation with a capable model is a surprisingly useful substitute. This isn’t a replacement for real colleagues or real judgment. It’s a workaround for the moments when the right conversation isn’t accessible, and in practice it’s more useful than people who haven’t tried it would expect.
The content production side is where I’d urge the most precision about what you’re actually trying to accomplish. AI can generate SEO-oriented copy at scale, and for some organizations that’s a legitimate choice. If the goal is volume and broad keyword coverage, and you’re willing to accept mixed quality in exchange for low cost per piece, AI handles that reasonably well. Most SEO agencies produce similarly inconsistent results at significantly higher cost. If that’s genuinely your strategy, AI is probably the better procurement decision. But it’s not a strategy I’ve ever advocated for, because it describes a race to produce content readers didn’t ask for in order to rank in searches that AI intermediaries are increasingly answering before anyone clicks. Search behavior has changed materially, and more queries are being resolved inside AI interfaces entirely, which means the volume playbook is producing fewer returns even when executed competently. The organizations that appear to be navigating this more successfully are investing in content that demonstrates genuine expertise and earns cited presence in AI-generated answers, rather than content optimized to rank. That’s harder to produce, and AI is a less reliable tool for it, because it requires organizational knowledge and a distinct point of view.
That raises a different set of questions, and a pattern I’ve been watching with growing interest.
Across a number of organizations right now, AI is being deployed as an IT initiative. Agents are getting rolled out through the infrastructure function, often without meaningful input from marketing, sometimes without input from sales or customer support. The parallel that keeps coming to mind is the early internet, when IT was given ownership of the company website. Those websites worked, technically. What they frequently didn’t reflect was any coherent sense of organizational purpose, customer communication, or marketing intent. They were websites in the sense that they existed and loaded. The same dynamic is playing out now, but faster and with more organizational surface area.
Social media went through a version of this too. Companies would hire someone to “do the social media,” and the goal would be expressed in the metrics the platform made visible: followers, likes, reach. Rarely was there a clear connection to business goals. The work was real; the direction was often missing. AI deployment without a coherent owner and clear intent tends to produce the same category of problem.
Sales has been among the most active self-directed adopters of AI tools. Clay, Instantly, and similar platforms are genuinely powerful: they can enrich prospect data at scale, automate personalized outreach, identify buying signals, and run sequences that would have required a team of SDRs a few years ago. There’s a legitimate case for all of it. There’s also a real failure mode, which is that sales teams running these tools independently tend to be operating without the organizational context that would make the campaigns actually work. Who is the ICP? How does the product solve their specific problem? What’s the right language for the moment the prospect is in? Are existing customers getting accidentally included in future-focused outreach that doesn’t reflect their current relationship with the company? These are questions sales often doesn’t know to ask, because they’re marketing questions. And when you add AI scale to outreach that’s imprecise at the targeting level, you get volume applied in the wrong direction: very much its own kind of problem, separate from annoying people (though it does that too).
The enterprise picture is different, and worth watching even for those of us who don’t primarily work there. Large organizations are rolling out AI initiatives, often because a CEO heard about it at a conference or a board member asked about it. What’s striking is that the definition of “AI” in many of these conversations is remarkably uneven. Many haven’t fully leveraged what they already have: Copilot is embedded in tools that hundreds of millions of employees use daily, and yet active adoption remains shallow. When employees have access to both Copilot and ChatGPT, only 18% choose Copilot voluntarily; when Copilot is the only available tool, that figure rises to 68% (Stackmatix). That gap says something about the difference between distribution and genuine utility.
Meanwhile, the conversations happening inside those organizations about AI often land somewhere unexpected. When I ask people in enterprise settings what they’re actually getting value from in their AI tools, the two things I hear most often are: help writing emails, and help navigating internal politics. In highly matrixed organizations, heavy on bureaucracy, permission structures, acronyms, and stakeholder management, knowing how to word something to subtly achieve a purpose, or understanding the landscape before a difficult conversation, can be genuinely valuable. No judgment there. (Well, a little.) But it’s a narrow slice of what modern AI tools are capable of, and if most people in a large organization are converging on the same use case, the marginal value of that use case compresses over time.
What all of these patterns share is a common structural problem: AI deployed without a coherent owner, in service of goals that were never made clear before the tools were turned on. The IT team rolling out agents, the sales team running outbound at scale, the enterprise initiative that can explain the vendor but not the objective: these aren’t technology failures. They’re organizational failures that technology is making more visible.
The piece I wrote in early 2025 reflected an honest read on a fast-moving and genuinely uncertain landscape. What I’d add now, a year and change later, is that the pace of the tools has continued to outrun most organizations’ ability to integrate them with any coherence. The limiting factor was never access to AI. It was always clarity about what you were trying to accomplish before you turned it on, and that’s not a new problem. Too many organizations have asked too much of marketing for too long, with insufficient resources and loosely defined goals. AI doesn’t resolve that condition. In some cases, it just makes the ambiguity faster.