Google quietly rolled out AI chatbots directly into the Google Ads and Google Analytics interfaces in December 2025, then published a "how to work with them" guide in March 2026. The twist? These aren't standalone tools. They live inside the platforms where you already approve budgets, diagnose traffic spikes, and fix disapproval issues.

Ads Advisor and Analytics Advisor are conversational assistants that remember what you asked three prompts ago. Ask Analytics Advisor why conversions spiked last week, and it can trace the surge to organic search, calculate your add-to-cart and checkout rates, then walk you through a full drop-off funnel without leaving the chat. Ads Advisor will troubleshoot why your campaign isn't running, suggest Performance Max tweaks, write keywords and headlines, and even propose edits to fix policy violations (though you still have to click approve).
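The funnel math the advisor walks through is simple to reproduce yourself. Here's a minimal sketch, using hypothetical event counts (the step names follow GA4's ecommerce event naming; the numbers are made up for illustration):

```python
# Hypothetical funnel counts; in practice these come from GA4 event data.
funnel = [
    ("session_start", 10_000),
    ("view_item", 6_000),
    ("add_to_cart", 1_800),
    ("begin_checkout", 900),
    ("purchase", 450),
]

def funnel_rates(steps):
    """Per-step conversion rate vs. the previous step, plus drop-off."""
    rates = []
    for (_, prev_n), (name, n) in zip(steps, steps[1:]):
        rate = n / prev_n if prev_n else 0.0
        rates.append({"step": name,
                      "rate": round(rate, 3),
                      "drop_off": round(1 - rate, 3)})
    return rates

for r in funnel_rates(funnel):
    print(f"{r['step']:>15}: {r['rate']:.1%} convert, {r['drop_off']:.1%} drop off")
```

With these numbers, the add-to-cart rate is 30% of item views and 50% of carts never reach checkout, which is exactly the kind of drop-off summary the chat surfaces for you.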

The difference from earlier AI features is persistence. Google describes this as conversational memory across interactions, meaning you can ask follow-up questions and the assistant tracks context. That's operationally useful for anomaly investigation or creative iteration, but it also means Google is building a layer of AI mediation between you and your own campaign data.

The March guide frames five collaboration patterns: ask in plain language, use follow-ups, let the assistant surface hidden trends, troubleshoot policies and performance, and rate responses with a thumbs-up or thumbs-down to train it. Google says that feedback loop teaches the advisors what users find valuable, which is another way of saying the product gets smarter the more you use it.

What Google didn't publish: sample sizes, benchmarks, or proof that this actually speeds up decisions or lifts performance. The post is product guidance, not a study. Teams are supposed to figure out the ROI themselves.

The practical shift is that Google now owns the interface layer where insights turn into action. If the advisor suggests a bid change or flags a funnel leak, you're reviewing and approving inside a Google-controlled chat, not a spreadsheet you built. That raises questions about governance, audit trails, and whether your team can even explain why a change was made three months later.

The era of exporting CSVs to diagnose campaign problems may be ending, but the era of trusting a chatbot with your ad budget is just beginning.

70% of CMOs Still Don't Lead AI Efforts

Only 30% of US CMOs are directly leading their marketing organizations' AI efforts, according to Forrester's Q2 2024 B2C Marketing CMO Pulse Survey. That means seven in ten aren't steering the technology that's already baked into their martech stack, programmatic media buys, and social platforms.

The gap isn't about curiosity. In late 2023, 171 US marketing executives told Forrester that generative and predictive AI was their number-one support need out of 30 listed priorities. A year later, 90% of B2C marketing executives said they planned to increase AI spending over the next 12 months. Money is moving. Leadership structure isn't keeping up.

The problem isn't that CMOs don't care. It's that AI changes who makes decisions, who owns workflow redesign, and who approves risk tradeoffs. When leadership is fragmented across marketing, IT, and vendors, scaling anything beyond a pilot becomes a negotiation instead of an execution.

Forrester's point is that "leading" doesn't mean the CMO personally runs every test. It means setting governance, defining decision rights, and establishing management expectations while direct reports execute. The firm says the CMO needs to act as a connector and rely on a rock-solid management team, which only works if the CMO first creates space by letting go of other responsibilities.

The urgency comes from the fact that AI is no longer a separate innovation track. It's already embedded in routine systems that influence daily execution. Organizations treating it as a side project are under-managing a capability that's live in production.

Forrester's research shows AI is reshaping workflows, skills, decision-making, and organizational design simultaneously. Teams are dealing with unclear expectations, uneven readiness, and pressure to show rapid progress. The firms that figure this out will be the ones that assign formal ownership, document who approves what, and run narrow pilots tied to workflow redesign rather than output generation.

The uncomfortable truth is that most marketing organizations are spending more on AI while fewer than a third have clear leadership at the top. That's how you get a lot of tools and very little transformation.

HubSpot Caps Its AI Assistant at 1,000 Requests Per Day

HubSpot just documented hard usage limits for its Breeze AI assistant: 30 content-generation requests per minute and 1,000 per day. If you were planning to let your whole marketing team lean on AI for report summaries and content ideas, you now know exactly where the ceiling is.
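A shared team ceiling is worth modeling before you hit it: 10 people sharing the daily cap average only 100 requests each. Here's a minimal client-side throttle that mirrors the two documented limits; the class name and structure are my own, not anything HubSpot ships:

```python
import time
from collections import deque

class RequestBudget:
    """Sliding-window throttle mirroring HubSpot's documented Breeze
    limits: 30 content-generation requests/minute, 1,000/day."""

    def __init__(self, per_minute=30, per_day=1000):
        self.per_minute = per_minute
        self.per_day = per_day
        self.minute_window = deque()  # timestamps in the last 60 s
        self.day_window = deque()     # timestamps in the last 86,400 s

    def try_acquire(self, now=None):
        """Return True and record the request if both budgets allow it."""
        now = time.monotonic() if now is None else now
        # Expire timestamps that have fallen out of each window.
        for window, span in ((self.minute_window, 60),
                             (self.day_window, 86_400)):
            while window and now - window[0] >= span:
                window.popleft()
        if (len(self.minute_window) >= self.per_minute
                or len(self.day_window) >= self.per_day):
            return False  # caller should queue or defer the request
        self.minute_window.append(now)
        self.day_window.append(now)
        return True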

The constraint showed up quietly in HubSpot's March 2026 product updates, alongside a bundle of friction-cutting features that won't make headlines but will save marketers from a dozen small annoyances. You can now export marketing emails to PDF or HTML (finally, no more screenshot approvals), clone repetitive workflow actions instead of rebuilding them, trigger automations based on campaign budget or spend, and create Canva graphics without leaving HubSpot.

None of this is revolutionary. It's table stakes dressed up as innovation. But the real story is what HubSpot isn't saying: there are no performance benchmarks, no customer outcome data, no sample sizes. The release notes describe what's possible, not what actually improved. PDF export "expands reuse and archiving options." Campaign triggers let you "respond to conditions without manual monitoring." Breeze summaries "reduce first-pass analysis effort." All true, all vague, all impossible to budget around.

The Canva integration is available across every HubSpot plan, which sounds generous until you hit the fine print: you can't import Canva fonts or brand kits into HubSpot editors. So much for full-fidelity design portability. Meanwhile, Breeze won't summarize survey feedback until you collect at least three responses, and HubSpot's own documentation warns the AI can generate incorrect or misleading output that requires human review.

What's useful here is the shift from one-off asset production to something more modular. Teams can now standardize templates, automate approvals, and stop manually watching campaign budgets. But you're still setting up workflows by hand, navigating beta flags, and working around permission dependencies.

The tension? HubSpot is selling you the dream of automated marketing operations while building in enough guardrails to ensure you'll still need humans in the loop. The 1,000-request daily cap isn't a bug; it's a boundary that keeps AI assistants exactly where HubSpot wants them: helpful, but never autonomous.