Your AI tools may not be as solid as they look.
This week, the reliability, privacy, and platform assumptions underneath your current investments all came into question at once.
Lead story
This is about your budget and planning.
A Reuters analysis published this week asked a question most marketing leaders have been avoiding: does the AI business model actually work? The argument is straightforward. Hundreds of billions of dollars are being invested on the assumption that AI will become reliable enough for high-stakes work. The evidence so far suggests otherwise. Models still produce confidently wrong answers. Enterprise deployments stall after launch. And even Microsoft, with 450 million captive users and $150 billion invested, is struggling to convert AI capabilities into revenue.
For marketing teams, this is not an abstract concern. It is a budget question. If the AI tools your team adopted this year are built on a business model that has not proven it can sustain itself, two risks emerge. The first is that the tool disappears or degrades. Issue #3 covered exactly this when OpenAI shut down Sora. The second is that the tool persists but never becomes reliable enough to trust with the work it was bought for. Issue #4 covered the post-rollout stall. This week's Reuters analysis connects both patterns to the same structural problem.
The question to bring to your next planning cycle: for each AI tool currently in your stack, what happens if the vendor cannot sustain its current pricing or capability level? If the answer is "we have not thought about that," it is worth thinking about now. Not because every tool will fail. But because the ones that do will fail without warning, and teams without a contingency will lose time they cannot recover.
What to watch for: Pricing changes, feature removals, or sudden pivots from AI vendors your team depends on. Any of these is the early signal that the sustainability question has arrived at your door.
The stack
This is about your data governance and brand safety.
A class action lawsuit filed this week accuses Perplexity AI of sharing user conversations with Meta and Google through embedded trackers. The lawsuit alleges that even the platform's "incognito" mode did nothing to protect privacy. User queries, including potentially sensitive business research, were being passed to advertising networks without consent.
For marketing teams using AI search tools for competitive research, campaign planning, or market intelligence, this is a wake-up call. The question is not whether your team uses Perplexity specifically. It is whether anyone on your team is using AI tools for sensitive business queries without knowing where that data goes. A conversation with your data and compliance lead about which AI tools are approved for business use, and what their data handling policies actually say, is worth having before the next quarter.
This is about your content strategy.
A LinkedIn benchmarks report published this week found that document and PDF posts drive the highest engagement rates on the platform, outperforming image, video, and text-only formats. This contradicts the assumption many content teams have been operating under: that video is the dominant format for professional audiences.
If your content team has been investing heavily in video production for LinkedIn, this data suggests a rebalance may be overdue. Document-format content, such as carousel-style PDFs, research summaries, and visual frameworks, appears to reward depth over polish. The implication for your next content cycle: the format that works best on LinkedIn right now is the one that gives your audience something to read and save, not just watch.
This is about market access in APAC.
India has proposed making government advisories legally binding on technology platforms. If passed, Meta and other platforms would be required to comply with government directives on content moderation and platform operations. This follows a pattern: India's Gujarat High Court separately restricted AI use in judicial decisions this week.
For cross-border brands operating in India, no action is required today. But the regulatory ground underneath your platform strategy in the world's largest population market is shifting. If your team runs campaigns on Meta or Google in India, flag this with your regional compliance lead. The question to track: when does content moderation guidance become content moderation law?
The synthesis
The thread connecting this week's stories is not a single technology shift. It is a structural one: the assumptions underneath your current tool, data, and platform investments are moving.
Reuters questions whether AI can sustain the business model your tools are built on. Perplexity's privacy lawsuit shows that the data flowing through your AI stack may not be staying where you think. LinkedIn's benchmarks suggest the content format assumptions your team is working from may already be outdated. India's regulatory proposals signal that the platform rules in your largest APAC market could change without a transition period.
None of these require emergency action. All of them require the same thing: a willingness to check the foundations before the next planning cycle assumes they are still solid. The teams that handle the next quarter well will not be the ones that moved fastest. They will be the ones that verified their assumptions before building on top of them.
The M+ Signal is published by Metanoia+.
Intelligence infrastructure for AI-accelerated economies.