Your AI investments are live.

That does not mean they are working.

Two weeks ago, The M+ Signal covered the pilot failure rate. This week's evidence tells you what happens after the pilot survives.


Lead story

This is about your budget and planning.

In Issue #2, The M+ Signal reported that nine out of ten enterprise AI pilots fail before reaching deployment. This week, a Tech in Asia analysis looked at what happens to the ones that make it through. The pattern is the same. Corporate AI programmes that successfully launched are stalling in place, and the failure point is still not the technology. It is the absence of clear decision-making processes for using it. Teams that got a tool live found that getting it live was the easy part.

For marketing teams, this distinction carries a specific weight. Pilot failure happens before budget is committed. Post-rollout stall happens after it. The questions are the same: what specific decision does this tool serve, and who is accountable for that decision? But the stakes are different. A failed pilot is a missed opportunity. A stalled deployment is money being spent with nothing coming back.

The question to bring to your next budget review: for each AI investment your team currently has in production, can you name the specific decision it was meant to improve? Can you point to evidence that it is improving it? If the answer is unclear, the cost is active, not theoretical.

What to watch for: If your team has moved from "we are piloting this" to "we are using this" without defining what success looks like in production, the stall is probably already under way.


The stack

This is about your creative and video team.

In Issue #3, The M+ Signal flagged OpenAI's deprioritisation of Sora as a WATCH signal. That signal has now closed. OpenAI shut Sora down this week: compute costs were too high and competition had moved faster than anticipated. Resources are being redirected to enterprise coding tools. In the same week, ByteDance upgraded Seedance 2.0, its AI video tool, with built-in watermarking and IP checks ahead of a global launch.

The AI video landscape shifted materially in a single week. One significant option is gone. A ByteDance alternative arrived with governance features designed for brand-safe commercial use. If your team was building any part of its content roadmap around Sora, that plan needs revisiting now, not at the next planning cycle.

 

This is about your campaign planning.

Google has added AI video generation to its Demand Gen ad campaigns, using Veo to produce video from static images. This is not a future product announcement. It is available now inside your existing Google ad account.

The question for your media team: do you use this, and if so, who decides what the AI-generated video should look like and say? That is a creative brief question, not a technical one. Google can now generate the video. Your team still has to tell it what the video is for. That brief needs to exist before your next campaign cycle, not after.

 

This is about your internal AI governance.

A Stanford study published this week found that AI chatbots give inconsistent, sometimes harmful guidance when used for personal decision-making. The structural problem applies beyond personal advice: the tools present answers with confidence that does not correlate with accuracy, and users without domain expertise cannot easily identify the gaps.

For marketing teams, the operational version of this risk is already present. AI tools are being used to draft customer-facing copy, generate campaign summaries, and inform positioning decisions. Most teams do not have a defined review layer for AI outputs in any of these workflows. No action required today. But a conversation with your head of content and your brand safety lead about which AI outputs get reviewed before they go out, and by whom, is worth having before the next major campaign.

 

The synthesis

The thread connecting this week is not the tools themselves. It is the governance layer that most marketing teams still have not built.

AI programmes stall post-rollout because the decision processes were never defined. AI video tools close without warning because the landscape is still volatile. AI creative production is embedded inside ad platforms before teams have a brief ready for it. AI chatbots give confident wrong answers to questions nobody reviewed.

Every one of these is the same failure at a different stage. Speed without a decision and review layer is not momentum. It is exposure. The teams that handle the next 12 months well will not be the ones that adopted the most AI tools. They will be the ones that decided, early and clearly, which human judgements AI was and was not allowed to replace.

 

The M+ Signal is published by Metanoia+. Intelligence infrastructure for AI-accelerated economies.


 
Next

Your marketing tools just changed. Nobody asked.