Google’s marketing team has stopped translating their content. They’re generating it directly in-market, in-language — a signal their team shared openly at Google Cloud Next 2026.

If the world’s largest content machine is rethinking how global content gets made, localization leaders should be paying close attention. The Smartling artificial intelligence (AI) research and development team was at Google Cloud Next 2026 in Las Vegas. Five themes came up in nearly every session, every customer spotlight, and every conversation on the floor. Each one has a direct implication for how localization programs need to be built.

Here’s what the team took away, and what it means specifically for localization.

 

1. The AI pilot era is over

Unilever has multi-agent procurement systems running in production. Virgin Voyages has 1,000+ specialized agents. These aren’t pilots; they’re operational infrastructure. The MIT NANDA report from 2025 still pegs enterprise AI implementation failure rates at 95%, almost always because stakeholders never set up the governance to measure return on investment (ROI) before kicking off those projects.

If your localization program is still running AI translation experiments outside production, you're not behind on the technology: you're behind on the governance, the measurement, and the accountability structures that turn experiments into programs. The good news is that catching up is still achievable. Start by asking the question the 95% didn't: what does quality look like at scale, and how will you measure it?

 

2. RAG is replacing fine-tuning, and it’s why generic AI breaks on translation

The conference consensus was clear: Retrieval-Augmented Generation (RAG) may soon replace model fine-tuning as the standard approach for getting reliable output from AI. Fine-tuning still has value for certain use cases, but it is too slow and too expensive for many teams. RAG is how you get reliable, brand-consistent translation output, enriched by your in-platform linguistic assets: translation memories, glossaries, and style guides.

This is the technical explanation behind a problem localization teams already know firsthand: generic AI shifts tone, mistranslates brand terms, and has no memory of what your organization has already approved. Without your linguistic assets applied at translation time, the model is working without context. That’s the argument to make to any stakeholder who thinks copy-pasting into ChatGPT is good enough. The next questions to answer: where are your linguistic assets actually stored, are they updated in real time, and are they applied on every translation?
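To make the idea concrete, here is a minimal, illustrative sketch of RAG applied to translation: retrieve the approved assets relevant to a source string and inject them into the prompt. Everything here is a hypothetical simplification — the toy `translation_memory` and `glossary` data, the word-overlap retrieval, and the `build_prompt` helper are illustrative names, not a real product API; a production system would use a translation management system and proper semantic retrieval.

```python
# Toy linguistic assets (illustrative only).
translation_memory = {
    "Sign in to your account": "Accedi al tuo account",
}
glossary = {
    "Smartling": "Smartling",  # brand term: must stay untranslated
}

def retrieve_context(source: str) -> dict:
    """Gather approved assets relevant to this source string.

    Retrieval here is naive word overlap; real systems use
    fuzzy TM matching or embedding-based search.
    """
    source_words = set(source.lower().split())
    tm_hits = {s: t for s, t in translation_memory.items()
               if source_words & set(s.lower().split())}
    terms = {t: g for t, g in glossary.items()
             if t.lower() in source.lower()}
    return {"tm": tm_hits, "glossary": terms}

def build_prompt(source: str, target_lang: str) -> str:
    """Assemble a translation prompt grounded in retrieved assets."""
    ctx = retrieve_context(source)
    lines = [f"Translate into {target_lang}, matching brand voice."]
    for s, t in ctx["tm"].items():
        lines.append(f"Approved example: {s!r} -> {t!r}")
    for term, fixed in ctx["glossary"].items():
        lines.append(f"Always render {term!r} as {fixed!r}.")
    lines.append(f"Source text: {source}")
    return "\n".join(lines)
```

The point of the sketch: the model never sees the source string alone. Every request carries the approved examples and term renderings, which is what keeps output consistent with what your organization has already signed off on.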

 

3. Data governance is everyone’s problem now

Data governance is everyone’s problem now, in every industry. Agentic workflows are only as reliable as the data they act on. For localization leaders, data governance comes down to a few questions: Is your translation memory clean and up to date? Is your glossary enforced across all of your enterprise’s global content? Do your style guides capture your brand’s stylistic preferences? Is your quality data traceable and auditable? Are your translation workflows safe and secure?

If the answer is “somewhat,” or “I keep my glossaries in a spreadsheet,” that’s the technical debt that compounds once your AI implementation scales. Clean, curated multilingual linguistic data, stored and dynamically updated in a centralized, secure translation management system, is what separates AI output you can trust from AI output you have to fix.
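Part of that governance can be automated cheaply. As an illustrative sketch (the `check_glossary` helper and its data are hypothetical, not a Smartling API), a glossary check flags any translation where a required term rendering went missing:

```python
import re

def check_glossary(source: str, translation: str, glossary: dict) -> list:
    """Return (term, required_rendering) pairs for glossary terms that
    appear in the source but whose required rendering is absent from
    the translation. An empty list means the translation passes."""
    violations = []
    for term, required in glossary.items():
        in_source = re.search(rf"\b{re.escape(term)}\b", source, re.IGNORECASE)
        if in_source and required.lower() not in translation.lower():
            violations.append((term, required))
    return violations
```

A check like this, run on every segment before delivery, is the kind of traceable, auditable quality signal the section above is asking for — and it only works if the glossary lives somewhere a program can read it, not in a spreadsheet.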

 

4. Agentic workflows are operational — and localization needs to be in the pipeline

Marketing agents, data agents, and engineering agents now collaborate across Jira, Looker, GitHub, and Slack, with humans involved only at key decision points. If your centralized localization platform isn’t plugged into those pipelines, it gets bypassed: content ships without translation, or gets translated by whatever low-resistance AI option is closest to hand.
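One concrete shape “plugged in” can take is a gate in the delivery pipeline that blocks a release while approved translations are missing. This is a minimal sketch under assumed names (`localization_gate`, a set of approved string/locale pairs); real integrations would query a translation management system over its API.

```python
def localization_gate(strings: list, approved: set, locales: list) -> list:
    """Return the (string, locale) pairs that still lack an approved
    translation. An empty list means the release can ship; a CI step
    would fail the build otherwise."""
    return [(s, loc)
            for s in strings
            for loc in locales
            if (s, loc) not in approved]
```

Wired into CI, a check like this is what keeps agent-generated content from shipping around the localization program instead of through it.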

This is not something to address someday. It needs to be addressed now, no matter where your organization sits on the AI maturity curve. The programs that get plugged in early will set the standard. The ones that don’t will spend next year catching up.

 

5. Out-of-the-box multilingual AI is getting better — which makes your program’s value harder to explain and more important than ever

In our conversation with the Google Cloud team, they were direct about the multilingual capabilities coming to every Vertex user. Translation is becoming a commodity, which means localization’s value isn’t “can we translate” anymore. It’s “can we translate in a way that reflects our brand, meets our quality bar, and scales without breaking governance” — and that’s the argument you need ready when your CFO asks.

 

Quality is the advantage

The throughline is the same across all five: access to AI is no longer the advantage. Quality, governance, and workflow integration are. If your localization program is built on that foundation, you're ahead. If it isn't, there's no better moment to start.

Ready to build quality into your translation program from the start? Join us at the Global Ready Conference on May 20 to find out how.

 

Register for Global Ready Conference here

 

Olga Beregovaya

Vice President of AI
Olga has over 20 years of experience in language technology, NLP, machine learning, global content transformation, and AI data development, and is passionate about growing businesses through driving change and innovation. 

Why wait to translate smarter?

Talk to a member of the Smartling team to see how we can help you get more out of your budget by delivering the highest-quality translations faster and at significantly lower cost.