“Can’t we just use AI?”: How learning agencies can answer this with confidence

Posted: 30 Mar 2026
AI in learning localisation

“Can’t we just use AI for this?”

It’s a question that comes up in almost every multilingual conversation now. Sometimes it comes from clients looking to reduce cost or speed up delivery; sometimes from internal teams who have seen what machine translation can do and are wondering how far it can go.

On the surface, it sounds like a simple question. In practice, however, it’s anything but. 

What we see across learning agencies is that this question often appears at the point where decisions feel least clear. The project is already defined, timelines are tight, and there’s pressure to move quickly across multiple languages. At that stage, AI can feel like an obvious answer.

The difficulty is that “using AI” is not one single approach. It’s a range of options, each with different implications for quality, effort and risk. Without a clear structure, those options can be hard to explain and even harder to scope confidently.

Why does the question feel difficult to answer?

Most agencies are comfortable talking about learning design, delivery and experience. Those conversations are well-practised and familiar. When it comes to localisation, however, and particularly AI localisation, the conversation can feel less defined.

The problem isn’t the capability; agencies (or their partners) usually have that in spades. The problem is how we’re framing the question. 

If the question is “AI or human?”, there are only two possible answers, and neither of them feels quite right. A fully AI-led approach can raise concerns about quality, tone and learner impact. On the other hand, a fully human approach can feel too slow or too expensive, particularly for large-scale rollouts.

In reality, most multilingual learning programmes sit somewhere in between.

What agencies need is not a yes-or-no answer, but a way to explain how different levels of human input apply to different types of content.

What actually happens in practice

Across the agencies we work with, this is where the conversation has clearly shifted over the past couple of years.

For high-volume, terminology-heavy content, such as compliance modules, product training or system walkthroughs, a more AI-led approach using machine translation with automated post-editing (MTAP) is now a very practical option. When configured properly, this can deliver speed and consistency at scale without introducing unnecessary manual effort.

At the same time, there are many areas where adding human post-editing makes a noticeable difference. Good examples of this are onboarding programmes, where a second layer of human review can help to instil a sense of belonging, or training aimed at influencing or changing behaviour.

There are also cases where full human translation is still the right (arguably the only) choice. Highly sensitive content, complex subject matter or programmes where the learning experience relies heavily on language will often require that level of input.

Rather than choosing one method for everything, agencies are increasingly working with a mix of approaches, applied more deliberately across different parts of the same programme. In practice, this becomes a tiered approach in which different levels of AI and human input are applied depending on the content.

Where things can start to go wrong

Agencies usually run into difficulty when they haven’t defined their mix of translation approaches early on.

Without a clear structure, decisions about AI tend to happen reactively. A client asks whether they can use AI, leaving the agency to assess suitability in the moment. The conversation becomes focused on cost rather than content, and it can be difficult to explain trade-offs in a way that feels confident.

The knock-on effects are familiar. Teams spend more time discussing options than expected. Review stages become less predictable. There is a risk that content is either over-processed, incurring unnecessary costs, or under-reviewed, affecting quality and the learner experience.

Pricing also becomes harder to hold. When the level of effort is unclear, explaining what is included and why gets difficult.

None of this is unusual, by the way; it’s simply what happens when agencies don’t anchor their AI decisions to a clear model.

A more confident way to approach the conversation

We find that what makes the biggest difference is introducing structure at the very start of the project.

When agencies define a simple, tiered approach to localisation at the proposal stage, the conversation around AI becomes much easier to manage. Instead of responding to “can we just use AI?”, the discussion shifts to “what level of input does this type of content need?”

That’s a much more grounded starting point, and it makes the planning stage much easier to manage.

In practice, we recommend grouping content into broad categories. High-volume, lower-risk material can be handled with a more AI-led approach like MTAP, while machine translation with human post-editing (MTPE) can handle the bulk of core learning content. Full human translation would be our recommended approach for more complex, creative or high-impact content. 

It might sound complicated at first, but it doesn’t need to be overly detailed. The aim isn’t to create a rigid framework, but to introduce enough structure to guide decisions.

Once you’ve categorised the content, AI becomes easier to position as one part of a wider approach aligned with learning impact.

What does this change for agencies?

When you introduce this kind of strategic thinking early on, the benefits tend to show up quickly.

  1. Conversations with clients become clearer because there is a rationale behind each approach. Instead of discussing AI in abstract terms, agencies can explain how different options apply to different types of content.
  2. Pricing becomes more confident because the level of effort is defined earlier. There is less need to adjust the scope mid-project, and fewer surprises once delivery is underway.
  3. Delivery itself becomes more predictable because review stages are easier to plan, and teams have a clearer understanding of what they need at each stage.
  4. Agencies can position themselves as strategic partners. Rather than reacting to questions about AI, they’re leading the conversation, guiding clients through the options in a way that feels considered and practical.

How we support learning agencies to meet the translation needs of their clients

The question “can we just use AI?” isn’t going away. If anything, it will only become more common as expectations around speed and cost continue to evolve, and as the technology’s capabilities continue to develop.

But the opportunity for learning agencies lies in reframing the question. 

With the right structure in place, it becomes much easier to respond with confidence and actually guide the conversation and the process. Your clients will learn how AI fits within a broader localisation approach that protects both learning outcomes and commercial reality.

This is exactly the thinking behind the model we have been sharing with agencies in our recent session, From Add-On to Advantage.

If this is something you’re navigating, it can be helpful to step back and examine how your client is currently handling different types of content. Even a light-touch review can help clarify where a more AI-led approach is appropriate, where human input adds value, and how to explain that balance more confidently to clients.

If helpful, we are always happy to talk this through and share how other agencies are approaching it in practice.

If this sounds like something you or your team would benefit from, don’t hesitate to get in touch. We’d be more than happy to review where your clients are now and offer some recommendations. No obligation, no cost; just a chance to get a feel for how we work and how we support other agencies navigating the same thing.

 

You can get in touch with me directly by dropping me an email edecker@comtectranslations.com or calling me on 01926 335 681 (ext: 216).