Setting up a localization team? It’s a complicated undertaking, and it’s possible to get wrong – as we heard in our recent webinar, not all localization teams are equal.
Whether you’re building your first localization department or your 100th, this is our guide to the three most important decisions you’ll have to make to get things right.
1/ How do you incentivize / direct your l10n focus?
In our recent playbook for localization leadership, we talked about how different functions think about what to localize and why they’re localizing. This is the most important question you can answer.
To quickly recap, we drew a distinction between:
- A delivery narrative (and team) – a delivery narrative goes, “localization teams are about delivering localization”. That’s sometimes reflected in teams where localization works as a tail-end language support effort at the end of a product lifecycle.
- An investment narrative (and team) – these teams are able to marshal their own localization “investments”; the loc team is judged on both the effort and the commercial results of its localization work.
- A strategy narrative (and team) – these teams are more strategic globalization teams which own the loc function and act as the voice of international users in the product.
How does it affect QA?
We asked our delivery team how these different kinds of team affect the localization QA capabilities they use at GAT.
“When a team is a delivery team, they’re often using QA to verify they’ve executed the line item quickly and effectively. So, they’re interested in the fast turnaround GAT can deliver, and they’re interested in QA which confirms that their localized changes are accurate to the language and function flawlessly. Their external QA is adversarial to their team and LSP, because the system is designed to ensure that everyone is marking everyone else’s homework.”
“The alternative is when localization teams are using LQA and testing to identify where they should be setting the agenda. Then, they’re focused on a question like ‘what’s the smallest amount of work we can do to achieve an outcome?’ When we find something actionable, whether it’s a localization, UX, or even a functional issue, we work with the team to help them identify the opportunity.”
“That’s a completely different way of working – one operational approach which enables faster release cycles, and one commercial approach which drives commercial outcomes. Both of them work great – but my favourite is the second.”
2/ How do you build local expertise?
The second question we’ve seen teams handle very differently is how to build local expertise. This is often a hiring question, i.e. “do you employ local country experts – and if so, where? And what kind of person?”
Employing local experts is a popular choice. Some members of our recent interview series employed them in some countries but not others. (Japan came up most often: a large, affluent market which can often require specialized knowledge to produce great software.)
That person is there to support both proactive insight on the globalization strategy side and a quality sense-check at the end. (The card expenses solution Pleo brought all its quality checks in-house for every market, asking employees who spoke the language – but were not copywriting professionals – to check that copy matched brand tone, rather than using an external but professional LSP.)
What’s the challenge?
In some senses, professionals dubbed “country experts” hold positions of tremendous power. They become not only the owner of quality, but a proxy for user sentiment in that country. In some scenarios they can be a kind of local founder.
But as founders quickly learn, it’s not possible to be an expert in everything. Think about the size and breadth of skills required to make your domestic business work. Copywriting, UX and UI, local solutions, local marketing, and local sales are all different skills – and one person probably can’t do everything.
Their challenge is therefore to harness the skills in the main business. Is it possible to use the US marketing team’s skills in Japan? The US solutions design function in India? And how do you gather the necessary information for those people to do great work?
How does it affect QA?
We asked our delivery team, and they said:
“So often, what we try to produce is communicable globalization data.
“We generally prefer to use testers based in the country of the release, who are native speakers and therefore best attuned to the local language and culture of the region. But it’s tough, because they need to communicate what they think in a way that a domestic product team can understand. That might be something easy like an overlong string, or it might be a translation dispute, depending on what the client has chosen to audit.”
“It’s really essential that they manage to communicate effectively – it’s not enough for our tester to decide something is important. We train our testers to understand what the product teams we work with feel is important, and to describe clearly in English what they feel an issue is, including the full reproduction details and all the rest of it.
“But it’s tough – there’s no perfect way to do local expertise.”
3/ How do you scale your localization approach?
Whatever you decide on the previous two questions, you’re going to have to invest in scaling your localization at some point (and, with it, scaling quality). Multiple localization managers we have spoken to are using broad-scale machine translation without checks; and although they don’t always want to admit it, the quality of machine output is indeed getting better.
You can also use AI for some kinds of quality check at the policing stage. We work closely alongside automated tests of various kinds and enable a QA approach focused on generated and personalized software, keeping a robust quality framework in place without demanding exponential QA alongside exponential software releases.
How can it affect QA?
We asked our delivery team, and they said:
“The needs of scale make localization really difficult. There are 200 countries out there, all with specific needs and requirements. Businesses often have hundreds of thousands – sometimes millions – of words. And as software experiences become more reliant on generative technology, and more bespoke, there are exponentially more user experiences out there – it’s tough to check everything.”
“We’re one of a few solutions that help. Access to 100K testers in 190 countries is a start. We’re evolving to meet the needs of businesses with a more generative approach to the software they produce.”
If you're interested in getting started with crowdtesting, you can start a conversation with Global App Testing below.