
Code.org is a nonprofit organization headquartered in Seattle with a bold mission: ensuring every student in every school has the opportunity to learn computer science and artificial intelligence as part of their core education, regardless of background or future career pathway. Since launching in 2013, Code.org has grown into a global movement and now supports more than 107 million students and 3 million teachers across 190+ countries.
As the organization expanded internationally, Code.org recognized that localization wasn’t just a translation need—it was a mission-critical effort. Their goal was to ensure that global learners and non-English speaking users could access content that felt culturally and linguistically relevant, not simply converted word-for-word.
As Code.org Product Manager Doyeon Kim shared during the webinar, “We realized how critical it is to actually localize our content, not just translate it, but really make it feel culturally and linguistically relevant to global learners and non-English speaking users.”
Code.org had already invested significantly in localization before partnering with Localize, but their existing workflow wasn’t sustainable as the organization continued to scale. The biggest issue wasn’t the translation itself—it was the manual effort required to move work through the system and get updates live.
Doyeon explained that their previous process relied heavily on human translators for both initial translation and quality checks. This created slow review cycles and limited throughput. “For us, the biggest challenge was really the manual throughput and review cycles,” she said.
Even once translations were completed, Code.org still faced a significant engineering bottleneck: under the previous system, it could take one to two weeks for translations to be published and live on the site. “Even after translations were done, it could take anywhere from a week or two weeks to actually get them published and live,” Doyeon shared.
Behind the scenes, the operational load was also growing. The team was coordinating across agencies, contractors, and a large volunteer translator network spread across regions and time zones. That model required constant onboarding, answering questions, managing contracts, and coordinating timelines. Over time, it became increasingly labor-intensive and frequently resulted in delayed global release schedules.
To scale localization without expanding headcount, Code.org shifted away from a workflow built around heavy project management and manual coordination. Instead, they moved to a system that leveraged AI for speed while preserving human review where it mattered most.
The team began using machine translation to generate first-pass translations quickly, then focused their time and effort on targeted human review and post-editing. Rather than managing hundreds of translators across different workflows, Code.org partnered with a smaller, carefully selected group of local experts and partners to refine quality and ensure cultural relevance.
Doyeon described this as a major efficiency unlock: “We moved away from that model and we use machine translation for the initial task and then focus more of our time and efforts on human review and post-editing.”
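To make the shape of that workflow concrete, here is a minimal sketch of a generic machine-translation-first pipeline with human post-editing. It is illustrative only and does not represent Code.org's or Localize's actual implementation; every name in it (machine_translate, needs_human_review, the triage rule) is a hypothetical placeholder.

```python
# Illustrative sketch of an MT-first, human-post-editing workflow.
# All function names and rules here are hypothetical placeholders,
# not Localize's API or Code.org's production pipeline.

from dataclasses import dataclass


@dataclass
class TranslationItem:
    source: str
    target_lang: str
    mt_draft: str = ""
    final: str = ""
    reviewed: bool = False


def machine_translate(text: str, target_lang: str) -> str:
    """Placeholder for any MT engine; returns a tagged draft for the demo."""
    return f"[{target_lang} MT draft] {text}"


def needs_human_review(item: TranslationItem) -> bool:
    """Hypothetical triage rule: route longer, learner-facing strings to reviewers."""
    return len(item.source.split()) > 3


def run_pipeline(strings, target_lang):
    queue = [TranslationItem(source=s, target_lang=target_lang) for s in strings]
    for item in queue:
        # Step 1: fast first-pass machine translation for every string.
        item.mt_draft = machine_translate(item.source, item.target_lang)
        if needs_human_review(item):
            # Step 2: a local expert would post-edit the draft here.
            item.final = item.mt_draft  # placeholder for the post-edited text
            item.reviewed = True
        else:
            item.final = item.mt_draft
    return queue


if __name__ == "__main__":
    sample = ["Start coding", "Every student deserves computer science"]
    for item in run_pipeline(sample, "es"):
        print(item.target_lang, item.reviewed, item.final)
```

The point of the pattern is the division of labor: the machine handles volume, while human time is concentrated on the strings where cultural and linguistic judgment matters most.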
Localize also enabled Code.org to collaborate more effectively with volunteers and stakeholders. Instead of relying on one-on-one onboarding sessions, the team could invite contributors directly into the platform and onboard them in groups. Volunteers were able to contribute using the on-page editor and see updates reflected immediately, making collaboration faster and more intuitive.
As Doyeon explained, “Instead of doing this one-on-one onboarding session, now we can do a group onboarding session… invite them as a translator on the platform using the on-page editor… and they can see literally what they made a change on the platform immediately after they update it.”
Beyond speed and collaboration, Code.org also adopted tools to improve consistency across languages. Glossary features helped enforce preferred terminology and maintain a unified voice across global markets. The team also began using Translation Quality Scoring (TQS) to evaluate quality across languages and better understand which models performed best for different regions as they expanded into new markets.
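As a rough illustration of what glossary-driven consistency checking can look like, the sketch below flags translations that miss a preferred term. It is a simplified, hypothetical example, not the glossary or Translation Quality Scoring features described in the webinar, and the glossary entries are made up.

```python
# Illustrative sketch of a glossary-consistency check.
# The glossary contents and the check itself are hypothetical examples,
# not Localize's glossary or TQS implementation.

GLOSSARY = {
    # source term -> preferred translation per language (made-up entries)
    "computer science": {"es": "ciencias de la computación", "fr": "informatique"},
    "student": {"es": "estudiante", "fr": "élève"},
}


def glossary_violations(source: str, translation: str, lang: str) -> list[str]:
    """Return source terms whose preferred target-language rendering is missing."""
    issues = []
    for term, targets in GLOSSARY.items():
        preferred = targets.get(lang)
        if term in source.lower() and preferred and preferred not in translation.lower():
            issues.append(f"'{term}' should be rendered as '{preferred}' ({lang})")
    return issues


if __name__ == "__main__":
    print(glossary_violations(
        "Every student can learn computer science.",
        "Todos los estudiantes pueden aprender informática.",
        "es",
    ))
    # Flags that 'computer science' did not use the preferred Spanish term.
```

Checks of this kind, combined with per-language quality scoring, are what let a small team keep terminology and tone consistent as the number of supported languages grows.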
The impact of the shift was immediate. Code.org reduced localization cycle time by more than half, removed the publishing lag that previously slowed releases, and created a workflow that scaled without requiring more internal resources.
“First, of course, speed,” Doyeon shared.
Where the team previously waited one to two weeks to publish translations, they can now push updates in real time—an improvement that fundamentally changed their ability to launch and maintain global content. “Now we can push updates in real time,” she said. “That’s been a game changer.”
In addition to speed, Code.org also saw measurable improvements in consistency. With glossary tools in place, the team reported stronger alignment in tone and terminology across languages. This reduced reviewer fatigue, improved quality, and helped build trust with global users as the platform continued to grow internationally.
As Doyeon noted, “We’ve been seeing a lot of improvement already in tone, terminology, and overall quality… it’s been helping us maintain a unified voice and tone across different languages.”