
[Gen AI Service Launching Story] AhnLab Global AI Translation Service


TecAce, a Seattle-based IT company with 24 years of experience, shifted its vision and business strategy toward AI Transformation in 2023 and has since been developing solutions built on LLM foundation models while carrying out collaborative projects with partners.


In this post, I will share some of the experiences gained while developing the AhnLab AI Global Translation Service, which was commercialized in June 2024.


CHALLENGE

AhnLab, Inc., a global security company, provides security news and intelligence to customers across many industries so that they can quickly detect and respond to the security vulnerabilities that keep emerging as the IT industry grows. This work involves a large number of specialized security terms, and even a familiar term often carries a meaning different from its general definition, so security-specialized translation is essential. When a security risk is discovered in the global IT market, the latest security reports must be distributed quickly. However, trained translators who know both specialized security terminology and AhnLab's own technical terms are scarce, and even with machine translation services, quality errors occur frequently, limiting how quickly MTPE (Machine Translation Post-Editing) can respond.


SOLUTION

Through the PoC stage, TecAce trained on a variety of translation data pairs to build the competitive, security-specialized AI translation service AhnLab required, and as a result was able to generate translated reports of a quality comparable to AhnLab's existing reports. Unlike traditional system development, generative AI can cause unpredictable issues; we addressed these through prompt engineering, fine-tuning, and pre/post-processing code, which led to commercialization. We learned that generative AI cannot produce perfectly correct output for every input and that the same input can yield different outputs, so evaluating results from multiple angles and building a customer-tailored system are extremely important.


LESSON


1. How to Ensure the Reliability of LLMs


Recently, various benchmark rankings of foundation models have been published emphasizing each model's excellence, but it is not easy for a company to judge whether a given model suits the service it intends to build. The TecAce AI development team tested AI translation capability with external models from OpenAI, Meta, Anthropic, and others, and selected the optimal translation model through a self-built model validation platform. To do this, the team evaluated model responses against the client's existing translation data using BERTScore and industry-standard metrics such as METEOR and BLEU. With this evaluation system and process in place, new models can be evaluated continuously, and operational cost efficiency can also be reviewed against the client's on-premises requirements and traffic.
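Below is a minimal sketch of how a candidate translation can be scored against a client-provided reference with the three metrics named above. The sample sentences are illustrative; in practice the references come from the client's existing translation data.

```python
# Scoring one candidate translation against a reference with BLEU, METEOR,
# and BERTScore. Requires: pip install sacrebleu nltk bert-score
import nltk
from nltk.translate.meteor_score import meteor_score
from sacrebleu import corpus_bleu
from bert_score import score as bert_score

nltk.download("wordnet", quiet=True)  # METEOR needs WordNet data

references = ["The vulnerability allows remote code execution on unpatched hosts."]
candidates = ["This vulnerability enables remote code execution on unpatched hosts."]

# BLEU: n-gram overlap with the reference corpus
bleu = corpus_bleu(candidates, [references]).score

# METEOR: unigram matching with stemming and synonym support (token-level input)
meteor = meteor_score([references[0].split()], candidates[0].split())

# BERTScore: semantic similarity computed from contextual embeddings
_, _, f1 = bert_score(candidates, references, lang="en", verbose=False)

print(f"BLEU: {bleu:.2f}  METEOR: {meteor:.3f}  BERTScore F1: {f1.mean():.3f}")
```

Running the same batch of test sentences through each candidate model and comparing these scores is what makes model-by-model selection, and a later cost-versus-quality review, repeatable.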


2. Deriving Translation Results Tailored to the Client by Understanding Context


The digital translation industry maintains consistent translation quality by applying 1:1 word substitution against a glossary and by reusing existing translations through TM (Translation Memory). In the same way, TecAce organized the client's source texts and existing translations as paired data and trained on them so that the model produces translations optimized for the client's style.
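As a simple illustration of the glossary idea, the sketch below normalizes terminology in a generated English translation so that it matches the client's approved terms. The glossary entries and example sentence are hypothetical, not AhnLab's actual glossary.

```python
# Glossary-based 1:1 term substitution applied as a post-processing pass.
import re

# Hypothetical client glossary: variant wording -> approved term
GLOSSARY = {
    "malicious code": "malware",
    "weak point": "vulnerability",
    "encryption ransom attack": "ransomware attack",
}

def apply_glossary(text: str, glossary: dict[str, str]) -> str:
    """Replace variant terms with the client's approved terms,
    longest entries first so overlapping phrases do not clash."""
    for variant in sorted(glossary, key=len, reverse=True):
        text = re.sub(re.escape(variant), glossary[variant], text, flags=re.IGNORECASE)
    return text

print(apply_glossary("Attackers exploited the weak point to drop malicious code.", GLOSSARY))
# -> "Attackers exploited the vulnerability to drop malware."
```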


This approach not only learns to align with external samples but also delivers customized translation quality based on data validated against the client's needs. In addition, by using the chat mode provided by the LLM and fine-tuning on data split into chunks according to the surrounding context, we obtained AI translation results with a consistent style.
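The sketch below shows one way such source/translation pairs can be packaged as chat-format fine-tuning records; the JSONL layout follows OpenAI's chat fine-tuning format, while the system prompt, file name, and sample pairs are illustrative assumptions.

```python
# Turn client source/translation pairs into chat-format fine-tuning records.
import json

pairs = [
    ("이 보고서는 최신 랜섬웨어 동향을 다룹니다.",
     "This report covers the latest ransomware trends."),
    ("공격자는 알려진 취약점을 악용했습니다.",
     "The attacker exploited a known vulnerability."),
]

SYSTEM = ("You are a security-domain translator. "
          "Translate Korean into English in the client's report style.")

with open("train.jsonl", "w", encoding="utf-8") as f:
    for source, target in pairs:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": source},
                {"role": "assistant", "content": target},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```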


3. Managing Unexpected Errors in Gen AI


When building a service on top of external LLM models, the most worrying aspect is the occurrence of unpredictable errors. A model that worked normally yesterday may produce unintended results today, and sometimes an issue cannot be reproduced and therefore cannot be fixed. Unless you directly manage all of the data the LLM was trained on, this problem is unavoidable, and fundamental improvements are difficult.


To address these hallucinations and unexpected errors, TecAce develops and operates metrics-based sLLMs tailored to each company. This approach evaluates the output of the fine-tuned generative AI against standards and data that match each company's business characteristics, which is essential for ensuring service quality at a commercial level. To turn the evaluation results into better outputs, pre- and post-engineering must accompany the process.
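As one concrete example of post-engineering, the sketch below runs deterministic checks on every generated translation before it is published. The specific rules and limits are illustrative assumptions, not TecAce's actual checks.

```python
# Rule-based post-checks on a generated translation: glossary terms enforced,
# no source-language text left over, and a plausible length ratio.
import re

def post_checks(source: str, translation: str, glossary: dict[str, str]) -> list[str]:
    issues = []
    # 1. Every glossary source term must be rendered with its approved target term.
    for src_term, tgt_term in glossary.items():
        if src_term in source and tgt_term.lower() not in translation.lower():
            issues.append(f"glossary term '{src_term}' not rendered as '{tgt_term}'")
    # 2. No Korean characters should remain in the English output.
    if re.search(r"[\uac00-\ud7a3]", translation):
        issues.append("untranslated Korean text remains")
    # 3. Output length should stay within a plausible range of the source length.
    ratio = len(translation) / max(len(source), 1)
    if not 0.5 <= ratio <= 3.0:
        issues.append(f"suspicious length ratio {ratio:.2f}")
    return issues  # empty list -> publish; otherwise retry or route to review
```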


4. Strengthening Service Competitiveness Through Continuous Quality Assurance


As industries adopting AI continue to grow, each company is preparing AI services that reflect the characteristics and importance of its own data. In the early stage of the AI market, building customized AI solutions with stable quality is crucial: it secures a competitive advantage in AI technology and contributes to the success of the business.


Whenever the engine is updated, regression testing is conducted to confirm that existing quality is not affected, and stability must be secured by periodically monitoring the LLM's quality through batch testing. Naturally, the evaluation threshold needs to be adjusted according to the sensitivity and importance of the service where the LLM is applied.
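A minimal sketch of such a regression check is shown below: the updated engine translates a fixed test batch, and the aggregate score must not fall more than a service-specific tolerance below the stored baseline. The baseline value, tolerance, and translate function are illustrative assumptions.

```python
# Batch regression check run on each engine or model update.
from sacrebleu import corpus_bleu

BASELINE_BLEU = 42.0   # score recorded for the currently shipped engine
TOLERANCE = 1.0        # tighten for high-sensitivity services

def run_regression_check(translate, test_sources, test_references):
    """Raise if batch quality regresses beyond the allowed tolerance."""
    candidates = [translate(src) for src in test_sources]
    bleu = corpus_bleu(candidates, [test_references]).score
    if bleu < BASELINE_BLEU - TOLERANCE:
        raise AssertionError(
            f"BLEU dropped to {bleu:.2f}, below baseline {BASELINE_BLEU:.2f}"
        )
    return bleu
```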


TECACE AI SUPERVISION

Rather than developing its own LLM models and competing head-to-head in benchmark rankings, TecAce focuses on turning AI into commercial services: finding models suited to each client company's needs, fine-tuning them, and verifying and improving their quality. To strengthen the competitiveness of its AI quality evaluation offering, TecAce provides AI Supervision, a customized quality monitoring solution built on this technology and experience, with a range of evaluation systems and processes. We expect to stand out in the field of AI Quality Assurance for the development and operation of AI services, and we are looking for partners to create success stories together in the new AI market.


Contact

TecAce is an end-to-end service provider in the AI industry, covering everything from AI infrastructure setup to solution development and operation. We help businesses implement and scale AI-based services within their own operations.



US OFFICE


sales@tecace.com


+1 (425) 952-6073

840 140th Ave NE, Bellevue, WA 98005

SOUTH KOREA OFFICE


gx@tecace.com


+82 (2) 6959-1200

14F, 373 Gangnam-daero, Seocho-gu, Seoul 06621, South Korea


BizDev Sr. Director Charlie Kim (charliekim@tecace.com)


