LY Corporation (the "Company") is committed to protecting user privacy and ensuring proper information management while widely utilizing various AI technologies in services and development, in line with its Basic Policy on AI Ethics. For generative AI specifically, the Company aims to become its most active user in Japan by building and enhancing the technical infrastructure and usage environment for employees. By promoting the use of generative AI to improve operational efficiency and to provide more convenient services to users, the Company strives to expand its revenue over the medium to long term.
Leveraging its strengths, such as extensive Japanese data and numerous touchpoints with Japanese users, the Company is developing a "highly practical" environment to propel the cycle of adopting generative AI.
As guiding principles for using AI safely while protecting user privacy, LY Corporation has established guidelines and rules such as the Basic Policy on AI Ethics. For details, please refer to "Responsible AI."
LY Corporation adopts a multi-vendor strategy, creating an environment where employees can choose from a variety of generative AI options and collaborating with the best partners to build the technological foundation for utilizing generative AI. Large language models provided by multiple AI development companies, including OpenAI and Google, are also made available.
Training on the risks of generative AI and prompt crafting techniques is mandatory for all employees and officers,* ensuring they acquire basic knowledge and literacy. With this foundation, they can use the conversational AI assistant "ChatGPT Enterprise" in-house starting from June 2025. These efforts aim to establish a safe environment for employees and officers to utilize diverse generative AI, boosting productivity.
*Includes officers, permanent employees, fixed-term employees, contract employees, temporary staff, part-time employees, and subcontract employees.
LY Corporation offers various features leveraging generative AI in services such as LINE and Yahoo! JAPAN. For details, please refer to "Applications of Generative AI in Various Services."
In June 2021, LY Corporation established the Expert Panel on AI Ethics, where the appropriate use of AI in consideration of societal needs, particularly its ethical aspects, is discussed continuously with external experts. In July 2022, the Board of Directors resolved to establish the Basic Policy on AI Ethics, which is the guiding principle for using AI safely while protecting user privacy.
Furthermore, in October 2023, to realize effective AI governance based on the Basic Policy on AI Ethics, the AI ethics governance division was established.
In May 2025, in line with the principles outlined in the Basic Policy on AI Ethics, the Company established the Basic Regulations on AI Governance which define the framework and responsibilities related to AI governance. These regulations establish the position of the person responsible for AI governance, to be appointed by the president, and clearly outline the responsibilities of the person responsible for AI governance, officers, employees, and individual organizations.
Additionally, in the same month, the AI Governance Guidelines were updated to respond to the rapid advancements in generative AI technology. These guidelines not only promote proactive AI utilization but also provide detailed recommendations on risk management approaches that align with the latest technological trends.
The Basic Policy on AI Ethics outlines the following eight items:
To effectively adhere to these principles, the Company undertakes the following initiatives:
The risks associated with the use of AI extend beyond technical safety and legal compliance—they encompass a wide range of concerns, including privacy and fairness. When assessing risks, it is necessary to consider how acceptable they are to society at that point in time.
LY Corporation establishes a governance framework that involves diverse internal and external stakeholders to appropriately address these complex and multifaceted risks.
Within the Company, the AI ethics governance division plays a central role in building a framework where relevant experts can be flexibly involved based on the content of the AI use and nature of the associated risks. This enables LY Corporation to make well-balanced and integrated decisions that take into account the complexity and multifaceted nature of the risks.
For major decisions related to AI governance, the Company draws on the expertise of the Expert Panel on AI Ethics. The Company also refers to a broad range of perspectives, including domestic and international policy trends, public debates, and user feedback, in its efforts to establish transparent and proactive governance that is both collaborative and sound.
Through these comprehensive initiatives, the Company establishes a system to prevent issues associated with the use of generative AI and provides services that users can utilize with a sense of security and safety.
Prioritizing user safety and trust, LY Corporation has long been committed to implementing best practices in security and privacy.
Data, which is essential for using AI, is also handled appropriately in accordance with the Basic Policy on Data Protection and LY Corporation Group's Cybersecurity Policy.
In addition to these efforts, the Company is working to provide safe and reliable AI services through ongoing, cross-functional collaboration involving the AI ethics governance division, the security division, and other relevant teams to address challenges specific to AI systems.
Going forward, the Company remains committed to balancing technological innovation with safety, ensuring that its AI services continue to offer a secure experience for all users.
LY Corporation takes a risk-based approach to AI utilization, properly assessing potential risks from ethical, legal, and social perspectives, and implementing appropriate measures based on the level of the risk.
Specifically, the Company has established a framework that allows for flexible adjustment of support and engagement methods depending on the degree of risk.
When a risk is judged to be high, the AI ethics governance division takes the lead in decision-making in collaboration with C-level executives, including the CDO and CISO, as well as cross-functional committees.
For medium-level risks, appropriate actions are taken based on expert advice and support aimed at risk mitigation.
Furthermore, for low-risk use cases that do not fall into the categories above, the Company has clearly defined the applicable conditions in advance and ensures that employees are able to make autonomous and appropriate decisions based on a foundational understanding of AI-related risks gained through mandatory training content.
This framework promotes responsible AI utilization across the Company while encouraging innovation driven by creativity at the operational level.
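As an illustration only, the tiered handling described above can be thought of as a simple routing rule. The sketch below is a hypothetical rendering under assumed risk levels and escalation paths; it does not describe the Company's actual internal implementation.

```python
from enum import Enum


class RiskLevel(Enum):
    HIGH = "high"      # e.g., high-impact, externally facing use of AI (assumed example)
    MEDIUM = "medium"  # e.g., use cases needing expert support (assumed example)
    LOW = "low"        # e.g., use cases covered by pre-defined conditions (assumed example)


def route_ai_use_case(risk: RiskLevel) -> str:
    """Hypothetical routing of an AI use case to the handling path described above."""
    if risk is RiskLevel.HIGH:
        # Led by the AI ethics governance division together with
        # C-level executives (e.g., CDO, CISO) and cross-functional committees.
        return "escalate: governance division, executives, and committees"
    if risk is RiskLevel.MEDIUM:
        # Handled with expert advice and support aimed at risk mitigation.
        return "consult: expert advice and mitigation support"
    # Low-risk cases proceed under pre-defined conditions, relying on
    # employees' baseline understanding gained through mandatory training.
    return "proceed: autonomous decision under pre-defined conditions"


print(route_ai_use_case(RiskLevel.MEDIUM))
```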
To appropriately identify and assess risks associated with AI utilization, the AI ethics governance division has identified key AI risk areas based on the Company’s Basic Policy on AI Ethics (as of May 2025).
Key AI risk areas:
In addition to AI-specific perspectives, the Company conducts a comprehensive analysis that includes factors such as compliance and transparency. When deciding whether or not to utilize AI internally or in its own products, the Company works closely with divisions including legal affairs, privacy, security, and intellectual property to evaluate risks from specialized perspectives before making a holistic decision on whether or not to proceed.
These AI risk assessment perspectives are clearly outlined in the AI Governance Guidelines, which provide easy-to-understand criteria and cautionary notes to help employees make informed and autonomous decisions about AI use. This approach is designed to ensure that responsible AI utilization is practiced at the individual level, even in initiatives driven by front-line teams.
By making a multi-faceted assessment of both AI-specific risks and the more general risks associated with traditional information systems, LY Corporation supports responsible AI use and ongoing risk management.
LY Corporation requires all officers and employees* to undergo mandatory training to enhance awareness of the risks associated with the use of generative AI, such as information leakage, rights infringement, privacy violations, and hallucinations, and to promote decision-making and actions that prevent these risks from materializing. This training consists of e-learning and tests, and it also covers techniques for crafting prompts to improve the quality of generative AI outputs. The training content is regularly reviewed in response to social conditions and advancements in AI technology, and all officers and employees undergo the training approximately once every six months.
*Includes officers, permanent employees, fixed-term employees, contract employees, temporary staff, part-time employees, and subcontract employees.
- Training completion rate for the first half of FY2024: 97%
| Topic | Details |
|---|---|
| Major risks | Information leakage, violation of rights, inaccurate output, discriminatory output, privacy |
| Prompts | How to improve output quality (good and bad examples of prompts, sample templates) |
To further promote the active use of AI, LY Corporation is also providing targeted training in FY2025 for employees who are using or considering the use of generative AI to deliver services or create content. The training covers risks associated with business use of generative AI and their mitigation measures, as well as an overview of relevant overseas regulations that may apply.
AI technology, particularly generative AI, is evolving at a rapid pace. The environment is also changing swiftly, with major tech companies and new enterprises making significant strides and regulatory frameworks being developed both in Japan and abroad.
Under such circumstances, LY Corporation is creating a flexible governance mechanism that is not restricted by standardized operations and can accommodate these environmental changes.
When new technologies emerge or when there are changes in regulations or social conditions, the AI ethics governance division takes the lead in working closely with relevant departments and product development teams to swiftly implement a variety of governance measures.
To keep pace with the rapidly evolving AI landscape, the Company has adopted an agile approach to internal standards, including guidelines—treating them not as fixed documents but as resources subject to continuous review and revision. Revisions are made promptly as needed, based on internal feedback, operational challenges, and advice from external experts, and are shared across the Company.
In addition, services and internal systems that use generative AI are developed and operated in accordance with internal rules that reflect the latest technologies and social trends. Training content is also regularly updated to incorporate current developments, helping employees continuously enhance their knowledge and ethical awareness.
Through this agile governance framework, the Company is able to actively promote the use of AI while also ensuring responsible practices in response to both technological advancement and societal expectations.
LY Corporation offers various features leveraging generative AI in services such as LINE and Yahoo! JAPAN.
As part of the growth strategy for FY2024 and beyond, which focuses on reinforcing the media and search domains based on ID linkage, Yahoo! JAPAN Search is being improved through the release of features that use generative AI to enhance convenience, based on the diverse needs of users.
The communication app "LINE" offers "LINE AI," a service that allows users to ask questions, gather information, and generate images for free, just like chatting with a friend, utilizing APIs such as those from OpenAI. By connecting users with generative AI, LINE aims to enhance communication richness and evolve into an even more convenient communication app.
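As a rough illustration of the kind of API integration mentioned above, a minimal chat-style request to the OpenAI API might look like the sketch below. This is not LINE AI's actual implementation; the model name, prompts, and client settings are assumptions for illustration only.

```python
# Minimal sketch of a chat-style request using the OpenAI Python SDK.
# Illustrative only: the model, prompts, and settings are assumptions and
# do not reflect how LINE AI is actually built or configured.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You are a friendly assistant inside a chat app."},
        {"role": "user", "content": "Suggest three quick dinner ideas."},
    ],
)
print(response.choices[0].message.content)
```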
In April 2024, "Yahoo! JAPAN Shopping" established a specialized team called the "Generative AI Tackle Office" within the organization. This team actively provides features that enable users to shop more conveniently and at a better value.
At LY Corporation, monitoring for non-compliant posts is conducted through a combination of AI and human review by a dedicated team, while respecting users’ freedom of expression.
The Company has implemented a machine learning system that utilizes resources such as its proprietary deep learning supercomputer, enabling the swift detection of posts that violate the prohibitions. While specific applications vary by service, the system estimates the likelihood that a post violates the prohibitions. If there is a possibility of a violation, the post will be automatically removed or prioritized for review by the dedicated team.
AI is employed to automatically assess the risk of user-generated content such as text, images, and videos violating the Company’s posting guidelines. This AI is a violation detection system optimized for LY Corporation's services, creating an environment where user posts can be reviewed appropriately.
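A minimal sketch of the threshold-based handling described above is shown below, assuming a hypothetical scoring model and thresholds. The function names, threshold values, and actions are illustrative assumptions, not the Company's actual detection system.

```python
# Illustrative sketch of threshold-based moderation routing.
# The scoring function, thresholds, and actions are assumptions and do not
# describe LY Corporation's actual violation detection system.
AUTO_REMOVE_THRESHOLD = 0.95   # assumed: near-certain violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.50  # assumed: possible violations are queued for the dedicated team


def estimate_violation_probability(post_text: str) -> float:
    """Placeholder for a trained classifier that estimates the probability
    that a post violates the prohibitions."""
    # In practice this would call a machine learning model; a trivial keyword
    # heuristic is used here purely so the sketch runs end to end.
    return 0.99 if "prohibited-term" in post_text else 0.10


def route_post(post_text: str) -> str:
    score = estimate_violation_probability(post_text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove automatically"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "prioritize for review by the dedicated team"
    return "publish"


print(route_post("hello everyone"))              # -> publish
print(route_post("this has a prohibited-term"))  # -> remove automatically
```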
Number of posts made in FY2024
| Service | Number of posts |
|---|---|
| Yahoo! JAPAN Knowledge Search | 66,199,309 |
| Yahoo! JAPAN Finance | 29,415,652 |
| Yahoo! JAPAN News Comments Section | 113,995,832 |
| LINE OpenChat | 5,514,828,787 |
| LINE VOOM | 403,331,897 |
- AI and other tools are utilized for purposes such as automatic removal and assisting human reviewers in making removal decisions.
By combining AI-powered systems with human eyes, LY Corporation conducts ad reviews around the clock, year-round, aiming to quickly deliver quality ads that align with user intent.
Number of disapprovals made in Yahoo! JAPAN Ads (FY2024)
| Item | Number of disapprovals |
|---|---|
| Accounts | 10,364 |
| Ad creatives | 197,910,220 |
To swiftly and reliably detect and remove problematic content, LY Corporation combines AI-based systems with human eyes to conduct around-the-clock, year-round monitoring.
Account suspensions and content removals (second half of 2024)
| Item | Number |
|---|---|
| Suspended accounts | 137,813 |
| Content removals | 27,626 |
As the use of generative AI and other advanced technologies expands, the volume of data processing is rapidly increasing, raising new concerns about greenhouse gas (GHG) emissions. In response to this challenge, the LY Corporation Group is committed to achieving carbon neutrality in its business activities by FY2030, aiming to reduce Scope 1 (direct) and Scope 2 (indirect) GHG emissions to net zero through measures such as the procurement of electricity derived from renewable energy with additionality. The Group is also targeting net zero for total emissions, including Scope 3 (other indirect emissions) related to suppliers and the supply chain, by FY2050, in line with the Basic Policy on AI Ethics goal of realizing a peaceful and sustainable society.