Japan is actively working on creating its own version of ChatGPT, the well-known AI chatbot developed by the US company OpenAI. The Japanese government, in collaboration with major technology firms such as NEC, Fujitsu, and SoftBank, is investing heavily in AI systems built around the Japanese language. The primary motivation behind this initiative is the belief that current large language models (LLMs), such as GPT, perform exceptionally well in English but often falter in Japanese, owing to the complexities of the Japanese writing system, the relative scarcity of Japanese-language training data, and other linguistic nuances.
English Bias in LLMs
LLMs such as ChatGPT are trained on vast amounts of text, predominantly in English, to predict the next word in a sequence. This English-centric training has raised concerns in Japan about the AI's ability to understand and convey the intricacies of the Japanese language and culture. Japanese grammar and structure differ significantly from English, and direct translations can miss the cultural nuances and levels of politeness expected in Japanese communication.
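To make next-word prediction concrete, the following is a minimal sketch using the openly available GPT-2 model from the Hugging Face transformers library as a stand-in for an English-trained LLM (ChatGPT's own models are not publicly downloadable). It prints the most probable next tokens for an English prompt and a Japanese one; the prompt strings and the choice of GPT-2 are illustrative assumptions, not part of the projects described in this article.

```python
# Minimal sketch: next-token prediction with an English-trained model (GPT-2).
# Assumes the transformers and torch packages are installed; GPT-2 is used here
# only as a freely downloadable stand-in for an English-centric LLM.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt, k=5):
    """Return the k most probable next tokens for a prompt."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode([idx.item()]), round(p.item(), 3))
            for p, idx in zip(top.values, top.indices)]

# An English prompt tends to yield fluent continuations; a Japanese prompt
# typically does not, reflecting the English-heavy training data.
print(top_next_tokens("The capital of Japan is"))
print(top_next_tokens("日本の首都は"))
```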
Cultural Sensitivity and Language Complexity
For AI models to be effective and potentially profitable, they need to reflect cultural norms as well as the language itself. A notable shortcoming of ChatGPT is that it can overlook standard Japanese expressions of politeness, making its responses read like direct translations from English. The Japanese writing system, with its two syllabaries of 48 basic characters and more than 2,000 regularly used kanji (Chinese characters), poses a unique challenge for AI models, and this complexity often leads ChatGPT to produce rare characters or unfamiliar words.
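One concrete way to see this difficulty is through tokenization: GPT-style tokenizers generally handle Japanese less efficiently than English, needing more tokens relative to the number of characters because less common kana and kanji are split into multiple byte-level tokens. The following is a minimal sketch, assuming the tiktoken package (OpenAI's open-source tokenizer library) is installed; the sample sentences are illustrative assumptions.

```python
# Minimal sketch: comparing how many tokens a GPT-style tokenizer needs
# for a short English sentence and a short Japanese sentence.
# Assumes the tiktoken package is installed; the sample text is illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4-era models

samples = {
    "English": "Hello, how are you today?",
    "Japanese": "こんにちは、お元気ですか？",
}

for language, text in samples.items():
    tokens = enc.encode(text)
    # Japanese typically shows a much higher token count per character.
    print(f"{language}: {len(text)} characters -> {len(tokens)} tokens")
```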
Japan’s Ambitious AI Projects
Several ambitious projects are underway in Japan to develop a native LLM. One initiative uses Fugaku, a Japanese supercomputer, to train a model mainly on Japanese-language text. Backed by prominent institutions including the Tokyo Institute of Technology, Tohoku University, Fujitsu, and the government-funded RIKEN group, the project aims to release its LLM next year as open source, making its code available to all users. Another significant project, funded by Japan's Ministry of Education, Culture, Sports, Science and Technology, aims to create a Japanese AI program tailored to scientific needs. This model, expected to be released in 2031, will focus on generating scientific hypotheses by learning from published research.
Commercialization and Future Prospects
Several Japanese companies are already commercializing their LLM technologies or planning to do so. NEC, for instance, has begun using its Japanese-language generative AI internally and claims significant reductions in the time needed to create internal reports and software source code. SoftBank, another major player, is investing heavily in generative AI trained on Japanese text and plans to launch its own LLM next year. The ultimate goal of these initiatives is not just linguistic accuracy but also bridging the cultural gap, fostering better international collaboration and research outcomes.
This article was written with the assistance of AI. Edited and fact-checked by Ronan Mullaney.