[AI Seminar] How well does it understand human language, and what can it do?

Jun 7, 2021

The first seminar on Artificial Intelligence (AI) was opened with a lecture by Youjin Kim, a research fellow at the AI Research, Future Technology Center of LG Electronics. LG Electronics is presenting the most realistic future for the era of home IoT by incorporating “speech intelligence” technology in its consumer electronics offerings. Let’s take a look at the lecture offered by Kim, who is in charge of the speech intelligence technology.

 

Session 1. The Status of Natural Language Processing Technology

In the first session, we discussed the status of AI from the natural language processing perspective.

 

You have likely come across AI language processing technology through common AI-enabled speakers, whether you knew it or not. If you say "get me some U-Quiz videos from YouTube" to your AI-enabled speaker, it shows a list of U-Quiz videos retrieved from YouTube through "speech recognition," "intent analysis," "conversation processing," and "speech synthesis." In this pipeline, natural language processing is responsible for "intent analysis" and "conversation processing."
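The four stages above can be sketched as a simple chain of functions. This is a toy illustration only: the function names and the hard-coded intent rule are assumptions made for the sketch, not how an actual AI speaker (or LG's ThinQ platform) is implemented.

```python
# Minimal sketch of the voice-command pipeline described above.
# All names and the canned intent rule are illustrative assumptions.

def speech_recognition(audio):
    # A real system transcribes audio to text; here the input
    # is already text, so we pass it through unchanged.
    return audio

def intent_analysis(text):
    # NLP step 1: figure out what the user wants.
    if "youtube" in text.lower() and "video" in text.lower():
        return {"intent": "search_video", "service": "YouTube",
                "query": "U-Quiz"}  # query extraction is faked here
    return {"intent": "unknown"}

def conversation_processing(intent):
    # NLP step 2: decide how to respond to that intent.
    if intent["intent"] == "search_video":
        return f"Here are {intent['query']} videos from {intent['service']}."
    return "Sorry, I did not understand."

def speech_synthesis(response):
    # A real system renders audio; here we just return the text.
    return response

utterance = "Get me some U-Quiz videos from YouTube"
reply = speech_synthesis(
    conversation_processing(intent_analysis(speech_recognition(utterance))))
print(reply)  # -> Here are U-Quiz videos from YouTube.
```

The point of the chain is that the two middle stages, intent analysis and conversation processing, are the natural-language-processing portion; the outer two deal with audio.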

 

Natural Language Processing


 

AI still has much to overcome to answer hypothetical or qualitative questions, even though it can answer questions it has already learned, just as we can. For instance, although IBM's Watson demonstrated overwhelming performance over humans in a quiz show about 10 years ago, Watson for Oncology, a technology intended to provide treatment options based on patient records, was discontinued due to the low matching rate between the options it suggested and those chosen by human experts. This implies that it is still challenging for AI to provide expert-grade answers in specialized areas.

 


Watson and the Jeopardy! Challenge

 

When it comes to translation, with which we are all familiar, the East tends to prefer literal translation while the West prefers liberal translation due to cultural differences, but AI cannot allow for these qualitative preferences; there is a clear consensus that it is still premature to apply AI fully to the localization industry. Finally, the recent controversial case of the "Iruda" chatbot in Korea became a social issue due to privacy problems and its biased learning and response mechanisms.

 

Even though large-scale language models based on sequence-to-sequence modeling that far exceed the level of earlier models have emerged since the late 2010s, they have so far demonstrated only common-knowledge-level intelligence in certain areas, leaving much to be improved before they reach commercially meaningful, human-equivalent linguistic intelligence in professional areas.

 

Session 2. Large Scale Language Models

In the second session, we discussed the concept and implications of large-scale language models.

 

The author Simin Yu said that reading comprehension can only be gained through reading, and that writing should start with reading. Likewise, today's advanced natural language processing technologies are based on large-scale language models, which learn as if reading from a myriad of books. We can say a large-scale language model is a technology that aspires to general human intelligence. It is a process like joining a company after acquiring elementary, middle, and high school-level knowledge through reading and then field-specific knowledge through university courses. The latest language models are built on pre-trained models that learn from 8 million web pages, which is not doable for humans, and are then fine-tuned on a very small amount of additional data for specific areas to become functional in those areas.
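The pretrain-then-fine-tune recipe can be shown at a very small scale with a toy bigram model: broad "pre-training" data sets general expectations, and a little domain data shifts them. This is a conceptual sketch only; real language models use neural networks, not word counts.

```python
from collections import Counter, defaultdict

class BigramLM:
    """Toy bigram language model: counts word pairs and predicts
    the most likely next word. Used here only to illustrate the
    pretrain-then-fine-tune recipe described above."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, corpus):
        for sentence in corpus:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, word):
        nxt = self.counts.get(word.lower())
        return nxt.most_common(1)[0][0] if nxt else None

# "Pre-training" on general-domain text.
model = BigramLM()
model.train([
    "the weather is nice today",
    "the weather is cold outside",
])
print(model.predict("weather"))  # -> is

# "Fine-tuning": a small amount of domain-specific data
# shifts the model's behavior toward the new domain.
model.train([
    "weather sensor reports rain",
    "weather sensor reports wind",
    "weather sensor needs calibration",
])
print(model.predict("weather"))  # -> sensor
```

Note how just three domain sentences were enough to change the model's prediction, mirroring how a small fine-tuning set specializes a large pre-trained model.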

 

We introduced transformer-based language model technologies, from Bidirectional Encoder Representations from Transformers (BERT), which handles language comprehension and classification, to Generative Pre-trained Transformer 3 (GPT-3), which focuses on text generation tasks such as composition, translation, and summarization. In relation to this, we also looked into some programs that, based on simple inputs, create a paragraph or compose lines of code using language models.
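The key difference between the two families is the training objective: BERT learns by filling in masked words using context from both sides, while GPT learns by predicting the next word from the left context only, which is what makes free text generation natural for it. The sketch below only shows how the training examples are shaped; the token list and mask symbol are illustrative assumptions.

```python
# Hedged sketch contrasting the two training objectives mentioned
# above: BERT-style masked prediction vs. GPT-style next-token
# prediction. Tokens and the mask symbol are illustrative only.

tokens = ["the", "speaker", "plays", "the", "video"]

def masked_lm_example(tokens, mask_index):
    """BERT-style: hide one token and ask the model to recover it
    using context from BOTH sides (bidirectional)."""
    inp = tokens[:mask_index] + ["[MASK]"] + tokens[mask_index + 1:]
    target = tokens[mask_index]
    return inp, target

def causal_lm_examples(tokens):
    """GPT-style: predict each token from the tokens BEFORE it only
    (left to right), which enables open-ended text generation."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

inp, target = masked_lm_example(tokens, 2)
print(inp, "->", target)
# -> ['the', 'speaker', '[MASK]', 'the', 'video'] -> plays

for context, nxt in causal_lm_examples(tokens):
    print(context, "->", nxt)
```

Both objectives are self-supervised: the training targets come from the text itself, which is why these models can learn from millions of unlabeled web pages.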

 

LG Electronics is also developing its ThinQ.AI platform in step with these AI trends. ThinQ.AI is already available in consumer products from LG Electronics, including TVs and home appliances, and the platform's range of applications continues to expand. Please look forward to the continued development of ThinQ.AI for natural communication between devices and users.


 

How was the first in-house seminar on AI? The speed of AI's development is surprising, isn't it? I was really impressed by how AI is applied in such diverse areas and deeply involved in our daily lives, and I am very excited about how it will develop in the future.

 

The second seminar will be held under the title "Will we be able to draw and paint by speaking?" Please stay tuned for the AI seminars to be held through June. This concludes the review of the first AI seminar.