CES 2024 Key Summary: How World Models and Action AI Will Change Our Future and Investment Opportunities
[CES 2024: Looking Beyond Generative AI]
CES 2024 has once again sparked an AI craze worldwide, but this wave is different from last year's ChatGPT shock. AI is moving beyond 'Generative AI' that merely writes and draws, toward systems that understand the physical world and carry out tasks directly on behalf of users. In this post, we delve into 'World Models' and 'Action AI', the key themes of CES 2024, and look at how they will change our future.
The Signal of CES 2024: The Era of World Models and Action AI
1. World Model AI: Machines Realize the 'Logic' of the World
The Large Language Models (LLMs) we have used so far operate by learning vast amounts of text data to probabilistically predict the 'next word.' While they demonstrate impressive writing skills, they have a fatal flaw: they do not truly 'understand' the physical laws or causal relationships that govern how the world works. This often leads to 'hallucinations,' where they plausibly fabricate false information.
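To make 'probabilistically predicting the next word' concrete, here is a minimal toy sketch in Python. It is not how a production LLM is built; the tiny corpus and the simple bigram count stand in for billions of documents and a neural network, purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text data an LLM is trained on.
corpus = [
    "the cup falls to the floor",
    "the cup falls off the table",
    "the cup sits on the table",
]

# Count how often each word follows each previous word (a bigram model).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        next_word_counts[prev][nxt] += 1

def next_word_distribution(prev_word):
    """Return P(next word | previous word) as a dict of probabilities."""
    counts = next_word_counts[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# The model picks the statistically likely continuation; it has no idea
# *why* a cup falls -- it has only seen which words tend to co-occur.
print(next_word_distribution("falls"))  # {'to': 0.5, 'off': 0.5}
print(next_word_distribution("cup"))    # {'falls': 2/3, 'sits': 1/3}
```

The model only knows which words tend to follow which; the moment a question falls outside those statistics, it will still produce a fluent but potentially fabricated answer.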
World Models emerged to overcome these limitations. Championed by Meta's Chief AI Scientist Yann LeCun, the concept aims to teach AI not only language but also the physical laws and common sense of the real world.
Changes Brought by World Models
- Equipped with Common Sense: By understanding physical causality, such as "if you drop a cup, it falls to the floor," AI makes much more accurate and logical judgments.
- Safe Autonomous Driving: Because a world model can predict and rehearse risks in advance through simulation, we can expect a dramatic improvement in safety in fields like autonomous driving and robotics, as the toy sketch below illustrates.
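The sketch below is a hand-written toy, not Meta's or any vendor's actual world model: a simple gravity rule lets an agent roll the future forward and judge the outcome of dropping a cup before acting in the real world. The scenario, numbers, and function names are invented for illustration.

```python
GRAVITY = 9.8  # m/s^2, the one physics rule this toy world model knows

def predict_next_state(height, velocity, dt=0.05):
    """One step of the toy world model: how the cup's state evolves once released."""
    velocity += GRAVITY * dt
    height -= velocity * dt
    return max(height, 0.0), velocity

def rollout(initial_height, steps=100):
    """Imagine the future: simulate the drop without touching the real world."""
    height, velocity = initial_height, 0.0
    for step in range(steps):
        height, velocity = predict_next_state(height, velocity)
        if height == 0.0:
            return {"hits_floor": True, "impact_speed": velocity, "steps": step + 1}
    return {"hits_floor": False, "impact_speed": velocity, "steps": steps}

# The agent 'knows' the cup will hit the floor before it ever lets go,
# so it can judge the action as risky without a real-world trial.
outcome = rollout(initial_height=1.0)
print(outcome)
if outcome["hits_floor"]:
    print("World model prediction: the dropped cup hits the floor -> avoid the action.")
```

Real systems learn such dynamics from data and simulate far richer scenes, but the core idea is the same: predict consequences internally first, act second.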
2. Action AI: "Just Say the Word, I'll Do It All"
If World Models upgrade the AI's 'brain', Action AI gives the AI 'hands and feet'. Whereas existing AI chatbots have stayed in the role of 'advisors' that search for and summarize information, Action AI becomes a 'competent assistant' that takes a user's command, actually operates applications, and completes the task.
Rabbit's 'R1', one of the most talked-about devices at CES 2024, is a prime example: it is built around a Large Action Model (LAM).
Characteristics of Large Action Models (LAM)
- App Manipulation Learning: AI learns how humans use smartphone apps, enabling it to perform tasks like calling an Uber, making restaurant reservations, or playing music on its own.
- Interface Revolution: You no longer need to open complex apps one by one and click through them; just say a word to the AI, and the entire process is handled automatically, as the sketch below illustrates.
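To illustrate the "one command in, completed task out" idea, here is a purely hypothetical sketch of an action-agent loop. It is not Rabbit's actual LAM: the `plan_actions` rules and the app steps they produce are invented stand-ins for a learned model and real app integrations.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    """One concrete step in an app, e.g. opening it or tapping a button."""
    app: str
    step: str

def plan_actions(command: str) -> List[Action]:
    """Map a natural-language command to a sequence of app actions (toy rules)."""
    command = command.lower()
    if "ride" in command or "uber" in command:
        return [Action("Uber", "open app"),
                Action("Uber", "set destination from command"),
                Action("Uber", "confirm and request ride")]
    if "play" in command or "music" in command:
        return [Action("Spotify", "open app"),
                Action("Spotify", "search for requested track"),
                Action("Spotify", "press play")]
    return []

def execute(actions: List[Action]) -> None:
    """In a real system this would drive the apps; here it just logs each step."""
    for action in actions:
        print(f"[{action.app}] {action.step}")

# "Just say the word": one sentence in, a multi-step task carried out.
execute(plan_actions("Get me a ride to the airport"))
```

In a real LAM the hand-written rules are replaced by a model that has learned how humans navigate app interfaces, but the loop is the same: understand the command, plan the steps, execute them end to end.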
3. Beyond Generative AI to AGI (Artificial General Intelligence)
World Models and Action AI are ultimately heading towards one point: AGI (Artificial General Intelligence), which thinks and acts like a human.
When the ability to understand the physical world and make accurate judgments (World Models) is combined with the ability to autonomously perform tasks in both the digital and real worlds (Action AI), AI goes beyond being a simple tool and becomes a true partner. CES 2024 was a significant turning point, showing that AI is changing from an 'observer' of our lives into an 'actor'.
Key Summary:
CES 2024 showcased the evolution of AI. The emergence of 'World Models' that understand physical laws and 'Action AI' that performs practical tasks signals that we are entering the AGI era where AI goes beyond text generation to solve real-world problems.
The content of this blog is provided for reference only when making investment judgments; investment decisions must be made at the reader's own judgment and responsibility. Under no circumstances can the information in this blog serve as grounds for legal liability regarding investment results.