Throughout human history, technology has been the core driving force behind the advancement of civilization. From the fire of the Stone Age to the steam engine of the Industrial Revolution, and now to artificial intelligence and quantum computing, each technological leap has been astonishing. In recent years, rapid advances in information technology, including the internet, smartphones, and cloud computing, have made the dissemination and exchange of information far more convenient and efficient. At the same time, significant breakthroughs in fields such as biotechnology and new energy have brought new hope and opportunities to human society.

- Core Metrics: Computing Power, Data, and Algorithmic Efficiency
- Computing Power (hardware performance): This is the fastest-growing input to training. Over the past few years, the compute used to train the largest AI models has doubled approximately every six months, far faster than Moore's Law (which observed transistor counts doubling roughly every 18 to 24 months). This is made possible by more advanced chips (GPUs, TPUs) deployed in ever-larger clusters. A back-of-envelope comparison of these two growth rates is sketched after this list.
- Data: The size of training datasets is growing exponentially. Today's largest models are trained on trillions of words of text and billions of images drawn from the internet. This massive corpus serves as the "teaching material" from which models learn.
- Algorithmic Efficiency: This is perhaps the most important driver. We are becoming increasingly adept at achieving better results with the same amount of compute and data. New architectures (such as the Transformer, introduced in 2017 and sketched after this list) and improved training techniques enable models to perform tasks they were never explicitly trained for (so-called "emergent" abilities) and to do more with fewer resources.
- Observable Performance Milestones
- Shifting Goals and Emerging Capabilities
- Industrial and Economic Impact
- From Research to Product: Large language models (LLMs) moved from academic papers to global, consumer-facing products (e.g., ChatGPT) in less than five years.
- Democratization: Powerful open models are released continuously, allowing millions of developers to build immediately on the latest advances. The barrier to accessing open-source models is falling rapidly.
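
To make the compute trend above concrete, here is a minimal back-of-envelope sketch. The two doubling periods are the figures quoted above; the five-year horizon is purely illustrative:

```python
# Back-of-envelope comparison of exponential growth rates.
# Doubling periods are the figures quoted above; the horizon is illustrative.

AI_COMPUTE_DOUBLING_MONTHS = 6    # training compute of the largest models
MOORES_LAW_DOUBLING_MONTHS = 24   # transistor counts (18-24 months is commonly quoted)

def growth_factor(months: float, doubling_months: float) -> float:
    """How many times larger a quantity becomes after `months`,
    given its doubling period."""
    return 2 ** (months / doubling_months)

horizon = 60  # five years, in months
print(f"AI training compute: {growth_factor(horizon, AI_COMPUTE_DOUBLING_MONTHS):,.0f}x")
print(f"Transistor count:    {growth_factor(horizon, MOORES_LAW_DOUBLING_MONTHS):,.0f}x")
# AI training compute: 1,024x
# Transistor count:    6x
```

Over the same five years, a quantity doubling every six months grows roughly 180 times more than one doubling every two years, which is why training compute has outrun per-chip hardware improvement and forced ever-larger clusters.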
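The Transformer mentioned above is built around scaled dot-product attention. The following NumPy sketch shows just that core mechanism with illustrative toy shapes; it is a minimal illustration, not any production implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the core of the Transformer (2017).
    Q, K: (seq_len, d_k); V: (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted average of values

# Toy example: 4 tokens, 8-dimensional keys and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```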

- Remaining Bottlenecks and Open Challenges
- Reasoning and True Understanding: Models remain prone to "hallucination" (fabricating facts), struggle with long chains of logical reasoning, and lack a grounded understanding of the world. Progress in this area is slower and less predictable.
- Energy Efficiency: Training large models consumes enormous amounts of energy. While the efficiency of model inference (running trained models) is improving, the environmental cost remains a significant challenge; a rough estimate is sketched after this list.
- AI Safety and Alignment: Ensuring that AI systems do what we intend, behave reliably, and are free of bias is a major unresolved problem. At present, the pace of capability advances clearly outstrips the pace of safety research.
- Hardware Bottlenecks: There are physical and economic limits to the expansion of computing power, and the exponential growth in training costs cannot continue indefinitely (see the sketch after this list).
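
To put rough numbers on the energy and cost points above, here is a hypothetical back-of-envelope calculation. Every figure in it (accelerator count, power draw, training duration, datacenter overhead, base cost, and cost doubling rate) is an assumed placeholder for illustration, not a value from this article:

```python
# Hypothetical back-of-envelope for training energy and cost growth.
# Every number below is an illustrative assumption, not a measured figure.

num_gpus = 10_000       # assumed accelerators used for one training run
gpu_power_kw = 0.7      # assumed average draw per accelerator, in kW
training_days = 90      # assumed wall-clock training time
pue = 1.2               # assumed datacenter overhead (power usage effectiveness)

energy_mwh = num_gpus * gpu_power_kw * training_days * 24 * pue / 1000
print(f"Energy for one training run: ~{energy_mwh:,.0f} MWh")  # ~18,144 MWh

# If frontier training costs doubled every year from an assumed $100M baseline,
# the exponential would hit economic limits within a few years:
cost = 100e6
for year in range(1, 6):
    cost *= 2
    print(f"Year {year}: ${cost / 1e9:.1f}B")  # $0.2B ... $3.2B
```

Even under these generous assumptions, a single run consumes energy on the scale of a small town's annual usage, and a yearly cost doubling turns $100M into several billion dollars within five years, which is why neither trend can continue indefinitely.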