From fire in the Stone Age to the steam engine of the Industrial Revolution to today's artificial intelligence and quantum computing, technological advancements have been astonishing. However, as the pace of technological advancement continues to accelerate, a question gradually emerges: Has technological development already outstripped human control? Is it advancing in a way we cannot predict or control?
The following analyzes the pace of AI development from different perspectives:

- Core Metrics: Computing Power, Data, and Algorithm Efficiency
Progress is driven primarily by three factors, often referred to as the "three elements of AI":
- Computing Power (hardware performance): This is the fastest-growing input. Over the past few years, the amount of computation used to train the largest AI models has doubled approximately every six months, far faster than Moore's Law (which predicts a doubling of transistor counts approximately every 18 months). This is made possible by more advanced chips (GPUs, TPUs) deployed at massive scale; the first sketch after this list shows what these doubling rates compound to.
- Data: The size of training datasets is growing exponentially. Today's models use trillions of words and billions of images from the internet. This massive amount of data serves as the "teaching material" for model learning.
- Algorithmic Efficiency: This is perhaps the most important driver. We are becoming increasingly adept at achieving better results with the same amount of compute and data. New architectures (such as the Transformer, introduced in 2017) and improved training techniques enable models to generalize to tasks they were not specifically trained for (so-called emergent abilities) and to do more with fewer resources; a minimal sketch of the Transformer's core attention operation appears after this list.
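To make these growth rates concrete, here is a minimal back-of-envelope sketch in plain Python. Its only inputs are the two doubling periods stated above; the specific year horizons are arbitrary illustration:

```python
# Back-of-envelope comparison of compounding doubling rates.
# Assumptions (from the text above): training compute doubles every
# 6 months; Moore's Law doubles transistor counts every 18 months.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total multiplicative growth after `years` at the given doubling period."""
    return 2 ** (years / doubling_period_years)

for years in (1, 5, 10):
    compute = growth_factor(years, 0.5)   # doubling every 6 months
    moore = growth_factor(years, 1.5)     # doubling every 18 months
    print(f"{years:>2} years: compute x{compute:,.0f} vs Moore's Law x{moore:,.0f}")
```

Over ten years these assumptions give roughly a million-fold growth in training compute versus about a hundred-fold under Moore's Law, which is why the six-month doubling time matters so much.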
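Since the Transformer is named as a key architectural advance, here is a textbook-style sketch of its core operation, scaled dot-product attention, written in NumPy. This is a minimal illustration of the published mechanism, not any particular model's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # attention-weighted sum of values

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8)); K = rng.normal(size=(4, 8)); V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```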
- Observable Performance Milestones
The pace of progress in AI becomes apparent when we look at how quickly it has surpassed human performance on specific benchmarks: image classification (ImageNet, mid-2010s), the game of Go (AlphaGo, 2016), and, more recently, many professional and academic exams.
Key Takeaway: Tasks that a decade ago were considered the preserve of true general intelligence are now considered trivial or solved.
- Shifting Goals and Emerging Capabilities
The pace of progress is reflected not only in doing old tasks better but also in accomplishing entirely new things. This is the crux of the concept of "emergence."
2018: GPT-1 was only capable of basic text completion.
2020: GPT-3 could write coherent articles and simple code.
2023-2024: GPT-4 and similar models can reason about complex problems and pass expert exams, while companion systems generate images from text (Midjourney, DALL-E 3) and video from text (Sora).
These emergent capabilities, skills that arise from scaling rather than explicit programming, appear suddenly and unpredictably, making the pace of progress feel even more explosive.
- Industrial and Economic Impact
The speed of adoption is itself a measure of how fast the field is moving.
- From Research to Product: Technologies like large language models (LLMs) went from academic papers to global, consumer-facing products (e.g., ChatGPT) in under five years.
- Democratization: Powerful open models are released continuously, allowing millions of developers to build immediately on the latest advances; the barrier to accessing open-source models is falling rapidly (see the sketch below).
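As a concrete illustration of how low this barrier has become, here is a minimal sketch using the open-source Hugging Face transformers library to run a freely available model locally. The model name "gpt2" is just one small example checkpoint among many openly released ones:

```python
# pip install transformers torch
from transformers import pipeline

# Download and run an openly released language model in a few lines.
generator = pipeline("text-generation", model="gpt2")
result = generator("AI progress over the last decade has", max_new_tokens=30)
print(result[0]["generated_text"])
```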

Important Notes and Limitations (The "Slow" Part)
It's important to understand that progress is uneven. Some areas are progressing more slowly:
- Reasoning and True Understanding: Models remain prone to "hallucination" (making up facts), struggle with complex chains of logical reasoning, and lack a true understanding of the world. Progress in this area is slower and more unpredictable.
- Energy Efficiency: Training large models consumes enormous amounts of energy. While the efficiency of model inference (using models) is improving, the environmental cost remains a significant challenge; a rough back-of-envelope estimate follows this list.
- AI Safety and Alignment: Ensuring that AI systems do what we want, behave reliably, and are free of harmful bias is a significant unresolved problem. Currently, the pace of capability advancement clearly outpaces the pace of safety research.
- Hardware Bottlenecks: There are physical and economic limits to the expansion of computing power. The exponential growth in training costs cannot continue indefinitely, as the second sketch below illustrates.
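To give a sense of the energy scale involved, here is a rough back-of-envelope sketch. The GPU count, power draw, and training duration are illustrative assumptions, not figures from any specific model:

```python
# Hypothetical back-of-envelope estimate of training energy.
# All three inputs below are assumed for illustration only.
num_gpus = 10_000          # assumed accelerator count
watts_per_gpu = 700        # assumed average power draw per GPU (W)
training_days = 90         # assumed wall-clock training time

hours = training_days * 24
energy_kwh = num_gpus * watts_per_gpu * hours / 1000
print(f"~{energy_kwh:,.0f} kWh")   # ~15,120,000 kWh under these assumptions
```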
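And to see why exponential cost growth must eventually stop, here is a simple projection. The starting cost, doubling time, and economic ceiling are all illustrative assumptions, not real figures:

```python
# Illustrative projection: exponentially growing training costs
# eventually hit an economic ceiling. All inputs are assumptions.
cost = 100e6           # assumed cost of a frontier training run today ($)
doubling_years = 1.0   # assumed doubling time for training cost
ceiling = 1e12         # assumed economic ceiling for a single run ($1T)

years = 0.0
while cost < ceiling:
    cost *= 2
    years += doubling_years
print(f"Cost exceeds ${ceiling:,.0f} after ~{years:.0f} years")
```

Under these assumptions the ceiling is reached in about 14 years, which is why most analysts expect cost growth to flatten well before then.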
Conclusion: How Fast?
Artificial intelligence is advancing at an exponential rate, particularly in raw capability and in mastering specific tasks. The pace is best described as "multiple breakthroughs per year," where each breakthrough would, in the past, have been a decade-defining event.
However, this rapid progress is primarily in expanding capabilities, not necessarily in safety, reliability, or fundamental understanding. Consequently, the field faces a race: on the one hand, to exploit these incredible new tools, and on the other, to responsibly manage their risks.