
intelligence is cheap, finding it is not

feb 2026

The prevailing assumption about modern AI is that intelligence requires scale: giant GPU clusters, megawatts of power, planetary infrastructure. That assumption is wrong, or rather, it mistakes a symptom for a law.

Those massive data centers are not a fundamental requirement of intelligence. They are the cost of our current, immature algorithms. The human brain runs on roughly 20 watts. It uses noisy electrochemical signals, slow biological parts, and no exotic physics. It obeys the same laws that govern everything else in the universe.

And yet, it produces intelligence.

That fact alone is decisive. The algorithms for general intelligence are physically possible. They are not magic. If evolution could stumble onto them by accident, then they are, in principle, discoverable by us.

the catch: evolution already paid the bill

The reason biological intelligence seems unreplicable is not that the brain is mysterious. It is that we tend to forget the enormous computational debt that produced it.

Evolution ran the largest optimization process in the history of this planet: roughly 500 million years of complex nervous systems, what I imagine were trillions of organisms alive in parallel, optimizing blindly for survival rather than intelligence, discarding nearly all of every attempt. The algorithm encoded in the human brain is not elegant because evolution was smart. It exists because evolution could afford to fail, forever, at planetary scale.

This makes the cost comparison with modern AI striking. Publicly reported estimates put training a frontier model at somewhere around 10²⁵ to 10²⁶ floating-point operations. That sounds enormous. But by my rough reasoning, biology's bill, spread across geological time, was almost certainly larger by many orders of magnitude, just paid slowly and without anyone noticing.
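To make "many orders of magnitude" concrete, here is the kind of back-of-envelope arithmetic I have in mind. The 500-million-year span and the trillions of parallel organisms come from the paragraph above; the per-organism synapse count and ops-per-synapse rate are loose assumptions I am making purely for illustration, not data.

```python
# Back-of-envelope: evolution's total "compute" vs one frontier training run.
# Biology-side numbers below are rough assumptions, chosen for illustration.

SECONDS_PER_YEAR = 3.15e7
years = 5e8                  # ~500 million years of complex nervous systems
organisms = 1e12             # "trillions of organisms alive in parallel"
synapses = 1e6               # assumed average synapses per organism (most were tiny)
ops_per_synapse_per_s = 10   # assumed effective ops per synapse per second

evolution_ops = years * SECONDS_PER_YEAR * organisms * synapses * ops_per_synapse_per_s
frontier_ops = 1e26          # upper end of the reported 1e25-1e26 FLOP range

print(f"evolution:    ~{evolution_ops:.0e} ops")
print(f"frontier run: ~{frontier_ops:.0e} ops")
print(f"ratio:        ~{evolution_ops / frontier_ops:.0e}x")
```

Even with these deliberately conservative biology-side guesses, the ratio comes out around a billion, and the assumptions could easily be off by several orders of magnitude in either direction without changing the qualitative point.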

And that is just the training cost. The infrastructure required to serve that intelligence to millions of users, continuously and around the clock, is a separate and ongoing bill. The brain handles both for 20 watts.
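For a sense of scale on the serving side: the 20-watt figure is from the text, while the accelerator wattage below is my own assumption (a single modern datacenter GPU draws on the order of several hundred watts; 500 W is a round stand-in, not a measured number).

```python
# Rough annual energy comparison, purely illustrative.
HOURS_PER_YEAR = 8760
brain_watts = 20    # from the text
gpu_watts = 500     # assumed wattage for one accelerator, for scale only

brain_kwh = brain_watts * HOURS_PER_YEAR / 1000  # ~175 kWh/year
gpu_kwh = gpu_watts * HOURS_PER_YEAR / 1000      # ~4380 kWh/year

print(f"brain, running all year:           ~{brain_kwh:.0f} kWh")
print(f"one accelerator, running all year: ~{gpu_kwh:.0f} kWh")
```

And that is one accelerator against one brain, before counting the thousands of accelerators a serving fleet actually involves.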

The human brain did not emerge because evolution understood intelligence. It emerged because evolution had the luxury of failing forever. Humans do not have that luxury.

the real question

This reframes the AGI question entirely. It is not really about whether humans can build general intelligence. The harder question is whether we can compress billions of years of blind, massively parallel search into a few decades of directed, resource-constrained effort.

A reasonable objection here: modern AI is not brute force. Gradient descent is genuinely clever. It does not randomly try things; it follows the gradient, which is a powerful signal. Attention mechanisms, RLHF, residual connections: these are sophisticated ideas, not random search.

That objection is correct, but it operates at the wrong level. Within a single training run, we are not doing brute force. But we do not yet have a gradient over the space of learning algorithms themselves. The search over architectures, objectives, and inductive biases still happens the slow way: researchers making guesses, running experiments, publishing papers. Techniques like meta-learning and neural architecture search are beginning to address this, but the search remains expensive and far from solved. Local optimization is elegant. The meta-level search is still largely empirical.
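The two levels can be made concrete with a toy sketch. The inner loop below follows a gradient; the outer loop, standing in for the meta-level search over learning algorithms, has no gradient available to us and just guesses and compares. Everything here is a made-up minimal example, not a real training setup.

```python
import random

def train(lr, steps=100):
    """Inner loop: minimize f(x) = x^2 by following its gradient."""
    x = 10.0
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x^2 is 2x
    return x * x         # final loss

random.seed(0)

# Outer loop: we have no gradient over learning rates, so the search
# over this "hyperparameter space" is blind trial and error.
best_lr, best_loss = None, float("inf")
for _ in range(20):
    lr = random.uniform(0.001, 1.0)
    loss = train(lr)
    if loss < best_loss:
        best_lr, best_loss = lr, loss

print(f"best lr found by blind search: {best_lr:.3f}, loss {best_loss:.2e}")
```

Swap "learning rate" for "architecture, objective, and inductive bias" and the outer loop is, roughly, what the research community does with experiments and papers, just with humans in place of `random.uniform`.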

This is also why today's AI feels so power-hungry. We are compensating for what evolution did slowly by doing it loudly. We are throwing compute at a problem that evolution solved with time. Neither approach is efficient. One bill has already been paid.

the quiet implication

If we succeed in that compression, if we find the right inductive biases, the right learning algorithms, the right architectures, then intelligence will become cheap again. Possibly cheaper than biology ever achieved.

The path does not run through bigger data centers indefinitely. It runs through better algorithms: systems that learn how to learn, that generalize from less, that encode structure instead of brute-forcing it from data. The brain is an existence proof that this is possible. Evolution is an existence proof that finding it is hard.

The tension between the cheapness of running intelligence and the extraordinary cost of discovering it is where the future of AI really lives. We are not waiting on hardware. We are waiting on ideas.


author's note: i'm just a student who is curious about intelligence and ai, not an expert, not a researcher, and definitely not someone with definitive answers. this is an attempt to think clearly about these questions, not to claim authority over them.