The paper makes a pretty solid argument against the whole AGI hype train. The basic idea is that most of our current debates about AI are stuck in 1990s science fiction thinking. Back then, people like Vinge wrote about the Singularity as the moment when AI would suddenly become superintelligent and either destroy us or turn us into gods. Somehow that mythology is still alive today, shaping how people think about this tech.

Their core argument is that AI is better understood as a social technology: a system for processing information at scale, not that different from older social technologies like bureaucracy, markets, and democracy. All of these systems work by creating what the authors call coarse grainings, which are simplified abstractions of complex reality. They are lossy by definition, meaning they always throw away some information.
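The lossiness point can be made concrete with a toy sketch (my own illustration, not from the paper; the towns and numbers are made up): once you reduce complex data to a summary statistic, very different realities can collapse onto the same abstraction, and the lost detail cannot be recovered.

```python
# Toy illustration of a lossy coarse graining: a single summary
# statistic (the mean) stands in for the full data.

def coarse_grain(incomes):
    """Reduce a list of incomes to their average -- a lossy abstraction."""
    return sum(incomes) / len(incomes)

town_a = [30_000, 30_000, 30_000, 30_000]  # uniform incomes
town_b = [5_000, 5_000, 5_000, 105_000]    # highly unequal incomes

# Both towns look identical through this coarse graining,
# even though the underlying situations differ drastically.
print(coarse_grain(town_a))  # 30000.0
print(coarse_grain(town_b))  # 30000.0
```

This is the sense in which bureaucratic categories, market prices, and model outputs are alike: each compresses a messy world into something tractable, at the cost of detail.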

The paper connects this to the idea of a long industrial revolution that started two centuries ago. It's a process that produced new technologies like steam power and electricity, and also necessitated new institutions to manage them. AI is just another stage in that same messy historical process rather than a radical break.

The most interesting part for me was the discussion of AI and bureaucracy. Some people peddle the idea that AI will somehow replace messy human bureaucracy with efficient algorithms, and that idea has even influenced real policy, like the Trump administration's cuts to the administrative state. But the reality is that bureaucracy involves trade-offs between goals that cannot be easily compared. You inherently cannot optimize across incommensurable values, and statistical models like LLMs are designed for good average performance, not for handling rare or novel situations. That makes them fundamentally unsuited to replace the human judgment calls that bureaucrats make every day.
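The average-performance point can be sketched with a toy example (my own illustration, with made-up numbers, not from the paper): a predictor fit to minimize mean error does fine on routine cases and badly on the rare case that may matter most.

```python
# Toy sketch: a predictor optimized for average performance.
# 99 routine cases have outcome 1.0; one rare case has outcome 100.0.
cases = [1.0] * 99 + [100.0]

# The constant prediction that minimizes mean squared error is the mean.
best_constant = sum(cases) / len(cases)   # 1.99

routine_error = abs(best_constant - 1.0)    # 0.99  -- fine on average
rare_error = abs(best_constant - 100.0)     # 98.01 -- terrible on the rare case

print(best_constant, routine_error, rare_error)
```

The mean-error objective barely registers the rare case, which is exactly the kind of novel situation a human bureaucrat is expected to handle with judgment.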

We should study what is actually happening right now: how AI coarse grainings interact with the abstractions used by existing institutions. When do they compensate for each other, and when do they make things worse? And we should look at who benefits and who gets hurt. These are empirical questions worth asking. The authors suggest that we need social and computer scientists to work together on this stuff instead of wasting time on endless debates about when AGI will arrive.

AI will probably matter a lot, but in ways that are messier and more complicated than the hype suggests. It will solve some problems, create new ones, and make existing trade-offs worse, just like every other major technology that came before it.