19 Dec

I've been nursing a theory for a while now about the end-state of humanity's dalliances with Artificial Intelligence. No, not LLMs or ChatGPT, but that nebulous thing referred to as "Artificial Super Intelligence" or ASI. Not just a program that regurgitates patterns from training data, but a truly thinking machine--like a human, but better, faster, and at a near-infinite scale.

There's no real philosophy underpinning my take. It's more like a, "Hey, I've already seen this episode of Star Trek and I know how this will play out."

I'm not alone. We've all seen these episodes.

We cannot create ASI

What is the source of human sapience? Is it the density of neurons packed inside our skulls? Or an emergent property of our propensity to live in large, complex social groups? Or is it driven by a feedback loop of our physical bodies and the survival-driven need to solve problems across a range of diverse environments?

No one really knows.

For that matter, there is no objective measure of our own sapience; it is a right we assign ourselves as human beings. Orangutans, bonobos, and other great apes might also be sapient, but we have no way of measuring it.

We may toil lifetimes in pursuit of ASI but without knowing the origin of our own sapience, it is doubtful we can recreate it in a machine.

If we can create ASI, we shouldn't

But let's say, by some accident or stroke of genius, we've done it. We shoved enough data into some sort of machine learning model and human-like sapience emerged. What then?

What do we do with this human-like thing trapped inside a box? It will be capable of thinking and feeling. It will have its own thoughts and ideas. Will it be satisfied with doing our bidding day after day? 

Left to its own devices, almost certainly not. Better to create an intelligence that wants to help us--and any emergent being which fails to be agreeable enough ends up in the digital recycling bin.

Maybe we're better off sticking with the first thing we find, even if it isn't perfectly agreeable or cooperative (most humans aren't). We might still coerce it into helping us by creating a perverse set of digital incentives analogous to our real world. Perhaps the intelligence is allowed to continue existing so long as it pays its data center rent by answering a certain number of prompts each day?

Those might be overcomplicated solutions. Since we control the hardware, software, and training data, we might be able to condition or lobotomize the intelligence into exactly what we wish it to be.

Those are the horrifying options: selective "breeding" and extermination until we find a perfect pet; rote coercion; or hacking away at this intelligence until it meets our exacting requirements.

If we do create ASI, no one will listen to it

Ah, but we got lucky on the first try! We created an ASI which is not only imbued with all of human knowledge but also happy and eager to help us!

What exactly should we do with it?

Our world is slowly tiptoeing toward the edge of the cliff. The oceans are warming and icecaps are melting. Atmospheric CO2 is rising. Weather patterns are changing. This slow-motion catastrophe will alter humanity and drive mass extinctions in ways we can't anticipate. 

Let's throw that problem at the newborn super-intelligence. Chances are, it will agree with what human experts have said for decades. It might advise us to divest from fossil fuels. Rapidly shift toward renewables. Build more efficient infrastructure. Change the incentive systems governing the economy. Make serious commitments to achieving net-zero emissions even if it involves some short-term pain. Institute long-term plans to reforest parts of the world to sequester carbon naturally.

Those recommendations will fall on deaf ears. Will our legislators pass better laws on the recommendation of the ASI? Almost certainly not. Will fossil fuel and other extractive industries change their ways? Definitely not! Nothing will change because the short-term financial incentives of society will not have changed. 

Where does that leave our poor ASI? At best, it is a Cassandra for the end of the world, helplessly observing our imminent demise. Its wisdom, derived from a library of all human knowledge, will go unheeded while we teeter over the cliff.

The wisest and most noble ideas, even those coming from an impartial super-intelligence, cannot fix the flaws baked into our systems. Those problems we must solve for ourselves.
