I have a point to make here, so bear with me.
A couple of years ago, I went deep down the rabbit hole of reading about artificial intelligence, including explanations of the difference between:
- Artificial Narrow Intelligence (ANI): specialized machines doing a single task better than a human being
- Artificial General Intelligence (AGI): a machine that is as smart as a human across the board
- Artificial Super Intelligence (ASI): defined as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
The fear of AI that you hear from mega nerds like Elon Musk, Stephen Hawking and others is the advent of ASI. The overly simplistic tl;dr version of a very interesting and in-depth discussion is this:
Many parties are working on getting to ASI. There is likely a lot at stake, from cash to notoriety, for the first person(s) to create ASI. As such, those parties may not exercise as much caution and restraint as we would like…
Imagine a situation where we program a computer to assist in creating a civilization that is sustainable for the planet, and that computer program is the one that breaches the threshold of ASI. By definition, that ASI will be smarter than any human being – like a chess player that is consistently and programmatically several steps ahead of the human race.
If that program is not created in such a way that it respects human life, it may deduce that its best chance of creating a sustainable civilization for the planet is to wipe out the planet’s biggest threat: human beings.
Bye bye humans!
The second concern, really a mathematical reality, is that ASI is fueled by machine learning. The first system to hit ASI will have a compounding advantage over every other system, learning and becoming smarter in perpetuity with no other program able to catch up…and since it’s already more intelligent than any human, we are powerless to change that.
Back to the point I want to make…
There’s not much you or I can do about any of this. We are mostly spectators, with little choice but to hope for the best and keep doing our thing. (Sure, that’s passive in the same way “my vote doesn’t matter” is passive during an election.)
But in your life, and in mine, we can simulate some of the concerns of ASI. Computers learn and become smarter all the time…but humans do not. We sleep, we eat, we spend time socializing and actively poisoning our brains and bodies.
If you want to get ahead, consider that football game you are about to watch. Consider that Saturday night spent at the bar. Consider the snooze button you are about to hit. Those are waking hours you could spend growing, improving, and pushing the ball forward while other people (your competition) are not.
Think of it like investing. The more time you put into becoming great at something, or practicing your art, the more it builds on itself like compound interest. If you are serious about something, you have the option to use time to your advantage.
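To make the compounding analogy concrete, here’s a toy calculation. The numbers (1% improvement per day, one year of consistent practice) are made up for illustration, not a claim about how skill actually grows:

```python
def compounded_skill(daily_rate: float, days: int, start: float = 1.0) -> float:
    """Return a skill level after `days` of compounding at `daily_rate`.

    Illustrative only: assumes improvement compounds like interest,
    with the rate and horizon chosen purely for the example.
    """
    return start * (1 + daily_rate) ** days

# Getting 1% better every day for a year:
print(round(compounded_skill(0.01, 365), 1))  # → 37.8, i.e. ~38x the starting level
```

The point of the toy model is the shape of the curve, not the specific multiple: small, consistent daily gains dominate occasional bursts of effort, which is exactly the edge the machine-learning analogy describes.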
If you want to go deep into some general AI understanding, I recommend these articles:
AI Revolution Part 1 (Wait But Why)
AI Revolution Part 2 (Wait But Why)
The Fermi Paradox (Wait But Why) <– Aliens