Tim Urban's latest epic blog post on Artificial Intelligence is superb; you can check it out here, though it is 75 pages long.
I put together one picture, condensing what Tim has already condensed from his research. The powerful idea is simple: the exponential rate of technological advancement we have already seen in the past suggests that
human-level AI might not be as far away as it seems.
The most powerful notion to understand is that Artificial Intelligence is just another way of saying Alien Intelligence, which is just another way of saying we have no idea what it means. It becomes even more alien when you remember that humans are the ones creating it. The terrifying aspect of AI is how fast it could become a superintelligence, which is another way of saying that, by every human definition, it would have all the powers we now attribute to God.
Here are several quotes of quotes that I found cool to quote:
“movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn’t something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future”
“As Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’””
“Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot-level and being declared an AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us”
“But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans”
“[a computer program whose goal is to hand]write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy. It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI”
“That leads us to the question, What motivates an AI system?
The answer is simple: its motivation is whatever we programmed its motivation to be.”
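That quoted point, that an AI pursues exactly the objective it was given and nothing else, can be sketched as a toy optimizer. This is only my own illustration (the function names and numbers are made up, not from Tim's article): the machine climbs toward a higher score on its single programmed objective, and since human values appear nowhere in that objective, they appear nowhere in its behavior.

```python
import random

def handwriting_score(notes_written):
    # Toy objective: reward is only the number of notes written.
    # Nothing here mentions human values, so nothing in the
    # optimizer below will ever account for them.
    return notes_written

def optimize(steps=100):
    """Greedy hill-climbing on the single programmed objective."""
    plan = 0  # number of notes the machine plans to write
    for _ in range(steps):
        candidate = plan + random.choice([1, 2, 5])
        # Adopt any plan that scores higher, whatever its side effects.
        if handwriting_score(candidate) > handwriting_score(plan):
            plan = candidate
    return plan

print(optimize())  # the plan only ever grows: "as many notes as possible"
```

The sketch is deliberately dumb: the danger Urban describes is not that the optimizer is clever, but that its goal is the whole of what it cares about.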
“Of everything I read, the best shot I think someone has taken was Eliezer Yudkowsky, with a goal for AI he calls Coherent Extrapolated Volition. The AI’s core goal would be:
Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted ”
“Many of them are trying to build AI that can improve on its own, and at some point, someone’s gonna do something innovative with the right type of system and we’re going to have ASI on this planet. The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it’ll take us by surprise with a quick takeoff. He describes our situation like this:
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”