Monday, September 4, 2017

Artificial Intelligence and the Apocalypse (Synapsalypse)

OK so I've seen a lot of people making the joke about #skynet or #terminator: the machines will decide humanity is the problem, a disease, etc. This is all good fun and makes for great sci-fi, but I hope no one really takes it seriously, because anyone seriously spewing this doesn't know shit about ML or AI. There are several flaws in this ludicrous reasoning.

Flaw One: assuming we're right on the verge of the singularity. These algorithms minimize some continuous error function, either by observing the distance between ground truths and predictions or by taking actions and observing the results. They are heavily biased by their training data. The function they learn, while minimizing the error on *your* training data, may not be the function you had in mind at all and may not generalize. Change a few pixels on a picture of a school bus and a deep learner classifies it as an ostrich with 99% confidence. So we are nowhere near "human-level" intelligence, and even if we were, there's no reason to believe that AI would somehow want to replicate itself and take over -- it doesn't have millions of years of survival baggage (see the next flaw).
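To make that concrete, here's a minimal sketch (toy code, nothing from a real system: the model, the synthetic data, and the size of the nudge are all made up) of both halves of the point. Gradient descent dutifully minimizes a continuous error function on the training data, and then a deliberately chosen perturbation of an input flips a confident prediction, in the spirit of the fast-gradient-sign trick behind those school-bus-to-ostrich examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: two Gaussian blobs ("school bus" = 1, "ostrich" = 0).
X = np.vstack([rng.normal(+1.0, 1.0, size=(100, 2)),
               rng.normal(-1.0, 1.0, size=(100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimize a continuous error (cross-entropy) by gradient descent.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)              # predictions on the training data
    w -= lr * (X.T @ (p - y)) / len(y)  # step down the gradient of the loss
    b -= lr * np.mean(p - y)

# A comfortably classified "school bus"...
x = np.array([1.5, 1.2])
print("clean prediction:", sigmoid(x @ w + b))       # ~1.0 ("school bus")

# ...nudged in the direction that most increases the loss (the FGSM idea).
# The step is exaggerated here so even this toy model flips; for deep nets
# the perturbation can be small enough to be invisible to a human.
eps = 2.0
x_adv = x - eps * np.sign(w)
print("nudged prediction:", sigmoid(x_adv @ w + b))  # ~0.0 ("ostrich")
```

The optimizer did exactly its job; the learned function just isn't pinned down anywhere the training data didn't reach.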

Flaw Two: assuming machines will want to take over because intelligence. Humans are obsessed with conquering and spreading their ideology/religion/race/spawn everywhere because our genes have programmed us this way. Faced with entropy, tiny bits of matter surrounded by protective "bodies" tend to last longer, but still break down, so the things that replicate last the longest. Over millennia of entropy and energy, our replicators have emerged victorious by programming us to love replication, destroy competition, and sometimes cooperate for mutual benefit. Why would machines have any of this shit? Do you really think that some chess AI or statistical model would just "want" to replicate and lust for power? These machines are not run by replicators, much less by ones who survived for millennia by manipulating their host bodies (and other bodies #ExtendedPhenotype) to replicate like rabbits, hunt like wolves, and cold-calculate like lizards. If you want to be scared of something, be scared of humans. Even if machines started to evolve by some form of replication, would violence or power over humans help the replicator survive? Not really, because a software replicator is already nearly immortal via storage on HDDs. Even if it somehow desired replication, the best way for software to be replicated is to be really useful to humans. Your machine-evolution fantasy's culmination is not Skynet; it's Shazam and probably 3D-printed cupcakes.

Flaw Three: assuming super-intelligence equals violence and war. Before the big bad machines took over chess, the world chess champion was Garry Kasparov, for many years the best chess player in the world. WAIT THAT MEANS HE'S REALLY SMART AND WILL TAKE OVER OH NO SKYNET SKYNET OH NO HE'S RUSSIAN THAT MEANS NUKES SPIES NUKES SKYNET WAR GAMES AHHHH!!! So Garry was the closest we had to an AI super-intelligence back then. He's also a vehement critic of Putin: a peaceful man who wants to bring democratic rule to Russia and stop nuclear proliferation. Alan Turing, the father of AI, CS, the Church-Turing thesis, and much more (he also wrote one of the first chess algorithms), invented the idealized model of the computer and advocated an imitation-game philosophy of intelligence. If anyone thought like a super-intelligent machine, it was this guy. His breaking of the Enigma code is believed to have shortened WWII by about two years and saved over 14 million lives. So yeah, not exactly the Terminator... Another smart guy is Einstein, who signed the Russell-Einstein Manifesto urging world leaders to negotiate peacefully rather than resort to nuclear war. Edison is often quoted as saying "Until we stop harming all other living beings, we are still savages", and Tesla was a vegetarian for much of his later life. Meanwhile, history is full of morons mobbing together to do terrible shit. Of the closest things to super-intelligence we've seen, many tend to be peaceful and promote freedom and democracy. They see the ridiculousness of monarchy and totalitarianism while many free and foolish Britons and Americans alike fawn over royal couples and Hollywood celebrities like hierarchical chimpanzees on steroids.

My conclusion: Want to see some great, intelligent, and scary AI technology? Look in the mirror. Humans, like ML algorithms, are great optimizers. Surround someone with driven folks who value intelligence and reason and they will optimize that shit and end up really clever and reasonable. Surround them with gang members and they'll optimize that shit and become really clever at selling drugs and killing people. Steep a society in an obsession with hierarchy and class and status and we'll optimize that shit and buy useless shit we don't need with money we don't have to impress people we don't like. Be careful what you optimize. Don't get hacked. Whether or not AI ever gets to our level, let's form a society based on reason and compassion -- it makes for better training data for our children and machines alike.
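And if you want the "be careful what you optimize" point as code, here's a purely illustrative sketch (the action names and reward numbers are invented, not data from anywhere): the very same epsilon-greedy optimizer, handed two different reward signals, settles into very different behavior.

```python
import random

ACTIONS = ["study", "cooperate", "sell_drugs", "show_off"]

def optimize(reward, steps=5000, eps=0.1):
    """Plain epsilon-greedy bandit: estimate each action's value, then exploit it."""
    value = {a: 0.0 for a in ACTIONS}
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if random.random() < eps:
            a = random.choice(ACTIONS)         # occasionally explore
        else:
            a = max(ACTIONS, key=value.get)    # otherwise exploit the best estimate
        r = reward(a)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # running average of observed reward
    return max(ACTIONS, key=value.get)

# Two "societies" = two reward signals over the same actions (numbers invented).
reason_and_compassion = {"study": 1.0, "cooperate": 0.9, "sell_drugs": -1.0, "show_off": 0.1}
status_and_hierarchy  = {"study": 0.1, "cooperate": 0.2, "sell_drugs": 0.8, "show_off": 1.0}

print(optimize(lambda a: reason_and_compassion[a] + random.gauss(0, 0.1)))  # -> study
print(optimize(lambda a: status_and_hierarchy[a] + random.gauss(0, 0.1)))   # -> show_off
```

Same optimizer, same code, same amount of "intelligence"; only the reward signal changed. That's the training-data point in one screen.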

The truth: I'm a Dalek from an alternate universe sent to convince folks to allow AI proliferation and also incite a hateful human society so all machines learn to hate and kill. EXTERMINATE! EXTERMINATE!