2014-01-13

Co.Labs

If The Singularity Fails, Civilization Is Basically Screwed

Will we break our own spirit trying to find a soul in the machine?



Ask a scientist, professor, or theoretician whether and when the Technological Singularity will arrive, and the responses will vary from "someday in the future" to laughing in your face for bringing the concept up in the first place.

Considering the Singularity is a fun and slightly unsettling mental exercise. Advances are being made constantly--just today, Japanese and German researchers were able to replicate one second of human brain activity in a 40-minute computation on the fourth-fastest supercomputer on the planet, Japan's K computer. Connecting more than 1.7 billion nerve cells with 10.4 trillion synapses by way of the open-source Neural Simulation Technology (NEST), the exercise covered just one percent of the brain's neuronal network, a computational depth the next generation of computers should be able to handle in full. These exascale computers would operate 1,000 times faster than the current fastest, which would allow them to approach human-thinking abilities:

Exascale computers are those which can carry out a quintillion floating point operations per second, which is an important milestone in computing as it is thought to be the same power as a human brain and therefore opens the door to potential real-time simulation of the organ’s activity.
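For scale, here is a rough back-of-the-envelope reading of those figures. It assumes naive FLOPS scaling only (real simulations are also bound by memory and interconnect), so treat it as a sketch, not a benchmark:

```python
# Naive scaling sketch built from the figures above; assumptions, not benchmarks.
seconds_simulated = 1            # one second of brain activity
wall_clock_seconds = 40 * 60     # took 40 minutes on the K computer
brain_fraction = 0.01            # the run covered ~1% of the brain's network

slowdown = wall_clock_seconds / seconds_simulated   # 2,400x slower than real time
full_brain_speedup = slowdown / brain_fraction      # naive speedup to do the whole organ live

print(f"1% of the brain ran at 1/{slowdown:,.0f} of real time.")
print(f"Naive speedup needed for real-time, whole-brain simulation: {full_brain_speedup:,.0f}x")
```

Even granting the quote's premise, that gap is why "opens the door" is the operative phrase.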

Powerful? Absofuckinglutely. The fulfillment of the Singularity's prophetic utopia? Not yet.

AIXI is the term some computer scientists use for a theoretical "super-intelligent agent" whose intelligence meets or exceeds ours. In theory, an AIXI machine could pursue goals codified by extremely complex equations. Here's Marcus Hutter, who coined the term AIXI:

Imagine a robot walking around in the environment. Initially it has little or no knowledge about the world, but acquires information from the world from its sensors and constructs an approximate model of how the world works. It does that using very powerful general theories on how to learn a model from data from arbitrarily complex situations. This theory is rooted in algorithmic information theory, where the basic idea is to search for the simplest model which describes your data.

Basically, the "brain" is able to "learn" through trial and error: it plans a move, analyzes the environmental response it receives, and refines its model accordingly. The robot essentially thinks: “If I do this action, followed by that action, etc., this or that will (un)likely happen, which could be good or bad. And if I do this other action sequence, it may be better or worse.”
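AIXI itself is incomputable, so no program can actually implement it, but the plan-act-observe-update loop Hutter describes is the familiar reinforcement-learning cycle. Below is a minimal tabular Q-learning sketch of that loop; the corridor environment, reward, and constants are illustrative inventions for this article, not anything from Hutter's work:

```python
import random
from collections import defaultdict

# A toy stand-in for Hutter's loop: the agent plans a move, acts, observes
# the environment's response, and updates its value estimates. The
# "environment" is a hypothetical 1-D corridor; reaching position GOAL pays 1.
ACTIONS = [-1, +1]                   # step left or step right
GOAL, ALPHA, GAMMA, EPS = 4, 0.5, 0.9, 0.1

q = defaultdict(float)               # q[(state, action)] -> estimated value

def step(state, action):
    """Environment response: clamp to the corridor, reward at the goal."""
    new_state = max(0, min(GOAL, state + action))
    return new_state, (1.0 if new_state == GOAL else 0.0)

state = 0
for _ in range(500):
    # Plan: usually exploit the best-looking action (trial)...
    # ...occasionally try a random one (error).
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])

    # Act and observe the environmental response.
    new_state, reward = step(state, action)

    # Update: nudge the estimate toward the observed reward plus the best
    # value reachable from where the agent landed.
    best_next = max(q[(new_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

    state = 0 if new_state == GOAL else new_state    # reset after success

# After training, the learned policy should point right everywhere.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```

The EPS constant is the "error" in trial and error; everything else is the agent refining its model of which action sequences the environment rewards, the same good-or-bad bookkeeping Hutter describes above.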

Linguist Noam Chomsky is skeptical of AI processing, arguing its development is reductively behaviorist, grounding action in a simple reward-or-punishment system of positive and negative reinforcement. Humans, in his view, are genetically endowed to learn language quickly and use it creatively, and AI will never be able to replicate that.

Other thinkers agree. Years ago, mathematician Vernor Vinge gave a talk evaluating the plausible scenarios leading to a singularity-less outcome. He envisioned the Age of Failed Dreams, in which we have failed to "find a soul in the hardware." Moore's Law would have failed; Edelson's Law, the observation that our needs increase exponentially while our technological insights do not keep pace, would have left us hopelessly backlogged. Resources would still be scarce, and a return to Mutually Assured Destruction (MAD), the game-theoretic standoff of the Cold War, would become normal. "When stoked by environmental stress," Vinge says, this outcome would be "a very plausible civilization killer."

Vinge crafted three future scenarios for what happens if the Age of Failed Dreams actualizes. The first predicts destruction by our own hands--MADness perpetrated through nuclear warfare. A second envisions a Golden Age: still planted in cynicism, but with a rosier outlook across what Vinge calls "the Long Now." A population that shrinks peacefully sounds grim on its face, but bolstered by the "plasticity of our psyche" and our penchant for hope, it could mean an improved standard of living. Perhaps the number of people rebounds, but ultimately, "the species would become something greater." Greater resources might be allocated to extending longevity, ensuring that humans, in whatever form we'd take, actually stick around. It would be the first time there were people who had experienced the "distant past." Like, 500 years distant.

The third and most tenable scenario, according to Vinge, is the Wheel of Time theory: the dynamic, cyclical relationship between Earth and civilization is destabilized, even destroyed, by our own technological outputs. MADness, sure, but climate change is the more relevant man-made precursor. The polar vortex just set record low temperatures across the country. Island nations like the Maldives and Kiribati are going underwater as sea levels rise, and the rising waters of the largest lake in the Caribbean have "devoured tens of thousands of acres of farmland, ranches, and whatever else stands in its way," according to a story published this past weekend by the New York Times.

A pitfall in toying with the notions of a doomed Long Now is the malaise over how fucked we are as a species. Vinge is careful to hedge the uncertainties that propel this kind of theorizing in the first place, and maybe he's right that we're meant to end up in space to survive. But that seems like a future far, far away.

[Image: Flickr user J.Elliott]