The Singularity and Why It Won’t Happen

The Singularity is defined as a moment in the future when computers exceed human intelligence, become self-aware, and the future of humanity becomes, according to pundits, “unpredictable and unforeseeable”. It makes me laugh to hear such profound-sounding comments, because I thought the future already was unpredictable and unknowable.

I’m not half as articulate as Ray Kurzweil and other proponents of this school of thought, but I have tried, using my novel The Melongic Order of Happiness as a basis, to explain why it won’t happen. The computer sits at the top of a human technological pyramid whose underpinnings are the widest of any in our history. That pyramid already consumes resources at a rate the planet cannot sustain.

Just as radio technology quickly reached its physical limitations, with transmission pegged at the speed of electromagnetic waves, so too will the switching speeds of computers reach their limits. Logic circuits cannot shrink below a physical floor of roughly one molecule. The only way around that limit is to build more computers and have them share the computations, and to share the workload we would have to build more and faster communications systems.
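That tradeoff has a name among computer scientists: Amdahl’s law. If any fixed fraction of a workload is serial – the coordination and communication between machines – then adding machines yields sharply diminishing returns. A back-of-the-envelope sketch (the 5% serial fraction is illustrative, not a measurement):

```python
def amdahl_speedup(n_machines: int, serial_fraction: float) -> float:
    """Best possible speedup from n machines when a fixed fraction of
    the workload (coordination, communication) cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_machines)

# Even with a modest 5% serial fraction, the speedup flattens out
# well short of the machine count:
for n in (10, 100, 1000, 10000):
    print(n, round(amdahl_speedup(n, 0.05), 1))
# 10 machines -> 6.9x, 10,000 machines -> barely 20x
```

No matter how many computers we wire together, the communication between them caps the gain – which is exactly why the pyramid needs ever more and faster communications systems just to stand still.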

IBM held a stunt event in conjunction with two former Jeopardy! champions, pitting its supercomputer Watson against them in a multi-day match. Watson did badly the first day and had to be tweaked. It eventually won, but one thing about the contest really stood out for me – none of the clues were visual in nature.

Humans’ strongest suit is our ability to process visual information. We think in pictures, and all the other senses come in a distant second in terms of computing power. It’s the reason the eyes are, developmentally, an outgrowth of the brain itself – to shorten the path between visual input and the multiple processors of the human mind.

For IBM’s Watson to have truly won the Jeopardy! match against the two former champions, it should have been able to look at a picture or a video and respond to clues about them – shown a clip of a former president, for instance, and answering ‘Who is FDR?’ The futurists have glossed over this shortcoming when discussing The Singularity. For a computer, or any combination of computers, to exceed the computing capacity of the human brain, it first must be able to ‘see’.

I contend that the resource limits of the Earth have already been reached, and that building the technological pyramid needed to develop a computer that uses visual information the way a human does would push the planet over the edge of resource consumption. In fact, the novel argues that The Singularity is unobtainable for exactly that reason: the resource-hungry industry required to produce it would put demands on the Earth that divide humanity into two warring camps. Once a division exists in the bottom ranks, there is no longer the human capital required to continue building the beast.

Think of a Ponzi scheme that runs out of investors at the bottom of the pyramid. The Earth will run out of elements to sustain growth, and once humans cease to cooperate with one another for philosophical reasons, that particular line of evolution will come to an end.

There are numerous examples of this already. In the novel, I chose for the world to run out of oxygen, fish, real food, and many other resources necessary for the continued growth of human development. The Earth is already running out of resources to sustain such exponential growth – just ask any environmentalist.
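The arithmetic behind that claim is simple enough to sketch. Against a fixed stock of any resource, even a modest annual growth rate in consumption dramatically shortens its lifetime (the numbers below are illustrative, not drawn from the novel):

```python
def years_until_exhausted(stock: float, annual_use: float,
                          growth_rate: float) -> int:
    """Years until a fixed resource stock is gone, if annual
    consumption grows by growth_rate each year."""
    years = 0
    while stock > 0:
        stock -= annual_use
        annual_use *= 1.0 + growth_rate
        years += 1
    return years

# 100 units in the ground, starting at 1 unit per year:
print(years_until_exhausted(100, 1, 0.0))   # flat use lasts 100 years
print(years_until_exhausted(100, 1, 0.03))  # 3% growth: gone in 47
```

A 3% growth rate – slower than the computer industry has ever grown – cuts a century of supply roughly in half. That is the Ponzi arithmetic in miniature.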

All the big fish in the sea are gone, through either consumption or pollution. Gold is a good example of a resource with finite limits. If it were discovered that a new form of energy could be supplied by gold-filled batteries, such a development couldn’t happen, for the obvious reason that there isn’t enough gold on planet Earth. Nevertheless, humans would scrape the Earth’s crust down to the mantle in search of it.

Ray Kurzweil wishes to live forever in human form, and needs a supercomputer to help him solve the technical problems standing between him and that goal. Unfortunately for Ray, DNA carries a nasty surprise in its myriad lines of code that leads inevitably to the death of the host: it can only be copied so many times before the information it contains becomes corrupt. It’s a program designed to self-destruct. Sorry, Ray.
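The mechanism can be caricatured with a toy model – a fixed chance of a copying error per base per generation, with the line failing once enough errors accumulate. This is a cartoon, not real molecular biology, and every number in it is made up:

```python
import random

def copies_until_corrupt(genome_len: int = 1000,
                         error_rate: float = 1e-3,
                         fatal_errors: int = 50,
                         seed: int = 42) -> int:
    """Toy model: each copy of a genome_len-base genome introduces
    errors at error_rate per base; the line dies once fatal_errors
    have accumulated. Returns the number of copies survived."""
    rng = random.Random(seed)
    errors, copies = 0, 0
    while errors < fatal_errors:
        for _ in range(genome_len):
            if rng.random() < error_rate:
                errors += 1
        copies += 1
    return copies

print(copies_until_corrupt())  # always finite, whatever the parameters
```

Change the parameters however you like: as long as the per-copy error rate is above zero, the count is finite. The self-destruct is baked in.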

But if he and millions of others were allowed to live even a slightly extended span, I think I’ve shown how that would exponentially increase our numbers and put unsustainable demands on all of Earth’s resources. It is when humans realize they are competing for the same resources needed to sustain life that they begin to fight.

Theoretically, we have already reached the age where computers are smarter than humans. There’s a scientific calculator on my smartphone that I have little use for; it’s already smarter than I am. Does that mean my smartphone can become self-aware and start telling me what to do? Will it be able to stop me from taking a two-pound sledge to it?

The only threat to humans can come from animate objects, not inanimate ones. We humans have within our nature the ability to kill other life forms to sustain our own – that is the mark of a self-aware organism. Competing forms of life have not done well against humans: archeologists and paleontologists note that where animals have gone extinct, the underlying cause is often that humans had moved into their territory. If computers were to compete with humans for Earth’s resources, I’d put my money on the humans, who can procreate and destroy faster than any other species. To sustain life, we destroy others – organic or synthetic, even our fellow man.