
Technological Singularity

A fringe group, most visibly led by Ray Kurzweil, proposes that “The Singularity is Near”. What exactly that means varies depending on who you ask, but Kurzweil’s own claim seems to be this: the rate of major discoveries and technological advances that significantly impact our culture is increasing exponentially. To understand his predictions, we should begin with his earliest writing on the matter.

His first book, “The Age of Intelligent Machines”, published in 1990, proposes mainly that computers will grow in intelligence. He specifically predicts that a computer will be able to beat a human at chess, which has since happened, and his overall point is that computers will become more computationally powerful than humans. This is certainly true. It was also true of the very first computers, which were created to perform simple computations. A computer executes a fixed instruction set, which operates on binary numbers represented as a series of electrical pulses. By stringing instructions together, we can perform some action. The simplest is addition: starting with the least significant bit, a set of transistors adds the two bits. One transistor outputs true if a carry is needed and false otherwise; another does the same based on whether the sum at that bit position was 1 or 0. The computer repeats this for each bit, stores the result in memory, and moves on to the next instruction. Each instruction does something similar. Programmers don’t actually write these instructions by hand; since the very first computers, we have simply built languages on top of them. And as computers got faster and we wanted to do more complex things, we built more complex languages. These languages make programming easier but run much slower, so we built faster computers, and so the cycle goes. But this isn’t intelligence, it’s just computational power. (It’s easy to miss the distinction. In the original essay, I wrote in the second sentence that computers will become more intelligent than humans, but what I meant was that they will be able to make faster computations. We can see this in computers now: they can perform mathematical operations very quickly, but computers have only memory to draw on, while a brain has experience. Memory lets a computer solve the exact problem it was programmed to solve; experience lets us apply that solution to similar problems. We can also make and test inferences, while computers are only capable of the brute-force search they were programmed with.)
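To make that carry-bit dance concrete, here’s a rough sketch in Python of the ripple-carry addition just described. All the names are mine, and this only illustrates what the transistor logic computes, not how any real adder is built:

```python
def full_adder(a, b, carry_in):
    """One bit of addition: returns (sum_bit, carry_out)."""
    total = a + b + carry_in
    return total % 2, total // 2

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)  # the final carry-out becomes the top bit
    return result

# 3 (bits 011) + 6 (bits 110), written least significant bit first:
print(ripple_carry_add([1, 1, 0], [0, 1, 1]))  # [1, 0, 0, 1] -> 9
```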

Computers can’t think. They can’t come up with anything on their own. Computers beat humans at chess only because we programmed them to check every possible move, a few moves ahead, and decide which one is the least risky. They check each move by brute force and compare the outcomes using an algorithm we gave them. That isn’t a sign of intelligence, only of computational power.
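Here’s a toy version of that kind of search in Python, on a simple take-stones game rather than chess (real chess machinery would bury the point). The game and all the names are invented purely for illustration; actual chess engines are vastly more elaborate, but the shape is the same: check everything, score outcomes with a rule a human supplied.

```python
def minimax(stones, depth, maximizing):
    """Toy game: players alternately take 1-3 stones; whoever takes the last stone loses."""
    if stones == 0:
        # The previous player took the last stone and lost.
        return 1 if maximizing else -1
    if depth == 0:
        return 0  # search horizon reached: call it even, like a crude evaluation
    scores = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

# With 6 stones, the computer "finds" the winning move by checking everything:
best = max((1, 2, 3), key=lambda take: minimax(6 - take, 6, False))
print(best)  # 1 -- leave the opponent 5 stones, a losing position
```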

The end result of his books is not a set of meaningful short-term predictions, but rather the prediction that humans will become computers, or something like that. Artificial sentience would be a good way to describe it: computers that can actually think like humans. The hope is that computers will be able to ‘absorb’ human consciousnesses. That’s just absurd. They think that if we have fast enough computers, we’ll achieve immortality. I think it’s reasonable that computers will eventually be able to model a human brain, but any supposition of immortality is beyond the realm of possibility. Duplicating the functionality of a brain is one thing, but copying the entire state – literally copying someone’s consciousness – would probably require violating the uncertainty principle, and might be impossible in any deterministic sense due to quantum effects. And there’s still the question of what happens to the original.

Immortality is impossible. Thermodynamics has a principle called entropy: a quantity that always increases, and essentially a measure of the disorder within a system. Any reaction that occurs within an isolated system – any reaction at all – will increase (or at best preserve) the entropy of that system. And any reaction in a non-isolated system that decreases that system’s entropy will increase the entropy of its surroundings by at least as much, thanks to inefficiencies in every reaction, such as friction and energy lost as heat. Since every reaction increases total entropy, there is only one possible end state: the entropy of the universe will climb to its maximum. It’s a slow process, and it gets slower as we approach that point, but eventually the last star will die. There won’t be enough mass concentrated anywhere for a new star to form. Molecules will slowly break down into their constituent atoms. The atoms will decay – first just the radioactive ones, though some models predict the so-called stable ones will eventually decay too, until all the universe is diffuse electromagnetic energy. Either way: if all the universe is chemical reactions, electrical interactions, and forces, then as entropy approaches its maximum, there won’t be enough usable energy to create or maintain any life, even computer-based life. Isaac Asimov wrote a short story on this called The Last Question, which I strongly recommend. (full text)
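For reference, that claim is just the second law of thermodynamics, usually written something like:

```latex
\Delta S_{\text{universe}}
  = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}}
  \ge 0
```

with equality holding only for perfectly reversible processes, which real reactions never are.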

There’s a certain leap from computation to consciousness that we don’t fully understand. Something within us allows the formulation of new ideas – the creativity and thought that define life. It arose somewhere in our distant past, and we don’t understand how. We can’t make it, not yet at least, and when we do, it won’t be with a computer. Transistors can only go so far: they can do what we design, but they can’t actually become alive by any meaningful cognitive definition of the term.

The argument can be made that software, not hardware, will allow computers to simulate a brain. But software doesn’t extend hardware in this way at all. Any calculation that software performs is still just a series of pulses sent through a lot of transistors; the software merely provides us with an interface. Whatever limitations exist in the hardware don’t just disappear with the right incantations of code.
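One quick way to see this, if you have Python handy: the interpreter’s built-in dis module will show the instruction-like bytecode that even a trivial high-level expression reduces to. (The exact opcode names vary between Python versions.)

```python
import dis

# Even a one-line lambda bottoms out in a short list of machine-like steps:
dis.dis(lambda a, b: a + b)
# Prints something like: LOAD_FAST a, LOAD_FAST b,
# BINARY_ADD (BINARY_OP on newer Pythons), RETURN_VALUE.
# The "interface" is layers of translation, not new hardware capability.
```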

And think of the ethics of that. Sentient computers. You’d either have to cripple them to protect the human race – and any other biological race they meet – or you’d be unleashing a mechanical invasion of the universe. Take examples from fiction: the Cylons, built to serve humanity’s every need, turn against their masters. The Daleks, engineered by Davros, evolve beyond his control and start several wars. Don’t say “oh, we’ll just give them the three laws of robotics”. That’s all fine and dandy if you’re building a robot that you intend to program. A programmed machine, like Data of Star Trek, can be given arbitrary code, like “don’t kill humans”, but you can’t program a computer simulation of a human mind any more than you can program your next-door neighbor. There is no neural code for “don’t kill humans”; that’s what makes brains different from computers. There’s no program somewhere on our Creator’s hard drive which, when compiled, gives the file human.exe, with subroutines and instructions and all. There’s just a blueprint for a series of neural links which learn from experience and, above all, act to preserve themselves. Because a neural net that did anything else would be rather silly.

So Kurzweil’s idea is that the time between ‘major technological advances’ is decreasing toward zero, and that at the point where the math predicts that interval reaches zero, we’ll simply discover everything else there is to know in an instant. His logic seems flawed. When an equation predicts something weird – like the mean time between discoveries going to zero – then rather than starting an ‘I want to be a robot’ cult, we should go back and examine the assumptions we made along the way. First, there may be a historical bias against older discoveries: things the people who compiled the data didn’t think were relevant, but that were very relevant when first discovered. There may also be a selection bias toward the technologies that affect our lives today over events in the more distant past. Perhaps the data is better fit by a different model, one with a constant term in addition to the term that decreases to zero – in other words, a minimum time, as sketched below. And anyway, who’s to say we’ll ever discover the secrets of the brain? We’ve really only probed the surface, and analyzing every single neuron accurately enough to model the entire brain in software could run afoul of some uncertainty principle. And then there are the questions of consciousness, perhaps better left to the philosophers.
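Here’s a sketch of that model-selection point, using numbers I made up purely for illustration: fit hypothetical “years between major advances” data with and without a constant floor, and the two models disagree entirely about where the curve ends up.

```python
import numpy as np
from scipy.optimize import curve_fit

def pure_decay(t, a, k):
    # This model forces the gap between advances to shrink to zero.
    return a * np.exp(-k * t)

def decay_with_floor(t, a, k, c):
    # Same decay, plus a constant term c: a minimum time between advances.
    return a * np.exp(-k * t) + c

# Hypothetical gaps (in years) between "major advances", evenly sampled:
t = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)
gap = np.array([80.0, 47.0, 30.0, 20.0, 15.0, 12.0, 10.5, 10.0])

p_decay, _ = curve_fit(pure_decay, t, gap, p0=(80, 0.5))
p_floor, _ = curve_fit(decay_with_floor, t, gap, p0=(70, 0.5, 10))

print("pure-decay fit predicts the gap shrinks to 0")
print("floor fit predicts a minimum gap of about %.1f years" % p_floor[2])
```

Both curves can fit the historical points about equally well; only one of them implies a singularity.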

BCIs are interesting and all, but they’re at a very experimental stage. My understanding of the BCI prosthetics we have now is that we plug them into the existing connections that once ran from the brain to, say, the hand, and tell the user to try to move that hand. The brain has to learn to interface with the new tool, rather than us interfacing with the brain. Attaching ‘memory chips’ directly to the brain is a different matter entirely. Even if we did model the behavior of existing neurons, it would still be a very invasive process, because of how interconnected the brain is.

Their last idea is nanobots: essentially, small robots that run around inside our heads, with sensors to record the state of the brain and report back. This is highly speculative, so all I’ll say is this: sensors that can non-invasively detect and record all the connections and activity of what would have to be a huge number of neurons, while still fitting on a chip the size of a red blood cell, sound more like the Star Trek sensors that detect a single ‘life sign’ from light-years away than anything practical. Perhaps our time is better spent on actual medicine.

Publication Note: This is a revised version of an essay written for HUM401. It has been slightly revised for grammar, content, and continuity, and both the original work and the revisions are mine.
