Peet Brits

Hmm, but that doesn't make any sense…

Archive for the ‘Technologies, Theories and Philosophies’ Category

A more scientific reason why robots will not rule the earth

Posted by Peet Brits on December 20, 2010

My previous post had very little to do with robots, so I will dedicate this post to some views on what intelligent computers might, or might not, become.

From an evolutionary point of view, the neocortex is the most recent addition to the brain. It is our cortex that lets us think and act consciously. Other mammals have a smaller, less developed cortex, so they too have memories and some creativity, but it is our highly developed conscious thought that distinguishes us humans from other animals. As far as computer intelligence is concerned, the neocortex is what we care about.

Drawing on the book On Intelligence by Jeff Hawkins, here is a reason why the robots we see in sci-fi movies will not rule the earth. Firstly, even if we copy the cortex, we still lack the emotions of the “old brain” and the complex inputs from the human body and nervous system. No emotions means no greed, no ambition and no desire for wealth, social recognition or sensual gratification. That is, unless we painstakingly design these in, which would be extremely difficult and quite pointless.

Secondly, the cost and effort required to build such a machine make it completely impractical. An intelligent robot butler, for example, would be more expensive and less helpful than a human assistant. However intelligent it might be, it would not share the understanding that a fellow human would.

That said, assuming we do not run out of natural resources or kill each other in another world war, I believe there is much room for intelligent machines in our future. Machines will be considered intelligent because they can learn and make useful predictions. They will be strongest where we are weakest, whether because of intellectual difficulty, the inadequacy of our senses, or activities that we simply find boring.

I recently saw a video (but lost the link) in which computers are used to predict a medical diagnosis for a patient. The claim was that computer predictions would be faster and more accurate than those of a human doctor, since the computer has all the available data about the patient. Humans can only keep a limited amount of data in memory, which limits both the speed and the accuracy of a diagnosis. Since medical conditions are well documented and researched, the computer can combine diagnostic rules with the patient’s symptoms and long medical history to make an accurate prediction.
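To make this concrete, here is a toy sketch of the kind of rule matching involved. It is entirely my own illustration, not taken from the video; the conditions, symptoms and scoring are made up.

```python
# Toy illustration: score candidate diagnoses by how many of their
# documented symptoms appear in the patient's record.
RULES = {
    "influenza": {"fever", "cough", "fatigue", "muscle aches"},
    "migraine": {"headache", "nausea", "light sensitivity"},
    "anaemia": {"fatigue", "pale skin", "shortness of breath"},
}

def rank_diagnoses(patient_symptoms):
    """Return candidate conditions sorted by the fraction of their symptoms matched."""
    scores = {
        condition: len(symptoms & patient_symptoms) / len(symptoms)
        for condition, symptoms in RULES.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(rank_diagnoses({"fever", "cough", "fatigue"}))
# influenza comes out on top: 3 of its 4 listed symptoms match
```

A real system would of course weigh thousands of conditions against years of patient history, but the principle of matching documented rules against recorded data is the same.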

The above example is still far from the intelligence found in the cortex, but who knows where the future will take us. Intelligent machines will help us invent extremely useful tools, but nothing more.



The real reason why robots will not rule the earth

Posted by Peet Brits on December 12, 2010

This post has nothing to do with robots and Sci-fi. If that is all you care about, then wait for the next post.

Here is the real reason why robots will not rule the earth: Machines require electricity, which requires oil, and in our lifetime we will see the end (or steady decline) of oil consumption.

You don’t believe me? Here’s the proof. This presentation by Dr. Albert A. Bartlett explains the simple yet not-so-well-known implications of the exponential function, and below I will highlight some of the points he makes. For example, a mere 7% annual growth rate results in a doubling roughly every ten years.
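To see where that ten-year figure comes from, here is a quick back-of-the-envelope check (my own sketch, not part of Bartlett’s talk):

```python
import math

def doubling_time(annual_growth_pct):
    """Years for a quantity to double at a steady annual growth rate."""
    r = annual_growth_pct / 100.0
    exact = math.log(2) / math.log(1 + r)  # solve (1 + r)**t == 2 for t
    rule_of_70 = 70 / annual_growth_pct    # the usual mental shortcut
    return exact, rule_of_70

exact, approx = doubling_time(7)
print(f"7% growth doubles in about {exact:.1f} years (rule of 70 says {approx:.0f})")
# 7% growth doubles in about 10.2 years (rule of 70 says 10)
```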

“But we will discover new resources.” Really? Do you understand how much we would need to discover just to maintain exponential growth? New discoveries would do no more than put a little bump on the downward curve.
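Here is a rough sketch of why that is (my own illustration, with made-up numbers): if consumption keeps growing at a steady percentage, even doubling the total reserve only buys roughly one extra doubling time.

```python
import math

def lifetime_years(reserve, current_annual_use, annual_growth_pct):
    """Years until exponentially growing consumption exhausts a reserve."""
    k = math.log(1 + annual_growth_pct / 100)           # continuous growth rate
    return math.log(1 + k * reserve / current_annual_use) / k

# A reserve worth 100 years at today's consumption rate, with 7% annual growth:
print(f"{lifetime_years(100, 1, 7):.1f} years")   # about 30 years, not 100
print(f"{lifetime_years(200, 1, 7):.1f} years")   # doubling the reserve adds barely 9 more
```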

The classic example used to explain this concept is the wheat and chessboard problem. A king wanted to reward a mathematician for an invention, and the wise man asked for a very simple reward: on the first square of a chessboard he would receive one grain of wheat, two on the second, four on the third, and so forth, doubling the amount each time. That doesn’t sound like much, right? Well, the result would be about 400 times the 1990 worldwide harvest of wheat! That is more wheat than has been harvested in the entire history of the earth!
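Just to put a number on it, the total is easy to compute (my own sketch; the comparison with the 1990 harvest is Bartlett’s):

```python
# Grains of wheat on a 64-square chessboard: one on the first square,
# doubling on every square after that.
total_grains = sum(2**square for square in range(64))   # equals 2**64 - 1
print(f"{total_grains:,} grains")
# 18,446,744,073,709,551,615 grains, roughly 1.8 * 10**19
```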

Even more shocking than the total amount is the fact that every square holds more wheat than all the squares before it combined. One implication: if something doubles every minute and takes one hour to fill a bottle, then the bottle is only half full one minute before the end, and only a quarter full the minute before that. THIS is the real impact of the exponential function: it bursts out so unexpectedly that we never see it coming.
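Both points are easy to check; here is a tiny sketch (again my own illustration, not from the talk):

```python
# 1) Each square on the chessboard holds more than all previous squares combined:
#    2**n = (2**0 + 2**1 + ... + 2**(n-1)) + 1
assert all(2**n > sum(2**k for k in range(n)) for n in range(1, 64))

# 2) Bartlett's bottle: something that doubles every minute fills the bottle in an hour.
for minutes_left in (4, 3, 2, 1, 0):
    fraction_full = 1 / 2**minutes_left
    print(f"{minutes_left} min before the end: {fraction_full:.1%} full")
# 4 min before the end: 6.2% full
# 3 min before the end: 12.5% full
# 2 min before the end: 25.0% full
# 1 min before the end: 50.0% full
# 0 min before the end: 100.0% full
```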

Apply this idea to oil consumption (or any of the earth’s finite resources) in combination with steady population growth, and it is not hard to see how unsustainable it really is. The video suggests that even discovering a second earth would not be enough to sustain continual exponential growth. We have bred a culture that expects growth, yet all growth destroys the environment; “smart growth” just does it in good taste.

This and many other problems boil down to the earth’s overpopulation. A great illustration of the overpopulation problem is Asimov’s bathroom metaphor, but I suggest you stop listening to me and watch the video to get the full impact.

This is just a small taste of what the YouTube video has to offer. Listen and be amazed!


The Singularity – What?

Posted by Peet Brits on September 12, 2010

The Singularity is the notion that computers will eventually take over the role of humans, or at least create a new entity with smarter-than-human intelligence. This is not exactly the stuff of sci-fi movies such as The Terminator. For a more detailed overview, see SIAI and Wikipedia. The current estimate for the coming singularity is the year 2045 (Kurzweil, 2005).

Why It Feels Silly

My ex-colleague Marius thought the whole idea completely silly, especially the talk of human extinction by AI, and the more I think about it the more I agree with him. Let me explain why I believe these ideas are nothing but pie in the sky.

I am halfway through the book “On Intelligence” by Jeff Hawkins, a man with firm roots in computing and a deep knowledge of neuroscience. His main argument against the coming singularity is that we do not understand the nature of intelligence: bigger and faster does not equal intelligent. Hawkins points out that, although computers are already five million times faster than the human brain, they still cannot do tasks like recognizing a cat in a photograph. I still have to read about the new framework he proposes, but that does not change the point.

Computers and brains are fundamentally different, and they are good at fundamentally different tasks. Computers are brilliant at mathematical calculations, but terrible at inferred ideas. Chess, for example, is a hard problem for a computer, yet computers can play at a professional level. In the Chinese game of Go, by contrast, computers can only just reach a strong amateur level. Many AI researchers believe that Go mimics elements of human thought far more closely than chess does. (Anybody want to learn Go?)

Hawkins claims that Turing’s definition of intelligence is incomplete. He explains, using John Searle’s Chinese Room thought experiment, that one cannot measure understanding by external behaviour alone, and therefore “programs are neither constitutive of nor sufficient for minds.” Some of the criticism collected on Wikipedia agrees that it is impossible for machines to be truly intelligent.

I have yet to compare these ideas with those in the brilliant futurist Ray Kurzweil’s book “The Singularity Is Near.” His predictions probably relate more to sudden technological growth in general than to intelligence as such.

For the Paranoid

I wanted to label this section “For the Religious and the Paranoid,” but that would only get me in trouble. The older generation often gets paranoid, and it just so happens that many of them are religious. The late Douglas Adams explained people’s paranoia about the future very nicely:

1) Everything that’s already in the world when you’re born is just normal;

2) Anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;

3) Anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about ten years when it gradually turns out to be alright really.

Apply this list to movies, rock music, word processors and mobile phones to work out how old you are.

Whoever you are, I have good news for you. If the idea of a coming singularity makes you feel uneasy and angry, then do not worry, because it will probably not happen in your lifetime. On the other hand, if you are getting butterflies of excitement, then hold your thumbs and keep your eyes open.

Conclusion

I am a big fan of Douglas Adams. He has a brilliant way of mocking the overall silliness of humanity in his books, especially in The Hitchhiker’s Guide to the Galaxy. Humans are constantly searching for some sort of greater meaning, yet none exists. Yes, you heard me. There is no greater meaning or ultimate answer to our existence other than that which we give it.

Referring back to Jeff Hawkins: it is in our nature to create patterns, and so we find them even where none exist. Whatever the nature of the coming singularity turns out to be, it will probably be no more harmful than the internet and computers were for the previous generation, rock-and-roll music for the generation before that, and probably even the wheel back when the cave dwellers invented it.
