Wednesday 4 March 2015

A.I. still don't think so...

Sometimes I promise to return to a topic and never do (e.g. this one, optimistically labelled 'part one') and sometimes I stick doggedly to a topic that I find interesting (e.g. my obsession with non-Skaldic Norse verse last year) in defiance of dwindling interest and declining readership. I'm sure you will all be delighted to see that today I am returning to my theme of the week: Artificial Intelligence.

Steve from Weymouth posted on my alter-ego's Facebook page that he agrees with me about the non-imminence of the Singularity (the hypothesised point at which machine intelligence surpasses human intelligence) but wants to know whether I think it will always be unachievable.

Persecuted genius Alan Turing was the first to point out that 'Can machines think?' is basically a meaningless, unanswerable question, and he replaced it with the rather less snappy: "Is it true that by modifying a computer to have an adequate storage, suitably increasing its speed of action and providing it with an appropriate programme, it can be made to play satisfactorily the part of 'A' in the Imitation game?" (The imitation game - now usually called the Turing Test - is the game in which an interrogator has to guess which of two screened-off interviewees is a machine and which a person.)
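For anyone who likes their definitions operational, the structure of the game can be sketched in a few lines of code. This is purely illustrative: the `interrogator`, `human` and `machine` here are made-up callables, not real chatbots, and the round count is arbitrary.

```python
import random

def imitation_game(interrogator, human, machine, rounds=5):
    """Minimal sketch of the imitation game's structure.

    The interrogator questions two hidden players, labelled A and B,
    then guesses which label hides the machine. The machine 'passes'
    if the interrogator guesses wrong.
    """
    # Assign the hidden labels at random so the interrogator can't cheat.
    players = {"A": human, "B": machine}
    if random.random() < 0.5:
        players = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = interrogator.ask()
        for label, player in players.items():
            transcript.append((label, question, player(question)))

    guess = interrogator.guess(transcript)  # returns "A" or "B"
    machine_label = "A" if players["A"] is machine else "B"
    return guess != machine_label  # True means the machine passed
```

The hard part, of course, is not the scaffolding above but writing a `machine` whose answers survive five minutes of questioning.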

Turing predicted in a paper written in 1950 (well worth a read) that a computer with a storage capacity of 10 to the power 9 (binary digits) would be available by the end of the century and would be adequate to the task. Your average PlayStation 3 could perform nearly two trillion floating point operations per second if called on to do so (and you use it to shoot zombies - shame on you). Effectively unlimited amounts of information can be stored in the Cloud (in the narrow sense that more space can be created faster than we can fill it). So presumably we are just waiting for the relevant program to be written. It hasn't been written. And you'd think that the $100,000 Loebner Prize on offer since 1991 would be sufficient incentive.
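To put Turing's figure in perspective, a little back-of-the-envelope arithmetic (my own, not from the paper beyond the 10⁹ figure; the 500 GB drive is just an illustrative modern capacity):

```python
# Turing's 'storage capacity of about 10^9' is in binary digits (bits).
turing_bits = 10**9
turing_bytes = turing_bits / 8            # 125,000,000 bytes
turing_megabytes = turing_bytes / 10**6   # roughly 125 MB

print(f"Turing's 10^9 bits is about {turing_megabytes:.0f} MB")

# Compare with an ordinary consumer hard drive (illustrative figure):
drive_bytes = 500 * 10**9  # 500 GB
ratio = drive_bytes / turing_bytes
print(f"A 500 GB drive holds about {ratio:.0f}x Turing's requirement")
```

So we overshot his hardware prediction by three or four orders of magnitude; it's the programme, not the storage, that's missing.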

The idea of designing a computer programme more intelligent than a human being is conceptually difficult. Is it even a possibility, given that the designer is human (or a machine of merely human-equivalent intelligence)? Almost by definition, such a programme must have the capability to pose and solve questions that a human cannot even consider. So how can a human designer instil such a capability?

There may be more of this later in the week. It's something I clearly need to get out of my system. But today's my day off so I'm off to the White Lion for a pint. 


