By Andy Berry
In 1979, Dr Chris Evans wrote a book that, many thought at the time, heralded the Computer Revolution - "The Mighty Micro".
I met Dr Evans shortly after the publication and liked his prediction that we would have what he called UIMs (Ultra-Intelligent Machines) by the 1990s. I'd played with handwriting recognition, was then writing ever-more complex programs for micros and could see no limits to what was achievable.
What was a UIM? It wasn't a robot, more an intellectual partner of undefined shape and size, taking more and more of the intellectual burden and freeing us from work. It would, of course, have the ability to understand writing and spoken input and would pass the Turing Test (that is, a person communicating with it wouldn't know it was a computer) with ease.
He didn't predict, as many science fiction writers did, that such developments would lead to chaos. Instead, he predicted rocky times through a transition period in the 1980s followed by a golden age in the 1990s.
Nearly twenty years later, I came across a copy in a second-hand bookshop. What has gone wrong? Why can't I ask my wristwatch UIM to write this article while I bask on a beach? Isn't it funny how the mere mention of the Turing Test now appears to date me back to the 1950s?
Apart from mankind's eternal over-optimism about the future, there are four factors that have made progress so slow:
The priorities that society imposes on people working in the IT industry have turned out to be different from those Dr Evans predicted. For example, he foresaw the end of the professions as UIMs became better at medicine (say) than humans. I see no evidence of that happening. In fact, professions seem to be encouraging the development of ever more complex 'helper' applications that, if anything, more firmly entrench the roles of professionals. It's almost as if they are using IT to reinforce the old model of professionals as priests performing arcane rituals that are outside the domain of their supplicants.
I'm sure that if society consistently (or even occasionally) demanded UIMs, then they would be closer.
I can't remember my feelings in the late 1970s, but I'm sure that we felt we were making progress in understanding how the mind works. We even had programs, as Dr Evans mentioned, that could make a passable attempt at holding a conversation with you - provided you didn't expect them to have any general knowledge or to possess any originality. Now, my feeling is that we are no further forward in understanding what makes humans human. Indeed, the closer we look, the more complicated it all becomes.
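Conversation programs of that era, ELIZA being the best known, held up their end of a dialogue by simple pattern matching and canned replies rather than by any understanding. A minimal sketch of the technique (the patterns here are my own invention, not taken from any historical program):

```python
import re

# ELIZA-style responder: match the input against a list of patterns
# and fill the captured text into a reply template. No knowledge, no
# originality - exactly the limitation described above.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(utterance: str) -> str:
    """Return a canned reply from the first pattern that matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

A handful of such rules produces a surprisingly passable exchange, which is precisely why these programs once looked like progress - and why, on closer inspection, they told us so little about how minds actually work.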
Dr Evans dismisses the 'software gap' in one page - the realisation that we'd have to develop new ways of writing software if we are truly to reap the benefits of computers. Have we? The only answer I can give is 'No': we still write programs by painstakingly writing each step. The way we write may have altered, what we write may have altered, but the basic process is still the same. To those in the industry who may point out the recent developments in graphical ways of developing software I say, "fine, but I bet you still spend most of your time writing and then correcting lines of code."
A related topic is the organisation of software development. I'm sure that Dr Evans had no idea that, in his golden age, people would still be writing software as if it were a craft industry. I'm impatient for the Industrial Revolution to arrive in software land.
What directions have we gone off in?
Well, I'd highlight two:
Interestingly, Dr Evans does mention PRESTEL (there's a blast from the past) and electronic mail. But he gets more enthusiastic when talking about MINNIE (a precursor of the Psion Organiser) and the possibility of putting complete encyclopaedias on single chips. What he missed are the awesome possibilities when you link computers together.
It won't be long before all the computers in the world (at least, those with interesting data) are linked by the Internet. At the moment, it's all essentially passive. When it becomes truly interactive, who knows what will happen - I'm certainly not going to predict.
In 1979, most computers generated green text on a black background. Computer-generated graphics were mainly of the Pong variety (large single-coloured rectangles) or required months of programming and large budgets.
Compare that with now. I'm writing this on a computer many times more powerful than any available in the 1970s. But I'm still writing it. All the power at my disposal is being used so that I can see the text as it will be printed. That's a use of today's vastly enhanced computing power quite different from anything Dr Evans predicted.
Dr Evans would probably have said that, compared with MINNIE, my Windows CE palmtop has enough power to be a UIM. Instead, that power is devoted to displaying a picture that is supposed to look like a desktop. I'm not saying that is a waste; I'm just saying that's what has happened.
Do we need UIMs?
My answer is: 'Probably'. The problem is: who is going to pay for them? The Japanese tried with their 5th Generation project (by the way, what happened to that?) but seem to have failed as it turned out to be more complicated than anyone imagined.
What's needed is a 'killer application' - a program that nearly everyone needs and is prepared to pay for. If I knew what that was, I'd be writing it now!
I'm really keen to hear what you think of this. Please email any comments to: firstname.lastname@example.org