
Dec. 1, 1992
Published on: 21C Magazine

The fuzzy logic that makes the artificial intelligent is already in washing machines and production lines. But how close are we to simulating neurons for a virtual cortex? 

By Wilson da Silva

THREE STOREYS below the teeming cacophony of Kowloon in Hong Kong, at a platform of the Tsim Sha Tsui subway station, a billboard with sprinter Carl Lewis advertises a television set. Using artificial intelligence, the unit delivers a crisper, more realistic picture, a smiling Lewis informs the hurrying millions who whizz past. A split-screen photograph shows the difference in the image of a Bengal tiger.

So, this is it, I thought, while on a recent visit. Artificial intelligence in the family home. I guess I had visualised artificial intelligence as something more likely to be found driving a servant robot, or an android bartender that greets you by name and knows your favoured poison. But as always with the march of technology in a society oh-so fond of mod-cons, it is the pedestrian things that are first touched by the future.

It’s hard to believe that just over a decade ago Artificial Intelligence – AI to aficionados – was in virtual decline. Pessimism about the slow pace of advances prevailed, and a growing number of scientists despaired of ever finding practical applications.

But advances in the 1980s, commercialisation successes and a perceived threat from a big-budget Japanese assault on AI helped revive the field. Today, AI is not only embedded in Hong Kong televisions but at work in Tasmanian pulp mills, in New South Wales banking and in sheep-shearing robots in Western Australia.

‘Expert systems’ are one branch of AI research that is now turning quite a few dollars. These are large and complex pieces of software that mimic the thinking processes of human experts – be they chemical engineers in a factory or bank managers considering loan applications. Researchers interview the experts in detail, picking their brains on how they would react to a host of situations, and then design a logic lattice incorporating their expertise.

That way, a cybernetic engineer or bank manager is on tap 24 hours a day. In a factory, a chemical engineering expert system could monitor thousands of sensor inputs, note slight variations in a manufacturing process that a human might miss, deduce the problem, warn the flesh-and-blood engineer on watch that a potential problem was building up, and offer a solution. The engineer would then investigate and, if in agreement, act on the electronic tip-off.
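The details of such systems are proprietary, but in miniature the idea looks something like this: a collection of if-then rules, consulted against the current sensor readings, with the human left to make the final call. The sensor names, thresholds and advice below are invented for illustration – this is a sketch of the technique, not any of the systems described in this article.

```python
# A minimal sketch of a rule-based expert system for plant monitoring.
# Sensor names, thresholds and advice are invented for illustration only.

RULES = [
    {
        "if": lambda s: s["tank_temp_c"] > 85 and s["acid_flow_lpm"] < 10,
        "then": "Possible acid pump blockage: check pump P-3 and flush the feed line.",
    },
    {
        "if": lambda s: s["tank_temp_c"] > 95,
        "then": "Tank temperature critical: reduce heating and notify the duty engineer.",
    },
    {
        "if": lambda s: s["zinc_conc_gpl"] < 50,
        "then": "Zinc concentration low: feed grade may have drifted, re-sample the concentrate.",
    },
]

def consult(sensor_readings):
    """Return the advice of every rule whose conditions match the readings."""
    return [rule["then"] for rule in RULES if rule["if"](sensor_readings)]

if __name__ == "__main__":
    readings = {"tank_temp_c": 88, "acid_flow_lpm": 7, "zinc_conc_gpl": 61}
    for warning in consult(readings):
        print("ADVICE:", warning)   # the human engineer decides whether to act
```

A real industrial system would have hundreds or thousands of such rules, chained together so the conclusion of one can trigger another – but the division of labour is the same: the software watches and advises, the engineer acts.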

“There are lots of little problems out there, all of which are amenable to expert systems,” researcher Phil Collier of the University of Tasmania said from Hobart. “They are good for solving problems an expert could solve over a telephone.”

Collier and his team of students have installed an expert system at the Pasminco Metals smelter in Risdon, Tasmania, which monitors the hydrometallurgical process used to treat zinc concentrates. Another University of Tasmania researcher, Paul Crowther, has installed an expert system at the Associated Pulp and Paper Mills facility in Burnie, Tasmania, which monitors the recovery of expensive chemicals from the pine pulping process.

AI is also about to arrive at branches of the State Bank of New South Wales. Its Consumer Loan Assistant expert system will help managers decide whether a potential borrower is a good risk. Built from countless interviews with bank managers and analysis of loans that went bad, the expert system will make a balanced judgement – unimpeded by race, sex or ethnic background.

“You can have bank officers enter details into the computer and it will make a preliminary decision about whether an applicant should get a loan,” said Sue Zawa, the bank’s manager of new technologies. 
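Very roughly, such a preliminary screen boils down to scored rules distilled from those interviews. The criteria, weights and thresholds below are invented purely for illustration – they are not the bank’s Consumer Loan Assistant, which is far larger and was built from real lending records.

```python
# A toy illustration of interview-derived lending rules.
# Criteria and weights are invented; not the State Bank's actual system.

def preliminary_decision(applicant):
    """Score an application against simple rules and return a recommendation."""
    score = 0
    if applicant["years_in_job"] >= 2:
        score += 2
    if applicant["monthly_repayments"] <= 0.3 * applicant["monthly_income"]:
        score += 3
    if applicant["previous_defaults"] == 0:
        score += 3
    if applicant["savings_history_months"] >= 12:
        score += 2

    if score >= 8:
        return "recommend approval"
    if score >= 5:
        return "refer to manager"
    return "recommend decline"

print(preliminary_decision({
    "years_in_job": 4,
    "monthly_income": 3200,
    "monthly_repayments": 800,
    "previous_defaults": 0,
    "savings_history_months": 18,
}))
```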

Zawa said expert systems could be applied to money market dealing rooms. A system could plough through volumes of technical data – currency swings, fluctuations in bond interest rates, gross national product figures – constantly monitoring minute changes in the direction of the market and advising on steps to protect the bank’s investments, or identifying an opportunity for the bank to make a quick killing. This would free dealers to concentrate on the human element of markets – deciding whether a comment by Australian Treasurer John Dawkins augurs tighter or looser monetary policy, or whether a remark by Prime Minister Paul Keating suggests higher or lower government spending – judgements that can see millions of dollars in bonds and other securities change hands in seconds.

Expert systems may be applied to money market dealing rooms

One of the advantages of an expert system is that it never blinks. It doesn’t lose concentration, go to the bathroom or get bored – all of which can affect the most able experts and lead them to miss a vital clue in a developing situation. More attentive operators might have prevented Chernobyl, as a more attentive crew might have saved flight KAL 007.

AI is blossoming, and applications are squirming their way into all facets of life. Auditing at some local banks uses AI, as do some high-powered programs for scheduling international air flights, and a new air traffic management system is being developed by the Civil Aviation Authority. An Australian company, ISR Holdings Ltd, has developed a fifth-generation computer language called XL that mimics reasoning ability and operates on existing Unix computer systems. The company has seen its share price leap on the Australian Stock Exchange since announcing a partnership with Japanese trading house C. Itoh to further develop the technology.

Even Australian beer, of all things, is succumbing to cybernetic brain cells. Carlton & United Breweries Ltd is working with the Melbourne-based Australian Artificial Intelligence Institute to develop a scheduling system for its packaging plant. The institute is on the leading edge of local AI, and is even doing work for the American space agency NASA on developing diagnostic systems for the international Freedom space station. Other institute partners include shipping company Conaust Ltd, miner CRA Ltd, the Australian Army and Telecom.

ANOTHER FIELD of AI entering commercial application in a big way is neural networks. These are the closest things yet to human brains, or to what little we understand of those mysterious grey lumps. They are machines that, for all intents and purposes, think through a problem and offer solutions based on previous experience – and they learn from that experience. They arrive at solutions much as a human being with a command of logic would, and, like a human brain, they cannot be opened up so the reasoning can be traced.

“You can train it, give it lots of examples and it will make judgements,” said Dr Stephen Hood of the Defence Science and Technology Organisation in Adelaide. “It’s analogous to the human brain. It’s great if you want speed, but [like human brains] you can’t look inside it to double-check, to see how the solution was arrived at.”
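Stripped to a single artificial neuron and a toy problem, the ‘training by example’ Hood describes looks roughly like this. It is a sketch only – real neural networks wire together thousands of these units, which is precisely why their internal reasoning is so hard to inspect.

```python
# A minimal sketch of learning by example: one artificial neuron adjusting
# its connection weights until its answers match the training examples.
import random

def train(examples, epochs=200, rate=0.1):
    weights = [random.uniform(-1, 1) for _ in examples[0][0]]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output                 # how wrong was the guess?
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error                    # nudge toward the right answer
    return weights, bias

# Toy examples: learn a simple two-input pattern (logical AND).
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
for inputs, _ in examples:
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, "->", 1 if activation > 0 else 0)
```

Nobody tells the neuron the rule; it is shown examples and corrected until it behaves. Scale that up by many orders of magnitude and you have the speed Hood admires – and the black box he warns about.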

“AI is very broad,” said Professor Robin Stanton of the Australian National University in Canberra. “It covers a lot of people doing a lot of different things.”

There are more than 300 researchers working on AI in universities, government science agencies and corporations in Australia, though fewer than a third spend most of their time on it. Some, such as Professor Ross Quinlan of the University of New South Wales in Sydney, lead the world in ‘machine learning’ and ‘decision trees’.
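In miniature – and only in the spirit of Quinlan’s ID3 family of algorithms, not his actual code – decision-tree learning works like this: pick the question that best separates the examples, branch on its answers, and repeat on each branch. The toy data below is invented for illustration.

```python
# A compact sketch of decision-tree induction in the spirit of ID3:
# choose the attribute that best splits the examples, then recurse.
import math
from collections import Counter

def entropy(rows):
    counts = Counter(row["class"] for row in rows)
    total = len(rows)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def build_tree(rows, attributes):
    classes = {row["class"] for row in rows}
    if len(classes) == 1 or not attributes:
        # Pure subset (or nothing left to ask): return the majority answer.
        return Counter(row["class"] for row in rows).most_common(1)[0][0]

    def remaining_entropy(attr):
        # Weighted entropy of the subsets produced by splitting on attr.
        total = 0.0
        for value in {row[attr] for row in rows}:
            subset = [r for r in rows if r[attr] == value]
            total += (len(subset) / len(rows)) * entropy(subset)
        return total

    best = min(attributes, key=remaining_entropy)   # highest information gain
    tree = {"ask": best, "branches": {}}
    for value in {row[best] for row in rows}:
        subset = [r for r in rows if r[best] == value]
        rest = [a for a in attributes if a != best]
        tree["branches"][value] = build_tree(subset, rest)
    return tree

# Invented toy examples: should a batch be accepted or rejected?
examples = [
    {"moisture": "high", "fibre": "short", "class": "reject"},
    {"moisture": "high", "fibre": "long",  "class": "reject"},
    {"moisture": "low",  "fibre": "short", "class": "accept"},
    {"moisture": "low",  "fibre": "long",  "class": "accept"},
]
print(build_tree(examples, ["moisture", "fibre"]))
```

The appeal of decision trees over neural networks is transparency: the learned tree is a series of questions a human can read and argue with.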

Australia is not an outcast in the world of AI, but not often a leader either. Local expertise is recognised enough for the field’s 1991 confab, the International Joint Conference on Artificial Intelligence, to have been held in Sydney. Among the luminaries were AI godfather Marvin Minsky of the United States and young Turk Rodney Brooks of Adelaide, now at the top of the AI tree at the Massachusetts Institute of Technology near Boston.

HAL, the onboard artificial intelligence that runs the spaceship in 2001: A Space Odyssey

But many argue that AI will always be hampered by what little we know about the human brain. How can we imitate something we don’t really understand in the first place?

“Trying to understand the cognitive behaviour which creates that challenging, mystifying and highly engaging behaviour we know as intelligence,” said Stanton, who is also chairman of the Australian Computer Society’s national AI committee. “That’s the challenge. Most people really believe we will create an intelligent, surrogate companion.”

Stanton said the rational components of brain activity can be mimicked, but not the irrational: “It’s only the rational that is going to be replicated. Humans are erratic and irrational; I think it is beyond us to make an intelligence that would match the engaging and emotionally complex behaviour we take for granted.”

One thing becomes clear about the quest for the manufactured mind – how incredibly complex our own is. Things like taking down messages from the telephone, recalling the taste of some dish and comparing it to the meal you’re eating, or reading a book while pedalling an exercise bike – these are all complex tasks that require a lot of mental firepower. Only when you break such simple actions into their constituent commands and try to write a program for a machine to imitate them do you realise the power of the grey neurons you take for granted.

It’s been half a century since the first true computers were wired up, yet we have still to create an artificial intelligence anywhere near the intellect of your average moggy, and we are certainly far from producing anything as erudite as HAL the shipboard computer.

“If one talks about the capability of a say, one or two-year-old child, I think we’re at that level now,” said Professor Ray Jarvis of Monash University, an AI researcher and president of the Australian Robot Association. “What we’d like to be able to do is push it up, maybe to five or six, or seven or eight, within the next decade. If we get anywhere near the capability of a five or six-year-old child ... we’ve really got a winner.”

Some argue that AI is heading down the wrong path by trying to mimic human thinking. They say machine intelligence is its own unique brand of thinking, separate from and independent of human thought processes.

Others, like Minsky, believe computers can match humans and in many cases already ‘think’. They exhibit all the signs of a child that is clever, precocious and impressive – but also stupefyingly dumb. In seconds they can sort through confusing reams of data and pick out emerging trends, or schedule air flights and runways while allowing for wind resistance and estimated times of arrival – but they cannot tell the difference between a dog and a car, nor navigate a room without bumping into the furniture.

In his book The Society of Mind, Minsky argues that intelligence is not one simple, all-encompassing presence but most likely a co-operative association among a myriad of interacting ‘thought tasks’. These thought tasks – all very fast, very focused on single problems and very dumb – are run by one very simple traffic-management program we would call sentience, awareness, or – if you really want to push it – the soul.

“If you’re holding a cup of coffee, you don’t want to have to think about whether the cup is tilting,” Minsky told 21C in an interview last year at IJCAI ‘91, demonstrating the point by lifting up a white china cup of coffee from its saucer. “In the spinal column and the cerebellum, you set up little automatic sub-robots that keep measuring the pressure on your thumb and your finger, and if there’s more pressure on your thumb, it sends a message back to your wrist to rotate and keep the cup level.

“And that doesn’t bother the part of you that’s talking or doing other things,” he said. In such a way, the brain acts as if it were “300 or 400 rather complicated computers in a big network.”
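As a purely conceptual sketch – with invented toy ‘agents’, not a model of the cerebellum – Minsky’s picture of many dumb, single-purpose processes kept ticking over by a very simple manager might be rendered like this.

```python
# A conceptual sketch of Minsky's 'society of mind': many tiny, single-purpose
# agents, each dumb on its own, kept running by a very simple manager while
# the rest of the mind gets on with something else. Toy examples only.

class KeepCupLevel:
    """Tiny agent: if there is more pressure on the thumb, rotate the wrist back."""
    def run(self, state):
        if state["thumb_pressure"] > state["finger_pressure"]:
            state["wrist_angle"] -= 1        # rotate to level the cup
            state["thumb_pressure"] -= 1     # rotating shifts load to the finger
            state["finger_pressure"] += 1

class BlinkNowAndThen:
    """Tiny agent: blink every few ticks, without being asked."""
    def __init__(self):
        self.ticks = 0
    def run(self, state):
        self.ticks += 1
        if self.ticks % 5 == 0:
            state["blinks"] = state.get("blinks", 0) + 1

def manager(agents, state, ticks):
    """The 'traffic management' layer: nothing clever, it simply keeps
    every little agent running, tick after tick."""
    for _ in range(ticks):
        for agent in agents:
            agent.run(state)
    return state

state = {"thumb_pressure": 3, "finger_pressure": 1, "wrist_angle": 5}
print(manager([KeepCupLevel(), BlinkNowAndThen()], state, ticks=10))
```

None of the agents knows it is holding a cup of coffee, let alone carrying on a conversation; the coordination of hundreds of such processes is what Minsky suggests we experience as a single mind.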

The replicant Pris in Blade Runner

Minsky said the old AI approach of greater speed and computing power is never going to create a thinking computer that can match the diversity and flexibility of human thinking: “You can’t do this sort of thinking by brute force. Thinking is too clever and tricky. But you can do it by 300 different approaches and somehow managing them to work together.

“You can say that machines are thinking, they’re just thinking dumb little thoughts,” Minsky said. “They’re very precise and very fast, but they’re limited thoughts.”

Minsky gave a stirring, rally-round-the-flag speech to conference participants, urging scientists to press ahead with AI applications and saying research was bringing the day of cybernetic consciousness closer.

Some researchers are not so upbeat: “It’s the old story. You can climb a tree and say you’re on the way to the Moon, but there’s still a long way to go. That gets you down sometimes,” said one Australian AI scientist.

JAPAN HAS TRIED to kick-start the process. In 1982 the powerful Ministry of International Trade and Industry (MITI) announced a 50 billion yen ($550 million), 10-year program to develop fifth-generation computer technology. This was a bold attempt to tackle some of the big barriers in AI, and it was greeted by a mixture of derision and panic: derision because Japan was not then highly regarded in the computer science world, and panic because some Western researchers feared that a Nipponese ‘Manhattan Project’ might crack some of the big questions and leave the West floundering in the wake of the Rising Sun.

The Institute for New Generation Computer Technology (ICOT) was created and for a decade worked with 200 Japanese researchers and 75 scientists from 12 countries on a host of very tough AI problems. Yet, despite the saddlebags of cash, it has been widely labelled a failure. At a recent international conference in Tokyo, director Kazuhiro Fuchi defended the group’s work.

The project did spawn what are probably the best programming systems in the world, and brought a generation of young Japanese scientists into AI research, said technology analyst David Kahaner of the U.S. Office of Naval Research in a recent report on the project. As a consequence, Japan is no longer on the bottom rung of computer science. But the hardware and software developed by ICOT is unused or unusable outside Japan, and little utilised inside the country, he said.

MITI hasn’t given up. At the 10th anniversary conference it announced it would renounce all proprietary claims to ICOT software – in the hope of encouraging its wider use – and would move to convert the lessons of ICOT’s research into C and Unix, the programming language and operating system widely used in the West. This amounts to 70 large programs, including parallel operating software, parallel logic programming languages and other high-end computer applications. Fuchi noted that today’s mainstream technologies – Unix, C and RISC among them – are only now being widely applied, even though they have been around for 10 to 20 years.

But Fuchi admitted that the project, directed by a committee, had its difficulties. Setting out to design a horse, the committee had instead produced a camel, he said. ICOT will continue, and another project – the Real-World Computing Project – is to be funded until 1995 and is designed to create fast and powerful computers that can do intelligence-intensive tasks.

Spectacular research failures are the sorts of things that give AI a bad name and scare industry away, Stanton said.

The android Commander Data in Star Trek: The Next Generation

THE WHOLE QUESTION of integrating artificial intelligence into something approaching human form is a separate front of the research war. It is also one that has arguably been more successfully commercialised – there are now an estimated 330,000 robots around the world, of which Australia is home to about 1,400. Japan leads with 176,100, followed by the United States with 33,000, but smaller players are quickly entering the field – Czechoslovakia has 5,700, Taiwan 700 and Brazil 120.

There are still many problems to solve before we reach the droids of Star Wars or the replicants of Blade Runner. Object recognition, visual systems, mobility in a three-dimensional environment – bringing AI into the real world of humans will take some time. But robots are already performing a surprising range of tasks – agriculture and livestock handling, ocean development and fisheries, transportation and warehousing, medical care, civil engineering, building maintenance, fire-fighting – along with the more traditional assembly-line work.

A robot in Denmark trims bones from fish, the U.S. Postal Service has contracted a company to build a toilet-cleaning robot, and a computer-controlled 26-metre arm washes jumbo jets. Maybe that android bartender is not such a crazy prospect after all.

Are some AI systems already thinking? Dr Paul Thistlewaite of the Australian National University wonders if researchers will ever be certain: “I don’t know if we’ll ever be able to tell. What is clear is they [AI systems] can certainly do some fairly smart things. And the number of smart things is increasing.”

Yes, but when will we have a machine that can sit with us and sip cappuccinos by the beach? Perhaps never, perhaps it is only a generation away. Meanwhile, the consumer fruits of AI research will continue to creep into our homes: already this year, stores are selling washing machines that use ‘fuzzy logic’ to guess your wishes. 
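In rough outline – with membership curves and rules invented here, rather than taken from any manufacturer – that washing-machine fuzzy logic works by letting each reading belong to a category by degree, then blending a few simple rules into a setting.

```python
# A minimal sketch of the 'fuzzy logic' idea: readings belong to categories
# by degree (0.0 to 1.0), and simple rules blend those degrees into a setting.
# Membership curves and rules are invented, not any manufacturer's.

def dirtiness_high(turbidity):          # 0.0 (clean water) .. 1.0 (very murky)
    return min(max((turbidity - 0.2) / 0.6, 0.0), 1.0)

def load_heavy(weight_kg):              # degree to which the load counts as heavy
    return min(max((weight_kg - 2.0) / 4.0, 0.0), 1.0)

def wash_minutes(turbidity, weight_kg):
    """Blend two fuzzy rules into a wash time between 20 and 90 minutes."""
    dirty = dirtiness_high(turbidity)
    heavy = load_heavy(weight_kg)
    # Rule 1: the dirtier the water, the longer the wash.
    # Rule 2: the heavier the load, the longer the wash.
    long_wash = max(dirty, heavy)       # fuzzy OR of the two rule strengths
    return round(20 + long_wash * 70)

print(wash_minutes(turbidity=0.7, weight_kg=5.5))   # murky water, big load
print(wash_minutes(turbidity=0.1, weight_kg=1.0))   # nearly clean, small load
```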

Can a dryer that does not lose your socks be all that far away?