
Hear here

WHILE working on the screenplay of 2001: A Space Odyssey, one of the century's most acclaimed science fiction films, director Stanley Kubrick and legendary science fiction author Arthur C Clarke created HAL 9000, a 21st-century supercomputer on board the spaceship Discovery. HAL could do amazing things: it could fly the spaceship while the astronauts were suspended in cryogenic sleep, play chess like a pro, and even discuss philosophy. Later, however, HAL turns renegade and tries to eliminate the astronauts on board. One astounding scene from the film shows HAL "eavesdropping" on the astronauts simply by observing the movements of their lips through a hidden camera. Though we still lack the technology to replicate HAL's lip-reading abilities, scientists, according to the latest reports, have managed to design a computer that can "hear" sounds the same way we do.

Researchers at the University of Plymouth, UK, have developed a neural network that can filter sounds in the same way as the human brain. This network could increase our understanding of hearing problems and boost the development of more accurate speech recognition systems.

Sue McCabe and Michael Denham from the university's neurodynamics research group are investigating how, in the earliest stages of hearing, the brain sorts streams of sound to help focus our attention on those we are interested in. Until now, researchers have used digital sound processors to study sound recognition, but these chips are typically very bad at picking out patterns and their programs cannot be easily changed.

The pair created the neural network in order to study the "pre-attentional" streaming process by which humans sort out sounds. The software is made up of two interacting arrays of virtual neurons. When an oscillating signal of high and low frequencies is played, the software spontaneously sorts the stream by frequency.

Each array homes in on one frequency, inhibiting the reactions of the other array so that it picks a different signal. The software swiftly divides the streams and separates sounds. The work supports theories of pre-attentional hearing that suggest humans listen first and organise sounds later, rather than the other way around.
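The competitive mechanism described above can be illustrated with a toy simulation. The Python sketch below is not the Plymouth software: it stands in for the two interacting neuron arrays with a single competing unit each, and the frequencies, weights and update steps are invented purely for illustration. It only shows how mutual inhibition between two competitors can split an alternating high/low tone sequence into two separate streams.

```python
import random

# Hypothetical toy sketch (not the Plymouth model): two competing units stand
# in for the two interacting neuron arrays. Each tone of an alternating
# high/low sequence is claimed by whichever unit responds more strongly; the
# winner "homes in" on that frequency while inhibiting the rival's response
# to it, so the rival is pushed onto the other frequency and the single
# input is divided into two streams.

HIGH, LOW = 1000, 400                 # Hz; illustrative values only
tones = [HIGH, LOW] * 8               # oscillating high/low input stream

weights = [{HIGH: 0.5, LOW: 0.5},     # unit 0's sensitivity to each frequency
           {HIGH: 0.5, LOW: 0.5}]     # unit 1's sensitivity to each frequency
streams = [[], []]

random.seed(0)
for freq in tones:
    other = LOW if freq == HIGH else HIGH
    # Tiny noise breaks the initial tie between the two identical units.
    resp = [w[freq] + random.uniform(0.0, 1e-3) for w in weights]
    winner = resp.index(max(resp))
    loser = 1 - winner
    streams[winner].append(freq)

    # The winner specialises on this frequency and away from the other one;
    # its inhibition weakens the loser's response to this frequency.
    weights[winner][freq] = min(1.0, weights[winner][freq] + 0.2)
    weights[winner][other] = max(0.0, weights[winner][other] - 0.2)
    weights[loser][freq] = max(0.0, weights[loser][freq] - 0.2)

print(streams)  # each unit ends up collecting only one of the two frequencies
```

Running the sketch shows each unit accumulating only one of the two frequencies, a crude analogue of the frequency-based streaming the researchers observed.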

The neural network has been built to function like the human hearing system rather than to represent an accurate physiological model of the ear and the brain structures behind it. "This is a very abstract model of performance and streaming," says McCabe, "but we are interested in getting it to mimic human performance."

The fact that the neural network succumbs to the same auditory illusions that humans do is a good guide to how accurately it works.

These aural illusions arise because we have no control over the pre-attentional process of sorting sounds. Our brain does it for us and then expects us to deal with the results. This means that when an oscillating stream of sound, flipping between two frequencies, is played, we instantly sort it into separate streams. Once they have been sorted, it is quite impossible to consciously merge the two sounds back into a single stream.

The neural network does the same thing. Equally, if a rising scale is played to one ear and a falling scale to the other, we do not hear the two separately: instead we hear a bouncing pattern. The neural network gets just as confused as we do when this happens.

One problem is that the array is much slower than the brain, taking up to eight seconds to organise a stream of signals. But McCabe says this is still fast enough to be useful. She is planning further research to investigate the physiological basis of hearing. She also wants to get a better idea of why humans are prey to these illusions and what parts of the brain are involved in streaming.
