We're a little crazy, about science!

New learning procedure for neural networks


Rustling leaves, a creaking branch: to a mouse, these sensory impressions may at first seem harmless, but not if a cat suddenly bursts out of the bush. In that case, they were clues to impending, life-threatening danger. Researcher Robert Gütig has now shown how the brain can link sensory perceptions to events that occur only after a delay.

In a new computer model, he developed a learning procedure in which the model neurons learn to distinguish between many different stimuli by adjusting their activity to the frequency of the cues. The model works even when there is a time delay between the cue and the event it predicts. The learning procedure captures an ability that is vital for the survival of every living creature, filtering relevant stimuli from the environment, and it could also help solve a number of technological learning problems. One possible application is in the development of speech recognition programs.

In the animal world, dangers are frequently preceded by warning signs: telltale sounds, movements, and odors may be clues of an imminent attack. If a mouse survives an attack by a cat, its future will be brighter if it learns from the failed attempt and reads the clues earlier next time around. However, mice are constantly bombarded with a vast number of sensory impressions, most of which are not associated with danger. So how do they know which sounds and odors from their environment presage a cat attack and which do not?

This poses a problem for the mouse’s brain. In most cases, the crucial environmental stimuli are separated in time from the actual attack, so the brain must link a clue and the resulting event (e.g. a sound and an attack) even though there is a delay between them. Previous theories have not satisfactorily explained how the brain bridges this gap between a cue and the associated outcome.

So Gütig created a computer-simulated neural network that reacts to stimuli in the same way as a cluster of biological cells. This network can learn to filter out the cues that predict a subsequent event. It learns by strengthening or weakening specific synapses between the model neurons. The foundation of the computer model is a synaptic learning rule under which individual neurons can increase or decrease their activity in response to a simple learning signal.

“This ‘aggregate-label’ learning procedure is built on the concept of setting the connections between cells in such a way that the resulting neural activity over a certain period is proportional to the number of cues,” explains Gütig.
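As a rough illustration of that idea, here is a toy sketch in Python. It is not Gütig's actual spiking model (which uses a multi-spike tempotron); a simple rate-based delta rule stands in for it, and the channel indices, firing rates, and learning rate are all invented for illustration. A single model neuron adjusts its synapses so that its total activity over a trial matches the number of cues that occurred.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_steps = 20, 100
cue_channels = [3, 7]  # hypothetical input channels that carry the warning cue

def make_trial(n_cues):
    """Random background spikes plus n_cues embedded cue events."""
    x = (rng.random((n_steps, n_inputs)) < 0.05).astype(float)
    for _ in range(n_cues):
        t = rng.integers(n_steps)
        x[t, cue_channels] = 1.0  # the cue fires its channels at a random time
    return x

w = np.zeros(n_inputs)  # synaptic weights
lr = 0.0005

for _ in range(4000):
    n_cues = int(rng.integers(0, 4))  # label: how many cue events occurred
    x = make_trial(n_cues)
    s = x.sum(axis=0)     # per-channel spike counts over the whole trial
    total = s @ w         # the neuron's aggregate response
    err = n_cues - total  # mismatch between label and total activity
    w += lr * err * s     # nudge the synapses to close the gap
```

After training, the weights on the cue channels should dominate the background channels, so the neuron's total response tracks the number of cues in a trial even though it was never told when they occurred.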

In this way, if a learning signal reflects the occurrence and intensity of certain events in the mouse’s environment, the neurons learn to react to the stimuli that predict those events.

However, the networks can learn to react to environmental stimuli even when no learning signals are available in the environment. They do this by interpreting the average neural activity within a network as a learning signal. Individual neurons learn to react to stimuli that occur in the same numbers as those to which other neurons in the network react.

This ‘self-supervised’ learning follows a principle different from the Hebbian theory that has frequently been applied in artificial neural networks. Hebbian networks learn by strengthening the synapses between neurons that spike at the same time or in quick succession.

“In self-supervised learning, it is not necessary for the neural activity to be temporally aligned. The total number of spikes in a given period is the deciding factor for synaptic change,” says Gütig.
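The self-supervised variant can be sketched in the same toy style (again with linear rate-based neurons standing in for spiking ones, and all sizes and rates invented): each neuron treats the population's mean aggregate response as its learning signal and nudges its synapses toward it, so the neurons' responses converge without any external label.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_inputs = 5, 20

# random initial weights: the neurons start out responding differently
W = rng.normal(0.0, 0.5, (n_neurons, n_inputs))
probe = np.ones(n_inputs)          # fixed probe stimulus for a before/after check
spread_before = np.std(W @ probe)  # how much the neurons disagree initially

for _ in range(1000):
    s = (rng.random(n_inputs) < 0.2).astype(float)  # toy spike-count vector
    r = W @ s                      # each neuron's aggregate response
    err = r.mean() - r             # the population mean acts as the learning signal
    W += 0.05 * np.outer(err, s)   # pull every neuron toward the mean

spread_after = np.std(W @ probe)   # disagreement after learning
```

Because the per-neuron updates sum to zero across the population, the mean response is preserved while the individual neurons drift toward it; spread_after ends up far smaller than spread_before.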

This means that such networks can link sensory clues of different types, e.g. visual, auditory and olfactory, even when there are significant delays between their respective neural representations. Not only does the learning procedure explain biological processes; it could also pave the way for far-reaching improvements to technological applications such as automatic speech recognition.

“That would facilitate considerable simplification of the training requirements for computer-based speech recognition. Instead of laboriously segmented language databases or complex segmentation algorithms, aggregate-label learning could manage with just the subtitles from newscasts, for example,” says Gütig.

Gütig, R. (2016). Spiking neurons can discover predictive features by aggregate-label learning. Science, 351(6277). DOI: 10.1126/science.aab4113

One response

  1. Daniel M.

    Very interesting read. How Gütig was able to develop this is way above my realm of understanding, yet I still find the concept very interesting, and it poses questions. I find it interesting how he was able to monitor a vast network of stimuli to the point of being able to filter certain ones out. This makes me wonder: is his system based on temporal or spatial summation, or both? You also mention how specific synapses can become strengthened or weakened over time depending on the cues they receive. What kind of cues does he use when deciding whether to weaken or strengthen these synapses? Does he try to form them on the same basis as neurotransmitters would work? Also, just as important as sending the information in this system he developed, what about those on the receiving end? You then close with the concept of potential automatic speech recognition. If this were built into a technology, would that piece of equipment need a certain type of “brain” or main piece of equipment, or could it rely on the multiple variations of speech that are programmed into it? Overall, a very interesting article!


    April 22, 2016 at 3:16 pm
