
The Perceptual Network (under revision)

 

 


 

Sensory Layer
Signal Separation Layer
  Signal Separation Neuron
  Causal Perception
  Signal Noise
  Columns
  Branching
  Feedback
Neurobiological Prediction
  Henry Markram
  Chicken and Egg
  Signal Sorting Mechanism
Coincidence Layer
  Coincidence Neuron
  Masters and Slaves

 

In this section I present a neural network model for perceptual learning based on the principles I covered in Perceptual Learning. The perceptual network consists of several feed-forward layers or modules: sensory, signal separation, association, working memory and attention. Please refer to the network diagram for an overview of the network's architecture.

Sensory Layer

This is the input layer that detects changes in the environment. As mentioned in the previous section, every sensor has a complement: for every phenomenon there is a sensor that detects stimulus onset and another that detects stimulus offset. The output of a sensor normally branches into multiple parallel pathways, and these branches make both successor and predecessor connections with target neurons.

As an example, consider how Animal detects a chess piece on a given square. The detection can only happen if either the eye or the piece moves. This is no different from the way the human eye works: our eyes continually make minute, back-and-forth movements called saccades, even when we fixate on a dot. It is known that when the eye is immobilized, we lose our ability to see stationary images. Here is an example of what happens when Animal's eye moves from one location to another: if there is a queen on the destination square, the positive queen sensor for that square fires, and if there is a pawn on the starting square, the negative pawn sensor for that square fires. All visual sensors work on the same principle.
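To make the onset/offset idea concrete, here is a minimal Python sketch of a complementary sensor pair. The class and its names are my own illustration, not Animal's actual code:

    # Minimal sketch of a complementary (onset/offset) sensor pair.
    # Illustrative only; not Animal's actual code.

    class ComplementarySensor:
        """Watches one phenomenon (e.g. a queen on a given square).
        The positive half fires on stimulus onset, the negative half
        on stimulus offset."""

        def __init__(self, phenomenon):
            self.phenomenon = phenomenon
            self.present = False       # was the stimulus in view last cycle?

        def sense(self, now_present):
            """Report which half fires, if any. Only changes produce signals."""
            fired = None
            if now_present and not self.present:
                fired = "onset"        # positive sensor fires
            elif not now_present and self.present:
                fired = "offset"       # negative sensor fires
            self.present = now_present
            return fired

    # The eye lands on a square holding a queen: the positive sensor fires.
    queen_e4 = ComplementarySensor(("queen", "e4"))
    print(queen_e4.sense(True))        # -> 'onset'

    # The eye leaves a square that held a pawn: the negative sensor fires.
    pawn_e2 = ComplementarySensor(("pawn", "e2"))
    pawn_e2.sense(True)                # pawn initially in view
    print(pawn_e2.sense(False))        # -> 'offset'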

Signal Separation Layer

The problem with most sensory signals is that they do not come prepackaged in neatly labeled boxes. A visual sensor, for example, may fire whenever it detects a certain change in brightness. The individual firings of the sensor have no meaning in and of themselves other than marking a temporal transition. A single sensory stream, such as might be generated by a retinal ganglion cell, will contain many mixed signals that are mostly unrelated to each other. These signals are said to be independent and must be separated from the stream into parallel pathways. Fortunately, there is a good chance that they are correlated with signals in other streams. By comparing their times of arrival with those of signals in other streams, it is possible to find statistically salient correlations among them. These temporal correlations are used to sort signals and guide them into specific slots for further processing. I use a special signal separation neuron for this purpose.

Signal Separation Neuron (SSN)

An SSN has one successor input and one predecessor input. It fires if the successor signal arrives immediately after the predecessor. The neuron may have several predecessor inputs in the beginning, but after a brief learning period only one predecessor input survives. The SSN works on the assumption that sequentially correlated signals in two parallel streams will arrive contiguously with a probability much higher than chance. Here's how it works:

An SSN is a probabilistic filter neuron. That is to say, it does not fire every time a successor signal arrives but only when the successor signal is immediately preceded by a signal arriving on the predecessor connection. As such, an SSN can be likened to a gate guarding the entrance of a pathway: only successor signals that meet the right conditions are allowed through. During the initial learning period, predecessor connections are slightly weakened every time they fire. By contrast, successor connections are slightly strengthened when they fire. If a successor fires immediately after a predecessor, both are strongly strengthened. The predecessor connection that is the first to reach a given mature strength is considered the winning connection; the other predecessors are immediately severed. When the strength of the successor reaches maturity, the SSN's output is ready to be connected to a downstream layer. If the successor reaches maturity without securing a predecessor, it is considered an independent successor and the signal just passes through. Here is how it is implemented in Animal:

It is important to use the right correlation probability. I now use a 10 to 1 ratio in Animal; that is, a predecessor input must consistently arrive at least once for every 10 successor arrivals in order to be considered correlated. I use this ratio because it is the number that seems to work best in Animal at this point, although it may change in the future. Implementing this in code is rather simple. In Animal, an SSN can initially have up to 32 predecessor input connections, each with a starting strength of 10. Every time a predecessor fires, its strength is weakened by 1. Every time a signal correlation is detected, the predecessor is strengthened by 10. As soon as a predecessor reaches a predetermined maturity level (40 in Animal), the other predecessors are disconnected and the SSN is marked as mature and can no longer be changed. Only mature SSNs can make connections with the downstream coincidence layer.
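Below is a minimal Python sketch of this learning rule, using the numbers just given. The class layout and names are my own illustration, not Animal's actual source:

    # SSN learning rule: start at 10, weaken by 1 per predecessor firing,
    # strengthen by 10 per detected correlation, mature at 40, at most
    # 32 candidate predecessors. Illustrative sketch only.

    START_STRENGTH = 10
    MATURITY = 40
    MAX_PREDECESSORS = 32

    class SignalSeparationNeuron:
        def __init__(self, candidate_predecessors):
            # candidate predecessor connections and their current strengths
            self.pred = {p: START_STRENGTH
                         for p in candidate_predecessors[:MAX_PREDECESSORS]}
            self.winner = None      # the surviving predecessor, once found
            self.mature = False

        def on_predecessor_fired(self, p):
            """A candidate predecessor fired: weaken it slightly."""
            if not self.mature and p in self.pred:
                self.pred[p] -= 1

        def on_correlation(self, p):
            """The successor fired immediately after predecessor p:
            strengthen p strongly and check for maturity."""
            if self.mature or p not in self.pred:
                return
            self.pred[p] += 10
            if self.pred[p] >= MATURITY:
                self.winner = p                  # first to maturity wins
                self.pred = {p: self.pred[p]}    # the others are severed
                self.mature = True               # a mature SSN no longer changes

Note how the -1/+10 asymmetry encodes the 10 to 1 ratio: a predecessor that is correlated with fewer than one in ten of its firings loses strength on balance and can never reach maturity.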

Causal Perception

One of the most important skills that an intelligent agent should have is the ability to perceive the evolution of sensory events; it must learn to recognize the causal correlations in its environment. Causality has to do with before-and-after phenomena. Signal separation is a method of classification based on the causal correlations between signals in a changing environment: successor events are identified by their predecessors. Some causal correlations involve time scales much longer than the one used in an SSN. For example, an intelligent system must be able to discern that letting go of a ball causes it to hit the ground. Correlations over multiple time scales are also tied to the ability to anticipate or predict events before they happen. This subject is treated in greater detail in the memory section.

Signal Noise

The use of a fixed temporal asymmetry between successor and predecessor signals is an effective approach to temporal learning, especially when signal noise is present in the sensory stream. While random noise decreases the likelihood of finding a temporal correlation, once a correlation is found, separation neurons become effective barriers to the further propagation of noise through the rest of the system.

Columns

As I mentioned in the Animal section, the separation layer is highly compartmentalized. Separation neurons in each compartment receive input connections only from a given type of sensor. The purpose of compartments is to restrict the proliferation of irrelevant or redundant connections in the separation layer. Which type a compartment handles is irrelevant; the important thing is to prohibit connections across types. This is analogous to the brain's cortical columns. In the visual cortex, for example, we observe a retinotopic mapping of connections divided into bundles called columns that specialize in orientation, ocular dominance, etc.
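In code, the compartment rule reduces to a one-line check at connection time. A sketch, with attribute names of my own choosing:

    def may_connect(sensor, ssn):
        """Compartment rule: an SSN accepts predecessor inputs only from
        sensors of the same type as its compartment (no cross-type links)."""
        return sensor.sensor_type == ssn.compartment_type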

Branching

How does one determine how many branches or separations are required for a given sensory stream? My approach is to start out with a single axonal branch from a sensor neuron. The branch makes a successor input connection with an SSN in the signal separation layer, and the SSN makes several random predecessor input connections with other sensors. As soon as all the existing branches become mature, another branch is added, as sketched below. Predecessor redundancy should be avoided for performance reasons; in other words, a sensor should not make predecessor input connections with more than one successor branch originating from the same sensor. Also, for obvious reasons, a sensor should not make both successor and predecessor connections on the same destination SSN. Even if the system did not check for double connections, the normal operation of an SSN would be enough to eliminate them; nevertheless, Animal is designed to block double SSN connections outright.
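Here is a minimal Python sketch of that branch-growing policy. The data structures are my own illustration of the rules above, not Animal's actual implementation:

    # Add a new successor branch for a sensor only when all of its
    # existing branches (SSNs) have matured, and pick predecessor
    # candidates that avoid the redundancies described above.

    import random

    def maybe_add_branch(sensor, branches, all_sensors, n_candidates=4):
        """branches: one dict per existing branch of `sensor`, with keys
        'mature' (bool) and 'predecessor' (the winning sensor, if any)."""
        if branches and not all(b["mature"] for b in branches):
            return None  # still learning; do not branch yet

        # No predecessor that already won on another branch of this
        # sensor, and never the sensor itself (a sensor must not be both
        # successor and predecessor on the same SSN).
        used = {b["predecessor"] for b in branches}
        pool = [s for s in all_sensors if s != sensor and s not in used]
        candidates = random.sample(pool, min(n_candidates, len(pool)))

        new_branch = {"mature": False, "predecessor": None,
                      "candidates": candidates}
        branches.append(new_branch)
        return new_branch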

Feedback

Feedback is used to find correlations among temporally contiguous signals in a given sensory stream over more than one short time scale. However, feedback will only separate correlated signals that arrive in relatively close succession. There is a way to find correlations between temporally distant signals as I explain in the section on memory.

Neurobiological Prediction

Henry Markram

In 1997, Dr. Henry Markram and colleagues at the Weizmann Institute of Science in Israel published a watershed paper on cortical pyramidal neurons describing the dependence of synaptic efficacy on a precise delay between pre- and post-synaptic spikes. What Markram discovered is that the efficacy of a synapse increases if the pre-synaptic neuron fires a short time (about 10 to 20 ms) before the target neuron; at all other times, the synapse is weakened. This timing-dependent plasticity is equivalent to the behavior of predecessor synapses as explained above.
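For illustration, the asymmetric rule might be written down as follows. The window bounds follow the description above, while the update amounts are placeholders of mine, not values from the paper:

    # Timing-dependent update in the spirit of Markram's finding.
    # Potentiate only when the pre-synaptic spike leads the post-synaptic
    # spike by roughly 10 to 20 ms; weaken at all other timings.

    def update_efficacy(efficacy, dt_ms, window=(10.0, 20.0),
                        gain=1.0, decay=0.1):
        """dt_ms: post-synaptic spike time minus pre-synaptic spike time."""
        lo, hi = window
        if lo <= dt_ms <= hi:
            return efficacy + gain    # pre fired just before post
        return efficacy - decay       # all other timings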

Chicken and Egg

Markram's in vitro experiments revealed a heretofore unknown temporal aspect of neural processing. However, they do not explain how the target or post-synaptic neuron reaches action potential in vivo. Since the input synapses cannot gain strength unless the neuron fires a short time after the arrival of the pre-synaptic spike, and since they are initially too weak to induce a post-synaptic potential, the neuron can never fire. This chicken-and-egg problem is solved if at least one of the synapses serves as the actual trigger or successor input. The timing of the trigger signal must be such that the neuron is caused to fire right after the predecessor fires. Again, the arrival of the trigger signal alone is not enough to cause an action potential in the target neuron; it must be immediately preceded by the arrival of one or more predecessor signals. The trigger synapse is equivalent to the successor connection explained above. The signal separation theory, as described on this page, predicts the existence of both successor and predecessor synapses in cortical neurons. It further predicts that pyramidal neurons in the input layer of the sensory cortex are in fact signal separation neurons.

Signal Sorting Mechanism

It is known that about one million retinal fibers synapse with about four hundred million cells in the human visual cortex (fewer in other animals), a 1 to 400 ratio. This is consistent with the signal separation theory, which predicts that the afferent axon with the most synaptic contacts with target neurons in a given cortical column makes successor connections with those neurons. Each axonal branch is equivalent to a separated signal pathway. The branching parallel pathways and their target neurons form a highly efficient sorting or filtration mechanism based on the causal correlations between sensory signals. Of course, each cortical column also receives a huge number of predecessor connections from other afferent fibers of the same type (orientation, color, etc.), not to mention an equally huge number of feedback connections.

Coincidence Layer

Once sensory signals have been properly separated into parallel independent pathways, they can be tested for concurrency. This allows the system to find repeating concurrent regularities in the incoming sensory streams. This is done with the help of coincidence neurons.

Coincidence Neuron (CN)

A coincidence neuron has a single master input and multiple slave inputs. Every neuron in the separation layer makes one or more master input connections with a target CN. Here is how a CN works.

When a master signal arrives, the CN fires if enough slave signals arrive at the same time as the master. In Animal, I sum the strengths of all the slave synapses that fire concurrently with the master; I use the word 'agreement' to label a situation where this sum is over ninety percent of the total strength of all slave connections. The neuron fires when there is an agreement with the master signal. A new slave input connection is added when all existing slaves have reached a predetermined adult strength. Every time a slave fires, it is slightly weakened; when a master signal arrives, any slave connection that did not fire concurrently is strongly weakened.
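Here is a minimal Python sketch of that test. The ninety percent agreement threshold comes from the text; the weakening amounts and the data layout are placeholders of mine:

    # Sketch of a coincidence neuron (CN).

    class CoincidenceNeuron:
        AGREEMENT = 0.9

        def __init__(self, slave_strengths):
            self.slaves = dict(slave_strengths)   # slave id -> strength

        def on_master(self, concurrent):
            """concurrent: set of slave ids that fired with the master.
            Returns True if the CN fires (agreement reached)."""
            total = sum(self.slaves.values())
            active = sum(s for sid, s in self.slaves.items()
                         if sid in concurrent)
            for sid in self.slaves:
                if sid in concurrent:
                    self.slaves[sid] -= 1    # firing slaves: slightly weakened
                else:
                    self.slaves[sid] -= 5    # silent slaves: strongly weakened
            return total > 0 and active > self.AGREEMENT * total

    # Example: three slaves of strength 10; only two fire with the master.
    cn = CoincidenceNeuron({"a": 10, "b": 10, "c": 10})
    print(cn.on_master({"a", "b"}))   # -> False (20 is not > 90% of 30)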

Masters and Slaves

We know that signals can be either masters or slaves, but what determines whether a new connection should be treated as a master or a slave? The answer has to do with competition. All signals are born with an equal opportunity to become either masters or slaves. In Animal, I first create both a master and a slave connection in the coincidence detection layer for every path coming from the signal separation layer. As soon as either connection reaches a predetermined adult level, the other connection is severed. If a master connection loses its slaves, the process starts over.
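The competition itself reduces to a simple race, sketched here with a placeholder adult level:

    ADULT = 40  # placeholder adult strength

    def compete(master_strength, slave_strength):
        """Each path starts with both a master and a slave connection.
        Whichever reaches the adult level first survives; the other is
        severed. Returns the winner, or None while the race continues."""
        if master_strength >= ADULT:
            return "master"   # the slave connection is severed
        if slave_strength >= ADULT:
            return "slave"    # the master connection is severed
        return None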

Markram, H., Lübke, J., Frotscher, M., and Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213-215.

 

Next: Memory

 

© 2004-2006 Louis Savain

Copy and distribute freely