Adaptive Resonance Theory (ART)

Introduction

Grossberg’s Adaptive Resonance Theory, developed further by Grossberg and Carpenter, categorizes patterns using the competitive learning paradigm. It introduces a gain control and a reset to ensure that learned categories are retained even while new categories are learned, and thereby addresses the stability–plasticity dilemma.

Adaptive Resonance Theory makes heavy use of the competitive learning paradigm. A criterion is set up so that a winner-take-all outcome occurs: the single node with the largest value of that criterion is declared the winner within its layer, and it is said to classify a pattern class. If there is a tie for the winning neuron in a layer, an arbitrary rule, such as taking the first of them in serial order, can be used to pick the winner.
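As a small illustration of this winner-take-all rule (a stand-alone sketch; the function name pickWinner is ours and is not part of the program given later), the following routine returns the index of the neuron with the largest activation, breaking ties in favor of the earliest neuron in serial order:

// pickWinner: return the index of the neuron with the largest activation.
// The strict '>' comparison keeps the earliest index when there is a tie.
int pickWinner(double *activation, int n)
{
int i, winner = 0;

for (i = 1; i < n; ++i)
       if (activation[i] > activation[winner])
              winner = i;

return winner;
}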

The neural network developed for this theory is made up of two subsystems: an attentional subsystem, which contains the gain control unit, and an orienting subsystem, which contains the reset unit. During the operation of the network, patterns that emerge in the attentional subsystem are called traces of STM (short-term memory). Traces of LTM (long-term memory) reside in the connection weights between the input layer and the output layer.

The network processes with feedback between its two layers until resonance occurs. Resonance occurs when the output in the first layer, after feedback from the second layer, matches the original pattern used as input for the first layer in that processing cycle. A match of this type does not have to be perfect. What is required is that the degree of match, measured suitably, meets a predetermined level, termed the vigilance parameter. Just as a photograph matches the likeness of the subject to a greater degree when the granularity is higher, the pattern match gets finer when the vigilance parameter is closer to 1.
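For binary patterns, the degree of match used by the ART1 algorithm later in this chapter is the ratio of the number of 1s that survive the top-down comparison to the number of 1s in the original input. A minimal sketch of that test (the helper name resonates is ours):

// resonates: nonzero when the fraction of input 1s that survive the
// top-down comparison meets or exceeds the vigilance parameter rho.
int resonates(int *input, int *f1output, int n, float rho)
{
int i, si = 0, sx = 0;

for (i = 0; i < n; ++i) {
       si += input[i];        // 1s in the original input
       sx += f1output[i];     // 1s remaining after feedback from F2
       }

if (si == 0) return 1;        // an all-zero input matches trivially
return ((float)sx / si) >= rho;
}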

The Network for ART1

The neural network for the adaptive resonance theory or ART1 model consists of the following:

  A layer of neurons, called the F1 layer (input layer or comparison layer)

  A node for each layer as a gain control unit

  A layer of neurons, called the F2 layer (output layer or recognition layer)

  A node as a reset unit

  Bottom-up connections from F1 layer to F2 layer

  Top-down connections from F2 layer to F1 layer

  Inhibitory connection (negative weight) from F2 layer to gain control

  Excitatory connection (positive weight) from gain control to a layer

  Inhibitory connection from F1 layer to reset node

  Excitatory connection from reset node to F2 layer

A Simplified Diagram of Network Layout

Simplified diagram of the neural network for an ART1 model.

Processing in ART1

The ART1 paradigm, just like the Kohonen Self-Organizing Map introduced in a later chapter, performs data clustering on input data: like inputs are clustered together into a category. As an example, you can use a data clustering algorithm such as ART1 for Optical Character Recognition (OCR), where you try to match different samples of a letter to its ASCII equivalent. Particular attention is paid in the ART1 paradigm to ensuring that old information is not thrown away while new information is assimilated.

An input vector, when applied to an ART1 system, is first compared to existing patterns in the system. If there is a close enough match within a specified tolerance (as indicated by a vigilance parameter), then that stored pattern is made to resemble the input pattern further and the classification operation is complete. If the input pattern does not resemble any of the stored patterns in the system, then a new category is created with a new stored pattern that resembles the input pattern.
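The following is a deliberately simplified, self-contained sketch of this classify-or-create cycle for binary vectors (the names MAXCAT, NBITS, proto, and classify are ours; the winner is chosen by a simple set-intersection count rather than by the full F1/F2 dynamics given later in the chapter):

#define MAXCAT 10                  // maximum number of categories kept
#define NBITS   6                  // length of the binary input vectors

int   ncat = 0;                    // categories created so far
int   proto[MAXCAT][NBITS];        // stored binary prototype for each category
float rho  = 0.9;                  // vigilance parameter

// Classify one binary input vector; return the index of the category used.
int classify(int *input)
{
int i, j, si, best, bestscore, score;
int disabled[MAXCAT];

for (j = 0; j < MAXCAT; ++j) disabled[j] = 0;

si = 0;
for (i = 0; i < NBITS; ++i) si += input[i];    // number of 1s in the input

for (;;) {
       // competition: pick the stored prototype sharing the most 1s with the input
       best = -1; bestscore = -1;
       for (j = 0; j < ncat; ++j) {
              if (disabled[j]) continue;
              score = 0;
              for (i = 0; i < NBITS; ++i) score += proto[j][i] & input[i];
              if (score > bestscore) { bestscore = score; best = j; }
              }

       if (best < 0) {                          // no usable category: create one
              if (ncat == MAXCAT) return -1;
              for (i = 0; i < NBITS; ++i) proto[ncat][i] = input[i];
              return ncat++;
              }

       if (si == 0 || (float)bestscore / si >= rho) {
              for (i = 0; i < NBITS; ++i)       // make stored pattern resemble the input
                     proto[best][i] = proto[best][i] & input[i];
              return best;
              }

       disabled[best] = 1;                      // reset: exclude this winner, search again
       }
}

The point to notice is that an input never modifies an existing category unless the vigilance test passes; otherwise a fresh category is allocated, which is how old information is protected while new information is assimilated.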

Special Features of the ART1 Model

One special feature of an ART1 model is that a two-thirds rule is necessary to determine the activity of neurons in the F1 layer. There are three input sources to each neuron in layer F1: the external input, the output of gain control, and the outputs of F2 layer neurons. The F1 neurons will not fire unless at least two of the three inputs are active. The gain control unit and the two-thirds rule together ensure a proper response from the input layer neurons. A second feature is that a vigilance parameter is used to determine the activity of the reset unit, which is activated whenever there is no match found among existing patterns during classification.
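A minimal sketch of the two-thirds rule for a single F1 neuron (our own illustration, not part of the program listed later; the name f1Fires is hypothetical):

// Two-thirds rule for one F1 neuron: the external input component, the
// gain control signal, and the top-down signal from F2 each count as one
// active source; the neuron fires only when at least two are active.
int f1Fires(int externalInput, int gainControl, int topDownSignal)
{
int active = (externalInput != 0) + (gainControl != 0) + (topDownSignal != 0);
return active >= 2;
}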

Notation for ART1 Calculations

Let us list the various symbols we will use to describe the operation of a neural network for an ART1 model:

wij      Weight on the connection from the ith neuron in the F1 layer to the jth neuron in the F2 layer
vji      Weight on the connection from the jth neuron in the F2 layer to the ith neuron in the F1 layer
ai       Activation of the ith neuron in the F1 layer
bj       Activation of the jth neuron in the F2 layer
xi       Output of the ith neuron in the F1 layer
yj       Output of the jth neuron in the F2 layer
zi       Input to the ith neuron in the F1 layer from the F2 layer
ρ        Vigilance parameter, positive and no greater than 1 (0 < ρ ≤ 1)
m        Number of neurons in the F1 layer
n        Number of neurons in the F2 layer
I        Input vector
SI       Sum of the components of the input vector
Sx       Sum of the outputs of neurons in the F1 layer
A, C, D  Parameters with positive or zero values
L        Parameter with value greater than 1
B        Parameter with value less than D + 1 but at least as large as either D or 1
r        Index of the winner of the competition in the F2 layer

Algorithm for ART1 Calculations

The ART1 equations are not easy to follow; we follow the description of the algorithm given by James A. Freeman and David M. Skapura. The following equations, taken in the order given, describe the steps of the algorithm. Note that binary input patterns are used in ART1.

Initialization of Parameters

wij should be positive and less than L / ( m - 1 + L )

vji should be greater than ( B - 1 ) / D

ai = -B / ( 1 + C )
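As a quick numeric check of these initialization rules (a stand-alone sketch; the parameter values B = 2.5, C = 6, D = 0.85, L = 4, and m = 6 are the ones used in this chapter's main function):

#include <iostream.h>

int main()
{
float B = 2.5, C = 6.0, D = 0.85, L = 4.0;     // parameter values used in main() later
int   m = 6;                                   // number of F1 layer neurons

cout << "bottom-up weights start below " << L / (m - 1 + L) << "\n";   // 0.444444
cout << "top-down weights start above  " << (B - 1.0) / D   << "\n";   // 1.764706
cout << "initial F1 activations equal  " << -B / (1.0 + C)  << "\n";   // -0.357143
return 0;
}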

Equations for ART1 Computations

When you read below the equations for ART1 computations, keep in mind the following considerations. If a subscript i appears on the left-hand side of the equation, it means that there are m such equations, as the subscript i varies from 1 to m. Similarly, if instead a subscript j occurs, then there are n such equations as j ranges from 1 to n. The equations are used in the order they are given. They give a step-by-step description of the following algorithm. All the variables, you recall, are defined in the earlier section on notation. For example, I is the input vector.

F1 layer calculations:

       ai = Ii / ( 1 + A ( Ii + B ) + C )
       xi = 1 if ai > 0
          = 0 if ai ≤ 0

F2 layer calculations:

       bj = Σ wij xi, the summation being on i from 1 to m
       yj = 1 if jth neuron has the largest activation value in the F2
            layer
          = 0 if jth neuron is not the winner in F2 layer

Top-down inputs:

       zi = Σvjiyj, the summation being on j from 1 to n (You will
       notice that exactly one term is nonzero)

F1 layer calculations:

       ai = ( Ii + D zi - B ) / ( 1 + A ( Ii + D zi ) + C )
       xi = 1 if ai > 0
          = 0 if ai ≤ 0

Checking with vigilance parameter:

If ( Sx / SI ) < ρ, set yj = 0 for all j, including the winner r in the F2 layer, and consider the winner r inactive (this step is reset; skip the remaining steps).

If ( Sx / SI ) ≥ ρ, then continue.

Modifying top-down and bottom-up connection weights for winner r:

       vri  = 1 if xi = 1
            = 0 if xi = 0
       wir  = L / ( Sx + L - 1 ) if xi = 1
            = 0 if xi = 0

Having finished with the current input pattern, we repeat these steps with a new input pattern. We lose the index r given to one neuron as a winner and treat all neurons in the F2 layer with their original indices (subscripts).

This presentation of the algorithm is intended to make the steps as clear as possible; the process is rather involved. To recapitulate: first an input vector is presented to the F1 layer neurons, their activations are determined, and the threshold function is applied. The outputs of the F1 layer neurons constitute the inputs to the F2 layer neurons, from which a winner is designated on the basis of the largest activation. Only the winner is allowed to be active, meaning that the output is 1 for the winner and 0 for all the rest. The equations implicitly incorporate the two-thirds rule mentioned earlier, and they also incorporate the way the gain control is used. The gain control is designed to have the value 1 during the phase of determining the activations of the neurons in the F2 layer, and 0 if either there is no input vector or the output from the F2 layer is being propagated to the F1 layer.
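As a numeric check that ties these equations to the program output shown later in the chapter (a stand-alone sketch using the parameter values from the main function, A = 2, B = 2.5, C = 6, L = 4): an active input component with no top-down feedback gives ai = 1/(1 + 2(1 + 2.5) + 6) = 1/14 ≈ 0.071429, and after resonance on a pattern containing three 1s the bottom-up weights to the winner become 4/(3 + 4 - 1) ≈ 0.666667. Both values appear in the program output.

#include <iostream.h>

int main()
{
float A = 2.0, B = 2.5, C = 6.0, L = 4.0;   // parameter values from main() later in the chapter

float Ii = 1.0;                             // an active (1) input component
float ai = Ii / (1.0 + A * (Ii + B) + C);   // first F1 layer formula: 1/14

int   Sx = 3;                               // number of 1s in a resonant pattern
float wir = L / (Sx + L - 1.0);             // learned bottom-up weight: 4/6

cout << "F1 activation for an active input: " << ai  << "\n";   // 0.071429
cout << "learned bottom-up weight:          " << wir << "\n";   // 0.666667
return 0;
}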

Other Models

Extensions of the ART1 model, which is for binary patterns, are ART2 and ART3. Of these, the ART2 model categorizes and stores analog-valued patterns as well as binary patterns, while ART3 addresses computational problems of hierarchies.

C++ Implementation

Again, the algorithm for ART1 processing as given in Freeman and Skapura is followed for our C++ implementation. Our objective in programming ART1 is to provide a feel for the workings of this paradigm with a very simple program implementation. For more details on the inner workings of ART1, you are encouraged to consult Freeman and Skapura, or other references listed at the back of the book.

A Header File for the C++ Program for the ART1 Model Network

The header file for the C++ program for the ART1 model network is art1net.h. It contains the declarations for two classes: an artneuron class for neurons in the ART1 model, and a network class, which is declared as a friend class in the artneuron class. Functions declared in the network class include one to do the iterations for the network operation, one to find the winner in a given iteration, and one to inquire whether reset is needed.

//art1net.h   V. Rao,  H. Rao
//Header file for ART1 model network program
 
#include <iostream.h>
#define MXSIZ 10
 
class artneuron
{
 
protected:
       int nnbr;
       int inn,outn;
       int output;
       double activation;
       double outwt[MXSIZ];
       char *name;
       friend class network;
 
public:
       artneuron() { };
       void getnrn(int,int,int,char *);
};
 
class network
{
public:
       int  anmbr,bnmbr,flag,ninpt,sj,so,winr;
       float ai,be,ci,di,el,rho;
       artneuron anrn[MXSIZ],bnrn[MXSIZ];
       int outs1[MXSIZ],outs2[MXSIZ];
       int lrndptrn[MXSIZ][MXSIZ];
       double acts1[MXSIZ],acts2[MXSIZ];
       double mtrx1[MXSIZ][MXSIZ],mtrx2[MXSIZ][MXSIZ];
 
       network() { };
       void getnwk(int,int,float,float,float,float,float);
       void prwts1();
       void prwts2();
       int winner(int k,double *v,int);
       void practs1();
       void practs2();
       void prouts1();
       void prouts2();
       void iterate(int *,float,int);
       void asgninpt(int *);
       void comput1(int);
       void comput2(int *);
       void prlrndp();
       void inqreset(int);
       void adjwts1();
       void adjwts2();
};

A Source File for C++ Program for an ART1 Model Network

The implementations of the functions declared in the header file are contained in the source file for the C++ program for an ART1 model network. It also has the main function, which contains specifications of the number of neurons in the two layers of the network, the values of the vigilance and other parameters, and the input vectors. Note that if there are n neurons in a layer, they are numbered serially from 0 to n–1, and not from 1 to n in the C++ program. The source file is called art1net.cpp. It is set up with six neurons in the F1 layer and seven neurons in the F2 layer. The main function also contains the parameters needed in the algorithm.

To initialize the bottom-up weights, we set each weight to be –0.1 + L/(m – 1 + L) so that it is greater than 0 and less than L/(m – 1 + L), as suggested before. Similarly, the top-down weights are initialized by setting each of them to 0.2 + (B – 1)/D so it would be greater than (B – 1)/D. Initial activations of the F1 layer neurons are each set to –B/(1 + C), as suggested earlier.
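With the parameter values used in the main function (L = 4, m = 6, B = 2.5, C = 6, D = 0.85), these choices work out to initial top-down weights of 0.2 + 1.5/0.85 ≈ 1.964706, initial bottom-up weights of –0.1 + 4/9 ≈ 0.344444, and initial F1 activations of –2.5/7 ≈ –0.357143, which are exactly the values printed at the beginning of the program output.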

A restrmax function is defined to compute the maximum in an array when one of the array elements is not desired to be a candidate for the maximum. This facilitates the removal of the current winner from competition when reset is needed. Reset is needed when the degree of match is of a smaller magnitude than the vigilance parameter.

The function iterate is a member function of the network class and does the processing for the network. The inqreset function of the network class compares the vigilance parameter with the degree of match.

//art1net.cpp  V. Rao, H. Rao
//Source file for ART1 network program
 
#include "art1net.h"
 
//restrmax finds the index of the largest element of b[0..j-1],
//excluding index k (the neuron disqualified after a reset)
int restrmax(int j,double *b,int k)
       {
       int i,tmp;

       //start with the first index that is not the excluded index k
       for(i=0;i<j;i++){
              if(i !=k)
              {tmp = i;
              i = j;}
              }

       //scan all candidates except k and keep the index of the largest value
       for(i=0;i<j;i++){
              if( (i != tmp)&&(i != k))
                {if(b[i]>b[tmp]) tmp = i;}
              }

       return tmp;
       }
 
void artneuron::getnrn(int m1,int m2,int m3, char *y)
{
int i;
name = y;
nnbr = m1;
outn = m2;
inn  = m3;
 
for(i=0;i<outn;++i){
 
       outwt[i] = 0 ;
       }
 
output = 0;
activation = 0.0;
}
 
void network::getnwk(int k,int l,float aa,float bb,float cc,float dd,float ll)
{
anmbr = k;
bnmbr = l;
ninpt = 0;
ai = aa;
be = bb;
ci = cc;
di = dd;
el = ll;
int i,j;
flag = 0;
 
char *y1="ANEURON", *y2="BNEURON" ;
 
for(i=0;i<anmbr;++i){
 
       anrn[i].artneuron::getnrn(i,bnmbr,0,y1);}
 
for(i=0;i<bnmbr;++i){
 
       bnrn[i].artneuron::getnrn(i,0,anmbr,y2);}
 
float tmp1,tmp2,tmp3;
tmp1 = 0.2 +(be - 1.0)/di;
tmp2 = -0.1 + el/(anmbr - 1.0 +el);
tmp3 = - be/(1.0 + ci);
 
for(i=0;i<anmbr;++i){
 
       anrn[i].activation = tmp3;
       acts1[i] = tmp3;
 
       for(j=0;j<bnmbr;++j){
 
              mtrx1[i][j]  = tmp1;
              mtrx2[j][i] = tmp2;
              anrn[i].outwt[j] = mtrx1[i][j];
              bnrn[j].outwt[i] = mtrx2[j][i];
              }
       }
 
prwts1();
prwts2();
practs1();
cout<<"\n";
}
 
int network::winner(int k,double *v,int kk){
int t1;
 
t1 = restrmax(k,v,kk);
return t1;
}
 
void network::prwts1()
{
int i3,i4;
cout<<"\nweights for F1 layer neurons: \n";
 
for(i3=0;i3<anmbr;++i3){
 
       for(i4=0;i4<bnmbr;++i4){
 
              cout<<anrn[i3].outwt[i4]<<"  ";}
 
       cout<<"\n"; }
 
cout<<"\n";
}
 
void network::prwts2()
{
int i3,i4;
cout<<"\nweights for F2 layer neurons: \n";
 
for(i3=0;i3<bnmbr;++i3){
 
       for(i4=0;i4<anmbr;++i4){
 
              cout<<bnrn[i3].outwt[i4]<<"  ";};
 
       cout<<"\n";  }
 
cout<<"\n";
}
 
void network::practs1()
{
int j;
cout<<"\nactivations of F1 layer neurons: \n";
 
for(j=0;j<anmbr;++j){
 
       cout<<acts1[j]<<"   ";}
 
cout<<"\n";
}
 
void network::practs2()
{
int j;
cout<<"\nactivations of F2 layer neurons: \n";
 
for(j=0;j<bnmbr;++j){
 
       cout<<acts2[j]<<"   ";}
 
cout<<"\n";
}
 
void network::prouts1()
{
int j;
cout<<"\noutputs of F1 layer neurons: \n";
 
for(j=0;j<anmbr;++j){
 
       cout<<outs1[j]<<"   ";}
 
cout<<"\n";
}
 
void network::prouts2()
{
int j;
cout<<"\noutputs of F2 layer neurons: \n";
 
for(j=0;j<bnmbr;++j){
 
       cout<<outs2[j]<<"   ";}
 
cout<<"\n";
}
 
void network::asgninpt(int *b)
{
int j;
sj = so = 0;
cout<<"\nInput vector is:\n" ;
 
for(j=0;j<anmbr;++j){
 
       cout<<b[j]<<" ";}
 
cout<<"\n";
 
for(j=0;j<anmbr;++j){
 
       sj += b[j];
       anrn[j].activation = b[j]/(1.0 +ci +ai*(b[j]+be));
       acts1[j] = anrn[j].activation;
 
       if(anrn[j].activation > 0) anrn[j].output = 1;
 
       else
              anrn[j].output = 0;
 
       outs1[j] = anrn[j].output;
       so += anrn[j].output;
       }
 
practs1();
prouts1();
}
 
void network::inqreset(int t1)
{
float jj;
flag = 0;
//degree of match is the ratio of 1s in the F1 layer output to 1s in the input
jj = so/(float)sj;
cout<<"\ndegree of match: "<<jj<<" vigilance:  "<<rho<<"\n";

if( jj >= rho ) flag = 1;

       else
       {cout<<"winner is "<<t1;
       cout<<" reset required \n";}

}
 
void network::comput1(int k)
{
int j;
 
for(j=0;j<bnmbr;++j){
 
       int ii1;
       double c1 = 0.0;
       cout<<"\n";
 
       for(ii1=0;ii1<anmbr;++ii1){
 
              c1 += outs1[ii1] * mtrx2[j][ii1];
              }
 
       bnrn[j].activation = c1;
       acts2[j] = c1;};
 
winr = winner(bnmbr,acts2,k);
cout<<"winner is "<<winr;
for(j=0;j<bnmbr;++j){
 
       if(j == winr) bnrn[j].output = 1;
 
       else bnrn[j].output =  0;
       outs2[j] = bnrn[j].output;
       }
 
practs2();
prouts2();
}
 
void network::comput2(int *b)
{
double db[MXSIZ];
double tmp;
so = 0;
int i,j;
 
for(j=0;j<anmbr;++j){
 
       db[j] =0.0;
 
       for(i=0;i<bnmbr;++i){
 
              db[j] += mtrx1[j][i]*outs2[i];};
 
       tmp = b[j] + di*db[j];
       acts1[j] = (tmp - be)/(ci +1.0 +ai*tmp);
       anrn[j].activation = acts1[j];
 
       if(anrn[j].activation > 0) anrn[j].output = 1;
 
       else anrn[j].output = 0;
 
       outs1[j] = anrn[j].output;
       so += anrn[j].output;
       }
 
cout<<"\n";
practs1();
prouts1();
}
 
void network::adjwts1()
{
int i;
 
for(i=0;i<anmbr;++i){
 
       if(outs1[i] >0) {mtrx1[i][winr]  = 1.0;}
 
       else
 
              {mtrx1[i][winr] = 0.0;}
 
       anrn[i].outwt[winr] = mtrx1[i][winr];}
 
prwts1();
}
 
void network::adjwts2()
{
int i;
cout<<"\nwinner is "<<winr<<"\n";
 
for(i=0;i<anmbr;++i){
 
       if(outs1[i] > 0) {mtrx2[winr][i] = el/(so + el -1);}
 
       else
 
              {mtrx2[winr][i] = 0.0;}
 
       bnrn[winr].outwt[i]  = mtrx2[winr][i];}
 
prwts2();
}
 
void network::iterate(int *b,float rr,int kk)
{
int j;
rho = rr;
flag = 0;
 
asgninpt(b);
comput1(kk);
comput2(b);
inqreset(winr);
 
if(flag == 1){
 
       ninpt ++;
       adjwts1();
       adjwts2();
       int j3;
 
       for(j3=0;j3<anmbr;++j3){
 
              lrndptrn[ninpt][j3] = b[j3];}
 
       prlrndp();
       }
 
else
 
       {
 
       for(j=0;j<bnmbr;++j){
 
              outs2[j] = 0;
              bnrn[j].output = 0;}
 
       iterate(b,rr,winr);
       }
}
 
void network::prlrndp()
{
int j;
cout<<"\nlearned vector # "<<ninpt<<"  :\n";
 
for(j=0;j<anmbr;++j){
 
       cout<<lrndptrn[ninpt][j]<<"  ";}
 
cout<<"\n";
}
 
void main()
{
int ar = 6, br = 7, rs = 8;
float aa = 2.0,bb = 2.5,cc = 6.0,dd = 0.85,ll = 4.0,rr =
       0.95;
int inptv[][6]={0,1,0,0,0,0,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,0,\
       1,0};
 
cout<<"\n\nTHIS PROGRAM IS FOR AN -ADAPTIVE RESONANCE THEORY\
       1 - NETWORK.\n";
cout<<"THE NETWORK IS SET UP FOR ILLUSTRATION WITH "<<ar<<" \
       INPUT NEURONS,\n";
cout<<" AND "<<br<<" OUTPUT NEURONS.\n";
 
static network bpn;
bpn.getnwk(ar,br,aa,bb,cc,dd,ll) ;
bpn.iterate(inptv[0],rr,rs);
bpn.iterate(inptv[1],rr,rs);
bpn.iterate(inptv[2],rr,rs);
bpn.iterate(inptv[3],rr,rs);
}

Program Output

Four input vectors are used in the trial run of the program; these are specified in the main function. The output is largely self-explanatory, but we have added some comments regarding it in this text. These comments are enclosed within strings of asterisks; they are not actually part of the program output. The table below summarizes the categorization of the inputs done by the network. Keep in mind that the neurons in a layer of n neurons are numbered from 0 to n – 1, not from 1 to n.

Categorization of Inputs

input            winner in F2 layer

0 1 0 0 0 0      0 (no reset)
1 0 1 0 1 0      1 (no reset)
0 0 0 0 1 0      1 at first; 2 after reset
1 0 1 0 1 0      1 at first; 3 after reset

The input pattern 0 0 0 0 1 0 is considered a subset of the pattern 1 0 1 0 1 0 in the sense that wherever the first pattern has a 1, the second pattern also has a 1. Of course, the second pattern has 1s in other positions as well. At the same time, the pattern 1 0 1 0 1 0 is considered a superset of the pattern 0 0 0 0 1 0. The pattern 1 0 1 0 1 0 is repeated as input after the pattern 0 0 0 0 1 0 is processed in order to see what happens with this superset. In both cases, the degree of match falls short of the vigilance parameter, and a reset is needed.

Here’s the output of the program:

THIS PROGRAM IS FOR AN ADAPTIVE RESONANCE THEORY
1-NETWORK. THE NETWORK IS SET UP FOR ILLUSTRATION WITH SIX INPUT NEURONS
AND SEVEN OUTPUT NEURONS.
*************************************************************
Initialization of connection weights and F1 layer activations. F1 layer
connection weights are all chosen to be equal to a random value subject
to the conditions given in the algorithm. Similarly, F2 layer connection
weights are all chosen to be equal to a random value subject to the
conditions given in the algorithm.
*************************************************************
weights for F1 layer neurons:
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
 
weights for F2 layer neurons:
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
 
activations of F1 layer neurons:
-0.357143 -0.357143 -0.357143 -0.357143 -0.357143 -0.357143
*************************************************************
A new input vector and a new iteration
*************************************************************
Input vector is:
0 1 0 0 0 0
 
activations of F1 layer neurons:
0   0.071429   0   0   0   0
 
outputs of F1 layer neurons:
0   1   0   0   0   0
 
winner is 0
activations of F2 layer neurons:
0.344444   0.344444   0.344444   0.344444   0.344444   0.344444   0.344444
 
outputs of F2 layer neurons:
1   0   0   0   0   0   0
 
activations of F1 layer neurons:
-0.080271   0.013776   -0.080271   -0.080271   -0.080271   -0.080271
 
outputs of F1 layer neurons:
0   1   0   0   0   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match, showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95
 
weights for F1 layer neurons:
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
 
winner is 0
 
weights for F2 layer neurons:
0  1  0  0  0  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
 
learned vector # 1  :
0  1  0  0  0  0
*************************************************************
A new input vector and a new iteration
*************************************************************
Input vector is:
1 0 1 0 1 0
 
activations of F1 layer neurons:
0.071429   0   0.071429   0   0.071429   0
 
outputs of F1 layer neurons:
1   0   1   0   1   0
 
winner is 1
activations of F2 layer neurons:
0   1.033333   1.033333   1.033333   1.033333   1.033333   1.033333
 
outputs of F2 layer neurons:
0   1   0   0   0   0   0
 
activations of F1 layer neurons:
0.013776   -0.080271   0.013776   -0.080271   0.013776   -0.080271
 
outputs of F1 layer neurons:
1   0   1   0   1   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match,
showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95
 
weights for F1 layer neurons:
0  1  1.964706  1.964706  1.964706  1.964706  1.964706
1  0  1.964706  1.964706  1.964706  1.964706  1.964706
0  1  1.964706  1.964706  1.964706  1.964706  1.964706
0  0  1.964706  1.964706  1.964706  1.964706  1.964706
0  1  1.964706  1.964706  1.964706  1.964706  1.964706
0  0  1.964706  1.964706  1.964706  1.964706  1.964706
 
winner is 1
 
weights for F2 layer neurons:
0  1  0  0  0  0
0.666667  0  0.666667  0  0.666667  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
 
learned vector # 2  :
1  0  1  0  1  0
*************************************************************
A new input vector and a new iteration
*************************************************************
Input vector is:
0 0 0 0 1 0
 
activations of F1 layer neurons:
0   0   0   0   0.071429   0
 
outputs of F1 layer neurons:
0   0   0   0   1   0
 
winner is 1
activations of F2 layer neurons:
0   0.666667   0.344444   0.344444   0.344444   0.344444   0.344444
 
outputs of F2 layer neurons:
0   1   0   0   0   0   0
 
activations of F1 layer neurons:
-0.189655   -0.357143   -0.189655   -0.357143   -0.060748   -0.357143
 
outputs of F1 layer neurons:
0   0   0   0   0   0
 
degree of match: 0 vigilance:  0.95
winner is 1 reset required
*************************************************************
Input vector repeated after reset, and a new iteration
*************************************************************
Input vector is:
0 0 0 0 1 0
 
activations of F1 layer neurons:
0   0   0   0   0.071429   0
 
outputs of F1 layer neurons:
0   0   0   0   1   0
 
winner is 2
activations of F2 layer neurons:
0   0.666667   0.344444   0.344444   0.344444   0.344444   0.344444
outputs of F2 layer neurons:
0   0   1   0   0   0   0
 
activations of F1 layer neurons:
-0.080271   -0.080271   -0.080271   -0.080271   0.013776   -0.080271
 
outputs of F1 layer neurons:
0   0   0   0   1   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match, showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95
 
weights for F1 layer neurons:
0  1  0  1.964706  1.964706  1.964706  1.964706
1  0  0  1.964706  1.964706  1.964706  1.964706
0  1  0  1.964706  1.964706  1.964706  1.964706
0  0  0  1.964706  1.964706  1.964706  1.964706
0  1  1  1.964706  1.964706  1.964706  1.964706
0  0  0  1.964706  1.964706  1.964706  1.964706
 
winner is 2
 
weights for F2 layer neurons:
0  1  0  0  0  0
0.666667  0  0.666667  0  0.666667  0
0  0  0  0  1  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
 
learned vector # 3  :
0  0  0  0  1  0
*************************************************************
An old (actually the second above) input vector is retried after trying a
subset vector, and a new iteration
*************************************************************
Input vector is:
1 0 1 0 1 0
 
activations of F1 layer neurons:
0.071429   0   0.071429   0   0.071429   0
 
outputs of F1 layer neurons:
1   0   1   0   1   0
 
winner is 1
activations of F2 layer neurons:
0   2   1   1.033333   1.033333   1.033333   1.033333
 
outputs of F2 layer neurons:
0   1   0   0   0   0   0
 
activations of F1 layer neurons:
-0.060748   -0.357143   -0.060748   -0.357143   -0.060748   -0.357143
 
outputs of F1 layer neurons:
0   0   0   0   0   0
 
degree of match: 0 vigilance:  0.95
winner is 1 reset required
*************************************************************
Input vector repeated after reset, and a new iteration
*************************************************************
Input vector is:
1 0 1 0 1 0
 
activations of F1 layer neurons:
0.071429   0   0.071429   0   0.071429   0
 
outputs of F1 layer neurons:
1   0   1   0   1   0
 
winner is 3
activations of F2 layer neurons:
0   2   1   1.033333   1.033333   1.033333   1.033333
 
outputs of F2 layer neurons:
0   0   0   1   0   0   0
 
activations of F1 layer neurons:
0.013776   -0.080271   0.013776   -0.080271   0.013776   -0.080271
 
outputs of F1 layer neurons:
1   0   1   0   1   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match, showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95
 
weights for F1 layer neurons:
0  1  0  1  1.964706  1.964706  1.964706
1  0  0  0  1.964706  1.964706  1.964706
0  1  0  1  1.964706  1.964706  1.964706
0  0  0  0  1.964706  1.964706  1.964706
0  1  1  1  1.964706  1.964706  1.964706
0  0  0  0  1.964706  1.964706  1.964706
 
winner is 3
 
weights for F2 layer neurons:
0  1  0  0  0  0
0.666667  0  0.666667  0  0.666667  0
0  0  0  0  1  0
0.666667  0  0.666667  0  0.666667  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
 
learned vector # 4  :
1  0  1  0  1  0

Summary

This chapter presented the basics of the Adaptive Resonance Theory of Grossberg and Carpenter and a C++ implementation of a neural network modeled on this theory. It is an elegant theory that addresses the stability–plasticity dilemma; the network relies on resonance. It is a self-organizing network that does categorization by associating individual neurons of the F2 layer with individual patterns. By employing the so-called two-thirds rule, it ensures stability in learning patterns.