Further Applications


In this chapter, we present the outlines of some applications of neural networks and fuzzy logic. Most of the applications fall into a few main categories according to the paradigms they are based on. We offer a sampling of topics of research as found in the current literature, but there are literally thousands of applications of neural networks and fuzzy logic in science, technology, and business, with more being added as time goes on.

Some applications of neural networks are for adaptive control. Many such applications also benefit from the addition of fuzziness; steering a car or backing up a truck with a fuzzy controller is an example. A large number of applications are based on the backpropagation training model. Another category of applications deals with classification. Some applications based on expert systems are augmented with a neural network approach; decision support systems are sometimes designed this way. Another category is made up of optimizers, whose purpose is to find the maximum or the minimum of a function.

NOTE:  You will find other neural network applications related to finance in the chapter Application to Financial Forecasting.

Computer Virus Detector

IBM Corporation has applied neural networks to the problem of detecting and correcting computer viruses. IBM’s AntiVirus program detects and eradicates new viruses automatically. It works on boot-sector types of viruses and keys off of the stereotypical behaviors that viruses usually exhibit. The feedforward backpropagation neural network was used in this application. New viruses discovered are used in the training set for later versions of the program to make them “smarter.” The system was modeled after knowledge about the human immune system: IBM uses a decoy program to “attract” a potential virus, rather than have the virus attack the user’s files. These decoy programs are then immediately tested for infection. If a decoy program appears to have been infected, the virus is identified and removed wherever it is found.

Mobile Robot Navigation

C. Lin and C. Lee apply a multivalued Boltzmann machine, which they model using an artificial magnetic field approach. They define attractive and repulsive magnetic fields, corresponding to the goal position and to obstacles, respectively. The weights on the connections in the Boltzmann machine are precisely these magnetic fields.

They divide a two-dimensional traverse map into small grid cells. Given the goal cell and obstacle cells, the problem is to navigate the mobile robot from an unobstructed cell to the goal quickly, without colliding with any obstacle. An attractive artificial magnetic field is built for the goal location. They also build a repulsive artificial magnetic field around the boundary of each obstacle. Each neuron, a grid cell, will point to one of its eight neighbors, showing the direction for the movement of the robot. In other words, the Boltzmann machine is adapted to become a compass for the mobile robot.
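The attraction/repulsion idea can be sketched without the Boltzmann machine itself. In the toy sketch below (the grid size, field strengths, and Manhattan-distance potentials are illustrative assumptions, not the authors' formulation), each free cell points to whichever of its eight neighbors has the lowest combined potential, with the goal attracting and obstacles repelling:

```python
# Simplified sketch of the artificial-magnetic-field idea: goal attracts,
# obstacles repel, and each cell's "compass" points to the neighbor with
# the lowest combined potential.
GOAL = (4, 4)
OBSTACLES = {(2, 2), (2, 3)}
SIZE = 5

def potential(cell):
    """Attractive term pulls toward the goal; repulsive term pushes away
    from obstacles, fading with distance."""
    y, x = cell
    gy, gx = GOAL
    attract = abs(gy - y) + abs(gx - x)          # Manhattan distance to goal
    repel = sum(2.0 / (1 + abs(oy - y) + abs(ox - x)) for oy, ox in OBSTACLES)
    return attract + repel

def compass(cell):
    """Return the neighbor this cell's compass points to."""
    y, x = cell
    neighbors = [(y + dy, x + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)
                 and 0 <= y + dy < SIZE and 0 <= x + dx < SIZE
                 and (y + dy, x + dx) not in OBSTACLES]
    return min(neighbors, key=potential)

# Follow the compass from the start cell to the goal.
cell, path = (0, 0), [(0, 0)]
while cell != GOAL:
    cell = compass(cell)
    path.append(cell)
```

Greedy descent on such a field can stall in local minima on harder maps; the Boltzmann machine's stochastic updates are one way around that.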

A Classifier

James Ressler and Marijke Augusteijn study the application of neural networks to the problem of weapon-to-target assignment. The neural network is used as a filter to remove infeasible assignments, where feasibility is determined by the weapon’s ability to hit a given target if fired at a specific instant. The large number of weapons and threats, along with the limited time available, makes it important to reduce the number of assignments to consider.

The network’s role here is that of a classifier, since it needs to separate the infeasible assignments from the feasible ones. Learning has to be quick, so Ressler and Augusteijn prefer the cascade-correlation learning architecture over backpropagation learning. Their network is dynamic in that the number of hidden-layer neurons is determined during the training phase; it belongs to a class of algorithms that change the architecture of the network during training.

A Two-Stage Network for Radar Pattern Classification

Mohammad Ahmadian and Russell Pimmel find it convenient to use a multistage neural network configuration, a two-stage network in particular, for classifying patterns. The patterns they study are geometrical features of simulated radar targets.

Feature extraction is done in the first stage, while classification is done in the second. Moreover, the first stage is made up of several networks, each for extracting a different estimable feature.

Crisp and Fuzzy Neural Networks for Handwritten Character Recognition

Paul Gader, Magdi Mohamed, and Jung-Hsien Chiang combine a fuzzy neural network and a crisp neural network for the recognition of handwritten alphabetic characters. They use backpropagation for the crisp neural network and the K-nearest neighbor algorithm for the fuzzy network. Their consideration of a fuzzy network in this study is prompted by their belief that if some ambiguity is possible in deciphering a character, such ambiguity should be accurately represented in the output. For example, a handwritten “u” could look like either a “u” or a “v.” If present, the authors feel that this ambiguity should be translated to the classifier output.

Feature extraction was accomplished as follows: character images of size 24x16 pixels were used. The first stage of processing extracted eight feature images from the input image, two for each direction (north, northeast, northwest, and east). Each feature image holds an integer at each location, representing the length of the longest bar that fits at that point in that direction. These are referred to as bar features. Next, 8x8 overlapping zones are applied to the feature images to derive feature vectors, formed by summing the values in a zone and dividing by the maximum possible value for the zone. Each feature image results in a 15-element feature vector, so the eight feature images together yield a 120-element feature vector.
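The bar-feature idea can be illustrated with a short sketch. The code below is a simplified illustration (the zone step size and the normalization constant are assumptions, not the authors' exact procedure), computing the east-direction feature and the normalized zone sums for a binary character image:

```python
# Simplified bar-feature sketch: for the east direction, each pixel of a
# horizontal run of foreground pixels is labeled with the run's length.
def east_bar_feature(image):
    rows, cols = len(image), len(image[0])
    feat = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        c = 0
        while c < cols:
            if image[r][c] == 1:
                run_start = c
                while c < cols and image[r][c] == 1:
                    c += 1
                run_len = c - run_start          # length of this bar
                for k in range(run_start, c):
                    feat[r][k] = run_len         # every pixel in the run gets it
            else:
                c += 1
    return feat

def zone_features(feat, zone=8, step=4):
    """Overlapping zone sums, normalized by a crude upper bound on the
    maximum possible zone value."""
    rows, cols = len(feat), len(feat[0])
    max_val = zone * zone * max(rows, cols)
    vec = []
    for r in range(0, rows - zone + 1, step):
        for c in range(0, cols - zone + 1, step):
            s = sum(feat[r + i][c + j] for i in range(zone) for j in range(zone))
            vec.append(s / max_val)
    return vec
```

With a 24x16 image, 8x8 zones, and a step of 4 pixels, there are 5x3 = 15 zones per feature image, so eight feature images give 120 values in all.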

Data was obtained from the U.S. Postal Service, consisting of 250 characters. Results showed 97.5% and 95.6% classification rates on training and test sets, respectively, for the crisp neural network. The fuzzy network resulted in 94.7% and 93.8% classification rates, where the desired output for many characters was set to ambiguous.

Noise Removal with a Discrete Hopfield Network

Arun Jagota applies what is called an HcN, a special case of a discrete Hopfield network, to the problem of recognizing a degraded printed word. The HcN is used to process the output of an Optical Character Recognizer by attempting to remove noise. A dictionary of words is stored in the HcN and searched.
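A minimal discrete Hopfield network (not Jagota's HcN itself, whose structure differs) illustrates the noise-removal idea: store a few "dictionary" patterns as ±1 vectors with Hebbian weights, then let a degraded input settle to the nearest stored pattern. The patterns below are illustrative assumptions:

```python
# Minimal discrete Hopfield sketch: Hebbian storage plus asynchronous
# thresholded updates that clean up a corrupted input pattern.
def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n   # Hebbian outer-product rule
    return w

def recall(w, state, steps=10):
    n = len(state)
    state = list(state)
    for _ in range(steps):
        changed = False
        for i in range(n):                        # asynchronous updates
            s = sum(w[i][j] * state[j] for j in range(n))
            new = 1 if s >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            break                                 # network has settled
    return state

# Two stored "dictionary" patterns; present one with a corrupted position.
p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([p1, p2])
noisy = [-1] + p1[1:]                             # one flipped element
cleaned = recall(w, noisy)
```

In the word-recognition setting, each stored pattern would encode a dictionary word, and the noisy input would be the recognizer's raw output.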

Object Identification by Shape

C. Ganesh, D. Morse, E. Wetherell, and J. Steele use a neural network approach in an object identification system, based on the shape of an object and independent of its size. A two-dimensional grid of ultrasonic data represents the height profile of an object. The data grid is compressed into a smaller set that retains the essential features. Backpropagation is used. Recognition accuracy of approximately 70% is achieved.

Detecting Skin Cancer

F. Ercal, A. Chawla, W. Stoecker, and R. Moss study a neural network approach to the diagnosis of malignant melanoma. They strive to discriminate tumor images as malignant or benign. There are as many as three categories of benign tumors to be distinguished from malignant melanoma. Digitized color images of skin tumors are classified. Backpropagation is used. Two approaches are taken to reduce training time: the first involves using fewer hidden layers, and the second involves randomizing the order of presentation of the training set.

EEG Diagnosis

Fred Wu, Jeremy Slater, R. Eugene Ramsay, and Lawrence Honig use a feedforward backpropagation neural network as a classifier in EEG diagnosis. They compare the performance of the neural network classifier to that of a nearest neighbor classifier. The neural network classifier shows a classification accuracy of 75% for Multiple Sclerosis patients versus 65% for the nearest neighbor algorithm.

Time Series Prediction with Recurrent and Nonrecurrent Networks

Sathyanarayan Rao and Sriram Sethuraman take a recurrent neural network and a feedforward network and train them in parallel. A recurrent neural network has feedback connections from the output neurons back to the input neurons to model the storage of temporal information. A modified backpropagation algorithm, called the real-time recurrent learning algorithm, is used for training the recurrent network. They have the recurrent neural network store past information and the feedforward network learn the nonlinear dependencies on the current samples. They use this scheme because the recurrent network takes more than one time period to evaluate its output, whereas the feedforward network does not. This hybrid scheme overcomes the latency problem of the recurrent network, providing immediate nonlinear evaluation from input to output.
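The latency-avoidance idea can be shown structurally. The sketch below uses fixed, illustrative weights (not the authors' RTRL-trained network): a recurrent state summarizes past samples, while a feedforward map produces an immediate output from the current sample plus the state that is already available from the previous step:

```python
import math

# Structural sketch of the hybrid scheme: the feedforward part gives an
# immediate output; the recurrent part updates stored temporal information
# for use at the next time step.
def step(state, x, wr=0.8, wx=0.5, wf=(1.2, 0.7)):
    # Feedforward: immediate nonlinear evaluation from the current input
    # and the previous (already computed) recurrent state.
    y = math.tanh(wf[0] * x + wf[1] * state)
    # Recurrent: fold the current sample into the stored past information.
    new_state = math.tanh(wr * state + wx * x)
    return y, new_state

state = 0.0
outputs = []
for x in [0.1, 0.4, -0.2, 0.3]:
    y, state = step(state, x)
    outputs.append(y)
```

Because `y` depends only on quantities available at the start of the step, no extra time period is needed before the prediction can be read out.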

Security Alarms

Deborah Frank and J. Bryan Pletta study the application of neural networks to alarm classification under varying weather conditions. Performance degradation of a security system when the environment changes causes a loss of confidence in the system itself. This problem is more acute with portable security systems.

They investigated the problem using several networks, ranging from backpropagation to learning vector quantization. Data was collected under many scenarios, with and without an intruder, which could be a vehicle or a human.

They found a 98% probability of detection and a 9% nuisance alarm rate over all weather conditions.

Circuit Board Faults

Anthony Mason, Nicholas Sweeney, and Ronald Baer studied a neural network approach to diagnosing faults in circuit boards, in two laboratory experiments and one field experiment.

Test point readings were expressed as one vector. A fault vector was also defined with elements representing possible faults. The two vectors became a training pair. Backpropagation was used.

Warranty Claims

Gary Wasserman and Agus Sudjianto model the prediction of warranty claims with neural networks. The nonlinearity in the data prompted this approach.

The motivation for the study comes from the need to assess warranty costs for a company that offers extended warranties on its products. This is another application that uses backpropagation. The architecture used was 2-10-1: two input neurons, ten hidden neurons, and one output neuron.
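A 2-10-1 backpropagation network of this kind can be sketched in a few lines. The data below is a toy stand-in (Wasserman and Sudjianto's actual inputs, targets, and training details are not reproduced here): two inputs, ten sigmoid hidden neurons, and one linear output, trained by gradient descent:

```python
import math
import random

random.seed(0)
H = 10                                            # ten hidden neurons
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    h = [sigmoid(w1[i][0] * x[0] + w1[i][1] * x[1] + b1[i]) for i in range(H)]
    y = sum(w2[i] * h[i] for i in range(H)) + b2   # linear output neuron
    return h, y

def train_step(x, target, lr=0.1):
    global b2
    h, y = forward(x)
    err = y - target                               # output error
    for i in range(H):
        # backpropagate through the sigmoid: derivative is h * (1 - h)
        delta_h = err * w2[i] * h[i] * (1.0 - h[i])
        w2[i] -= lr * err * h[i]
        w1[i][0] -= lr * delta_h * x[0]
        w1[i][1] -= lr * delta_h * x[1]
        b1[i] -= lr * delta_h
    b2 -= lr * err

# Toy target: "claims" proportional to the product of the two inputs.
data = [((a / 4.0, b / 4.0), (a / 4.0) * (b / 4.0))
        for a in range(5) for b in range(5)]
for epoch in range(2000):
    for x, t in data:
        train_step(x, t)
```

After training, `forward` gives the network's claim estimate for a pair of inputs; the nonlinear hidden layer is what lets it capture the kind of nonlinearity the authors observed in their data.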

Writing Style Recognition

J. Nellis and T. Stonham developed a neural network character recognition system that adapts dynamically to a writing style.

They use a hybrid neural network for hand-printed character recognition that integrates image processing and neural network architectures. The neural network uses random access memory (RAM) to model the functionality of an individual neuron. The authors apply a transform called the five-way image processing transform to the input image, which is of size 32x32 pixels. The transform converts the high spatial frequency data in a character into four low-frequency representations. This achieves position invariance, rotation invariance, a ratio of black to white pixels approaching 1, and the capability to detect and correct breaks within characters. The transformed data are input to the neural network, which is used as a classifier and is called a discriminator.

The less variability a particular writing style has, the fewer subclasses are needed to classify it; network size is also reduced, and confusion and conflicts lessen.

Commercial Optical Character Recognition

Optical character recognition (OCR) is one of the most successful commercial applications of neural networks. Caere Corporation brought out its neural network product in 1992, after studying more than 100,000 examples of fax documents. Caere’s AnyFax technology combines neural networks with expert systems to extract character information from fax or scanned images. Calera, another OCR vendor, started using neural networks in 1984 and also benefited from using a very large (more than a million variations of alphanumeric characters) training set.

ART-EMAP and Object Recognition

A neural network architecture called ART-EMAP (Gail Carpenter and William Ross) integrates Adaptive Resonance Theory (ART) with spatial and temporal evidence integration for predictive mapping (EMAP). The result is a system capable of complex 3-D object recognition. A vision system that samples two-dimensional perspectives of a three-dimensional object achieves 98% correct recognition with an average of 9.2 views presented on noiseless test data, and 92% correct recognition with an average of 11.2 views presented on noisy test data. ART-EMAP is an extension of ARTMAP, a neural network architecture that performs supervised learning of recognition categories and multidimensional maps in response to input vectors. A fuzzy logic extension of ARTMAP, called Fuzzy ARTMAP, incorporates two fuzzy modules in the ART system.


Summary

A sampling of current research and commercial applications of neural network and fuzzy logic technology has been presented. Neural networks are applied to a wide variety of problems, from aiding medical diagnosis to detecting circuit faults in printed circuit board manufacturing. Some of the problem areas where neural networks and fuzzy logic have been successfully applied are:


  Image processing

  Intelligent control

  Machine vision

  Motion analysis

  Pattern recognition

  Time series analysis

  Speech synthesis

  Machine learning and robotics

  Decision support systems

  Data compression

  Functional approximation

The use of fuzzy logic and neural networks in software and hardware systems can only increase!