Multiclass Perceptron in C#

Download Multiclass Perceptron C# Project

Understanding the multiclass perceptron requires prior knowledge of the binary perceptron.

The multiclass perceptron is used for classifying input data into linearly separable classes. Each class in a multiclass perceptron has its own set of input weights. Just like in a simple perceptron, the inputs are multiplied by the weights and summed, but instead of passing the score through an activation function, the class with the highest score is chosen.

[Diagram: Multiclass Perceptron]
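
As a sketch of this scoring step (the method and variable names here are illustrative, not taken from the downloadable project), the score of each class is a plain weighted sum of the inputs and the prediction is the class with the largest score:

    // Compute a score for every class and return the index of the highest-scoring class.
    // classWeights[c][i] is the weight of input i for class c.
    int GetBestClass(double[][] classWeights, double[] inputs)
    {
        int bestClass = 0;
        double bestScore = double.NegativeInfinity;

        for (int c = 0; c < classWeights.Length; c++)
        {
            double score = 0.0;
            for (int i = 0; i < inputs.Length; i++)
                score += classWeights[c][i] * inputs[i];   // weighted sum, no activation function

            if (score > bestScore)
            {
                bestScore = score;
                bestClass = c;
            }
        }

        return bestClass;
    }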

Training

A multiclass perceptron is trained using a data set of known inputs and expected output classes, also known as a training set. When the class produced for a training example does not match the expected class, the weights of both the resulting and the expected class are adjusted by:

  1. Setting the error = 1; for the expected class,
  2. Setting the error = -1; for the resulting class,
  3. Calculating the new weights of both classes using
    Weights[i] += LearningRate * error * inputs[i];
    where LearningRate is a scalar between 0 and 1.

Pseudocode for the learning operation is located below.
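
A minimal C# sketch of that operation (assuming each class stores its weights in a plain double array; the parameter and method names are illustrative):

    // Adjust the weights of the expected and the resulting class after a misclassification.
    void AdjustWeights(double[] expectedClassWeights, double[] resultClassWeights,
                       double[] inputs, double learningRate)
    {
        // error = 1 for the expected class, error = -1 for the resulting class.
        for (int i = 0; i < inputs.Length; i++)
        {
            expectedClassWeights[i] += learningRate * 1 * inputs[i];
            resultClassWeights[i]   += learningRate * -1 * inputs[i];
        }
    }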

Limitations

[Chart: AND Function]

Multiclass perceptrons only work on linearly separable data. If the data is mixed or overlaps between classes, the perceptron cannot be trained to fully separate the inputs into the desired classes.

Code

Download Multiclass Perceptron C# Project

Below is example usage for training a multiclass perceptron to classify 4 binary inputs into the hexadecimal groups ‘0to3’, ‘4to7’, ‘8toB’, and ‘CtoF’. The learning method perceptron.Learn(item.ClassName, item.Inputs); is called for as long as the calculated result does not match the expected result. After the learning is done, perceptron.GetResult(...); returns the correct result.

Example usage:
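
A minimal sketch of such a training loop (the TrainingItem type, the constructor arguments, and the training rows shown are illustrative assumptions, not taken from the downloadable project; only Learn and GetResult come from the description above, and the constructor matches the implementation sketch further down):

    using System;
    using System.Collections.Generic;

    // Training set: 4 binary inputs mapped to hexadecimal groups (only a few rows shown).
    var trainingSet = new List<TrainingItem>
    {
        new TrainingItem("0to3", new double[] { 0, 0, 0, 1 }),
        new TrainingItem("0to3", new double[] { 0, 0, 1, 1 }),
        new TrainingItem("4to7", new double[] { 0, 1, 0, 0 }),
        new TrainingItem("8toB", new double[] { 1, 0, 1, 0 }),
        new TrainingItem("CtoF", new double[] { 1, 1, 1, 1 }),
        // ... remaining combinations of the 4 binary inputs
    };

    var perceptron = new MulticlassPerceptron(
        classNames: new[] { "0to3", "4to7", "8toB", "CtoF" },
        inputCount: 4,
        learningRate: 0.5);

    // Keep learning for as long as any calculated result does not match the expected result.
    bool allCorrect;
    do
    {
        allCorrect = true;
        foreach (var item in trainingSet)
        {
            if (perceptron.GetResult(item.Inputs) != item.ClassName)
            {
                perceptron.Learn(item.ClassName, item.Inputs);
                allCorrect = false;
            }
        }
    } while (!allCorrect);

    // After learning is done GetResult returns the correct group.
    Console.WriteLine(perceptron.GetResult(new double[] { 0, 0, 1, 0 }));   // expected: "0to3"

    // Illustrative container for one training example.
    record TrainingItem(string ClassName, double[] Inputs);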

MulticlassPerceptron implementation:
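
A minimal sketch of what such a class could look like, built from the scoring and learning steps described above (the constructor and private members are assumptions; only Learn and GetResult are named in the text):

    using System.Collections.Generic;
    using System.Linq;

    // Multiclass perceptron: one weight vector per class, the highest weighted sum wins,
    // and a misclassification adjusts the weights of the expected and the resulting class.
    class MulticlassPerceptron
    {
        private readonly Dictionary<string, double[]> weights;   // per-class weight vectors
        private readonly double learningRate;

        public MulticlassPerceptron(string[] classNames, int inputCount, double learningRate)
        {
            this.learningRate = learningRate;
            weights = classNames.ToDictionary(name => name, _ => new double[inputCount]);
        }

        // Returns the class whose weighted sum of the inputs is the highest.
        public string GetResult(double[] inputs)
        {
            return weights
                .OrderByDescending(pair => Score(pair.Value, inputs))
                .First()
                .Key;
        }

        // Adjusts the weights of the expected class (error = 1) and the
        // resulting class (error = -1) whenever the two differ.
        public void Learn(string expectedClass, double[] inputs)
        {
            string resultClass = GetResult(inputs);
            if (resultClass == expectedClass)
                return;

            Adjust(weights[expectedClass], +1, inputs);
            Adjust(weights[resultClass], -1, inputs);
        }

        private void Adjust(double[] classWeights, int error, double[] inputs)
        {
            for (int i = 0; i < inputs.Length; i++)
                classWeights[i] += learningRate * error * inputs[i];
        }

        private static double Score(double[] classWeights, double[] inputs)
        {
            double score = 0.0;
            for (int i = 0; i < inputs.Length; i++)
                score += classWeights[i] * inputs[i];
            return score;
        }
    }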