An Artificial Neural Network (ANN) classifier is inspired by the biological neural networks that constitute mammalian brains. The original goal of such approaches was to solve problems in the same way a human brain would. Hence, ANN classifiers learn (i.e., progressively improve their performance) by considering training samples, generally without hand-crafted, class-specific features.
An ANN is based on a collection of connected units (neurons). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving neuron processes the incoming signals and then signals the downstream neurons connected to it. Neurons and synapses carry weights that vary as learning proceeds and control the strength of the signal sent downstream. Typically, neurons are organized in layers, where different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly traversing intermediate layers multiple times.
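The layered forward pass described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the hidden-layer size, the random weights, and the tanh activation are all assumptions made for the sketch.

```python
import numpy as np

# Minimal sketch of a feedforward pass through a small ANN
# (hypothetical sizes and weights; not the paper's actual network).
rng = np.random.default_rng(0)

m, hidden, k = 2, 4, 3                  # input features, hidden units, output labels
W1 = rng.standard_normal((hidden, m))   # synapse weights: input -> hidden
W2 = rng.standard_normal((k, hidden))   # synapse weights: hidden -> output

def forward(x):
    """Propagate a feature vector x layer by layer."""
    h = np.tanh(W1 @ x)                 # hidden neurons transform their weighted inputs
    return W2 @ h                       # output neurons emit k raw signals, one per label

signals = forward(np.array([0.5, -1.0]))
print(signals.shape)                    # one output signal per label
```

Each `@` product is the weighted sum of incoming signals at a layer, and the weights `W1`, `W2` are exactly the quantities that would be adjusted as learning proceeds.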
We initialize the number of neurons in the input layer to equal the number of features m (m = 2 in this example) and the number of neurons in the output layer to equal the number of labels k (k = 3 in this example). To express the ANN's decision on the correct label as a posterior probability, we take the k scaled signals leaving the output-layer neurons and use their strength directly to fill the needed potentials (see Fig. 11).
PDF[featureVector][state] = CvANN(featureVector)
Semantic Image Segmentation with Conditional Random Fields