GClasses::GBackProp Class Reference
This class performs backpropagation on a neural network. (I made it a separate class because it is only needed during training. There is no reason to waste this space after training is complete, or if you choose to use a different technique to train the neural network.)
#include <GNeuralNet.h>
Public Member Functions

GBackProp(GNeuralNet *pNN)
    This class will adjust the weights in pNN.
~GBackProp()
GBackPropLayer &layer(size_t layer)
    Returns a layer (not a layer of the neural network, but the corresponding layer of values used for backprop).
void backPropFromSingleNode(GNeuron &nnFrom, GBackPropNeuron &bpFrom, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPToLayer)
    Backpropagates the error from a single output node to a hidden layer.
void adjustWeightsSingleNeuron(GNeuron &nnFrom, GNeuralNetLayer *pNNToLayer, GBackPropNeuron &bpFrom, double learningRate, double momentum)
    Adjusts the weights of a single neuron that follows a hidden layer. (Assumes the error of this neuron has already been computed.)
void adjustWeightsSingleNeuron(GNeuron &nnFrom, const double *pFeatures, bool useInputBias, GBackPropNeuron &bpFrom, double learningRate, double momentum)
    Adjusts the weights of a single neuron when there are no hidden layers. (Assumes the error of this neuron has already been computed.)
void backpropagate()
    Assumes the error term is already set at every unit in the output layer, and uses backpropagation to compute the error term at every hidden unit. (Does not update any weights.)
void backpropagateSingleOutput(size_t outputNode)
    Backpropagates error from a single output node over all of the hidden layers. (Assumes the error term is already set on the specified output node.)
void descendGradient(const double *pFeatures, double learningRate, double momentum, bool useInputBias)
    Assumes the error term is already set for every network unit, and adjusts weights to descend the gradient of the error surface with respect to the weights.
void descendGradientSingleOutput(size_t outputNeuron, const double *pFeatures, double learningRate, double momentum, bool useInputBias)
    Assumes the error term has been set for a single output unit and for all units that feed into it transitively, and adjusts weights to descend the gradient of the error surface with respect to the weights.
void adjustFeatures(double *pFeatures, double learningRate, size_t skip, bool useInputBias)
    Assumes the error term is already set for every network unit, and descends the gradient by adjusting the features (not the weights).
void adjustFeaturesSingleOutput(size_t outputNeuron, double *pFeatures, double learningRate, bool useInputBias)
    Adjusts the features (not the weights) to descend the gradient, assuming that the error is computed from only one of the output units of the network.
Static Public Member Functions

static void backPropLayer(GNeuralNetLayer *pNNFromLayer, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPFromLayer, GBackPropLayer *pBPToLayer, size_t fromBegin=0)
    Backpropagates the error from the "from" layer to the "to" layer. (If the "to" layer has fewer units than the "from" layer, propagation begins with the (fromBegin+1)th weight and stops when the "to" layer runs out of units. It is an error for the "from" layer to have fewer units than the "to" layer plus fromBegin.)
static void backPropLayer2(GNeuralNetLayer *pNNFromLayer1, GNeuralNetLayer *pNNFromLayer2, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPFromLayer1, GBackPropLayer *pBPFromLayer2, GBackPropLayer *pBPToLayer, size_t pass)
    Another implementation of backPropLayer that is somewhat more flexible but slightly less efficient. It supports backpropagating error from one or two layers. (pNNFromLayer2 should be NULL if you are backpropagating from just one layer.) It also supports temporal backpropagation by unfolding in time and then averaging the error across all of the unfolded instantiations. pass specifies how much of the error for this pass to accept: 1 = all of it, 2 = half of it, 3 = one third, and so on.
static void adjustWeights(GNeuralNetLayer *pNNFromLayer, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPFromLayer, double learningRate, double momentum)
    Adjusts weights in pNNFromLayer. (The error for pNNFromLayer must already have been computed.) (If you are backpropagating error from two layers, call this method twice, once for each previous layer.)
static void adjustWeights(GNeuralNetLayer *pNNFromLayer, const double *pFeatures, bool useInputBias, GBackPropLayer *pBPFromLayer, double learningRate, double momentum)
    Adjusts weights in pNNFromLayer. (The error for pNNFromLayer must already have been computed.) (If you are backpropagating error from two layers, call this method twice, once for each previous layer.)
Protected Attributes

GNeuralNet *m_pNN
std::vector<GBackPropLayer> m_layers

Friends

class GNeuralNet
Detailed Description

This class performs backpropagation on a neural network. (I made it a separate class because it is only needed during training. There is no reason to waste this space after training is complete, or if you choose to use a different technique to train the neural network.)
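A minimal sketch of the intended call sequence, using only the methods documented on this page. The forward pass and the setting of output-layer error terms are handled by GNeuralNet, not by this class, and are assumed to have happened already; the function name and constants below are illustrative:

    #include <GNeuralNet.h>
    using namespace GClasses;

    // Hypothetical single training step. Assumes pNN has already
    // forward-propagated pFeatures and that an error term has been set
    // on every output unit.
    void trainOneStep(GNeuralNet* pNN, const double* pFeatures)
    {
        GBackProp bp(pNN);  // bp will adjust the weights in pNN
        bp.backpropagate(); // compute the error term at every hidden unit
        bp.descendGradient(pFeatures, 0.1 /*learningRate*/,
                           0.0 /*momentum*/, false /*useInputBias*/);
    }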
GClasses::GBackProp::GBackProp(GNeuralNet *pNN)
This class will adjust the weights in pNN.
GClasses::GBackProp::~GBackProp() [inline]
void GClasses::GBackProp::adjustFeatures(double *pFeatures, double learningRate, size_t skip, bool useInputBias)
This method assumes that the error term is already set for every network unit. It descends the gradient by adjusting the features (not the weights).
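Conceptually, this applies the chain rule one step further back than a weight update: the gradient of the error with respect to input x_i is the error-weighted sum of the weights leaving x_i. A self-contained sketch with plain arrays (not the GClasses types), assuming delta[j] is the already-computed error term of unit j in the first layer and follows the usual convention that the error term is the negative gradient:

    // Sketch: descend the error gradient by moving the inputs rather than
    // the weights. w[j][i] is the weight from input i to first-layer unit j.
    void adjustFeaturesSketch(double* x, size_t nIn, const double* const* w,
                              const double* delta, size_t nUnits,
                              double learningRate)
    {
        for(size_t i = 0; i < nIn; i++)
        {
            double g = 0.0;
            for(size_t j = 0; j < nUnits; j++)
                g += delta[j] * w[j][i]; // dError/dx_i by the chain rule
            x[i] += learningRate * g;    // step the feature down the gradient
        }
    }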
void GClasses::GBackProp::adjustFeaturesSingleOutput(size_t outputNeuron, double *pFeatures, double learningRate, bool useInputBias)
This adjusts the features (not the weights) to descend the gradient, assuming that the error is computed from only one of the output units of the network.
static void GClasses::GBackProp::adjustWeights(GNeuralNetLayer *pNNFromLayer, const double *pFeatures, bool useInputBias, GBackPropLayer *pBPFromLayer, double learningRate, double momentum) [static]
Adjusts weights in pNNFromLayer. (The error for pNNFromLayer must already have been computed.) (If you are backpropagating error from two layers, call this method twice, once for each previous layer.)
static void GClasses::GBackProp::adjustWeights(GNeuralNetLayer *pNNFromLayer, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPFromLayer, double learningRate, double momentum) [static]
Adjusts weights in pNNFromLayer. (The error for pNNFromLayer must already have been computed.) (If you are backpropagating error from two layers, call this method twice, once for each previous layer.)
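Both overloads apply the standard delta rule with momentum; they differ only in whether the "from" layer reads its inputs from another layer or from the feature vector. A self-contained sketch of the per-weight update, assuming delta[j] is the error term of unit j and a[i] is the activation (or feature) feeding weight w[j][i]:

    // Sketch of the delta rule with momentum. prevStep[j][i] remembers the
    // previous update so the momentum term can reuse it.
    void adjustWeightsSketch(double** w, double** prevStep,
                             const double* delta, const double* a,
                             size_t nUnits, size_t nIn,
                             double learningRate, double momentum)
    {
        for(size_t j = 0; j < nUnits; j++)
        {
            for(size_t i = 0; i < nIn; i++)
            {
                double step = learningRate * delta[j] * a[i]
                            + momentum * prevStep[j][i];
                w[j][i] += step;
                prevStep[j][i] = step;
            }
        }
    }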
void GClasses::GBackProp::adjustWeightsSingleNeuron(GNeuron &nnFrom, const double *pFeatures, bool useInputBias, GBackPropNeuron &bpFrom, double learningRate, double momentum)
Adjusts the weights of a single neuron when there are no hidden layers. (Assumes the error of this neuron has already been computed.)
void GClasses::GBackProp::adjustWeightsSingleNeuron(GNeuron &nnFrom, GNeuralNetLayer *pNNToLayer, GBackPropNeuron &bpFrom, double learningRate, double momentum)
Adjusts the weights of a single neuron that follows a hidden layer. (Assumes the error of this neuron has already been computed.)
void GClasses::GBackProp::backpropagate()
This method assumes that the error term is already set at every unit in the output layer. It uses back-propagation to compute the error term at every hidden unit. (It does not update any weights.)
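The recurrence it evaluates for each hidden unit is the standard one: the unit's error term is the derivative of its activation times the error flowing back through its outgoing weights. A sketch for one hidden layer, with tanh standing in for whatever activation the layer actually uses (an assumption, not something this page specifies):

    #include <cmath>

    // Sketch: compute hidden-layer error terms from the layer downstream.
    // deltaOut[k] is the error term of downstream unit k, wOut[k][j] is the
    // weight from hidden unit j to unit k, net[j] is unit j's net input.
    void hiddenErrorSketch(double* deltaHidden, const double* net,
                           size_t nHidden, const double* deltaOut,
                           const double* const* wOut, size_t nOut)
    {
        for(size_t j = 0; j < nHidden; j++)
        {
            double sum = 0.0;
            for(size_t k = 0; k < nOut; k++)
                sum += deltaOut[k] * wOut[k][j]; // error flowing back to j
            double y = std::tanh(net[j]);
            deltaHidden[j] = (1.0 - y * y) * sum; // tanh'(net) = 1 - tanh^2
        }
    }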
void GClasses::GBackProp::backpropagateSingleOutput(size_t outputNode)
Backpropagates error from a single output node over all of the hidden layers. (Assumes the error term is already set on the specified output node.)
void GClasses::GBackProp::backPropFromSingleNode(GNeuron &nnFrom, GBackPropNeuron &bpFrom, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPToLayer)
Backpropagates the error from a single output node to a hidden layer.
static void GClasses::GBackProp::backPropLayer(GNeuralNetLayer *pNNFromLayer, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPFromLayer, GBackPropLayer *pBPToLayer, size_t fromBegin = 0) [static]
Backpropagates the error from the "from" layer to the "to" layer. (If the "to" layer has fewer units than the "from" layer, propagation begins with the (fromBegin+1)th weight and stops when the "to" layer runs out of units. It is an error for the "from" layer to have fewer units than the "to" layer plus fromBegin.) For example, if the "from" layer has five units and fromBegin is 2, the "to" layer may have at most three units.
static void GClasses::GBackProp::backPropLayer2(GNeuralNetLayer *pNNFromLayer1, GNeuralNetLayer *pNNFromLayer2, GNeuralNetLayer *pNNToLayer, GBackPropLayer *pBPFromLayer1, GBackPropLayer *pBPFromLayer2, GBackPropLayer *pBPToLayer, size_t pass) [static]
Another implementation of backPropLayer that is somewhat more flexible but slightly less efficient. It supports backpropagating error from one or two layers. (pNNFromLayer2 should be NULL if you are backpropagating from just one layer.) It also supports temporal backpropagation by unfolding in time and then averaging the error across all of the unfolded instantiations. pass specifies how much of the error for this pass to accept: 1 = all of it, 2 = half of it, 3 = one third, and so on.
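One plausible reading of the pass parameter is an incremental running average: accepting 1/pass of the new error on the pass-th unfolded instantiation leaves each unit holding the mean of all error contributions seen so far. A sketch of that identity (acceptError is illustrative, not a member of this class):

    // After the update, err is the average of the first `pass` error
    // contributions, because err += (new - err)/p turns a mean over p-1
    // samples into a mean over p samples.
    void acceptError(double& err, double newErr, size_t pass)
    {
        err += (newErr - err) / (double)pass;
    }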
void GClasses::GBackProp::descendGradient(const double *pFeatures, double learningRate, double momentum, bool useInputBias)
This method assumes that the error term is already set for every network unit. It adjusts weights to descend the gradient of the error surface with respect to the weights.
void GClasses::GBackProp::descendGradientSingleOutput(size_t outputNeuron, const double *pFeatures, double learningRate, double momentum, bool useInputBias)
This method assumes that the error term has been set for a single output unit and for all units that feed into it transitively. It adjusts weights to descend the gradient of the error surface with respect to the weights.
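A sketch of the single-output workflow using only the methods documented on this page; it assumes the caller has already set the error term on output unit i (the function name and constants are illustrative):

    // Hypothetical partial update: propagate error from output unit i only,
    // then adjust just the weights that feed into it transitively.
    void trainSingleOutput(GClasses::GBackProp& bp, size_t i,
                           const double* pFeatures)
    {
        bp.backpropagateSingleOutput(i);
        bp.descendGradientSingleOutput(i, pFeatures, 0.1 /*learningRate*/,
                                       0.0 /*momentum*/, false /*useInputBias*/);
    }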
GBackPropLayer &GClasses::GBackProp::layer(size_t layer) [inline]
Returns a layer (not a layer of the neural network, but the corresponding layer of values used for backprop).
friend class GNeuralNet [friend]
std::vector<GBackPropLayer> GClasses::GBackProp::m_layers [protected] |
GNeuralNet* GClasses::GBackProp::m_pNN [protected] |