Trainer that trains the parameters of a module according to a supervised dataset (potentially sequential) by backpropagating the errors (through time).
Create a BackpropTrainer to train the specified module on the specified dataset.
The learning rate sets the fraction by which parameters are moved in the direction of the gradient at each step. The learning rate decreases over time: it is multiplied by lrdecay after each training step. The parameters are also adjusted with respect to momentum, which is the ratio by which the gradient of the last timestep is carried over into the current update.
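Read as formulas, these three numbers describe a standard decayed-momentum gradient step. A minimal NumPy sketch of one such step (purely illustrative; this is not PyBrain's internal code):

    import numpy as np

    def gradient_step(w, grad, dw_prev, learningrate, momentum, lrdecay):
        # Blend the current gradient with the previous update (momentum),
        # take the step, then decay the learning rate for the next step.
        dw = learningrate * grad + momentum * dw_prev
        return w - dw, dw, learningrate * lrdecay

    w, dw, lr = gradient_step(np.zeros(3), np.array([0.2, -0.1, 0.4]),
                              np.zeros(3), learningrate=0.01,
                              momentum=0.1, lrdecay=0.9999)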
If batchlearning is set, the parameters are updated only at the end of each epoch. Default is False.
weightdecay is the weight decay rate; a value of 0 means no weight decay at all.
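For example, a trainer for a small feedforward network with every hyperparameter spelled out (a minimal sketch; the 2-3-1 network and XOR-style data are purely illustrative):

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer

    net = buildNetwork(2, 3, 1)
    ds = SupervisedDataSet(2, 1)
    for inp, target in [((0, 0), (0,)), ((0, 1), (1,)),
                        ((1, 0), (1,)), ((1, 1), (0,))]:
        ds.addSample(inp, target)

    trainer = BackpropTrainer(net, ds,
                              learningrate=0.01,    # step size along the gradient
                              lrdecay=1.0,          # 1.0 = no decay
                              momentum=0.1,         # reuse 10% of the last update
                              batchlearning=False,  # update after every sample
                              weightdecay=0.0)      # 0 = no weight decay
    err = trainer.train()  # one pass over the dataset; returns the average error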
Return winner-takes-all classification output on a given dataset.
If no dataset is given, the dataset passed during Trainer initialization is used. If return_targets is set, also return corresponding target classes.
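A short classification sketch (assumes the usual PyBrain workflow of building a ClassificationDataSet and converting targets to one-of-many encoding; the data shown is illustrative):

    from pybrain.datasets import ClassificationDataSet
    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.supervised.trainers import BackpropTrainer

    # Two input features, one target column, two classes.
    cds = ClassificationDataSet(2, 1, nb_classes=2)
    cds.addSample((0.1, 0.2), (0,))
    cds.addSample((0.9, 0.8), (1,))
    cds._convertToOneOfMany()  # one output unit per class

    net = buildNetwork(2, 4, cds.outdim)
    trainer = BackpropTrainer(net, cds)
    trainer.trainEpochs(10)

    # Winner-takes-all class indices, plus the true classes for comparison.
    predictions, targets = trainer.testOnClassData(return_targets=True)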
Train on the current dataset for the given number of epochs.
Additional arguments are passed on to the train method.
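Usage is a one-liner (continuing with the trainer constructed in the earlier sketch):

    trainer.trainEpochs(5)  # five full passes over the current dataset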
Set the dataset and train.
Additional arguments are passed on to the train method.
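For example, to swap in a different dataset and train on it (continuing the earlier sketch; other_ds stands for any second SupervisedDataSet and is hypothetical here):

    trainer.trainOnDataset(other_ds, 5)  # set other_ds as the dataset, then train 5 epochs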
Train the module on the dataset until it converges.
Return the module with the parameters that gave the minimal validation error.
If no dataset is given, the dataset passed during Trainer initialization is used. validationProportion is the ratio of the dataset that is used for the validation dataset.
If maxEpochs is given, at most that many epochs are trained. Each time validation error hits a minimum, try for continueEpochs epochs to find a better one.
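A typical call might look like this (the values shown are illustrative; PyBrain's defaults are continueEpochs=10 and validationProportion=0.25):

    # Trains with early stopping; afterwards the module carries the
    # parameters that achieved the lowest validation error.
    trainer.trainUntilConvergence(maxEpochs=100,              # hard cap on epochs
                                  continueEpochs=10,          # patience past each minimum
                                  validationProportion=0.25)  # 25% held out for validation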
Note
This documentation covers only a selection of the available methods; see the source code for additional functionality.
Train the parameters of a module according to a supervised dataset (possibly sequential) by RProp without weight backtracking (also known as RProp-, cf. [Igel & Huesken, Neurocomputing 50, 2003]) and without ponderation, i.e., all training samples have the same weight.
Set up the training algorithm's parameters and the objects associated with the trainer.
Parameters:
    module – the module whose parameters should be trained.

Keyword arguments:
    etaminus – factor by which the step width is decreased when overstepping (0.5)
    etaplus – factor by which the step width is increased when following the gradient (1.2)
    delta – step width for each weight
    deltamin – minimum step width (1e-6)
    deltamax – maximum step width (5.0)
    delta0 – initial step width (0.1)
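A minimal construction sketch (net and ds as in the BackpropTrainer example above; the keyword values shown are simply the defaults listed here):

    from pybrain.supervised.trainers import RPropMinusTrainer

    # RProp- adapts a per-weight step width, so no learning rate is set.
    trainer = RPropMinusTrainer(net, dataset=ds,
                                etaminus=0.5,   # shrink step width after overstepping
                                etaplus=1.2,    # grow step width while the sign holds
                                deltamin=1e-6,  # lower bound on step width
                                deltamax=5.0,   # upper bound on step width
                                delta0=0.1)     # initial step width
    trainer.trainEpochs(10)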
Note
See the documentation of BackpropTrainer for inherited methods.