Rprop


Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in artificial neural networks. Similarly to the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and maintains a separate update value (step size) for each weight. If the partial derivative changed sign with respect to the last iteration, the update value is multiplied by a factor η−; if it kept the same sign, the update value is multiplied by a factor η+. Empirically, η+ is set to 1.2 and η− to 0.5.
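The per-weight adaptation described above can be sketched as follows. This is a minimal illustration in NumPy, not a complete trainer: the function names and the initial step size are assumptions, and the step-size bounds (here 1e-6 and 50.0) follow commonly cited defaults.

```python
import numpy as np

def rprop_update(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One Rprop adaptation step (illustrative sketch, names assumed).

    grad, prev_grad, step are arrays with one entry per weight.
    Returns the weight change and the adapted step sizes.
    """
    same_sign = grad * prev_grad
    # Derivative kept its sign: grow the step size by eta_plus (capped).
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    # Derivative flipped sign: shrink the step size by eta_minus (floored).
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    # The weight change uses only the sign of the gradient, not its magnitude.
    delta_w = -np.sign(grad) * step
    return delta_w, step

# Example: one weight kept its gradient sign, one flipped it.
grad      = np.array([ 1.0, -1.0])
prev_grad = np.array([ 1.0,  1.0])
step      = np.array([ 0.1,  0.1])
delta_w, step = rprop_update(grad, prev_grad, step)
```

After this call the first step size has grown to 0.12 and the second has shrunk to 0.05; the weight changes move each weight against its gradient sign by exactly its own step size.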

Alongside the Cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms.
