Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in artificial neural networks. Like the Manhattan update rule, Rprop takes into account only the sign of the partial derivative summed over all patterns (not its magnitude), and maintains a separate update value (step size) for each weight. If the partial derivative changed sign with respect to the last iteration, the update value is multiplied by a factor η− (with 0 < η− < 1); if it kept the same sign, the update value is multiplied by a factor η+ (with η+ > 1). η+ is empirically set to 1.2 and η− to 0.5.
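The rule above can be sketched in a few lines of NumPy. This is a minimal illustration of the sign-based update (roughly the "Rprop−" variant, which zeroes the stored gradient after a sign change); the step bounds `step_min` and `step_max` and the function name are illustrative choices, not part of the original description:

```python
import numpy as np

def rprop_update(grad, prev_grad, step, weights,
                 eta_plus=1.2, eta_minus=0.5,      # empirical values from the text
                 step_min=1e-6, step_max=50.0):    # illustrative step bounds
    """One Rprop iteration: adapt each weight's step size from the sign
    of its gradient, then move the weight opposite that sign."""
    same_sign = grad * prev_grad > 0   # gradient kept its sign
    sign_flip = grad * prev_grad < 0   # gradient changed sign

    # Grow the step where the sign is stable, shrink it after a flip.
    step = np.where(same_sign, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_flip, np.maximum(step * eta_minus, step_min), step)

    # After a sign flip, zero the stored gradient so the next iteration
    # neither grows nor shrinks the step again (Rprop- convention).
    grad = np.where(sign_flip, 0.0, grad)

    weights = weights - np.sign(grad) * step
    return weights, step, grad
```

For example, minimizing f(w) = w² (gradient 2w) from w = 5 converges quickly: the step grows while the gradient sign is stable, overshoots zero, then halves on each sign change until w settles near the minimum.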
Alongside the cascade-correlation algorithm and the Levenberg-Marquardt algorithm, Rprop is among the fastest weight-update schemes.
References
- Martin Riedmiller. *Rprop - Description and Implementation Details*. Technical report, 1994.