
Better averages for online machine-learning

Averages are used, in some form or another, in many machine-learning algorithms. Stochastic gradient descent is a great example of an average in disguise, however thin that disguise may be.
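To make that concrete: run SGD on the squared-error objective f(θ) = E[(θ − x)²]/2 with step size 1/n, and the iterate is exactly the simple running mean of the samples. A minimal sketch (the data and variable names are illustrative):

```python
# SGD on f(theta) = E[(theta - x)^2 / 2]; a gradient sample is (theta - x).
# With step size 1/n, the update theta += (x - theta) / n is exactly the
# incremental formula for the simple mean of x_1..x_n.
xs = [3.0, 5.0, 4.0, 8.0]
theta = 0.0
for n, x in enumerate(xs, start=1):
    theta += (x - theta) / n  # SGD step with learning rate 1/n
print(theta)                  # 5.0, the simple mean of xs
```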

Picking the right kind of average can be critical. As a learning algorithm explores sub-optimal choices, the resulting negative impact on backed-up state values can persist across epochs, hampering performance. Conversely, some kinds of average never converge, preventing the algorithm from settling on an optimal outcome.
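The trade-off is easy to see numerically: a simple average converges but reacts ever more sluggishly to change, while an exponential moving average reacts at a fixed rate but never stops fluctuating with the noise. A rough sketch, with the noise model and constants chosen purely for illustration:

```python
import random

random.seed(0)
mean = 0.0   # simple average: step size 1/n
ema = 0.0    # exponential moving average: fixed step size alpha
alpha = 0.1
for n in range(1, 10001):
    x = 1.0 + random.gauss(0.0, 1.0)   # noisy samples around 1.0
    mean += (x - mean) / n             # estimate's variance shrinks to 0
    ema += alpha * (x - ema)           # variance settles near alpha/(2 - alpha)
print(f"simple average: {mean:.3f}")   # converges toward 1.0
print(f"EMA:            {ema:.3f}")    # still jitters around 1.0
```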

Here, I officially release a paper on a particular kind of average that's adaptable like an exponential moving average, but has guaranteed convergence like a simple average.
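The paper's construction is its own (download link below). Purely to illustrate the design space, one familiar way to trade adaptability against convergence is a step size of n^(-p) with 1/2 < p ≤ 1: such schedules satisfy the Robbins-Monro conditions (the step sizes sum to infinity, their squares don't), so the estimate converges, yet they decay more slowly than 1/n, so the average adapts faster early on. To be clear, this is not the paper's average, just a standard point of comparison:

```python
import random

def decaying_rate_average(xs, p=0.7):
    """Running average with step size n**-p, for 0.5 < p <= 1.

    These step sizes satisfy the Robbins-Monro conditions
    (sum a_n = inf, sum a_n**2 < inf), so the estimate converges,
    while decaying more slowly than 1/n, so it adapts faster than
    a simple average early on. NOT the paper's construction --
    just a standard schedule shown for comparison.
    """
    est = 0.0
    for n, x in enumerate(xs, start=1):
        est += (x - est) / n**p   # n=1 gives step size 1, so est starts at x_1
    return est

random.seed(1)
xs = [2.0 + random.gauss(0.0, 0.5) for _ in range(5000)]
print(decaying_rate_average(xs))  # close to 2.0
```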

This average is probably best suited to non-stochastic training, of the sort I mentioned in Action-selection and learning-rates in Q-learning.

Download the paper, An adaptable, convergent moving average for online machine-learning.
