Efficient mini-batch training for stochastic optimization

    KDD, pp. 661-670, 2014.

    Cited by: 365
    Keywords: big data, general, distributed computing, minibatch, machine learning

    Abstract:

    Stochastic gradient descent (SGD) is a popular technique for large-scale optimization problems in machine learning. In order to parallelize SGD, minibatch training needs to be employed to reduce the communication cost. However, an increase in minibatch size typically decreases the rate of convergence. This paper introduces a technique bas…
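
    For context on the setup the abstract describes, below is a minimal sketch of serial minibatch SGD, where each parameter update averages the gradient over a batch of examples rather than using a single example. The function and parameter names (minibatch_sgd, grad_fn, lsq_grad, batch_size, lr) are illustrative assumptions, not taken from the paper.

    # Minimal minibatch SGD sketch; names and hyperparameters are illustrative.
    import numpy as np

    def minibatch_sgd(grad_fn, w0, data, batch_size=32, lr=0.01, epochs=10, seed=0):
        """grad_fn(w, batch) should return the average gradient over `batch`."""
        rng = np.random.default_rng(seed)
        w = np.array(w0, dtype=float)
        n = len(data)
        for _ in range(epochs):
            idx = rng.permutation(n)
            for start in range(0, n, batch_size):
                batch = [data[i] for i in idx[start:start + batch_size]]
                # One update per minibatch: fewer, less noisy steps than per-example
                # SGD, which cuts communication in a distributed setting but, as the
                # abstract notes, can slow convergence as the batch size grows.
                w -= lr * grad_fn(w, batch)
        return w

    # Example: least-squares gradient for points (x, y) under the model y ~ w[0]*x + w[1]
    def lsq_grad(w, batch):
        g = np.zeros(2)
        for x, y in batch:
            err = w[0] * x + w[1] - y
            g += 2.0 * np.array([err * x, err])
        return g / len(batch)

    Larger batches reduce how often workers must exchange gradients, which is the communication-cost motivation mentioned in the abstract; the trade-off is the slower convergence the paper sets out to address.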