Hidden factors and hidden topics: understanding rating dimensions with review text

    RecSys, 2013.

    Cited by: 1028
    Keywords:
    latent factor, user dimension, new product, latent rating dimension, user feedback

    Abstract:

    In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as w…
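The abstract's "implicit tastes of each user" and "properties of each product" refer to the latent-factor recommenders the paper builds on. A minimal sketch of such a biased latent-factor predictor, with illustrative sizes and randomly initialized parameters (all names and dimensions here are assumptions, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3            # toy sizes (assumed)

alpha = 3.5                               # global rating offset
beta_user = rng.normal(0, 0.1, n_users)   # per-user bias
beta_item = rng.normal(0, 0.1, n_items)   # per-item bias
gamma_user = rng.normal(0, 0.1, (n_users, k))  # latent user tastes
gamma_item = rng.normal(0, 0.1, (n_items, k))  # latent item properties

def predict(u, i):
    """Predicted rating = offset + both biases + taste/property interaction."""
    return alpha + beta_user[u] + beta_item[i] + gamma_user[u] @ gamma_item[i]
```

In practice the parameters would be fit to observed ratings; this sketch only shows the functional form.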

    Code: http://i.stanford.edu/~julian/

    Data: http://snap.stanford.edu/data/web-Amazon-links.html

    Introduction
    • Reviews are useful at modeling new users: one review tells us much more than one rating.
    • Goals: better predict ratings; automatically identify product categories; identify reviews that the community considers “useful”.
    Results
    • The authors report the F1 score between the predicted categories and the ground-truth.
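The F1 score reported here is the harmonic mean of precision and recall between predicted and ground-truth category sets. A hedged sketch of that metric on illustrative toy sets (the categories below are made up, not the paper's data):

```python
def f1(predicted, truth):
    """F1 between a predicted category set and the ground-truth set."""
    predicted, truth = set(predicted), set(truth)
    tp = len(predicted & truth)        # true positives: shared categories
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)

score = f1({"fantasy", "children"}, {"fantasy", "young-adult"})
# precision = 1/2, recall = 1/2, so F1 = 0.5
```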
    Conclusion
    • 1. The authors discovered “topics” that simultaneously explain variation in ratings and reviews.
    • 2. A small number of reviews tells us more about a user/item than a small number of ratings.
    • 3. The authors' model outperforms alternatives on a variety of large-scale recommendation datasets.
    • 4. The model automatically discovers product categories and identifies useful reviews.
    • Code and data are available online. Code: http://i.stanford.edu/~julian/ Data: http://snap.stanford.edu/data/web-Amazon-links.html
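The “topics that simultaneously explain ratings and reviews” idea rests on tying an item's review-topic distribution to the same latent vector used for rating prediction. A hedged sketch of one such coupling, a softmax of the item factors scaled by a peakiness parameter; the variable names and values are illustrative, not the paper's exact formulation:

```python
import numpy as np

def topic_distribution(gamma_item, kappa=1.0):
    """Map an item's rating factors to a topic distribution via softmax.

    kappa controls how peaked the distribution is; the result sums to 1,
    so each rating factor doubles as the weight of one review topic.
    """
    z = kappa * np.asarray(gamma_item, dtype=float)
    z = z - z.max()                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

gamma_i = np.array([1.2, -0.3, 0.5])      # toy 3-factor item vector
theta_i = topic_distribution(gamma_i, kappa=2.0)
```

Under this coupling, an item that scores high on a rating factor also writes more of its reviews from the corresponding topic, which is what lets one review stand in for many ratings.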