SLIM: Sparse Linear Methods for Top-N Recommender Systems
ICDM, 2011, pp. 497–506
This paper focuses on developing effective and efficient algorithms for top-N recommender systems. A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase/rating profiles. A sparse aggregation coefficient matrix W is learned from SLIM by solving an ℓ1-norm and ℓ2-norm regularized optimization problem.
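The model described in the abstract can be sketched concretely: SLIM scores items for a user as ã = aᵀW, where each column of W is fit under a combined ℓ1/ℓ2 (elastic-net) penalty with a zero diagonal and nonnegative coefficients. The coordinate-descent routine below is an illustrative reimplementation under those assumptions, not the authors' code; `l1`, `l2`, and `n_iter` are hypothetical parameter names.

```python
import numpy as np

def slim_train(A, l1=0.1, l2=0.1, n_iter=50):
    """Learn the SLIM coefficient matrix W column by column via
    coordinate descent on the elastic-net least-squares objective
    (a sketch; dense NumPy, no sparsity optimizations)."""
    n_items = A.shape[1]
    G = A.T @ A                      # item-item Gram matrix
    W = np.zeros((n_items, n_items))
    for j in range(n_items):
        w = W[:, j]                  # view: updates write into W
        for _ in range(n_iter):
            for k in range(n_items):
                if k == j or G[k, k] == 0:
                    continue         # enforce w_jj = 0 (no self-similarity)
                # partial residual correlation, excluding coordinate k
                r = G[k, j] - G[k, :] @ w + G[k, k] * w[k]
                # soft-thresholding plus nonnegativity projection
                w[k] = max(0.0, r - l1) / (G[k, k] + l2)
    return W
```

Recommendations for a user are then the largest-scoring entries of `A[user] @ W` among items the user has not yet purchased.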
- The emergence and fast growth of E-commerce have significantly changed people’s traditional perspective on purchasing products by providing huge amounts of products and detailed product information, making online transactions much easier.
- Given user purchase/rating profiles, the most common application scenario is recommending a ranked list of items to the user so as to encourage additional purchases.
- This leads to the widely used top-N recommender systems.
- Among the neighborhood-based methods, those based on item neighborhoods can generate recommendations very fast, but they achieve this at a sacrifice in recommendation quality.
- Among model-based methods, those based on latent factor models incur a higher cost when generating recommendations, but the quality of these recommendations is higher; they have been shown to achieve the best performance, especially on large recommendation tasks.
- We propose a novel Sparse LInear Method (SLIM) for top-N recommendation that is able to make high-quality recommendations fast
- We evaluated the performance of SLIM methods on eight different real datasets whose characteristics are shown in Table I
- We evaluated SLIM-b, PureSVD-b (pure SVD-based), WRMF-b (weighted regularized matrix factorization), and itemkNN-b on the four datasets for which the models are still learned from the binary user-item purchase matrix but the recommendations are evaluated based on ratings
- We proposed a sparse linear method for top-N recommendation, which is able to generate high-quality top-N recommendations fast
- The authors present the performance of SLIM methods and compare them with other popular top-N recommendation methods.
- In the first set of experiments, all the top-N recommendation methods use binary user-item purchase information during learning, and the methods are suffixed with -b to indicate binary data (e.g., SLIM-b) where there could be confusion.
- In the second set of experiments, all the top-N recommendation methods use user-item rating information during learning, and correspondingly they are suffixed with -r where there could be confusion.
- The authors only report the performance corresponding to the parameters that lead to the best results
- A. Observed Data vs Missing Data.
- The entries with value “0” are ambiguous
- They may mean that the users will never purchase the items, that the users may purchase the items but have not done so yet, or that we do not know whether the users have purchased the items or ever will.
- Differentiation of observed data and missing data in Equation 4 is under development
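The ambiguity of zeros can be illustrated with a toy contrast between two loss conventions: treating every zero as an observed negative (as the squared-error term in Equation 4 does) versus treating zeros as missing and scoring only observed entries. The matrices below are made up purely for illustration.

```python
import numpy as np

# Hypothetical user-item matrix A and a model reconstruction A_hat.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
A_hat = np.array([[0.9, 0.4, 0.8],
                  [0.2, 0.7, 0.1]])

# Convention 1: every zero is a true (negative) observation,
# so all entries contribute to the squared reconstruction error.
loss_all = np.sum((A - A_hat) ** 2)

# Convention 2: zeros are missing data, so only the observed
# (nonzero) entries contribute to the error.
mask = A > 0
loss_observed = np.sum(((A - A_hat) ** 2)[mask])
```

The gap between the two losses comes entirely from the ambiguous zero entries, which is precisely what a differentiated treatment of observed vs. missing data would have to resolve.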
- Table 1: The datasets used in evaluation
- Table 2: Comparison of top-N recommendation algorithms
- Table 3: Performance difference on top-N recommendations
- Table 4: Performance on the long tail of ML10M
- Top-N recommender systems are used in E-commerce applications to recommend size-N ranked lists of items that users may like the most, and they have been intensively studied during the last few years. The methods for top-N recommendation can be broadly classified into two categories. The first category is the neighborhood-based collaborative filtering methods. For a certain user, user-based k-nearest-neighbor (userkNN) collaborative filtering methods first identify a set of similar users, and then recommend top-N items based on what items those similar users have purchased. Similarly, item-based k-nearest-neighbor (itemkNN) collaborative filtering methods first identify a set of similar items for each of the items that the user has purchased, and then recommend top-N items based on those similar items. The user/item similarity is calculated from the user-item purchase/rating matrix in a collaborative filtering fashion, with some similarity measure (e.g., Pearson correlation, cosine similarity) applied. One advantage of the item-based methods is that they are efficient at generating recommendations, because the item neighborhood is sparse. However, they suffer from low accuracy, since essentially no knowledge about item characteristics is learned that could produce accurate top-N recommendations.
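The itemkNN scheme described above (cosine similarity between item columns, a sparse k-item neighborhood, aggregation over the user's purchases) can be sketched as follows, assuming a dense NumPy user-item matrix; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def itemknn_topn(A, user, k=5, n=10):
    """Sketch of item-based kNN top-N recommendation with cosine
    similarity. A is a (users x items) purchase/rating matrix."""
    # cosine similarity between item columns of A
    norms = np.linalg.norm(A, axis=0)
    norms[norms == 0] = 1.0          # guard against empty items
    S = (A.T @ A) / np.outer(norms, norms)
    np.fill_diagonal(S, 0.0)         # an item is not its own neighbor
    # keep only the k most similar items per item (sparse neighborhood)
    for j in range(S.shape[1]):
        keep = np.argsort(S[:, j])[-k:]
        mask = np.ones(S.shape[0], dtype=bool)
        mask[keep] = False
        S[mask, j] = 0.0
    # score items by aggregating similarities over the user's profile
    scores = A[user] @ S
    scores[A[user] > 0] = -np.inf    # exclude already-purchased items
    return np.argsort(scores)[::-1][:n]
```

The neighborhood sparsification step is what makes recommendation generation fast, at the cost of discarding all but the k strongest similarities per item, which is the quality trade-off noted above.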
- This work was supported in part by NSF (IIS-0905220, OCI-1048018, and IOS-0820730) and the Digital Technology Center at the University of Minnesota
- F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor, Eds., Recommender Systems Handbook. Springer, 2011.
- M. Deshpande and G. Karypis, “Item-based top-n recommendation algorithms,” ACM Transactions on Information Systems, vol. 22, pp. 143–177, January 2004.
- P. Cremonesi, Y. Koren, and R. Turrin, “Performance of recommender algorithms on top-n recommendation tasks,” in Proceedings of the fourth ACM conference on Recommender systems, ser. RecSys ’10. New York, NY, USA: ACM, 2010, pp. 39–46.
- R. Pan, Y. Zhou, B. Cao, N. N. Liu, R. Lukose, M. Scholz, and Q. Yang, “One-class collaborative filtering,” in Proceedings of the 2008 Eighth IEEE International Conference on Data Mining. Washington, DC, USA: IEEE Computer Society, 2008, pp. 502–511.
- Y. Hu, Y. Koren, and C. Volinsky, “Collaborative filtering for implicit feedback datasets,” in Proceedings of the 2008 Eighth IEEE International Conference on Data Mining. Washington, DC, USA: IEEE Computer Society, 2008, pp. 263–272.
- J. D. M. Rennie and N. Srebro, “Fast maximum margin matrix factorization for collaborative prediction,” in Proceedings of the 22nd international conference on Machine learning, ser. ICML ’05. New York, NY, USA: ACM, 2005, pp. 713–719.
- N. Srebro, J. D. M. Rennie, and T. S. Jaakkola, “Maximum-margin matrix factorization,” in Advances in Neural Information Processing Systems 17. MIT Press, 2005, pp. 1329–1336.
- V. Sindhwani, S. S. Bucak, J. Hu, and A. Mojsilovic, “One-class matrix completion with low-density factorizations,” in Proceedings of the 2010 IEEE International Conference on Data Mining, ser. ICDM ’10. Washington, DC, USA: IEEE Computer Society, 2010, pp. 1055–1060.
- T. Hofmann, “Latent semantic models for collaborative filtering,” ACM Trans. Inf. Syst., vol. 22, pp. 89–115, January 2004.
- Y. Koren, “Factorization meets the neighborhood: a multifaceted collaborative filtering model,” in Proceeding of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, ser. KDD ’08. New York, NY, USA: ACM, 2008, pp. 426–434.
- S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, “BPR: Bayesian personalized ranking from implicit feedback,” in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, ser. UAI ’09. Arlington, Virginia, United States: AUAI Press, 2009, pp. 452–461.
- R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society (Series B), vol. 58, pp. 267–288, 1996.
- H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal Of The Royal Statistical Society Series B, vol. 67, no. 2, pp. 301–320, 2005.
- J. H. Friedman, T. Hastie, and R. Tibshirani, “Regularization paths for generalized linear models via coordinate descent,” Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.
- A. Paterek, “Improving regularized singular value decomposition for collaborative filtering,” Statistics, pp. 2–5, 2007.
- F. Bach, J. Mairal, and J. Ponce, “Convex sparse matrix factorizations,” CoRR, vol. abs/0812.1869, 2008.
- J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online learning for matrix factorization and sparse coding,” J. Mach. Learn. Res., vol. 11, pp. 19–60, March 2010.
- P. O. Hoyer, “Non-negative matrix factorization with sparseness constraints,” Journal of Machine Learning Research, vol. 5, pp. 1457–1469, December 2004.