Scalable Collaborative Filtering with Jointly Derived Neighborhood Interpolation Weights
ICDM, pp. 43–52, 2007
Recommender systems based on collaborative filtering predict user preferences for products or services by learning past user-item relationships. A predominant approach to collaborative filtering is neighborhood based ("k-nearest neighbors"), where a user-item preference rating is interpolated from ratings of similar items and/or users. […]
- Recommender systems analyze patterns of user interest in items or products to provide personalized recommendations of items that will suit a user’s taste.
- Their excellent ability to characterize and recommend items within huge collections represents a computerized alternative to human recommendations.
- Content-based strategies require gathering external information that might not be available or easy to collect.
- We focus on an alternative strategy, known as Collaborative Filtering (CF), which relies only on past user behavior—e.g., their previous transactions or product ratings—and does not require the creation of explicit profiles.
- Collaborative filtering through neighborhood-based interpolation (“kNN”) is probably the most popular way to create a recommender system. The success of these methods depends on the choice of the interpolation weights, which are used to estimate unknown ratings from neighboring known ones.
- In this work we showed how the interpolation weights can be computed as a global solution to an optimization problem that precisely reflects their role
- Normalization is essential to kNN methods, as otherwise mixing ratings pertaining to different unnormalized users or items can produce inferior results
- The authors' strategy is to estimate one “effect” at a time, in sequence. At each step, the authors use residuals from the previous step as the dependent variable for the current step.
- After the first step, the values r_ui refer to residuals, rather than raw ratings.
- For each of the effects mentioned above, the goal is to estimate either one parameter for each item or one parameter for each user.
- For the rest of this subsection, the authors describe the methods for estimating user-specific parameters; the method for items is perfectly analogous.
- The authors denote by xui the explanatory variable of interest corresponding to user u and item i.
- The two other components, namely data normalization and interpolation weights, have proved vital to the success of the scheme.
- The authors revisit these two components and suggest novel methods to significantly improve the accuracy of kNN approaches without meaningfully affecting running time
- This work offered a comprehensive approach to data normalization, fitting 10 effects that can be readily observed in user-item rating data
- Those effects cause substantial data variability and mask the fundamental relationships between ratings.
- Their inclusion brings the ratings closer together and facilitates improved estimation accuracy.
- Table 1: RMSE for the Netflix Probe data after adding a series of global effects to the model. Two additional time interactions concentrate on the complementary movie viewpoint: for each movie, they model how its ratings change with time. Once again, the time variables are either the square root of days since the first rating of the movie, or the square root of days since the first rating by the user.
- Table 2: Comparing our interpolation scheme against conventional correlation-based interpolation, by reporting RMSEs on the Probe set. Various levels of data normalization (preprocessing) are shown, and different sizes of item-neighborhoods (K) are considered.
- Suggests a novel scheme for low-dimensional embedding of the users.
- Evaluates these methods on the Netflix dataset, where they deliver significantly better results than the commercial Netflix Cinematch recommender system.
- Identifies a set of neighboring items N(i; u) that other users tend to rate similarly to their rating of i.
- Presents a more comprehensive treatment of data normalization in Section 3
- Addresses the aforementioned issues of neighborhood based approaches, without compromising running time efficiency
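The one-effect-at-a-time normalization described above (estimate an effect, subtract it, feed the residuals to the next step) can be sketched as follows. This is an illustrative simplification: the shrinkage constant `alpha`, the `shrunk_effect` helper, and the toy ratings are assumptions for the example, not values from the paper.

```python
import numpy as np

def shrunk_effect(sums, counts, alpha=25.0):
    # Shrunk per-group estimate theta = sum / (n + alpha): groups with few
    # ratings are pulled toward 0. The constant alpha is hypothetical.
    return sums / (counts + alpha)

# Toy ratings as (user, item, rating) triples.
triples = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 2.0), (2, 1, 1.0)]
n_users, n_items = 3, 3
u_idx = np.array([t[0] for t in triples])
i_idx = np.array([t[1] for t in triples])
r = np.array([t[2] for t in triples])

# Effect 1: overall mean; afterwards `res` holds residuals, not raw ratings.
mu = r.mean()
res = r - mu

# Effect 2: one offset per user, fitted to the current residuals.
theta_u = shrunk_effect(np.bincount(u_idx, weights=res, minlength=n_users),
                        np.bincount(u_idx, minlength=n_users))
res = res - theta_u[u_idx]

# Effect 3: one offset per item, fitted to the new residuals.
theta_i = shrunk_effect(np.bincount(i_idx, weights=res, minlength=n_items),
                        np.bincount(i_idx, minlength=n_items))
res = res - theta_i[i_idx]
```

Each step uses the previous step's residuals as its dependent variable, so every additional effect can only explain variability that earlier effects left behind.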
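The jointly derived interpolation weights can be pictured as the solution of a single least-squares problem over all neighbor ratings at once, rather than fixing each weight separately from a pairwise correlation. The sketch below uses hypothetical toy data and plain least squares; the paper's actual formulation additionally handles missing ratings and constrains the weights, which is omitted here.

```python
import numpy as np

# Toy setup: 5 other users rated both the 3 neighbor items (columns of R)
# and the target item i (entries of r_i). All numbers are hypothetical.
R = np.array([[4., 3., 5.],
              [2., 2., 3.],
              [5., 4., 4.],
              [3., 3., 3.],
              [4., 5., 4.]])
r_i = np.array([4., 2., 5., 3., 4.])

# Derive all interpolation weights jointly: minimize || R w - r_i ||^2.
w, *_ = np.linalg.lstsq(R, r_i, rcond=None)

# Predict the active user's rating of item i from their own ratings of the
# same neighbor items.
r_u = np.array([5., 4., 4.])
prediction = r_u @ w
```

By construction, these jointly fitted weights match the observed ratings of item i at least as well as any fixed heuristic weighting (e.g., uniform weights) would; a constrained variant could swap in a nonnegative least-squares solver.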