How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs
ISWC (1), pp. 38-56, 2019.
Model-based approaches to recommendation can recommend items with a very high level of accuracy. Unfortunately, even when the model embeds content-based information, moving to a latent space loses the reference to the actual semantics of the recommended items. Consequently, the interpretation of a recommendation process becomes non-trivial…
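For context, the latent space the abstract refers to is the one learned by a factorization machine. A minimal sketch of a second-order factorization machine prediction is shown below; this is a generic textbook formulation, not the paper's knowledge-graph-informed model, and all names here (`fm_predict`, the toy inputs) are illustrative assumptions.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Generic second-order factorization machine prediction (illustrative):
    y = w0 + w.x + sum_{i<j} <V_i, V_j> x_i x_j,
    computed via the standard O(n*k) reformulation
    0.5 * sum_f ((V^T x)_f^2 - sum_i V_{if}^2 x_i^2).
    The rows of V are the latent factors whose semantics the paper
    aims to make interpretable via knowledge graphs."""
    linear = w0 + w @ x
    vx = V.T @ x                                   # shape (k,)
    pairwise = 0.5 * np.sum(vx ** 2 - (V ** 2).T @ (x ** 2))
    return linear + pairwise

# Toy example: two active features with latent vectors [1, 0] and [2, 0];
# the pairwise term is <V_0, V_1> * x_0 * x_1 = 2.
y = fm_predict(np.array([1.0, 1.0]), 0.0, np.zeros(2),
               np.array([[1.0, 0.0], [2.0, 0.0]]))
```

In such a model each latent dimension of `V` has no a priori meaning, which is the interpretability gap the paper addresses by tying factors to knowledge-graph entities.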