How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs
ISWC (1), pp. 38-56, 2019.
Abstract:
Model-based approaches to recommendation can recommend items with a very high level of accuracy. Unfortunately, even when the model embeds content-based information, moving to a latent space loses the references to the actual semantics of the recommended items. Consequently, this makes non-trivial the interpretation of a recommendation pr…
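The idea named in the title can be made concrete with a minimal sketch: bind each latent dimension to one knowledge-graph feature by initializing the item factors from a binary item-feature matrix, so the learned weights stay readable as feature affinities. Everything below is an illustrative assumption rather than the authors' implementation: the toy DBpedia-style features, the biased dot-product score (a simplification of a full factorization machine), and the plain SGD loop are all hypothetical.

```python
import numpy as np

# Hypothetical toy setup: each latent dimension is tied to one
# knowledge-graph (predicate, object) feature, so a factor weight
# can be read back as "how much this feature explains the rating".
kg_features = ["dbo:director_dbr:Ridley_Scott",
               "dbo:genre_dbr:Science_fiction",
               "dbo:starring_dbr:Harrison_Ford"]

# Binary item-feature matrix from the knowledge graph:
# item_kg[i, f] = 1 iff item i has feature f.
item_kg = np.array([[1, 1, 1],   # e.g. Blade Runner
                    [0, 1, 0],   # another sci-fi film
                    [1, 0, 0]])  # another Ridley Scott film

n_users, (n_items, n_factors) = 2, item_kg.shape
rng = np.random.default_rng(0)

# Item latent factors are *initialized from* the KG features instead of
# random noise, so each dimension keeps its semantic meaning; user
# factors start small and learn how much each feature matters to them.
V_item = item_kg.astype(float)
V_user = rng.normal(scale=0.01, size=(n_users, n_factors))
b_user = np.zeros(n_users)
b_item = np.zeros(n_items)
mu = 0.0  # global offset, kept fixed at 0 for simplicity

def predict(u, i):
    """Biased dot-product score: global + biases + factor interaction."""
    return mu + b_user[u] + b_item[i] + V_user[u] @ V_item[i]

# Plain SGD on observed (user, item, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 2, 4.0)]
lr, reg = 0.05, 0.01
for _ in range(200):
    for u, i, r in ratings:
        err = r - predict(u, i)
        b_user[u] += lr * (err - reg * b_user[u])
        b_item[i] += lr * (err - reg * b_item[i])
        gu, gi = V_item[i].copy(), V_user[u].copy()
        V_user[u] += lr * (err * gu - reg * V_user[u])
        V_item[i] += lr * (err * gi - reg * V_item[i])

# Interpretation step: rank KG features by the learned user weights.
for f in np.argsort(-V_user[0]):
    print(f"{kg_features[f]}: {V_user[0][f]:+.3f}")
```

Because each factor column starts as a knowledge-graph feature indicator, a large positive weight in V_user[u] can be read directly as that user's affinity for the corresponding (predicate, object) pair, which is what keeps the latent factors interpretable.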
Best Paper of ISWC, 2019