How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs

Vito Walter Anelli
Joseph Trotta

ISWC (1), pp. 38-56, 2019.


Abstract:

Model-based approaches to recommendation can recommend items with very high accuracy. Unfortunately, even when the model embeds content-based information, moving to a latent space loses the references to the actual semantics of the recommended items. Consequently, this makes the interpretation of a recommendation process non-trivial…
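The abstract refers to latent factor models such as factorization machines. As background, a minimal sketch of the standard FM scoring function (Rendle's formulation, not this paper's knowledge-graph-aware variant) is shown below; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Score a feature vector x with a factorization machine:
        y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j
    x: (n,) features; w0: global bias; w: (n,) linear weights;
    V: (n, k) latent factor matrix (one k-dim vector per feature).
    The pairwise term uses the standard O(k*n) identity:
        0.5 * sum_f [(sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2]."""
    linear = w0 + w @ x
    xv = V.T @ x                                  # (k,) per-factor sums
    pairwise = 0.5 * (np.sum(xv ** 2) - np.sum((V ** 2).T @ (x ** 2)))
    return linear + pairwise
```

It is exactly these latent vectors in `V` that, taken alone, carry no item semantics; the paper's contribution is to tie them back to knowledge-graph entities.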

Best Paper of ISWC, 2019