Details
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 65-97 |
| Number of pages | 33 |
| Journal | User Modeling and User-Adapted Interaction |
| Volume | 19 |
| Issue number | 1-2 SPEC. ISS. |
| Early online date | 18 Jul 2008 |
| Publication status | Published - Feb 2009 |
Abstract
Collaborative filtering systems are essentially social systems which base their recommendations on the judgment of a large number of people. However, like other social systems, they are vulnerable to manipulation by malicious actors. Lies and propaganda may be spread by a malicious user who has an interest in promoting an item or in downplaying the popularity of another. By doing this systematically, either with multiple identities or by involving more people, malicious votes and profiles can be injected into a collaborative recommender system. As previous work has shown, this can significantly affect the robustness of a system or algorithm. While current detection algorithms exploit certain characteristics of shilling profiles to detect them, they suffer from low precision and require a large amount of training data. In this work, we provide an in-depth analysis of shilling profiles and describe new approaches to detecting malicious collaborative filtering profiles. In particular, we exploit the similarity structure in shilling user profiles to separate them from normal user profiles using unsupervised dimensionality reduction. We present two detection algorithms, one based on PCA and the other on PLSA. Experimental results show much improved detection precision over existing methods, without the additional training time required by supervised approaches. Finally, we present a novel and highly effective robust collaborative filtering algorithm that builds on the ideas of the PCA-based detection algorithm.
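To make the PCA-based idea concrete, the following is a minimal sketch, not the authors' exact algorithm: user profiles are z-scored, a user-user covariance matrix is computed, and the users with the smallest absolute coefficients on the leading principal components are flagged as suspect shilling profiles (one reading of PCA-driven variable selection over users). The function name `pca_suspects`, the treatment of missing votes as zeros, and the parameters `n_components` and `n_flagged` are illustrative assumptions.

```python
# Illustrative sketch only (assumed details, not the paper's exact procedure):
# flag users whose profiles contribute least to the leading principal
# components of the user-user covariance matrix.
import numpy as np

def pca_suspects(ratings, n_components=3, n_flagged=50):
    """ratings: (n_users, n_items) array; missing votes kept as 0 for simplicity."""
    X = ratings.astype(float)
    # z-score each user profile so PCA captures co-variation, not rating scale
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-9)

    # covariance between users (np.cov treats rows as variables)
    cov = np.cov(X)                              # shape: (n_users, n_users)
    _, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    top = eigvecs[:, -n_components:]             # loadings on the top components

    # users contributing least to the dominant directions of variation are
    # treated as suspect: shilling profiles correlate strongly with each other
    # but weakly with the bulk of genuine users
    contribution = np.abs(top).sum(axis=1)
    return np.argsort(contribution)[:n_flagged]

# interface example on random data; on real rating data with injected attack
# profiles the flagged set is expected to concentrate on the attackers
demo = np.random.default_rng(0).integers(0, 6, size=(500, 300))
print(pca_suspects(demo, n_flagged=10))
```

In practice the number of flagged users would be tied to the expected attack size or derived from the distribution of coefficients; the fixed `n_flagged` above is purely for illustration.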
Keywords
- Collaborative filtering, Dimensionality reduction, PCA, PLSA, Robust statistics, Shilling
ASJC Scopus subject areas
- Social Sciences (all)
- Education
- Computer Science (all)
- Human-Computer Interaction
- Computer Science Applications
Cite this
Mehta, B., & Nejdl, W. (2009). Unsupervised strategies for shilling detection and robust collaborative filtering. User Modeling and User-Adapted Interaction, 19(1-2 SPEC. ISS.), 65-97. https://doi.org/10.1007/s11257-008-9050-4
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Unsupervised strategies for shilling detection and robust collaborative filtering
AU - Mehta, Bhaskar
AU - Nejdl, Wolfgang
PY - 2009/2
Y1 - 2009/2
AB - Collaborative filtering systems are essentially social systems which base their recommendation on the judgment of a large number of people. However, like other social systems, they are also vulnerable to manipulation by malicious social elements. Lies and Propaganda may be spread by a malicious user who may have an interest in promoting an item, or downplaying the popularity of another one. By doing this systematically, with either multiple identities, or by involving more people, malicious user votes and profiles can be injected into a collaborative recommender system. This can significantly affect the robustness of a system or algorithm, as has been studied in previous work. While current detection algorithms are able to use certain characteristics of shilling profiles to detect them, they suffer from low precision, and require a large amount of training data. In this work, we provide an in-depth analysis of shilling profiles and describe new approaches to detect malicious collaborative filtering profiles. In particular, we exploit the similarity structure in shilling user profiles to separate them from normal user profiles using unsupervised dimensionality reduction. We present two detection algorithms; one based on PCA, while the other uses PLSA. Experimental results show a much improved detection precision over existing methods without the usage of additional training time required for supervised approaches. Finally, we present a novel and highly effective robust collaborative filtering algorithm which uses ideas presented in the detection algorithms using principal component analysis.
KW - Collaborative filtering
KW - Dimensionality reduction
KW - PCA
KW - PLSA
KW - Robust statistics
KW - Shilling
UR - http://www.scopus.com/inward/record.url?scp=58849149412&partnerID=8YFLogxK
U2 - 10.1007/s11257-008-9050-4
DO - 10.1007/s11257-008-9050-4
M3 - Article
AN - SCOPUS:58849149412
VL - 19
SP - 65
EP - 97
JO - User Modeling and User-Adapted Interaction
JF - User Modeling and User-Adapted Interaction
SN - 0924-1868
IS - 1-2 SPEC. ISS.
ER -