Revisiting Positive and Negative Samples in Variational Autoencoders for Top-N Recommendation
Lecture Notes in Computer Science
Top-N recommendation is a common tool for discovering interesting items: it ranks items according to user preferences inferred from interaction history. Recommender systems often rely on implicit feedback because explicit preferences are hard to collect. Recent solutions simply treat all of a user's interacted items as equally important positives and label all non-interacted items as negatives. We argue that this annotation scheme for implicit feedback is over-simplified, since the feedback data is sparse and lacks fine-grained labels. To overcome this issue, we revisit the so-called positive and negative samples for Variational Autoencoders (VAEs). Based on our analysis and observations, we propose a self-adjusting credibility-weight mechanism to re-weight the positive samples, and we exploit higher-order relations in the item-item matrix to sample critical negatives. In addition, we abandon complex nonlinear structures and develop a simple yet effective VAE framework with a linear structure, which combines reconstruction loss terms for the positive samples and the critical negative samples. Extensive experiments conducted on four public real-world datasets demonstrate that our VAE++ outperforms other VAE-based models by a large margin.
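To make the negative-sampling idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation; the toy interaction matrix, scoring function, and all names are assumptions). It uses powers of the item-item co-occurrence matrix to capture higher-order item relations and ranks a user's non-interacted items by their relatedness to that user's positives, so that the most related non-interacted items are selected as "critical" (hard) negatives.

```python
import numpy as np

# Hypothetical toy implicit-feedback matrix R (users x items):
# R[u, i] = 1 iff user u interacted with item i.
R = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
], dtype=float)

# Item-item co-occurrence matrix; squaring it captures two-hop
# (higher-order) item-item relations.
C = R.T @ R
C2 = C @ C

def critical_negatives(user, k=2):
    """Return the k non-interacted items most related (via two-hop
    item-item relations) to the user's interacted items."""
    interacted = R[user] > 0
    # Score each item by its higher-order relatedness to the user's positives.
    scores = C2[:, interacted].sum(axis=1)
    scores[interacted] = -np.inf  # exclude the positives themselves
    return np.argsort(-scores)[:k]

print(critical_negatives(0))  # → [2 3]
```

Negatives sampled this way are informative precisely because they are close to the user's interests, which is what makes them "critical" for training.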
Collaborative Filtering, Implicit Feedback, Recommendation, Variational AutoEncoders
W. Liu et al., “Revisiting positive and negative samples in variational autoencoders for top-N recommendation,” Database Systems for Advanced Applications, pp. 563–573, 2023. doi:10.1007/978-3-031-30672-3_38