Reviews
------------------------------------------------
Paper ID: 225
Paper Title: One-Class Recommendation With Asymmetric Textual Feedback

REVIEWER #1

REVIEW QUESTIONS

1. The paper is relevant to SDM: Yes
2. Which of the following is the most accurate summary of the type of the work proposed in the paper: A new or improved algorithm, data structure or analysis for a known problem or a small variation of a known problem
3. Which of the following is the best characterization of the paper contribution: Incremental work [idea] but solid execution
4. Is the abstract an accurate summary of the paper: Yes
5. Does the paper properly contextualize the "Research Question" [e.g., through the Introduction and/or Related Work Section]: Yes
6. Which of the following is the most accurate description of the writing style of the paper: Unnecessarily verbose
7. Is the primary claimed contribution clearly reflected in the theoretical and/or experimental sections of the paper: No
8. Detailed Comments [Please be as specific as possible]
The paper proposes a recommendation model for implicit feedback. The main contribution is using text information to enhance the recommendation. Here, reviewing an item is regarded as positive feedback (even if the rating is low), whereas not reviewing an item is regarded as negative feedback (even if the rating could have been high). The model is an extension of BPR, with one modification: the latent factors are tied to the topic distribution of reviews. In some sense, it could be seen as an extension of HFT where, instead of the overall rating, the model now preserves the implicit ranking feedback. This extension is rather incremental.
The experiments are rather weak. In particular, the major comparison is to implicit feedback models without text. There are some comparisons to explicit models that use text, e.g., HFT-b and CAPRF-b, but I believe that this comparison is flawed. For one, the comparison is not apples-to-apples: whereas the implicit feedback models work with triples of (user, preferred item, non-preferred item), the explicit feedback models work with ratings that are binarized using some arbitrary threshold, even though the final evaluation goals are closer to implicit feedback. A fairer way to compare is to use the same input. For example, one could put the triples through models that generate scores from pairwise comparisons (e.g., TrueSkill, Bradley-Terry); these continuous scores could then be used as input to the explicit feedback models (see the sketch following this review). Overall, I find the experiments less than convincing to recommend acceptance.
9. Overall Rating: Reject
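For concreteness, below is a minimal sketch of the pairwise-to-continuous conversion suggested in Reviewer #1's comments, assuming the standard Bradley-Terry formulation P(i preferred over j) = sigmoid(s_i - s_j); the function name, hyperparameters, and toy data are hypothetical and illustrative only, not the paper's or the reviewer's implementation.

```python
import numpy as np

def fit_bradley_terry(pairs, n_items, lr=0.05, epochs=200, reg=0.01):
    """Fit one latent score per item from (winner, loser) index pairs.

    Models P(winner preferred over loser) = sigmoid(s_winner - s_loser) and
    runs gradient ascent on the L2-regularized log-likelihood.
    """
    s = np.zeros(n_items)
    pairs = np.asarray(pairs)
    w, l = pairs[:, 0], pairs[:, 1]
    for _ in range(epochs):
        # Probability currently assigned to each observed preference.
        p = 1.0 / (1.0 + np.exp(-(s[w] - s[l])))
        grad = np.zeros(n_items)
        np.add.at(grad, w, 1.0 - p)      # push preferred items up
        np.add.at(grad, l, -(1.0 - p))   # push non-preferred items down
        s += lr * (grad - reg * s)
    return s

# Hypothetical toy usage: item 0 is preferred over items 1 and 2, item 1 over 2.
scores = fit_bradley_terry([(0, 1), (0, 2), (1, 2)], n_items=3)
print(scores)
```

The resulting continuous scores could then serve as pseudo-ratings for the explicit feedback baselines (e.g., HFT), as the reviewer suggests.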
REVIEWER #2

REVIEW QUESTIONS

1. The paper is relevant to SDM: Yes
2. Which of the following is the most accurate summary of the type of the work proposed in the paper: A new or improved algorithm, data structure or analysis for a known problem or a small variation of a known problem
3. Which of the following is the best characterization of the paper contribution: Incremental work [idea] but solid execution
4. Is the abstract an accurate summary of the paper: Yes
5. Does the paper properly contextualize the "Research Question" [e.g., through the Introduction and/or Related Work Section]: Yes
6. Which of the following is the most accurate description of the writing style of the paper: Sharp and Precise
7. Is the primary claimed contribution clearly reflected in the theoretical and/or experimental sections of the paper: Yes
8. Detailed Comments [Please be as specific as possible]
In this paper, the authors propose to leverage the textual information from users to items for the one-class recommendation problem. The basic idea is to build a likelihood function upon the BPR model to capture the textual information. Experiments show the effectiveness of the proposed method.
Strong points:
- The paper is well motivated.
- Extensive experiments are conducted.
Weak points:
- It is hard to say that the proposed method is suitable for cold-start cases. In Table 3, take the comparison with BPR as an example: the improvement in the cold-start case is not always larger than that in the normal case.
9. Overall Rating: Accept

REVIEWER #3

REVIEW QUESTIONS

1. The paper is relevant to SDM: Yes
2. Which of the following is the most accurate summary of the type of the work proposed in the paper: A new or improved algorithm, data structure or analysis for a known problem or a small variation of a known problem
3. Which of the following is the best characterization of the paper contribution: Innovative work [idea] and solid execution
4. Is the abstract an accurate summary of the paper: Yes
5. Does the paper properly contextualize the "Research Question" [e.g., through the Introduction and/or Related Work Section]: Yes
6. Which of the following is the most accurate description of the writing style of the paper: Sharp and Precise
7. Is the primary claimed contribution clearly reflected in the theoretical and/or experimental sections of the paper: Yes
8. Detailed Comments [Please be as specific as possible]
The paper proposes a way to integrate textual feedback with implicit feedback, and a way to learn a rank-optimized recommender system using both of these types of data, which are very different from each other. The textual information and the ranking information derived from implicit feedback are glued together to form a single coherent model that incorporates both attributes. The optimization problem is formulated and solved via SGD using the Adam optimizer (an illustrative sketch follows this review).
The paper is very well written and has an extensive experimental evaluation, albeit only on two data sets; the number of data sets could certainly be expanded. The improvements shown by the proposed technique are quite significant; a 5% improvement is a very good improvement. On top of that, the authors have presented a quantitative analysis of their approach, which is very good too. The competitors used in the comparison are recent and suitable competitors for the proposed method.
9. Overall Rating: Accept
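For reference, here is a minimal sketch of a BPR-style pairwise objective optimized with Adam, as described in Reviewer #3's comments; the embedding sizes, learning rate, and toy triples are hypothetical, and the paper's actual model additionally ties item factors to the topic distribution of reviews (per Reviewer #1's summary), which is omitted here.

```python
import torch
import torch.nn as nn

# Plain matrix-factorization BPR trained with Adam (illustrative only).
n_users, n_items, dim = 100, 200, 16  # hypothetical sizes
user_emb = nn.Embedding(n_users, dim)
item_emb = nn.Embedding(n_items, dim)
opt = torch.optim.Adam(
    list(user_emb.parameters()) + list(item_emb.parameters()), lr=1e-3
)

def bpr_loss(u, i, j):
    """-log sigmoid(x_ui - x_uj) for (user, preferred item, non-preferred item)."""
    x_ui = (user_emb(u) * item_emb(i)).sum(-1)
    x_uj = (user_emb(u) * item_emb(j)).sum(-1)
    return -torch.nn.functional.logsigmoid(x_ui - x_uj).mean()

# Hypothetical toy batch of (user, positive item, negative item) triples.
u = torch.tensor([0, 1, 2])
i = torch.tensor([5, 6, 7])
j = torch.tensor([8, 9, 10])
for _ in range(10):        # a few Adam steps
    opt.zero_grad()
    loss = bpr_loss(u, i, j)
    loss.backward()
    opt.step()
```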
REVIEWER #4

REVIEW QUESTIONS

1. The paper is relevant to SDM: Yes
2. Which of the following is the most accurate summary of the type of the work proposed in the paper: A new or improved algorithm, data structure or analysis for a known problem or a small variation of a known problem
3. Which of the following is the best characterization of the paper contribution: Innovative work [idea] and solid execution
4. Is the abstract an accurate summary of the paper: Yes
5. Does the paper properly contextualize the "Research Question" [e.g., through the Introduction and/or Related Work Section]: Yes
6. Which of the following is the most accurate description of the writing style of the paper: Sharp and Precise
7. Is the primary claimed contribution clearly reflected in the theoretical and/or experimental sections of the paper: Yes
8. Detailed Comments [Please be as specific as possible]
The primary goal of this paper, i.e., building personalized ranking models from implicit feedback by resolving the asymmetry issue in textual feedback (p. 1), is quite interesting. The mathematical formulae are nicely organized in Sections 3 and 4. The insight on the role of textual information toward the end of Section 4 is convincing. Key papers on related recommendation research are listed in the References, which I found quite useful. The experiments in Section 5 are well designed, and the baseline methods used for comparison seem to be sufficient. I also liked the analysis on p. 8.
Minor comments:
- In p. 1, for "this is regarded", I prefer "it is regarded".
- In p. 3, "and depend on the user or the item" -> "and depends on the user or the item".
- In p. 5, use bigger fonts in Fig. 2.
- In p. 6, "This a large-scale dataset" -> "This is a large-scale dataset".
9. Overall Rating: Accept

Meta-Reviews
------------------------------------------------
Paper ID: 225
Paper Title: One-Class Recommendation With Asymmetric Textual Feedback

META-REVIEWER #1

META-REVIEW QUESTIONS

1. Overall Rating: Accept
2. Detailed Comments
This paper is well motivated and organized. The one-class recommendation problem with implicit feedback is interesting. The experiments are well organized and solid. I think this paper is ready for publication.