A comparison between Tsetlin machines and deep neural networks in the context of recommendation systems

Authors

  • Karl Audun Kagnes Borgersen University of Agder
  • Morten Goodwin University of Agder
  • Jivitesh Sharma University of Agder

DOI:

https://doi.org/10.7557/18.6807

Keywords:

Machine Learning, Neural Networks, Recommendation Systems, Tsetlin Machines, Interpretability, Explainability

Abstract

Recommendation Systems (RSs) are ubiquitous in modern society and form one of the largest points of interaction between humans and AI. Modern RSs are often implemented using deep learning models, which are notoriously difficult to interpret. This problem is particularly exacerbated in recommendation scenarios, as opacity erodes the user's trust in the RS. In contrast, the recently introduced Tsetlin Machine (TM) is inherently interpretable, a valuable property in this setting. TMs are still a fairly young technology, and since no RS has been built on TMs before, preliminary research into the practicality of such a system is needed. In this paper, we develop the first TM-based RS and evaluate its viability against machine learning models prevalent in the field of RSs. We train a TM and compare its performance with that of a vanilla feed-forward deep learning model in terms of model performance, interpretability/explainability, and scalability. Further, we provide benchmark comparisons against similar machine learning solutions relevant to RSs.
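The interpretability claim above rests on the structure of a TM's output: each learned clause is a conjunction of Boolean literals, and a class score is a signed sum of clause votes (Granmo, 2018). The sketch below is purely illustrative, not the paper's implementation; the clauses, feature indices, and their natural-language readings are hypothetical examples of binarized user/item features.

```python
# Illustrative sketch of why Tsetlin Machine outputs are interpretable:
# each clause is an AND over Boolean literals, and a class score is
# simply a signed sum of the votes of the clauses that fire.

def clause_fires(x, positive_lits, negated_lits):
    """A clause is a conjunction: x_i for each positive literal index
    and NOT x_i for each negated literal index."""
    return all(x[i] for i in positive_lits) and all(not x[i] for i in negated_lits)

def class_score(x, clauses):
    """Sum votes: +1 for each firing positive-polarity clause,
    -1 for each firing negative-polarity clause."""
    return sum(sign for sign, pos, neg in clauses if clause_fires(x, pos, neg))

# Hypothetical learned clauses over binarized user/item features:
# (polarity, positive literal indices, negated literal indices)
clauses = [
    (+1, [0, 2], []),   # e.g. "user bought jackets AND item is outerwear"
    (+1, [1], [3]),     # e.g. "frequent shopper AND NOT item out of stock"
    (-1, [], [0]),      # e.g. "NOT user bought jackets" votes against
]

x = [1, 1, 1, 0]  # one binarized input vector
print(class_score(x, clauses))
```

Because every vote traces back to a human-readable rule, the recommendation can be explained by listing the clauses that fired, which is the property the paper contrasts with post-hoc explanation methods for neural networks.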

References

K. D. Abeyrathna, B. Bhattarai, M. Goodwin, S. R. Gorji, O.-C. Granmo, L. Jiao, R. Saha, and R. K. Yadav. Massively parallel and asynchronous Tsetlin machine architecture supporting almost constant-time scaling. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 10–20. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/abeyrathna21a.html.

K. D. Abeyrathna, O.-C. Granmo, and M. Goodwin. Extending the Tsetlin machine with integer-weighted clauses for increased interpretability. IEEE Access, 9:8233–8248, 2021. doi: 10.1109/ACCESS.2021.3049569.

B. Bhattarai, O.-C. Granmo, and L. Jiao. Explainable Tsetlin machine framework for fake news detection with credibility score assessment. arXiv preprint arXiv:2105.09114, 2021. URL https://doi.org/10.48550/arXiv.2105.09114.

P. Covington, J. Adams, and E. Sargin. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 191–198, 2016. doi: 10.1145/2959100.2959190.

O.-C. Granmo. The Tsetlin Machine - A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic. arXiv preprint arXiv:1804.01508, 2018. URL https://arxiv.org/abs/1804.01508.

O.-C. Granmo. An introduction to Tsetlin machines. https://tsetlinmachine.org/, 2022. Online; accessed 21 September 2022.

O.-C. Granmo, S. Glimsdal, L. Jiao, M. Goodwin, C. W. Omlin, and G. T. Berge. The convolutional Tsetlin machine. arXiv preprint arXiv:1905.09688, 2019. URL https://arxiv.org/abs/1905.09688.

L. Guelman. Gradient boosting trees for auto insurance loss cost modeling and prediction. Expert Systems with Applications, 39(3):3659–3667, 2012. ISSN 0957-4174. doi: 10.1016/j.eswa.2011.09.058. URL https://www.sciencedirect.com/science/article/pii/S0957417411013674.

H&M Group. H&M Personalized Fashion Recommendations. https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/data, 2022. Online; accessed 2 September 2022.

D. Jannach, G. de Souza P. Moreira, and E. Oldridge. Why are deep learning models not consistently winning recommender systems competitions yet? A position paper. In Proceedings of the Recommender Systems Challenge 2020, RecSysChallenge '20, pages 44–49, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450388351. doi: 10.1145/3415959.3416001. URL https://doi.org/10.1145/3415959.3416001.

H. Ko, S. Lee, Y. Park, and A. Choi. A survey of recommendation systems: Recommendation models, techniques, and application fields. Electronics, 11(1), 2022. ISSN 2079-9292. doi: 10.3390/electronics11010141. URL https://www.mdpi.com/2079-9292/11/1/141.

S. M. Lundberg and S.-I. Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 2017. URL https://proceedings.neurips.cc/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf.

A. Nguyen, F. Krause, D. Hagenmayer, and M. Färber. Quantifying explanations of neural networks in e-commerce based on LRP. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 251–267. Springer, 2021. doi: 10.1007/978-3-030-86517-7_16.

M. T. Ribeiro, S. Singh, and C. Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144, 2016. doi: 10.1145/2939672.2939778.

C. Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019. doi: 10.1038/s42256-019-0048-x.

UCLA. How do I interpret odds ratios in logistic regression? https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faq-how-do-i-interpret-odds-ratios-in-logistic-regre, 2021. Accessed: 2022-09-12.

Published

2023-01-23