Determinants of using AI-powered discovery services



In 2023, scholarly communities are witnessing a surge of Artificial Intelligence (AI) powered tools for scientific work. Scholars are tempted to integrate various time-saving AI applications into their workflows, from data analysis to disseminating research results. Among these novel “research assistants”, several enhanced discovery services apply machine learning to identify the most relevant results for the information seeker and to visualize them in innovative ways.

The rapid emergence of these tools has raised concerns about the impact of AI technology on scientific research and has led to demands for transparency, accountability, and explainability of new AI tools.

From a systems viewpoint, responsibility for the impact of technology extends beyond developers to the broader society. User communities, including librarians providing services for academia, are regarded as counterparts in shaping the effects of AI systems. Individuals decide how they behave with new information technology, for example, whether they trust a system and its output. Thus, an individual user is also part of the socio-technical evolution towards transparent, accountable, and explainable AI.

In this study, we explore the challenges of adopting AI tools in scientific research at the level of an individual librarian working for academia. We aim to detect poorly addressed mindsets around explainability, fairness, and privacy, called “blind spots” in AI ethics (Hagendorff, 2022). The goal is to understand the “determinants” of librarians’ information behavior with novel AI tools. We focus on two AI-powered visual discovery services that help users navigate and analyze research articles as concept graphs.

In this poster, our primary research question is: What are the determinants of librarians’ intentions when they adopt and use new AI-powered tools?

We conducted an expert evaluation (Tessmer, 1993) of these two discovery services, using the Theory of Planned Behavior (TPB) as a theoretical framework. TPB explains human behavior through three types of individual beliefs: attitudes, norms, and control. This framework helped us detect new “blind spots” among the behavioral determinants that have gone unnoticed in recent discourse about AI ethics in libraries.

Our study indicated a gap in the area of normative beliefs, a “blind spot”: the social pressure to adopt the newest technology quickly, combined with the lack of library-specific norms for using AI in academia, may become a handicap for an individual librarian contemplating whether or not to use an AI tool.

Author Biographies

Andrea Alessandro Gasparini, University of Oslo

Andrea Gasparini completed his PhD at the Department of Informatics, University of Oslo, Norway, in February 2020. His thesis addressed the use of Design Thinking and Service Design in academic libraries. Soon after, he held a part-time position in the same department as a senior lecturer for one year. In March 2022, he returned to that position, and since August 2022 he has also been acting leader of the Regenerative Technologies research group, which works in the context of energy and artificial intelligence. He has also worked for more than 20 years as a chief engineer at the University of Oslo Library.

Heli Kautonen, Finnish Literature Society

Heli Kautonen, PhD, works as Director of the Finnish Literature Society Library in Helsinki, Finland. Earlier, she worked on digital cultural heritage in various positions for over 15 years. She was one of the managers of the team at the National Library of Finland that built the Finna service, which brings together the collections of practically all Finnish archives, libraries, and museums. In her work she has also been involved in various activities as a university teacher, a communications team member, and a work package leader in a European Union project. Her formal education ranges from art and design to information technology. Her doctoral thesis, completed at the Aalto University School of Science, studied the strategic aspects of user-centred design (UCD) in public digital services. Her current research interests lie in applying human-centred design viewpoints to societal challenges in the age of digitalisation, particularly artificial intelligence in the context of research libraries.


References

Hagendorff, T. (2022). Blind spots in AI ethics. AI and Ethics, 2(4), 851–867.

Tessmer, M. (1993). Planning and conducting formative evaluations: Improving the quality of education and training. Kogan Page.



How to Cite

Gasparini, A. A., & Kautonen, H. (2023). Determinants of using AI-powered discovery services. Septentrio Conference Series, (1). Retrieved from