Determinants of using AI-powered discovery services
DOI: https://doi.org/10.7557/5.7164

Abstract
In 2023, scholarly communities are witnessing a spring of Artificial Intelligence (AI)-powered tools for scientific work. Scholars are tempted to integrate various time-saving AI applications into their workflows, from data analysis to disseminating research results. Among these novel “research assistants”, several enhanced discovery services apply machine learning to identify the results most relevant to the information seeker and visualize them for the user in innovative ways.
The rapid emergence of these tools has raised concerns about the impact of AI technology on scientific research and has led to demands for transparency, accountability, and explainability in new AI tools.
From a systems viewpoint, responsibility for the impact of technology extends beyond developers to the broader society. User communities, including librarians providing services for academia, are counterparts in shaping the effects of AI technology systems. Individuals decide how they behave with new information technology, for example, whether they trust a system and its output. Thus, an individual user is also part of the socio-technical evolution towards transparent, accountable, and explainable AI.
In this study, we explore the challenges of adopting AI tools in scientific research at the level of an individual librarian working for academia. We aim to detect poorly addressed mindsets around explainability, fairness, and privacy, called “blind spots” in AI ethics (Hagendorff, 2022). The goal is to understand the “determinants” of librarians’ information behavior with novel AI tools. We focus on two AI-powered visual discovery services, openknowledgemaps.org and www.litmaps.com, which help users navigate and analyze research articles as concept graphs.
In this poster, our primary research question is: What are the determinants of librarians’ intentions when they adopt and use new AI-powered tools?
We conducted an expert evaluation (Tessmer, 1993) of these two discovery services, using the Theory of Planned Behavior (TPB) as a theoretical framework. TPB explains human behavior through three types of individual beliefs: attitudes, norms, and perceived control. This framework helped us detect new “blind spots” among the behavioral determinants that have remained unnoticed in recent discourse about AI ethics in libraries.
Our study indicated a gap in the area of normative beliefs, a “blind spot”: the social pressure to adopt the newest technology quickly, together with the lack of library-specific norms for using AI in academia, may become a handicap for an individual librarian contemplating whether or not to use an AI tool.
References
Hagendorff, T. (2022). Blind spots in AI ethics. AI and Ethics, 2(4), 851–867. https://doi.org/10.1007/s43681-021-00122-8
Tessmer, M. (1993). Planning and conducting formative evaluations: Improving the quality of education and training. Kogan Page.
License
Copyright (c) 2023 Andrea Alessandro Gasparini, Heli Kautonen
This work is licensed under a Creative Commons Attribution 4.0 International License.