Determinants of using AI-powered discovery services
In 2023, the scholarly community is witnessing a surge of Artificial Intelligence (AI) powered tools for scientific work. Scholars are tempted to integrate various time-saving AI applications into their workflows, from data analysis to the dissemination of research results. Among these novel “research assistants”, several enhanced discovery services apply machine learning to identify the most relevant results for the information seeker and present them to the user in innovative visualizations.
The rapid emergence of these tools has raised concerns about the impact of AI technology on scientific research and led to demands for transparency, accountability, and explainability in the new AI tools.
From a systems viewpoint, responsibility for the impact of technology extends beyond developers to the broader society. User communities, including librarians providing services for academia, are regarded as co-responsible for the effects of AI technology systems. Individuals decide how they behave with new information technology, for example, whether they trust the system and its output. Thus, an individual user is also part of the socio-technical evolution toward transparent, accountable, and explainable AI.
In this study, we explore the challenges of adopting AI tools in scientific research at the level of an individual librarian working for academia. We aim to detect poorly addressed mindsets around explainability, fairness, and privacy, termed “blind spots” in AI ethics (Hagendorff, 2022). The goal is to understand the “determinants” of librarians’ information behavior with novel AI tools. We focus on two AI-powered visual discovery services: openknowledgemaps.org and www.litmaps.com. These tools help users navigate and analyze research articles as concept graphs.
In this poster, our primary research question is: What are the determinants of librarians’ intentions when they adopt and use new AI-powered tools?
We conducted an expert evaluation (Tessmer, 1993) of these two discovery services, using the Theory of Planned Behavior (TPB) as a theoretical framework that explains human behavior through three kinds of individual beliefs: attitudes, norms, and control. This framework helped us detect new “blind spots” among the behavioral determinants that have remained unnoticed in recent discourse about AI ethics in libraries.
Our study indicated a gap in the area of normative beliefs, a “blind spot”: the social pressure to adopt the newest technology quickly and the lack of library-specific norms for using AI in academia may become an obstacle for an individual librarian who contemplates whether or not to use an AI tool.
Hagendorff, T. (2022). Blind spots in AI ethics. AI and Ethics, 2(4), 851–867. https://doi.org/10.1007/s43681-021-00122-8
Tessmer, M. (1993). Planning and conducting formative evaluations: Improving the quality of education and training. Kogan Page.
Copyright (c) 2023 Andrea Alessandro Gasparini, Heli Kautonen
This work is licensed under a Creative Commons Attribution 4.0 International License.