The influence of Large Language Models on systematic review and research dissemination





This presentation delves into the transformative role of AI in scholarly communication, highlighting its potential, implications, and challenges, and addressing the ethical considerations that accompany it.

Recent advancements in AI, specifically large language models, have unlocked new possibilities for scientific exploration and communication. Large language models such as GPT-4 and LLaMA, with their remarkable text-generation capabilities, stand at the forefront of this AI revolution. In the first part of the presentation, I examine how these AI tools are reshaping the nature of systematic reviews. Their ability to analyze, summarize, and generate large amounts of text allows these models to support more efficient screening and synthesis, offering a valuable tool to researchers navigating vast databases of published work.
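To illustrate the kind of screening support described above, the sketch below assembles a title/abstract screening prompt for an LLM. It is a minimal illustration, not part of the presentation itself; the inclusion criteria, prompt wording, and function name are hypothetical examples rather than the protocol of any actual review.

```python
# Minimal sketch: building a title/abstract screening prompt for an LLM.
# All criteria and wording below are hypothetical examples.

def build_screening_prompt(title: str, abstract: str, criteria: list[str]) -> str:
    """Return a prompt asking a model to judge one record against inclusion criteria."""
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are assisting with title/abstract screening for a systematic review.\n"
        "Inclusion criteria:\n"
        f"{criteria_block}\n\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n\n"
        "Answer INCLUDE, EXCLUDE, or UNSURE, with a one-sentence reason."
    )

prompt = build_screening_prompt(
    title="LLM-assisted screening in evidence synthesis",
    abstract="We evaluate a large language model for abstract screening.",
    criteria=["Empirical study", "Uses a large language model", "Published 2020 or later"],
)
print(prompt)
```

The resulting string would then be sent to whichever model the review team uses (for instance a locally hosted LLaMA instance or the GPT-4 API), with the model's answers logged and verified by human screeners rather than accepted automatically.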

I will then discuss how AI is driving new developments in research methodology. Through predictive modelling and advanced analytics, AI tools such as GPT-4 allow for a deeper understanding of existing research and the identification of gaps in the literature, thereby promoting innovative research approaches. These advancements, however, call for updated ethical frameworks, a topic I will also address.

Issues related to AI use include transparency and accountability: the “black box” nature of deep learning models obscures how outputs are produced, and without appropriate interpretability mechanisms, models such as GPT-4 or LLaMA can generate inaccurate yet plausible-sounding information as a by-product of their purely predictive text generation.

Nevertheless, these models have proved useful, and with careful prompt tuning, publicly available models can be of great value to researchers. I will delve into how to balance the benefits of AI tools against the need to maintain high ethical standards in research, offering possible insights into how ethical frameworks might be updated to accommodate the new realities of AI.

Furthermore, I will reflect on the consequences of AI for research evaluation. While AI can aid in the quick assessment of a paper's relevance or novelty, questions remain about its capacity to fully evaluate the quality and significance of research. This discussion emphasizes the need, at least for the time being, to blend AI models with human expertise for robust research evaluation, or to further train models for specific evaluation use cases.

I will conclude with a reflection on the overall impact of integrating LLMs into systematic reviews and research dissemination. While acknowledging the transformative potential of AI in reshaping the scientific landscape, I underscore the need for careful navigation of the associated challenges and ethical implications.



Author Biography

Simon Baradziej, UiT The Arctic University of Norway

Simon Baradziej is a PhD student at UiT The Arctic University of Norway, working closely with AI, interpretability, and both local and cloud-based LLMs (such as LLaMA and GPT-4) for educational adaptability. His current focus is AI-based adaptability in simulation-based environments within an EU project at the Department of Technology and Safety at UiT. Simon has more than 10 years of practical experience in fields related to AI, with prior education in IT Security, Computer Science, Industrial Economics, and Innovation.



How to Cite

Baradziej, S. (2023). The influence of Large Language Models on systematic review and research dissemination. Septentrio Conference Series, (1).