Many industries, from entertainment to medicine, are wrestling with the emergence of sophisticated artificial intelligence (AI). Scientific research is no exception. Funding agencies are already cracking down on the use of generative text AIs like ChatGPT for peer review, citing the inconsistency of analysis produced by these algorithms, the opacity of their training models and other concerns.
But for two professors at the George Washington University, the advancing capabilities of large language models (LLMs), including OpenAI’s ChatGPT, Meta’s Llama 2 and Google’s Bard, are worth careful study—and cautious optimism.
John Paul Helveston, an assistant professor of engineering management and systems engineering in the School of Engineering and Applied Science, and Ryan Watkins, a professor and director of the Educational Technology Leadership program at the Graduate School of Education and Human Development, believe LLMs have the potential to streamline and enhance aspects of the scientific method, enabling a greater volume of useful, interesting research. To use them that way, though, people need better education about what these algorithms can and can’t do, how to use AI tools most effectively, and what standards and norms for using AI exist in their discipline.