Can ChatGPT Co-Author Your Study? (No, But It May Help with the Research.)

GW professors John Paul Helveston and Ryan Watkins helm an online repository documenting the use of large language models in scientific research.

August 28, 2023

Mock code for a large language model AI. (Adobe Stock, as shared in GW Today article)

Many industries, from entertainment to medicine, are wrestling with the emergence of sophisticated artificial intelligence (AI). Scientific research is no exception. Funding agencies are already cracking down on the use of generative text AIs like ChatGPT for peer review, citing the inconsistency of analysis produced by these algorithms, the opacity of their training models and other concerns.

But for two professors at the George Washington University, the advancing capabilities of large language models (LLMs), including OpenAI’s ChatGPT, Meta’s Llama 2 and Google’s Bard, are worth careful study—and cautious optimism.

John Paul Helveston, an assistant professor of engineering management and systems engineering in the School of Engineering and Applied Science, and Ryan Watkins, a professor and director of the Educational Technology Leadership program at the Graduate School of Education and Human Development, believe LLMs have the potential to streamline and enhance aspects of the scientific method, enabling a greater volume of useful, interesting research. To use them that way, though, researchers need better education about what these algorithms can and can’t do, how to use AI tools most effectively, and what standards and norms for using AI exist in their discipline.

Read the full article in GW Today >