Dr. Watkins Receives Grant for Project "Operationalizing Trustworthy AI: LLM Development in Academia"

August 23, 2024


Dr. Ryan Watkins received NSF seed funding through the Trustworthy AI in Law and Society (TRAILS) initiative for a research project titled "Operationalizing Trustworthy AI: LLM Development in Academia."

Per the project abstract:

The project aims to examine how Trustworthy AI frameworks are operationalized in the development of customized applications that utilize open-source Large Language Models (LLMs). This will be achieved by monitoring and examining the development processes and design decisions of academic (student/faculty) teams working on LLM-based projects for research or classroom applications, with a specific focus on academic teams that have limited or no Computer Science background. The research will utilize a combination of the NIST Trustworthy AI Risk Management Framework, the CISA Secure by Design framework, and Open Source/Science principles in a quasi-experimental, mixed-methods research approach. The findings will enhance our knowledge of how trustworthy AI frameworks are operationalized by teams in academic settings.

In layman's terms:

Dr. Watkins will oversee a pilot in which five teams (made up of students, faculty, and/or staff) are recruited to develop an AI application for use in either a classroom or research context. The researchers will then monitor the teams as they build their applications to see if and how they apply frameworks for trustworthy, open, and secure design. During the study, they will conduct interviews, surveys, and focus groups, and collect artifacts (such as meeting notes and versions of the computer code) to examine how development frameworks are utilized in academic settings.