Standards for Using Trustworthy AI Effectively in Exam Development
Discover how AI is transforming certification exams! This session explores the intersection of artificial intelligence (AI) and the development of trustworthy certification exams. We’ll start with an overview of the NIST AI Risk Management Framework (AI RMF) and how it can be applied to support the trustworthiness and fairness of AI-powered exam tools; the framework addresses bias, data security, and ethical considerations to strengthen the reliability and integrity of exams. Our second speaker will survey the legal landscape, covering AI-related laws and regulations, data privacy, intellectual property, and liability issues, along with practical strategies for mitigating legal risk. Finally, we’ll present research on using large language models (LLMs) such as GPT-4x, Claude, and Gemini to generate multiple-choice questions. We’ll explain how advanced techniques such as retrieval-augmented generation (RAG) and AI agents can improve these models’ output, and we’ll revisit our recent conclusion: despite their usefulness as assistants, these models are not yet replacements for human subject-matter experts.