Silvia Tulli, David W. Aha
ISBNs: 978-1-032-39258-5, 978-1-032-40913-9, 978-1-003-35528-1; ASIN: B0CQ6BZB9Y
English | 2024 | PDF | 171 Pages
This book focuses on a subtopic of explainable AI (XAI) called
explainable agency (EA), which involves producing records of decisions
made during an agent’s reasoning, summarizing its behavior in
human-accessible terms, and providing answers to questions about
specific choices and the reasons for them. We distinguish explainable
agency from interpretable machine learning (IML), another branch of XAI
that focuses on providing insight (typically for an ML expert)
concerning a learned model and its decisions. In contrast, explainable
agency typically involves a broader set of AI-enabled techniques,
systems, and stakeholders (e.g., end users), where the explanations
provided by EA agents are best evaluated in the context of human subject
studies.
The chapters of this book explore the concept of
endowing intelligent agents with explainable agency, which is crucial
for agents to be trusted by humans in critical domains such as finance,
self-driving vehicles, and military operations. This book presents the
work of researchers from a variety of perspectives and describes
challenges, recent research results, lessons learned from applications,
and recommendations for future research directions in EA. Historical
perspectives on explainable agency and the importance of interactivity
in explainable systems are also discussed. Ultimately, this book aims to
contribute to the successful partnership between humans and AI systems.
Features:
Contributes to the topic of explainable artificial intelligence (XAI)
Focuses on the XAI subtopic of explainable agency
Includes an introductory chapter, a survey, and five other original contributions