Explainable Human-AI Interaction = A Planning Perspective /
Record Type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Explainable Human-AI Interaction / by Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati.
Other Title:
A Planning Perspective
Author:
Sreedharan, Sarath.
Other Authors:
Kambhampati, Subbarao.
Description:
XX, 164 p. online resource.
Contained By:
Springer Nature eBook
Subject:
Mathematical Models of Cognitive Processes and Neural Networks.
Electronic Resource:
https://doi.org/10.1007/978-3-031-03767-2
ISBN:
9783031037672
Explainable Human-AI Interaction [electronic resource] : A Planning Perspective / by Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati. - 1st ed. 2022. - XX, 164 p. online resource. - (Synthesis Lectures on Artificial Intelligence and Machine Learning, 1939-4616).
Preface -- Acknowledgments -- Introduction -- Measures of Interpretability -- Explicable Behavior Generation -- Legible Behavior -- Explanation as Model Reconciliation -- Acquiring Mental Models for Explanations -- Balancing Communication and Behavior -- Explaining in the Presence of Vocabulary Mismatch -- Obfuscatory Behavior and Deceptive Communication -- Applications -- Conclusion -- Bibliography -- Authors' Biographies -- Index.
From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans—swinging between their augmentation and replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human‒AI interaction is that the AI systems' behavior be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world, and take into account the mental model of the human in the loop. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former will guide the agent to anticipate and manage the needs, desires and attention of the humans in the loop, and the latter allow it to act in ways that are interpretable to humans (by conforming to their mental models of it), and be ready to provide customized explanations when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models to either conform to human expectations or change those expectations through explanatory communication. While the focus of the book is on cooperative scenarios, it also covers how the same mental models can be used for obfuscation and deception. The book also describes several real-world application systems for collaborative decision-making that are based on the framework and techniques developed here. Although primarily driven by the authors' own research in these areas, every chapter will provide ample connections to relevant research from the wider literature. The technical topics covered in the book are self-contained and are accessible to readers with a basic background in AI.
ISBN: 9783031037672
Standard No.: 10.1007/978-3-031-03767-2 (doi)
Subjects--Topical Terms: Mathematical Models of Cognitive Processes and Neural Networks.
LC Class. No.: Q334-342
Dewey Class. No.: 006.3
LDR 03716nam a22003975i 4500
001 1086961
003 DE-He213
005 20220614201522.0
007 cr nn 008mamaa
008 221228s2022 sz | s |||| 0|eng d
020 $a 9783031037672 $9 978-3-031-03767-2
024 7 $a 10.1007/978-3-031-03767-2 $2 doi
035 $a 978-3-031-03767-2
050 4 $a Q334-342
050 4 $a TA347.A78
072 7 $a UYQ $2 bicssc
072 7 $a COM004000 $2 bisacsh
072 7 $a UYQ $2 thema
082 04 $a 006.3 $2 23
100 1 $a Sreedharan, Sarath. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1393862
245 10 $a Explainable Human-AI Interaction $h [electronic resource] : $b A Planning Perspective / $c by Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati.
250 $a 1st ed. 2022.
264 1 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2022.
300 $a XX, 164 p. $b online resource.
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
347 $a text file $b PDF $2 rda
490 1 $a Synthesis Lectures on Artificial Intelligence and Machine Learning, $x 1939-4616
505 0 $a Preface -- Acknowledgments -- Introduction -- Measures of Interpretability -- Explicable Behavior Generation -- Legible Behavior -- Explanation as Model Reconciliation -- Acquiring Mental Models for Explanations -- Balancing Communication and Behavior -- Explaining in the Presence of Vocabulary Mismatch -- Obfuscatory Behavior and Deceptive Communication -- Applications -- Conclusion -- Bibliography -- Authors' Biographies -- Index.
520 $a From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans—swinging between their augmentation and replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human‒AI interaction is that the AI systems' behavior be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world, and take into account the mental model of the human in the loop. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former will guide the agent to anticipate and manage the needs, desires and attention of the humans in the loop, and the latter allow it to act in ways that are interpretable to humans (by conforming to their mental models of it), and be ready to provide customized explanations when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models to either conform to human expectations or change those expectations through explanatory communication. While the focus of the book is on cooperative scenarios, it also covers how the same mental models can be used for obfuscation and deception. The book also describes several real-world application systems for collaborative decision-making that are based on the framework and techniques developed here. Although primarily driven by the authors' own research in these areas, every chapter will provide ample connections to relevant research from the wider literature. The technical topics covered in the book are self-contained and are accessible to readers with a basic background in AI.
650 24 $a Mathematical Models of Cognitive Processes and Neural Networks. $3 884110
650 24 $a Machine Learning. $3 1137723
650 14 $a Artificial Intelligence. $3 646849
650 0 $a Neural networks (Computer science). $3 1253765
650 0 $a Machine learning. $3 561253
650 0 $a Artificial intelligence. $3 559380
700 1 $a Kambhampati, Subbarao. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1393864
700 1 $a Kulkarni, Anagha. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1393863
710 2 $a SpringerLink (Online service) $3 593884
773 0 $t Springer Nature eBook
776 08 $i Printed edition: $z 9783031037771
776 08 $i Printed edition: $z 9783031037573
776 08 $i Printed edition: $z 9783031037870
830 0 $a Synthesis Lectures on Artificial Intelligence and Machine Learning, $x 1939-4616 $3 1393865
856 40 $u https://doi.org/10.1007/978-3-031-03767-2
912 $a ZDB-2-SXSC
950 $a Synthesis Collection of Technology (R0) (SpringerNature-85007)
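The MARC fields above follow a fixed layout: a three-character tag, optional indicators, then `$`-prefixed subfield codes (`$a`, `$b`, …). As an illustration only, a display line like those shown can be split into its tag and subfields with a few lines of plain Python (`parse_marc_line` is a hypothetical helper sketched here, not part of any MARC library):

```python
def parse_marc_line(line: str):
    """Split a textual MARC display line, e.g.
    '020 $a 9783031037672 $9 978-3-031-03767-2',
    into (tag, {subfield_code: value})."""
    head, _, rest = line.partition("$")
    tag = head.split()[0]  # leading field tag, e.g. '020'
    subfields = {}
    # Re-prefix '$' so every chunk after split() starts with its code letter.
    for chunk in ("$" + rest).split("$")[1:]:
        code, value = chunk[0], chunk[1:].strip()
        subfields[code] = value
    return tag, subfields

tag, sf = parse_marc_line("020 $a 9783031037672 $9 978-3-031-03767-2")
print(tag, sf["a"])  # 020 9783031037672
```

Note the sketch keeps only the last occurrence of a repeated subfield code (the `100` field above has two `$4` entries), and it assumes no literal `$` inside subfield values; a production parser for binary MARC would use a dedicated library instead.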