SpringerLink (Online service)
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Record Type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning/ edited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller.
Other Authors:
Samek, Wojciech.
Physical Description:
XI, 439 p. 152 illus., 119 illus. in color. : online resource.
Contained By:
Springer Nature eBook
Subject:
Artificial intelligence.
Electronic Resource:
https://doi.org/10.1007/978-3-030-28954-6
ISBN:
9783030289546
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
[electronic resource] / edited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller. - 1st ed. 2019. - XI, 439 p. 152 illus., 119 illus. in color. : online resource. - Lecture Notes in Artificial Intelligence ; 11700. - Lecture Notes in Artificial Intelligence ; 9285.
Towards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods.
The development of “intelligent” systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized into six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
ISBN: 9783030289546
Standard No.: 10.1007/978-3-030-28954-6 doi
Subjects--Topical Terms: Artificial intelligence.
LC Class. No.: Q334-342
Dewey Class. No.: 006.3
LDR
:04334nam a22004095i 4500
001
1010996
003
DE-He213
005
20200629164439.0
007
cr nn 008mamaa
008
210106s2019 gw | s |||| 0|eng d
020
$a
9783030289546
$9
978-3-030-28954-6
024
7
$a
10.1007/978-3-030-28954-6
$2
doi
035
$a
978-3-030-28954-6
050
4
$a
Q334-342
072
7
$a
UYQ
$2
bicssc
072
7
$a
COM004000
$2
bisacsh
072
7
$a
UYQ
$2
thema
082
0 4
$a
006.3
$2
23
245
1 0
$a
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
$h
[electronic resource] /
$c
edited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller.
250
$a
1st ed. 2019.
264
1
$a
Cham :
$b
Springer International Publishing :
$b
Imprint: Springer,
$c
2019.
300
$a
XI, 439 p. 152 illus., 119 illus. in color.
$b
online resource.
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
347
$a
text file
$b
PDF
$2
rda
490
1
$a
Lecture Notes in Artificial Intelligence ;
$v
11700
505
0
$a
Towards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods.
520
$a
The development of “intelligent” systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risks that come with giving up human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized into six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
650
0
$a
Artificial intelligence.
$3
559380
650
0
$a
Optical data processing.
$3
639187
650
0
$a
Computers.
$3
565115
650
0
$a
Computer security.
$3
557122
650
0
$a
Computer organization.
$3
596298
650
1 4
$a
Artificial Intelligence.
$3
646849
650
2 4
$a
Image Processing and Computer Vision.
$3
670819
650
2 4
$a
Computing Milieux.
$3
669921
650
2 4
$a
Systems and Data Security.
$3
677062
650
2 4
$a
Computer Systems Organization and Communication Networks.
$3
669309
700
1
$a
Samek, Wojciech.
$e
editor.
$1
https://orcid.org/0000-0002-6283-3265
$4
edt
$4
http://id.loc.gov/vocabulary/relators/edt
$3
1305121
700
1
$a
Montavon, Grégoire.
$e
editor.
$4
edt
$4
http://id.loc.gov/vocabulary/relators/edt
$3
1305122
700
1
$a
Vedaldi, Andrea.
$e
editor.
$1
https://orcid.org/0000-0003-1374-2858
$4
edt
$4
http://id.loc.gov/vocabulary/relators/edt
$3
1305123
700
1
$a
Hansen, Lars Kai.
$e
editor.
$1
https://orcid.org/0000-0003-0442-5877
$4
edt
$4
http://id.loc.gov/vocabulary/relators/edt
$3
1305124
700
1
$a
Müller, Klaus-Robert.
$e
editor.
$1
https://orcid.org/0000-0002-3861-7685
$4
edt
$4
http://id.loc.gov/vocabulary/relators/edt
$3
1261634
710
2
$a
SpringerLink (Online service)
$3
593884
773
0
$t
Springer Nature eBook
776
0 8
$i
Printed edition:
$z
9783030289539
776
0 8
$i
Printed edition:
$z
9783030289553
830
0
$a
Lecture Notes in Artificial Intelligence ;
$v
9285
$3
1253845
856
4 0
$u
https://doi.org/10.1007/978-3-030-28954-6
912
$a
ZDB-2-SCS
912
$a
ZDB-2-SXCS
912
$a
ZDB-2-LNC
950
$a
Computer Science (SpringerNature-11645)
950
$a
Computer Science (R0) (SpringerNature-43710)
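
The MARC view above is machine-readable, so the same fields can be extracted programmatically. Below is a minimal sketch, assuming the record has been exported from the catalogue to a local binary MARC file named record.mrc (a hypothetical filename) and that the pymarc Python library is installed; it reads back the title (245), ISBN (020), subject headings (650), and the electronic resource link (856) displayed in this record.

from pymarc import MARCReader

# Read a binary MARC file exported from the catalogue.
# "record.mrc" is a hypothetical local filename.
with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        title = record["245"]["a"]    # 245 $a: title proper
        isbn = record["020"]["a"]     # 020 $a: ISBN
        url = record["856"]["u"]      # 856 $u: electronic resource (DOI link)
        # 650 is repeatable; collect the $a of every subject heading
        subjects = [f["a"] for f in record.get_fields("650")]
        print(title)
        print(isbn)
        print(url)
        print("; ".join(subjects))

Run against this record, the sketch would print the title, ISBN 9783030289546, the DOI URL https://doi.org/10.1007/978-3-030-28954-6, and the subject headings from the 650 fields.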