Transparency and Interpretability for Learned Representations of Artificial Neural Networks
Record type:
Bibliographic - language material, printed : Monograph/item
Title/Author:
Transparency and Interpretability for Learned Representations of Artificial Neural Networks / by Richard Meyes.
Author:
Meyes, Richard.
Physical description:
XXI, 211 p., 73 illus., 70 illus. in color. Textbook for German language market. Online resource.
Contained By:
Springer Nature eBook
Subject:
Artificial Intelligence.
Electronic resource:
https://doi.org/10.1007/978-3-658-40004-0
ISBN:
9783658400040
LDR      02996nam a22003615i 4500
001      1085935
003      DE-He213
005      20221126205442.0
007      cr nn 008mamaa
008      221228s2022 gw | s |||| 0|eng d
020      $a 9783658400040 $9 978-3-658-40004-0
024 7    $a 10.1007/978-3-658-40004-0 $2 doi
035      $a 978-3-658-40004-0
050 4    $a Q325.5-.7
072 7    $a UYQM $2 bicssc
072 7    $a COM004000 $2 bisacsh
072 7    $a UYQM $2 thema
082 04   $a 006.31 $2 23
100 1    $a Meyes, Richard. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1392537
245 10   $a Transparency and Interpretability for Learned Representations of Artificial Neural Networks $h [electronic resource] / $c by Richard Meyes.
250      $a 1st ed. 2022.
264 1    $a Wiesbaden : $b Springer Fachmedien Wiesbaden : $b Imprint: Springer Vieweg, $c 2022.
300      $a XXI, 211 p. 73 illus., 70 illus. in color. Textbook for German language market. $b online resource.
336      $a text $b txt $2 rdacontent
337      $a computer $b c $2 rdamedia
338      $a online resource $b cr $2 rdacarrier
347      $a text file $b PDF $2 rda
505 0    $a Introduction -- Background & Foundations -- Methods and Terminology -- Related Work -- Research Studies -- Transfer Studies -- Critical Reflection & Outlook -- Summary.
520      $a Artificial intelligence (AI) is a concept whose meaning and perception have changed considerably over the last few decades. Starting with individual, purely theoretical research efforts in the 1950s, AI has grown into a fully developed modern research field and may arguably emerge as one of the most important technological advancements of humankind. Despite these rapid technological advancements, key questions concerning the transparency, interpretability, and explainability of an AI's decision-making remain unanswered. A young research field known by the general term Explainable AI (XAI) has therefore emerged from increasingly strict requirements for AI used in safety-critical or ethically sensitive domains. An important branch of XAI develops methods that facilitate a deeper understanding of the learned knowledge of artificial neural systems. This book presents a series of scientific studies that shed light on how to adopt an empirical, neuroscience-inspired approach to investigating a neural network's learned representations, in the same spirit as neuroscientific studies of the brain. About the author: Richard Meyes is head of the research group "Interpretable Learning Models" at the Institute of Technologies and Management of Digital Transformation at the University of Wuppertal. His current research focuses on the transparency and interpretability of decision-making processes of artificial neural networks.
650 24   $a Artificial Intelligence. $3 646849
650 24   $a Neuroscience. $3 569964
650 14   $a Machine Learning. $3 1137723
650 0    $a Artificial intelligence. $3 559380
650 0    $a Neurosciences. $3 593561
650 0    $a Machine learning. $3 561253
710 2    $a SpringerLink (Online service) $3 593884
773 0    $t Springer Nature eBook
776 08   $i Printed edition: $z 9783658400033
776 08   $i Printed edition: $z 9783658400057
856 40   $u https://doi.org/10.1007/978-3-658-40004-0
912      $a ZDB-2-SNA
950      $a Life Science and Basic Disciplines (German Language) (SpringerNature-11777)
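
A brief illustrative aside, not part of the catalogue record itself: the LDR line above is a MARC 21 leader, whose fixed character positions carry record-level metadata. The following minimal Python sketch (variable names are my own, not taken from any cataloguing library) decodes the leader shown in this record:

    # Minimal sketch: decode the fixed positions of the MARC 21 leader above.
    LEADER = "02996nam a22003615i 4500"

    record_length = int(LEADER[0:5])    # 2996 octets in the full record
    record_status = LEADER[5]           # 'n' = new record
    record_type   = LEADER[6]           # 'a' = language material
    bib_level     = LEADER[7]           # 'm' = monograph/item
    base_address  = int(LEADER[12:17])  # 361 = offset of the first data field

    print(record_length, record_status, record_type, bib_level, base_address)
    # -> 2996 n a m 361

In practice a MARC library such as pymarc handles this parsing (along with the directory and data fields that follow the leader); the sketch only illustrates the fixed-position layout.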