Validity, Reliability, and Significance = Empirical Methods for NLP and Data Science /
Record type: Bibliographic - language material, printed : Monograph/item
Title/Author: Validity, Reliability, and Significance / by Stefan Riezler, Michael Hagmann.
Other title: Empirical Methods for NLP and Data Science
Author: Riezler, Stefan.
Additional author: Hagmann, Michael.
Description: XVII, 147 p. online resource.
Contained by: Springer Nature eBook
Subject: Computational Linguistics.
Electronic resource: https://doi.org/10.1007/978-3-031-02183-1
ISBN: 9783031021831
Riezler, Stefan.
Validity, Reliability, and Significance [electronic resource] : Empirical Methods for NLP and Data Science / by Stefan Riezler, Michael Hagmann. - 1st ed. 2022. - XVII, 147 p. online resource. - (Synthesis Lectures on Human Language Technologies, 1947-4059).
Preface -- Acknowledgments -- Introduction -- Validity -- Reliability -- Significance -- Bibliography -- Authors' Biographies.
Empirical methods are means of answering methodological questions of the empirical sciences by statistical techniques. The methodological questions addressed in this book include the problems of validity, reliability, and significance. In the case of machine learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions by concrete statistical tests that can be applied to assess validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science. Our focus is on model-based empirical methods where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests such as a validity test that allows detecting circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on random effect parameters of LMEMs. Last, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of input data. This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science.
The book is self-contained, with an appendix on the mathematical background on GAMs and LMEMs, and with an accompanying webpage including R code to replicate experiments presented in the book.
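The significance test sketched in the abstract compares nested models fit to the per-item performance scores of two systems and refers the likelihood ratio to a chi-square distribution. As a simplified, illustrative sketch (not the book's method or its accompanying R code), the snippet below applies the same likelihood-ratio logic to nested ordinary least-squares models rather than LMEMs: the null model assumes one common mean score, the alternative adds a per-system mean. All scores are synthetic, made up purely for illustration.

```python
# Illustrative likelihood-ratio significance test on two systems' scores.
# Simplification: nested OLS models stand in for the nested LMEMs that the
# book actually uses; the synthetic scores below are hypothetical.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 200                                   # items scored per system
scores_a = rng.normal(0.70, 0.05, n)      # hypothetical system A scores
scores_b = rng.normal(0.72, 0.05, n)      # hypothetical system B scores

y = np.concatenate([scores_a, scores_b])
system = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = A, 1 = B

# Null model: a single grand mean for both systems.
rss0 = np.sum((y - y.mean()) ** 2)
# Alternative model: one mean per system (one extra parameter).
rss1 = sum(np.sum((y[system == s] - y[system == s].mean()) ** 2)
           for s in (0.0, 1.0))

# For Gaussian ML fits, 2 * (loglik1 - loglik0) = N * log(rss0 / rss1).
N = len(y)
lr = N * np.log(rss0 / rss1)
p_value = chi2.sf(lr, df=1)               # one extra parameter => 1 df
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```

A small p-value indicates that the difference between the two systems' mean scores is unlikely to be due to chance alone; the book's LMEM version additionally lets random effects absorb variation due to meta-parameter settings and input properties.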
ISBN: 9783031021831
Standard No.: 10.1007/978-3-031-02183-1 (doi)
Subjects--Topical Terms: Computational Linguistics. (670080)
LC Class. No.: Q334-342
Dewey Class. No.: 006.3
LDR 03571nam a22003975i 4500
001 1083726
003 DE-He213
005 20220601130751.0
007 cr nn 008mamaa
008 221228s2022 sz | s |||| 0|eng d
020 $a 9783031021831 $9 978-3-031-02183-1
024 7 $a 10.1007/978-3-031-02183-1 $2 doi
035 $a 978-3-031-02183-1
050 4 $a Q334-342
050 4 $a TA347.A78
072 7 $a UYQ $2 bicssc
072 7 $a COM004000 $2 bisacsh
072 7 $a UYQ $2 thema
082 0 4 $a 006.3 $2 23
100 1 $a Riezler, Stefan. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1389815
245 1 0 $a Validity, Reliability, and Significance $h [electronic resource] : $b Empirical Methods for NLP and Data Science / $c by Stefan Riezler, Michael Hagmann.
250 $a 1st ed. 2022.
264 1 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2022.
300 $a XVII, 147 p. $b online resource.
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
347 $a text file $b PDF $2 rda
490 1 $a Synthesis Lectures on Human Language Technologies, $x 1947-4059
505 0 $a Preface -- Acknowledgments -- Introduction -- Validity -- Reliability -- Significance -- Bibliography -- Authors' Biographies.
520 $a Empirical methods are means of answering methodological questions of the empirical sciences by statistical techniques. The methodological questions addressed in this book include the problems of validity, reliability, and significance. In the case of machine learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions by concrete statistical tests that can be applied to assess validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science. Our focus is on model-based empirical methods where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests such as a validity test that allows detecting circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on random effect parameters of LMEMs. Last, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of input data. This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science. The book is self-contained, with an appendix on the mathematical background on GAMs and LMEMs, and with an accompanying webpage including R code to replicate experiments presented in the book.
650 2 4 $a Computational Linguistics. $3 670080
650 2 4 $a Natural Language Processing (NLP). $3 1254293
650 1 4 $a Artificial Intelligence. $3 646849
650 0 $a Computational linguistics. $3 555811
650 0 $a Natural language processing (Computer science). $3 802180
650 0 $a Artificial intelligence. $3 559380
700 1 $a Hagmann, Michael. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1389816
710 2 $a SpringerLink (Online service) $3 593884
773 0 $t Springer Nature eBook
776 0 8 $i Printed edition: $z 9783031001949
776 0 8 $i Printed edition: $z 9783031010552
776 0 8 $i Printed edition: $z 9783031033117
830 0 $a Synthesis Lectures on Human Language Technologies, $x 1947-4059 $3 1389817
856 4 0 $u https://doi.org/10.1007/978-3-031-02183-1
912 $a ZDB-2-SXSC
950 $a Synthesis Collection of Technology (R0) (SpringerNature-85007)