Shinmura, Shuichi.
New Theory of Discriminant Analysis After R. Fisher = Advanced Research by the Feature Selection Method for Microarray Data /
Record Type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
New Theory of Discriminant Analysis After R. Fisher / by Shuichi Shinmura.
Other Title:
Advanced Research by the Feature Selection Method for Microarray Data.
Author:
Shinmura, Shuichi.
Description:
XX, 208 p. 28 illus., 25 illus. in color. online resource.
Contained By:
Springer Nature eBook
Subject:
Statistics.
Electronic Resource:
https://doi.org/10.1007/978-981-10-2164-0
ISBN:
9789811021640
New Theory of Discriminant Analysis After R. Fisher [electronic resource] : Advanced Research by the Feature Selection Method for Microarray Data / by Shuichi Shinmura. - 1st ed. 2016. - XX, 208 p. 28 illus., 25 illus. in color. online resource.
1 New Theory of Discriminant Analysis -- 1.1 Introduction -- 1.2 Motivation for our Research -- 1.3 Discriminant Functions -- 1.4 Unresolved Problem (Problem 1) -- 1.5 LSD Discrimination (Problem 2) -- 1.6 Generalized Inverse Matrices (Problem 3) -- 1.7 K-fold Cross-validation (Problem 4) -- 1.8 Matroska Feature Selection Method (Problem 5) -- 1.9 Summary -- References -- 2 Iris Data and Fisher's Assumption -- 2.1 Introduction -- 2.2 Iris Data -- 2.3 Comparison of Seven LDFs -- 2.4 100-fold Cross-validation for Small Sample Method (Method 1) -- 2.5 Summary -- References -- 3 The Cephalo-Pelvic Disproportion (CPD) Data with Collinearity -- 3.1 Introduction -- 3.2 CPD Data -- 3.3 100-fold Cross-validation -- 3.4 Trial to Remove Collinearity -- 3.5 Summary -- References -- 4 Student Data and Problem 1 -- 4.1 Introduction -- 4.2 Student Data -- 4.3 100-fold Cross-validation for Student Data -- 4.4 Student Linearly Separable Data -- 4.5 Summary -- References -- 5 The Pass/Fail Determination using Exam Scores - A Trivial Linear Discriminant Function -- 5.1 Introduction -- 5.2 Pass/Fail Determination by Exam Scores Data in 2012 -- 5.3 Pass/Fail Determination by Exam Scores (50% Level in 2012) -- 5.4 Pass/Fail Determination by Exam Scores (90% Level in 2012) -- 5.5 Pass/Fail Determination by Exam Scores (10% Level in 2012) -- 5.6 Summary -- References -- 6 Best Model for the Swiss Banknote Data – Explanation 1 of Matroska Feature Selection Method (Method 2) -- 6.1 Introduction -- 6.2 Swiss Banknote Data -- 6.3 100-fold Cross-validation for Small Sample Method -- 6.4 Explanation 1 for Swiss Banknote Data -- 6.5 Summary -- References -- 7 Japanese Automobile Data – Explanation 2 of Matroska Feature Selection Method (Method 2) -- 7.1 Introduction -- 7.2 Japanese Automobile Data -- 7.3 100-fold Cross-validation (Method 1) -- 7.4 Matroska Feature Selection Method (Method 2) -- 7.5 Summary -- References -- 8 Matroska Feature Selection Method for Microarray Data (Method 2) -- 8.1 Introduction -- 8.2 Matroska Feature Selection Method (Method 2) -- 8.3 Results of the Golub et al. Dataset -- 8.4 How to Analyze the First BGS -- 8.5 Statistical Analysis of SM1 -- 8.6 Summary -- References -- 9 LINGO Program 1 of Method 1 -- 9.1 Introduction -- 9.2 Natural (Mathematical) Notation by LINGO -- 9.3 Iris Data in Excel -- 9.4 Six LDFs by LINGO -- 9.5 Discrimination of Iris Data by LINGO -- 9.6 How to Generate Re-sampling Samples and Prepare Data in Excel File -- 9.7 Set Model by LINGO -- Index.
This is the first book to compare eight LDFs on different types of datasets: Fisher's iris data, medical data with collinearities, Swiss banknote data, which is linearly separable data (LSD), student pass/fail determination using student attributes, 18 pass/fail determinations using exam scores, Japanese automobile data, and six microarray datasets (the datasets) that are LSD. We developed the 100-fold cross-validation for the small sample method (Method 1) instead of the LOO method. We proposed a simple model selection procedure to choose the best model having minimum M2, and Revised IP-OLDF based on the MNM criterion was found to be better than the other M2s on the above datasets. We compared two statistical LDFs and six MP-based LDFs: Fisher's LDF, logistic regression, three SVMs, Revised IP-OLDF, and two other OLDFs. Only a hard-margin SVM (H-SVM) and Revised IP-OLDF can discriminate LSD theoretically (Problem 2). We solved the defect of the generalized inverse matrices (Problem 3). For more than 10 years, many researchers have struggled to analyze microarray datasets, which are LSD (Problem 5). If we call the linearly separable model a "Matroska," the dataset consists of numerous smaller Matroskas inside it. We developed the Matroska feature selection method (Method 2), which reveals the surprising structure of the dataset: the disjoint union of several small Matroskas. Our theory and methods reveal new facts about gene analysis.
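The key property the abstract relies on can be illustrated with a small sketch: on linearly separable data (LSD), a discriminant with minimum number of misclassifications (MNM) of zero exists. This toy example uses a plain perceptron on made-up 2D data, not the book's Revised IP-OLDF or its LINGO programs; the dataset and function names are hypothetical.

```python
# Toy illustration: on linearly separable data, a perceptron converges to a
# linear discriminant that misclassifies nothing, i.e. MNM = 0.

# Hypothetical 2D samples: the two classes are separable by the line x + y = 5.
data = [((1.0, 1.0), -1), ((2.0, 1.5), -1), ((1.5, 2.0), -1),
        ((4.0, 4.0), +1), ((5.0, 3.5), +1), ((3.5, 5.0), +1)]

def train_perceptron(samples, epochs=100):
    """Return (w1, w2, b) such that sign(w1*x + w2*y + b) matches each label."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x, y), label in samples:
            if label * (w1 * x + w2 * y + b) <= 0:  # misclassified or on boundary
                # Standard perceptron update: nudge the hyperplane toward the sample.
                w1 += label * x
                w2 += label * y
                b += label
                errors += 1
        if errors == 0:  # a full error-free pass: the data are separated
            break
    return w1, w2, b

w1, w2, b = train_perceptron(data)
# Count the misclassifications of the trained discriminant (the MNM of this toy set).
mnm = sum(1 for (x, y), label in data if label * (w1 * x + w2 * y + b) <= 0)
print(mnm)  # 0 for linearly separable data
```

On non-separable data the same count would stay positive no matter the linear discriminant, which is why the book treats MNM = 0 as the defining signature of LSD.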
ISBN: 9789811021640
Standard No.: 10.1007/978-981-10-2164-0 (doi)
Subjects--Topical Terms:
Statistics.
LC Class. No.: QA276-280
Dewey Class. No.: 519.5
LDR
:05449nam a22003975i 4500
001
971080
003
DE-He213
005
20200701101653.0
007
cr nn 008mamaa
008
201211s2016 si | s |||| 0|eng d
020
$a
9789811021640
$9
978-981-10-2164-0
024
7
$a
10.1007/978-981-10-2164-0
$2
doi
035
$a
978-981-10-2164-0
050
4
$a
QA276-280
072
7
$a
PBT
$2
bicssc
072
7
$a
MAT029000
$2
bisacsh
072
7
$a
PBT
$2
thema
082
0 4
$a
519.5
$2
23
100
1
$a
Shinmura, Shuichi.
$4
aut
$4
http://id.loc.gov/vocabulary/relators/aut
$3
1116532
245
1 0
$a
New Theory of Discriminant Analysis After R. Fisher
$h
[electronic resource] :
$b
Advanced Research by the Feature Selection Method for Microarray Data /
$c
by Shuichi Shinmura.
250
$a
1st ed. 2016.
264
1
$a
Singapore :
$b
Springer Singapore :
$b
Imprint: Springer,
$c
2016.
300
$a
XX, 208 p. 28 illus., 25 illus. in color.
$b
online resource.
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
347
$a
text file
$b
PDF
$2
rda
505
0
$a
1 New Theory of Discriminant Analysis -- 1.1 Introduction -- 1.2 Motivation for our Research -- 1.3 Discriminant Functions -- 1.4 Unresolved Problem (Problem 1) -- 1.5 LSD Discrimination (Problem 2) -- 1.6 Generalized Inverse Matrices (Problem 3) -- 1.7 K-fold Cross-validation (Problem 4) -- 1.8 Matroska Feature Selection Method (Problem 5) -- 1.9 Summary -- References -- 2 Iris Data and Fisher's Assumption -- 2.1 Introduction -- 2.2 Iris Data -- 2.3 Comparison of Seven LDFs -- 2.4 100-fold Cross-validation for Small Sample Method (Method 1) -- 2.5 Summary -- References -- 3 The Cephalo-Pelvic Disproportion (CPD) Data with Collinearity -- 3.1 Introduction -- 3.2 CPD Data -- 3.3 100-fold Cross-validation -- 3.4 Trial to Remove Collinearity -- 3.5 Summary -- References -- 4 Student Data and Problem 1 -- 4.1 Introduction -- 4.2 Student Data -- 4.3 100-fold Cross-validation for Student Data -- 4.4 Student Linearly Separable Data -- 4.5 Summary -- References -- 5 The Pass/Fail Determination using Exam Scores - A Trivial Linear Discriminant Function -- 5.1 Introduction -- 5.2 Pass/Fail Determination by Exam Scores Data in 2012 -- 5.3 Pass/Fail Determination by Exam Scores (50% Level in 2012) -- 5.4 Pass/Fail Determination by Exam Scores (90% Level in 2012) -- 5.5 Pass/Fail Determination by Exam Scores (10% Level in 2012) -- 5.6 Summary -- References -- 6 Best Model for the Swiss Banknote Data – Explanation 1 of Matroska Feature Selection Method (Method 2) -- 6.1 Introduction -- 6.2 Swiss Banknote Data -- 6.3 100-fold Cross-validation for Small Sample Method -- 6.4 Explanation 1 for Swiss Banknote Data -- 6.5 Summary -- References -- 7 Japanese Automobile Data – Explanation 2 of Matroska Feature Selection Method (Method 2) -- 7.1 Introduction -- 7.2 Japanese Automobile Data -- 7.3 100-fold Cross-validation (Method 1) -- 7.4 Matroska Feature Selection Method (Method 2) -- 7.5 Summary -- References -- 8 Matroska Feature Selection Method for Microarray Data (Method 2) -- 8.1 Introduction -- 8.2 Matroska Feature Selection Method (Method 2) -- 8.3 Results of the Golub et al. Dataset -- 8.4 How to Analyze the First BGS -- 8.5 Statistical Analysis of SM1 -- 8.6 Summary -- References -- 9 LINGO Program 1 of Method 1 -- 9.1 Introduction -- 9.2 Natural (Mathematical) Notation by LINGO -- 9.3 Iris Data in Excel -- 9.4 Six LDFs by LINGO -- 9.5 Discrimination of Iris Data by LINGO -- 9.6 How to Generate Re-sampling Samples and Prepare Data in Excel File -- 9.7 Set Model by LINGO -- Index.
520
$a
This is the first book to compare eight LDFs on different types of datasets: Fisher's iris data, medical data with collinearities, Swiss banknote data, which is linearly separable data (LSD), student pass/fail determination using student attributes, 18 pass/fail determinations using exam scores, Japanese automobile data, and six microarray datasets (the datasets) that are LSD. We developed the 100-fold cross-validation for the small sample method (Method 1) instead of the LOO method. We proposed a simple model selection procedure to choose the best model having minimum M2, and Revised IP-OLDF based on the MNM criterion was found to be better than the other M2s on the above datasets. We compared two statistical LDFs and six MP-based LDFs: Fisher's LDF, logistic regression, three SVMs, Revised IP-OLDF, and two other OLDFs. Only a hard-margin SVM (H-SVM) and Revised IP-OLDF can discriminate LSD theoretically (Problem 2). We solved the defect of the generalized inverse matrices (Problem 3). For more than 10 years, many researchers have struggled to analyze microarray datasets, which are LSD (Problem 5). If we call the linearly separable model a "Matroska," the dataset consists of numerous smaller Matroskas inside it. We developed the Matroska feature selection method (Method 2), which reveals the surprising structure of the dataset: the disjoint union of several small Matroskas. Our theory and methods reveal new facts about gene analysis.
650
0
$a
Statistics .
$3
1253516
650
0
$a
Biostatistics.
$3
783654
650
1 4
$a
Statistical Theory and Methods.
$3
671396
650
2 4
$a
Statistics for Life Sciences, Medicine, Health Sciences.
$3
670172
650
2 4
$a
Statistics for Social Sciences, Humanities, Law.
$3
1211304
710
2
$a
SpringerLink (Online service)
$3
593884
773
0
$t
Springer Nature eBook
776
0 8
$i
Printed edition:
$z
9789811021633
776
0 8
$i
Printed edition:
$z
9789811021657
776
0 8
$i
Printed edition:
$z
9789811095467
856
4 0
$u
https://doi.org/10.1007/978-981-10-2164-0
912
$a
ZDB-2-SMA
912
$a
ZDB-2-SXMS
950
$a
Mathematics and Statistics (SpringerNature-11649)
950
$a
Mathematics and Statistics (R0) (SpringerNature-43713)