Automated Deep Learning Using Neural Network Intelligence = Develop and Design PyTorch and TensorFlow Models Using Python /
Record type: Language material, printed : Monograph/item
Title/Author: Automated Deep Learning Using Neural Network Intelligence / by Ivan Gridin.
Other title: Develop and Design PyTorch and TensorFlow Models Using Python
Author: Gridin, Ivan.
Collation: XVII, 384 p. 159 illus., 128 illus. in color. Online resource.
Contained by: Springer Nature eBook
Subject: Python.
Electronic resource: https://doi.org/10.1007/978-1-4842-8149-9
ISBN: 9781484281499
Automated Deep Learning Using Neural Network Intelligence [electronic resource] : Develop and Design PyTorch and TensorFlow Models Using Python / by Ivan Gridin. - 1st ed. 2022. - XVII, 384 p. 159 illus., 128 illus. in color. Online resource.
Chapter 1: Introduction to Neural Network Intelligence -- Chapter 2: Hyperparameter Optimization -- Chapter 3: Hyperparameter Optimization Under Shell -- Chapter 4: Multi-Trial Neural Architecture Search -- Chapter 5: One-Shot Neural Architecture Search -- Chapter 6: Model Pruning -- Chapter 7: NNI Recipes.
Optimize, develop, and design PyTorch and TensorFlow models for a specific problem using the Microsoft Neural Network Intelligence (NNI) toolkit. This book includes practical examples illustrating automated deep learning approaches and provides techniques to facilitate your deep learning model development. The first chapters cover the basics of NNI toolkit usage and methods for solving hyper-parameter optimization tasks. You will understand the black-box function maximization problem using NNI, and know how to prepare a TensorFlow or PyTorch model for hyper-parameter tuning, launch an experiment, and interpret the results. The book dives into optimization tuners and the search algorithms they are based on: evolution search, annealing search, and the Bayesian optimization approach. Neural Architecture Search is covered, and you will learn how to develop deep learning models from scratch. Multi-trial and one-shot search approaches to automatic neural network design are presented. The book teaches you how to construct a search space and launch an architecture search using the latest state-of-the-art exploration strategies: Efficient Neural Architecture Search (ENAS) and Differentiable Architecture Search (DARTS). You will learn how to automate the construction of a neural network architecture for a particular problem and dataset. The book also covers model compression and feature engineering methods that are essential in automated deep learning, and includes performance techniques that allow the creation of large-scale distributed training platforms using NNI. After reading this book, you will know how to use the full toolkit of automated deep learning methods. The techniques and practical examples presented here will allow you to bring your neural network routines to a higher level.
What You Will Learn:
- Know the basic concepts of optimization tuners, search space, and trials
- Apply different hyper-parameter optimization algorithms to develop effective neural networks
- Construct new deep learning models from scratch
- Execute the automated Neural Architecture Search to create state-of-the-art deep learning models
- Compress the model to eliminate unnecessary deep learning layers
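The tuning workflow the abstract describes (define a search space of hyper-parameter choices, run trials that each evaluate one configuration, keep the best result) can be sketched without the NNI toolkit itself. The search space, objective, and trial budget below are hypothetical stand-ins for what NNI's tuners automate:

```python
import random

# Hypothetical "choice"-style search space, in the spirit of the
# hyper-parameter spaces described above (the values are made up).
SEARCH_SPACE = {
    "lr": [0.1, 0.01, 0.001],
    "hidden_units": [32, 64, 128],
}

def objective(params):
    # Stand-in black-box objective. A real trial would train a
    # PyTorch or TensorFlow model and report validation accuracy;
    # this toy score simply peaks at lr=0.01, hidden_units=128.
    return params["hidden_units"] - 100 * abs(params["lr"] - 0.01)

def random_search(n_trials=20, seed=0):
    # Sample trial configurations and keep the best one: the basic
    # black-box maximization loop that real tuners refine with
    # smarter strategies (evolution, annealing, Bayesian optimization).
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        trial = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        score = objective(trial)
        if score > best_score:
            best_params, best_score = trial, score
    return best_params, best_score

if __name__ == "__main__":
    params, score = random_search()
    print(params, score)
```

With NNI, the loop itself disappears: the toolkit samples configurations from a declared search space, dispatches trials, and records the best result, which is what the experiments in the book's early chapters set up.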
ISBN: 9781484281499
Standard No.: 10.1007/978-1-4842-8149-9 (doi)
Subjects--Topical Terms: Python.
LC Class. No.: Q334-342
Dewey Class. No.: 006.3
LDR 03987nam a22004095i 4500
001 1087570
003 DE-He213
005 20221104145913.0
007 cr nn 008mamaa
008 221228s2022 xxu| s |||| 0|eng d
020    $a 9781484281499 $9 978-1-4842-8149-9
024 7  $a 10.1007/978-1-4842-8149-9 $2 doi
035    $a 978-1-4842-8149-9
050  4 $a Q334-342
050  4 $a TA347.A78
072  7 $a UYQ $2 bicssc
072  7 $a COM004000 $2 bisacsh
072  7 $a UYQ $2 thema
082 04 $a 006.3 $2 23
100 1  $a Gridin, Ivan. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1394615
245 10 $a Automated Deep Learning Using Neural Network Intelligence $h [electronic resource] : $b Develop and Design PyTorch and TensorFlow Models Using Python / $c by Ivan Gridin.
250    $a 1st ed. 2022.
264  1 $a Berkeley, CA : $b Apress : $b Imprint: Apress, $c 2022.
300    $a XVII, 384 p. 159 illus., 128 illus. in color. $b online resource.
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
347    $a text file $b PDF $2 rda
505 0  $a Chapter 1: Introduction to Neural Network Intelligence -- Chapter 2: Hyperparameter Optimization -- Chapter 3: Hyperparameter Optimization Under Shell -- Chapter 4: Multi-Trial Neural Architecture Search -- Chapter 5: One-Shot Neural Architecture Search -- Chapter 6: Model Pruning -- Chapter 7: NNI Recipes.
520    $a Optimize, develop, and design PyTorch and TensorFlow models for a specific problem using the Microsoft Neural Network Intelligence (NNI) toolkit. The first chapters cover the basics of NNI toolkit usage and methods for solving hyper-parameter optimization tasks, including optimization tuners and the search algorithms they are based on: evolution search, annealing search, and the Bayesian optimization approach. Neural Architecture Search is covered, with multi-trial and one-shot search approaches using Efficient Neural Architecture Search (ENAS) and Differentiable Architecture Search (DARTS), along with model compression, feature engineering, and large-scale distributed training with NNI.
650 24 $a Python. $3 1115944
650 24 $a Machine Learning. $3 1137723
650 14 $a Artificial Intelligence. $3 646849
650  0 $a Python (Computer program language). $3 1127623
650  0 $a Machine learning. $3 561253
650  0 $a Artificial intelligence. $3 559380
710 2  $a SpringerLink (Online service) $3 593884
773 0  $t Springer Nature eBook
776 08 $i Printed edition: $z 9781484281482
776 08 $i Printed edition: $z 9781484281505
776 08 $i Printed edition: $z 9781484290927
856 40 $u https://doi.org/10.1007/978-1-4842-8149-9
912    $a ZDB-2-CWD
912    $a ZDB-2-SXPC
950    $a Professional and Applied Computing (SpringerNature-12059)
950    $a Professional and Applied Computing (R0) (SpringerNature-43716)