A Derivative-free Two Level Random Search Method for Unconstrained Optimization
Record type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
A Derivative-free Two Level Random Search Method for Unconstrained Optimization / by Neculai Andrei.
Author:
Andrei, Neculai.
Physical description:
XI, 118 p. 14 illus., 13 illus. in color. online resource.
Contained By:
Springer Nature eBook
Subject:
Mathematical optimization.
Electronic resource:
https://doi.org/10.1007/978-3-030-68517-1
ISBN:
9783030685171
Andrei, Neculai.
A Derivative-free Two Level Random Search Method for Unconstrained Optimization
[electronic resource] / by Neculai Andrei. - 1st ed. 2021. - XI, 118 p. 14 illus., 13 illus. in color. online resource. - (SpringerBriefs in Optimization, 2191-575X).
ISBN: 9783030685171
Standard No.: 10.1007/978-3-030-68517-1 (doi)
Subjects--Topical Terms: Mathematical optimization.
LC Class. No.: QA402.5-402.6
Dewey Class. No.: 519.6
A Derivative-free Two Level Random Search Method for Unconstrained Optimization
LDR 03776nam a22003975i 4500
001 1049312
003 DE-He213
005 20211125185604.0
007 cr nn 008mamaa
008 220103s2021 sz | s |||| 0|eng d
020 $a 9783030685171 $9 978-3-030-68517-1
024 7 $a 10.1007/978-3-030-68517-1 $2 doi
035 $a 978-3-030-68517-1
050 4 $a QA402.5-402.6
072 7 $a PBU $2 bicssc
072 7 $a MAT003000 $2 bisacsh
072 7 $a PBU $2 thema
082 0 4 $a 519.6 $2 23
100 1 $a Andrei, Neculai. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1077311
245 1 2 $a A Derivative-free Two Level Random Search Method for Unconstrained Optimization $h [electronic resource] / $c by Neculai Andrei.
250 $a 1st ed. 2021.
264 1 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2021.
300 $a XI, 118 p. 14 illus., 13 illus. in color. $b online resource.
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
347 $a text file $b PDF $2 rda
490 1 $a SpringerBriefs in Optimization, $x 2191-575X
505 0 $a 1. Introduction -- 2. A Derivative-free Two Level Random Search Method for Unconstrained Optimization -- 3. Convergence of the Algorithm -- 4. Numerical Results -- 5. Conclusions -- Annex A. List of Applications -- Annex B. List of Test Functions -- Annex C. Detailed Results for 30 Large-Scale Problems -- Annex D. Detailed Results for 140 Problems.
520 $a The book is intended for graduate students and researchers in mathematics, computer science, and operational research. It presents a new derivative-free optimization method in which trial points are randomly generated in specified domains and the best ones are selected at each iteration by a number of rules. This method differs from many other well-established methods in the literature and proves competitive for solving many unconstrained optimization problems with different structures and complexities and with a relatively large number of variables. Intensive numerical experiments on 140 unconstrained optimization problems, with up to 500 variables, have shown that this approach is efficient and robust. The book is structured into four chapters. Chapter 1 is introductory. Chapter 2 presents a two-level derivative-free random search method for unconstrained optimization; it is assumed that the minimizing function is continuous, lower bounded, and that its minimum value is known. Chapter 3 proves the convergence of the algorithm. Chapter 4 reports the numerical performance of the algorithm on 140 unconstrained optimization problems, of which 16 are real applications; these results show that the optimization process has two phases, a reduction phase and a stalling one. Finally, the performance of the algorithm on 30 large-scale unconstrained optimization problems with up to 500 variables is presented. These numerical results show that this two-level random search approach is able to solve a large diversity of problems with different structures and complexities. A number of open problems remain, concerning the selection of the number of trial points and of local trial points, the selection of the bounds of the domains in which the trial points and the local trial points are randomly generated, and a criterion for initiating the line search.
650 0 $a Mathematical optimization. $3 527675
650 0 $a Operations research. $3 573517
650 0 $a Management science. $3 719678
650 1 4 $a Optimization. $3 669174
650 2 4 $a Operations Research, Management Science. $3 785065
710 2 $a SpringerLink (Online service) $3 593884
773 0 $t Springer Nature eBook
776 0 8 $i Printed edition: $z 9783030685164
776 0 8 $i Printed edition: $z 9783030685188
830 0 $a SpringerBriefs in Optimization, $x 2190-8354 $3 1254063
856 4 0 $u https://doi.org/10.1007/978-3-030-68517-1
912 $a ZDB-2-SMA
912 $a ZDB-2-SXMS
950 $a Mathematics and Statistics (SpringerNature-11649)
950 $a Mathematics and Statistics (R0) (SpringerNature-43713)
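The 520 abstract above describes the method only in broad strokes: trial points are generated at random in a specified domain, the best of them seeds a second, local round of random points, the incumbent is updated when an improvement is found, and the known minimum value of the function can serve as a stopping test. The sketch below is a minimal, generic two-level random search written under those broad assumptions; it is not the algorithm from the book, and the function name two_level_random_search, the parameters (bounds_radius, local_radius, n_trial, n_local, tol), and the stopping rule on the known minimum value f_min are illustrative assumptions, since the book's selection rules, domain bounds, and line-search criterion are not reproduced in this record.

```python
import numpy as np

def two_level_random_search(f, x0, bounds_radius=1.0, local_radius=0.1,
                            n_trial=50, n_local=10, max_iter=1000,
                            f_min=None, tol=1e-6):
    """Generic two-level random search sketch (not the book's exact algorithm).

    Level 1: sample n_trial points uniformly in a box of half-width
    bounds_radius around the current best point.
    Level 2: around the best level-1 point, sample n_local points in a
    smaller box of half-width local_radius and keep any improvement.
    """
    x_best = np.asarray(x0, dtype=float)
    f_best = f(x_best)
    for _ in range(max_iter):
        # Level 1: trial points in a larger box around the incumbent.
        trials = x_best + np.random.uniform(-bounds_radius, bounds_radius,
                                            size=(n_trial, x_best.size))
        f_trials = np.apply_along_axis(f, 1, trials)
        i = np.argmin(f_trials)
        # Level 2: local trial points in a smaller box around the best level-1 point.
        local_pts = trials[i] + np.random.uniform(-local_radius, local_radius,
                                                  size=(n_local, x_best.size))
        f_local = np.apply_along_axis(f, 1, local_pts)
        j = np.argmin(f_local)
        # Keep whichever candidate (level-1 or level-2 best) is better.
        if f_local[j] < f_trials[i]:
            x_cand, f_cand = local_pts[j], f_local[j]
        else:
            x_cand, f_cand = trials[i], f_trials[i]
        if f_cand < f_best:
            x_best, f_best = x_cand, f_cand
        # Stop once the known minimum value (when supplied) is approached.
        if f_min is not None and f_best - f_min < tol:
            break
    return x_best, f_best

# Example: minimize a convex quadratic whose minimum value 0 is known.
x, fx = two_level_random_search(lambda v: float(np.sum(v**2)),
                                x0=np.ones(5), f_min=0.0)
```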