University of Pennsylvania.
Inference and Learning : = Computational Difficulty and Efficiency.
Record type:
Bibliographic - language material, manuscript : Monograph/item
Title/Author:
Inference and Learning :
Other title:
Computational Difficulty and Efficiency.
Author:
Liang, Tengyuan.
Physical description:
1 online resource (240 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 78-12(E), Section: B.
Contained by:
Dissertation Abstracts International, 78-12B(E).
Subject:
Statistics.
Electronic resource:
click for full text (PQDT)
ISBN:
9780355095784
Dissertation note:
Thesis (Ph.D.)--University of Pennsylvania, 2017.
Includes bibliographical references.
Abstract:
In this thesis, we mainly investigate two collections of problems: statistical network inference and model selection in regression. The common feature shared by these two types of problems is that they typically exhibit an interesting phenomenon in terms of computational difficulty and efficiency. For statistical network inference, our goal is to infer the network structure based on a noisy observation of the network. Statistically, we model the network as generated from the structural information with the presence of noise, for example, planted submatrix model (for bipartite weighted graph), stochastic block model, and Watts-Strogatz model. As the relative amount of "signal-to-noise" varies, the problems exhibit different stages of computational difficulty. On the theoretical side, we investigate these stages through characterizing the transition thresholds on the "signal-to-noise" ratio, for the aforementioned models. On the methodological side, we provide new computationally efficient procedures to reconstruct the network structure for each model. For model selection in regression, our goal is to learn a "good" model based on a certain model class from the observed data sequences (feature and response pairs), when the model can be misspecified. More concretely, we study two model selection problems: to learn from general classes of functions based on i.i.d. data with minimal assumptions, and to select from the sparse linear model class based on possibly adversarially chosen data in a sequential fashion. We develop new theoretical and algorithmic tools beyond empirical risk minimization to study these problems from a learning theory point of view.
Reproduction note:
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN:
9780355095784
Subjects--Topical Terms:
Statistics.
Index Terms--Genre/Form:
Electronic books.
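The abstract above names the stochastic block model as one of the noisy network models whose recovery difficulty depends on the "signal-to-noise" gap. As a minimal, illustrative sketch (not code from the thesis), the following samples a two-community stochastic block model; the community labels, edge probabilities `p_in`/`p_out`, and function name are all assumptions chosen for the example:

```python
import random


def sample_sbm(n, p_in, p_out, seed=0):
    """Sample a two-community stochastic block model on n nodes.

    Nodes 0..n//2-1 form community A, the rest community B. An edge
    within a community appears with probability p_in, across communities
    with probability p_out; the gap p_in - p_out plays the role of the
    "signal" in community-recovery problems.
    """
    rng = random.Random(seed)
    half = n // 2
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            same_community = (i < half) == (j < half)
            p = p_in if same_community else p_out
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1  # undirected: keep symmetry
    return adj


# Demo: with a large gap, within-community edges are visibly denser.
adj = sample_sbm(200, p_in=0.8, p_out=0.2, seed=1)
half = 100
within = sum(adj[i][j] for i in range(half) for j in range(i + 1, half))
cross = sum(adj[i][j] for i in range(half) for j in range(half, 200))
within_density = within / (half * (half - 1) / 2)
cross_density = cross / (half * half)
```

As `p_in - p_out` shrinks toward zero, the empirical densities become indistinguishable, which is the regime where the computational-difficulty transitions discussed in the abstract arise.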
LDR 02920ntm a2200337Ki 4500
001 920620
005 20181203094030.5
006 m o u
007 cr mn||||a|a||
008 190606s2017 xx obm 000 0 eng d
020 __ $a 9780355095784
035 __ $a (MiAaPQ)AAI10270187
035 __ $a (MiAaPQ)upenngdas:12695
035 __ $a AAI10270187
040 __ $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1_ $a Liang, Tengyuan. $3 1195476
245 10 $a Inference and Learning : $b Computational Difficulty and Efficiency.
264 _0 $c 2017
300 __ $a 1 online resource (240 pages)
336 __ $a text $b txt $2 rdacontent
337 __ $a computer $b c $2 rdamedia
338 __ $a online resource $b cr $2 rdacarrier
500 __ $a Source: Dissertation Abstracts International, Volume: 78-12(E), Section: B.
500 __ $a Advisers: Tony T. Cai; Alexander Rakhlin.
502 __ $a Thesis (Ph.D.)--University of Pennsylvania, 2017.
504 __ $a Includes bibliographical references
520 __ $a In this thesis, we mainly investigate two collections of problems: statistical network inference and model selection in regression. The common feature shared by these two types of problems is that they typically exhibit an interesting phenomenon in terms of computational difficulty and efficiency. For statistical network inference, our goal is to infer the network structure based on a noisy observation of the network. Statistically, we model the network as generated from the structural information with the presence of noise, for example, planted submatrix model (for bipartite weighted graph), stochastic block model, and Watts-Strogatz model. As the relative amount of "signal-to-noise" varies, the problems exhibit different stages of computational difficulty. On the theoretical side, we investigate these stages through characterizing the transition thresholds on the "signal-to-noise" ratio, for the aforementioned models. On the methodological side, we provide new computationally efficient procedures to reconstruct the network structure for each model. For model selection in regression, our goal is to learn a "good" model based on a certain model class from the observed data sequences (feature and response pairs), when the model can be misspecified. More concretely, we study two model selection problems: to learn from general classes of functions based on i.i.d. data with minimal assumptions, and to select from the sparse linear model class based on possibly adversarially chosen data in a sequential fashion. We develop new theoretical and algorithmic tools beyond empirical risk minimization to study these problems from a learning theory point of view.
533 __ $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 __ $a Mode of access: World Wide Web
650 _4 $a Statistics. $3 556824
650 _4 $a Computer science. $3 573171
655 _7 $a Electronic books. $2 local $3 554714
690 __ $a 0463
690 __ $a 0984
710 2_ $a ProQuest Information and Learning Co. $3 1178819
710 2_ $a University of Pennsylvania. $b Statistics. $3 1182881
773 0_ $t Dissertation Abstracts International $g 78-12B(E).
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10270187 $z click for full text (PQDT)