Generative Models of Images and Neural Networks.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Generative Models of Images and Neural Networks.
Author:
Peebles, William Smith.
Physical description:
1 online resource (90 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
Contained By:
Dissertations Abstracts International, 85-03B.
Subject:
Computer engineering.
Electronic resource:
click for full text (PQDT)
ISBN:
9798380382526
MARC record:
LDR 03702ntm a22004097 4500
001 1143779
005 20240517104950.5
006 m o d
007 cr mn ---uuuuu
008 250605s2023 xx obm 000 0 eng d
020 $a 9798380382526
035 $a (MiAaPQ)AAI30490099
035 $a AAI30490099
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Peebles, William Smith. $3 1468571
245 10 $a Generative Models of Images and Neural Networks.
264  0 $c 2023
300 $a 1 online resource (90 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
500 $a Advisor: Efros, Alexei A.
502 $a Thesis (Ph.D.)--University of California, Berkeley, 2023.
504 $a Includes bibliographical references
520 $a Large-scale generative models have fueled recent progress in artificial intelligence. Armed with scaling laws that accurately predict model performance as invested compute increases, NLP has become the gold standard for all disciplines of AI. Given a new task, pre-trained generative models can either solve it zero-shot or be efficiently fine-tuned on a small number of task-specific training examples. However, the widespread adoption of generative models has lagged in other domains, such as vision and meta-learning. In this thesis, we study ways to train improved, scalable generative models of two modalities: images and neural network parameters. We also examine how pre-trained generative models can be leveraged to tackle additional downstream tasks. We begin by introducing a new, powerful class of generative models: Diffusion Transformers (DiTs). We show that transformers, with one small yet critically important modification, retain their excellent scaling properties for diffusion-based image generation and outperform the convolutional neural networks that have previously dominated the area. DiT outperforms all prior generative models on the class-conditional ImageNet generation benchmark. Next, we introduce a novel framework for learning to learn based on building generative models of a new data source: neural network checkpoints. We create datasets containing hundreds of thousands of deep learning training runs and use them to train generative models of neural network checkpoints. Given a starting parameter vector and a target loss, error, or reward, loss-conditional diffusion models trained on this data can sample parameter updates that achieve a desired metric. We apply our framework to problems in vision and reinforcement learning. Finally, we explore how pre-trained image-level generative models can be used to tackle downstream tasks in vision without requiring task-specific training data. We show that pre-trained GAN generators can be used to create an infinite data stream to train networks for the dense visual correspondence problem, without requiring any human-annotated supervision such as keypoints. Networks trained on this completely GAN-generated data generalize zero-shot to real images, and they outperform previous self-supervised and keypoint-supervised approaches that train on real data.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538 $a Mode of access: World Wide Web
650  4 $a Computer engineering. $3 569006
650  4 $a Computer science. $3 573171
650  4 $a Information technology. $3 559429
653 $a Deep learning
653 $a Diffusion
653 $a Generative adversarial networks
653 $a Neural networks
653 $a Transformers
655  7 $a Electronic books. $2 local $3 554714
690 $a 0800
690 $a 0489
690 $a 0984
690 $a 0464
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a University of California, Berkeley. $b Computer Science. $3 1179511
773 0  $t Dissertations Abstracts International $g 85-03B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30490099 $z click for full text (PQDT)