Pretrained Representations for Embodied AI.
Record Type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Pretrained Representations for Embodied AI./
Author:
Sax, Alexander.
Physical Description:
1 online resource (159 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
Contained By:
Dissertations Abstracts International, 85-03B.
Subjects:
Computer engineering. - Statistics.
Electronic Resources:
click for full text (PQDT)
ISBN:
9798380382403
LDR 02846ntm a22004097 4500
001 1146237
005 20240812064349.5
006 m o d
007 cr bn ---uuuuu
008 250605s2023 xx obm 000 0 eng d
020 $a 9798380382403
035 $a (MiAaPQ)AAI30427043
035 $a AAI30427043
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Sax, Alexander. $3 1471595
245 10 $a Pretrained Representations for Embodied AI.
264 0 $c 2023
300 $a 1 online resource (159 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
500 $a Advisor: Malik, Jitendra; Zamir, Amir.
502 $a Thesis (Ph.D.)--University of California, Berkeley, 2023.
504 $a Includes bibliographical references.
520 $a The world is messy and imperfect, unstructured and complex, and nonetheless we must still accomplish the basic behaviors necessary for survival. It is for this purpose, ecologically relevant behavior, that vision evolved 500-600 million years ago. This thesis is about how to learn representations of the visual world that are useful for the kinds of behaviors we might want an embodied AI system to perform. In the first part of the thesis, we systematically study how bottlenecking visual inputs through different pretrained representations affects a robot's ability to learn atomic navigation skills (Chapter 2) and manipulation skills (Chapter 3) through trial and error. The main finding is that an appropriate pretrained representation greatly improves both the sample efficiency of skill acquisition and the generalization of the learned skill. In the second part of the thesis, we use these lessons to improve the accuracy of the representations in a wider variety of contexts (indoors, outdoors, tabletop settings, and so on). In Chapter 4 we do this by adding cross-prediction consistency objectives; in Chapter 5 we do this by leveraging vast amounts of 3D data available on the internet and from a robot's prior experience. The methods are developed primarily for vision and action, but many of the ideas are general and could work for other sensory modalities and behaviors.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538 $a Mode of access: World Wide Web.
650 4 $a Computer engineering. $3 569006
650 4 $a Statistics. $3 556824
653 $a Computational ethology
653 $a Computer vision
653 $a Embodied AI
653 $a Pretrained representations
653 $a Robotics
653 $a Transfer learning
655 7 $a Electronic books. $2 local $3 554714
690 $a 0800
690 $a 0463
690 $a 0464
710 2 $a University of California, Berkeley. $b Computer Science. $3 1179511
710 2 $a ProQuest Information and Learning Co. $3 1178819
773 0 $t Dissertations Abstracts International $g 85-03B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30427043 $z click for full text (PQDT)