SparseGS : Real-Time 360° Sparse View Synthesis Using Gaussian Splatting.
Record type:
Bibliographic - language material, manuscript : Monograph/item
Title/Author:
SparseGS :
Other title:
Real-Time 360° Sparse View Synthesis Using Gaussian Splatting.
Author:
Xiong, Haolin.
Description:
1 online resource (49 pages)
Notes:
Source: Masters Abstracts International, Volume: 85-12.
Contained by:
Masters Abstracts International, 85-12.
Subject:
Computer engineering.
Electronic resource:
click for full text (PQDT)
ISBN:
9798382832890
Thesis (M.S.)--University of California, Los Angeles, 2024.
Includes bibliographical references.
The problem of novel view synthesis has recently grown significantly in popularity with the introduction of Neural Radiance Fields (NeRFs) and other implicit scene representation methods. A recent advance, 3D Gaussian Splatting (3DGS), leverages an explicit representation to achieve real-time rendering with high-quality results. However, 3DGS still requires an abundance of training views to generate a coherent scene representation. In few-shot settings, similar to NeRF, 3DGS tends to overfit to the training views, causing background collapse and excessive floaters, especially as the number of training views is reduced. This work proposes a method to enable training coherent 3DGS-based radiance fields of 360° scenes from sparse training views. Depth priors are integrated with generative and explicit constraints to reduce background collapse, remove floaters, and enhance consistency from unseen viewpoints. Experiments show that this method outperforms base 3DGS by 6.4% in LPIPS and by 12.2% in PSNR, and NeRF-based methods by at least 17.6% in LPIPS on the MipNeRF-360 dataset with substantially less training and inference cost. Project website: https://tinyurl.com/sparsegs.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2024. Mode of access: World Wide Web.
ISBN: 9798382832890
Subjects--Topical Terms: Computer engineering.
Subjects--Index Terms: 3D Gaussian Splatting
Index Terms--Genre/Form: Electronic books.
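The abstract above describes regularizing sparse-view 3DGS training with depth priors alongside the usual photometric supervision. As a rough illustration only, the Python sketch below shows one plausible form of such a constraint: a scale-invariant Pearson-correlation loss between depth rendered from the Gaussians and a monocular depth prior. The function names, the correlation-based loss, and the lambda_depth weight are illustrative assumptions, not the thesis's verbatim formulation.

import torch
import torch.nn.functional as F

def depth_prior_loss(rendered_depth: torch.Tensor,
                     prior_depth: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    # Scale-invariant comparison: 1 - Pearson correlation between the depth
    # rendered from the Gaussians and a monocular depth prior. Correlation
    # sidesteps the unknown scale/shift of monocular depth estimates.
    r = rendered_depth.flatten() - rendered_depth.mean()
    p = prior_depth.flatten() - prior_depth.mean()
    return 1.0 - (r * p).sum() / (r.norm() * p.norm() + eps)

def training_loss(rgb_pred, rgb_gt, depth_pred, depth_prior,
                  lambda_depth: float = 0.1) -> torch.Tensor:
    # Photometric term on seen views plus the depth constraint; lambda_depth
    # is an assumed hyperparameter, not a value taken from the thesis.
    return F.l1_loss(rgb_pred, rgb_gt) + \
        lambda_depth * depth_prior_loss(depth_pred, depth_prior)

A correlation-based term is one common way to use monocular depth, since such priors are only defined up to an unknown scale and shift; the thesis's generative and explicit constraints are not sketched here.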
LDR    02533ntm a22003857 4500
001    1149753
005    20241022112630.5
006    m o d
007    cr bn ---uuuuu
008    250605s2024 xx obm 000 0 eng d
020    $a 9798382832890
035    $a (MiAaPQ)AAI31330943
035    $a AAI31330943
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Xiong, Haolin. $3 1476087
245 10 $a SparseGS : $b Real-Time 360° Sparse View Synthesis Using Gaussian Splatting.
264  0 $c 2024
300    $a 1 online resource (49 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Masters Abstracts International, Volume: 85-12.
500    $a Advisor: Kadambi, Achuta.
502    $a Thesis (M.S.)--University of California, Los Angeles, 2024.
504    $a Includes bibliographical references.
520    $a The problem of novel view synthesis has recently grown significantly in popularity with the introduction of Neural Radiance Fields (NeRFs) and other implicit scene representation methods. A recent advance, 3D Gaussian Splatting (3DGS), leverages an explicit representation to achieve real-time rendering with high-quality results. However, 3DGS still requires an abundance of training views to generate a coherent scene representation. In few-shot settings, similar to NeRF, 3DGS tends to overfit to the training views, causing background collapse and excessive floaters, especially as the number of training views is reduced. This work proposes a method to enable training coherent 3DGS-based radiance fields of 360° scenes from sparse training views. Depth priors are integrated with generative and explicit constraints to reduce background collapse, remove floaters, and enhance consistency from unseen viewpoints. Experiments show that this method outperforms base 3DGS by 6.4% in LPIPS and by 12.2% in PSNR, and NeRF-based methods by at least 17.6% in LPIPS on the MipNeRF-360 dataset with substantially less training and inference cost. Project website: https://tinyurl.com/sparsegs.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538    $a Mode of access: World Wide Web
650  4 $a Computer engineering. $3 569006
650  4 $a Computer science. $3 573171
650  4 $a Electrical engineering. $3 596380
653    $a 3D Gaussian Splatting
653    $a Neural Radiance Fields
653    $a Novel view synthesis
653    $a Radiance fields
655  7 $a Electronic books. $2 local $3 554714
690    $a 0464
690    $a 0984
690    $a 0544
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a University of California, Los Angeles. $b Electrical and Computer Engineering 0333. $3 1413511
773 0  $t Masters Abstracts International $g 85-12.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31330943 $z click for full text (PQDT)
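The MARC fields above follow a flat one-field-per-line display: a tag, optional indicators, then $-prefixed subfields. As a minimal sketch (assuming exactly this display layout, not the general MARC transmission format), the following Python splits such a line into its tag, indicators, and subfield code/value pairs:

def parse_marc_display_line(line: str):
    # Split a display line such as
    #   245 10 $a SparseGS : $b Real-Time 360° ...
    # into (tag, indicators, [(subfield_code, value), ...]). Control fields
    # (LDR, 001-008) carry no $ subfields; their value is returned with an
    # empty subfield code.
    head, sep, rest = line.partition(" $")
    tag, _, indicators = head.strip().partition(" ")
    if not sep:
        return tag, "", [("", indicators.strip())]
    subfields = []
    for chunk in ("$" + rest).split(" $"):
        chunk = chunk.lstrip("$")
        subfields.append((chunk[:1], chunk[1:].strip()))
    return tag, indicators.strip(), subfields

# Example: pull the full-text URL out of the 856 field shown above.
tag, ind, subs = parse_marc_display_line(
    "856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31330943 "
    "$z click for full text (PQDT)")
url = dict(subs)["u"]

This sketch assumes subfield values never contain the sequence " $"; a production parser would work from the binary or XML MARC record instead of the display text.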