Multimodal scene understanding = algorithms, applications and deep learning /
Record type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Multimodal scene understanding / edited by Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino.
Other title:
algorithms, applications and deep learning
Other authors:
Yang, Michael Ying.; Rosenhahn, Bodo.; Murino, Vittorio.
Publisher:
London ; San Diego, CA : Academic Press, 2019.
Description:
1 online resource (ix, 412 p.) : ill. (some col.), maps
Subject:
Artificial intelligence.
Electronic resource:
https://www.sciencedirect.com/science/book/9780128173589
ISBN:
9780128173596 (electronic bk.)
Multimodal scene understanding [electronic resource] : algorithms, applications and deep learning / edited by Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino. - London ; San Diego, CA : Academic Press, 2019. - 1 online resource (ix, 412 p.) : ill. (some col.), maps
Includes bibliographical references and index.
ISBN: 9780128173596 (electronic bk.)
Subjects--Topical Terms: Artificial intelligence.
Index Terms--Genre/Form: Electronic books.
LC Class. No.: Q342 / .M85 2019
Dewey Class. No.: 006.3
LDR 04908cam a2200313 a 4500
001 1043594
006 m o d
007 cr cnu---unuuu
008 211216s2019 enkab ob 001 0 eng d
020 $a 9780128173596 (electronic bk.)
020 $a 0128173599 (electronic bk.)
020 $a 9780128173589 (electronic bk.)
020 $a 0128173580 (electronic bk.)
035 $a on1109390062
040 $a N$T $b eng $e pn $c N$T $d EBLCP $d N$T $d UKMGB $d OCLCF $d OPELS $d YDXIT $d UKAHL $d YDX $d OCLCQ $d OCL $d SFB $d OCLCQ $d SFB $d VT2 $d OCLCQ $d OCLCO
041 0 $a eng
050 4 $a Q342 $b .M85 2019
082 0 4 $a 006.3 $2 23
245 0 0 $a Multimodal scene understanding $h [electronic resource] : $b algorithms, applications and deep learning / $c edited by Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino.
260 $a London ; $a San Diego, CA : $b Academic Press, $c 2019.
300 $a 1 online resource (ix, 412 p.) : $b ill. (some col.), maps
504 $a Includes bibliographical references and index.
505 0 $a Front Cover; Multimodal Scene Understanding; Copyright; Contents; List of Contributors; 1 Introduction to Multimodal Scene Understanding; 1.1 Introduction; 1.2 Organization of the Book; References; 2 Deep Learning for Multimodal Data Fusion; 2.1 Introduction; 2.2 Related Work; 2.3 Basics of Multimodal Deep Learning: VAEs and GANs; 2.3.1 Auto-Encoder; 2.3.2 Variational Auto-Encoder (VAE); 2.3.3 Generative Adversarial Network (GAN); 2.3.4 VAE-GAN; 2.3.5 Adversarial Auto-Encoder (AAE); 2.3.6 Adversarial Variational Bayes (AVB); 2.3.7 ALI and BiGAN
505 8 $a 2.4 Multimodal Image-to-Image Translation Networks; 2.4.1 Pix2pix and Pix2pixHD; 2.4.2 CycleGAN, DiscoGAN, and DualGAN; 2.4.3 CoGAN; 2.4.4 UNIT; 2.4.5 Triangle GAN; 2.5 Multimodal Encoder-Decoder Networks; 2.5.1 Model Architecture; 2.5.2 Multitask Training; 2.5.3 Implementation Details; 2.6 Experiments; 2.6.1 Results on NYUDv2 Dataset; 2.6.2 Results on Cityscape Dataset; 2.6.3 Auxiliary Tasks; 2.7 Conclusion; References; 3 Multimodal Semantic Segmentation: Fusion of RGB and Depth Data in Convolutional Neural Networks; 3.1 Introduction; 3.2 Overview; 3.2.1 Image Classification and the VGG Network
505 8 $a 3.2.2 Architectures for Pixel-level Labeling; 3.2.3 Architectures for RGB and Depth Fusion; 3.2.4 Datasets and Benchmarks; 3.3 Methods; 3.3.1 Datasets and Data Splitting; 3.3.2 Preprocessing of the Stanford Dataset; 3.3.3 Preprocessing of the ISPRS Dataset; 3.3.4 One-channel Normal Label Representation; 3.3.5 Color Spaces for RGB and Depth Fusion; 3.3.6 Hyper-parameters and Training; 3.4 Results and Discussion; 3.4.1 Results and Discussion on the Stanford Dataset; 3.4.2 Results and Discussion on the ISPRS Dataset; 3.5 Conclusion; References
505 8 $a 4 Learning Convolutional Neural Networks for Object Detection with Very Little Training Data; 4.1 Introduction; 4.2 Fundamentals; 4.2.1 Types of Learning; 4.2.2 Convolutional Neural Networks; 4.2.2.1 Artificial neuron; 4.2.2.2 Artificial neural network; 4.2.2.3 Training; 4.2.2.4 Convolutional neural networks; 4.2.3 Random Forests; 4.2.3.1 Decision tree; 4.2.3.2 Random forest; 4.3 Related Work; 4.4 Traffic Sign Detection; 4.4.1 Feature Learning; 4.4.2 Random Forest Classification; 4.4.3 RF to NN Mapping; 4.4.4 Fully Convolutional Network; 4.4.5 Bounding Box Prediction; 4.5 Localization
505 8 $a 4.6 Clustering; 4.7 Dataset; 4.7.1 Data Capturing; 4.7.2 Filtering; 4.8 Experiments; 4.8.1 Training and Test Data; 4.8.2 Classification; 4.8.3 Object Detection; 4.8.4 Computation Time; 4.8.5 Precision of Localizations; 4.9 Conclusion; Acknowledgment; References; 5 Multimodal Fusion Architectures for Pedestrian Detection; 5.1 Introduction; 5.2 Related Work; 5.2.1 Visible Pedestrian Detection; 5.2.2 Infrared Pedestrian Detection; 5.2.3 Multimodal Pedestrian Detection; 5.3 Proposed Method; 5.3.1 Multimodal Feature Learning/Fusion; 5.3.2 Multimodal Pedestrian Detection; 5.3.2.1 Baseline DNN model
520 $a Multimodal Scene Understanding: Algorithms, Applications and Deep Learning presents recent advances in multi-modal computing, with a focus on computer vision and photogrammetry. It provides the latest algorithms and applications that involve combining multiple sources of information and describes the role and approaches of multi-sensory data and multi-modal deep learning. The book is ideal for researchers from the fields of computer vision, remote sensing, robotics, and photogrammetry, thus helping foster interdisciplinary interaction and collaboration between these realms. Researchers collecting and analyzing multi-sensory data collections - for example, KITTI benchmark (stereo+laser) - from different platforms, such as autonomous vehicles, surveillance cameras, UAVs, planes and satellites will find this book to be very useful.
588 0 $a Online resource; title from digital title page (viewed on October 10, 2019).
650 0 $a Artificial intelligence. $3 559380
650 0 $a Engineering. $3 561152
650 0 $a Algorithms. $3 527865
650 0 $a Computer vision. $3 561800
650 0 $a Computational intelligence. $3 568984
655 4 $a Electronic books. $2 local $3 554714
700 1 $a Murino, Vittorio. $3 883231
700 1 $a Rosenhahn, Bodo. $3 677081
700 1 $a Yang, Michael Ying. $3 1345123
856 4 0 $u https://www.sciencedirect.com/science/book/9780128173589
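
For readers who want to reuse this record programmatically, here is a minimal sketch of walking the tag/indicator/subfield layout shown above as plain data. The list-of-tuples layout and the subfield_values helper are illustrative assumptions, not pymarc or any real catalog API; only the field values themselves are copied from the record.

# Illustrative model of a few fields from the MARC record above:
# (tag, indicators, [(subfield code, value), ...])
fields = [
    ("020", "  ", [("a", "9780128173596 (electronic bk.)")]),
    ("020", "  ", [("a", "9780128173589 (electronic bk.)")]),
    ("245", "00", [("a", "Multimodal scene understanding"),
                   ("b", "algorithms, applications and deep learning /"),
                   ("c", "edited by Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino.")]),
    ("650", " 0", [("a", "Artificial intelligence.")]),
    ("650", " 0", [("a", "Computer vision.")]),
    ("856", "40", [("u", "https://www.sciencedirect.com/science/book/9780128173589")]),
]

def subfield_values(tag, code):
    # All values of one subfield code across every occurrence of a tag,
    # e.g. every 020 $a -- MARC fields such as 020 and 650 are repeatable.
    return [value
            for t, _ind, subs in fields if t == tag
            for c, value in subs if c == code]

print(subfield_values("020", "a"))  # ISBNs
print(subfield_values("650", "a"))  # topical subject headings
print(subfield_values("856", "u"))  # online access URL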