Depth based Sensor Fusion in Object Detection and Tracking.
The Ohio State University.
Record type: Language material, manuscript : Monograph/item
Title/Author: Depth based Sensor Fusion in Object Detection and Tracking.
Author: Sikdar, Ankita.
Physical description: 1 online resource (133 pages)
Note: Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B.
Contained by: Dissertation Abstracts International 79-12B(E).
Subject: Artificial intelligence.
Electronic resource: click for full text (PQDT)
ISBN: 9780438097711
LDR    05620ntm a2200397Ki 4500
001    916935
005    20180928111503.5
006    m o u
007    cr mn||||a|a||
008    190606s2018 xx obm 000 0 eng d
020    $a 9780438097711
035    $a (MiAaPQ)AAI10901827
035    $a (MiAaPQ)OhioLINK:osu1515075130647622
035    $a AAI10901827
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Sikdar, Ankita. $3 1190809
245 10 $a Depth based Sensor Fusion in Object Detection and Tracking.
264  0 $c 2018
300    $a 1 online resource (133 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B.
500    $a Advisers: Yuan F. Zheng; Dong Xuan.
502    $a Thesis (Ph.D.)--The Ohio State University, 2018.
504    $a Includes bibliographical references
520    $a Multi-sensor fusion combines sensor data from multiple sources to estimate the state of the environment. Common applications include automated manufacturing, automated navigation, target detection and tracking, environment perception, and biometrics. Among these, object detection and tracking is central to robotics and computer vision, with uses in diverse areas such as video surveillance, person following, and autonomous navigation. In purely two-dimensional (2-D) camera-based tracking, erratic object motion, scene changes, and occlusions, together with noise and illumination changes, impede successful tracking. Integrating information from range sensors with cameras helps alleviate some of these issues. This dissertation explores novel methods for a sensor fusion framework that combines depth information from radar, infrared, and Kinect sensors with an RGB camera to improve object detection and tracking accuracy.
520    $a In indoor robotics, infrared sensors have mostly been limited to proximity sensing for obstacle avoidance. The first part of the dissertation extends these low-cost but extremely fast infrared sensors to tasks such as identifying a person's direction of motion, and fuses the sparse range data they provide with a camera to build a low-cost, efficient indoor tracking sensor system. A linear infrared array network classifies the direction of motion of a human being: a histogram-based iterative clustering algorithm segments the data into clusters, features extracted from the clusters are fed to a classification algorithm, and the motion direction is classified. For circumstances in which a robot tracks an object with unpredictable behavior (making abrupt turns, stopping, or moving along an irregular wavy track, as when a personal robot assistant follows a shopper in a store, a tourist in a museum, or a child at play), an adaptive motion model is proposed to keep track of the object. An array of infrared sensors can therefore be advantageous over a depth camera when discrete data is required at a fast processing rate.
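The cluster-then-classify pipeline for the infrared array can be sketched roughly as follows. This is a hypothetical simplification: the gap-based splitting (in place of the actual histogram-based iterative clustering), the `gap` threshold, and the centroid-drift direction rule are all assumptions, not details taken from this record.

```python
import numpy as np

def split_into_clusters(readings, gap=0.15):
    """Split sorted 1-D IR range readings into clusters wherever two
    consecutive readings differ by more than `gap` metres -- a crude
    stand-in for the histogram-based iterative clustering step."""
    r = np.sort(np.asarray(readings, dtype=float))
    breaks = np.where(np.diff(r) > gap)[0] + 1
    return np.split(r, breaks)

def direction_of_motion(centroid_positions_over_time):
    """Toy direction classifier: the sign of the mean drift of a
    cluster centroid across the linear sensor array over time."""
    drift = np.diff(np.asarray(centroid_positions_over_time, dtype=float))
    return "left-to-right" if drift.mean() > 0 else "right-to-left"
```

For example, readings at roughly 1 m and 2 m split into two clusters, and a centroid moving toward higher array positions classifies as left-to-right motion.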
520    $a Research on 3-D tracking has proliferated in the last decade with the advent of low-cost Kinect sensors. Prior work on depth-based tracking with Kinect sensors focuses mostly on depth-based extraction of objects to aid tracking. The next part of the dissertation addresses object tracking in the x-z (horizontal-depth) domain using a Kinect sensor, with an emphasis on occlusion handling. Particle filters used for tracking are propagated by a motion model in the horizontal-depth framework, and observations are obtained by extracting objects within a suitable depth range. Particles, represented by patches extracted in the x-z domain, are associated to these observations by the closest match under a likelihood model; majority voting then selects a final observation, based on which the particles are reweighted and a final estimate is made. An occluder tracking system is also developed: it associates the visible parts of a partially occluded object to the whole object as seen before occlusion, helping the tracker recover the object when the occlusion ends.
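One predict-update cycle of a particle filter over (x, z) positions, as described above, might look like the following sketch. The Gaussian random-walk motion model, Gaussian observation likelihood, and multinomial resampling here are generic stand-ins; the dissertation's patch-based likelihood and majority-voting association are not reproduced in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_particle_filter(particles, weights, observation,
                         motion_std=0.05, obs_std=0.1):
    """One predict/update/resample cycle over (x, z) particles.

    particles: (N, 2) array of horizontal-depth hypotheses.
    observation: (2,) array, the extracted object's (x, z) position.
    Returns updated particles, uniform weights, and the state estimate.
    """
    # Predict: propagate each particle with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by a Gaussian likelihood of the observation.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2.0 * obs_std ** 2))
    weights = weights / weights.sum()
    # Resample to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    estimate = particles.mean(axis=0)  # final (x, z) estimate
    return particles, weights, estimate
```

A real tracker would repeat this cycle per frame, swapping the Gaussian likelihood for the patch-matching score and adding the occlusion-handling logic the abstract describes.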
520    $a The latter part of the dissertation addresses a classical data association problem: discrete range data from a depth sensor must be associated to 2-D objects detected by a camera. A vision sensor locates objects only in the 2-D image plane, and estimating distance with a single vision sensor has limitations; a radar sensor returns object ranges accurately but does not indicate which range corresponds to which object. A sensor fusion approach for radar-vision integration is proposed that, using a modified Hungarian algorithm with geometric constraints, associates data from a simulated radar with 2-D information from an image to establish the three-dimensional (3-D) positions of vehicles around an ego vehicle on a highway. This information would help an autonomous vehicle maneuver safely.
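The range-to-detection association can be illustrated with a brute-force minimum-cost one-to-one assignment. This is a stand-in for the modified Hungarian algorithm with geometric constraints named above (brute force has the same optimum for small n), and the per-detection depth estimates from the image are hypothetical inputs.

```python
import itertools
import numpy as np

def associate(radar_ranges, image_depth_estimates):
    """Match each radar range to one camera-detected vehicle by
    minimising the total absolute range discrepancy over all
    one-to-one assignments.  Assumes equal-length input lists."""
    n = len(radar_ranges)
    # cost[i, j] = |radar range i - rough depth estimate of detection j|
    cost = np.abs(np.subtract.outer(np.asarray(radar_ranges, dtype=float),
                                    np.asarray(image_depth_estimates, dtype=float)))
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        c = cost[range(n), perm].sum()
        if c < best_cost:
            best_perm, best_cost = perm, c
    # best_perm[i] is the image detection matched to radar return i.
    return list(best_perm), float(best_cost)
```

For example, radar ranges [10, 30, 20] against rough image depths [29, 11, 19.5] match radar return 0 to detection 1, return 1 to detection 0, and return 2 to detection 2.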
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538    $a Mode of access: World Wide Web
650  4 $a Artificial intelligence. $3 559380
650  4 $a Computer science. $3 573171
650  4 $a Information technology. $3 559429
650  4 $a Computer engineering. $3 569006
655  7 $a Electronic books. $2 local $3 554714
690    $a 0800
690    $a 0984
690    $a 0489
690    $a 0464
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a The Ohio State University. $b Computer Science and Engineering. $3 1180873
773 0  $t Dissertation Abstracts International $g 79-12B(E).
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10901827 $z click for full text (PQDT)