Physics-Aware Tiny Machine Learning.
Record type: Bibliographic - Language material, manuscript : Monograph/item
Title/Author: Physics-Aware Tiny Machine Learning.
Author: Saha, Swapnil Sayan.
Extent: 1 online resource (170 pages)
Notes: Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
Contained by: Dissertations Abstracts International, 84-12B.
Subject: Computer science.
Electronic resource: click for full text (PQDT)
ISBN: 9798379617226
Physics-Aware Tiny Machine Learning / Saha, Swapnil Sayan. - 1 online resource (170 pages)
Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
Thesis (Ph.D.)--University of California, Los Angeles, 2023.
Includes bibliographical references
Tiny machine learning has enabled Internet of Things platforms to make intelligent inferences for time-critical and remote applications from unstructured data. However, realizing edge artificial intelligence systems that can perform long-term high-level reasoning and obey the underlying system physics, rules, and constraints within the tight platform resource budget is challenging. This dissertation explores how rich, robust, and intelligent inferences can be made on extremely resource-constrained platforms in a platform-aware and automated fashion. Firstly, we introduce a robust training pipeline that handles sampling rate variability, missing data, and misaligned data timestamps through intelligent data augmentation techniques during training time. We use a controlled jitter in window length and add artificial misalignments in data timestamps between sensors, along with masking representations of missing data. Secondly, we introduce TinyNS, a platform-aware neurosymbolic architecture search framework for the automatic co-optimization and deployment of neural operators and physics-based process models. TinyNS exploits fast, gradient-free, and black-box Bayesian optimization to automatically construct the most performant learning-enabled, physics, and context-aware edge artificial intelligence program from a search space containing neural and symbolic operators within the platform resource constraints. To guarantee deployability, TinyNS receives hardware metrics directly from the target hardware during the optimization process. Thirdly, we introduce the concept of neurosymbolic tiny machine learning, where we showcase recipes for defining the physics-aware tiny machine learning program synthesis search space from five neurosymbolic program categories. Neurosymbolic artificial intelligence combines the context awareness and integrity of symbolic techniques with the robustness and performance of machine learning models. 
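The augmentation strategies named above (window-length jitter, artificial inter-sensor timestamp misalignment, and masking of missing samples) can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the dissertation; every function name and parameter here is invented for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter_window(x, min_frac=0.8, max_frac=1.0):
    """Randomly shorten the sensing window, then zero-pad back to full length."""
    n = len(x)
    keep = int(n * rng.uniform(min_frac, max_frac))
    start = int(rng.integers(0, n - keep + 1))
    out = np.zeros_like(x)
    out[:keep] = x[start:start + keep]
    return out

def misalign(a, b, max_shift=5):
    """Shift sensor stream b relative to stream a by a random sample offset."""
    s = int(rng.integers(-max_shift, max_shift + 1))
    return a, np.roll(b, s)

def mask_missing(x, p=0.1, mask_value=0.0):
    """Replace a random fraction p of samples with a sentinel mask value."""
    m = rng.random(len(x)) < p
    y = x.copy()
    y[m] = mask_value
    return y
```

Applying transforms like these at training time exposes the model to the sampling-rate variability, timing defects, and dropouts it will encounter in deployment.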
We develop parsers to automatically write microcontroller code for neurosymbolic programs and showcase several previously unseen TinyML applications. These include onboard physics-aware neural-inertial navigation, on-device human activity recognition, on-chip fall detection, neural-Kalman filtering, and co-optimization of neural and symbolic processes. Finally, we showcase techniques to personalize and adapt tiny machine learning systems to the target domain and application. We illustrate the use of transfer learning, resource-efficient unsupervised template creation and matching, and foundation models as pathways to realize generalizable, domain-aware, and data-efficient edge artificial intelligence systems.
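The hardware-in-the-loop, gradient-free search that TinyNS performs can be illustrated with a dependency-free stand-in. The sketch below substitutes random search for Bayesian optimization and fakes the hardware-reported flash metric; the search space, budget, and objective values are all invented numbers, not figures from the dissertation:

```python
import random

random.seed(0)

# Hypothetical search space: filter count and whether to attach a symbolic
# (physics-model) component. TinyNS uses Bayesian optimization; random
# search stands in here to keep the sketch dependency-free.
SPACE = [(f, sym) for f in (8, 16, 32, 64) for sym in (False, True)]
FLASH_BUDGET = 2000  # bytes; illustrative platform resource constraint

def flash_cost(filters, symbolic):
    # Stand-in for a metric reported back by the target hardware.
    return filters * 40 + (300 if symbolic else 0)

def accuracy(filters, symbolic):
    # Stand-in objective; a real deployment would measure this on data.
    return 0.5 + 0.005 * filters + (0.1 if symbolic else 0.0)

best, best_score = None, float("-inf")
for _ in range(50):
    filters, symbolic = random.choice(SPACE)
    if flash_cost(filters, symbolic) > FLASH_BUDGET:
        continue  # reject candidates that do not fit on-device
    score = accuracy(filters, symbolic)
    if score > best_score:
        best, best_score = (filters, symbolic), score
```

The structural point survives the simplification: candidates that the (simulated) target hardware reports as over budget are rejected before the objective is scored, so every configuration the search returns is deployable by construction.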
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2024. Mode of access: World Wide Web.
ISBN: 9798379617226
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Edge artificial intelligence
Index Terms--Genre/Form: Electronic books.
LDR  04099ntm a22004217 4500
001  1146238
005  20240812064349.5
006  m o d
007  cr bn ---uuuuu
008  250605s2023 xx obm 000 0 eng d
020    $a 9798379617226
035    $a (MiAaPQ)AAI30485313
035    $a AAI30485313
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Saha, Swapnil Sayan. $3 1471596
245 10 $a Physics-Aware Tiny Machine Learning.
264  0 $c 2023
300    $a 1 online resource (170 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 84-12, Section: B.
500    $a Advisor: Srivastava, Mani B.
502    $a Thesis (Ph.D.)--University of California, Los Angeles, 2023.
504    $a Includes bibliographical references
520    $a Tiny machine learning has enabled Internet of Things platforms to make intelligent inferences for time-critical and remote applications from unstructured data. However, realizing edge artificial intelligence systems that can perform long-term high-level reasoning and obey the underlying system physics, rules, and constraints within the tight platform resource budget is challenging. This dissertation explores how rich, robust, and intelligent inferences can be made on extremely resource-constrained platforms in a platform-aware and automated fashion. Firstly, we introduce a robust training pipeline that handles sampling rate variability, missing data, and misaligned data timestamps through intelligent data augmentation techniques during training time. We use a controlled jitter in window length and add artificial misalignments in data timestamps between sensors, along with masking representations of missing data. Secondly, we introduce TinyNS, a platform-aware neurosymbolic architecture search framework for the automatic co-optimization and deployment of neural operators and physics-based process models. TinyNS exploits fast, gradient-free, and black-box Bayesian optimization to automatically construct the most performant learning-enabled, physics, and context-aware edge artificial intelligence program from a search space containing neural and symbolic operators within the platform resource constraints. To guarantee deployability, TinyNS receives hardware metrics directly from the target hardware during the optimization process. Thirdly, we introduce the concept of neurosymbolic tiny machine learning, where we showcase recipes for defining the physics-aware tiny machine learning program synthesis search space from five neurosymbolic program categories. Neurosymbolic artificial intelligence combines the context awareness and integrity of symbolic techniques with the robustness and performance of machine learning models. We develop parsers to automatically write microcontroller code for neurosymbolic programs and showcase several previously unseen TinyML applications. These include onboard physics-aware neural-inertial navigation, on-device human activity recognition, on-chip fall detection, neural-Kalman filtering, and co-optimization of neural and symbolic processes. Finally, we showcase techniques to personalize and adapt tiny machine learning systems to the target domain and application. We illustrate the use of transfer learning, resource-efficient unsupervised template creation and matching, and foundation models as pathways to realize generalizable, domain-aware, and data-efficient edge artificial intelligence systems.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538    $a Mode of access: World Wide Web
650  4 $a Computer science. $3 573171
650  4 $a Computer engineering. $3 569006
650  4 $a Electrical engineering. $3 596380
653    $a Edge artificial intelligence
653    $a Machine learning
653    $a Neural networks
653    $a Neurosymbolic programs
653    $a Physics-awareness
653    $a TinyML applications
655  7 $a Electronic books. $2 local $3 554714
690    $a 0544
690    $a 0464
690    $a 0984
690    $a 0800
710 2  $a University of California, Los Angeles. $b Electrical and Computer Engineering 0333. $3 1413511
710 2  $a ProQuest Information and Learning Co. $3 1178819
773 0  $t Dissertations Abstracts International $g 84-12B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30485313 $z click for full text (PQDT)