An Eyes and Hands Model: Extending Visual and Motor Modules for Cognitive Architectures.
Record type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
An Eyes and Hands Model: Extending Visual and Motor Modules for Cognitive Architectures.
Author:
Tehranchi, Farnaz.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020.
Description:
132 p.
Note:
Source: Dissertations Abstracts International, Volume: 83-03, Section: A.
Contained By:
Dissertations Abstracts International, 83-03A.
Subject:
Human-computer interaction.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28767677
ISBN:
9798535589190
An Eyes and Hands Model: Extending Visual and Motor Modules for Cognitive Architectures.
Tehranchi, Farnaz.
- Ann Arbor : ProQuest Dissertations & Theses, 2020 - 132 p.
Source: Dissertations Abstracts International, Volume: 83-03, Section: A.
Thesis (Ph.D.)--The Pennsylvania State University, 2020.
A form of Artificial Intelligence simulates human intelligence and behavior. These simulations are not always complete and not always interactive. Adding a new type of memory and extending the visual and motor modules of an existing cognitive architecture offers a motivating approach for simulating human behavior. This dissertation presents an Eyes and Hands model, a new approach that enables cognitive models to interact with the world. For this approach, the Java Segmentation and Manipulation (JSegMan) tool was built. JSegMan builds upon Java packages to segment and manipulate the screen, and it generates operating system commands to implement actions with interfaces. Cognitive architectures provide a unified theory of cognition for developing and simulating cognition and human behavior. The Eyes and Hands model extends two cognitive architecture modules, along with JSegMan, to facilitate interaction. Eyes and Hands models can be used to explore the role of interaction in human behavior.

In this dissertation, three Eyes and Hands models were developed: (a) the Dismal model, which completed a spreadsheet task in the Dismal mode of Emacs; (b) the Biased-coin model, based on an existing two-choice experiment; and (c) the Excel model, which completed the spreadsheet task in the Excel task environment. I conducted two studies to investigate the model's visual attention and response time. In the first study, learners' eye movement data were recorded to predict learning. The results showed that with eye movement data, the learners' performance could be predicted correctly 76% of the time. Therefore, where users are looking is important and should be considered in the simulation. In the second study, participants' response times and eye movements were recorded. The Excel model was built upon this study. A simple Eyes and Hands Error model was built to demonstrate how the model's time is allocated to error detection, error correction, and different types of knowledge. The results suggested that further analysis is required to investigate human errors.
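The abstract describes JSegMan as a layer that lets a cognitive model's "hands" issue operating-system input events. JSegMan's actual API is not part of this record, so every name in the sketch below is a hypothetical illustration of that general pattern, written in Java (the language family JSegMan builds on): the model queues motor commands, and a driver would later translate them into OS events (e.g., via java.awt.Robot).

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch only: a cognitive model's "hands" emit motor
// commands (move, click, type) that a JSegMan-like driver would turn
// into operating-system input events. All names are illustrative.
public class EyesAndHandsSketch {
    // Pending motor commands, in the order the model issued them.
    private final Queue<String> pending = new ArrayDeque<>();

    // Request a mouse move to a screen coordinate.
    void moveTo(int x, int y) { pending.add("move " + x + " " + y); }

    // Request a click at the current pointer position.
    void click() { pending.add("click"); }

    // Request typing text into the focused widget.
    void type(String text) { pending.add("type " + text); }

    // Drain the queue one command at a time, as a real driver would
    // when calling into the OS (here we just print each command).
    void flush() {
        while (!pending.isEmpty()) {
            System.out.println(pending.poll());
        }
    }

    public static void main(String[] args) {
        EyesAndHandsSketch hands = new EyesAndHandsSketch();
        hands.moveTo(120, 45);       // attend to / move toward a cell
        hands.click();               // select it
        hands.type("=SUM(A1:A10)");  // enter a formula, as in the Excel task
        hands.flush();
    }
}
```

Printing the queue stands in for the OS-event step; the point is only the ordering of perception-driven motor commands, not any real JSegMan interface.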
ISBN: 9798535589190
Subjects--Topical Terms: Human-computer interaction.
Subjects--Index Terms: Memory
LDR
:03421nam a2200445 4500
001
1067253
005
20220823142328.5
008
221020s2020 ||||||||||||||||| ||eng d
020
$a
9798535589190
035
$a
(MiAaPQ)AAI28767677
035
$a
AAI28767677
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Tehranchi, Farnaz.
$3
1372749
245
1 3
$a
An Eyes and Hands Model: Extending Visual and Motor Modules for Cognitive Architectures.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2020
300
$a
132 p.
500
$a
Source: Dissertations Abstracts International, Volume: 83-03, Section: A.
500
$a
Advisor: Ritter, Frank E.; Passonneau, Rebecca.
502
$a
Thesis (Ph.D.)--The Pennsylvania State University, 2020.
520
$a
A form of Artificial Intelligence simulates human intelligence and behavior. These simulations are not always complete and not always interactive. Adding a new type of memory and extending the visual and motor modules of an existing cognitive architecture offers a motivating approach for simulating human behavior. This dissertation presents an Eyes and Hands model, a new approach that enables cognitive models to interact with the world. For this approach, the Java Segmentation and Manipulation (JSegMan) tool was built. JSegMan builds upon Java packages to segment and manipulate the screen, and it generates operating system commands to implement actions with interfaces. Cognitive architectures provide a unified theory of cognition for developing and simulating cognition and human behavior. The Eyes and Hands model extends two cognitive architecture modules, along with JSegMan, to facilitate interaction. Eyes and Hands models can be used to explore the role of interaction in human behavior. In this dissertation, three Eyes and Hands models were developed: (a) the Dismal model, which completed a spreadsheet task in the Dismal mode of Emacs; (b) the Biased-coin model, based on an existing two-choice experiment; and (c) the Excel model, which completed the spreadsheet task in the Excel task environment. I conducted two studies to investigate the model's visual attention and response time. In the first study, learners' eye movement data were recorded to predict learning. The results showed that with eye movement data, the learners' performance could be predicted correctly 76% of the time. Therefore, where users are looking is important and should be considered in the simulation. In the second study, participants' response times and eye movements were recorded. The Excel model was built upon this study. A simple Eyes and Hands Error model was built to demonstrate how the model's time is allocated to error detection, error correction, and different types of knowledge. The results suggested that further analysis is required to investigate human errors.
590
$a
School code: 0176.
650
4
$a
Human-computer interaction.
$3
555546
650
4
$a
Cognition & reasoning.
$3
1372461
650
4
$a
Design.
$3
595500
650
4
$a
Cognitive models.
$3
1372750
650
4
$a
Dissertations & theses.
$3
1372732
650
4
$a
Software.
$2
gtt
$3
574116
650
4
$a
Electrical engineering.
$3
596380
650
4
$a
Artificial intelligence.
$3
559380
650
4
$a
Computer science.
$3
573171
650
4
$a
Industrial engineering.
$3
679492
653
$a
Memory
653
$a
Visual modules
653
$a
Motor module
653
$a
Cognitive models
653
$a
Segmentation
653
$a
Screen manipulation
653
$a
Interaction
653
$a
Eyes and hands error
653
$a
Human error
653
$a
Simulated human behavior
690
$a
0984
690
$a
0800
690
$a
0546
690
$a
0544
690
$a
0389
710
2
$a
The Pennsylvania State University.
$3
845556
773
0
$t
Dissertations Abstracts International
$g
83-03A.
790
$a
0176
791
$a
Ph.D.
792
$a
2020
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28767677