Enhancing Expressivity of Document-Centered Collaboration with Multimodal Annotations.
Record type:
Bibliographic - language material, manuscript : Monograph/item
Title proper/author:
Enhancing Expressivity of Document-Centered Collaboration with Multimodal Annotations./
Author:
Yoon, Dongwook.
Physical description:
1 online resource (205 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 79-02(E), Section: A.
Contained By:
Dissertation Abstracts International, 79-02A(E).
Subject:
Information science.
Electronic resource:
click for full text (PQDT)
ISBN:
9780355281019
Yoon, Dongwook.
Enhancing Expressivity of Document-Centered Collaboration with Multimodal Annotations. - 1 online resource (205 pages)
Source: Dissertation Abstracts International, Volume: 79-02(E), Section: A.
Thesis (Ph.D.)
Includes bibliographical references
As knowledge work moves online, digital documents have become a staple of human collaboration. To communicate beyond the constraints of time and space, remote and asynchronous collaborators create digital annotations over documents, substituting face-to-face meetings with online conversations. However, existing document annotation interfaces depend primarily on text commenting, which is not as expressive or nuanced as in-person communication where interlocutors can speak and gesture over physical documents. To expand the communicative capacity of digital documents, we need to enrich annotation interfaces with face-to-face-like multimodal expressions (e.g., talking and pointing over texts). This thesis makes three major contributions toward multimodal annotation interfaces for enriching collaboration around digital documents.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9780355281019
Subjects--Topical Terms:
Information science.
Index Terms--Genre/Form:
Electronic books.
LDR 04093ntm a2200361Ki 4500
001 910539
005 20180517123957.5
006 m o u
007 cr mn||||a|a||
008 190606s2017 xx obm 000 0 eng d
020 $a 9780355281019
035 $a (MiAaPQ)AAI10615393
035 $a (MiAaPQ)cornellgrad:10426
035 $a AAI10615393
040 $a MiAaPQ $b eng $c MiAaPQ
099 $a TUL $f hyy $c available through World Wide Web
100 1 $a Yoon, Dongwook. $3 1181879
245 1 0 $a Enhancing Expressivity of Document-Centered Collaboration with Multimodal Annotations.
264 0 $c 2017
300 $a 1 online resource (205 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertation Abstracts International, Volume: 79-02(E), Section: A.
500 $a Adviser: Francois V. Guimbretiere.
502 $a Thesis (Ph.D.) $c Cornell University $d 2017.
504 $a Includes bibliographical references
520 $a As knowledge work moves online, digital documents have become a staple of human collaboration. To communicate beyond the constraints of time and space, remote and asynchronous collaborators create digital annotations over documents, substituting face-to-face meetings with online conversations. However, existing document annotation interfaces depend primarily on text commenting, which is not as expressive or nuanced as in-person communication where interlocutors can speak and gesture over physical documents. To expand the communicative capacity of digital documents, we need to enrich annotation interfaces with face-to-face-like multimodal expressions (e.g., talking and pointing over texts). This thesis makes three major contributions toward multimodal annotation interfaces for enriching collaboration around digital documents.
520 $a The first contribution is a set of design requirements for multimodal annotations drawn from our user studies and explorative literature surveys. We found that the major challenges were to support lightweight access to recorded voice, to control visual occlusions of graphically rich audio interfaces, and to reduce speech anxiety in voice comment production. Second, to address these challenges, we present RichReview, a novel multimodal annotation system. RichReview is designed to capture natural communicative expressions in face-to-face document descriptions as the combination of multimodal user inputs (e.g., speech, pen-writing, and deictic pen-hovering). To balance the consumption and production of speech comments, the system employs (1) cross-modal indexing interfaces for faster audio navigation, (2) fluid document-annotation layout for reduced visual clutter, and (3) voice synthesis-based speech editing for reduced speech anxiety. The third contribution is a series of evaluations that examines the effectiveness of our design solutions. Results of our lab studies show that RichReview can successfully address the above mentioned interface problems of multimodal annotations. A subsequent series of field deployment studies test the real-world efficacy of RichReview by deploying the system for document-centered conversation activities in classrooms, such as instructor feedback for student assignments and peer discussions about course material. The results suggest that using rich annotation helps students better understand the instructor's comments, and makes them feel more valued as a person. From the results of the peer-discussion study, we learned that retaining the richness of original speech is the key to the success of speech commenting. What follows is the discussion on the benefits, challenges, and future of multimodal annotation interfaces, and technical innovations required to realize the vision.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 $a Mode of access: World Wide Web
650 4 $a Information science. $3 561178
650 4 $a Educational technology. $3 556755
655 7 $a Electronic books. $2 local $3 554714
690 $a 0723
690 $a 0710
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a Cornell University. $b Information Science. $3 1179518
773 0 $t Dissertation Abstracts International $g 79-02A(E).
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10615393 $z click for full text (PQDT)
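The MARC fields in this record pair a three-digit tag with `$`-coded subfields (e.g., `$a` for the main value, `$3` for an authority record number). As a minimal sketch in plain Python (no MARC library; the exact display-line layout assumed here is an illustration, not a standard serialization), one such human-readable line could be split into its parts:

```python
import re

def parse_marc_display_line(line):
    """Parse one line of a human-readable MARC dump into
    (tag, indicators, {subfield_code: value}).

    A sketch for display lines of the form 'TAG IND $a value $b value ...';
    it does not handle the leader or control fields (LDR, 001-008), which
    carry no subfields.
    """
    tag = line[:3]
    body = line[3:]
    # Everything before the first '$' is the indicator area (may be blank).
    indicators = body.split('$', 1)[0].strip()
    # Each subfield starts with '$' plus a one-character code.
    subfields = {m.group(1): m.group(2).strip()
                 for m in re.finditer(r'\$(\w)\s([^$]*)', body)}
    return tag, indicators, subfields

tag, ind, sf = parse_marc_display_line('100 1 $a Yoon, Dongwook. $3 1181879')
# tag == '100', sf['a'] == 'Yoon, Dongwook.', sf['3'] == '1181879'
```

For production use, a dedicated library such as pymarc would be the usual choice, since real MARC data is exchanged in a binary (ISO 2709) or XML serialization rather than this display form.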