Human-Ai Interaction Under Societal Disagreement.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Human-Ai Interaction Under Societal Disagreement. /
Author:
Gordon, Mitchell Louis.
Physical description:
1 online resource (132 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-06, Section: A.
Contained By:
Dissertations Abstracts International, 85-06A.
Subject:
Information science.
Electronic resource:
click for full text (PQDT)
ISBN:
9798381018110
Human-Ai Interaction Under Societal Disagreement.
Gordon, Mitchell Louis.
Human-Ai Interaction Under Societal Disagreement. - 1 online resource (132 pages)
Source: Dissertations Abstracts International, Volume: 85-06, Section: A.
Thesis (Ph.D.)--Stanford University, 2023.
Includes bibliographical references
Whose voices - whose labels - should a machine learning algorithm learn to emulate? For AI tasks ranging from online comment toxicity detection to poster design to medical treatment, different groups in society may have irreconcilable disagreements about what constitutes ground truth. Today's supervised machine learning pipeline typically resolves these disagreements implicitly by majority vote over annotators' opinions. This majoritarian procedure abstracts individual people out of the pipeline and collapses their labels into an aggregate pseudo-human, ignoring minority groups' labels.

In this dissertation, I will present Jury Learning: an interactive AI architecture that enables developers to explicitly reason over whose voice a model ought to emulate through the metaphor of a jury. Through my exploratory interface, practitioners can declaratively define which people or groups, in what proportion, determine the classifier's prediction. To evaluate models under societal disagreement, I will also present The Disagreement Deconvolution: a metric transformation showing how, in abstracting away the individual people that models impact, current metrics dramatically overstate the performance of many user-facing tasks. These components become building blocks of a new pipeline for encoding our goals and values in human-AI systems, which strives to bridge principles of HCI with the realities of machine learning.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2024
Mode of access: World Wide Web
ISBN: 9798381018110
Subjects--Topical Terms: Information science.
Index Terms--Genre/Form: Electronic books.
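The abstract above describes Jury Learning and the Disagreement Deconvolution only at a high level. The short Python sketch below illustrates the jury idea under stated assumptions: the group names, the annotator pool, the predict_annotator_score stub, and the median-of-trial-means aggregation are hypothetical placeholders for illustration, not the dissertation's actual models or interface.

# Illustrative sketch of a jury-style prediction, assuming a declaratively
# specified jury composition and some per-annotator scoring model.
import random
from dataclasses import dataclass

@dataclass
class JurySpec:
    composition: dict      # jurors drawn per group, e.g. {"group_a": 6, "group_b": 6}
    n_trials: int = 20     # resample the jury to smooth sampling variance

def predict_annotator_score(annotator_id, text):
    # Stand-in for a model that predicts how this specific annotator would
    # label `text` (e.g., a toxicity score in [0, 1]). A deterministic stub
    # so the sketch runs end to end without any trained model.
    return random.Random(hash((annotator_id, text))).random()

def jury_predict(text, spec, annotator_pool):
    # Compose juries according to the declared group proportions, score each
    # juror, and aggregate: mean within a jury, median across resampled juries.
    trial_means = []
    for _ in range(spec.n_trials):
        jurors = []
        for group, k in spec.composition.items():
            jurors += random.sample(annotator_pool[group], k)
        votes = [predict_annotator_score(a, text) for a in jurors]
        trial_means.append(sum(votes) / len(votes))
    trial_means.sort()
    return trial_means[len(trial_means) // 2]

if __name__ == "__main__":
    pool = {"group_a": [f"a{i}" for i in range(50)],
            "group_b": [f"b{i}" for i in range(50)]}
    spec = JurySpec(composition={"group_a": 6, "group_b": 6})
    print(jury_predict("an example online comment", spec, pool))

The stub stands in for whatever model produces per-annotator predictions, and the median-of-means aggregation is likewise only one plausible choice; the point of the sketch is that the jury composition, not a majority vote over all annotators, determines the prediction.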
Human-Ai Interaction Under Societal Disagreement.
LDR :02709ntm a22003617 4500
001 1145333
005 20240618081821.5
006 m o d
007 cr mn ---uuuuu
008 250605s2023 xx obm 000 0 eng d
020 $a 9798381018110
035 $a (MiAaPQ)AAI30726818
035 $a (MiAaPQ)STANFORDxf168rn2553
035 $a AAI30726818
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Gordon, Mitchell Louis. $3 1470623
245 1 0 $a Human-Ai Interaction Under Societal Disagreement.
264 0 $c 2023
300 $a 1 online resource (132 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 85-06, Section: A.
500 $a Advisor: Landay, James; Hashimoto, Tatsunori; Bernstein, Michael.
502 $a Thesis (Ph.D.)--Stanford University, 2023.
504 $a Includes bibliographical references
520 $a Whose voices - whose labels - should a machine learning algorithm learn to emulate? For AI tasks ranging from online comment toxicity detection to poster design to medical treatment, different groups in society may have irreconcilable disagreements about what constitutes ground truth. Today's supervised machine learning pipeline typically resolves these disagreements implicitly by majority vote over annotators' opinions. This majoritarian procedure abstracts individual people out of the pipeline and collapses their labels into an aggregate pseudo-human, ignoring minority groups' labels. In this dissertation, I will present Jury Learning: an interactive AI architecture that enables developers to explicitly reason over whose voice a model ought to emulate through the metaphor of a jury. Through my exploratory interface, practitioners can declaratively define which people or groups, in what proportion, determine the classifier's prediction. To evaluate models under societal disagreement, I will also present The Disagreement Deconvolution: a metric transformation showing how, in abstracting away the individual people that models impact, current metrics dramatically overstate the performance of many user-facing tasks. These components become building blocks of a new pipeline for encoding our goals and values in human-AI systems, which strives to bridge principles of HCI with the realities of machine learning.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538 $a Mode of access: World Wide Web
650 4 $a Information science. $3 561178
650 4 $a Chatbots. $3 1454898
650 4 $a False information. $3 1467504
650 4 $a Society. $2 eflch $3 934844
650 4 $a Decision making. $3 528319
650 4 $a Recommender systems. $3 1372552
650 4 $a Content management. $3 1470625
650 4 $a Computer science. $3 573171
650 4 $a Juries. $3 1470624
655 7 $a Electronic books. $2 local $3 554714
690 $a 0984
690 $a 0800
690 $a 0723
690 $a 0454
710 2 $a Stanford University. $3 1184533
710 2 $a ProQuest Information and Learning Co. $3 1178819
773 0 $t Dissertations Abstracts International $g 85-06A.
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30726818 $z click for full text (PQDT)
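Each line of the MARC view above is a field tag, optional indicators, and $-prefixed subfields. The small sketch below is a hypothetical helper, not pymarc or any standard MARC reader, showing how one line of this textual display splits into those parts.

# Illustrative parser for one line of the MARC display above. It handles only
# this textual display (data fields containing at least one $ subfield), not
# binary MARC 21 records; control fields such as 001 would need separate handling.
def parse_marc_display_line(line):
    head, _, rest = line.partition("$")
    head_parts = head.split()
    tag, indicators = head_parts[0], head_parts[1:]
    subfields = []
    for chunk in ("$" + rest).split("$")[1:]:
        code, _, value = chunk.partition(" ")
        subfields.append((code, value.strip()))
    return {"tag": tag, "indicators": indicators, "subfields": subfields}

print(parse_marc_display_line("650 4 $a Information science. $3 561178"))
# -> {'tag': '650', 'indicators': ['4'],
#     'subfields': [('a', 'Information science.'), ('3', '561178')]}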