The Future of AI Can Be Kind: Strategies for Embedded Ethics in AI Education
Record Type:
Bibliographic - Language material, print : Monograph/item
Title/Author:
The Future of AI Can Be Kind: Strategies for Embedded Ethics in AI Education / Yim Register.
Author:
Register, Yim.
Description:
1 electronic resource (230 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 86-01, Section: B.
Contained By:
Dissertations Abstracts International, 86-01B.
Subject:
Science education.
Electronic Resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31144431
ISBN:
9798383219805
LDR 04744nam a22004453i 4500
001 1157759
005 20250603111409.5
006 m o d
007 cr|nu||||||||
008 250804s2024 miu||||||m |||||||eng d
020 $a 9798383219805
035 $a (MiAaPQD)AAI31144431
035 $a AAI31144431
040 $a MiAaPQD $b eng $c MiAaPQD $e rda
100 1 $a Register, Yim, $e author. $3 1484025
245 1 0 $a The Future of AI Can Be Kind: Strategies for Embedded Ethics in AI Education / $c Yim Register.
264 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2024
300 $a 1 electronic resource (230 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 86-01, Section: B.
500 $a Advisors: Spiro, Emma S. Committee members: West, Jevin; Pratt, Wanda; Zhang, Amy.
502 $b Ph.D. $c University of Washington $d 2024.
520 $a The field of Data Science has seen rapid growth over the past two decades, with high demand for people with skills in data analytics, programming, and statistics, and the ability to visualize, predict from, and otherwise make sense of data. Alongside the rise of various artificial intelligence (AI) and machine learning (ML) applications, we have also witnessed egregious algorithmic biases and harms - from discriminatory model outputs to the reinforcement of normative ideals about beauty, gender, race, class, etc. These harms range from high-profile cases, such as the racial bias embedded in the COMPAS recidivism algorithm, to more insidious cases of algorithmic harm that compound over time with re-traumatizing effects (such as the mental health impacts of recommender systems, social media content organization and the struggle for visibility, and discriminatory content moderation of marginalized individuals). There are various strategies to combat and repair algorithmic harms, ranging from algorithmic audits and fairness metrics to the AI Ethics Standards put forth by major institutions and tech companies. However, there is evidence to suggest that current Data Science curricula do not adequately prepare future practitioners to respond effectively to issues of algorithmic harm, especially the day-to-day issues that practitioners are likely to face. Through a review of AI Ethics standards and the literature, I devise a set of nine characterizations of effective AI ethics education: specific, prescriptivist, action-centered, relatable, empathetic, contextual, expansive, preventative, and integrated. The empirical work of this dissertation reveals the value of embedding ethical critique into technical machine learning instruction, demonstrating how teaching AI concepts using cases of algorithmic harm can boost both technical comprehension and ethical consideration [397, 398]. I demonstrate the value of drawing on real-world cases and experiences that students already have (such as hiring/admissions decisions, social media algorithms, or generative AI tools) to boost their learning of both technical and social-impact topics. I explore the relationship between personal relatability and experiential learning, demonstrating how to harness students' lived experiences to relate to cases of algorithmic harm and opportunities for repair. My preliminary work also reveals significant in-group favoritism, suggesting that students find AI errors more urgent when they personally relate to them. While this may prove beneficial for engaging underrepresented students in the classroom, it must be paired with empathy-building techniques for students who relate less to cases of algorithmic harm, as well as trauma-informed pedagogical practice. My results also revealed an over-reliance on "life-or-death reasoning" in ethical decision-making, along with organizational and financial pressures that might prevent AI professionals from delaying the release of harmful software. This dissertation contributes several strategies for effectively preparing Data Scientists to consider both the technical and social aspects of their work, along with empirical results suggesting the benefits of embedded ethics throughout all areas of AI education.
546 $a English
590 $a School code: 0250
650 4 $a Science education. $3 1151737
650 4 $a Computer science. $3 573171
650 4 $a Education. $3 555912
653 $a Computing education
653 $a Human computer interaction
653 $a Machine learning
653 $a Recidivism algorithm
653 $a Data analytics
690 $a 0800
690 $a 0515
690 $a 0984
690 $a 0714
710 2 $a University of Washington. $b Information School. $3 1179205
720 1 $a Spiro, Emma S. $e degree supervisor.
773 0 $t Dissertations Abstracts International $g 86-01B.
790 $a 0250
791 $a Ph.D.
792 $a 2024
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31144431
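
For reference, a record like the one above can also be read programmatically. The following is a minimal sketch using the open-source pymarc library; it assumes the record has been exported from the catalog as a binary MARC file named record.mrc (the filename and the choice of fields are illustrative, not part of this record's source system).

    from pymarc import MARCReader

    # Read the exported record and pull out a few of the fields shown above.
    with open("record.mrc", "rb") as fh:  # hypothetical export of this record
        for record in MARCReader(fh):
            title = record["245"]["a"]    # 245 $a: title proper
            author = record["100"]["a"]   # 100 $a: personal-name main entry
            isbn = record["020"]["a"]     # 020 $a: ISBN
            # 650 = topical subject headings, 653 = uncontrolled index terms
            subjects = [f["a"] for f in record.get_fields("650", "653")]
            print(title, "/", author, "ISBN:", isbn)
            print("Subjects:", "; ".join(subjects))

Indexing a record by tag (record["245"]) returns the first matching field, and indexing a field by subfield code (["a"]) returns that subfield's value, which is sufficient for non-repeating fields such as 245 and 100; get_fields() is used for the repeatable 650/653 fields.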