Privacy-Fairness : It's Complicated : An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems.
Record type: Bibliographic - Language material, manuscript : Monograph/item
Title/Author: Privacy-Fairness :
Other title: It's Complicated : An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems.
Author: Jang, Eunbee.
Physical description: 1 online resource (76 pages)
Notes: Source: Masters Abstracts International, Volume: 84-05.
Contained By: Masters Abstracts International, 84-05.
Subject: Personal information.
Electronic resource: click for full text (PQDT)
ISBN: 9798352991749
Jang, Eunbee.
Privacy-Fairness : It's Complicated : An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems. - 1 online resource (76 pages)
Source: Masters Abstracts International, Volume: 84-05.
Thesis (M.Sc.)--McGill University (Canada), 2022.
Includes bibliographical references
Any machine learning software driven by data involving sensitive information should be carefully designed so that primary concerns related to human values and rights are sufficiently protected. Privacy and fairness are two of the most prominent concerns in machine learning, attracting attention from both academia and industry. However, they are often addressed independently, although they both heavily rely on the data distribution around sensitive personal attributes. Such a split approach cannot adequately attend to the reality that assessments made along either the privacy or the fairness dimension can complicate decisions about the other. As a starting point, we present an empirical study to characterize differential privacy's utility and fairness impact on inferential systems. In particular, we investigate whether various differential privacy techniques applied to different parts of the machine learning pipeline pose any risks to group fairness. Our analysis reveals a convoluted picture of the interplay between privacy and fairness, particularly the sensitivity of commonly used fairness metrics, and shows that unfair outcomes are sometimes an unavoidable side effect of differential privacy. We further discuss essential factors to consider when designing a fairness evaluation on differentially private systems.
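To make the abstract's claim concrete, the following is a minimal, hypothetical Python sketch, not taken from the thesis; all function names, data, and parameter values are illustrative assumptions. It shows one way differential privacy can distort a group fairness measurement: per-group positive-prediction rates are released through the Laplace mechanism, and the resulting demographic parity gap is compared against the non-private one.

import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    # Release a statistic with epsilon-differential privacy by adding
    # Laplace(sensitivity / epsilon) noise (illustrative, not from the thesis).
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between group 0 and group 1.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(seed=0)

# Toy binary predictions and a binary sensitive attribute (purely illustrative).
y_pred = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

print("non-private gap:", demographic_parity_gap(y_pred, group))

# Release each group's positive rate privately; with group membership fixed,
# a rate over n_g records changes by at most 1/n_g when one record changes,
# so 1/n_g is used as the sensitivity here.
for epsilon in (0.1, 1.0, 10.0):
    noisy = []
    for g in (0, 1):
        mask = group == g
        rate = y_pred[mask].mean()
        noisy.append(laplace_mechanism(rate, 1.0 / mask.sum(), epsilon, rng))
    print(f"epsilon={epsilon}: private gap = {abs(noisy[0] - noisy[1]):.4f}")

Under these assumptions, at small epsilon the injected noise can dominate the true gap, illustrating one mechanism by which privacy protection and fairness assessment can come into tension.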
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2024
Mode of access: World Wide Web
ISBN: 9798352991749
Subjects--Topical Terms: Personal information.
Index Terms--Genre/Form: Electronic books.
LDR 04339ntm a22004097 4500
001 1142532
005 20240422070856.5
006 m o d
007 cr mn ---uuuuu
008 250605s2022 xx obm 000 0 eng d
020 $a 9798352991749
035 $a (MiAaPQ)AAI30157867
035 $a (MiAaPQ)McGill_cv43p3239
035 $a AAI30157867
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Jang, Eunbee. $3 1466892
245 1 0 $a Privacy-Fairness : $b It's Complicated : An Unavoidable Unfairness of Differentially Privacy on Machine Learning Systems.
264 0 $c 2022
300 $a 1 online resource (76 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Masters Abstracts International, Volume: 84-05.
500 $a Advisor: Moon, AJung ; Guo, Jin L. C.
502 $a Thesis (M.Sc.)--McGill University (Canada), 2022.
504 $a Includes bibliographical references
520 $a Any machine learning software driven by data involving sensitive information should be carefully designed so that primary concerns related to human values and rights are sufficiently protected. Privacy and fairness are two of the most prominent concerns in machine learning, attracting attention from both academia and industry. However, they are often addressed independently, although they both heavily rely on the data distribution around sensitive personal attributes. Such a split approach cannot adequately attend to the reality that assessments made along either the privacy or the fairness dimension can complicate decisions about the other. As a starting point, we present an empirical study to characterize differential privacy's utility and fairness impact on inferential systems. In particular, we investigate whether various differential privacy techniques applied to different parts of the machine learning pipeline pose any risks to group fairness. Our analysis reveals a convoluted picture of the interplay between privacy and fairness, particularly the sensitivity of commonly used fairness metrics, and shows that unfair outcomes are sometimes an unavoidable side effect of differential privacy. We further discuss essential factors to consider when designing a fairness evaluation on differentially private systems.
520 $a Tout logiciel d'apprentissage automatique piloté par des données impliquant des informations sensibles doit être conçu avec soin afin que les principales préoccupations liées aux valeurs et aux droits de l'homme soient suffisamment protégées. Le respect de la vie privée et l'équité sont deux des préoccupations les plus importantes de l'apprentissage automatique qui attirent les universitaires et les industriels. Cependant, elles sont souvent abordées indépendamment, bien qu'elles dépendent toutes deux fortement de la distribution des données autour d'attributs personnels sensibles. Une telle approche séparée ne peut pas répondre adéquatement à la réalité dans la mesure où les évaluations faites en même temps que la confidentialité ou l'équité pourraient compliquer la décision de l'autre. Comme point de départ, nous présentons une étude empirique pour caractériser l'impact de la confidentialité différentielle sur l'utilité et l'équité des systèmes inférentiels. En particulier, nous examinons si diverses techniques de confidentialité différentielle appliquées à différentes parties du pipeline d'apprentissage automatique présentent des risques pour l'équité du groupe. Notre analyse révèle l'image alambiquée de l'interaction entre la confidentialité et l'équité, en particulier sur la sensibilité des mesures d'équité couramment utilisées et montre que les résultats injustes sont parfois un effet secondaire inévitable de la confidentialité différentielle. Nous discutons également des facteurs essentiels à prendre en compte lors de la conception d'une évaluation de l'équité sur des systèmes à confidentialité différentielle.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538 $a Mode of access: World Wide Web
650 4 $a Personal information. $3 1466893
650 4 $a Decision making. $3 528319
650 4 $a Gender. $3 1214940
650 4 $a Design. $3 595500
650 4 $a Algorithms. $3 527865
650 4 $a Privacy. $3 575491
650 4 $a Teachers. $3 881385
650 4 $a Education. $3 555912
650 4 $a Computer science. $3 573171
650 4 $a Finance. $3 559073
655 7 $a Electronic books. $2 local $3 554714
690 $a 0770
690 $a 0389
690 $a 0515
690 $a 0800
690 $a 0984
690 $a 0508
690 $a 0338
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a McGill University (Canada). $3 845629
773 0 $t Masters Abstracts International $g 84-05.
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30157867 $z click for full text (PQDT)