Human Resource Professionals' Perceptions of Trust in Explainable Artificial Intelligence Hiring Software /
Record type:
Bibliographic - language material, printed : Monograph/item
Title / Author:
Human Resource Professionals' Perceptions of Trust in Explainable Artificial Intelligence Hiring Software / Jason Richard Powell.
Author:
Powell, Jason Richard,
Extent:
1 electronic resource (396 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 86-03, Section: B.
Contained By:
Dissertations Abstracts International 86-03B.
Subject:
Information technology.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31557236
ISBN:
9798384425595
LDR   03181nam a22004453i 4500
001   1157863
005   20250603111431.5
006   m o d
007   cr|nu||||||||
008   250804s2024 miu||||||m |||||||eng d
020   $a 9798384425595
035   $a (MiAaPQD)AAI31557236
035   $a AAI31557236
040   $a MiAaPQD $b eng $c MiAaPQD $e rda
100 1 $a Powell, Jason Richard, $e author. $3 1484146
245 1 0 $a Human Resource Professionals' Perceptions of Trust in Explainable Artificial Intelligence Hiring Software / $c Jason Richard Powell.
264 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2024
300   $a 1 electronic resource (396 pages)
336   $a text $b txt $2 rdacontent
337   $a computer $b c $2 rdamedia
338   $a online resource $b cr $2 rdacarrier
500   $a Source: Dissertations Abstracts International, Volume: 86-03, Section: B.
500   $a Advisors: Smiley, Garrett. Committee members: Braxton Castanze, Sherri; Sopko, Leila.
502   $b D.B.A. $c National University $d 2024.
520   $a The rapid integration of artificial intelligence (AI) in hiring processes, particularly in explainable AI (XAI) hiring software, presents both opportunities and challenges for managing human resources (HR). The problem this study addressed is that recruiters and HR professionals distrust the accuracy of data produced by AI and the lack of control provided by AI-powered software (Johnson et al., 2022). The purpose of this qualitative case study was to investigate recruiters' and HR professionals' perceptions of trust and control in XAI-capable hiring software, the factors contributing to their distrust, and how doubts impacted decision-making processes in recruitment. Employing the Technology Acceptance Model (TAM) and considering the Artificial Intelligence - Technology Acceptance Model (AI-TAM) as theoretical frameworks, this study explored HR professionals' decision-making processes, trust factors in XAI-driven recruitment software, and perceived transparency of AI algorithms. Through semi-structured interviews with 15 HR professionals across various organizations in the United States, thematic analysis revealed a complex interplay of trust and control perceptions. The findings highlighted the critical role of explainability and transparency in fostering trust, the need for control mechanisms to ensure the ethical use of AI in hiring, and the potential biases that may arise from reliance on AI systems. The study contributed to the literature on AI in HR by providing empirical insights into HR professionals' perceptions, underscoring the importance of developing XAI systems that are technologically advanced, ethically sound, and user-friendly.
546   $a English
590   $a School code: 1625
650  4 $a Information technology. $3 559429
653   $a Control
653   $a Explainable AI
653   $a Human resource management
653   $a Recruiting
653   $a Trust
653   $a Software
690   $a 0800
690   $a 0310
690   $a 0489
710 2 $a National University. $b School of Business. $3 1474271
720 1 $a Smiley, Garrett $e degree supervisor.
773 0 $t Dissertations Abstracts International $g 86-03B.
790   $a 1625
791   $a D.B.A.
792   $a 2024
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31557236