Explainable Artificial Intelligence for Enhancing Transparency in Decision Support Systems /
Record type:
Bibliographic record - language material, printed : Monograph/item
Title / Author:
Explainable Artificial Intelligence for Enhancing Transparency in Decision Support Systems / Mir Riyanul Islam.
Author:
Islam, Mir Riyanul,
Physical description:
1 electronic resource (101 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-10, Section: B.
Contained By:
Dissertations Abstracts International, 85-10B.
Subject:
Computer engineering.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31273617
ISBN:
9798382218977
LDR
:10673nam a22004333i 4500
001
1157788
005
20250603111415.5
006
m o d
007
cr|nu||||||||
008
250804s2024 miu||||||m |||||||eng d
020
$a
9798382218977
035
$a
(MiAaPQD)AAI31273617
035
$a
AAI31273617
040
$a
MiAaPQD
$b
eng
$c
MiAaPQD
$e
rda
100
1
$a
Islam, Mir Riyanul,
$e
author.
$3
1484060
245
1 0
$a
Explainable Artificial Intelligence for Enhancing Transparency in Decision Support Systems /
$c
Mir Riyanul Islam.
264
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2024
300
$a
1 electronic resource (101 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertations Abstracts International, Volume: 85-10, Section: B.
500
$a
Advisors: Ahmed, Mobyen; Begum, Shahina.
502
$b
Ph.D.
$c
Mälardalen University (Sweden)
$d
2024.
520
$a
Artificial Intelligence (AI) is recognized as an advanced technology that assists decision-making with high accuracy and precision. However, many AI models are regarded as black boxes because they rely on complex inference mechanisms. How and why these models reach a decision is often not comprehensible to human users, raising concerns about the acceptability of their decisions. Previous studies have shown that decisions lacking an accompanying explanation in a human-understandable form are unacceptable to end-users. The research domain of Explainable AI (XAI) provides a wide range of methods with the common theme of investigating how AI models reach a decision and of explaining it. These explanation methods aim to enhance transparency in Decision Support Systems (DSS), which is particularly crucial in safety-critical domains such as Road Safety (RS) and Air Traffic Flow Management (ATFM). Despite ongoing developments, DSSs for safety-critical applications are still evolving. Improved transparency, facilitated by XAI, emerges as a key enabler for making these systems operationally viable in real-world applications and for addressing acceptability and trust issues. Moreover, certification authorities are less likely to approve such systems for general use under the current Right to Explanation mandate from the European Commission and similar directives from organisations across the world. This need to permeate existing systems with explanations paves the way for research on XAI centred on DSSs.
To this end, this thesis primarily develops explainable models for the application domains of RS and ATFM. In particular, explainable models are developed for assessing drivers' in-vehicle mental workload and driving behaviour through classification and regression tasks. In addition, a novel method is proposed for generating a hybrid feature set from vehicular and electroencephalography (EEG) signals using mutual information (MI). This feature set is shown to reduce the effort required for the complex computations of EEG feature extraction. The concept of MI is further used to generate human-understandable explanations of mental workload classification. For the ATFM domain, an explainable model for predicting flight take-off time delay from historical flight data is developed and presented. The insights gained from developing and evaluating the explainable applications for the two domains underscore the need for further research on advancing XAI methods.
In this doctoral research, the explainable applications for the DSSs are developed with additive feature attribution (AFA) methods, a class of XAI methods that is popular in current XAI research. Several sources in the literature, however, assert that feature attribution methods often yield inconsistent results and therefore require sound evaluation. The existing body of literature on evaluation techniques is still immature, offering numerous suggested approaches without a standardized consensus on their optimal application in various scenarios. To address this issue, comprehensive evaluation criteria are also developed for AFA methods, as the XAI literature suggests. The proposed evaluation process considers the underlying characteristics of the data and utilizes an additive form of Case-Based Reasoning, namely AddCBR.
AddCBR is proposed in this thesis and is demonstrated to complement the evaluation process as the baseline against which the feature attributions produced by the AFA methods are compared. Beyond explanations based on feature attribution, this thesis also proposes iXGB (interpretable XGBoost). iXGB generates decision rules and counterfactuals to support the output of an XGBoost model, thus improving its interpretability. Functional evaluation shows that iXGB has the potential to be used for interpreting arbitrary tree-ensemble methods. In essence, this doctoral thesis first contributes the development of thoroughly evaluated explainable models tailored to two distinct safety-critical domains, with the aim of augmenting transparency within the corresponding DSSs. Additionally, the thesis introduces novel methods for generating more comprehensible explanations in different forms, surpassing existing approaches, and showcases a robust evaluation approach for XAI methods.
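The abstract above mentions building a hybrid feature set from vehicular and EEG signals using mutual information (MI). The thesis' actual procedure is not reproduced in this record; the following is a minimal, hypothetical sketch of MI-based feature scoring with scikit-learn, using made-up feature names and a toy workload label purely for illustration.

# Hypothetical sketch: rank EEG and vehicular features by mutual information
# with a mental-workload label, then keep the top-scoring mix as a "hybrid" set.
# Feature names and the selection rule are illustrative, not the thesis' method.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "eeg_alpha_power":  rng.normal(size=n),   # EEG-derived feature (illustrative)
    "eeg_theta_power":  rng.normal(size=n),
    "steering_entropy": rng.normal(size=n),   # vehicular feature (illustrative)
    "lane_deviation":   rng.normal(size=n),
})
y = (X["eeg_theta_power"] + X["steering_entropy"] > 0).astype(int)  # toy label

mi = mutual_info_classif(X, y, random_state=0)          # MI score per feature
scores = pd.Series(mi, index=X.columns).sort_values(ascending=False)
hybrid_features = scores.head(3).index.tolist()          # keep the top-ranked mix
print(scores)
print(hybrid_features)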
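The abstract also refers to additive feature attribution (AFA) methods and to AddCBR as a comparison baseline. Neither AddCBR nor the thesis' evaluation criteria are documented in this record; as a generic, hedged illustration of the additive property that defines AFA explanations (a base value plus per-feature attributions reconstructs the model output), here is a toy linear example.

# Toy illustration of the additive feature attribution (AFA) property:
# for a linear model, exact attributions are w_i * (x_i - E[x_i]), and
# base_value + sum(attributions) equals the model's prediction.
# This is a generic example, not AddCBR or any method from the thesis.
import numpy as np

w = np.array([0.8, -1.5, 0.3])            # model weights (illustrative)
b = 0.5                                    # model intercept
X_background = np.random.default_rng(1).normal(size=(500, 3))
x = np.array([1.0, 0.2, -0.7])             # instance to explain

predict = lambda X: X @ w + b
base_value = predict(X_background).mean()            # expected model output
attributions = w * (x - X_background.mean(axis=0))    # exact per-feature contributions

# Additivity check: attributions explain the gap between prediction and baseline.
assert np.isclose(base_value + attributions.sum(), predict(x))
print(base_value, attributions, predict(x))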
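Finally, the abstract states that iXGB derives decision rules and counterfactuals to support an XGBoost model's output. iXGB itself is not described in this record; as a loose, assumption-laden sketch of the general "rules from trees" idea, the snippet below turns the decision path of a single scikit-learn tree (a stand-in for one member of a tree ensemble) into an IF-THEN rule for one instance.

# Hedged sketch: convert the decision path of a fitted tree into a readable rule
# for a single instance. This is not iXGB and not the thesis' algorithm.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]   # illustrative names

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
x = X[0:1]                                              # instance to explain

node_ids = tree.decision_path(x).indices                # nodes visited by this instance
conditions = []
for node in node_ids:
    feat = tree.tree_.feature[node]
    if feat < 0:                                        # leaf node: no split condition
        continue
    thr = tree.tree_.threshold[node]
    op = "<=" if x[0, feat] <= thr else ">"
    conditions.append(f"{feature_names[feat]} {op} {thr:.2f}")

rule = "IF " + " AND ".join(conditions) + f" THEN predict {tree.predict(x)[0]:.2f}"
print(rule)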
546
$a
English
590
$a
School code: 3209
650
4
$a
Computer engineering.
$3
569006
653
$a
Road Safety
653
$a
Air Traffic Flow Management
653
$a
Decision Support Systems
653
$a
Electroencephalography
690
$a
0464
690
$a
0800
710
2
$a
Mälardalen University (Sweden).
$3
1465931
720
1
$a
Ahmed, Mobyen
$e
degree supervisor.
720
1
$a
Begum, Shahina
$e
degree supervisor.
773
0
$t
Dissertations Abstracts International
$g
85-10B.
790
$a
3209
791
$a
Ph.D.
792
$a
2024
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31273617