An Ontology-Enabled Approach for User-Centered and Knowledge-Enabled Explanations of AI Systems /
Record type:
Bibliographic record - Language material, printed : Monograph/item
Title proper/Author:
An Ontology-Enabled Approach for User-Centered and Knowledge-Enabled Explanations of AI Systems / Shruthi Chari.
Author:
Chari, Shruthi,
Physical description:
1 electronic resource (181 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 86-04, Section: B.
Contained By:
Dissertations Abstracts International, 86-04B.
Subject:
Computer science.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31483369
ISBN:
9798384466598
Abstract:
Explainability has been a well-studied problem in the Artificial Intelligence (AI) community through several AI ages, ranging from expert systems to the current deep learning era, to enable AI's safe and robust use. Through the ages, the unique nature of AI approaches and their applications have required explainability approaches to evolve as well. However, these multiple iterations of explaining AI decisions have all focused on helping humans better understand and analyze the results and workings of AI systems.

In this thesis, we seek to further the user-centered explainability sub-field of AI by addressing several challenges around explanations in the current AI era, which is characterized by the availability of many machine learning (ML) explainers and neuro-symbolic AI approaches. Previous research in user-centered explainability has mainly focused on what needs to be explained and less on how to implement such explanations. Additionally, supporting explanations in a manner that humans can easily interpret remains challenging, owing to the lack of a unified framework for different explanation types and of methods to incorporate domain knowledge from authoritative literature. We address three challenges, or research questions, around user-centered explainability: How can we formally represent explanations with support for interacting AI systems (AI methods in an applied ecosystem), additional data sources, and different explanation dimensions? How useful and feasible are such explanations in clinical settings? Is it feasible to combine explanations from multiple data modalities and AI methods?

For the first research question, we design an Explanation Ontology (EO), a general-purpose semantic representation that can represent fifteen literature-derived explanation types via their system-, interface-, and user-related components. We demonstrate the utility of the EO in representing explanations across different use cases, helping system designers answer explanation-related questions via a set of competency questions, and categorizing explanations into the explanation types supported within our ontology.

For the second research question, we focus on a key explanation dimension, contextual explanations, and conduct a case study on supporting contextual explanations from an authoritative knowledge source, clinical practice guidelines (CPGs). Here, we design a clinical question-answering (QA) system that addresses CPG questions and provides contextual explanations to help clinicians interpret risk prediction scores and their post hoc explanations in a comorbidity risk prediction setting. For the QA system, we leverage large language models (LLMs) and their clinical variants and implement knowledge augmentations to these models to improve the semantic coherence of the answers. We evaluate both the feasibility and the value of supporting these contextual explanations. For feasibility, we use quantitative metrics to report the performance of the QA system across different model settings and data splits. To evaluate the value of the explanations, we report findings from presenting the results of our QA approach to an expert panel of clinicians.

Finally, for the last research question, we design a general-purpose, open-source framework, the Metaexplainer, capable of providing natural-language explanations for a user question from the explainer methods registered to generate explanations of a particular type. The Metaexplainer is a modular, three-stage (Decompose, Delegate, and Synthesis) framework in which each stage produces intermediate outputs that the next stage ingests. In the Decompose stage, we take user questions as input, identify which explanation type can best address them, and generate actionable machine interpretations; in the Delegate stage, we run the explainer methods registered for the identified explanation type, passing along any filters derived from the question; and finally, in the Synthesis stage, we generate natural-language explanations following the explanation type's template. For the Metaexplainer, we leverage LLMs, the EO, and explainer methods to generate user-centered explanations in response to user questions. We evaluate the Metaexplainer on open-source tabular datasets, but the framework can be applied to other modalities with code adaptations.

Overall, through this thesis, we aim to design methods that can support knowledge-enabled explanations across different use cases, accounting for the methods in today's AI era that can generate the supporting components of these explanations and the domain knowledge sources that can enhance them. We demonstrate the efficacy of our approach in two clinical use cases as case studies but design our methods to be applicable outside of healthcare as well. By implementing approaches for knowledge-enabled explainability that leverage the strengths of symbolic and neural AI, we take a step towards user-centered explainability that helps humans interpret and understand AI decisions from different perspectives.
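The Explanation Ontology described in the abstract models explanation types through their system-, interface-, and user-related components and supports competency questions over them. As a rough illustration only, the Python/rdflib sketch below encodes one such explanation type and runs a toy competency-question-style query; the namespace IRI and every property name (dependsOnSystemRecommendation, basedOnKnowledge, addressesUserQuestion) are hypothetical placeholders invented for this sketch, not the EO's actual vocabulary.

```python
# Minimal sketch (not the actual Explanation Ontology) of modeling an
# explanation type via its system-, knowledge-, and user-related components.
# Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EO = Namespace("http://example.org/explanation-ontology#")  # hypothetical IRI

g = Graph()
g.bind("eo", EO)

# Declare a contextual-explanation type as a subclass of a generic Explanation.
g.add((EO.ContextualExplanation, RDF.type, RDFS.Class))
g.add((EO.ContextualExplanation, RDFS.subClassOf, EO.Explanation))
g.add((EO.ContextualExplanation, RDFS.label, Literal("Contextual Explanation")))

# Tie the type to its components (all property names are hypothetical).
g.add((EO.ContextualExplanation, EO.dependsOnSystemRecommendation, EO.RiskPredictionScore))
g.add((EO.ContextualExplanation, EO.basedOnKnowledge, EO.ClinicalPracticeGuideline))
g.add((EO.ContextualExplanation, EO.addressesUserQuestion, EO.ClinicianQuestion))

# Toy competency-question-style query: which explanation types are grounded
# in clinical practice guidelines?
COMPETENCY_QUERY = """
PREFIX eo: <http://example.org/explanation-ontology#>
SELECT ?etype WHERE { ?etype eo:basedOnKnowledge eo:ClinicalPracticeGuideline . }
"""
for row in g.query(COMPETENCY_QUERY):
    print(row[0])  # -> ...#ContextualExplanation
```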
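The Metaexplainer's Decompose, Delegate, and Synthesis flow can likewise be sketched in a few lines. The following is an illustrative approximation, not the thesis's open-source implementation: the keyword-based decomposition stands in for the LLM- and ontology-driven question analysis, and MachineInterpretation, EXPLAINER_REGISTRY, and the registered lambdas are hypothetical names invented for this sketch.

```python
# Minimal sketch of a three-stage Metaexplainer-style pipeline (illustrative only).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class MachineInterpretation:
    """Intermediate output of the Decompose stage, ingested by Delegate."""
    explanation_type: str
    filters: Dict[str, str]


# Hypothetical registry mapping explanation types to explainer methods.
EXPLAINER_REGISTRY: Dict[str, List[Callable[[MachineInterpretation], str]]] = {
    "contrastive": [lambda mi: "feature X raised the risk score; lowering it flips the outcome"],
    "contextual": [lambda mi: "guideline passage relevant to the question"],
}


def decompose(question: str) -> MachineInterpretation:
    """Decompose: pick the explanation type that best addresses the question.
    (Keyword heuristic standing in for the LLM + ontology lookup.)"""
    qtype = "contrastive" if "why not" in question.lower() else "contextual"
    return MachineInterpretation(explanation_type=qtype, filters={"question": question})


def delegate(mi: MachineInterpretation) -> List[str]:
    """Delegate: run every explainer registered for the identified type."""
    return [explainer(mi) for explainer in EXPLAINER_REGISTRY.get(mi.explanation_type, [])]


def synthesize(mi: MachineInterpretation, outputs: List[str]) -> str:
    """Synthesis: fill a per-type template with the explainer outputs."""
    body = "; ".join(outputs) if outputs else "no registered explainer produced output"
    return f"[{mi.explanation_type} explanation] {body}."


if __name__ == "__main__":
    question = "Why was this patient flagged as high risk and why not low risk?"
    interpretation = decompose(question)           # stage 1
    explainer_outputs = delegate(interpretation)   # stage 2
    print(synthesize(interpretation, explainer_outputs))  # stage 3
```

The hand-off of a single intermediate object between the stages is meant to mirror the abstract's point that each stage produces outputs the next stage ingests.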
LDR
:06636nam a22004453i 4500
001
1157843
005
20250603111426.5
006
m o d
007
cr|nu||||||||
008
250804s2024 miu||||||m |||||||eng d
020
$a
9798384466598
035
$a
(MiAaPQD)AAI31483369
035
$a
AAI31483369
040
$a
MiAaPQD
$b
eng
$c
MiAaPQD
$e
rda
100
1
$a
Chari, Shruthi,
$e
author.
$0
(orcid)0000-0003-2946-7870
$3
1484121
245
1 3
$a
An Ontology-Enabled Approach for User-Centered and Knowledge-Enabled Explanations of AI Systems /
$c
Shruthi Chari.
264
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2024
300
$a
1 electronic resource (181 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertations Abstracts International, Volume: 86-04, Section: B.
500
$a
Advisors: McGuinness, Deborah L.; Seneviratne, Oshani. Committee members: Hendler, James A.; Chakraborty, Prithwish; Meyer, Pablo.
502
$b
Ph.D.
$c
Rensselaer Polytechnic Institute
$d
2024.
520
$a
Explainability has been a well-studied problem in the Artificial Intelligence (AI) community through several AI ages, ranging from expert systems to the current deep learning era, to enable AI's safe and robust use. Through the ages, the unique nature of AI approaches and their applications have required explainability approaches to evolve as well. However, these multiple iterations of explaining AI decisions have all focused on helping humans better understand and analyze the results and workings of AI systems. In this thesis, we seek to further the user-centered explainability sub-field of AI by addressing several challenges around explanations in the current AI era, which is characterized by the availability of many machine learning (ML) explainers and neuro-symbolic AI approaches. Previous research in user-centered explainability has mainly focused on what needs to be explained and less on how to implement such explanations. Additionally, supporting explanations in a manner that humans can easily interpret remains challenging, owing to the lack of a unified framework for different explanation types and of methods to incorporate domain knowledge from authoritative literature. We address three challenges, or research questions, around user-centered explainability: How can we formally represent explanations with support for interacting AI systems (AI methods in an applied ecosystem), additional data sources, and different explanation dimensions? How useful and feasible are such explanations in clinical settings? Is it feasible to combine explanations from multiple data modalities and AI methods? For the first research question, we design an Explanation Ontology (EO), a general-purpose semantic representation that can represent fifteen literature-derived explanation types via their system-, interface-, and user-related components. We demonstrate the utility of the EO in representing explanations across different use cases, helping system designers answer explanation-related questions via a set of competency questions, and categorizing explanations into the explanation types supported within our ontology. For the second research question, we focus on a key explanation dimension, contextual explanations, and conduct a case study on supporting contextual explanations from an authoritative knowledge source, clinical practice guidelines (CPGs). Here, we design a clinical question-answering (QA) system that addresses CPG questions and provides contextual explanations to help clinicians interpret risk prediction scores and their post hoc explanations in a comorbidity risk prediction setting. For the QA system, we leverage large language models (LLMs) and their clinical variants and implement knowledge augmentations to these models to improve the semantic coherence of the answers. We evaluate both the feasibility and the value of supporting these contextual explanations. For feasibility, we use quantitative metrics to report the performance of the QA system across different model settings and data splits. To evaluate the value of the explanations, we report findings from presenting the results of our QA approach to an expert panel of clinicians. Finally, for the last research question, we design a general-purpose, open-source framework, the Metaexplainer, capable of providing natural-language explanations for a user question from the explainer methods registered to generate explanations of a particular type.
The Metaexplainer is a modular, three-stage (Decompose, Delegate, and Synthesis) framework in which each stage produces intermediate outputs that the next stage ingests. In the Decompose stage, we take user questions as input, identify which explanation type can best address them, and generate actionable machine interpretations; in the Delegate stage, we run the explainer methods registered for the identified explanation type, passing along any filters derived from the question; and finally, in the Synthesis stage, we generate natural-language explanations following the explanation type's template. For the Metaexplainer, we leverage LLMs, the EO, and explainer methods to generate user-centered explanations in response to user questions. We evaluate the Metaexplainer on open-source tabular datasets, but the framework can be applied to other modalities with code adaptations. Overall, through this thesis, we aim to design methods that can support knowledge-enabled explanations across different use cases, accounting for the methods in today's AI era that can generate the supporting components of these explanations and the domain knowledge sources that can enhance them. We demonstrate the efficacy of our approach in two clinical use cases as case studies but design our methods to be applicable outside of healthcare as well. By implementing approaches for knowledge-enabled explainability that leverage the strengths of symbolic and neural AI, we take a step towards user-centered explainability that helps humans interpret and understand AI decisions from different perspectives.
546
$a
English
590
$a
School code: 0185
650
4
$a
Computer science.
$3
573171
650
4
$a
Health sciences.
$3
1179212
653
$a
Explainable AI
653
$a
Healthcare AI
653
$a
Ontologies
653
$a
Knowledge graphs
653
$a
Machine learning
690
$a
0800
690
$a
0566
690
$a
0984
710
2
$a
Rensselaer Polytechnic Institute.
$b
Computer Science.
$3
1190468
720
1
$a
McGuinness, Deborah L.
$e
degree supervisor.
720
1
$a
Seneviratne, Oshani
$e
degree supervisor.
773
0
$t
Dissertations Abstracts International
$g
86-04B.
790
$a
0185
791
$a
Ph.D.
792
$a
2024
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31483369