Zhao, Yufan.
Reinforcement learning design for cancer clinical trials.
Record Type:
Language materials, manuscript : Monograph/item
Title/Author:
Reinforcement learning design for cancer clinical trials / Zhao, Yufan.
Author:
Zhao, Yufan.
Description:
1 online resource (119 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 70-07, Section: B, page: 3862.
Contained By:
Dissertation Abstracts International, 70-07B.
Subject:
Biostatistics.
Online resource:
click for full text (PQDT)
ISBN:
9781109277180
Reinforcement learning design for cancer clinical trials.
LDR
:03371ntm a2200361Ki 4500
001
918715
005
20181030085012.5
006
m o u
007
cr mn||||a|a||
008
190606s2009 xx obm 000 0 eng d
020
$a
9781109277180
035
$a
(MiAaPQ)AAI3366451
035
$a
(MiAaPQ)unc:10396
035
$a
AAI3366451
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Zhao, Yufan.
$3
1193115
245
1 0
$a
Reinforcement learning design for cancer clinical trials.
264
0
$c
2009
300
$a
1 online resource (119 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertation Abstracts International, Volume: 70-07, Section: B, page: 3862.
500
$a
Adviser: Michael R. Kosorok.
502
$a
Thesis (Ph.D.)--The University of North Carolina at Chapel Hill, 2009.
504
$a
Includes bibliographical references.
520
$a
There has been significant recent research activity in developing therapies tailored to each individual. Finding such therapies in treatment settings involving multiple decision times is a major challenge. In this dissertation, we develop reinforcement learning trials for discovering these optimal regimens for life-threatening diseases such as cancer. We use a temporal-difference learning method called Q-learning, which learns an optimal policy from a single training set of finite longitudinal patient trajectories; the Q-function is approximated with time-indexed parameters using support vector regression or extremely randomized trees. Within this framework, we demonstrate that the procedure can extract optimal strategies directly from clinical data without requiring an accurate mathematical model of the system, unlike approaches based on adaptive design. We show that reinforcement learning has tremendous potential in clinical research because it can select actions that improve outcomes by accounting for delayed effects, even when the relationship between actions and outcomes is not fully known.
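The stage-by-stage fitting the abstract describes can be sketched with scikit-learn's extremely randomized trees: fit the last-stage Q-function on observed rewards, then regress earlier stages on the reward plus the best achievable future value, so delayed effects propagate backward. This is a minimal sketch on simulated toy data; the two-stage reward model, sample size, and variable names are illustrative assumptions, not the dissertation's actual trial design.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
n = 1000

# Simulated two-stage trajectories (hypothetical toy model):
# state s1, binary action a1, reward r1, then a stage-2 state
# that depends on a1 -- a delayed effect of the first decision.
s1 = rng.normal(size=n)
a1 = rng.integers(0, 2, size=n)
r1 = 0.5 * a1 * s1
s2 = s1 + a1 + rng.normal(scale=0.1, size=n)
a2 = rng.integers(0, 2, size=n)
r2 = a2 * s2 + rng.normal(scale=0.1, size=n)

# Backward induction, one tree ensemble per decision time.
# Stage 2: regress the observed stage-2 reward on (s2, a2).
q2_model = ExtraTreesRegressor(n_estimators=200, random_state=0)
q2_model.fit(np.column_stack([s2, a2]), r2)

# Stage 1: the target is r1 plus the best achievable stage-2 value,
# so delayed effects of a1 flow into the stage-1 Q-function.
v2 = np.maximum(
    q2_model.predict(np.column_stack([s2, np.zeros(n)])),
    q2_model.predict(np.column_stack([s2, np.ones(n)])),
)
q1_model = ExtraTreesRegressor(n_estimators=200, random_state=0)
q1_model.fit(np.column_stack([s1, a1]), r1 + v2)

def greedy_action(model, state):
    """Pick the action with the larger fitted Q-value."""
    q0, q1 = model.predict([[state, 0.0], [state, 1.0]])
    return int(q1 > q0)
```

Nothing here identifies a system model: the policy comes straight from the fitted regressions, which is the contrast with adaptive-design approaches the abstract draws.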
520
$a
To support these claims, we first illustrate the methodology's practical utility in a virtual, simulated clinical trial. We then apply this general strategy, with significant refinements, to discovering optimal treatments for advanced metastatic stage IIIB/IV non-small cell lung cancer (NSCLC). Beyond the complexity of selecting optimal compounds for first- and second-line treatments based on prognostic factors, a further primary scientific goal is to determine the optimal time to initiate second-line therapy, either immediately after induction therapy or after a delay, so as to yield the longest overall survival time. We show that reinforcement learning not only identifies optimal strategies for both lines of treatment from clinical data, but also reliably selects the best time to initiate second-line therapy while accounting for the heterogeneity of NSCLC across patients.
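The timing question in this second abstract amounts to folding the initiation decision into the action space and maximizing fitted survival over (compound, timing) pairs. Below is a minimal sketch under an entirely hypothetical survival model; the prognostic feature, effect sizes, and tree-ensemble fit are illustrative assumptions, not the NSCLC analysis itself.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
n = 1000

# Hypothetical prognostic feature and candidate second-line decisions:
# compound in {0, 1}; timing 0 = immediate, 1 = delayed initiation.
x = rng.uniform(0, 1, size=n)
compound = rng.integers(0, 2, size=n)
timing = rng.integers(0, 2, size=n)

# Toy survival model: delaying helps good-prognosis patients (x > 0.5),
# immediate initiation helps the rest; compound 1 adds a small benefit.
survival = (12 + 3 * compound
            + 4 * np.where(timing == (x > 0.5), 1.0, -1.0)
            + rng.normal(scale=0.5, size=n))

features = np.column_stack([x, compound, timing])
q = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(features, survival)

def best_decision(prognosis):
    """Return the (compound, timing) pair maximizing predicted survival."""
    options = [(c, t) for c in (0, 1) for t in (0, 1)]
    preds = q.predict([[prognosis, c, t] for c, t in options])
    return options[int(np.argmax(preds))]
```

Because timing enters the Q-function like any other action coordinate, heterogeneity across patients is handled automatically: the argmax is taken per prognostic profile rather than once for the whole population.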
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Biostatistics.
$3
783654
650
4
$a
Artificial intelligence.
$3
559380
650
4
$a
Statistics.
$3
556824
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0308
690
$a
0800
690
$a
0463
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
The University of North Carolina at Chapel Hill.
$b
Biostatistics.
$3
1193116
773
0
$t
Dissertation Abstracts International
$g
70-07B.
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3366451
$z
click for full text (PQDT)