Essays in Quantitative Marketing.
Record Type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Essays in Quantitative Marketing.
Author:
Gui, George Zhida.
Physical Description:
1 online resource (122 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-07, Section: A.
Contained By:
Dissertations Abstracts International, 85-07A.
Subject:
Purchasing.
Electronic Resource:
click for full text (PQDT)
ISBN:
9798381019483
Dissertation Note:
Thesis (Ph.D.)--Stanford University, 2023.
Bibliography Note:
Includes bibliographical references.
Abstract:
The first chapter is joint work with Tilman Drerup that studies the economic consequences of over-delivering versus under-delivering and their implications for firms designing promises. Firms often need to promise a certain level of service quality to attract customers, and a central question is how to design promises that balance the trade-off between customer acquisition and customer retention. For example, most e-commerce platforms need to promise a certain delivery time. Over-promising may attract more customers in the moment, but its impact on future retention depends on consumer inertia, learning, and loss aversion. Empirical analysis of this topic is challenging because realized and promised service quality are often unobserved or lack exogenous variation. To study this problem, we leverage a novel dataset from Instacart that directly observes variation in promised and actual delivery times. We apply a generalized propensity score method to nonparametrically estimate the impact of delivery time on customer retention. Consistent with reference dependence and loss aversion, we document that customers are around 92% more responsive once a delivery becomes late. Our results inform a structural model of learning and reference dependence that illustrates the importance of estimating loss aversion and of distinguishing promise-based reference points from expectation-based reference points: the company would forgo millions of dollars in revenue if it underestimated loss aversion or assumed expectation-based reference points.
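[Editor's illustration] The first chapter's estimation approach, a generalized propensity score (GPS) for a continuous treatment, can be sketched in a few lines. The following is a minimal Hirano-Imbens-style illustration on synthetic data, not the dissertation's implementation: the covariates, the normal treatment model, and the small polynomial outcome basis are all simplifying assumptions made here.

    import numpy as np
    from scipy.stats import norm
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 3))            # customer covariates (hypothetical)
    delay = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=n)  # realized delivery delay
    retain = 1 - 0.3 * np.maximum(delay, 0) + 0.1 * X[:, 0] + rng.normal(scale=0.5, size=n)

    # Step 1: model the treatment given covariates, then evaluate the GPS r(T, X).
    t_model = LinearRegression().fit(X, delay)
    sigma = (delay - t_model.predict(X)).std()
    gps = norm.pdf(delay, loc=t_model.predict(X), scale=sigma)

    # Step 2: flexible outcome model in the treatment and the GPS.
    def basis(t, r):
        return np.column_stack([t, t ** 2, r, r ** 2, t * r])

    y_model = LinearRegression().fit(basis(delay, gps), retain)

    # Step 3: dose-response curve, averaging over the covariate distribution.
    grid = np.linspace(delay.min(), delay.max(), 25)
    curve = [y_model.predict(basis(np.full(n, t),
                                   norm.pdf(t, t_model.predict(X), sigma))).mean()
             for t in grid]

In this stylized version, a richer outcome basis would take the method closer to the nonparametric estimation the abstract describes; the three-step structure (treatment model, GPS evaluation, dose-response averaging) is the part that carries over.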
The second chapter studies how to better leverage data by combining naturally occurring observational data with randomized controlled trials. Randomized controlled trials generate experimental variation that can credibly identify causal effects but often suffer from limited scale, while observational datasets are large but often violate the required identification assumptions. To improve estimation efficiency, I propose a method that leverages imperfect instruments: pretreatment covariates that satisfy the relevance condition but may violate the exclusion restriction. I show that these imperfect instruments can be used to derive moment restrictions that, in combination with the experimental data, improve estimation efficiency. I outline estimators for implementing this strategy and show that my methods can reduce variance by up to 50%, so only half of the experimental sample is required to attain the same statistical precision. I apply my method to a search-listing dataset from Expedia to study the causal effect of search rankings on clicks, and show that the method substantially improves precision.
The third chapter is joint work with Harikesh Nair and Fengshi Niu, in which we study how auction throttling can be used to identify online advertising effects. Causally identifying the effect of digital advertising is challenging because experimentation is expensive and observational data lack random variation. This chapter identifies a pervasive source of naturally occurring, quasi-experimental variation in user-level ad exposure in digital advertising campaigns and shows how ad publishers can use this variation to identify the causal effect of advertising campaigns. The variation arises from auction throttling, a probabilistic method of budget pacing that is widely used to spread an ad campaign's budget over its deployed duration so that the budget is not exceeded or overly concentrated in any one period. The throttling mechanism computes a participation probability from the campaign's budget spending rate and then enters the campaign into a random subset of the available ad auctions each period according to this probability. We show that access to logged participation probabilities enables identification of the local average treatment effect (LATE) of the ad campaign. We present a new estimator that leverages this identification strategy and outline a bootstrap procedure for quantifying its variability. We apply our method to real-world ad-campaign data from an e-commerce advertising platform that uses such throttling for budget pacing, and show that our estimate is statistically different from estimates derived using standard observational methods such as OLS and two-stage least squares. Compared with the implausible 600% conversion lift estimated by naive observational methods, our estimated conversion lift of 110% is far more plausible.
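[Editor's illustration] The second chapter's efficiency argument can be shown in miniature: inside a randomized experiment, a pretreatment covariate that predicts the outcome (here standing in for the imperfect instrument) lets the estimator shed variance without introducing bias. The sketch below uses plain regression adjustment as a simplified stand-in for the chapter's moment-restriction estimator; the data and names are synthetic.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    Z = rng.normal(size=n)                         # imperfect instrument (pretreatment)
    D = rng.integers(0, 2, size=n).astype(float)   # randomized treatment assignment
    Y = 1.0 * D + 2.0 * Z + rng.normal(size=n)     # outcome; true effect is 1.0

    # Unadjusted experimental estimate: difference in means via OLS.
    naive = sm.OLS(Y, sm.add_constant(D)).fit()

    # Adjusted estimate: Z absorbs outcome variance unrelated to treatment,
    # shrinking the standard error while randomization keeps the estimate unbiased.
    adjusted = sm.OLS(Y, sm.add_constant(np.column_stack([D, Z]))).fit()

    print(naive.bse[1], adjusted.bse[1])   # adjusted standard error is markedly smaller

The chapter's contribution goes further than this textbook adjustment, since an imperfect instrument may violate the exclusion restriction; the sketch only shows where the precision gain comes from.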
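[Editor's illustration] The third chapter's identification strategy can likewise be sketched with a stylized simulation: conditional on the logged participation probability, the throttling coin flip is random, so auction participation can serve as an instrument for ad exposure. The manual two-stage least squares below is an illustrative simplification, not the chapter's estimator or its bootstrap; all data are synthetic.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10000
    p = rng.uniform(0.2, 0.9, size=n)          # logged participation probability
    part = rng.binomial(1, p)                  # throttling draw: random given p
    taste = rng.normal(size=n)                 # unobserved confounder of exposure
    exposed = ((part == 1) & (taste + rng.normal(size=n) > 0)).astype(float)
    convert = 0.5 * exposed + 0.8 * taste + rng.normal(size=n)   # true lift = 0.5

    # Manual 2SLS with participation as the instrument, controlling for p.
    first = np.column_stack([np.ones(n), part, p])
    exposed_hat = first @ np.linalg.lstsq(first, exposed, rcond=None)[0]
    second = np.column_stack([np.ones(n), exposed_hat, p])
    late = np.linalg.lstsq(second, convert, rcond=None)[0][1]

    ols = np.linalg.lstsq(np.column_stack([np.ones(n), exposed]), convert,
                          rcond=None)[0][1]
    print(late, ols)   # the IV estimate is near 0.5; naive OLS is biased upward

The upward bias of the naive OLS estimate here mirrors the implausible 600% lift the abstract attributes to naive observational methods, while the instrumented estimate recovers the true effect.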
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2024.
Mode of access: World Wide Web.
Genre/Form:
Electronic books.
LDR
:05715ntm a22003257 4500
001
1145334
005
20240618081821.5
006
m o d
007
cr mn ---uuuuu
008
250605s2023 xx obm 000 0 eng d
020
$a
9798381019483
035
$a
(MiAaPQ)AAI30726836
035
$a
(MiAaPQ)STANFORDcw738sd5782
035
$a
AAI30726836
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Gui, George Zhida.
$3
1470626
245
1 0
$a
Essays in Quantitative Marketing.
264
0
$c
2023
300
$a
1 online resource (122 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertations Abstracts International, Volume: 85-07, Section: A.
500
$a
Advisor: Sahni, Navdeep; Nair, Harikesh.
502
$a
Thesis (Ph.D.)--Stanford University, 2023.
504
$a
Includes bibliographical references
520
$a
The first chapter is joint work with Tilman Drerup that studies the economic consequences of over-delivering versus under-delivering and their implications for firms designing promises. Firms often need to promise a certain level of service quality to attract customers, and a central question is how to design promises that balance the trade-off between customer acquisition and customer retention. For example, most e-commerce platforms need to promise a certain delivery time. Over-promising may attract more customers in the moment, but its impact on future retention depends on consumer inertia, learning, and loss aversion. Empirical analysis of this topic is challenging because realized and promised service quality are often unobserved or lack exogenous variation. To study this problem, we leverage a novel dataset from Instacart that directly observes variation in promised and actual delivery times. We apply a generalized propensity score method to nonparametrically estimate the impact of delivery time on customer retention. Consistent with reference dependence and loss aversion, we document that customers are around 92% more responsive once a delivery becomes late. Our results inform a structural model of learning and reference dependence that illustrates the importance of estimating loss aversion and of distinguishing promise-based reference points from expectation-based reference points: the company would forgo millions of dollars in revenue if it underestimated loss aversion or assumed expectation-based reference points.
The second chapter studies how to better leverage data by combining naturally occurring observational data with randomized controlled trials. Randomized controlled trials generate experimental variation that can credibly identify causal effects but often suffer from limited scale, while observational datasets are large but often violate the required identification assumptions. To improve estimation efficiency, I propose a method that leverages imperfect instruments: pretreatment covariates that satisfy the relevance condition but may violate the exclusion restriction. I show that these imperfect instruments can be used to derive moment restrictions that, in combination with the experimental data, improve estimation efficiency. I outline estimators for implementing this strategy and show that my methods can reduce variance by up to 50%, so only half of the experimental sample is required to attain the same statistical precision. I apply my method to a search-listing dataset from Expedia to study the causal effect of search rankings on clicks, and show that the method substantially improves precision.
The third chapter is joint work with Harikesh Nair and Fengshi Niu, in which we study how auction throttling can be used to identify online advertising effects. Causally identifying the effect of digital advertising is challenging because experimentation is expensive and observational data lack random variation. This chapter identifies a pervasive source of naturally occurring, quasi-experimental variation in user-level ad exposure in digital advertising campaigns and shows how ad publishers can use this variation to identify the causal effect of advertising campaigns. The variation arises from auction throttling, a probabilistic method of budget pacing that is widely used to spread an ad campaign's budget over its deployed duration so that the budget is not exceeded or overly concentrated in any one period. The throttling mechanism computes a participation probability from the campaign's budget spending rate and then enters the campaign into a random subset of the available ad auctions each period according to this probability. We show that access to logged participation probabilities enables identification of the local average treatment effect (LATE) of the ad campaign. We present a new estimator that leverages this identification strategy and outline a bootstrap procedure for quantifying its variability. We apply our method to real-world ad-campaign data from an e-commerce advertising platform that uses such throttling for budget pacing, and show that our estimate is statistically different from estimates derived using standard observational methods such as OLS and two-stage least squares. Compared with the implausible 600% conversion lift estimated by naive observational methods, our estimated conversion lift of 110% is far more plausible.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2024
538
$a
Mode of access: World Wide Web
650
4
$a
Purchasing.
$3
572672
650
4
$a
Statistical significance.
$3
1470627
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0338
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
Stanford University.
$3
1184533
773
0
$t
Dissertations Abstracts International
$g
85-07A.
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30726836
$z
click for full text (PQDT)