Deep Representation of Lyrical Style and Semantics for Music Recommendation.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title/Author: Deep Representation of Lyrical Style and Semantics for Music Recommendation.
Author: Lokesh Kashyap, Abhay.
Physical description: 1 online resource (140 pages)
Notes: Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
Subject: Computer science.
Electronic resource: click for full text (PQDT)
ISBN: 9780355542868
Deep Representation of Lyrical Style and Semantics for Music Recommendation.
LDR   05087ntm a2200361K 4500
001   913118
005   20180614071648.5
006   m o u
007   cr mn||||a|a||
008   190606s2017 xx obm 000 0 eng d
020   $a 9780355542868
035   $a (MiAaPQ)AAI10635105
035   $a (MiAaPQ)umbc:11737
035   $a AAI10635105
040   $a MiAaPQ $b eng $c MiAaPQ
100 1 $a Lokesh Kashyap, Abhay. $3 1185794
245 1 0 $a Deep Representation of Lyrical Style and Semantics for Music Recommendation.
264 0 $c 2017
300   $a 1 online resource (140 pages)
336   $a text $b txt $2 rdacontent
337   $a computer $b c $2 rdamedia
338   $a online resource $b cr $2 rdacarrier
500   $a Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
500   $a Adviser: Tim Finin.
502   $a Thesis (Ph.D.)--University of Maryland, Baltimore County, 2017.
504   $a Includes bibliographical references
520   $a In an increasingly mobile and connected world, digital music consumption has rapidly increased. More recently, faster and cheaper mobile bandwidth has given the average mobile user the potential to access large troves of music through streaming services like Spotify and Google Music that boast catalogs with tens of millions of songs. At this scale, effective music recommendation is an important part of user experience and music discovery. Collaborative filtering (CF), a popular technique used by recommendation systems, suffers from two major issues: popularity bias, which leads to a long tail, and cold-start for new items. In such cases, recommendation systems use content features to supplement similarity measures; for music, these are acoustic features extracted from a song's audio and textual features from its metadata, tags, and lyrics. Research in content-based music similarity has largely focused on the acoustic domain, while lyrical content has received little attention and has been limited to traditional Information Retrieval (IR) techniques. Lyrics contain information about the emotion and meaning conveyed in a song that cannot be easily extracted from the audio. This is especially important for lyrics-centric genres like Rap, which was also the most streamed genre in 2016. The goal of this dissertation is to explore and evaluate different lyrical content features that could be useful for content-, context-, and emotion-based models for music recommendation systems.
520   $a With Rap as a model use case and a custom dataset comprising over 35,000 songs from over 500 Rap artists, this dissertation focuses on featurizing two main aspects of lyrics: their artistic style of composition and their semantic content. For lyrical style, phonetic representations of lyrics are used to match rhymed syllables and extract a suite of high-level rhyme density features of different types. These are augmented with literary features such as the use of figurative language, profanity, and vocabulary strength, along with text statistics. In contrast to these engineered features, Convolutional Neural Networks (CNNs) are used to automatically learn, from raw syllable sequences, the rhyme patterns and other syllable statistics most relevant for a task like artist identification. For semantics, lyrics are represented using both traditional IR techniques like LSA and more recent neural embedding methods like doc2vec. In addition to plain lyrics, their annotations are also included to provide an extra layer of contextual information. Finally, to mitigate long-tail and cold-start problems, these lyrical content features are used to map songs and artists to their corresponding points in the collaborative filtering latent space using neural networks.
520   $a The usefulness of these lyrical style and semantic features is evaluated for three main tasks: artist identification, artist similarity, and song similarity. It is shown that both rhyme and literary features serve as strong indicators for identifying artists from lyrics, while comparable results are achieved with feature learning methods like CNNs. In addition to artist identification, which evaluates lyrical features in a purely content space, lyrical similarity between artists and songs is also compared to a real-world, collaborative filtering based recommendation system from Last.fm, and the results indicate a strong relationship between the way listeners consume music and lyrical content. For lyrical semantics, neural embedding methods significantly outperformed traditional LSA methods, and the inclusion of annotations improved song similarity measures. Finally, this dissertation is accompanied by a web application, Rapalytics.com, dedicated to visualizing all these extracted lyrical features; it has been featured on a number of media outlets, most notably Vox, attn:, and Metro.
533   $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538   $a Mode of access: World Wide Web
650 4 $a Computer science. $3 573171
650 4 $a Artificial intelligence. $3 559380
650 4 $a Music. $3 649088
655 7 $a Electronic books. $2 local $3 554714
690   $a 0984
690   $a 0800
690   $a 0413
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a University of Maryland, Baltimore County. $b Computer Science. $3 1179407
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10635105 $z click for full text (PQDT)
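
The first abstract paragraph (MARC 520) describes supplementing collaborative filtering with content features when a song has little or no listening history (cold-start). The sketch below illustrates only the basic idea, assuming precomputed lyrical content feature vectors; the function names and the toy four-dimensional vectors are hypothetical and not taken from the dissertation.

    # Sketch only: content-based nearest neighbours as a cold-start fallback.
    # The 4-dimensional vectors are made up; in the dissertation they would be
    # lyrical style/semantic features (rhyme densities, doc2vec embeddings, ...).
    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two content feature vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def most_similar(query, catalog, k=3):
        # Rank catalog items by content similarity to a new (cold-start) song.
        scored = sorted(catalog.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
        return scored[:k]

    catalog = {
        "song_a": np.array([0.9, 0.1, 0.3, 0.7]),
        "song_b": np.array([0.2, 0.8, 0.5, 0.1]),
        "song_c": np.array([0.85, 0.15, 0.4, 0.6]),
    }
    new_song = np.array([0.88, 0.12, 0.35, 0.65])   # no listening history yet
    print(most_similar(new_song, catalog, k=2))     # nearest songs by lyrical content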
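
The second abstract paragraph describes extracting rhyme density features from phonetic representations of lyrics. The following is a minimal sketch under strong simplifying assumptions: a hand-made phoneme dictionary stands in for a real pronunciation lexicon, and a "rhyme" is just an identical final stressed vowel plus trailing phonemes, whereas the dissertation's features cover multisyllabic and imperfect rhymes and several density variants.

    # Sketch only: a crude end-rhyme density over a verse, from phonetic
    # transcriptions. PHONES is a toy stand-in for a pronunciation lexicon
    # (e.g. CMUdict); real rhyme features also cover internal, multisyllabic
    # and imperfect rhymes.
    PHONES = {
        "cat":  ["K", "AE1", "T"],
        "hat":  ["HH", "AE1", "T"],
        "dog":  ["D", "AO1", "G"],
        "log":  ["L", "AO1", "G"],
    }

    def rhyming_part(phones):
        # Phonemes from the last stressed vowel onward (the rhymed part).
        for i in range(len(phones) - 1, -1, -1):
            if phones[i][-1] in "12":      # stress digit marks a vowel
                return tuple(phones[i:])
        return tuple(phones)

    def end_rhyme_density(lines):
        # Fraction of line endings whose rhymed part matches another line ending.
        endings = [rhyming_part(PHONES[line.split()[-1]]) for line in lines]
        rhymed = sum(any(e == o for j, o in enumerate(endings) if j != i)
                     for i, e in enumerate(endings))
        return rhymed / len(endings) if endings else 0.0

    verse = ["the cat", "wore a hat", "walked the dog", "over a log"]
    print(end_rhyme_density(verse))        # 1.0 -> every line ending is rhymed

In a fuller pipeline along the lines the abstract describes, such density values would sit alongside literary features and lyric embeddings before being mapped into the collaborative filtering latent space.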