Towards Augmenting and Evaluating Large Language Models.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Towards Augmenting and Evaluating Large Language Models./
Author:
Liu, Tianyang.
Description:
1 online resource (98 pages)
Notes:
Source: Masters Abstracts International, Volume: 85-10.
Contained By:
Masters Abstracts International 85-10.
Subject:
Computer engineering.
Electronic resource:
click for full text (PQDT)
ISBN:
9798381978063
Towards Augmenting and Evaluating Large Language Models.
- 1 online resource (98 pages)
Source: Masters Abstracts International, Volume: 85-10.
Thesis (M.S.)--University of California, San Diego, 2024.
Includes bibliographical references
In the rapidly evolving field of Natural Language Processing (NLP), the advent of Large Language Models (LLMs) marks a significant milestone, setting new standards in language understanding and generation. This thesis focuses on augmenting and evaluating LLMs, introducing ToolkenGPT, a novel method to integrate external tools via tool embeddings to enrich model functionality and adaptability and RepoBench, a benchmark for assessing the proficiency of LLMs in handling repository-level code auto-completion. Additionally, this thesis rethinks approaches towards tabular data reasoning, exploring how LLMs can be better tailored to understand and interpret structured data formats effectively.
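The abstract's "tools as tokens" idea can be illustrated with a minimal sketch (not the thesis's actual implementation; names, sizes, and the use of NumPy in place of a real LM are assumptions): each external tool gets a learned embedding appended to the language model's output head, so predicting an index beyond the word vocabulary is interpreted as a tool call rather than a word.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_tools, hidden = 100, 3, 16

# Word-token output embeddings of a (stand-in) frozen language model.
word_head = rng.standard_normal((vocab_size, hidden))
# One learned "toolken" embedding per external tool.
tool_head = rng.standard_normal((n_tools, hidden))

def next_token_logits(h):
    # A single softmax head over words + tools: rows 0..vocab_size-1 are
    # word tokens, rows vocab_size.. are tools.
    return np.concatenate([word_head, tool_head]) @ h

h = rng.standard_normal(hidden)          # last hidden state for one position
logits = next_token_logits(h)
pred = int(logits.argmax())
is_tool_call = pred >= vocab_size        # True → dispatch tool (pred - vocab_size)
```

In this framing only the tool embeddings need training, which is why the abstract can describe the method as enriching functionality without retraining the underlying model.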
Electronic reproduction.
Ann Arbor, Mich. :
ProQuest,
2024
Mode of access: World Wide Web
ISBN: 9798381978063
Subjects--Topical Terms:
Computer engineering.
Subjects--Index Terms:
Large Language Models
Index Terms--Genre/Form:
Electronic books.
LDR  02003ntm a22003737 4500
001  1146447
005  20240812064618.5
006  m o d
007  cr bn ---uuuuu
008  250605s2024 xx obm 000 0 eng d
020  $a 9798381978063
035  $a (MiAaPQ)AAI30993449
035  $a AAI30993449
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100  1  $a Liu, Tianyang. $3 1471835
245  10 $a Towards Augmenting and Evaluating Large Language Models.
264  0  $c 2024
300  $a 1 online resource (98 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Masters Abstracts International, Volume: 85-10.
500  $a Advisor: McAuley, Julian.
502  $a Thesis (M.S.)--University of California, San Diego, 2024.
504  $a Includes bibliographical references
520  $a In the rapidly evolving field of Natural Language Processing (NLP), the advent of Large Language Models (LLMs) marks a significant milestone, setting new standards in language understanding and generation. This thesis focuses on augmenting and evaluating LLMs, introducing ToolkenGPT, a novel method to integrate external tools via tool embeddings to enrich model functionality and adaptability, and RepoBench, a benchmark for assessing the proficiency of LLMs in handling repository-level code auto-completion. Additionally, this thesis rethinks approaches towards tabular data reasoning, exploring how LLMs can be better tailored to understand and interpret structured data formats effectively.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538  $a Mode of access: World Wide Web
650  4  $a Computer engineering. $3 569006
650  4  $a Computer science. $3 573171
653  $a Large Language Models
653  $a Natural Language Processing
653  $a Tabular data reasoning
653  $a Auto-completion
655  7  $a Electronic books. $2 local $3 554714
690  $a 0984
690  $a 0464
710  2  $a University of California, San Diego. $b Computer Science and Engineering. $3 1189479
710  2  $a ProQuest Information and Learning Co. $3 1178819
773  0  $t Masters Abstracts International $g 85-10.
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30993449 $z click for full text (PQDT)