LLM-Coordination : Developing Coordinating Agents With Large Language Models.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title/Author: LLM-Coordination :
Other title: Developing Coordinating Agents With Large Language Models.
Author: Agashe, Saaket.
Physical description: 1 online resource (70 pages)
Notes: Source: Masters Abstracts International, Volume: 85-07.
Contained by: Masters Abstracts International, 85-07.
Subject: Computer science.
Electronic resource: click for full text (PQDT): http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30818887
ISBN: 9798381421101
MARC record:
LDR  03088ntm a22003857 4500
001  1152634
005  20241209114617.5
006  m o d
007  cr mn ---uuuuu
008  250605s2023 xx obm 000 0 eng d
020  $a 9798381421101
035  $a (MiAaPQ)AAI30818887
035  $a AAI30818887
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Agashe, Saaket. $3 1479686
245 10 $a LLM-Coordination : $b Developing Coordinating Agents With Large Language Models.
264  0 $c 2023
300  $a 1 online resource (70 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Masters Abstracts International, Volume: 85-07.
500  $a Advisor: Wang, Xin Eric.
502  $a Thesis (M.S.)--University of California, Santa Cruz, 2023.
504  $a Includes bibliographical references.
520  $a It is essential for intelligent agents not only to excel in isolated situations but also to coordinate with partners to achieve common goals. Current multi-agent coordination methods rely on Reinforcement Learning (RL) techniques to train agents that can work together effectively. On the other hand, agents based on Large Language Models (LLMs) have shown promising reasoning and planning capabilities in single-agent tasks, at times outperforming RL-based methods. In this study, we build and assess the effectiveness of LLM agents in various coordination scenarios. We introduce the LLM-Coordination Framework to enable LLMs to complete coordination tasks. We evaluate our method on three game environments and organize the evaluation into five aspects: Theory of Mind, Situated Reasoning, Sustained Coordination, Robustness to Partners, and Explicit Assistance. First, the evaluation of Theory of Mind and Situated Reasoning reveals the capability of LLMs to infer the partner's intention and reason about actions accordingly. Then, the evaluation of Sustained Coordination and Robustness to Partners further showcases the ability of LLMs to coordinate with an unknown partner in complex long-horizon tasks, outperforming Reinforcement Learning baselines. Lastly, to test Explicit Assistance, which refers to the ability of an agent to offer help proactively, we introduce two novel layouts into the Overcooked-AI benchmark, examining whether agents can prioritize helping their partners, sacrificing time that could have been spent on their own tasks. This research underscores the promising capabilities of LLMs in sophisticated coordination environments and reveals their potential for building strong real-world agents for multi-agent coordination.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538  $a Mode of access: World Wide Web.
650  4 $a Computer science. $3 573171
650  4 $a Computer engineering. $3 569006
653  $a Large Language Model
653  $a Multi-agent coordination
653  $a Theory of Mind
653  $a Situated Reasoning
655  7 $a Electronic books. $2 local $3 554714
690  $a 0800
690  $a 0984
690  $a 0464
710 2  $a University of California, Santa Cruz. $b Computer Science. $3 1184383
710 2  $a ProQuest Information and Learning Co. $3 1178819
773 0  $t Masters Abstracts International $g 85-07.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30818887 $z click for full text (PQDT)
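The MARC display above is a flat text rendering of the underlying record. As a minimal illustrative sketch (not part of the catalog record, and not a cataloguing library's API; the function name and output shape are assumptions for this example), the following Python shows one way such text-form lines could be split into tag, indicators, and subfields:

def parse_marc_field(line):
    # Split a text-form MARC field such as
    # "245 10 $a LLM-Coordination : $b Developing ..." into its parts.
    # Assumes subfield values contain no literal "$" (true for this record).
    tag, rest = line[:3], line[3:].strip()
    if "$" not in rest:
        # The leader (LDR) and control fields 001-009 carry raw data,
        # with no indicators or subfields.
        return {"tag": tag, "data": rest}
    indicators, _, body = rest.partition("$")
    subfields = [(chunk[0], chunk[1:].strip()) for chunk in body.split("$")]
    return {"tag": tag, "indicators": indicators.strip(), "subfields": subfields}

print(parse_marc_field("245 10 $a LLM-Coordination : $b Developing Coordinating Agents With Large Language Models."))
# {'tag': '245', 'indicators': '10',
#  'subfields': [('a', 'LLM-Coordination :'),
#                ('b', 'Developing Coordinating Agents With Large Language Models.')]}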