Context Aware Human-Robot and Human-Agent Interaction
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Context Aware Human-Robot and Human-Agent Interaction / edited by Nadia Magnenat-Thalmann, Junsong Yuan, Daniel Thalmann, Bum-Jae You.
Other Author:
Magnenat-Thalmann, Nadia.
Description:
XIII, 298 p. 143 illus. : online resource.
Contained By:
Springer Nature eBook
Subject:
User interfaces (Computer systems).
Online resource:
https://doi.org/10.1007/978-3-319-19947-4
ISBN:
9783319199474
Context Aware Human-Robot and Human-Agent Interaction [electronic resource] / edited by Nadia Magnenat-Thalmann, Junsong Yuan, Daniel Thalmann, Bum-Jae You. - 1st ed. 2016. - XIII, 298 p. 143 illus. : online resource. - (Human–Computer Interaction Series, 1571-5035).
Preface -- Introduction -- Part I User Understanding through Multisensory Perception -- Face and Facial Expressions Recognition and Analysis -- Body Movement Analysis and Recognition -- Sound Source Localization and Tracking -- Modelling Conversation -- Part II Facial and Body Modelling Animation -- Personalized Body Modelling -- Parameterized Facial modelling and Animation -- Motion Based Learning -- Responsive Motion Generation -- Shared Object Manipulation -- Part III Modelling Human Behaviours -- Modelling Personality, Mood and Emotions -- Motion Control for Social Behaviours -- Multiple Virtual Humans Interactions -- Multi-Modal and Multi-Party Social Interactions.
This is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour and contemplate the potential for these virtual humans and robots to replace or stand in for their human counterparts, tackling areas such as awareness and reactions to real world stimuli and using the same modalities as humans do: verbal and body gestures, facial expressions and gaze to aid seamless human-computer interaction (HCI). The research presented in this volume is split into three sections: ·User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimuli, addressing issues of facial recognition, body gestures and sound localization. ·Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion. ·Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other. Context Aware Human-Robot and Human-Agent Interaction would be of great use to students, academics and industry specialists in areas like Robotics, HCI, and Computer Graphics.
ISBN: 9783319199474
Standard No.: 10.1007/978-3-319-19947-4 (doi)
Subjects--Topical Terms: User interfaces (Computer systems).
LC Class. No.: QA76.9.U83
Dewey Class. No.: 005.437
LDR   03570nam a22004335i 4500
001   979743
003   DE-He213
005   20200704101241.0
007   cr nn 008mamaa
008   201211s2016 gw | s |||| 0|eng d
020   $a 9783319199474 $9 978-3-319-19947-4
024 7 $a 10.1007/978-3-319-19947-4 $2 doi
035   $a 978-3-319-19947-4
050 4 $a QA76.9.U83
050 4 $a QA76.9.H85
072 7 $a UYZG $2 bicssc
072 7 $a COM070000 $2 bisacsh
072 7 $a UYZG $2 thema
082 04 $a 005.437 $2 23
082 04 $a 4.019 $2 23
245 10 $a Context Aware Human-Robot and Human-Agent Interaction $h [electronic resource] / $c edited by Nadia Magnenat-Thalmann, Junsong Yuan, Daniel Thalmann, Bum-Jae You.
250   $a 1st ed. 2016.
264 1 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2016.
300   $a XIII, 298 p. 143 illus. $b online resource.
336   $a text $b txt $2 rdacontent
337   $a computer $b c $2 rdamedia
338   $a online resource $b cr $2 rdacarrier
347   $a text file $b PDF $2 rda
490 1 $a Human–Computer Interaction Series, $x 1571-5035
505 0 $a
Preface -- Introduction -- Part I User Understanding through Multisensory Perception -- Face and Facial Expressions Recognition and Analysis -- Body Movement Analysis and Recognition -- Sound Source Localization and Tracking -- Modelling Conversation -- Part II Facial and Body Modelling Animation -- Personalized Body Modelling -- Parameterized Facial modelling and Animation -- Motion Based Learning -- Responsive Motion Generation -- Shared Object Manipulation -- Part III Modelling Human Behaviours -- Modelling Personality, Mood and Emotions -- Motion Control for Social Behaviours -- Multiple Virtual Humans Interactions -- Multi-Modal and Multi-Party Social Interactions.
520   $a
This is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour and contemplate the potential for these virtual humans and robots to replace or stand in for their human counterparts, tackling areas such as awareness and reactions to real world stimuli and using the same modalities as humans do: verbal and body gestures, facial expressions and gaze to aid seamless human-computer interaction (HCI). The research presented in this volume is split into three sections: ·User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimuli, addressing issues of facial recognition, body gestures and sound localization. ·Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion. ·Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other. Context Aware Human-Robot and Human-Agent Interaction would be of great use to students, academics and industry specialists in areas like Robotics, HCI, and Computer Graphics.
650 0 $a User interfaces (Computer systems). $3 1253526
650 0 $a Optical data processing. $3 639187
650 0 $a Artificial intelligence. $3 559380
650 14 $a User Interfaces and Human Computer Interaction. $3 669793
650 24 $a Computer Imaging, Vision, Pattern Recognition and Graphics. $3 671334
650 24 $a Artificial Intelligence. $3 646849
700 1 $a Magnenat-Thalmann, Nadia. $4 edt $4 http://id.loc.gov/vocabulary/relators/edt $3 682927
700 1 $a Yuan, Junsong. $4 edt $4 http://id.loc.gov/vocabulary/relators/edt $3 1062516
700 1 $a Thalmann, Daniel. $4 edt $4 http://id.loc.gov/vocabulary/relators/edt $3 679427
700 1 $a You, Bum-Jae. $e editor. $4 edt $4 http://id.loc.gov/vocabulary/relators/edt $3 1272767
710 2 $a SpringerLink (Online service) $3 593884
773 0 $t Springer Nature eBook
776 08 $i Printed edition: $z 9783319199467
776 08 $i Printed edition: $z 9783319199481
776 08 $i Printed edition: $z 9783319373478
830 0 $a Human–Computer Interaction Series, $x 1571-5035 $3 1254242
856 40 $u https://doi.org/10.1007/978-3-319-19947-4
912   $a ZDB-2-SCS
912   $a ZDB-2-SXCS
950   $a Computer Science (SpringerNature-11645)
950   $a Computer Science (R0) (SpringerNature-43710)
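
For readers unfamiliar with MARC tags, the short sketch below (plain Python, no cataloguing libraries assumed; all values copied verbatim from the record above) illustrates how a few of the tagged fields correspond to the labels used in the brief display at the top of this page. It is an illustrative mapping only, not part of the catalogue record.

# Illustrative sketch: map a few MARC fields from this record
# to the labels used in the brief labeled display above.
FIELDS = [
    ("245 $a", "Title",            "Context Aware Human-Robot and Human-Agent Interaction"),
    ("020 $a", "ISBN",             "9783319199474"),
    ("024 $a", "Standard No.",     "10.1007/978-3-319-19947-4"),
    ("050 $a", "LC Class. No.",    "QA76.9.U83"),
    ("082 $a", "Dewey Class. No.", "005.437"),
    ("856 $u", "Online resource",  "https://doi.org/10.1007/978-3-319-19947-4"),
]

for tag, label, value in FIELDS:
    # Print "tag -> label: value" for each mapped field.
    print(f"{tag:<7} -> {label}: {value}")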