The 12th IEEE International Workshop on Robot and Human Interactive Communication, 2003. Proceedings. ROMAN 2003.
In this paper, we describe a tool for developing animated agents with facial expressions for negotiation over a computer network. The tool learns a user's tendency to select the animated agent's facial expressions and generates facial expressions in place of the human. To estimate facial expressions, the tool uses an emotional model constructed as a Bayesian network. Developers can easily build animated agents by using this tool as a component. We also describe the estimation of an opponent's emotional state from observed data by using the Bayesian network.
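The Bayesian-network estimation described above can be illustrated with a minimal two-state example. All priors and conditional probabilities below are assumed values for illustration, not figures from the paper:

```python
# Minimal sketch: estimate an opponent's hidden emotional state from an
# observed facial expression via Bayes' rule. Probabilities are assumptions.

# Prior over hidden emotional states (assumed).
prior = {"pleased": 0.5, "displeased": 0.5}

# P(observed expression | emotional state) -- assumed conditional table.
likelihood = {
    "pleased":    {"smile": 0.7, "frown": 0.1, "neutral": 0.2},
    "displeased": {"smile": 0.1, "frown": 0.6, "neutral": 0.3},
}

def posterior(observation):
    """Return P(state | observation) for each emotional state."""
    unnorm = {s: prior[s] * likelihood[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

print(posterior("smile"))  # "pleased" dominates after seeing a smile
```

A full Bayesian network would chain several such conditional tables (e.g. negotiation state → emotion → expression), but the normalization step is the same.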
CHI '07 Extended Abstracts on Human Factors in Computing Systems, 2007
This paper investigates the manner in which decision-making is influenced by the impressions given by lifelike agents in negotiation situations. These impressions comprise an agent's facial expressions, such as happy and sad, and the history of the relationship with the agent. In this paper, we introduce a negotiation game as one of the basic interactions based on soft game theory. The experimental results reveal that expressions and history significantly influence the receiver's impressions and decision-making. The findings of this study can be beneficial for designing the nonverbal expressions of an animated agent that can negotiate and make a deal with users.
This paper presents two strategies for selecting volitional facial expressions and gaze behaviors of animated agents, in the cases of Online Negotiation and Soft Game Theory. It is difficult to make general models select volitional facial expressions for their use in many kinds of applications. In this paper, in order to develop these strategies, I investigated the effects of facial expressions and gaze behaviors through experiments based on Online Negotiation and Game Theory. I describe strategies by using estimations of reserve prices based on facial expressions in the case of Online Negotiation. Additionally, I deal with Soft Game Theory, which considers players' emotions, and describe strategies for using facial expressions. We can apply these strategies to animated agents and robots as a mechanism to select facial expressions and gaze behaviors.
1999
The ability to express emotions is important for creating believable interactive characters. To simulate emotional expressions in an interactive environment, an intelligent agent needs both an adaptive model for generating believable responses, and a visualization model for mapping emotions into facial expressions. Recent advances in intelligent agents and in facial modeling have produced effective algorithms for these tasks independently. In this paper, we describe a method for integrating these algorithms to create an interactive simulation of an agent that produces appropriate facial expressions in a dynamic environment. Our approach to combining a model of emotions with a facial model represents a first step towards developing the technology of a truly believable interactive agent, which has a wide range of applications from designing intelligent training systems to video games and animation tools.
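The integration step described above — mapping an emotional state onto facial-model parameters — can be sketched as a simple blend of prototype expressions. The parameter names and prototype values are illustrative assumptions, not the paper's facial model:

```python
# Sketch: map emotion intensities to facial-expression parameters by
# blending assumed prototype vectors (brow_raise, mouth_corner, eye_open).

PROTOTYPES = {
    "joy":   (0.2, 0.9, 0.7),
    "anger": (0.8, 0.1, 0.9),
}

def blend(emotions):
    """Blend prototype expressions by emotion intensity.
    Intensities should sum to <= 1; the remainder stays neutral (0)."""
    params = [0.0, 0.0, 0.0]
    for name, intensity in emotions.items():
        for i, v in enumerate(PROTOTYPES[name]):
            params[i] += intensity * v
    return tuple(params)

print(blend({"joy": 0.5, "anger": 0.5}))  # a mixed expression
```

In a dynamic environment the intensities would be updated each frame by the emotion model, so the face tracks the agent's state continuously.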
We present a multi-agent system to generate a society profile of facial expressions. Each user interacts with his or her User Interface Agent, which collects user information and communicates with the other agents in the system. A graphical facial-animation program is used to monitor facial expressions and to allow the user to draw his or her expressions. The information processed in the system consists of parameters related to facial expressions. The parameters are processed to generate a general society profile and local agent beliefs according to the facial expressions generated by users and the grades assigned to those expressions. The society profile is generated by an intelligent agent, called the Clustering Agent, which is able to cluster all the agent information in the system. The Clustering Agent learns some of the parameter weights used in the clustering process from the users' rewards. The system may be a useful tool for multimedia applications, including advertisements, games, and entertainment.
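The reward-learned parameter weights can be sketched as a weighted distance used for cluster assignment, with a reward-driven weight update. The distance form, update rule, and learning rate are illustrative assumptions, not the system's actual algorithm:

```python
# Sketch: cluster facial-parameter vectors under learned per-parameter
# weights; a user reward nudges the weights (all rules are assumptions).

def weighted_distance(x, y, w):
    """Weighted squared distance between two facial-parameter vectors."""
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y))

def assign(x, centroids, w):
    """Index of the nearest centroid under the weighted distance."""
    return min(range(len(centroids)),
               key=lambda i: weighted_distance(x, centroids[i], w))

def reward_update(w, x, centroid, reward, lr=0.1):
    """Nudge weights by a user reward (+1 good, -1 bad): on positive
    reward, parameters where x already matches its centroid gain weight."""
    new_w = [wi + lr * reward * (1.0 - abs(xi - ci))
             for wi, xi, ci in zip(w, x, centroid)]
    return [max(0.0, wi) for wi in new_w]  # keep weights non-negative
```

With weights (1, 0), assignment ignores the second parameter entirely, which is the effect the learned weights are meant to produce for perceptually unimportant parameters.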
Proceedings Joint 9th IFSA World Congress and 20th NAFIPS International Conference (Cat. No. 01TH8569), 2001
This paper introduces a model of interaction between users and animated agents, as well as inter-agent interaction, that supports basic features of affective conversation and social interaction. As essential requirements for animated agents' capability to engage in and exhibit affective and social communication, we motivate reasoning about emotions and emotion expression, personality, and social-role awareness. The main contribution of our paper is the incorporation of 'social filter programs' into the mental models of animated agents. These programs may qualify an agent's expression of its emotional state according to the social context, thereby enhancing the agent's believability as a conversational partner. Our implemented system is entirely web-based and demonstrates animated agents with affective and social behavior in a virtual coffee shop environment.
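The idea of a social filter qualifying emotion expression can be sketched as an attenuation function: the felt emotion passes through the social context before being displayed. The linear suppression rule and its coefficients are assumptions for illustration:

```python
# Toy "social filter": the intensity an agent displays is its felt
# intensity attenuated by social context. Formula is an assumption.

def social_filter(felt_intensity, social_distance, interlocutor_power):
    """Return the displayed intensity in [0, 1]. Greater social distance
    and a more powerful interlocutor both suppress expression."""
    suppression = 0.5 * social_distance + 0.5 * interlocutor_power
    expressed = felt_intensity * (1.0 - suppression)
    return max(0.0, min(1.0, expressed))

print(social_filter(0.9, 0.2, 0.2))  # close peer: most emotion shown
print(social_filter(0.9, 0.8, 0.8))  # distant superior: heavily suppressed
```

The same felt emotion thus yields visibly different faces in different social roles, which is what makes the filtered agent read as socially aware.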
2006
We introduce an emotional agent model that shows how emotions affect an agent's negotiation strategy. By adding emotions, we add the effects of these indirectly related features to the negotiation, features that are ignored in most models. Our new method, the PAD Emotional Negotiation Model, maps a nonemotional agent's strategy during negotiation to the strategy used by an emotional agent. Our evaluations show this model can be used to implement agents with various emotional states that mimic human emotions during negotiation.
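The mapping from a non-emotional strategy to an emotional one can be sketched as PAD coordinates modulating a baseline concession rate. The linear modulation and its coefficients are illustrative assumptions, not the PAD Emotional Negotiation Model's actual mapping:

```python
# Sketch: PAD (pleasure-arousal-dominance) state, each in [-1, 1],
# rescales a baseline concession rate. Coefficients are assumptions.

def concession_rate(base_rate, pleasure, dominance):
    """Emotional concession rate: a displeased or dominant agent
    concedes more slowly; a pleased, submissive agent concedes faster."""
    return max(0.0, base_rate * (1.0 + 0.5 * pleasure - 0.5 * dominance))

print(concession_rate(0.1, 1.0, 0.0))   # pleased agent concedes faster
print(concession_rate(0.1, -1.0, 1.0))  # displeased, dominant: no concession
```

Arousal could analogously scale how quickly the rate itself changes between rounds; it is omitted here to keep the sketch minimal.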
2001
In this paper we introduce a model, called Expression Generator, which generates facial expressions related to pseudo emotions and behavior of chatterbots, which are characters used as front ends of dialogue systems employed in Web sites. The emotions are articulated as sequences of MPEG-4 compliant facial displays, and visualized in a browser-embedded player. We define a set of operators to
Soft Computing for …, 2010
An intelligent virtual agent is an intelligent agent with a digital representation that is endowed with conversational capabilities; this interactive character exhibits human-like behavior and communicates with humans or virtual interlocutors using verbal and nonverbal modalities such as gesture and speech. Building credible characters requires a multidisciplinary approach, because the behaviors that must be reproduced are very complex and must be decoded by their interlocutors. In this paper we introduce a kinesics model for selecting the nonverbal expressions of intelligent virtual agents that are able to express their emotions and purposes using gestures and facial expressions. Our model selects the nonverbal expressions based on causal analysis and on data-mining algorithms applied to perceptual studies in order to identify significant attributes in emotional facial expressions.
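The selection step — from significant attributes to a nonverbal behavior — can be sketched as a rule table keyed on emotion and communicative intent. The attributes, rules, and fallback are assumptions for illustration, not the paper's mined rules:

```python
# Sketch: rule-based selection of a (facial expression, gesture) pair
# from emotion and intent attributes. All entries are assumptions.

RULES = [
    # (emotion, intent) -> (facial expression, gesture)
    (("joy", "persuade"), ("smile", "open_palms")),
    (("anger", "refuse"), ("frown", "crossed_arms")),
]

def select_expression(emotion, intent):
    """Return the behavior matching the attributes, else a neutral pose."""
    for (e, i), behavior in RULES:
        if (e, i) == (emotion, intent):
            return behavior
    return ("neutral", "rest")  # fallback when no rule fires

print(select_expression("joy", "persuade"))
```

In the model described above, such rules would be induced by the data-mining step from perceptual studies rather than written by hand.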
Universal Access in Human-Computer Interaction. Design for All and Accessibility Practice, 2014
We describe a method using animated agents to investigate how humans recognize conversational moods. Conversational moods are usually generated by conversation participants through verbal cues as well as nonverbal cues, such as facial expressions, eye movements, and nods. Identifying specific rules of conversational moods would enable us to construct conversational robots and agents that are not only able to converse naturally, but pleasantly and excitedly. Additionally, these robots and agents would be able to assist us with proper action to generate improved conversational moods in different situations. We propose methods for developing agents that can help improve the quality of our conversations and facilitate greater enjoyment of life.
Annual Conference on Computer Graphics, 1994
We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout, we will use examples from an actual synthesized, fully animated conversation.
Affective Dialogue Systems, 2004
SICE 2003 Annual Conference, 2003
Knowledge-Based Systems, 2000
Proceedings of the IASTED International Conference on Robotics and Applications, 2010
… on Autonomous Agents, 2000
Computers & Graphics, 1996
Proceedings of the 8th international conference on Multimodal interfaces - ICMI '06, 2006