2008, Computers & Graphics
This paper presents Maxine, a powerful animation engine for developing applications with embodied animated agents. The engine, based on open-source tools, allows the management of scenes and virtual characters, and pays special attention to multimodal and emotional interaction with the user. Virtual actors are endowed with facial expressions, lip-synch, and emotional voice, and they can vary their answers depending on their own emotional state and their relationship with the user during conversation. Maxine virtual agents have been used in several applications: a virtual presenter was employed in MaxinePPT, a specific application developed to allow non-programmers to create 3D presentations easily from classical PowerPoint presentations; a virtual character was also used as an interactive interface to communicate with and control a domotic (home-automation) environment; finally, an interactive pedagogical agent was used to simplify and improve the teaching and practice of Computer Graphics subjects.
2000
This paper presents a powerful animation engine, called Maxine, for developing applications with embodied animated agents. The engine, based on open source tools, allows the management of scenes and virtual characters and supports multimodal and emotional interaction with users. The virtual actors are provided with facial expressions, lip-synch, and emotional voice, and can vary their answers depending on their emotional state.
Life-Like Characters …, 2003
International Workshop on Information Presentation and Natural Multimodal Dialogue, 2001
Abstract: This paper illustrates the architecture of a multimodal believable agent, provided with a personality and a social role, that aims to provide information to users while engaging them in natural conversation. To achieve this, we provide our agent with a mind, a dialogue manager, and a body: a) the mind, according to the agent's personality, the events occurring, and the user's dialogue move, triggers an emotion if appropriate; b) the dialogue manager, according to an overall dialogue goal and the corresponding plan to be pursued, selects the ...
2007
This paper presents Maxine, a powerful engine for developing applications with embodied animated agents. The engine, based on open-source libraries, enables multimodal real-time interaction with the user via text, voice, images, and gestures. Maxine virtual agents can establish emotional communication with the user through their facial expressions and voice modulation, adapting their answers to the information gathered by the system: the noise level in the room, the observer's position, the observer's emotional state, etc. Moreover, the user's emotions are captured and taken into account through image analysis. So far, Maxine virtual agents have been used as virtual presenters, and a specific application, MaxinePPT, has been developed to allow non-programmers to create 3D presentations easily from classical PowerPoint presentations. Other applications are also envisaged.
2000
Abstract. Recent years have witnessed the birth of a new paradigm for learning environments: animated pedagogical agents. These lifelike autonomous characters cohabit learning environments with students to create rich, face-to-face learning interactions. This opens up exciting new possibilities; for example, agents can demonstrate complex tasks, employ locomotion and gesture to focus students' attention on the most salient aspect of the task at hand, and convey emotional responses to the tutorial situation.
2004
Embodied conversational agents provide a promising option for presenting information to users. This contribution revisits a number of past and ongoing systems with animated characters that have been developed at DFKI. While in all systems the purpose of using characters is to convey information to the user, there are significant variations in the style of presentation and the assumed conversational setting. The spectrum of systems includes systems that feature a single, TV-style presentation agent, dialogue systems, as well as systems that deploy multiple interactive characters. We also provide a technical view on these systems and sketch the underlying system architecture of each sample system.
2001
With the advent of software agents and assistants, the concept of so-called conversational user interfaces evolved, incorporating natural language interaction, dialogue management, and anthropomorphic representations. Today's challenge is to build a suitable visualization architecture for anthropomorphic conversational user interfaces, and to design believable and appropriate face-to-face interaction imitating human attributes such as emotions. The system is designed as an autonomous agent, enabling easy integration into a variety of scenarios. Architecture, protocols, and graphical output are discussed.
Abstract One of the most important developments in the software industry in recent years has been the merger of the entertainment and educational software markets. An intriguing possibility for edutainment software is the introduction of intelligent life-like characters. By coupling inferential capabilities with well-crafted animated creatures, it becomes possible to create animated pedagogical agents that provide communicative functionalities with strong visual impact.
Applied Artificial Intelligence, 2010
Embodied agents are a powerful paradigm for current and future multimodal interfaces yet require high effort and expertise for their creation, assembly, and animation control. Therefore, open animation engines and high-level control languages are required to make embodied agents accessible to researchers and developers. We present EMBR, a new real-time character animation engine that offers a high degree of animation control via the EMBRScript language. We argue that a new layer of control, the animation layer, is necessary to keep the higher-level control layers (behavioral/functional) consistent and slim while allowing a unified and abstract access to the animation engine (e.g., for the procedural animation of nonverbal behavior). We also introduce new concepts for the high-level control of motion quality (spatial/temporal extent, power, fluidity). Finally, we describe the architecture of the EMBR engine, its integration into larger project contexts, and conclude with a concrete application.
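The motion-quality controls mentioned in the abstract (spatial/temporal extent, power, fluidity) can be pictured as a small parameter record exposed on the animation layer. The following Python sketch is purely illustrative: the class and field names are assumptions, and it does not reproduce actual EMBRScript syntax.

```python
from dataclasses import dataclass

@dataclass
class MotionQuality:
    """Illustrative container for EMBR-style motion-quality controls.

    Field names are hypothetical; EMBRScript's real syntax differs.
    All values are normalized to the range [0.0, 1.0].
    """
    spatial_extent: float = 0.5   # amplitude of the movement in space
    temporal_extent: float = 0.5  # fraction of the allotted time the motion uses
    power: float = 0.5            # perceived force/acceleration of the motion
    fluidity: float = 0.5         # smoothness of transitions between key poses

    def clamped(self) -> "MotionQuality":
        """Return a copy with every parameter clipped into [0, 1]."""
        clip = lambda v: min(1.0, max(0.0, v))
        return MotionQuality(
            clip(self.spatial_extent),
            clip(self.temporal_extent),
            clip(self.power),
            clip(self.fluidity),
        )

# A behavioral layer could request an emphatic beat gesture like this;
# out-of-range requests are clipped rather than rejected:
emphatic = MotionQuality(spatial_extent=0.9, power=1.4, fluidity=0.3).clamped()
```

Keeping such quality parameters on a dedicated animation layer, as the abstract argues, lets behavioral/functional layers stay slim: they only pick values, while the engine maps them onto concrete joint trajectories.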
2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009
Embodied agents are a powerful paradigm for current and future multimodal interfaces, yet require high effort and expertise for their creation, assembly, and animation control. Therefore, open animation engines and high-level control languages are required to make embodied agents accessible to researchers and developers. In this paper, we present EMBR, a new real-time character animation engine that offers a high degree of animation control via the EMBRScript language. We argue that a new layer of control, the animation layer, is necessary to keep the higher-level control layers (behavioral/functional) consistent and slim, while allowing a unified and abstract access to the animation engine, e.g. for the procedural animation of nonverbal behavior. We describe the EMBRScript animation layer and the architecture of the EMBR engine and its integration into larger project contexts, and conclude with a concrete application.
2000
This paper describes the potential for instructing animated agents through collaborative dialog in a simulated environment. The abilities to share activity with a human instructor and employ both verbal and nonverbal modes of communication allow the agent to be taught in a manner natural for the instructor. A system that uses such shared activity and instruction to acquire the knowledge needed by an agent is described. This system is implemented in STEVE, an embodied agent that teaches physical tasks to human students.
2002
Executive summary: This document describes the prototype of an animated agent for application 1. In particular, it describes the different phases involved in the computation of the final animation of the agents. The document discusses the method we use to resolve conflicts arising when several facial expressions are combined. We also present our lip and coarticulation model.
The increasing use of animation for educational purposes, and especially of virtual characters known as animated pedagogical agents (APAs), has raised questions about the effectiveness of this tool during the learning process (Frechette & Moreno, 2010). For that reason, several studies have focused on the psychological and social aspects involved in the process and their relation to the use of APAs. Character designers and animators face the responsibility of creating agents that successfully fulfil a pedagogical purpose in diverse educational virtual environments. This research reviews a number of studies, from the perspective of psychoanalysis, that highlight the main characteristics of the agents, their roles, and their socio-psychological impact. It then describes a general set of design considerations that may be useful to character designers when creating an animated pedagogical agent, building a bridge between design concerns and the knowledge reviewed. In conclusion, to succeed in the design of an APA, designers should consider several psychological and social factors that are projected in the APA's graphic design and may affect the learner's cognitive process when learning with an animated pedagogical agent in virtual educational environments. Keywords: Animated Pedagogical Agent (APA), virtual educational environment, character design, socio-psychological factors, e-learning.
… of the 2nd international conference on …, 1997
Lecture Notes in Computer Science, 2007
An interactive system in which the user can program animated agents visually is introduced: the Visual Agent Programming (VAP) software provides a GUI for programming life-like agents. VAP is superior to currently available systems for programming such agents in that it has a richer feature set, including automatic compilation, generation, commenting, and formatting of code, among others. Moreover, a rich error-feedback system not only helps the expert programmer but also makes the system particularly accessible to the novice user. The VAP software package is freely available online.
2002
Artificial Intelligence is an important area in the field of computing applied to education in terms of technological implementation. This paper describes IVTE, an Intelligent Virtual Teaching Environment implemented with multi-agent technology and including pedagogical features, represented by Guilly. Guilly is an Animated Pedagogical Agent that acts based on a student model and teaching strategies. Keywords: Computer Science, Education, Learning Environment, Cognition, Games.
2003
This work proposes an animated pedagogical agent whose role is to provide emotional support to the student: motivating and encouraging him, making him believe in his own abilities, and promoting a positive mood in him, which fosters learning. This careful support, the agent's affective tactics, is expressed through the emotional behaviour and encouragement messages of the lifelike character. Given the human social tendency to anthropomorphise software, we believe that a software agent can accomplish this affective role. In order to choose the adequate affective tactics, the agent should also know the student's emotions. The proposed agent recognises the student's emotions (joy/distress, satisfaction/disappointment, anger/gratitude, and shame) from the student's observable behaviour, i.e. his actions in the interface of the educational system. The inference of emotions is psychologically grounded in the cognitive theory of emotions; more specifically, we use the OCC model, which is based on the cognitive approach to emotion and can be computationally implemented. Due to the dynamic nature of the student's affective information, we adopted a BDI approach to implement the affective user model and the affective diagnosis. We also exploit the reasoning capacity of the BDI approach so that the agent can deduce the student's appraisal, which allows it to infer the student's emotions. As a case study, the proposed agent is implemented as the Mediating Agent of MACES: an educational collaborative environment modelled as a multi-agent system and pedagogically based on the sociocultural theory of Vygotsky.
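The core idea of OCC-style appraisal described above — mapping an appraised event to one of a fixed set of emotion pairs — can be caricatured in a few lines. The Python sketch below is a toy illustration: the event categories and rules are invented for this example and are not taken from MACES or the full OCC model.

```python
def appraise(event: str, desirable: bool, caused_by_student: bool) -> str:
    """Toy OCC-style appraisal: map an appraised event to one of the
    emotion labels the agent recognises (joy/distress, satisfaction/
    disappointment, gratitude/anger, shame). Rules are illustrative only.
    """
    if event == "goal_outcome":        # a pursued goal succeeded or failed
        return "satisfaction" if desirable else "disappointment"
    if event == "other_agent_action":  # another agent's action affected the student
        return "gratitude" if desirable else "anger"
    if event == "own_action" and not desirable and caused_by_student:
        return "shame"                 # self-blame for an undesirable own act
    return "joy" if desirable else "distress"  # plain well-being emotions

# e.g. a failed exercise the student had set as a goal:
emotion = appraise("goal_outcome", desirable=False, caused_by_student=True)
# emotion == "disappointment"
```

In the actual system this appraisal would be driven by the BDI model's beliefs about the student's goals rather than by boolean flags passed in by hand.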
AI Magazine, 2001
Journal of the Brazilian Computer Society, 2009
This article introduces an open-source module responsible for the presentation of the verbal (speech) and corporal (animation) behaviors of animated pedagogical agents. The module can be inserted into any learning environment regardless of application domain and platform, and runs under different operating systems. It was implemented in Java as a reactive agent (named Body agent) that communicates with the agent's Mind through the FIPA-ACL language; therefore, it may be inserted into any intelligent learning environment that is also capable of communicating via FIPA-ACL. Persistence of information is ensured by XML files, increasing the agent's portability. The agent also includes a mechanism for automatically updating its behaviors and characters once new ones become available on the server. A simulation environment was conceived to test the proposed agent.
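The Mind–Body exchange over FIPA-ACL can be illustrated with the standard string encoding of an ACL message. In the Python sketch below, the performative (`inform`) and parameter keywords (`:sender`, `:receiver`, `:content`, `:language`) follow the FIPA ACL specification, but the agent names and the content expression are made up for the example.

```python
def acl_inform(sender: str, receiver: str, content: str) -> str:
    """Render a FIPA-ACL 'inform' message in its string encoding.

    The keywords (:sender, :receiver, :content, :language) come from
    the FIPA ACL spec; the values passed here are illustrative only.
    This simplified rendering omits the agent-identifier wrapper that
    full FIPA string encoding uses around agent names.
    """
    return (
        "(inform\n"
        f"  :sender {sender}\n"
        f"  :receiver {receiver}\n"
        f'  :content "{content}"\n'
        "  :language fipa-sl)"
    )

# A hypothetical Mind asking the Body agent to perform a greeting behavior:
msg = acl_inform("mind@platform", "body@platform", "(perform-behavior greet)")
```

Because the coupling between Mind and Body is just this message language, either side can be swapped out — which is exactly the portability argument the abstract makes.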