1996, Proceedings of the workshop on Advanced visual interfaces - AVI '96
Animated agents, whether based on real video, cartoon-style drawings, or model-based 3D graphics, are likely to become integral parts of future user interfaces. We present the PPP Persona, a tool that can be used for showing, explaining, and verbally commenting on textual and graphical output in a window-based interface. The realization of the module follows the client/server paradigm, i.e., client applications can send requests for executing presentation tasks to the server. However, to achieve lively and appealing behaviour of the animated agent, the server autonomously performs some actions, e.g., to span pauses or to react immediately to user interactions.
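The client/server split described in this abstract can be sketched as follows. This is a minimal illustration, not the PPP Persona implementation: the class, method, and behaviour names are invented, and only the idea is preserved — clients enqueue presentation tasks, and when no task is pending the server autonomously performs a filler action.

```python
import queue

class PersonaServer:
    """Hypothetical sketch of the client/server paradigm: clients enqueue
    presentation tasks; when none is pending, the server autonomously
    inserts an idle behaviour to stay lively (all names invented)."""

    def __init__(self):
        self.tasks = queue.Queue()
        self.log = []

    def request(self, task):
        # A client application sends a presentation task to the server.
        self.tasks.put(task)

    def step(self):
        # One server cycle: execute a pending task, or act autonomously.
        try:
            task = self.tasks.get_nowait()
            self.log.append(f"perform:{task}")
        except queue.Empty:
            # No client request pending: autonomous pause-spanning action.
            self.log.append("idle:breathing")

server = PersonaServer()
server.request("point-at:window-3")
server.step()   # executes the client's presentation request
server.step()   # queue empty -> autonomous idle behaviour
print(server.log)
```

The point of the design is that liveliness does not depend on clients: the agent's behaviour between requests is the server's own responsibility.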
… of the 2nd international conference on …, 1997
2001
With the advent of software agents and assistants, the concept of so-called conversational user interfaces evolved, incorporating natural language interaction, dialogue management, and anthropomorphic representations. Today's challenge is to build a suitable visualization architecture for anthropomorphic conversational user interfaces, and to design believable and appropriate face-to-face interaction imitating human attributes such as emotions. The system is designed as an autonomous agent enabling easy integration into a variety of scenarios. Architecture, protocols, and graphical output are discussed.
AI Magazine, 2001
International Workshop on Information Presentation and Natural Multimodal Dialogue, 2001
Abstract: This paper illustrates the architecture of a multimodal believable agent, provided with a personality and a social role, that aims at providing information to users by engaging them in natural conversation. To achieve this aim, we provide our agent with a mind, a dialogue manager, and a body: a) the mind, according to the agent's personality, the events occurring, and the user's dialogue move, triggers an emotion if appropriate; b) the dialogue manager, according to an overall dialogue goal and the corresponding plan to be pursued, selects the ...
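The mind/dialogue-manager/body split in this abstract can be illustrated with a toy sketch. The personality trait, emotion labels, and move names below are all invented for illustration; only the architectural flow — mind triggers an emotion, dialogue manager selects the next move from a plan, body would render the result — follows the abstract.

```python
# Hypothetical sketch of the mind / dialogue-manager / body architecture;
# traits, emotion rules, and move names are invented for illustration.

PERSONALITY = {"agreeableness": 0.8}

def mind(event, user_move):
    """Trigger an emotion, if appropriate, from the current event,
    the user's dialogue move, and the agent's personality."""
    if user_move == "complaint" and PERSONALITY["agreeableness"] > 0.5:
        return "sorry-for"
    if event == "goal-achieved":
        return "joy"
    return None  # no emotion triggered

def dialogue_manager(plan, emotion):
    """Select the next dialogue move from the overall plan,
    annotated with the emotion the mind produced."""
    move = plan.pop(0)
    return {"move": move, "emotion": emotion}

plan = ["apologise", "offer-help"]
emotion = mind(event=None, user_move="complaint")
turn = dialogue_manager(plan, emotion)
# the body component would then render `turn` as speech,
# facial expression, and gesture
print(turn)
```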
Lecture Notes in Computer Science, 2007
An interactive system in which the user can program animated agents visually is introduced: the Visual Agent Programming (VAP) software provides a GUI for programming life-like agents. VAP is superior to currently available systems for programming such agents in that it has a richer set of features, including automatic compilation, generation, commenting, and formatting of code, amongst others. Moreover, a rich error feedback system not only helps the expert programmer but also makes the system particularly accessible to the novice user. The VAP software package is available freely online.
Computers & Graphics, 2008
This paper presents a powerful animation engine for developing applications with embodied animated agents called Maxine. The engine, based on open source tools, allows management of scenes and virtual characters, and pays special attention to multimodal and emotional interaction with the user. Virtual actors are endowed with facial expressions, lip-synch, emotional voice, and they can vary their answers depending on their own emotional state and the relationship with the user during conversation. Maxine virtual agents have been used in several applications: a virtual presenter was employed in MaxinePPT, a specific application developed to allow non-programmers to create 3D presentations easily using classical PowerPoint presentations; a virtual character was also used as an interactive interface to communicate with and control a domotic environment; finally, an interactive pedagogical agent was used to simplify and improve the teaching and practice of Computer Graphics subjects.
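The abstract notes that Maxine's virtual actors "can vary their answers depending on their own emotional state." A minimal sketch of that idea, with entirely invented mood labels and answer variants (this is not Maxine's API), might look like:

```python
# Toy sketch of an agent whose answers vary with its emotional state,
# in the spirit of the Maxine engine; all names and values are invented.

ANSWERS = {
    "greeting": {
        "happy": "Hello! Great to see you!",
        "sad": "Oh... hello.",
    },
}

class Agent:
    def __init__(self):
        self.mood = "happy"

    def react(self, user_tone):
        # The conversation shifts the agent's emotional state.
        if user_tone == "hostile":
            self.mood = "sad"

    def answer(self, intent):
        # The same intent yields a different answer per emotional state.
        return ANSWERS[intent][self.mood]

a = Agent()
print(a.answer("greeting"))  # happy variant
a.react("hostile")
print(a.answer("greeting"))  # sad variant
```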
2001
Lifelike characters, or animated agents, provide a promising option for interface development as they allow us to draw on communication and interaction styles with which humans are already familiar. In this contribution, we revisit some of our past and ongoing projects in order to motivate an evolution of character-based presentation systems. This evolution starts from systems in which a character presents information content in the style of a TV presenter. It moves on with the introduction of presentation teams that convey information to the user by performing role plays. In order to explore new forms of active user involvement during a presentation, the next step may lead to systems that convey information in the style of interactive performances. From a technical point of view, this evolution is mirrored in different approaches to determining the behavior of the employed characters. By means of concrete applications, we argue that a central planning component for automated agent scripting is not always a good choice, especially not in the case of interactive performances, where the user may take on an active role as well.
Applied Artificial Intelligence, 1999
Life-Like Characters …, 2003
Proceedings of the IEEE, 2003
This paper presents a vision of the near future in which computer interaction is characterized by natural face-to-face conversations with lifelike characters that speak, emote, and gesture. These animated agents will converse with people much like people converse effectively with assistants in a variety of focused applications. Despite the research advances required to realize this vision, and the lack of strong experimental evidence that animated agents improve human-computer interaction, we argue that initial prototypes of perceptive animated interfaces can be developed today, and that the resulting systems will provide more effective and engaging communication experiences than existing systems. In support of this hypothesis, we first describe initial experiments using an animated character to teach speech and language skills to children with hearing problems, and classroom subjects and social skills to children with autistic spectrum disorder. We then show how existing dialogue system architectures can be transformed into perceptive animated interfaces by integrating computer vision and animation capabilities. We conclude by describing the Colorado Literacy Tutor, a computer-based literacy program that provides an ideal testbed for research and development of perceptive animated interfaces, and consider next steps required to realize the vision.