2003, Proceedings of the second international joint conference on Autonomous agents and multiagent systems - AAMAS '03
In this paper, we introduce a toolkit called SceneMaker for authoring scenes for adaptive, interactive performances. These performances are based on automatically generated and pre-scripted scenes which can be authored with SceneMaker in a two-step approach: in the first step, the scene flow is defined using cascaded finite state machines; in the second step, the content of each scene is provided. This can be done either manually, using a simple scripting language, or by integrating scenes which are automatically generated at runtime based on a domain and dialogue model. Both scene types can be interwoven in our plan-based, distributed platform. The system provides a context memory with access functions that the author can use to make scenes user-adaptive. Using CrossTalk as the target application, we describe our models and languages and illustrate the authoring process. CrossTalk is an interactive installation with animated presentation agents which "live" beyond the actual presentation and systematically step out of character within the presentation, both to enhance the illusion of life. The context memory enables the system to adapt to user feedback and generates data for later evaluation of user/system behavior. The SceneMaker toolkit should enable the non-expert to compose adaptive, interactive performances in a rapid prototyping approach.
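As a rough illustration of this two-step approach, the Python sketch below (all names hypothetical; the abstract does not give SceneMaker's actual syntax) models a scene flow as a finite state machine whose states play pre-scripted or generated scenes and whose transitions branch on a shared context:

```python
# Hypothetical sketch of a finite-state scene flow: states select scenes,
# transitions branch on a shared context (e.g. user feedback). Sub-flows
# can be cascaded by wrapping a whole SceneFlow as a node's play function.
from typing import Callable, Dict, Optional

class SceneNode:
    """A state that plays one scene, then picks a successor state."""
    def __init__(self, play: Callable[[dict], None],
                 next_state: Callable[[dict], Optional[str]]):
        self.play = play              # pre-scripted or generated at runtime
        self.next_state = next_state  # transition function over the context

class SceneFlow:
    def __init__(self, nodes: Dict[str, SceneNode], start: str):
        self.nodes, self.start = nodes, start

    def run(self, context: dict) -> None:
        state = self.start
        while state is not None:
            node = self.nodes[state]
            node.play(context)                 # perform the scene
            state = node.next_state(context)   # choose the next scene

# Usage: a greeting that adapts to whether the user has visited before.
def greet(ctx):
    print("Hostess: Welcome back!" if ctx.get("returning") else "Hostess: Hello!")

def sales(ctx):
    print("Tina and Ritchie perform a generated car-sales dialogue.")

flow = SceneFlow({"greet": SceneNode(greet, lambda ctx: "sales"),
                  "sales": SceneNode(sales, lambda ctx: None)}, start="greet")
flow.run({"returning": True})
```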
AI Magazine, 2001
Lecture Notes in Computer Science, 2002
In this contribution, we argue in favor of a shift from applications with single presentation agents towards flexible performances given by a team of characters as a new presentation style. We will illustrate our approach by means of two subsequent versions of a test-bed called the "Inhabited Market Place" (IMP1 and IMP2). In IMP1, the attribute "flexible" refers to the system's ability to adapt a presentation to the needs and preferences of a particular user. In IMP2, flexibility additionally refers to the user's option of actively participating in a computer-based performance and influencing the behavior of the involved characters at runtime. While a plan-based approach has proven appropriate in both versions to automatically control the behavior of the agents, IMP2 calls for highly reactive and distributed behavior planning.
2001
Lifelike characters, or animated agents, provide a promising option for interface development as they allow us to draw on communication and interaction styles with which humans are already familiar. In this contribution, we revisit some of our past and ongoing projects in order to motivate an evolution of character-based presentation systems. This evolution starts from systems in which a character presents information content in the style of a TV presenter. It moves on with the introduction of presentation teams that convey information to the user by performing role plays. In order to explore new forms of active user involvement during a presentation, the next step may lead to systems that convey information in the style of interactive performances. From a technical point of view, this evolution is mirrored in different approaches to determine the behavior of the employed characters. By means of concrete applications, we argue that a central planning component for automated agent scripting is not always a good choice, especially not in the case of interactive performances where the user may take on an active role as well.
… of the second international conference on …, 1998
Proc. of COSIGN, 2002
In this paper, we describe CrossTalk, an interactive installation in which the virtual fair hostess Cyberella presents and explains the idea of using simulated dialogues among animated agents to present product information. In particular, Cyberella introduces two further virtual agents, Tina and Ritchie, who engage in a car-sales dialogue. Cyberella on the one hand, and Tina and Ritchie on the other, live on two physically separated screens which are spatially arranged so as to form a triangle with the user. The name "CrossTalk" underlines the fact that different animated agents have cross-screen conversations among themselves. From the point of view of information presentation, CrossTalk explores a meta-theater metaphor that lets agents live beyond the actual presentation, like professional actors, enriching the interactive experience of the user with unexpected intermezzi and rehearsal periods. CrossTalk is designed as an interactive installation for public spaces, such as an exhibition, a trade fair, or a kiosk space.
2003
We first introduce CrossTalk, an interactive installation with animated presentation characters that has been designed for public spaces, such as an exhibition or a trade fair. The installation relies on what we call a meta-theater metaphor. Much like professional actors, characters in CrossTalk are not always on duty. Rather, they can step out of their roles and amuse the user with unexpected intermezzi and rehearsal periods. From the point of view of interactive storytelling, CrossTalk comprises at least two interesting aspects. Firstly, it smoothly combines manual scripting of character behavior with an approach for automated script generation. Secondly, the system maintains a context memory that enables the characters to adapt to user feedback and to reflect on previous encounters with users. The context memory is our first step towards characters that develop their own history based on their interaction experiences with users. In this paper we briefly describe our approach to the authoring of adaptive, interactive performances, and sketch our ideas for enriching conversations among the characters by having them reflect on their own experiences.
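As a minimal sketch of the context-memory idea (hypothetical Python, not the CrossTalk implementation), such a memory could log interaction events per encounter and expose simple access functions for the scene author to branch on:

```python
# Hypothetical context memory: characters adapt to user feedback and
# recall previous encounters. The API below is invented for illustration.
import time

class ContextMemory:
    def __init__(self):
        self.events = []          # chronological interaction log
        self.encounters = 0       # how often a user session was started

    def begin_encounter(self) -> None:
        self.encounters += 1

    def record(self, kind: str, value) -> None:
        self.events.append((time.time(), kind, value))

    def last(self, kind: str):
        """Access function an author can query inside a scene."""
        for _, k, v in reversed(self.events):
            if k == kind:
                return v
        return None

memory = ContextMemory()
memory.begin_encounter()
memory.record("user_feedback", "bored")
# A scene could branch on stored feedback, e.g. skip a lengthy intermezzo:
if memory.last("user_feedback") == "bored":
    print("Characters cut the rehearsal short and restart the show.")
```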
1998
The tools for authoring multimedia presentations include sophisticated interactive tools like Director and ToolBook. However, making presentations truly interactive requires programming in "scripting languages," which have generally been difficult for non-programmers to learn. "Interactive behaviors" allow users to click on, move, or otherwise interact with objects on the screen, as opposed to just watching the presentation like a TV show.
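To make the notion of an "interactive behavior" concrete, here is a minimal Python/tkinter example, unrelated to Director or ToolBook, in which clicking an on-screen object triggers a reaction:

```python
# A minimal "interactive behavior": an on-click handler bound to a screen
# object, so the viewer acts on the presentation instead of just watching.
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=200, height=200)
canvas.pack()
box = canvas.create_rectangle(60, 60, 140, 140, fill="steelblue")

def on_click(event):
    canvas.itemconfig(box, fill="tomato")   # react to the user's click

canvas.tag_bind(box, "<Button-1>", on_click)
root.mainloop()
```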
… of the 2nd international conference on …, 1997
2009
In this submission we present a first step toward an author-centric interface to believable agents. Based on a number of approaches for the description of 3D content, we developed CharanisML, the Character Animation System Meta Language. It is applicable for controlling both 2D and 3D avatars. To demonstrate this, we implemented two different clients, in 2D and 3D, that are able to interpret CharanisML. These clients can also be adapted as animation engines for interactive digital storytelling engines like Scenejo, which are used in the fields of entertainment as well as game-based learning. Using CharanisML, an author can control characters independently of both the storytelling engine and the two- or three-dimensional representation.
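Since the abstract does not reproduce CharanisML's syntax, the following Python sketch only illustrates the general idea of a renderer-independent meta language: an invented XML command stream is interpreted and dispatched to an interchangeable 2D or 3D client:

```python
# Sketch of a client interpreting a meta-language command stream; the
# element names below are invented and are not actual CharanisML syntax.
import xml.etree.ElementTree as ET

COMMAND = """<act character="Anna">
  <gesture name="wave"/>
  <speak>Hello, I am rendered in 2D or 3D alike.</speak>
</act>"""

def interpret(xml_text: str, renderer) -> None:
    """Dispatch abstract commands to a concrete 2D or 3D renderer."""
    act = ET.fromstring(xml_text)
    who = act.get("character")
    for cmd in act:
        if cmd.tag == "gesture":
            renderer.gesture(who, cmd.get("name"))
        elif cmd.tag == "speak":
            renderer.speak(who, cmd.text)

class ConsoleRenderer:            # stand-in for a 2D or 3D client
    def gesture(self, who, name): print(f"{who} performs gesture '{name}'")
    def speak(self, who, text):   print(f"{who} says: {text}")

interpret(COMMAND, ConsoleRenderer())
```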
Virtual Reality, 2003
CrossTalk is a self-explaining virtual character exhibition for public spaces. This paper presents the CrossTalk system, including its authoring tool SceneMaker and the CarSales exhibit. CrossTalk extends the commonplace human-to-screen interaction to an interaction triangle. The user faces two separated screens inhabited by virtual characters and interacts through a frontal touch screen. One screen features the exhibition's hostess, an agent who explains exchangeable exhibits located on the opposing screen. The current exhibit is CarSales, a demonstration of automatically generated dialogue performed by virtual actors. The physical presence of the characters is established through the separation of screens and intensified by inter-character conversations across screens, tying hostess and exhibit together. CrossTalk utilizes a combination of automatically generated and pre-scripted scenes, and a context memory to adapt to the user and the environment. CrossTalk's authoring tool SceneMaker, in a strict separation of narrative structure and content, provides non-experts with a screenplay-like language to create installations for staging exhibitions.
The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling, with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include the automated simulation of productions.
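A rough Python sketch of such a modular pipeline (module names and outputs are invented, not the proposed architecture itself): each component enriches a scene description with one modality, from affect tagging to staging:

```python
# Invented pipeline sketch: a screenplay fragment passes through staged
# components that each contribute one modality to the scene description.
def parse_script(text):       return {"text": text}
def tag_affect(scene):        scene["emotion"] = "tense"; return scene
def plan_staging(scene):      scene["lighting"] = "low-key"; return scene
def plan_performance(scene):  scene["posture"] = "hunched"; return scene

PIPELINE = [parse_script, tag_affect, plan_staging, plan_performance]

scene = "INT. WAREHOUSE - NIGHT. She hesitates at the door."
for module in PIPELINE:
    scene = module(scene)
print(scene)   # a multimodal scene description ready for animation
```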
Proc. of HCI-Italy, 2003
This paper presents a character-centered architecture for the interactive presentation of information. The architecture is based on scripts that result from the off-line composition of pre-built multimodal script units and are acted out by animated characters in a reactive fashion. Scripts are stored in a script library that allows continuous switching between scripts in reaction to user input. The system's interactional and dramatic goals are explicitly represented and drive the script retrieval process.
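A minimal sketch of goal-driven script retrieval, assuming a simple overlap score between a script's goals and the currently active ones (the scoring scheme and field names are invented):

```python
# Hypothetical goal-driven retrieval from a script library: the script
# serving the most currently active goals is selected, so the system can
# switch scripts continuously as user input shifts the active goals.
from dataclasses import dataclass

@dataclass
class Script:
    name: str
    goals: set      # interactional/dramatic goals this script serves

LIBRARY = [
    Script("welcome_user", {"establish_contact"}),
    Script("explain_feature", {"inform", "keep_attention"}),
    Script("comic_interlude", {"keep_attention", "raise_tension"}),
]

def retrieve(active_goals: set) -> Script:
    """Pick the script with the largest goal overlap."""
    return max(LIBRARY, key=lambda s: len(s.goals & active_goals))

# The user looks away: "keep_attention" joins the active goals.
print(retrieve({"keep_attention", "inform"}).name)   # -> explain_feature
```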
Computers & Graphics, 2008
This paper presents a powerful animation engine for developing applications with embodied animated agents called Maxine. The engine, based on open source tools, allows management of scenes and virtual characters, and pays special attention to multimodal and emotional interaction with the user. Virtual actors are endowed with facial expressions, lip-synch, emotional voice, and they can vary their answers depending on their own emotional state and the relationship with the user during conversation. Maxine virtual agents have been used in several applications: a virtual presenter was employed in MaxinePPT, a specific application developed to allow non-programmers to create 3D presentations easily using classical PowerPoint presentations; a virtual character was also used as an interactive interface to communicate with and control a domotic environment; finally, an interactive pedagogical agent was used to simplify and improve the teaching and practice of Computer Graphics subjects.
2008
We present first results for the development of a general interface to believable agents. The central element of our approach is CharanisML, a character animation system meta language, which draws on a number of previous proposals for the description of interactive 3D content. To demonstrate the general character of this interface, we implemented two different clients, in 2D and 3D, that are able to interpret CharanisML. These clients may be adapted as animation engines for interactive digital storytelling engines like Scenejo. Using CharanisML, an author can control characters independently of both storytelling engines and two- or three-dimensional representations.
… Using Virtual Reality Technologies for Storytelling, 2007
This research project proposes to develop a standard connection mechanism that makes narrative environments and the external systems controlling them interoperable. Thanks to a new communication interface called RCEI, different knowledge-based storytelling systems will be able to perform interactive drama using the different narrative environments available. The controller operates at a reasonable level of granularity, the only additional requirement being the development of valid adapters for both ends of the connection. An open-source implementation of this protocol and language is provided in order to save extra parsing and serializing effort and make the software more interoperable.
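The following Python sketch illustrates the adapter idea with an invented JSON message format (RCEI's actual protocol and language are not reproduced here): an adapter on each end of the connection serializes commands one way and events the other:

```python
# Illustrative adapters between a storytelling system and a narrative
# environment; the message format below is invented, not RCEI's.
import json

class EnvironmentAdapter:
    """Environment side: parses commands, executes them, reports events."""
    def execute(self, message: str) -> str:
        cmd = json.loads(message)
        print(f"environment: {cmd['action']} {cmd['args']}")
        return json.dumps({"event": "done", "action": cmd["action"]})

class StoryAdapter:
    """Storytelling-engine side: serializes coarse-grained commands."""
    def __init__(self, env): self.env = env
    def send(self, action: str, **args) -> dict:
        reply = self.env.execute(json.dumps({"action": action, "args": args}))
        return json.loads(reply)

story = StoryAdapter(EnvironmentAdapter())
print(story.send("move", character="hero", to="castle"))
```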
Intelligent Virtual Agents, 2007
Interactive Storytelling, 2020
In this project, we introduce finite state machines as a way to simultaneously connect an audience's input with performer output during a live performance. Our approach is novel in that we redefine the traditional notion of an author by dividing and balancing the responsibilities for creating and developing emergent narrative between three elements: the audience, finite state machines, and the performers. This also allows audience members a large degree of freedom in providing input to the system, as we can consider audience inaction a form of productive action. We have developed the Data Generation Engine (DGE), software that generates data to be used by performers for creating and developing narrative, and that provides audiences with opportunities to manipulate the DGE's data generation and distribution mechanisms. We argue that using this approach in a live theatre context allows a consistent narrative to emerge while giving the audience the freedom to engage in the narrative without disrupting the performance.
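A toy Python sketch of the underlying idea, not the actual DGE: a finite state machine maps (state, audience input) pairs to performer states, with silence treated as a valid input in its own right:

```python
# Toy FSM connecting audience input to performer output; inaction
# ("silence") is itself a transition trigger. All labels are invented.
TRANSITIONS = {
    ("calm", "applause"):    "energetic",
    ("calm", "silence"):     "tense",       # inaction as productive action
    ("energetic", "silence"): "calm",
    ("tense", "applause"):   "calm",
}

def step(state: str, audience_input: str) -> str:
    return TRANSITIONS.get((state, audience_input), state)

state = "calm"
for signal in ["silence", "applause", "silence"]:
    state = step(state, signal)
    print(f"audience: {signal:8s} -> performers improvise in a {state} mode")
```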
Proceedings of the 2nd International Conference on INtelligent TEchnologies for interactive enterTAINment, 2008
Today there are a number of systems for developing interactive digital storytelling applications. Each one uses its own architecture, data structures, and user interface, which makes it practically impossible to create a single universal quantitative metric to compare them. While these differences are intrinsic to the artistic nature of narrative applications, developers of the underlying technology could benefit from some "evaluation standards" for these systems' functionality, interoperability, and performance. This paper describes a testbed environment that has been designed as an example scenario for testing how different interactive storytelling systems confront a set of "common challenges" of this kind of application. In order to avoid additional programming effort, an adapter that allows this environment to connect to other systems has been implemented and released as open source.