2008
In virtual human (VH) applications, and in particular games, motions with different functions must be synthesized, such as communicative and manipulative hand gestures, locomotion, and the expression of emotions or of the identity of the character. In bodily behavior, the primary motions define the function, while the more subtle secondary motions contribute to realism and variability. From a technological point of view, different methods are at our disposal for motion synthesis: motion capture and retargeting, procedural kinematic animation, force-driven dynamical simulation, or the application of Perlin noise. Which method should be used for generating primary and secondary motions, and how can the information needed to define them be gathered? In this paper we elaborate on informed usage, in its two meanings. First we discuss, based on our own ongoing work, how motion capture data can be used to identify the joints involved in primary and secondary motions, and to provide a basis for specifying the essential parameters of the synthesis methods used for each. Then we explore the possibility of using different methods for primary and secondary motion in parallel, in such a way that one method informs the other. We introduce our mixed usage of kinematic and dynamic control of different body parts to animate a character in real time. Finally, we discuss the motion Turing test as a methodology for evaluating mixed motion paradigms.
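The Perlin-noise option mentioned in this abstract can be illustrated with a minimal sketch: smooth pseudo-random noise is layered on top of a primary joint trajectory to produce subtle secondary jitter. The fade curve is Perlin's standard one; the amplitude and frequency defaults are illustrative values, not parameters taken from the paper.

```python
import math
import random

def smoothstep(t):
    # Perlin's fade curve: 6t^5 - 15t^4 + 10t^3, zero first and second
    # derivatives at t = 0 and t = 1, which keeps the noise smooth.
    return t * t * t * (t * (t * 6 - 15) + 10)

def value_noise_1d(x, seed=0):
    """Smooth, deterministic pseudo-random noise in [-1, 1]."""
    def lattice(i):
        # Reproducible random value per integer lattice point.
        return random.Random(i * 1000003 + seed).uniform(-1.0, 1.0)
    i0 = math.floor(x)
    t = x - i0
    a, b = lattice(i0), lattice(i0 + 1)
    return a + smoothstep(t) * (b - a)

def jitter_joint(base_angle, time, amplitude=0.02, frequency=1.5):
    """Overlay low-amplitude noise on a primary joint angle (radians)."""
    return base_angle + amplitude * value_noise_1d(time * frequency)
```

Because the noise is bounded and smooth, the secondary motion never overwhelms the primary one: the jittered angle stays within `amplitude` of the base angle at every frame.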
2007
Many computer applications depend on the visual realism of virtual human character motion. Unfortunately, it is difficult to describe what makes a motion look real yet easy to recognize when a motion looks fake. These characteristics make synthesizing motions for a virtual human character a difficult challenge. A potentially useful approach is to synthesize high-quality, nuanced motions from a database of example motions. Unfortunately, none of the existing example-based synthesis techniques has been able to supply the quality, flexibility, efficiency and control needed for interactive applications, or applications where a user directs a virtual human character through an environment. At runtime, interactive applications, such as training simulations and video games, must be able to synthesize motions that not only look realistic but also quickly and accurately respond to a user's request. This dissertation shows how motion parameter decoupling and highly structured control mechanisms can be used to synthesize high-quality motions for interactive applications using an example-based approach. The main technical contributions include three example-based motion synthesis algorithms that directly address existing interactive motion synthesis problems: a method for splicing upper-body actions with lower-body locomotion, a method for controlling character gaze using a biologically and psychologically inspired model, and a method for using a new data structure called a parametric motion graph to synthesize accurate, quality motion streams in real time.
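The upper-/lower-body splicing idea can be shown in its most reduced form as a per-joint selection mask: lower-body joints come from the locomotion clip, upper-body joints from the action clip. The dissertation's actual method also time-aligns and blends the two motions, which is omitted here, and the joint names below are hypothetical.

```python
# Hypothetical joint names; a real rig would have many more.
UPPER_BODY = {"spine", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow", "head"}

def splice(locomotion_pose, action_pose, upper_joints=UPPER_BODY):
    """Build one frame: lower-body joints from the locomotion pose,
    upper-body joints from the action pose. Poses are joint -> angle dicts."""
    return {
        joint: (action_pose[joint] if joint in upper_joints else angle)
        for joint, angle in locomotion_pose.items()
    }
```

In practice a hard mask like this produces a visible seam at the spine, which is why the full method aligns timing and blends near the boundary.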
2010
Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or 'natural') and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parameterize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.
2010
Motion capture (mocap) provides highly precise data of human movement which can be used for empirical analysis and virtual human animation. In this paper, we describe a corpus that has been collected for the purpose of modelling movement in a dyadic conversational context. We describe the technical setup, scenarios and challenges involved in capturing the corpus, and present ways of annotating and visualizing the data. For visualization we suggest the techniques of motion trails and animated recreation. We have incorporated these motion capture visualization techniques as extensions to the ANVIL tool and into a procedural animation system, and show a first attempt at automated analysis of the data (handedness detection).
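A handedness detector of the kind this abstract mentions could, at its simplest, compare how far each wrist marker travels over a recording and pick the hand that moves notably more. This is an illustrative guess at such an analysis, not the corpus authors' algorithm; the margin parameter is an assumption.

```python
import math

def path_length(positions):
    """Total 3D distance traveled by a marker across consecutive frames."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def detect_handedness(left_wrist, right_wrist, margin=1.1):
    """Guess the dominant hand as the wrist that travels notably farther.

    left_wrist / right_wrist: sequences of (x, y, z) positions per frame.
    margin: how much more one hand must move to count as dominant.
    """
    l, r = path_length(left_wrist), path_length(right_wrist)
    if l > margin * r:
        return "left"
    if r > margin * l:
        return "right"
    return "ambidextrous"
```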
Lecture Notes in Computer Science, 2011
We propose a novel methodology for authoring interactive behaviors of virtual characters. Our approach is based on enaction, which means a continuous two-directional loop of bodily interaction. We have implemented the case of two characters, one human and one virtual, who are separated by a glass wall and can interact only through bodily motions. Animations for the virtual character are based on captured motion segments and descriptors for the style of motions that are automatically calculated from the motion data. We also present a rule authoring system that is used for generating behaviors for the virtual character. Preliminary results of an enaction experiment with an interview show that the participants could experience the different interaction rules as different behaviors or attitudes of the virtual character.
IEEE Computer Graphics and Applications, 1998
Advances in computer animation techniques have spurred increasing levels of realism and movement in virtual characters that closely mimic physical reality. Increases in computational power and control methods enable the creation of 3D virtual humans for real-time interactive applications [1]. Artificial intelligence techniques and autonomous agents give computer-generated characters a life of their own and let them interact with other characters in virtual worlds. Developments and advances in networking and virtual reality (VR) let multiple participants share virtual worlds and interact with applications or each other.
2004
While computer animation is currently widely used to create characters in games, films, and various other applications, techniques such as motion capture and keyframing are still relatively expensive. Automatic acquisition of secondary motion and/or motion prototyping using machine learning might be a solution to this problem. Our paper presents an application of the Q-learning algorithm to generate action sequences for animated characters. The technique can be used in both deterministic and nondeterministic environments to generate actions which can later be incorporated into more complex animation sequences. The paper presents an application of both the deterministic and the nondeterministic update of the Q-learning algorithm to the automatic acquisition of motion. Results obtained from the learning system are also compared to human motion and conclusions are drawn.
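The two Q-learning updates the abstract refers to can be sketched as follows: the deterministic update overwrites the value estimate with the one-step target outright, while the nondeterministic variant only moves toward the target by a learning rate. The state and action names are placeholders, not those of the paper.

```python
from collections import defaultdict

# Placeholder action set for an animated character.
ACTIONS = ("step", "turn_left", "turn_right")

def q_update_deterministic(Q, s, a, r, s_next, gamma=0.9):
    """Deterministic environment: the one-step target replaces the estimate."""
    Q[(s, a)] = r + gamma * max(Q[(s_next, b)] for b in ACTIONS)

def q_update_stochastic(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Nondeterministic environment: move toward the target by rate alpha,
    averaging over the randomness in rewards and transitions."""
    target = r + gamma * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```

With `Q = defaultdict(float)`, unseen state-action pairs start at zero, and values propagate backward from rewarding states as updates are applied along experienced action sequences.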
1989
This paper explains the ideal concepts that must be part of a system for synthetic actor animation. After a brief introduction to the role of synthetic actors, five major steps to the motion control of these actors are discussed: positional constraints and inverse kinematics, dynamics, impact of the environment, task planning and behavioral animation.
IEEE Computer Graphics and Applications, 2000
Advances in computer animation techniques have spurred increasing levels of realism and movement in virtual characters that closely mimic physical reality. Increases in computational power and control methods enable the creation of 3D virtual humans for real-time interactive applications [1]. Artificial intelligence techniques and autonomous agents give computer-generated characters a life of their own and let them interact with other characters in virtual worlds. Developments and advances in networking and virtual reality (VR) let multiple participants share virtual worlds and interact with applications or each other.
Controlling actions and behavior
High-level control procedures make it possible to give behaviors to computer-generated characters that make them appear "intelligent": that is, they interact with other characters with similar properties and respond to environmental situations in a meaningful and constructive way. Such scenarios have the potential of receiving script information as input and producing computer-generated sequences as output. Application areas include production animation and interactive computer games. In addition, researchers are currently investigating ways of having virtual humans perform complex tasks reliably [1].
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2008
This paper describes a framework for animating virtual characters in real-time environments using motion capture data. We mainly focus on the adaptation of motion capture data to the virtual skeleton and to its environment. To speed up this real-time process we introduce a morphology-independent representation of motion. Based on this representation, we have redesigned the methods for inverse kinematics and kinetics so that the motion can be adapted to spacetime constraints, including control of the center-of-mass position. If the resulting motion does not satisfy general mechanical laws (such as keeping the angular momentum constant during aerial phases), the current pose is corrected. Additional external forces can also be considered in the dynamic correction module, so that the character automatically bends its hips when pushing heavy objects, for example. The entire process is performed in real time.
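Inverse kinematics itself, one building block of such a framework, can be sketched with the classic Cyclic Coordinate Descent (CCD) algorithm on a planar chain: each joint is rotated in turn so the end effector swings toward the target. This is a generic textbook illustration, not the paper's morphology-independent solver, and it ignores the kinetic (center-of-mass) constraints the paper adds.

```python
import math

def fk(angles, lengths):
    """Forward kinematics: joint positions of a planar chain rooted at the origin.
    angles are relative joint rotations (radians), lengths are segment lengths."""
    x = y = theta = 0.0
    points = [(0.0, 0.0)]
    for a, l in zip(angles, lengths):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        points.append((x, y))
    return points

def ccd_ik(angles, lengths, target, iterations=50):
    """Cyclic Coordinate Descent: sweep joints from tip to root, rotating each
    so the end effector lines up with the target as seen from that joint."""
    angles = list(angles)
    for _ in range(iterations):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            end, pivot = pts[-1], pts[i]
            a_end = math.atan2(end[1] - pivot[1], end[0] - pivot[0])
            a_tgt = math.atan2(target[1] - pivot[1], target[0] - pivot[0])
            angles[i] += a_tgt - a_end
    return angles
```

For reachable targets on short chains CCD converges in a handful of sweeps, which is why kinematic solvers of this family are practical at interactive frame rates.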
2009
Motion capture technology is recognised as a standard tool in the computer animation pipeline. It provides detailed movement for animators; however, it also introduces problems and raises concerns for creating realistic and convincing motion for character animation. In this thesis, post-processing techniques that result in realistic motion generation are investigated. A number of techniques are introduced that improve the quality of motion generated from motion capture data, especially when integrating motion transitions from different motion clips. The presented motion data reconstruction technique is able to build convincing, realistic transitions from an existing motion database, and overcomes the inconsistencies introduced by traditional motion blending techniques. It also provides a method for animators to re-use motion data more efficiently. Along with the development of motion data transition reconstruction, the motion capture data mapping technique was investigated...
We describe a system for off-line production and real-time playback of motion for articulated human figures in 3D virtual environments. The key notions are (1) the logical storage of full-body motion in posture graphs, which provides a simple motion access method for playback, and (2) mapping the motions of higher-DOF figures to lower-DOF figures using slaving to provide human models at several levels of detail, both in geometry and articulation, for later playback. We present our system in the context of a simple problem: animating human figures in a distributed simulation, using DIS protocols for communicating the human state information. We also discuss several related techniques for real-time animation of articulated figures in visual simulation.
The interactive approach to character animation requires sophisticated motion synthesis algorithms to generate poses and motions for unpredictable events. Traditional animation techniques used alone cannot produce the needed degree of realism or the degree of controllability required for virtual characters. Motion-capture techniques and the physical simulation of characters, when combined, offer the potential to produce realistic motions while still maintaining a high level of control. A wide range of possible poses and motions can thus be generated, but determining the quality of the synthesized motion in terms of believability is important. Algorithms are created to evaluate the plausibility of a generated pose, so that only realistic and plausible character poses are allowed.
IEEE Computer Graphics and Applications, 1998
At present, very few systems possess the multiple functions required to build real-time deformable humans who look believable and recognizable. In this paper, we describe our interactive system for building a virtual human, fitting texture to the body and head, and controlling skeleton motion. Emphasis is placed on those aspects of deformation that increase the realism of the human's appearance. Specific attention is also paid to the hand, as it contains half of the body's skeletal bones. The system includes facial motion control as well. We first detail the complete animation framework, integrating all the virtual human modules. We then present the first of our two case studies: CyberTennis, where two virtual humans play real-time tennis within our Virtual Life Network Environment system. One real player is in Geneva and her opponent is in Lausanne. An autonomous virtual judge referees the game. The second application combines high tech and artistic choreography in a CyberDance performance in a Geneva exhibition hall, as part of Computer Animation 97. In this performance, the movements of the choreographer are captured and paralleled, in real time, by his virtual robot counterpart. The show also features virtual humans choreographed for an aerobic dance session. Additional presentations of this performance are now scheduled, including one at the prestigious Creativity Institute in Zermatt in January 1998.
Presence: Teleoperators and Virtual Environments, 2008
Virtual humans are increasingly used in VR applications, but their animation is still a challenge, especially if complex tasks must be carried out in interaction with the user. In many applications with virtual humans, credible virtual characters play a major role in presence. Motion editing techniques assume that natural laws are intrinsically encoded in prerecorded trajectories and that modifications may preserve them, leading to credible autonomous actors. However, complete knowledge of all the constraints is required to ensure continuity or to synchronize and blend the several actions necessary to achieve a given task. We propose a framework capable of performing these tasks in an interactive environment that can change at each frame, depending on the user's orders. The framework can animate dozens of characters in real time under complex constraints, or hundreds of characters if only ground adaptation is performed. It offers the following capabilities: motion synchronization, blending, retargeting and adaptation, thanks to an enhanced inverse kinetics and kinematics solver. To evaluate this framework we have compared the motor behavior of subjects in real and in virtual environments.
1998
This article surveys virtual humans and techniques to control the face and body. It also covers higher-level interfaces for direct speech input and issues of real-time control. Privileged information for controlling actors' motions falls into three categories: geometric, physical, and behavioral, giving rise to three corresponding categories of motion-control methods. More recently, Thalmann [8] proposed four new classes of synthetic actors: participatory, guided, autonomous, and interactive-perceptive.
Lecture Notes in Computer Science, 2004
Recent findings in biological neuroscience suggest that the brain learns body movements as sequences of motor primitives. Simultaneously, this principle is gaining popularity in robotics, computer graphics and computer vision: movement primitives were successfully applied to robotic control tasks as well as to render or to recognize human behavior. In this paper, we demonstrate that movement primitives can also be applied to the problem of implementing lifelike computer game characters. We present an approach to behavior modeling and learning that integrates several pattern recognition and machine learning techniques: trained with data from recorded multiplayer computer games, neural gas networks learn a topological representation of virtual worlds; PCA is used to identify elementary movements that the human players repeatedly executed during a match; and complex behaviors are represented as probability functions mapping movement primitives to locations in the game environment. Experimental results underline that this framework produces game characters with humanlike skills.
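The PCA step, extracting elementary movements from recorded pose data, can be sketched as a plain SVD on mean-centered pose vectors. The frames-by-DOF data layout is an assumption for illustration, not the paper's actual feature representation.

```python
import numpy as np

def movement_primitives(trajectories, k):
    """Extract k movement primitives from pose data via PCA (thin SVD).

    trajectories: array-like of shape (frames, DOF).
    Returns the mean pose and the top-k principal components (k, DOF)."""
    X = np.asarray(trajectories, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(pose, mean, primitives):
    """Coordinates of one pose in the low-dimensional primitive space."""
    return (np.asarray(pose) - mean) @ primitives.T
```

If the recorded movements really do lie near a low-dimensional subspace, projecting onto the primitives and mapping back (`mean + coords @ primitives`) reconstructs the pose almost exactly.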
Proceedings of the ACM …, 1997
Real-time animation of virtual humans requires a dedicated architecture for the integration of different motion control techniques organized into so-called actions. In this paper we describe a software architecture called AGENTlib for the management of action combination. The actions considered exploit various techniques, from keyframe sequence playback to inverse kinematics and motion capture. Two major requirements have to be enforced from the end user's viewpoint: first, that multiple motion controllers can simultaneously control some parts or the whole of the virtual human; second, that successive actions result in a smooth motion flow.
2010
The synthesis of realistic and complex body movements in real time is a challenging task in computer graphics and in robotics. High realism requires accurate modeling of the details of the trajectories for a large number of degrees of freedom. At the same time, real-time animation necessitates flexible systems that can react in an online fashion and therefore adapt to external constraints. Such online systems are suitable for the self-organization of complex behaviors arising from the dynamic interaction among multiple autonomous characters in the scene. A novel approach for the online synthesis of realistic human body movements is presented here. The proposed model is inspired by concepts from motor control, as it approximates full-body movements by the superposition of lower-dimensional movement primitives (synergies) that are learned from motion capture data. For this purpose, a blind source separation algorithm is applied and provides significantly more compact representations than...
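Two ideas from this abstract, measuring how compact a primitive representation is and reconstructing full-body poses by superposition, can be sketched with PCA standing in for the blind source separation algorithm the paper actually uses. The variance-fraction measure of compactness is an illustrative stand-in, not the paper's metric.

```python
import numpy as np

def compactness(X, k):
    """Fraction of motion variance captured by the top-k primitives.

    X: motion data of shape (frames, DOF). Values near 1.0 mean the motion
    is well explained by only k primitives."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    energy = s ** 2
    return energy[:k].sum() / energy.sum()

def superpose(mean, primitives, weights):
    """Reconstruct one full-body pose as a weighted superposition of
    primitives: pose = mean + sum_i weights[i] * primitives[i]."""
    return mean + weights @ primitives
```

A high compactness score for small k is exactly what makes the primitive representation useful for online synthesis: only a few weight trajectories need to be controlled instead of every joint.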
1994
In this paper we extend previous work on automatic motion synthesis for physically realistic 2D articulated figures in three ways. First, we describe an improved motion-synthesis algorithm that runs substantially faster than previously reported algorithms. Second, we present two new techniques for influencing the style of the motions generated by the algorithm. These techniques can be used by an animator to achieve a desired movement style, or they can be used to guarantee variety in the motions synthesized over several runs of the algorithm. Finally, we describe an animation editor that supports the interactive concatenation of existing, automatically generated motion controllers to produce complex, composite trajectories. Taken together, these results suggest how a usable, useful system for articulated-figure motion synthesis might be developed.