2015, The Visual Computer
Actions performed by a virtual character can be controlled with verbal commands such as 'walk five steps forward'. Similar control of the motion style, meaning how the actions are performed, is complicated by the ambiguity of describing individual motions with phrases such as 'aggressive walking'. In this paper, we present a method for controlling motion style with relative commands such as 'do the same, but more sadly'. Based on acted example motions, comparative annotations, and a set of calculated motion features, relative styles can be defined as vectors in the feature space. We present a new method for creating these style vectors by finding out which features are essential for a style to be perceived and eliminating those that show only incidental correlations with the style. We show with a user study that our feature selection procedure is more accurate than earlier methods for creating style vectors, and that the style definitions generalize across different actors and annotators. We also present a tool enabling interactive control of parametric motion synthesis by verbal commands. As the control method is independent from the generation of motion, it can be applied to virtually any parametric synthesis method.
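The core idea of a relative style command can be illustrated with a minimal sketch (the feature names and numbers below are invented for illustration and are not the paper's actual feature set): a style is a direction in motion-feature space, and a command like 'more sadly' moves the current motion's feature vector along that direction.

```python
import numpy as np

# Hypothetical motion features; the paper derives its own feature set
# from acted examples and comparative annotations.
FEATURES = ["mean_speed", "head_tilt", "stride_length", "arm_swing"]

def relative_style_edit(features, style_vector, amount=1.0):
    """Apply a relative command such as 'more sadly' by moving the
    current feature vector along a learned style direction."""
    return features + amount * style_vector

current = np.array([1.2, 0.05, 0.9, 0.6])           # current motion's features
sad_direction = np.array([-0.3, 0.4, -0.2, -0.25])  # learned 'sad' style vector

more_sad = relative_style_edit(current, sad_direction, amount=1.0)
much_more_sad = relative_style_edit(current, sad_direction, amount=2.0)
print(dict(zip(FEATURES, np.round(more_sad, 3))))
```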
Computer Graphics Forum, 2004
While powerful, the simplest version of this approach is not particularly well suited to modeling the specific style of an individual whose motion had not yet been recorded when building the database: it would take an expert to adjust the PCA weights to obtain a motion style that is indistinguishable from his. Consequently, when realism is required, the current practice is to perform a full motion capture session each time a new person must be considered. In this paper, we extend the PCA approach so that this requirement can be drastically reduced: for whole classes of cyclic and noncyclic motions such as walking, running or jumping, it is enough to observe the newcomer moving only once at a particular speed or jumping a particular distance using either an optical motion capture system or a simple pair of synchronized video cameras. This one observation is used to compute a set of principal component weights that best approximates the motion and to extrapolate in real-time realistic animations of the same person walking or running at different speeds, and jumping a different distance.
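The weight-fitting step described above can be sketched as a least-squares problem, assuming a precomputed motion mean and principal-component basis (the toy data below stands in for a real motion database; this is not the paper's exact pipeline):

```python
import numpy as np

def fit_pca_weights(observed_motion, mean_motion, basis):
    """Least-squares fit of principal-component weights so that
    mean + basis @ w best approximates a single observed motion.
    observed_motion, mean_motion: flattened pose trajectories of shape (D,)
    basis: (D, K) matrix of principal components."""
    w, *_ = np.linalg.lstsq(basis, observed_motion - mean_motion, rcond=None)
    return w

def synthesize(mean_motion, basis, weights):
    """Reconstruct a motion from a set of PCA weights."""
    return mean_motion + basis @ weights

# Toy data standing in for a walk-cycle database (D = frames * DoFs).
rng = np.random.default_rng(0)
D, K = 300, 8
mean_motion = rng.normal(size=D)
basis, _ = np.linalg.qr(rng.normal(size=(D, K)))    # orthonormal components
observed = mean_motion + basis @ rng.normal(size=K)  # the one observed motion

w = fit_pca_weights(observed, mean_motion, basis)
reconstruction = synthesize(mean_motion, basis, w)
print("reconstruction error:", np.linalg.norm(reconstruction - observed))
```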
2007
Many computer applications depend on the visual realism of virtual human character motion. Unfortunately, it is difficult to describe what makes a motion look real yet easy to recognize when a motion looks fake. These characteristics make synthesizing motions for a virtual human character a difficult challenge. A potentially useful approach is to synthesize high-quality, nuanced motions from a database of example motions. Unfortunately, none of the existing example-based synthesis techniques has been able to supply the quality, flexibility, efficiency and control needed for interactive applications, or applications where a user directs a virtual human character through an environment. At runtime, interactive applications, such as training simulations and video games, must be able to synthesize motions that not only look realistic but also quickly and accurately respond to a user's request. This dissertation shows how motion parameter decoupling and highly structured control mechanisms can be used to synthesize high-quality motions for interactive applications using an example-based approach. The main technical contributions include three example-based motion synthesis algorithms that directly address existing interactive motion synthesis problems: a method for splicing upper-body actions with lower-body locomotion, a method for controlling character gaze using a biologically and psychologically inspired model, and a method for using a new data structure called a parametric motion graph to synthesize accurate, quality motion streams in realtime.
Intelligent Virtual Agents, 2015
We present a Matlab toolbox for synthesis and visualization of human motion style. The aim is to support development of expressive virtual characters by providing implementations of several style related motion synthesis methods thus allowing side-by-side comparisons. The implemented methods are based on recorded (captured or synthetic) motions, and include linear motion interpolation and extrapolation, style transfer, rotation swapping per body part and per quaternion channel, frequency band scaling and swapping, and Principal/Independent Component Analysis (PCA/ICA) based synthesis and component swapping.
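As one example of the listed methods, linear interpolation and extrapolation between a neutral and a styled motion can be sketched as below (per-frame linear blending of pose parameters; real joint rotations stored as quaternions would instead be blended with slerp or in a suitable local parameterization):

```python
import numpy as np

def blend_motions(neutral, styled, alpha):
    """Linear interpolation (0 <= alpha <= 1) or extrapolation (alpha > 1)
    between a neutral and a styled motion, frame by frame.
    Both motions: arrays of shape (frames, dofs), assumed time-aligned."""
    return (1.0 - alpha) * neutral + alpha * styled

frames, dofs = 120, 60
neutral = np.zeros((frames, dofs))
styled = np.random.default_rng(1).normal(scale=0.1, size=(frames, dofs))

half_styled = blend_motions(neutral, styled, 0.5)   # interpolation
exaggerated = blend_motions(neutral, styled, 1.5)   # extrapolation
```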
Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2009
Animation data, from motion capture or other sources, is becoming increasingly available and provides high quality motion, but is difficult to customize for the needs of a particular application. This is especially true when stylistic changes are needed, for example, to reflect a character's changing mood, differentiate one character from another or meet the precise desires of an animator. We introduce a system for editing animation data that is particularly well suited to making stylistic changes. Our approach transforms the joint angle representation of animation data into a set of pose parameters more suitable for editing. These motion drives include position data for the wrists, ankles and center of mass, as well as the rotation of the pelvis. We also extract correlations between drives and body movement, specifically between wrist position and the torso angles. The system solves for the pose at each frame based on the current values of these drives and correlations using an efficient set of inverse kinematics and balance algorithms. An animator can interactively edit the motion by performing linear operations on the motion drives or extracted correlations, or by layering additional correlations. We demonstrate the effectiveness of the approach with various examples of gesture and locomotion.
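The correlation-extraction step can be illustrated with a toy sketch (the drive names, the linear model, and the data are illustrative only, not the system's actual formulation): a relationship between one drive and a body angle is fitted from the data and reused when the animator edits that drive.

```python
import numpy as np

# Toy per-frame data standing in for extracted motion drives: wrist height
# (one drive) and a torso pitch angle. Names and values are illustrative.
rng = np.random.default_rng(4)
wrist_height = rng.uniform(0.8, 1.6, size=200)
torso_pitch = 0.3 * wrist_height + rng.normal(scale=0.02, size=200)

# Extract a linear correlation: torso_pitch ~ a * wrist_height + b.
a, b = np.polyfit(wrist_height, torso_pitch, deg=1)

def correlated_torso_pitch(new_wrist_height):
    """When an animator edits the wrist drive, the torso follows the
    extracted correlation, keeping the pose plausible."""
    return a * new_wrist_height + b
```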
This paper describes a framework that allows a user to synthesize human motion while retaining control of its qualitative properties. The user paints a timeline with annotations, like walk, run or jump, from a vocabulary which is freely chosen by the user. The system then assembles frames from a motion database so that the final motion performs the specified actions at specified times. The motion can also be forced to pass through particular configurations at particular times, and to go to a particular position and orientation. Annotations can be painted positively (for example, must run), negatively (for example, may not run backwards) or as a don't-care. The system uses a novel search method, based around dynamic programming at several scales, to obtain a solution efficiently so that authoring is interactive. Our results demonstrate that the method can generate smooth, natural-looking motion. The annotation vocabulary can be chosen to fit the application, and allows specification of composite motions (run and jump simultaneously, for example). The process requires a collection of motion data that has been annotated with the chosen vocabulary. This paper also describes an effective tool, based around repeated use of support vector machines, that allows a user to annotate a large collection of motions quickly and easily so that they may be used with the synthesis algorithm.
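The annotation tool's repeated use of support vector machines can be sketched roughly as follows (hypothetical per-frame features and placeholder labels; scikit-learn is used here purely for illustration): the user labels a small set of frames for one term in the vocabulary, a classifier propagates the label to the rest of the collection, and the user corrects and retrains until satisfied.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for per-frame motion features (e.g., joint positions/velocities).
rng = np.random.default_rng(2)
features = rng.normal(size=(1000, 30))

# The user hand-labels a small subset of frames for one annotation, e.g. 'run'.
labeled_idx = rng.choice(1000, size=100, replace=False)
labels = (features[labeled_idx, 0] > 0).astype(int)   # placeholder labels

clf = SVC(kernel="rbf").fit(features[labeled_idx], labels)
predicted = clf.predict(features)   # propagate 'run' / 'not run' to all frames
# In practice the user would inspect the predictions, correct a few frames,
# and retrain, repeating until the whole collection is annotated.
```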
2008
In virtual human (VH) applications, and in particular games, motions with different functions are to be synthesized, such as communicative and manipulative hand gestures, locomotion, and expression of the emotions or identity of the character. In bodily behavior, the primary motions define the function, while the more subtle secondary motions contribute to realism and variability. From a technological point of view, different methods are at our disposal for motion synthesis: motion capture and retargeting, procedural kinematic animation, force-driven dynamical simulation, or the application of Perlin noise. Which method should be used for generating primary and secondary motions, and how can the information needed to define them be gathered? In this paper we elaborate on informed usage, in its two meanings. First we discuss, based on our own ongoing work, how motion capture data can be used to identify the joints involved in primary and secondary motions, and to provide a basis for specifying the essential parameters of the synthesis methods used to generate them. Then we explore the possibility of using different methods for primary and secondary motion in parallel, in such a way that one method informs the other. We introduce our mixed usage of kinematic and dynamic control of different body parts to animate a character in real time. Finally, we discuss a motion Turing test as a methodology for evaluating mixed motion paradigms.
2014
We present a first implementation of a framework for the exploration of stylistic variations in intangible heritage, recorded through motion capture techniques. Our approach is based on a statistical modelling of the phenomenon, which is then presented to the user through a reactive stylistic synthesis, visualised in real-time on a virtual character. This approach enables an interactive exploration of the stylistic space. In this paper, a first implementation of the framework is presented with a proof-of-concept application enabling the intuitive and interactive stylistic exploration of an expressive gait space.
Lecture Notes in Computer Science
This paper presents an empirical evaluation of a method called Style transformation which consists of modifying an existing gesture sequence in order to obtain a new style where the transformation parameters have been extracted from an existing captured sequence. This data-driven method can be used either to enhance key-framed gesture animations or to taint captured motion sequences according to a desired style.
2017
With the advancement in motion sensing technology, acquiring high-quality human motions for creating realistic character animation is much easier than before. Since motion data itself is not the main obstacle anymore, more and more effort goes into enhancing the realism of character animation, such as motion styles and control. In this paper, we explore a less studied area: the emotion of motions. Unlike previous work, which encodes emotions as discrete motion style descriptors, we propose a continuous control indicator called motion strength; by adjusting it, our data-driven approach synthesizes motions with fine control over emotions. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model mapping low-level motion features to the emotion strength. Since the motion synthesis model is learned in the training stage, the computation time required for synthesizing motions at run-time is very low. As a...
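A rough sketch of learning a mapping from low-level motion features to an emotion-strength value is given below (ridge regression on synthetic data; the actual model, features, and labels in the paper will differ):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy low-level motion features (e.g., speed, jerk, posture statistics) and
# emotion-strength labels in [0, 1]; both are synthetic placeholders.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 12))
strength = np.clip(X @ rng.normal(size=12) * 0.1 + 0.5, 0.0, 1.0)

model = Ridge(alpha=1.0).fit(X, strength)

def strength_of(features):
    """Predict emotion strength for one motion segment's feature vector."""
    return float(model.predict(features.reshape(1, -1))[0])

print(strength_of(X[0]))
```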
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. Actions are encoded in a compact and human-readable XML-format which is a suitable basis for the synthesis of virtual human animations. XSAMPL3D descriptions can be generated automatically from captured VR interaction data or created manually for rapid prototyping of animations.
Computer Animation and Virtual Worlds, 2008
In this paper, we present a novel method for editing stylistic human motions. We represent styles as differences between stylistic motions and introduced neutral motions, including timing differences and spatial differences. Timing differences are defined as time alignment curves, while spatial differences are found by a machine learning technique: Independent Feature Subspaces Analysis, which is the combination of Multidimensional Independent Component Analysis and Invariant Feature Subspaces. This technique is used to decompose two motions into several subspaces. One of these subspaces can be defined as style subspace that describes the style aspects of the stylistic motion. In order to find the style subspace, we compare norms of the projections of two motions on each subspace. Once the time alignment curves and style subspaces of several motion clips are obtained, animators can tune, transfer and merge the style subspaces to synthesize new motion clips with various styles. Our method is easy to use since manual manipulations and large training data sets are not necessary.
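The style-subspace selection step can be sketched under one simplifying assumption, namely that the style subspace is the one where the projections of the stylistic and neutral motions differ most in norm (the bases and data below are synthetic; this is not the authors' exact criterion):

```python
import numpy as np

def pick_style_subspace(neutral, styled, subspaces):
    """Pick the subspace where the stylistic and neutral motions differ most,
    by comparing the norms of their projections.
    neutral, styled: feature vectors of shape (D,); subspaces: list of (D, k) bases."""
    scores = [np.linalg.norm(B.T @ styled) - np.linalg.norm(B.T @ neutral)
              for B in subspaces]
    return int(np.argmax(scores))

rng = np.random.default_rng(5)
D = 30
bases = [np.linalg.qr(rng.normal(size=(D, 5)))[0] for _ in range(4)]
neutral_feat = rng.normal(size=D)
styled_feat = neutral_feat + bases[2] @ (3.0 * rng.normal(size=5))  # style lives in subspace 2
print(pick_style_subspace(neutral_feat, styled_feat, bases))        # typically prints 2
```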
2012
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and involved objects. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques only. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
Lecture Notes in Computer Science, 2012
We explore motion capture as a means for generating expressive bodily interaction between humans and virtual characters. Recorded interactions between humans are used as examples from which rules are formed that control the reactions of a virtual character to human actions. The author of the rules selects segments considered important and features that best describe the desired interaction. These features are motion descriptors that can be calculated in real-time, such as quantity of motion or the distance between the interacting characters. The rules are authored as mappings from observed descriptors of a human to the desired descriptors of the responding virtual character. Our method enables a straightforward process of authoring continuous and natural interaction. It can be used in games and interactive animations to produce dramatic and emotional effects. Our approach requires fewer example motions than previous machine learning methods and enables manual editing of the produced interaction rules.
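The flavor of an authored interaction rule can be conveyed with a small sketch (the descriptor, threshold, and gain are invented for illustration): an observed descriptor of the human is mapped to a desired descriptor of the responding character.

```python
def quantity_of_motion(prev_joint_positions, curr_joint_positions):
    """Sum of joint displacements between consecutive frames: a simple
    motion descriptor that can be computed in real time."""
    return sum(abs(c - p) for p, c in zip(prev_joint_positions, curr_joint_positions))

def character_target_qom(human_qom, threshold=0.2, gain=0.5):
    """A hypothetical authored rule: when the human's quantity of motion
    exceeds a threshold, the character mirrors it at reduced intensity;
    otherwise the character stays nearly still."""
    return gain * human_qom if human_qom > threshold else 0.05
```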
Progress on making animated characters move expressively has been slow, and the problem remains among the most challenging in computer graphics. Simply attending to the low-level motion control problem, particularly for physically based models, is very difficult. Providing an animator with the tools to imbue character motion with broad expressive qualities is even more ambitious, but it is clear it is a goal to which we must aspire. Part of the problem is simply finding the right language in which to express qualities of motion. Another important issue is that expressive animation often involves many disparate parts of the body, which thwarts bottom-up controller synthesis. We demonstrate progress in this direction through the specification of directed, expressive animation over a limited range of standing movements. A key contribution is that through the use of high-level concepts such as character sketches, actions and properties, which impose different modalit...
1999
Current methods for figure animation involve a tradeoff between the level of realism captured in the movements and the ease of generating the animations. We introduce a motion control paradigm that circumvents this tradeoff: it provides the ability to generate a wide range of natural-looking movements with minimal user labor. Effort, which is one part of Rudolf Laban's system for observing and analyzing movement, describes the qualitative aspects of movement. Our motion control paradigm simplifies the generation of expressive movements by proceduralizing these qualitative aspects to hide the non-intuitive, quantitative aspects of movement. We build a model of Effort using a set of kinematic movement parameters that defines how a figure moves between goal keypoints. Our motion control scheme provides control through Effort's four-dimensional system of textual descriptors, providing a level of control thus far missing from behavioral animation systems and offering novel specification and editing capabilities on top of traditional keyframing and inverse kinematics methods. Since our Effort model is inexpensive computationally, Effort-based motion control systems can work in real-time. We demonstrate our motion control scheme by implementing EMOTE (Expressive MOTion Engine), a character animation module for expressive arm movements. EMOTE works with inverse kinematics to control the qualitative aspects of end-effector specified movements. The user specifies general movements by entering a sequence of goal positions for each hand. The user then expresses the essence of the movement by adjusting sliders for the Effort motion factors: Space, Weight, Time, and Flow. EMOTE produces a wide range of expressive movements, provides an easy-to-use interface (that is more intuitive than joint angle interpolation curves or physical parameters), features interactive editing, and real-time motion generation.
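A toy illustration of what 'proceduralizing' the Effort factors might look like is given below (the slider ranges and the particular mappings are invented, not EMOTE's actual model): each factor is a value in [-1, 1] that modulates low-level kinematic parameters of a movement between keypoints.

```python
def effort_to_kinematics(space, weight, time, flow):
    """Map Laban Effort sliders (each in [-1, 1]) to illustrative kinematic
    parameters of an arm movement between goal keypoints."""
    return {
        "path_curvature": 0.5 - 0.4 * space,       # indirect -> more curved path
        "peak_acceleration": 1.0 + 0.8 * weight,   # strong -> more forceful acceleration
        "duration_scale": 1.0 - 0.4 * time,        # sudden -> shorter movement
        "keypoint_smoothing": 0.5 + 0.4 * flow,    # free -> smoother transitions
    }

params = effort_to_kinematics(space=-0.5, weight=0.8, time=0.6, flow=-0.2)
```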
1994
In this paper we extend previous work on automatic motion synthesis for physically realistic 2D articulated figures in three ways. First, we describe an improved motion-synthesis algorithm that runs substantially faster than previously reported algorithms. Second, we present two new techniques for influencing the style of the motions generated by the algorithm. These techniques can be used by an animator to achieve a desired movement style, or they can be used to guarantee variety in the motions synthesized over several runs of the algorithm. Finally, we describe an animation editor that supports the interactive concatenation of existing, automatically generated motion controllers to produce complex, composite trajectories. Taken together, these results suggest how a usable, useful system for articulated-figure motion synthesis might be developed.
ACM Transactions on Graphics, 2003
We introduce an acting-based animation system for creating and editing character animation at interactive speeds. Our system requires minimal training, typically under an hour, and is well suited for rapidly prototyping and creating expressive motion. A real-time motion-capture framework records the user's motions for simultaneous analysis and playback on a large screen. The animator's real-world, expressive motions are mapped into the character's virtual world. Visual feedback maintains a tight coupling between the animator and character. Complex motion is created by layering multiple passes of acting. We also introduce a novel motion-editing technique, which derives implicit relationships between the animator and character. The animator mimics some aspect of the character motion, and the system infers the association between features of the animator's motion and those of the character. The animator modifies the mimic by acting again, and the system maps the changes...
2002
Computers process and store human movement in a different manner from how humans perceive and observe human movement. We describe an investigation of the mapping between the linguistic descriptions people ascribe to animated motions and the parameters utilized to produce the animations.
2015
We describe a new paradigm in which a user can produce a wide range of expressive, natural-looking movements of animated characters by specifying their manners and attitudes with natural language verbs and adverbs. A natural language interpreter, a Parameterized Action Representation (PAR), and an expressive motion engine (EMOTE) are designed to bridge the gap between natural language instructions issued by the user and expressive movements carried out by the animated characters. By allowing users to customize basic movements with natural language terms to support individualized expressions, our approach may eventually lead to the automatic generation of expressive movements from speech text, a storyboard script, or a behavioral simulation.