This paper describes a framework that allows a user to synthesize human motion while retaining control of its qualitative properties. The user paints a timeline with annotations (like walk, run, or jump) from a vocabulary which is freely chosen by the user. The system then assembles frames from a motion database so that the final motion performs the specified actions at specified times. The motion can also be forced to pass through particular configurations at particular times, and to go to a particular position and orientation. Annotations can be painted positively (for example, must run), negatively (for example, may not run backwards) or as a don't-care. The system uses a novel search method, based around dynamic programming at several scales, to obtain a solution efficiently so that authoring is interactive. Our results demonstrate that the method can generate smooth, natural-looking motion. The annotation vocabulary can be chosen to fit the application, and allows specification of composite motions (run and jump simultaneously, for example). The process requires a collection of motion data that has been annotated with the chosen vocabulary. This paper also describes an effective tool, based around repeated use of support vector machines, that allows a user to annotate a large collection of motions quickly and easily so that they may be used with the synthesis algorithm.
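The core idea of assembling database frames to match painted annotations can be sketched as a Viterbi-style dynamic program: each output time step pays a cost for violating its must/must-not annotations, plus a transition cost for cutting between database frames. This is a minimal single-scale sketch under assumed data layouts (binary annotation matrices, a precomputed transition-cost matrix), not the paper's multi-scale algorithm.

```python
import numpy as np

def synthesize(frames, transitions, targets, w_trans=1.0):
    """Viterbi-style dynamic programming over an annotated motion database.

    frames:      (F, A) 0/1 matrix -- annotation vector of each database frame.
    transitions: (F, F) cost of cutting from frame i to frame j
                 (small along natural successors, larger across cuts).
    targets:     (T, A) matrix with entries 1 (must), -1 (must not),
                 0 (don't care) for each output time step.
    Returns the chosen database frame index for each output time step.
    """
    F, T = frames.shape[0], targets.shape[0]

    def label_cost(t):
        # Penalize missing a "must" annotation and hitting a "must not".
        must = np.clip(targets[t], 0, 1)
        must_not = np.clip(-targets[t], 0, 1)
        return (must * (1 - frames)).sum(axis=1) + (must_not * frames).sum(axis=1)

    cost = label_cost(0)                 # best cost ending in frame f at time 0
    back = np.zeros((T, F), dtype=int)   # backpointers for path recovery
    for t in range(1, T):
        step = cost[:, None] + w_trans * transitions   # (from, to)
        back[t] = step.argmin(axis=0)
        cost = step.min(axis=0) + label_cost(t)

    path = [int(cost.argmin())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

The paper's several-scales variant would run this kind of search on coarse blocks of frames first, then refine; the sketch above shows only the frame-level recurrence.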
2007
Many computer applications depend on the visual realism of virtual human character motion. Unfortunately, it is difficult to describe what makes a motion look real yet easy to recognize when a motion looks fake. These characteristics make synthesizing motions for a virtual human character a difficult challenge. A potentially useful approach is to synthesize high-quality, nuanced motions from a database of example motions. Unfortunately, none of the existing example-based synthesis techniques has been able to supply the quality, flexibility, efficiency and control needed for interactive applications, in which a user directs a virtual human character through an environment. At runtime, interactive applications, such as training simulations and video games, must be able to synthesize motions that not only look realistic but also quickly and accurately respond to a user's request. This dissertation shows how motion parameter decoupling and highly structured control mechanisms can be used to synthesize high-quality motions for interactive applications using an example-based approach. The main technical contributions include three example-based motion synthesis algorithms that directly address existing interactive motion synthesis problems: a method for splicing upper-body actions with lower-body locomotion, a method for controlling character gaze using a biologically and psychologically inspired model, and a method for using a new data structure called a parametric motion graph to synthesize accurate, quality motion streams in real time.
2000
In this paper we present a system that can synthesise novel motion sequences from a database of motion capture examples. This is achieved through learning a statistical model from the captured data which enables realistic synthesis of new movements by sampling the original captured sequences. New movements are synthesised by specifying the start and end keyframes. The statistical model identifies segments of the original motion capture data to generate novel motion sequences between the keyframes. The advantage of this approach is that it combines the flexibility of keyframe animation with the realism of motion capture data.
21st International Conference on Data Engineering (ICDE'05), 2005
Enacting and capturing real motion for all potential scenarios is prohibitively expensive; hence, there is great demand for synthetically generated realistic human motion. However, generating long sequences of smooth human motion synthetically remains a central conceptual challenge in character animation.
Computer Graphics Forum, 2004
While powerful, the simplest version of this approach is not particularly well suited to modeling the specific style of an individual whose motion had not yet been recorded when building the database: it would take an expert to adjust the PCA weights to obtain a motion style that is indistinguishable from his. Consequently, when realism is required, the current practice is to perform a full motion capture session each time a new person must be considered. In this paper, we extend the PCA approach so that this requirement can be drastically reduced: for whole classes of cyclic and noncyclic motions such as walking, running or jumping, it is enough to observe the newcomer moving only once at a particular speed or jumping a particular distance using either an optical motion capture system or a simple pair of synchronized video cameras. This one observation is used to compute a set of principal component weights that best approximates the motion and to extrapolate in real-time realistic animations of the same person walking or running at different speeds, and jumping a different distance.
The Visual Computer, 2015
Actions performed by a virtual character can be controlled with verbal commands such as 'walk five steps forward'. Similar control of the motion style, meaning how the actions are performed, is complicated by the ambiguity of describing individual motions with phrases such as 'aggressive walking'. In this paper, we present a method for controlling motion style with relative commands such as 'do the same, but more sadly'. Based on acted example motions, comparative annotations, and a set of calculated motion features, relative styles can be defined as vectors in the feature space. We present a new method for creating these style vectors by finding out which features are essential for a style to be perceived and eliminating those that show only incidental correlations with the style. We show with a user study that our feature selection procedure is more accurate than earlier methods for creating style vectors, and that the style definitions generalize across different actors and annotators. We also present a tool enabling interactive control of parametric motion synthesis by verbal commands. As the control method is independent from the generation of motion, it can be applied to virtually any parametric synthesis method.
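The style-vector idea above can be sketched concretely: a relative style like "more sadly" is a direction in motion-feature space, estimated from annotated neutral/styled example pairs and restricted to the features judged essential for the style. In this sketch the essential-feature mask is given directly; the paper derives it with a feature-selection procedure.

```python
import numpy as np

def style_vector(neutral_feats, styled_feats, mask):
    """Relative style as a direction in motion-feature space.

    neutral_feats, styled_feats: (N, D) features of annotated example pairs.
    mask: (D,) boolean -- features essential for the style (assumed given
          here; the paper selects them automatically).
    """
    delta = (styled_feats - neutral_feats).mean(axis=0)
    return delta * mask            # zero out incidental correlations

def apply_style(features, vec, amount=1.0):
    """'Do the same, but more X': shift along the style direction."""
    return features + amount * vec
```

Because the style vector lives in the feature space rather than in any synthesizer's internal parameters, the same shifted feature target can drive any parametric motion synthesis method, which is the independence the abstract claims.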
This paper introduces an interactive editing system for a human locomotion which considers quantitative and qualitative aspects of motion and suggests two editing processes to generate a convincing output animation. Based on a minimal set of sample locomotion clips containing only straight motion paths, an animator controls a character's motion path and stylistic posture changes during the editing processes. During quantitative editing, our system generates a locomotion sequence following a curved motion path specified by an animator. Key-times of foot strikes are detected automatically for each sample in order to specify motion cycles which are appended and interpolated for a continuous and smooth output sequence. Additionally, the system provides a timing interface in order to specify temporal points of transition from one sample to another. In addition, qualitative editing is supported by incorporating a procedural system which provides a set of controllable parameters to facilitate posture editing. Initiated with a sample clip, this process produces motion that differs stylistically from any in the sample set, yet preserves the high quality of data-driven motion. A post-processing step enforces foot constraints, and modifies the character's posture to account for important physical forces acting on the body while navigating a curved path. As shown in the experimental results, our system provides intuitive interfaces for editing motion capture clips and generates realistic locomotion at interactive speed.
The Visual Computer, 2013
Quick creation of 3D character animations is an important task in game design, simulations, forensic animation, education, training, and more. We present a framework for creating 3D animated characters using a simple sketching interface coupled with a large, unannotated motion database that is used to find the appropriate motion sequences corresponding to the input sketches. Contrary to the previous work that deals with static sketches, our input sketches can be enhanced by motion and rotation curves that improve matching in the context of the existing animation sequences. Our framework uses animated sequences as the basic building blocks of the final animated scenes, and allows for various operations with them such as trimming, resampling, or connecting by use of blending and interpolation. A database of significant and unique poses, together with a two-pass search running on the GPU, allows for interactive matching even for large amounts of poses in a template database. The system provides intuitive interfaces, immediate feedback, and places very small demands on the user. A user study showed that the system can be used by novice users with no animation experience or artistic talent, as well as by users with an animation background. Both groups were able to create animated scenes consisting of complex and varied actions in less than 20 minutes.
2008 Digital Image Computing: Techniques and Applications, 2008
The motion capture technique is gathering more and more attention because of its powerful potential for providing lifelike motions in computer graphics (CG) animation via sample-based motion creation techniques. Since motion data is multi-dimensional and spatio-temporal data that is difficult to edit as desired, an effective scheme for reusing captured motion sets to create new motion is advantageous. Reusing a motion data set requires effective browsing and extraction techniques that enable the user to look up and capture the relations among the motion contents in the motion data set. We propose a new framework for a motion editing tool by focusing on the connectivity between motions and utilizing it as a filter to extract the desired motion contents from a motion database. The proposed system uses a tree structure for expressing possible connective motion paths in the motion database. The motion connective tree can be a useful user interface to browse and select a motion scenario by exploring the existing motion data sets in the database. Our prototype system demonstrates an easy-to-understand interface to explore and quickly edit a motion data set by selecting icons of the motion tree nodes.
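The connective-tree structure above can be sketched as a simple recursive expansion: starting from a root clip, each tree node branches to the clips that can smoothly follow it. Here the connectivity relation is assumed to be precomputed (for example, by thresholding a pose distance between one clip's last frame and another's first), and the expansion is depth-limited so cyclic connectivity does not recurse forever.

```python
def build_motion_tree(root, connectable, depth=3):
    """Tree of possible connective motion paths from a root clip.

    connectable: dict mapping a clip name to the clips whose start pose
                 is close enough to its end pose for a smooth transition
                 (precomputed pose-distance thresholding -- an assumption
                 of this sketch).
    depth:       maximum path length to expand, since connectivity
                 graphs are typically cyclic.
    """
    return {
        "clip": root,
        "children": [] if depth == 0 else
                    [build_motion_tree(c, connectable, depth - 1)
                     for c in connectable.get(root, [])],
    }
```

A browsing interface would then render each node as an icon and let the user pick a root-to-leaf path as the motion scenario.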
ACM Transactions on Graphics, 2009
This article presents a new motion model, the deformable motion model, for human motion modeling and synthesis. Our key idea is to apply statistical analysis techniques to a set of precaptured human motion data and construct a low-dimensional deformable motion model of the form x = M(α, γ), where the deformable parameters α and γ control the motion's geometric and timing variations, respectively. To generate a desired animation, we continuously adjust the deformable parameters' values to match various forms of user-specified constraints. Mathematically, we formulate the constraint-based motion synthesis problem in a Maximum A Posteriori (MAP) framework by estimating the most likely deformable parameters from the user's input. We demonstrate the power and flexibility of our approach by exploring two interactive and easy-to-use interfaces for human motion generation: direct manipulation interfaces and sketching interfaces.
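For a linearized geometric model, the MAP estimate described above has a closed form: with a Gaussian prior on the parameters and Gaussian noise on the user constraints, maximizing the posterior is ridge-regularized least squares. The sketch below handles only the geometric parameters α of x = M(α, γ) and omits the timing parameters γ, so it is a simplification of the paper's formulation.

```python
import numpy as np

def map_parameters(mean, basis, sel, targets, sigma_noise=0.1, prior_var=None):
    """MAP estimate of geometric deformable parameters.

    Model (linearized): x = mean + basis^T a,  prior a ~ N(0, diag(prior_var)),
    constraints: x[sel] ~ N(targets, sigma_noise^2).

    mean:    (D,) mean motion vector.
    basis:   (K, D) deformation basis (e.g., principal components).
    sel:     indices of the constrained entries of x.
    targets: user-specified values for those entries.
    """
    K = basis.shape[0]
    if prior_var is None:
        prior_var = np.ones(K)
    B = basis[:, sel].T                    # (C, K): constrained rows of the model
    r = targets - mean[sel]
    # Negative log posterior: ||B a - r||^2 / sigma^2 + a^T diag(1/prior_var) a
    A = B.T @ B / sigma_noise**2 + np.diag(1.0 / prior_var)
    return np.linalg.solve(A, B.T @ r / sigma_noise**2)
```

With tight noise and a broad prior the estimate reproduces the constraints exactly; tightening the prior pulls the synthesized motion back toward the database mean, which is what makes sparse user input sufficient.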
2008
In virtual human (VH) applications, and in particular, games, motions with different functions are to be synthesized, such as communicative and manipulative hand gestures, locomotion, expression of emotions or identity of the character. In bodily behavior, the primary motions define the function, while the more subtle secondary motions contribute to the realism and variability. From a technological point of view, there are different methods at our disposal for motion synthesis: motion capture and retargeting, procedural kinematic animation, force-driven dynamical simulation, or the application of Perlin noise. Which method should be used for generating primary and secondary motions, and how can the information needed to define them be gathered? In this paper we elaborate on informed usage, in its two meanings. First we discuss, based on our own ongoing work, how motion capture data can be used to identify joints involved in primary and secondary motions, and to provide a basis for the specification of essential parameters for motion synthesis methods used to synthesize primary and secondary motion. Then we explore the possibility of using different methods for primary and secondary motion in parallel in such a way that one method informs the other. We introduce our mixed usage of kinematic and dynamic control of different body parts to animate a character in real time. Finally, we discuss the motion Turing test as a methodology for evaluating mixed motion paradigms.