2013, The Visual Computer
We explore an approach to full-body motion editing built on linear motion models, prioritized constraint-based optimization, and latent-space interpolation. By exploiting the mathematical connections between linear motion models and prioritized inverse kinematics (PIK), we formulate and solve motion editing as an optimization problem whose differential structure is rich enough to efficiently satisfy user-specified constraints within the latent motion space. Editing within latent motion spaces has the advantage of handling pose transitions, and consequently motion flow, by construction from single key-frame edits. To handle motion adjustments from multiple key-frame and trajectory constraints, we developed a latent-space interpolation technique based on spline functions. This approach handles per-frame adjustments while generating smooth animations, and avoids the computational expense of joint-space interpolation. We demonstrate the usefulness of this approach by editing and generating full-body reaching and walking-jump animations in challenging environment scenarios.
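The latent-space spline interpolation described above can be sketched roughly as follows: encode edited key-frames into a PCA latent space, fit a spline through the latent coordinates, and decode per frame. This is a minimal illustration with synthetic data (the motion matrix, latent dimension, and key-frame indices are all assumptions), not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

# Toy "motion database": 200 frames of a 30-DoF pose, lying near a 4-D subspace.
basis = rng.standard_normal((4, 30))
frames = rng.standard_normal((200, 4)) @ basis + 0.01 * rng.standard_normal((200, 30))

# PCA: mean-centre and keep the leading components as the latent motion space.
mean = frames.mean(axis=0)
_, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
components = vt[:4]                           # latent basis (4 x 30)

encode = lambda pose: (pose - mean) @ components.T
decode = lambda z: z @ components + mean

# Edited key-frames at selected times, projected into the latent space.
key_times = np.array([0.0, 0.33, 0.66, 1.0])
key_latents = np.stack([encode(frames[i]) for i in (0, 70, 140, 199)])

# Cubic-spline interpolation in the latent space, then decode every frame.
spline = CubicSpline(key_times, key_latents, axis=0)
t = np.linspace(0.0, 1.0, 60)
animation = decode(spline(t))                 # (60, 30) smooth pose sequence
```

Interpolating the 4-D latent coordinates rather than the 30 joint values is what makes the per-frame adjustment cheap while keeping every decoded pose inside the learned motion space.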
2009
The growth of motion capture systems has contributed to a proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment, and film to sports and medicine. However, captured motions usually need to be adapted to specific requirements. Consequently, modifying and reusing these motions in new situations, for example retargeting them to a new environment, has become a growing area of research known as motion editing. In the last few years, human motion editing has become one of the most active research areas in the field of computer animation. In this thesis, we introduce and discuss a novel method for interactive human motion editing. Our main contribution is the development of a Low-dimensional Prioritized Inverse Kinematics (LPIK) technique that handles user constraints within a low-dimensional motion space, also known as the latent space. Its major feature is that it operates in the latent space instead of the joint space. By construction, constraining a single frame with LPIK is sufficient to obtain a natural movement that enforces the intrinsic motion flow. LPIK has the advantage of reducing the size of the Jacobian matrix, because the dimension of the motion latent space is small for a coordinated movement compared to the joint space. Moreover, the method is well suited to characters with a large number of degrees of freedom (DoFs), which is one of the limitations of IK methods that optimize in the joint space. In addition, our method provides faster deformations and more natural-looking motion results than the goal-directed constraint-based methods found in the literature. Essentially, our technique is based on the mathematical connections between linear motion models such as Principal Component Analysis (PCA) and Prioritized Inverse Kinematics (PIK).
We use PCA as a first preprocessing stage, both to reduce the dimensionality of the database, making it tractable and encapsulating the underlying motion pattern, and to bound IK solutions within the space of natural-looking motions. We use PIK to let the user manipulate constraints with different priorities while interactively editing an animation; the priority strategy ensures that a higher-priority task is not affected by tasks of lower priority. Furthermore, two strategies to impose motion continuity based on PCA are introduced. We present a number of experiments that evaluate and validate, both qualitatively and quantitatively, the benefits of our method. Finally, we assess the quality of the edited animations against a goal-directed constraint-based technique to verify the robustness of our method in terms of performance, simplicity, and realism.
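The idea of using PCA to bound IK solutions within the space of natural-looking motions can be sketched as follows: project a candidate pose onto the principal subspace and clamp its latent coordinates to the range observed in the database. This is a simplified illustration with synthetic data (the database, latent dimension, and clamping rule are assumptions, not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock motion database: 500 poses with 20 DoFs concentrated near a 3-D subspace.
basis = rng.standard_normal((3, 20))
poses = rng.standard_normal((500, 3)) @ basis + 0.02 * rng.standard_normal((500, 20))

# Stage 1: PCA reduces dimensionality and captures the motion pattern.
mean = poses.mean(axis=0)
_, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
pcs = vt[:3]                                  # principal directions (3 x 20)
latents = (poses - mean) @ pcs.T
lo, hi = latents.min(axis=0), latents.max(axis=0)

def bound_to_natural(pose):
    """Project a candidate IK pose onto the PCA subspace and clamp its
    latent coordinates to the range observed in the database."""
    z = np.clip((pose - mean) @ pcs.T, lo, hi)
    return z @ pcs + mean

# An IK output that drifted far from the database gets pulled back.
raw_ik_pose = poses[0] + 5.0 * rng.standard_normal(20)
natural_pose = bound_to_natural(raw_ik_pose)
```

Because the principal directions are orthonormal, the projection discards exactly the pose components that never occur in the data, which is what keeps the bounded solution natural-looking.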
2003
We propose an intuitive motion editing technique that allows the end user to transform an original motion by applying position constraints to freely selected locations on the character's body. The major innovation is the possibility of assigning a priority level to each constraint. The resulting scale of user-defined priority levels makes it possible to handle multiple asynchronously overlapping constraints. As a consequence, the end user can enforce a larger range of natural behaviors in which conflicting constraints compete to control a common set of joints. By default, the joint angles of the original motion are preserved as the lowest-priority constraint. However, if a Cartesian constraint from the original motion is essential, it is straightforward to define a high-priority constraint that retains it before enforcing other, lower-priority constraints. Additional features are proposed to make the motion editing process more productive, such as defining the constraints relative to a mobile ...
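The priority mechanism these abstracts rely on is commonly realized with null-space projection: the low-priority task is solved only within the freedom left over by the high-priority task. A minimal linearized sketch for one time step (the Jacobians and task errors are random placeholders, not the paper's solver):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                   # joint-space dimension

# Linearised task Jacobians and desired Cartesian corrections (assumed, for
# illustration): task 1 has strict priority over task 2.
j1, e1 = rng.standard_normal((3, n)), rng.standard_normal(3)
j2, e2 = rng.standard_normal((3, n)), rng.standard_normal(3)

j1_pinv = np.linalg.pinv(j1)
dq1 = j1_pinv @ e1                      # satisfy the high-priority task
n1 = np.eye(n) - j1_pinv @ j1           # null-space projector of task 1

# The low-priority correction is restricted to the null space of task 1,
# so it cannot disturb the high-priority constraint.
dq = dq1 + np.linalg.pinv(j2 @ n1) @ (e2 - j2 @ dq1)
```

With enough joints both tasks are met exactly; when they conflict, the projector guarantees task 1 still holds and task 2 is satisfied only as far as the remaining freedom allows.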
Graphical Models, 2001
Tools for assisting with editing human motion have become one of the most active research areas in the field of computer animation. Not surprisingly, the area has demonstrated some stunning successes in both research and practice. This paper explores the range of constraint-based techniques used to alter motions while preserving specific spatial features. We examine a variety of methods and define a taxonomy categorized by the mechanism employed to enforce temporal constraints. We pay particular attention to a less explored category of techniques that we term per-frame inverse kinematics plus filtering, and we show how these methods may provide an easier-to-implement alternative while retaining the benefits of other approaches.
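The "per-frame inverse kinematics plus filtering" category can be sketched in two stages: solve IK independently for each frame's constraint, then low-pass filter the joint trajectories to restore temporal coherence. A toy version on a planar two-link arm (the arm, trajectory, damping, and filter width are all assumptions for illustration):

```python
import numpy as np

def fk(q):
    """End effector of a planar two-link arm with unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

def solve_ik(q, target, iters=50, damping=1e-2):
    """Damped least-squares IK for a single frame."""
    for _ in range(iters):
        j = jacobian(q)
        err = target - fk(q)
        q = q + np.linalg.solve(j.T @ j + damping * np.eye(2), j.T @ err)
    return q

# Stage 1: each frame is solved independently against its own target...
t = np.linspace(0, np.pi, 40)
targets = np.stack([1.2 + 0.3 * np.cos(t), 0.8 + 0.3 * np.sin(t)], axis=1)
q, raw = np.array([0.5, 0.5]), []
for tgt in targets:
    q = solve_ik(q, tgt)
    raw.append(q)
raw = np.array(raw)

# Stage 2: ...then a low-pass filter (moving average) smooths the joint curves.
kernel = np.ones(5) / 5.0
smooth = np.stack([np.convolve(raw[:, d], kernel, mode="same")
                   for d in range(2)], axis=1)
```

The appeal noted in the paper is implementation simplicity: the per-frame solver needs no temporal coupling, and smoothness is recovered afterwards by filtering, at the cost of slightly relaxing each frame's constraint.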
Computer Animation and Virtual Worlds, 2006
Human motion is difficult to create and manipulate because of the high dimensionality and spatiotemporal nature of human motion data. Recently, the use of large collections of captured motion data has increased realism in character animation. In order to make the synthesis and analysis of motion data tractable, we present a low-dimensional motion space in which high-dimensional human motion can be effectively visualized, synthesized, edited, parameterized, and interpolated in both the spatial and temporal domains. Our system allows users to create and edit the motion of animated characters in several ways: the user can sketch and edit a curve in the low-dimensional motion space, directly manipulate the character's pose in three-dimensional object space, or specify key poses to create in-between motions. Copyright © 2006 John Wiley & Sons, Ltd.
Computer Graphics Forum, 1992
A new approach is presented for the animation of articulated figures. We propose a system of articulated motion design that offers a full combination of both direct and inverse kinematic control of the joint parameters. Such an approach allows an animator to interactively specify goal-directed changes to existing sampled joint motions, resulting in a more general and expressive class of possible joint motions. The fundamental idea is to consider any desired joint-space motion as a reference model inserted into the secondary task of an inverse kinematic control scheme. This approach profits from the use of half-space Cartesian main tasks in conjunction with a parallel control of the articulated figure called the coach-trainee metaphor. In addition, a transition function is introduced so as to guarantee the continuity of the control. The resulting combined kinematic control scheme leads to a new methodology of joint motion editing, which is demonstrated through the improvement of a functional model of human walking.
Computer Animation and Virtual Worlds, 2007
This paper introduces an interactive editing system for human locomotion that considers quantitative and qualitative aspects of motion and suggests two editing processes to generate a convincing output animation. Based on a minimal set of sample locomotion clips containing only straight motion paths, an animator controls a character's motion path and stylistic posture changes during the editing processes. During quantitative editing, our system generates a locomotion sequence following a curved motion path specified by the animator. Key-times of foot strikes are detected automatically for each sample in order to identify motion cycles, which are appended and interpolated to form a continuous and smooth output sequence. Additionally, the system provides a timing interface for specifying temporal points of transition from one sample to another. Qualitative editing is supported by incorporating a procedural system that provides a set of controllable parameters to facilitate posture editing. Initiated with a sample clip, this process produces motion that differs stylistically from any in the sample set, yet preserves the high quality of data-driven motion. A post-processing step enforces foot constraints and modifies the character's posture to account for important physical forces acting on the body while navigating a curved path. As shown in the experimental results, our system provides intuitive interfaces for editing motion capture clips and generates realistic locomotion at interactive speed.
Computer Animation and Virtual Worlds, 2014
We present in this paper a hybrid method for motion editing that combines motion blending and Jacobian-based inverse kinematics (IK). When the original constraints are changed, a blending-based IK solver is first employed to find an adequate joint configuration coarsely. Using linear motion blending, this search corresponds to a gradient-based minimization in the weight space. The found solution is then improved by a Jacobian-based IK solver, which further minimizes the distance between the end effectors and the constraints. To accelerate the search in the weight space, we introduce a weight map, which pre-computes good starting positions for the gradient-based minimization. The advantages of our approach are threefold. First, more realistic motions can be generated by utilizing motion blending techniques, compared with pure Jacobian-based IK; the blended results also increase the rate of convergence of the Jacobian-based IK solver. Second, the Jacobian-based IK solver modifies poses in the pose configuration space, so the computational cost does not scale with the number of examples. Third, it is possible to extrapolate the given example motions with a Jacobian-based IK solver, which is generally difficult with pure blending-based techniques.
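The two-stage scheme can be sketched on a planar two-link arm: a gradient search over blend weights of example poses provides a coarse, natural starting pose, which a damped Jacobian IK step then refines to the exact target. The example poses, target, step sizes, and damping below are assumptions for illustration, not the paper's solver or weight map:

```python
import numpy as np

def fk(q):
    """End effector of a planar two-link arm with unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

# Assumed example poses (a tiny stand-in for a motion database) and a target.
examples = np.array([[0.2, 0.4], [0.8, 1.0], [1.4, 0.3]])
target = np.array([1.0, 1.2])

def cost(w):
    return np.sum((fk(w @ examples) - target) ** 2)

# Stage 1: coarse search in blend-weight space (numerical gradient descent,
# keeping the weights convex so the blend stays inside the example hull).
w = np.ones(3) / 3.0
for _ in range(200):
    grad = np.array([(cost(w + 1e-5 * np.eye(3)[i]) - cost(w)) / 1e-5
                     for i in range(3)])
    w = np.clip(w - 0.1 * grad, 1e-3, None)
    w /= w.sum()
q = w @ examples                       # coarse, natural-looking starting pose

# Stage 2: damped Jacobian-based IK refines the blended pose to the target,
# and can leave the convex hull of the examples (extrapolation).
for _ in range(100):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    j = np.array([[-s1 - s12, -s12], [c1 + c12, c12]])
    q = q + np.linalg.solve(j.T @ j + 1e-3 * np.eye(2), j.T @ (target - fk(q)))
```

The division of labor matches the abstract: blending supplies realism and a good starting point, while the Jacobian stage delivers accuracy with a cost independent of the number of examples.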
2006
Convincingly animating virtual humans has attracted great interest in many fields in recent years. In computer games, for example, virtual humans are often the main characters, and failing to animate them realistically can undermine all the effort put into giving the player a feeling of immersion. At the same time, computer-generated movies have become very popular and have increased the demand for realistic animation. Indeed, virtual humans are now the new stars of movies such as Final Fantasy or Shrek, and are even used for special effects in movies such as The Matrix. In this context, virtual human animations not only need to be realistic, as in computer games, but must be as expressive as real actors. While creating animations from scratch is still widespread, it demands artistic skills and hours, if not days, to produce a few seconds of animation. For these reasons, there has been a growing interest in motion capture: instead of creating a motion, the idea is to reproduce the movements of a live performer. However, motion capture is not perfect and still needs improvement. The motion capture process involves complex techniques and equipment, which often results in noisy animations that must be edited. Moreover, it is hard to foresee the final motion exactly; for example, it often happens that the director of a movie decides to change the script, and the animators then have to change part or all of the animation. The aim of this thesis is therefore to provide animators with interactive tools that help them easily and rapidly modify preexisting animations. We first present our inverse kinematics solver, used to enforce kinematic constraints at each instant of an animation. We then propose a motion deformation framework that lets the user specify prioritized constraints and edit an initial animation so that it may be used in a new context (characters, environment, etc.).
Finally, we introduce a semi-automatic algorithm to extract important motion features from motion capture animations, which may serve as a first guess for animators when specifying the important characteristics an initial animation should respect.
We introduce staggered poses, a representation of character motion that explicitly encodes coordinated timing among movement features in different parts of a character's body. This representation allows us to provide sparse, pose-based controls for editing motion that preserve existing movement detail, and we describe how to edit coordinated timing among extrema in these controls for stylistic editing. The staggered pose representation supports the editing of new motion by generalizing keyframe-based workflows to retain high-level control after local timing and transition splines have been created. For densely sampled motion such as motion capture data, we present an algorithm that creates a staggered pose representation by locating coordinated movement features and modeling motion detail using splines and displacement maps. These techniques, taken together, enable feature-based keyframe editing of dense motion data.