2009, ACM Transactions on Graphics
The ability to interactively edit human motion data is essential for character animation. We present a novel motion editing technique that allows the user to manipulate multiple synchronized character motions interactively. Our Laplacian motion editing method formulates the interaction among multiple characters as a collection of linear constraints and enforces those constraints while the user directly manipulates the characters' motion in both the spatial and temporal domains. Various types of manipulation handles are provided to specify absolute/relative spatial location, direction, time, duration, and synchronization of multiple characters. The capability of non-sequential discrete editing is incorporated into our motion editing interfaces, so continuous and discrete editing are performed simultaneously and seamlessly. We demonstrate that synchronized multi-character motions can be synthesized and manipulated at interactive rates under spatiotemporal constraints.
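The core of Laplacian editing is a single linear least-squares solve: differential (Laplacian) coordinates of the original trajectory are preserved while positional handles are stacked on as extra linear rows. The sketch below applies the idea to a 2-D root path; the function name and constraint encoding are illustrative, not the paper's actual formulation.

```python
import numpy as np

def laplacian_edit(path, pinned):
    """Deform a 2-D trajectory so pinned frames reach target positions
    while the Laplacian (local shape) of the path is preserved.
    path   : (n, 2) array of positions, one row per frame
    pinned : dict {frame_index: (x, y) target}
    """
    path = np.asarray(path, dtype=float)
    n = len(path)
    # Discrete Laplacian: L @ path gives each point minus the average
    # of its two neighbours (the differential coordinates).
    L = np.zeros((n - 2, n))
    for i in range(1, n - 1):
        L[i - 1, [i - 1, i, i + 1]] = [-0.5, 1.0, -0.5]
    delta = L @ path

    # Positional handle constraints stacked as extra linear rows.
    C = np.zeros((len(pinned), n))
    d = np.zeros((len(pinned), 2))
    for row, (idx, target) in enumerate(pinned.items()):
        C[row, idx] = 1.0
        d[row] = target

    A = np.vstack([L, C])
    b = np.vstack([delta, d])
    # One linear least-squares solve; interactive rates are plausible
    # because A can be prefactored while only b changes during dragging.
    new_path, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_path
```

Dragging a handle only changes the right-hand side, which is why this family of methods stays interactive even for long clips.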
We introduce staggered poses, a representation of character motion that explicitly encodes coordinated timing among movement features in different parts of a character's body. This representation allows us to provide sparse, pose-based controls for editing motion that preserve existing movement detail, and we describe how to edit coordinated timing among extrema in these controls for stylistic editing. The staggered pose representation supports the editing of new motion by generalizing keyframe-based workflows to retain high-level control after local timing and transition splines have been created. For densely sampled motion such as motion capture data, we present an algorithm that creates a staggered pose representation by locating coordinated movement features and modeling motion detail using splines and displacement maps. These techniques, taken together, enable feature-based keyframe editing of dense motion data.
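The displacement-map component mentioned above can be illustrated in a few lines: a dense channel is offset by a smooth curve interpolated through sparse key offsets, so sparse edits land exactly while high-frequency detail survives. This is a toy stand-in, not the paper's algorithm.

```python
import numpy as np

def displacement_edit(dense, key_frames, key_targets):
    """Offset a densely sampled 1-D motion channel with a smooth
    displacement curve so it passes through sparse edited key values,
    while the high-frequency detail of the original signal survives."""
    dense = np.asarray(dense, dtype=float)
    t = np.arange(len(dense))
    # Offset needed at each key so the curve hits the edited value.
    offsets = [target - dense[k] for k, target in zip(key_frames, key_targets)]
    # Interpolate offsets over all frames (linear here; a C1 spline
    # would match the smoothness the staggered-pose paper describes).
    displacement = np.interp(t, key_frames, offsets)
    return dense + displacement
```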
Computer Animation and Virtual Worlds, 2006
Human motion is difficult to create and manipulate because of the high dimensionality and spatiotemporal nature of human motion data. Recently, the use of large collections of captured motion data has increased realism in character animation. To make the synthesis and analysis of motion data tractable, we present a low-dimensional motion space in which high-dimensional human motion can be effectively visualized, synthesized, edited, parameterized, and interpolated in both the spatial and temporal domains. Our system allows users to create and edit the motion of animated characters in several ways: the user can sketch and edit a curve in the low-dimensional motion space, directly manipulate the character's pose in three-dimensional object space, or specify key poses to create in-between motions. Copyright © 2006 John Wiley & Sons, Ltd.
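A minimal sketch of such a low-dimensional motion space, assuming plain PCA as the dimensionality reduction (the paper's exact construction may differ): each frame's pose vector is encoded as k latent coordinates that can be sketched or edited as a curve, then decoded back to full poses.

```python
import numpy as np

def build_motion_space(motions, k=2):
    """PCA sketch of a low-dimensional motion space.
    motions: (n_frames, n_dofs) array of pose vectors.
    Returns the latent curve and a decoder back to full poses."""
    mean = motions.mean(axis=0)
    X = motions - mean
    # Right singular vectors span the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                        # (k, n_dofs)
    latent = X @ basis.T                  # (n_frames, k) editable curve
    decode = lambda z: z @ basis + mean   # latent point -> full pose
    return latent, decode
```

Editing the 2-D latent curve (e.g. by sketching) and decoding it yields an edited full-dimensional motion, which is the workflow the abstract describes.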
2003
We propose an intuitive motion editing technique that allows the end-user to transform an original motion by applying position constraints at freely selected locations on the character's body. The major innovation is the ability to assign a priority level to each constraint. The resulting scale of user-defined priority levels makes it possible to handle multiple asynchronously overlapping constraints. As a consequence, the end-user can enforce a larger range of natural behaviors where conflicting constraints compete to control a common set of joints. By default, the joint angles of the original motion are preserved as the lowest-priority constraint. However, if a Cartesian constraint from the original motion is essential, it is straightforward to define a high-priority constraint that preserves it while other, lower-priority constraints are enforced. Additional features are proposed to make the motion editing process more productive, such as defining the constraints relative to a mobile ...
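The priority mechanism can be sketched as nullspace projection, a standard way to let a lower-priority task act only where it cannot disturb a higher-priority one. This is an illustrative two-level step, not the authors' exact solver.

```python
import numpy as np

def two_priority_step(J1, e1, J2, e2):
    """One differential-IK step with two priority levels: the task
    (J1 dq = e1) is solved first; the lower-priority task (J2 dq = e2)
    is then solved only inside the nullspace of the first."""
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ e1
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # nullspace projector of task 1
    # Lower-priority correction, restricted so it cannot affect task 1.
    dq2 = np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ dq1)
    return dq1 + N1 @ dq2
```

When the two tasks conflict outright (same Jacobian rows, different targets), the projected Jacobian `J2 @ N1` vanishes and the low-priority task is simply ignored, which is exactly the behavior the priority scale is meant to guarantee.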
Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2009
Animation data, from motion capture or other sources, is becoming increasingly available and provides high quality motion, but is difficult to customize for the needs of a particular application. This is especially true when stylistic changes are needed, for example, to reflect a character's changing mood, differentiate one character from another or meet the precise desires of an animator. We introduce a system for editing animation data that is particularly well suited to making stylistic changes. Our approach transforms the joint angle representation of animation data into a set of pose parameters more suitable for editing. These motion drives include position data for the wrists, ankles and center of mass, as well as the rotation of the pelvis. We also extract correlations between drives and body movement, specifically between wrist position and the torso angles. The system solves for the pose at each frame based on the current values of these drives and correlations using an efficient set of inverse kinematics and balance algorithms. An animator can interactively edit the motion by performing linear operations on the motion drives or extracted correlations, or by layering additional correlations. We demonstrate the effectiveness of the approach with various examples of gesture and locomotion.
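The drive-correlation layering can be illustrated with a simple linear fit between a drive and a body channel. This is a toy stand-in for the paper's wrist-to-torso correlations; the channel names are hypothetical.

```python
import numpy as np

def fit_correlation(drive, body):
    """Least-squares linear model: body ~ a * drive + b."""
    a, b = np.polyfit(drive, body, 1)
    return a, b

def edit_with_correlation(drive, body, new_drive, corr):
    """Apply an edit to the drive and propagate the correlated change
    to the body channel, leaving the original detail intact."""
    a, _ = corr
    return np.asarray(body) + a * (np.asarray(new_drive) - np.asarray(drive))
```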
2009
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment, and films to sports and medicine. However, captured motions are normally created for specific needs. Consequently, modifying and reusing these motions in new situations (for example, retargeting them to a new environment) has become a growing area of research known as motion editing. In the last few years, human motion editing has become one of the most active research areas in the field of computer animation. In this thesis, we introduce and discuss a novel method for interactive human motion editing. Our main contribution is the development of a Low-dimensional Prioritized Inverse Kinematics (LPIK) technique that handles user constraints within a low-dimensional motion space, also known as the latent space. Its major feature is that it operates in the latent space instead of the joint space. By construction, it is sufficient to constrain a single frame with LPIK to obtain a natural movement that respects the intrinsic motion flow. LPIK has the advantage of reducing the size of the Jacobian matrix, as the dimension of the motion latent space is small for a coordinated movement compared to the joint space. Moreover, the method is well suited to characters with a large number of degrees of freedom (DoFs), one of the limitations of IK methods that optimize in the joint space. In addition, our method provides faster deformations and more natural-looking results than the goal-directed constraint-based methods found in the literature. Essentially, our technique is based on the mathematical connections between linear motion models such as Principal Component Analysis (PCA) and Prioritized Inverse Kinematics (PIK).
We use PCA as a first preprocessing stage, both to reduce the dimensionality of the database so that it becomes tractable and encapsulates the underlying motion pattern, and subsequently to bound IK solutions within the space of natural-looking motions. We use PIK to allow the user to manipulate constraints with different priorities while interactively editing an animation; the priority strategy ensures that a higher-priority task is not affected by tasks of lower priority. Furthermore, two strategies to impose motion continuity based on PCA are introduced. We show a number of experiments used to evaluate and validate (both qualitatively and quantitatively) the benefits of our method. Finally, we assess the quality of the edited animations against a goal-directed constraint-based technique to verify the robustness of our method with regard to performance, simplicity, and realism.
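The core LPIK idea, solving IK over latent coordinates rather than joint angles, can be sketched for a planar two-link arm with a one-dimensional latent space. The basis and mean here are made up for illustration, and the damped Gauss-Newton iteration is a generic solver, not the thesis's actual optimization.

```python
import numpy as np

def fk(q):
    """End effector of a planar 2-link arm with unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def latent_ik(target, basis, mean, z0=0.0, iters=200, step=0.5):
    """Solve for a 1-D latent coordinate z with pose q = mean + z * basis,
    so the solution stays on the (here: toy) learned motion subspace."""
    z = z0
    eps = 1e-5
    for _ in range(iters):
        q = mean + z * basis
        err = fk(q) - target
        # Derivative of fk along the latent direction (chain rule,
        # approximated by finite differences).
        dq = (fk(mean + (z + eps) * basis) - fk(q)) / eps
        z -= step * (err @ dq) / (dq @ dq + 1e-12)   # damped Gauss-Newton
    return mean + z * basis
```

Note the optimization variable is a single scalar rather than two joint angles, which is the Jacobian-size reduction the abstract refers to; for real characters with dozens of DoFs the saving is far larger.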
Graphical Models, 2001
Tools for assisting with the editing of human motion have become one of the most active research areas in the field of computer animation. Not surprisingly, the area has demonstrated some stunning successes in both research and practice. This paper explores the range of constraint-based techniques used to alter motions while preserving specific spatial features. We examine a variety of methods, defining a taxonomy categorized by the mechanism employed to enforce temporal constraints. We pay particular attention to a less explored category of techniques that we term per-frame inverse kinematics plus filtering, and we show how these methods may provide an easier-to-implement alternative while retaining the benefits of other approaches.
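The named category has two stages that can be sketched directly: solve constraints independently at every frame, then low-pass filter the joint curves to remove the temporal jitter that independent solves introduce. The `solve_ik` callable is a placeholder the caller supplies; the moving-average filter stands in for whatever smoothing kernel a real system would use.

```python
import numpy as np

def per_frame_ik_plus_filtering(targets, solve_ik, window=5):
    """Enforce constraints frame by frame, then low-pass filter each
    joint channel to restore temporal smoothness."""
    q = np.array([solve_ik(t) for t in targets])   # (n_frames, n_joints)
    kernel = np.ones(window) / window
    # Filter each joint channel; mode="same" keeps the frame count.
    return np.column_stack(
        [np.convolve(q[:, j], kernel, mode="same") for j in range(q.shape[1])]
    )
```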
Lecture Notes in Computer Science, 2009
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment, and films to sports and medicine. However, captured motions normally serve specific needs. In an effort to adapt and reuse captured human motions in new tasks and environments, and to ease the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking results than the goal-directed constraint-based methods found in the literature.
2006
Convincingly animating virtual humans has attracted great interest in many fields in recent years. In computer games, for example, virtual humans are often the main characters. Failing to animate them realistically may wreck all previous efforts made to give the player a feeling of immersion. At the same time, computer-generated movies have become very popular and have thus increased the demand for animation realism. Indeed, virtual humans are now the new stars in movies like Final Fantasy or Shrek, and are even used for special effects in movies like The Matrix. In this context, virtual human animations not only need to be realistic, as in computer games, but also need to be as expressive as real actors. While creating animations from scratch is still widespread, it demands artistic skills and hours, if not days, to produce a few seconds of animation. For these reasons, there has been growing interest in motion capture: instead of creating a motion, the idea is to reproduce the movements of a live performer. However, motion capture is not perfect and still needs improvement. The motion capture process involves complex techniques and equipment, which often results in noisy animations that must be edited. Moreover, it is hard to foresee the final motion exactly; for example, it often happens that the director of a movie decides to change the script, and the animators then have to change part or all of the animation. The aim of this thesis is to provide animators with interactive tools that help them easily and rapidly modify preexisting animations. We first present our Inverse Kinematics solver, used to enforce kinematic constraints at each instant of an animation. Afterward, we propose a motion deformation framework offering the user a way to specify prioritized constraints and to edit an initial animation so that it may be used in a new context (characters, environment, etc.).
Finally, we introduce a semi-automatic algorithm to extract important motion features from motion capture animations, which may serve as a first guess for animators when specifying the important characteristics an initial animation should respect.
We describe a system for off-line production and real-time playback of motion for articulated human figures in 3D virtual environments. The key notions are (1) the logical storage of full-body motion in posture graphs, which provides a simple motion access method for playback, and (2) mapping the motions of higher-DOF figures to lower-DOF figures using slaving to provide human models at several levels of detail, both in geometry and articulation, for later playback. We present our system in the context of a simple problem: animating human figures in a distributed simulation, using DIS protocols for communicating the human state information. We also discuss several related techniques for real-time animation of articulated figures in visual simulation.
Computer Graphics Forum, 2021
3D animation production for storytelling requires essential manual processes such as virtual scene composition, character creation, and motion editing. Although professional artists can readily create 3D animations using software, it remains a complex and challenging task for novice users to handle and learn such tools for content creation. In this paper, we present Write-An-Animation, a 3D animation system that allows novice users to create, edit, preview, and render animations, all through text editing. Based on input texts describing virtual scenes and human motions in natural language, our system first parses the texts into semantic scene graphs, then retrieves 3D object models for virtual scene composition and motion clips for character animation. Character motion is synthesized by combining generative locomotion produced with a neural state machine and template action motions retrieved from the dataset. Moreover, to make the virtual scene layout compatible with c...