2023, HAL (Le Centre pour la Communication Scientifique Directe)
In animation, style can be considered as a distinctive layer over the content of a motion, allowing a character to achieve the same gesture in various ways. Editing an existing animation to modify its style while keeping the same content is an interesting task, which can facilitate the re-use of animation data and cut down on production time. Existing animation editing methods either work directly on the motion data, providing precise but tedious tools, or manipulate semantic style categories, taking control away from the user. As a middle ground, we propose a new character motion editing paradigm allowing higher-level manipulations without sacrificing controllability. We describe the concept of pose metrics, objective value functions which can be used to edit animation, leaving the style interpretation up to the user. We then propose an editing pipeline for animation data based on pose metrics.
Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2009
Animation data, from motion capture or other sources, is becoming increasingly available and provides high quality motion, but is difficult to customize for the needs of a particular application. This is especially true when stylistic changes are needed, for example, to reflect a character's changing mood, differentiate one character from another or meet the precise desires of an animator. We introduce a system for editing animation data that is particularly well suited to making stylistic changes. Our approach transforms the joint angle representation of animation data into a set of pose parameters more suitable for editing. These motion drives include position data for the wrists, ankles and center of mass, as well as the rotation of the pelvis. We also extract correlations between drives and body movement, specifically between wrist position and the torso angles. The system solves for the pose at each frame based on the current values of these drives and correlations using an efficient set of inverse kinematics and balance algorithms. An animator can interactively edit the motion by performing linear operations on the motion drives or extracted correlations, or by layering additional correlations. We demonstrate the effectiveness of the approach with various examples of gesture and locomotion.
We introduce staggered poses, a representation of character motion that explicitly encodes coordinated timing among movement features in different parts of a character's body. This representation allows us to provide sparse, pose-based controls for editing motion that preserve existing movement detail, and we describe how to edit coordinated timing among extrema in these controls for stylistic editing. The staggered pose representation supports the editing of new motion by generalizing keyframe-based workflows to retain high-level control after local timing and transition splines have been created. For densely-sampled motion such as motion capture data, we present an algorithm that creates a staggered pose representation by locating coordinated movement features and modeling motion detail using splines and displacement maps. These techniques, taken together, enable feature-based keyframe editing of dense motion data.
The Visual Computer, 2015
Actions performed by a virtual character can be controlled with verbal commands such as 'walk five steps forward'. Similar control of the motion style, meaning how the actions are performed, is complicated by the ambiguity of describing individual motions with phrases such as 'aggressive walking'. In this paper, we present a method for controlling motion style with relative commands such as 'do the same, but more sadly'. Based on acted example motions, comparative annotations, and a set of calculated motion features, relative styles can be defined as vectors in the feature space. We present a new method for creating these style vectors by finding out which features are essential for a style to be perceived and eliminating those that show only incidental correlations with the style. We show with a user study that our feature selection procedure is more accurate than earlier methods for creating style vectors, and that the style definitions generalize across different actors and annotators. We also present a tool enabling interactive control of parametric motion synthesis by verbal commands. As the control method is independent from the generation of motion, it can be applied to virtually any parametric synthesis method.
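The relative-style control described above can be sketched as simple feature-space arithmetic. The following is a minimal, hypothetical illustration of the idea (the feature matrices and numbers are invented for this sketch, not taken from the paper, which additionally selects essential features before building the vector):

```python
import numpy as np

# Hypothetical feature matrices: rows are example motions, columns are
# calculated motion features (values are invented for illustration).
neutral = np.array([[0.2, 1.0, 0.5],
                    [0.3, 1.1, 0.4]])
sad     = np.array([[0.1, 0.6, 0.9],
                    [0.2, 0.5, 1.0]])

# A relative style vector: the mean feature-space displacement from
# the neutral examples to the "sad" examples.
style_vec = sad.mean(axis=0) - neutral.mean(axis=0)

# "Do the same, but more sadly": push a motion's features along the vector.
motion = np.array([0.25, 1.05, 0.45])
more_sad = motion + 0.5 * style_vec
```

Because the command is expressed as a displacement rather than an absolute target, the same vector can in principle be applied to any motion whose features live in the same space.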
Symposium on Computer Animation, 2003
The utility of an interactive tool can be measured by how pervasively it is embedded into a user's workflow. Tools for artists additionally must provide an appropriate level of control over expressive aspects of their work while suppressing unwanted intrusions due to details that are, for the moment, unnecessary. Our focus is on tools that target editing the expressive aspects
Intelligent Virtual Agents, 2015
We present a Matlab toolbox for synthesis and visualization of human motion style. The aim is to support development of expressive virtual characters by providing implementations of several style related motion synthesis methods thus allowing side-by-side comparisons. The implemented methods are based on recorded (captured or synthetic) motions, and include linear motion interpolation and extrapolation, style transfer, rotation swapping per body part and per quaternion channel, frequency band scaling and swapping, and Principal/Independent Component Analysis (PCA/ICA) based synthesis and component swapping.
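Of the methods listed above, linear interpolation and extrapolation are the simplest to sketch. The toolbox itself is Matlab; below is a hypothetical NumPy analogue on toy joint-angle vectors (real implementations blend rotations with quaternion slerp rather than componentwise linear mixing):

```python
import numpy as np

# Two example poses of the same skeleton (toy joint-angle vectors,
# invented for illustration).
neutral_pose = np.array([0.0, 0.4, -0.2])
happy_pose   = np.array([0.3, 0.9,  0.1])

def blend(w):
    """Linear blend: 0 <= w <= 1 interpolates between the styles;
    w > 1 extrapolates, exaggerating the stylistic difference."""
    return (1 - w) * neutral_pose + w * happy_pose

halfway     = blend(0.5)   # midway between the two styles
exaggerated = blend(1.5)   # pushed beyond the "happy" example
```

Side-by-side comparison of such blends against the other implemented methods (style transfer, channel swapping, PCA/ICA synthesis) is exactly the kind of use the toolbox targets.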
2014
Three-dimensional animation is a rapidly expanding area: continuous research in the field has given an increasing number of users access to powerful tools with intuitive interfaces. We present our work-in-progress methodology by which artists can manipulate existing animation segments using intuitive characteristics instead of manually changing keyframes' values and interpolations. To achieve this goal, motion capture is used to create a database in which actors perform the same movement with different characteristics; keyframes from those movements are analyzed and used to create a transformation of animation curves that describes the differences in keyframe values and timing between a neutral movement and a movement with a specific characteristic. This transformation can be used to change a large set of keyframes, embedding a desired characteristic into the segment. To test our methodology, we used as a proof of concept a character performing a walk, represented by 59 joints with 172 ...
Our progress in the problem of making animated characters move expressively has been slow, and it persists in being among the most challenging in computer graphics. Simply attending to the low-level motion control problem, particularly for physically based models, is very difficult. Providing an animator with the tools to imbue character motion with broad expressive qualities is even more ambitious, but it is clear it is a goal to which we must aspire. Part of the problem is simply finding the right language in which to express qualities of motion. Another important issue is that expressive animation often involves many disparate parts of the body, which thwarts bottom-up controller synthesis. We demonstrate progress in this direction through the specification of directed, expressive animation over a limited range of standing movements. A key contribution is that through the use of high-level concepts such as character sketches, actions and properties, which impose different modalit...
ACM Transactions on Graphics, 2016
Lecture Notes in Computer Science
This paper presents an empirical evaluation of a method called Style transformation which consists of modifying an existing gesture sequence in order to obtain a new style where the transformation parameters have been extracted from an existing captured sequence. This data-driven method can be used either to enhance key-framed gesture animations or to taint captured motion sequences according to a desired style.
Computer Animation and Virtual Worlds, 2008
In this paper, we present a novel method for editing stylistic human motions. We represent styles as the differences between stylistic motions and corresponding neutral motions, including timing differences and spatial differences. Timing differences are defined as time alignment curves, while spatial differences are found by a machine learning technique: Independent Feature Subspaces Analysis, a combination of Multidimensional Independent Component Analysis and Invariant Feature Subspaces. This technique is used to decompose two motions into several subspaces. One of these subspaces can be defined as the style subspace, which describes the style aspects of the stylistic motion. To find the style subspace, we compare the norms of the projections of the two motions on each subspace. Once the time alignment curves and style subspaces of several motion clips are obtained, animators can tune, transfer and merge the style subspaces to synthesize new motion clips with various styles. Our method is easy to use since manual manipulation and large training data sets are not necessary.
2009
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment and films to sports and medicine. However, captured motions are normally tailored to specific needs. Consequently, modifying and reusing these motions in new situations (for example, retargeting them to a new environment) has become a growing area of research known as motion editing. In the last few years, human motion editing has become one of the most active research areas in the field of computer animation. In this thesis, we introduce and discuss a novel method for interactive human motion editing. Our main contribution is the development of a Low-dimensional Prioritized Inverse Kinematics (LPIK) technique that handles user constraints within a low-dimensional motion space, also known as the latent space. Its major feature is that it operates in the latent space instead of the joint space. By construction, it is sufficient to constrain a single frame with LPIK to obtain a natural movement enforcing the intrinsic motion flow. LPIK has the advantage of reducing the size of the Jacobian matrix, as the dimension of the motion latent space is small for a coordinated movement compared to the joint space. Moreover, the method offers the compelling advantage that it is well suited for characters with a large number of degrees of freedom (DoFs), which is one of the limitations of IK methods that perform optimizations in the joint space. In addition, our method provides faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature. Essentially, our technique is based on the mathematical connections between linear motion models such as Principal Component Analysis (PCA) and Prioritized Inverse Kinematics (PIK).
We use PCA as a first stage of preprocessing to reduce the dimensionality of the database, making it tractable and encapsulating an underlying motion pattern, and then to bound IK solutions within the space of natural-looking motions. We use PIK to allow the user to manipulate constraints with different priorities while interactively editing an animation. Essentially, the priority strategy ensures that a higher-priority task is not affected by tasks of lower priority. Furthermore, two strategies to impose motion continuity based on PCA are introduced. We show a number of experiments used to evaluate and validate (both qualitatively and quantitatively) the benefits of our method. Finally, we assess the quality of the edited animations against a goal-directed constraint-based technique, to verify the robustness of our method regarding performance, simplicity and realism.
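The PCA preprocessing stage described above can be sketched in a few lines. This is a toy illustration with random data standing in for a motion database (the LPIK solver itself and the priority mechanism are not reproduced here):

```python
import numpy as np

# Toy stand-in for a motion database: each row is one flattened pose
# (joint angles); real data would have hundreds of DoFs.
rng = np.random.default_rng(0)
poses = rng.normal(size=(200, 30))

# PCA via SVD on the mean-centred data.
mean = poses.mean(axis=0)
U, S, Vt = np.linalg.svd(poses - mean, full_matrices=False)
k = 5                # size of the low-dimensional latent space
basis = Vt[:k]       # top-k principal directions, one per row

# Project a pose into the latent space and reconstruct it; an IK solver
# operating on `latent` works with 5 variables instead of 30.
latent = (poses[0] - mean) @ basis.T
recon = mean + latent @ basis
```

Constraining the solve to this latent space is what shrinks the Jacobian and keeps solutions inside the span of natural-looking motions.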
The Visual Computer, 2013
Quick creation of 3D character animations is an important task in game design, simulations, forensic animation, education, training, and more. We present a framework for creating 3D animated characters using a simple sketching interface coupled with a large, unannotated motion database that is used to find the appropriate motion sequences corresponding to the input sketches. Contrary to the previous work that deals with static sketches, our input sketches can be enhanced by motion and rotation curves that improve matching in the context of the existing animation sequences. Our framework uses animated sequences as the basic building blocks of the final animated scenes, and allows for various operations with them such as trimming, resampling, or connecting by use of blending and interpolation. A database of significant and unique poses, together with a two-pass search running on the GPU, allows for interactive matching even for large amounts of poses in a template database. The system provides intuitive interfaces, an immediate feedback, and poses very small requirements on the user. A user study showed that the system can be used by novice users with no animation experience or artistic talent, as well as by users with an animation background. Both groups were able to create animated scenes consisting of complex and varied actions in less than 20 minutes.
Lecture Notes in Computer Science, 2009
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment and films to sports and medicine. However, captured motions normally serve specific needs. As an effort toward adapting and reusing captured human motions in new tasks and environments and improving the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature.
Computer Graphics Forum, 2004
While powerful, the simplest version of this approach is not particularly well suited to modeling the specific style of an individual whose motion had not yet been recorded when building the database: it would take an expert to adjust the PCA weights to obtain a motion style that is indistinguishable from his. Consequently, when realism is required, the current practice is to perform a full motion capture session each time a new person must be considered. In this paper, we extend the PCA approach so that this requirement can be drastically reduced: for whole classes of cyclic and noncyclic motions such as walking, running or jumping, it is enough to observe the newcomer moving only once at a particular speed or jumping a particular distance using either an optical motion capture system or a simple pair of synchronized video cameras. This one observation is used to compute a set of principal component weights that best approximates the motion and to extrapolate in real-time realistic animations of the same person walking or running at different speeds, and jumping a different distance.
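Fitting principal-component weights to a single new observation, as described above, amounts to a least-squares problem. A minimal sketch with invented data follows (the paper's speed- and distance-dependent extrapolation is not reproduced):

```python
import numpy as np

# Illustrative PCA basis built from many recorded subjects: 8 orthonormal
# components over 50-dimensional motion vectors (random data for the sketch).
rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.normal(size=(50, 8)))[0].T
mean = rng.normal(size=50)

# One observed motion of a newcomer, here synthesised inside the span
# of the basis so the fit is exact.
true_w = np.array([1.0, -0.5, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])
observed = mean + true_w @ basis

# Least-squares weights that best approximate the single observation.
weights, *_ = np.linalg.lstsq(basis.T, observed - mean, rcond=None)
approx = mean + weights @ basis
```

Once such weights are found, varying them along the database's learned parameter directions is what lets the same person be animated at other speeds or jump distances.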
1994
In this paper we extend previous work on automatic motion synthesis for physically realistic 2D articulated figures in three ways. First, we describe an improved motion-synthesis algorithm that runs substantially faster than previously reported algorithms. Second, we present two new techniques for influencing the style of the motions generated by the algorithm. These techniques can be used by an animator to achieve a desired movement style, or they can be used to guarantee variety in the motions synthesized over several runs of the algorithm. Finally, we describe an animation editor that supports the interactive concatenation of existing, automatically generated motion controllers to produce complex, composite trajectories. Taken together, these results suggest how a usable, useful system for articulated-figure motion synthesis might be developed.
The Visual Computer, 2008
We introduce an easy and intuitive approach to create animations by assembling existing animations. Using our system, the user needs only to simply scribble regions of interest and select the example animations that he/she wants to apply. Our system will then synthesize a transformation for each triangle and solve an optimization problem to compute the new animation for this target mesh. Like playing a jigsaw puzzle game, even a novice can explore his/her creativity by using our system without learning complicated routines, but just using a few simple operations to achieve the goal.
In comparison to traditional animation techniques, motion capture allows animators to obtain a large amount of realistic data in little time. The motivation behind our research is to try to fill the gap that separates realistic motion from cartoon animation. With such tools, classical animators can produce high-quality animated movies (such as Frozen, Toy Story, etc.) and non-realistic video games in a significantly shorter amount of time. To add cartoon-like qualities to realistic animations, we suggest an algorithm that changes the animation curves of motion capture data by modifying local minima and maxima. We also propose a curve-based interface that allows users to quickly edit and visualize the changes applied to the animation. Finally, we present the results of two user studies that evaluate both the overall user satisfaction with the system's functionality, interactivity and learning curve, and the animation quality.
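The extrema-based edit described above can be sketched crudely: locate the local minima and maxima of a curve and scale them. This toy example uses a sine wave in place of real motion-capture channels, and uniform scaling in place of the paper's richer curve edits:

```python
import numpy as np

# Toy animation curve standing in for one motion-capture channel
# (e.g. a joint angle over time).
t = np.linspace(0, 2 * np.pi, 100)
curve = np.sin(3 * t)

# Locate interior local minima and maxima by comparing each sample
# with its two neighbours.
interior = np.arange(1, len(curve) - 1)
is_max = (curve[interior] > curve[interior - 1]) & (curve[interior] > curve[interior + 1])
is_min = (curve[interior] < curve[interior - 1]) & (curve[interior] < curve[interior + 1])
extrema = interior[is_max | is_min]

# Exaggerate the motion by scaling the extrema away from zero,
# a crude stand-in for the cartoon-style edits described above.
exaggerated = curve.copy()
exaggerated[extrema] *= 1.3
```

A practical system would then re-fit smooth splines through the displaced extrema rather than scaling isolated samples.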
ACM Transactions on Graphics, 2009
The ability to interactively edit human motion data is essential for character animation. We present a novel motion editing technique that allows the user to manipulate synchronized multiple character motions interactively. Our Laplacian motion editing method formulates the interaction among multiple characters as a collection of linear constraints and enforces the constraints, while the user directly manipulates the motion of characters in both spatial and temporal domains. Various types of manipulation handles are provided to specify absolute/relative spatial location, direction, time, duration, and synchronization of multiple characters. The capability of non-sequential discrete editing is incorporated into our motion editing interfaces, so continuous and discrete editing is performed simultaneously and seamlessly. We demonstrate that the synchronized multiple character motions are synthesized and manipulated at interactive rates using spatiotemporal constraints.
Cartoonists and animators often use lines of action to emphasize dynamics in character poses. In this paper, we propose a physically-based model to simulate the line of action's motion, leading to rich motion from simple drawings. Our proposed method is decomposed into three steps. Based on user-provided strokes, we forward simulate 2D elastic motion. To ensure continuity across keyframes, we re-target the forward simulations to the drawn strokes. Finally, we synthesize a 3D character motion matching the dynamic line. The fact that the line can move freely like an elastic band raises new questions about its relationship to the body over time. The line may move faster and leave body parts behind, or the line may slide slowly towards other body parts for support. We conjecture that the artist seeks to maximize the filling of the line (with the character's body)---while respecting basic realism constraints such as balance. Based on these insights, we provide a method that synth...
2006
Convincingly animating virtual humans has become of great interest in many fields in recent years. In computer games, for example, virtual humans are often the main characters. Failing to animate them realistically may wreck all the efforts made to provide the player with a feeling of immersion. At the same time, computer-generated movies have become very popular and have thus increased the demand for animation realism. Indeed, virtual humans are now the new stars in movies like Final Fantasy or Shrek, and are even used for special effects in movies like The Matrix. In this context, virtual human animations not only need to be realistic, as in computer games, but also need to be expressive, like real actors. While creating animations from scratch is still widespread, it demands artistic skills and hours if not days to produce a few seconds of animation. For these reasons, there has been growing interest in motion capture: instead of creating a motion, the idea is to reproduce the movements of a live performer. However, motion capture is not perfect and still needs improvements. Indeed, the motion capture process involves complex techniques and equipment. This often results in noisy animations which must be edited. Moreover, it is hard to foresee the final motion exactly. For example, it often happens that the director of a movie decides to change the script. The animators then have to change part of or the whole animation. The aim of this thesis is thus to provide animators with interactive tools helping them to easily and rapidly modify preexisting animations. We first present our Inverse Kinematics solver used to enforce kinematic constraints at each time of an animation. Afterward, we propose a motion deformation framework offering the user a way to specify prioritized constraints and to edit an initial animation so that it may be used in a new context (characters, environment, etc.).
Finally, we introduce a semi-automatic algorithm to extract important motion features from motion capture animation which may serve as a first guess for the animators when specifying important characteristics an initial animation should respect.