1995
In this paper, we discuss the role of digital actors in Virtual Environments. We describe the integration of motion control techniques with autonomy based on synthetic sensors. In particular, we emphasize the synthetic vision, audition, and tactile systems. We also discuss how to introduce the sensors of real humans into the Virtual Space in order to enable communication between digital actors and real humans.
Lecture Notes in Computer Science, 1997
In this paper, we present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of the perception-action principles with a few simple examples, we emphasize the concept of virtual sensors for virtual humans. In particular, we describe in detail our experiences in implementing virtual vision, touch, and audition. We then describe perception-based locomotion, a multisensor-based method of automatic grasping, and vision-based ball games.
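The virtual-vision idea above can be illustrated with a toy filter: instead of rendering the scene from the actor's viewpoint as the paper does, a minimal sketch simply tests which world objects fall inside the actor's range and field of view. All names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import math

def visible_objects(actor_pos, actor_heading, objects, fov_deg=120.0, max_range=20.0):
    """Return the names of objects the actor can 'see' in 2-D: within
    max_range and within a field of view centred on actor_heading (radians).
    A crude stand-in for a rendered synthetic-vision sensor."""
    seen = []
    half_fov = math.radians(fov_deg) / 2.0
    for name, (ox, oy) in objects.items():
        dx, dy = ox - actor_pos[0], oy - actor_pos[1]
        dist = math.hypot(dx, dy)
        if dist > max_range:
            continue
        # Bearing to the object relative to the actor's heading, wrapped to [-pi, pi].
        angle = math.atan2(dy, dx) - actor_heading
        angle = (angle + math.pi) % (2 * math.pi) - math.pi
        if abs(angle) <= half_fov:
            seen.append(name)
    return seen
```

An actor at the origin facing along +x would see a ball ahead of it but not an object behind it or one beyond its sensing range.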
Lecture Notes in Computer Science, 1999
This paper explains methods to provide autonomous virtual humans with the skills necessary to perform a stand-alone role in films, games, and interactive television. We present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of our geometric, physical, and auditory Virtual Environments, we introduce the perception-action principles with a few simple examples. We emphasize the concept of virtual sensors for virtual humans. In particular, we describe our experiences in implementing virtual sensors such as vision sensors, tactile sensors, and hearing sensors. We then describe knowledge-based navigation, knowledge-based locomotion, and, in more detail, sensor-based tennis.
1998
This article surveys virtual humans and techniques to control the face and body. It also covers higher-level interfaces for direct speech input and issues of real-time control. Information for controlling actors' motions falls into three categories: geometric, physical, and behavioral, giving rise to three corresponding motion-control method categories. More recently, Thalmann [8] proposed four new classes of synthetic actors: participatory, guided, autonomous, and interactive-perceptive.
2007
What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the London conference on Intelligent Motion and Interaction within Virtual Environments, held at University College London, U.K., 15-17 September 2003.
Tsinghua Science & Technology, 2011
Creating realistic virtual humans has long been a challenging objective in computer science research. This paper describes an integrated framework for modeling virtual humans with a high level of autonomy, seeking to reproduce believable, human-like behavior and movement in a virtual environment. The framework includes a visual and auditory information perception module, a decision-network-based behavior decision module, and a hierarchical autonomous motion control module, which cooperate to model realistic autonomous individual behavior for virtual humans in real-time interactive virtual environments. The framework was tested in a simulated virtual environment system to demonstrate its ability to create autonomous, perceptive, and intelligent virtual humans in real time.
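The perception-to-decision pipeline described above can be sketched very simply: each candidate action scores the percepts that are currently true, and the highest-scoring action wins. This is a hedged stand-in for the paper's decision-network module; the action names, percepts, and weights below are invented for illustration.

```python
def decide(percepts, utilities):
    """Pick the action with the highest score given boolean percepts.

    percepts:  {percept_name: bool} from the perception module.
    utilities: {action: {percept_name: weight}} -- each action's score
               sums the weights of the percepts that are currently true.
    """
    best_action, best_score = None, float("-inf")
    for action, weights in utilities.items():
        score = sum(w for p, w in weights.items() if percepts.get(p, False))
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

A real decision network would propagate probabilities rather than sum fixed weights, but the module boundary (percepts in, one chosen action out, handed to motion control) is the same.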
2002
This study lies in the context of virtual engineering and human information systems. We propose to model and implement the behaviour of believable virtual agents, using ideas from psychology (cognitive maps, affordances) and neurophysiology (active perception, movement prediction). Virtual worlds are peopled with autonomous entities improvising in free interaction. Making a model autonomous consists in giving it a sensorimotor interface and a decision module, so that it can adapt its reactions to internal and external stimuli. In this article we propose the basis for a behavioral model imitating human perception. The psychological notion of "affordance" helps us construct fuzzy cognitive maps for specifying believable virtual human behaviour. According to the neurophysiologist Alain Berthoz, perception is not only an interpretation of sensory messages: it is also an internal simulation of the action and an anticipation of the consequences of that simulated action. Following neurophysiological experiments on the hippocampus, in which oscillations permitting the prediction of trajectories were observed, our virtual actor uses fuzzy cognitive maps in an imaginary space to simulate a behaviour. This simulation within the simulation allows it to predict the consequences of actions. The expected benefit of our affordance-based, proactive model is a believable virtual helmsman within the framework of a virtual sailing ship. We have implemented such a virtual actor in the multi-agent environment oRis.
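The fuzzy-cognitive-map mechanism mentioned above has a compact standard form: each concept's activation is updated as a squashed weighted sum of the other concepts' activations. Here is a minimal sketch using sigmoid squashing, a common choice; the concept names and edge weights are illustrative, not taken from the virtual-helmsman model.

```python
import math

def fcm_step(activations, weights):
    """One synchronous update of a fuzzy cognitive map.

    activations: {concept: value in [0, 1]}
    weights:     {(src_concept, dst_concept): signed weight}
    Each new activation is sigmoid(sum over sources of weight * activation).
    """
    new = {}
    for concept in activations:
        total = sum(weights.get((src, concept), 0.0) * a
                    for src, a in activations.items())
        new[concept] = 1.0 / (1.0 + math.exp(-total))  # sigmoid squashing
    return new
```

Iterating `fcm_step` in an "imaginary space" (on a copy of the current state, without acting) is one way to read the paper's simulation-within-the-simulation: the actor runs the map forward to anticipate consequences before committing to an action.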
2008
Abstract—Virtual studios have long been used in commercial broadcasting. However, most virtual studios are based on "blue screen" technology, whose two-dimensional (2-D) nature restricts the user from making natural three-dimensional (3-D) interactions. Actors have to follow prewritten scripts and pretend to interact directly with the synthetic objects, which often creates an unnatural and seemingly uncoordinated output. In this paper, we introduce an improved virtual-studio framework to enable actors/users to interact in 3-D more naturally with the synthetic environment and objects. The proposed system uses a stereo camera to first construct a 3-D environment (for the actor to act in), a multiview camera to extract the image and 3-D information about the actor, and real-time registration and rendering software for generating the final output. Synthetic 3-D objects can be easily inserted and rendered, in real time, together with the 3-D environment and video actor for natu...
Computers & Graphics, 1995
actor in the problems of path searching, obstacle avoidance, and internal knowledge representation with learning and forgetting characteristics. For the general navigation problem, we propose a local and a global approach. In the global approach, a dynamic octree serves as a global 3-D visual memory and allows an actor to memorize the environment he sees and to adapt this memory to a changing, dynamic environment. His reasoning process allows him to find 3-D paths based on his visual memory while avoiding impasses and circuits. In the local approach, the low-level vision-based navigation reflexes normally performed by intelligent actors are simulated. The local navigation model uses direct input information from the actor's visual environment to reach goals or subgoals and to avoid unexpected obstacles.
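The global visual memory described above can be sketched without a full octree: a flat voxel dictionary captures the same contract (remember what was seen, overwrite it when the scene changes, treat unseen space as free). This is a simplification for illustration; the paper's dynamic octree additionally merges and splits cells hierarchically.

```python
class VisualMemory:
    """Toy global 3-D visual memory: a voxel grid standing in for a
    dynamic octree. Cells the actor has observed are stored with their
    last seen state, so the memory adapts to a changing environment."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = {}  # voxel index -> "occupied" | "free"

    def _key(self, point):
        return tuple(int(c // self.cell) for c in point)

    def observe(self, point, occupied):
        # Newer observations overwrite older ones: the scene is dynamic.
        self.cells[self._key(point)] = "occupied" if occupied else "free"

    def is_blocked(self, point):
        # Unseen space is optimistically assumed free, so a path planner
        # querying this memory will still explore unvisited regions.
        return self.cells.get(self._key(point)) == "occupied"
```

A planner searching for 3-D paths would query `is_blocked` per candidate cell, and the actor's vision sensor would keep calling `observe` as it moves.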
2002
Virtual Life is a new area dedicated to the simulation of life in virtual worlds, human-virtual interaction and immersion inside virtual worlds. Virtual Life cannot exist without the growing development of Computer Animation techniques and corresponds to the most advanced concepts and techniques of it. In this chapter, we present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of the perception action principles with a few simple examples, we emphasize the concept of virtual sensors for virtual humans. In particular, we describe in details our experiences in implementing virtual vision, tactile and audition. We then describe perception-based locomotion, a multisensor based method of automatic grasping and vision-based ball games. We also discuss problems of integrating autonomous humans into virtual environments.
OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), 1996
Sandia National Laboratories. This paper presents preliminary work in the development of an avatar driver. An avatar is the graphical embodiment of a user in a virtual world. In applications such as small-team, close-quarters training and mission planning and rehearsal, it is important that the user's avatar reproduce his or her motions naturally and with high fidelity. This paper presents a set of special-purpose algorithms for driving the motion of an avatar with minimal information about the posture and position of the user. These algorithms utilize information about natural human motion and posture to produce solutions quickly and accurately without the need for complex general-purpose kinematics algorithms. Several examples illustrating the successful application of these techniques are included.
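The special-purpose spirit of such avatar drivers can be illustrated with a closed-form two-link arm solve: from only the shoulder and wrist positions, the elbow flexion angle follows from the law of cosines, with no iterative general-purpose IK solver. The link lengths and function names below are illustrative assumptions, not the paper's algorithms.

```python
import math

def elbow_angle(shoulder, wrist, upper=0.3, fore=0.25):
    """Analytic elbow flexion (radians) for a planar two-link arm,
    given only shoulder and wrist positions and the two link lengths.
    Returns pi for a fully extended arm, smaller values as it bends."""
    d = math.dist(shoulder, wrist)
    d = min(d, upper + fore)  # clamp unreachable targets to full extension
    # Law of cosines for the interior angle at the elbow.
    cos_e = (upper**2 + fore**2 - d**2) / (2 * upper * fore)
    return math.acos(max(-1.0, min(1.0, cos_e)))
```

A full driver would add similar closed-form rules per joint plus natural-posture heuristics (e.g., a preferred elbow swivel plane) to disambiguate the remaining degrees of freedom.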