2000, Confluence of Computer Vision and Computer Graphics
In this paper we present the first implementation of a new medium for tele-collaboration. The realized testbed consists of two tele-cubicles at two Internet nodes. At each tele-cubicle a stereo rig provides an accurate, dense 3D reconstruction of a person in action. The two real dynamic worlds are exchanged over the network and visualized stereoscopically. The remote communication and the dynamic nature of tele-collaboration raise the question of an optimal representation for graphics and vision. We address the issues of limited bandwidth, latency, and processing power with a tunable 3D representation in which the user controls the trade-off between delay and 3D resolution by tuning the spatial resolution, the size of the working volume, and the uncertainty of reconstruction. Due to the limited number of cameras and displays, our system cannot provide the user with a surround-immersive feeling. However, it is the first system that uses real 3D data reconstructed online at another site. The system has been implemented with low-cost off-the-shelf hardware and has been successfully demonstrated in local area networks.
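The delay-versus-resolution trade-off described in the abstract can be illustrated with a back-of-the-envelope model. The function names, the surface-scaling assumption, and all numbers below are our own illustration, not taken from the paper:

```python
def payload_bytes(volume_m3, voxels_per_m, bytes_per_point=6):
    """Rough per-frame payload estimate for a dense 3D reconstruction.

    volume_m3: size of the working volume in cubic metres (tunable).
    voxels_per_m: linear spatial resolution (tunable).
    bytes_per_point: assumed packed position+colour cost per point.
    Only surface points are transmitted; a surface scales like the
    2/3 power of the voxel count, so halving the linear resolution
    cuts the payload roughly 4x.
    """
    voxels = volume_m3 * voxels_per_m ** 3
    surface_points = voxels ** (2 / 3)
    return int(surface_points * bytes_per_point)

def frame_delay_ms(payload, bandwidth_bps):
    """Transmission delay for one frame over the given link."""
    return payload * 8 / bandwidth_bps * 1000

# Tuning the resolution down shrinks both payload and delay:
full = payload_bytes(volume_m3=8, voxels_per_m=200)
coarse = payload_bytes(volume_m3=8, voxels_per_m=100)
```

The user-facing knobs in the paper (spatial resolution, working volume, reconstruction uncertainty) all enter such a model through the number of points that must cross the link per frame.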
2010 IEEE International Conference on Multimedia and Expo, 2010
Though the variety of desktop real-time stereo vision systems has grown considerably in the past several years, few make verifiable claims about the accuracy of the algorithms used to construct 3D data, or describe how the large volumes of data generated by such systems can be effectively distributed. In this paper, we describe a system that creates an accurate (on the order of a centimeter) 3D reconstruction of an environment in real time (under 30 ms) and also allows for remote interaction between users. This paper addresses how to reconstruct, compress, and visualize the 3D environment. In contrast to most commercial desktop real-time stereo vision systems, our algorithm produces 3D meshes instead of dense point clouds, which we show allows for better-quality visualizations. The chosen representation of the data also allows for high compression ratios for transfer to remote sites. We demonstrate the accuracy and speed of our results on a variety of benchmarks.
2008 Tenth IEEE International Symposium on Multimedia, 2008
In this paper, we present a framework for immersive 3D video conferencing and geographically distributed collaboration. Our multi-camera system performs a full-body 3D reconstruction of users in real time and renders their image in a virtual space, allowing remote interaction between users and the virtual environment. The paper gives an overview of the technology and algorithms used for calibration, capture, and reconstruction. We introduce stereo mapping using adaptive triangulation, which allows for fast (under 25 ms) and robust real-time 3D reconstruction. The chosen representation of the data provides high compression ratios for transfer to a remote site. The algorithm produces partial 3D meshes, instead of dense point clouds, which are combined on the renderer to create a unified model of the user. We have successfully demonstrated the use of our system in applications such as remote dancing and immersive Tai Chi learning.
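The abstract's adaptive triangulation is not specified in detail here, but the general idea of triangulating a depth map adaptively can be sketched with a generic quadtree scheme: flat regions get few triangles, varied regions get many. Thresholds and names below are illustrative, not the paper's algorithm:

```python
def block_variance(depth, x0, y0, size):
    """Depth variance inside a square block of a 2D depth map (list of rows)."""
    vals = [depth[y][x] for y in range(y0, y0 + size) for x in range(x0, x0 + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def adaptive_triangulate(depth, x0, y0, size, max_var=0.01, min_size=4, tris=None):
    """Recursively split a block until its depth variance is small,
    then emit two triangles per leaf block (vertices in pixel coords)."""
    if tris is None:
        tris = []
    if size > min_size and block_variance(depth, x0, y0, size) > max_var:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                adaptive_triangulate(depth, x0 + dx, y0 + dy, half,
                                     max_var, min_size, tris)
    else:
        a, b = (x0, y0), (x0 + size, y0)
        c, d = (x0 + size, y0 + size), (x0, y0 + size)
        tris.append((a, b, c))
        tris.append((a, c, d))
    return tris

# A flat depth region collapses to 2 triangles; a varied one is subdivided.
flat = [[0.0] * 16 for _ in range(16)]
varied = [[(x * 31 + y * 17) % 7 for x in range(16)] for y in range(16)]
```

This kind of variance-driven mesh density is one way a mesh representation stays far smaller than a dense point cloud while preserving geometric detail where it matters.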
1999
In this paper we present the first implementation of a new medium for telecollaboration. The realized testbed consists of two telecubicles connected at two Internet nodes. At each telecubicle a stereo rig provides an accurate, dense 3D reconstruction of a person in action. The two real dynamic worlds are transmitted over the network and visualized with a spatially immersive display including a projector, a tracker, and optionally stereo glasses. The full 3D information facilitates interaction with any virtual object, demonstrating in an optimal way the confluence of graphics, vision, and communication. In particular, the remote communication and the dynamic nature of tele-collaboration pose the challenge of an optimal representation for graphics and vision. We address the issues of limited bandwidth, latency, and processing power with a tunable 3D representation in which the user controls the trade-off between delay and 3D resolution by tuning the spatial resolution, the size of the working volume, and the uncertainty of reconstruction. Due to the limited number of cameras and projectors, our system cannot provide the user with a surround-immersive feeling. However, it is the first system that uses real 3D data reconstructed online at another site. The system has been implemented with low-cost off-the-shelf hardware and has been successfully demonstrated in a local area network. We show the superiority of the system over similar approaches with respect to depth accuracy and speed.
2000
Our long-term vision is to provide a better every-day working environment, with high-fidelity scene reconstruction for life-sized 3D tele-collaboration. In particular, we want to provide the user with a true sense of presence with our remote collaborator and their real surroundings, and the ability to share and interact with 3D documents. The challenges related to this vision
Teleconferencing is becoming ever more important and popular in today's society and is mostly accomplished using 2D video conferencing systems. However, we believe there is considerable room for improving the communication experience: one crucial aspect is adding 3D information; another is freeing the user from sitting in front of a computer. With these improvements, we aim to eventually create a fully immersive 3D telepresence system that might improve the way we communicate over long distances. In this paper we review and analyze existing technology to achieve this goal and present a proof-of-concept, yet fully functional, prototype.
Proc. of International …, 2009
Interest in immersive video conference systems has existed for many years, both from a commercialization point of view and from a research perspective. Nevertheless, until recently the user acceptance of such systems was very limited. This situation has changed. Technological advances in fields like display and camera technology, as well as processing hardware, lead the way to a new generation of immersive tele-conference systems. On the one hand, large-scale and high-definition displays significantly enhance the feeling of virtual presence, and commercial solutions already benefit from this; in addition, research in the area of multiuser 3D display technology shows promising results. On the other hand, new fast graphics-board solutions allow a high degree of algorithmic parallelization in a consumer PC environment. In this way, real-time, high-quality, high-resolution implementations of more sophisticated 2D and 3D acquisition algorithms, such as volume-based approaches, become increasingly realistic. Against this background, this paper summarizes first results and experiences of the European FP7 research project 3DPresence, which aims to build a three-party, multiuser 3D tele-conferencing system. The goal of this paper is to discuss general issues and problems of future-generation immersive multi-user 3D video conference systems. Furthermore, it provides first results and proposes solutions for critical questions.
Tenth IEEE International …, 2008
We present our implementation and evaluation of TEEVE, a distributed 3D tele-immersive system. TEEVE is among the first to support multi-stream, multi-site 3D tele-immersive environments with COTS hardware and software infrastructures. It promotes collaborative physical activities among geographically dispersed sites by immersing the 3D representations of remote participants in a joint 3D virtual space. In this paper, we describe our implementation of TEEVE and introduce recent advances in its different components. In particular, we present an implemented protocol for ViewCast-based, semantic-aware data dissemination to support multi-site remote collaboration. We evaluate the TEEVE system by deploying it on the Internet. The experimental results demonstrate that it achieves stable visual quality, soft real-time delay, and efficient resource usage.
2008
Traditional set-top camera video-conferencing systems still fail to meet the 'telepresence challenge' of providing a viable alternative to physical business travel, which is nowadays characterized by unacceptable delays, costs, inconvenience, and an increasingly large ecological footprint. Even recent high-end commercial solutions, while partially removing some of these traditional shortcomings, still do not scale easily, are expensive to implement, do not utilize life-sized 3D representations of the remote participants, and address eye contact and gesture-based interaction only in very limited ways. The European FP7 project 3DPresence will develop a multi-party, high-end 3D videoconferencing concept that will tackle the problem of transmitting the feeling of physical presence in real time to
Lecture Notes in Computer Science, 2014
We present an approach for high-quality rendering of the 3D representation of a remote collaboration scene at real-time rendering speed, by extending the unstructured lumigraph rendering (ULR) method. ULR uses a 3D proxy, which is in the simplest case a 2D plane. We develop a dynamic proxy for ULR to obtain a better and more detailed 3D proxy in real time, which leads to the rendering of high-quality, accurate 3D scenes with motion-parallax support. The novel contribution of this work is the development of a dynamic proxy in real time. The dynamic proxy is generated from depth images instead of color images as in the Lumigraph approach.
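The core of any ULR-style renderer is blending source cameras per ray according to how well each camera's view of the proxy point matches the desired view. The sketch below is a simplified version of that weighting, with our own function names and a basic inverse-angle weight, not the paper's exact formulation:

```python
import math

def _norm(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _angle(u, v):
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(d)

def ulr_blend_weights(proxy_point, eye, cam_centers, k=2, eps=1e-6):
    """Unstructured-lumigraph-style blending (simplified sketch):
    score each source camera by the angle between its ray to the
    proxy point and the desired viewing ray, keep the k closest
    cameras, and weight them by inverse angle, normalised to sum
    to 1. Returns a list of (camera_index, weight) pairs."""
    view = _norm(tuple(p - e for p, e in zip(proxy_point, eye)))
    scored = sorted(
        (_angle(view, _norm(tuple(p - c for p, c in zip(proxy_point, cam)))), i)
        for i, cam in enumerate(cam_centers))
    best = scored[:k]
    raw = [1.0 / (a + eps) for a, _ in best]
    total = sum(raw)
    return [(i, w / total) for (_, i), w in zip(best, raw)]
```

With a per-pixel dynamic proxy built from depth images, the proxy point fed into such a weighting tracks the true scene surface, which is what enables correct motion parallax.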
Proceedings of the 7th ACM International Workshop on Massively Multiuser Virtual Environments, 2015
The next generation of 3D tele-presence is based on modular systems that combine live-captured, object-based 3D video and synthetically authored 3D graphics content. This paper presents the design, implementation, and evaluation of a network solution for multi-party real-time communication of these types of content. The prototype includes a UDP/TCP multi-streaming kernel with media synchronization support, packet scheduling, loss-resilient real-time transmission, and an easy-to-use blocking and non-blocking API. To compress the live reconstructed 3D data streams that represent the natural user, two categories of 3D mesh codecs were integrated: a highly adaptive real-time geometry-driven mesh codec and a fast single-rate codec that provides better performance at high resolutions. Subjective tests with 16 subjects indicate that only modest perceptual degradation of the highly realistic 3D natural user is introduced, especially when the users in the virtual world are at a distance. We developed a session management protocol for setting up streams based on the specific 3DTI capabilities, allowing device scalability from light clients (render only) to heavy clients (rendering and 3D capturing). Additionally, a distributed messaging system via WebSockets and cloud infrastructure, based on publish and subscribe, was integrated for real-time delivery of avatar and other AI data.
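The capability-based session setup described above can be sketched in a few lines: clients declare what they can do, and streams are opened only from capturing clients to rendering clients. The class and method names are our own illustration, not the paper's API:

```python
class SessionManager:
    """Toy sketch of capability-driven stream setup for a 3DTI
    session: light clients render only, heavy clients both capture
    and render, and a stream (source, destination) is created for
    every capturer/renderer pair."""

    def __init__(self):
        self.clients = {}  # name -> (can_render, can_capture)

    def join(self, name, can_render, can_capture):
        self.clients[name] = (can_render, can_capture)

    def streams(self):
        # One stream per (capturing source, rendering destination) pair,
        # excluding self-loops.
        return [(src, dst)
                for src, (_, cap) in self.clients.items() if cap
                for dst, (ren, _) in self.clients.items() if ren and dst != src]
```

In a real system each resulting pair would be mapped onto the UDP/TCP multi-streaming kernel with its own scheduling and codec choice; the point of the sketch is only the scalability from render-only to full clients.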