Proceedings of Shape Modeling Applications, 2004
Multiresolution geometry streaming has been well studied in recent years. The client can progressively visualize a triangle mesh from the coarsest resolution to the finest one while a server successively transmits detail information. However, the streaming order of the detail data usually depends only on geometric importance, since the stream essentially replays a mesh simplification process in reverse. Consequently, the resolution of the model changes globally during streaming, even when the client does not want to download detail information for parts that are invisible from a given viewpoint.
Proceedings of the 18th International Workshop on Network and Operating Systems Support for Digital Audio and Video - NOSSDAV '08, 2008
Progressive mesh streaming enables users to view 3D meshes over the network with increasing levels of detail, by sending a coarse version of each mesh initially, followed by a series of refinements. To optimally increase the rendered mesh quality, refinements should be sent in descending order of their visual contributions based on the user's viewpoint. A common approach is to let the sender decide this sending order, but the computational cost of making this decision prevents such a sender-driven approach from scaling to a large number of clients. To improve scalability, we propose a receiver-driven protocol, in which the receiver decides the sending order and explicitly requests the refinements, while the sender simply sends the data requested. The sending order is computed at the receiver by estimating the visibility and visual contributions of the refinements, even before receiving them, with the help of the GPU. Experiments show that our protocol reduces the CPU cost of the sender by 24% and the outgoing traffic of the sender by 40%.
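As a rough illustration of the receiver-driven idea described above, the following Python sketch ranks refinements by a crude locally estimated contribution and produces the order in which the receiver would request them. All names (Refinement, estimate_contribution, the distance-based score) are assumptions for illustration, not the paper's protocol or GPU-based estimator.

```python
# Minimal sketch: the receiver ranks refinements itself and requests them in
# descending order of estimated visual contribution.
import heapq
from dataclasses import dataclass

@dataclass
class Refinement:
    ref_id: int
    center: tuple   # rough position of the region affected by this refinement
    error: float    # geometric error removed by this refinement

def estimate_contribution(ref: Refinement, viewpoint: tuple) -> float:
    # Crude stand-in for a visibility/contribution estimate:
    # larger removed error and smaller distance to the viewpoint -> higher score.
    dist = sum((a - b) ** 2 for a, b in zip(ref.center, viewpoint)) ** 0.5
    return ref.error / (1.0 + dist)

def receiver_driven_order(refinements, viewpoint, budget):
    heap = [(-estimate_contribution(r, viewpoint), r.ref_id) for r in refinements]
    heapq.heapify(heap)
    order = []
    while heap and len(order) < budget:
        _, ref_id = heapq.heappop(heap)
        order.append(ref_id)   # the receiver would now explicitly request ref_id
    return order

if __name__ == "__main__":
    refs = [Refinement(0, (0, 0, 0), 2.0), Refinement(1, (5, 0, 0), 3.0)]
    print(receiver_driven_order(refs, viewpoint=(0, 0, 1), budget=2))
```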
2009 11th IEEE International Symposium on Multimedia, 2009
Fast and efficient streaming of detailed 3D models over lossy networks has long been a challenge, even though progressive compression techniques were proposed long ago. One reason is that packet loss in unreliable networks is highly unpredictable and leads to connectivity inconsistency and distortion. In this paper, we address this problem with a receiver-based loss-tolerance scheme built on a prediction technique. Our method works without introducing protection bits or retransmission. We stream mesh refinement data over reliable and unreliable networks separately so as to reduce the transmission delay while obtaining a satisfactory decompression result. The tests indicate that decompression completes quickly, suggesting that it is a practical solution. Moreover, the proposed prediction technique achieves a good approximation of the original mesh with low distortion.
ACM Multimedia Conference, 2007
3D triangular mesh is becoming an increasingly important data type for networked applications such as digital museums, online games, and virtual worlds. In these applications, a multi-resolution representation is typically desired for streaming large 3D meshes, allowing for incremental rendering at the viewers while data is still being transmitted. Such progressive coding, however, introduces dependencies between data.
Computer-Aided Design, 2016
We focus on applications where a remote client needs to visualize or process a complex, manifold triangle mesh, M, but only in a relatively small, user-controlled Region of Interest (RoI) at a time. The client first downloads a coarse base mesh, pre-computed on the server via a series of simplification passes on M, one per Level of Detail (LoD), each pass identifying an independent set of triangles, collapsing them, and, for each collapse, storing, in a Vertex Expansion Record (VER), the information needed to reverse the collapse. On each client-initiated RoI modification request, the server pushes to the client a selected subset of these VERs, which, when decoded and applied to refine the mesh locally, ensure that the portion in the RoI is always at full resolution. The eBits approach proposed here offers state-of-the-art compression ratios (using less than 2.5 bits per new full-resolution RoI triangle to transmit the connectivity for the selective refinements when the RoI has more than 2000 vertices) and fine-grain control (allowing the user to adjust the RoI by small increments). The effectiveness of eBits results from several novel ideas and novel variations of previous solutions. We represent the VERs using persistent labels so that they can be applied in different orders within a given LoD. The server maintains a shadow copy of the client's mesh. To avoid sending IDs identifying which vertices should be expanded, we either transmit, for each new vertex, a compact encoding of its death tag (the LoD at which it will be expanded if it lies in the RoI) or transmit vertex masks for the RoI and its neighboring vertices. We also propose a three-step simplification that reduces the overall transmission cost by increasing both the simplification effectiveness and the regularity of the valences in the resulting meshes.
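To make the vocabulary above concrete, here is a hedged sketch of what a vertex expansion record with a death tag might look like and how a server could pick the records relevant to a requested RoI and LoD. The field names and selection logic are illustrative assumptions, not the eBits encoding.

```python
# Illustrative (not eBits) model of a Vertex Expansion Record and RoI selection.
from dataclasses import dataclass

@dataclass
class VertexExpansionRecord:
    parent_label: int          # persistent label of the vertex to be expanded
    death_tag: int             # LoD at which this vertex is expanded if it lies in the RoI
    connectivity_bits: bytes   # compressed bits describing how incident triangles split
    offset: tuple = (0.0, 0.0, 0.0)  # geometric correction applied after the split

def expandable_in_roi(vers, roi_vertex_labels, current_lod):
    """Select the records the server would push for the requested RoI at this LoD."""
    return [v for v in vers
            if v.parent_label in roi_vertex_labels and v.death_tag == current_lod]
```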
Computer-Aided Design, 2000
We consider the problem of transmitting huge triangle meshes in the context of a Web-like client-server architecture. Approximations of the original mesh are transmitted by applying selective refinement. A multiresolution geometric model is maintained by the server. A client may query the server for a mesh at an arbitrary, continuously variable level of detail. The client makes repeated queries over time with different query parameters. The server answers queries by traversing the multiresolution model and transmitting updates to the client, which uses them to progressively modify a current mesh.
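A minimal, self-contained sketch of that query/update cycle follows, under assumed names: the client repeatedly asks for a target level of detail and applies whatever updates the server extracts from its multiresolution model. It is only a toy illustration of the interaction pattern, not the paper's data structures.

```python
# Toy model of the client-server selective refinement loop.
class MultiresolutionServer:
    def __init__(self, updates_by_lod):
        # updates_by_lod[lod] lists the refinement updates needed to reach that LoD
        self.updates_by_lod = updates_by_lod

    def answer(self, current_lod, requested_lod):
        """Return the updates that take the client from current_lod to requested_lod."""
        out = []
        for lod in range(current_lod + 1, requested_lod + 1):
            out.extend(self.updates_by_lod.get(lod, []))
        return out

if __name__ == "__main__":
    server = MultiresolutionServer({1: ["split_a"], 2: ["split_b", "split_c"]})
    client_lod, applied = 0, []
    for requested in (1, 2):                 # repeated queries over time
        applied += server.answer(client_lod, requested)
        client_lod = requested
    print(applied)                           # ['split_a', 'split_b', 'split_c']
```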
Journal of Computer Science, 2012
The complexity of 3D virtual environments on the web is growing rapidly. A 3D virtual environment comprises a set of structured scenes, and each scene contains multiple 3D objects (meshes); the object is therefore the granular unit of the environment. A virtual environment must support user interaction with every 3D object, and at any point in time it is sufficient for the system to stream only the visible portion of an object from the server to the client, making the best use of the limited network bandwidth and limited client memory. Such streaming reduces the time needed to present the rendered object to the requesting clients. To further reduce this time and use bandwidth and memory effectively, the proposed study exploits user interaction with the 3D object and builds a predictive agent that minimizes the latency in rendering the streamed 3D mesh. Experimental results show that rendering time and cache miss rates are significantly reduced with the predictive agent.
ACM Transactions on Multimedia Computing, Communications, and Applications, 2006
Streaming 3D graphics is widely used in multimedia applications such as online gaming and virtual reality. However, a gap exists between the zero loss tolerance of existing compression schemes and lossy network transmission. In this article, we propose a generic 3D middleware between the 3D application layer and the transport layer for the transmission of triangle-based progressively compressed 3D models. Significant features of the proposed middleware include: 1) handling 3D compressed data streams from multiple progressive compression techniques; 2) considering end-user hardware capabilities to effectively reduce the data size for network delivery; and 3) a minimum-cost dynamic reliable set selector that chooses the transport protocol for each sublayer based on real-time network traffic. Extensive simulations with TCP/UDP and SCTP show that the proposed 3D middleware can achieve the dual objectives of maintaining low transmission delay and small distortion, and thus ...
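The following sketch conveys the general flavour of selecting a "reliable set": for each sublayer, compare the expected distortion of unreliable delivery against the extra delay of reliable delivery and keep the cheaper option. The cost model, thresholds, and names are assumptions for illustration, not the article's selector.

```python
# Hedged sketch of per-sublayer transport selection by cost comparison.
def choose_transports(sublayers, loss_rate, delay_penalty):
    """sublayers: list of (name, distortion_if_lost, size_in_packets)."""
    plan = {}
    for name, distortion_if_lost, size in sublayers:
        unreliable_cost = loss_rate * distortion_if_lost   # expected quality loss
        reliable_cost = delay_penalty * size               # retransmission delay grows with size
        plan[name] = "reliable" if reliable_cost < unreliable_cost else "unreliable"
    return plan

if __name__ == "__main__":
    layers = [("base", 100.0, 4), ("refine-1", 10.0, 6), ("refine-2", 1.0, 8)]
    print(choose_transports(layers, loss_rate=0.05, delay_penalty=0.2))
```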
1999
Polygonal meshes remain the primary representation for visualization of 3D data in a wide range of industries, including manufacturing, architecture, geographic information systems, medical imaging, robotics, entertainment, and military applications. Because of its widespread use, it is desirable to compress polygonal meshes stored in file servers and exchanged over computer networks to reduce storage and transmission time requirements. In this report we describe several schemes that have been recently introduced to represent single and multi-resolution polygonal meshes in compressed form, and to progressively transmit polygonal mesh data. The progressive transmission of polygonal meshes allows the decoder process to make part of a single-resolution mesh, or the low resolution levels of detail of a multi-resolution mesh, available to the rendering system before the whole bitstream is fully received and decoded. It is desirable to combine compression and progressive transmission, but not all the existing methods exhibit both features. These progressive transmission schemes are closely related to surface simplification or decimation methods, which change the surface topology while approximating the geometry, and can be regarded as lossy compression schemes as well. Finally, we describe in more detail the Topological Surgery and Progressive Forest Split schemes that are currently part of the MPEG-4 multimedia standard.
2013
We present a software architecture for distributing and rendering gigantic 3D triangle meshes on common handheld devices. Our approach copes with severe bandwidth and hardware limitations by means of a compression-domain adaptive multiresolution rendering approach. The method uses a regular conformal hierarchy of tetrahedra to spatially partition the input 3D model and to arrange mesh fragments at different resolutions. We create compact GPU-friendly representations of these fragments by constructing cache-coherent strips that index locally quantized vertex data, exploiting the bounding tetrahedron to create a local barycentric parametrization of the geometry. For the first time, this approach supports local quantization in a fully adaptive seamless 3D mesh structure. For web distribution, further compression is obtained by exploiting local data coherence for entropy coding. At run time, mobile viewer applications adaptively refine a local multiresolution model maintained on the GPU by asynchronously loading the required fragments from a web server. CPU and GPU cooperate for decompression, and shaded rendering of colored meshes is performed at interactive speed directly from an intermediate compact representation using only 8 bytes per vertex, thereby coping with both memory and bandwidth limitations. The quality and performance of the approach are demonstrated with the interactive exploration of gigatriangle-sized models on common mobile platforms.
IEEE Computer Graphics and Applications, 1999
Several factors currently limit the size of Virtual Reality Modeling Language (VRML) models that can be effectively visualized over the Web. Principal factors include network bandwidth limitations and inefficient encoding schemes for geometry and associated properties. The delays caused by these factors reduce the attractiveness of using VRML for a large range of virtual reality models, CAD data, and scientific visualizations. The Moving Picture Experts Group's MPEG-4 addresses the problem of efficiently encoding VRML scene graphs. MPEG-4 version 2 contains a 3D mesh coding toolkit to compress IndexedFaceSet and LOD nodes, featuring progressive transmission [1]. In this article we propose a framework to mitigate the effects on users of long delays in delivering VRML content. Our solution is general and can work independently of VRML. We exploit the powerful prototyping mechanisms in VRML 2 to illustrate how our techniques might be used to stream geometric content in a VRML environment. Our framework for the progressive transmission of geometry has three main parts, as follows:
Proceedings of the Seventeenth ACM International Conference on Multimedia - MM '09, 2009
Progressive mesh streaming is increasingly used in 3D networked applications, such as online games, virtual worlds, and digital museums. To scale such applications to a large number of users without high infrastructure cost, we apply peer-to-peer techniques to mesh streaming. We consider two issues: how to partition a progressive mesh into chunks and how to look up the provider of a chunk. For the latter issue, we investigate two solutions, which trade off server overhead and response time. The first uses a simple centralized lookup service, while the second organizes peers into groups according to the hierarchical structure of the progressive meshes to take advantage of access patterns. Simulation results show that our proposed systems are robust under high churn rates, reduce the server overhead by more than 90%, keep control overhead below 10%, and achieve low average response time.
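As a rough sketch of the second, hierarchy-aware lookup variant, the code below groups peers by the progressive-mesh level they have already downloaded, so a chunk at level L is sought among peers that have reached level L or beyond. The bookkeeping is a toy in-memory model with assumed names, not the paper's protocol.

```python
# Toy group-based lookup keyed by the level of the progressive mesh a peer holds.
from collections import defaultdict

class GroupedLookup:
    def __init__(self):
        self.groups = defaultdict(set)           # level -> set of peer ids

    def join(self, peer_id, reached_level):
        self.groups[reached_level].add(peer_id)

    def providers(self, chunk_level, max_level=32):
        """Peers whose reached level is at least chunk_level can serve the chunk."""
        found = set()
        for level in range(chunk_level, max_level + 1):
            found |= self.groups.get(level, set())
        return found

if __name__ == "__main__":
    lookup = GroupedLookup()
    lookup.join("peer-a", 3)
    lookup.join("peer-b", 1)
    print(lookup.providers(chunk_level=2))       # {'peer-a'}
```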
IEEE Visualization, 2005. VIS 05, 2005
Recent years have seen an immense increase in the complexity of geometric data sets. Today's gigabyte-sized polygon models can no longer be completely loaded into the main memory of common desktop PCs. Unfortunately, current mesh formats, which were designed years ago when meshes were orders of magnitude smaller, do not account for this. Using such formats to store large meshes is inefficient and complicates all subsequent processing. We describe a streaming format for polygon meshes that is simple enough to replace current offline mesh formats and is more suitable for representing large data sets. Furthermore, it is an ideal input and output format for I/O-efficient out-of-core algorithms that process meshes in a streaming, possibly pipelined, fashion. This paper chiefly concerns the underlying theory and the practical aspects of creating and working with this new representation. In particular, we describe desirable qualities for streaming meshes and methods for converting meshes from a traditional to a streaming format. A central theme of this paper is the issue of coherent and compatible layouts of the mesh vertices and polygons. We present metrics and diagrams that characterize the coherence of a mesh layout and suggest appropriate strategies for improving its "streamability." To this end, we outline several out-of-core algorithms for reordering meshes with poor coherence, and present results for a menagerie of well known and generally incoherent surface meshes.
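The small sketch below conveys the flavour of such a streaming format: vertex and triangle records are interleaved in a coherent order, and a vertex is "finalized" once no later triangle will reference it, so a streaming consumer can release it. The record layout and names are illustrative assumptions, not the format defined in the paper.

```python
# Toy streaming-mesh emitter: ('v', idx, pos), ('f', (i, j, k)) and ('x', idx) records.
def stream_records(vertices, triangles, last_use):
    """last_use[i] is the index of the last triangle that references vertex i.
    Assumes a coherent layout: vertices are first used in increasing index order."""
    emitted = 0
    for t_idx, tri in enumerate(triangles):
        for v in tri:
            while emitted <= v:                  # send vertices the triangle needs
                yield ("v", emitted, vertices[emitted])
                emitted += 1
        yield ("f", tri)
        for v in tri:                            # finalize vertices no longer needed
            if last_use[v] == t_idx:
                yield ("x", v)

if __name__ == "__main__":
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
    tris = [(0, 1, 2), (1, 3, 2)]
    last = {0: 0, 1: 1, 2: 1, 3: 1}
    for rec in stream_records(verts, tris, last):
        print(rec)
```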
Highly detailed geometric models are rapidly becoming commonplace in computer graphics. These models, often represented as complex triangle meshes, challenge rendering performance, transmission bandwidth, and storage capacities. This paper introduces the progressive mesh (PM) representation, a new scheme for storing and transmitting arbitrary triangle meshes. This efficient, loss-less, continuous-resolution representation addresses several practical problems in graphics: smooth geomorphing of level-of-detail approximations, progressive transmission, mesh compression, and selective refinement. In addition, we present a new mesh simplification procedure for constructing a PM representation from an arbitrary mesh. The goal of this optimization procedure is to preserve not just the geometry of the original mesh, but more importantly its overall appearance as defined by its discrete and scalar appearance attributes such as material identifiers, color values, normals, and texture coordinates. We demonstrate construction of the PM representation and its applications using several practical models.
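A much-simplified sketch of the progressive mesh idea follows: a coarse base mesh plus an ordered list of vertex-split records, each reversing one edge collapse performed during simplification. Field names are illustrative, and attribute handling and the re-targeting of existing faces are deliberately omitted, so this is not the PM representation itself.

```python
# Simplified vertex-split refinement over a base mesh (illustrative only).
from dataclasses import dataclass

@dataclass
class VertexSplit:
    v_keep: int            # vertex kept by the original edge collapse
    new_position: tuple    # position of the vertex re-introduced by the split
    kept_position: tuple   # corrected position of the kept vertex
    new_faces: list        # triangles (index triples) added by the split

def refine(vertices, faces, splits, n):
    """Apply the first n vertex splits to a copy of the base mesh.
    A real PM also re-targets some existing faces from v_keep to the new
    vertex and restores per-wedge attributes; both are omitted here."""
    vertices, faces = list(vertices), list(faces)
    for s in splits[:n]:
        vertices.append(s.new_position)          # new vertex gets the next index
        vertices[s.v_keep] = s.kept_position
        faces.extend(s.new_faces)
    return vertices, faces
```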
2008 12th International Conference on Computer Supported Cooperative Work in Design, 2008
Large-volume 3D triangle mesh models are difficult to render, store, and transmit over the Internet. This research investigates a 3D streaming technology to overcome the difficulties of model transmission over the web. Edge-collapse-based mesh simplification and progressive mesh refinement are studied, which are crucial to establishing the 3D streaming technology. A prototype with a GUI for mesh simplification and refinement, together with a peer-to-peer network architecture, is developed to implement 3D streaming for Internet-enabled transmission of 3D mesh models.
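For orientation, here is a minimal sketch of edge-collapse simplification using the simplest possible cost metric (edge length); practical systems such as the one described above typically use quadric or appearance-based error metrics, and they also record the information needed to refine the mesh again. Names and the brute-force data layout are assumptions.

```python
# Naive edge-collapse simplification: repeatedly collapse the shortest edge.
import math

def collapse_shortest_edges(vertices, faces, target_face_count):
    def length(a, b):
        return math.dist(vertices[a], vertices[b])

    faces = [tuple(f) for f in faces]
    while len(faces) > target_face_count:
        edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
        u, v = min(edges, key=lambda e: length(*e))
        # Collapse v into u: move u to the midpoint, rewrite faces, drop degenerates.
        vertices[u] = tuple((a + b) / 2 for a, b in zip(vertices[u], vertices[v]))
        faces = [tuple(u if idx == v else idx for idx in f) for f in faces]
        faces = [f for f in faces if len(set(f)) == 3]
    return vertices, faces

if __name__ == "__main__":
    verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 0.1)]
    tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
    print(collapse_shortest_edges(verts, tris, target_face_count=2))
```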
2006
Three-dimensional (3D) meshes are used intensively in distributed graphics applications where model data is transmitted on demand to users' terminals and rendered for interactive manipulation. This paper presents a transmission system for such applications with the objective of minimizing the latency between user input and response. In particular, we represent 3D models at multiple resolutions to allow fast and scalable rendering, and provide them with unequal error protection and/or retransmission when sending the data over a lossy link. The transmission policies are determined adaptively, based on environment variables, in linear computation time. Simulation results show that the proposed transmission system achieves a 20-30% reduction in delivery latency compared to the state-of-the-art approach in the literature.
2008 IEEE International Conference on Multimedia and Expo, 2008
Nowadays, the Internet provides a convenient medium for sharing complex 3D models online. However, transmitting progressive 3D meshes over networks may encounter packet loss, which can lead to connectivity inconsistency and distortion of the reconstructed meshes. In this paper, we combine reliable and unreliable channels to reduce both time delay and mesh distortion, and we propose an error-concealment scheme for tolerating packet loss when the meshes are transmitted over unreliable network channels. When connectivity data is lost, the decoder can predict the geometry data and mesh connectivity information and construct an approximation of the original mesh. Therefore, the proposed error-concealment scheme can significantly reduce the data size that must be transmitted over reliable channels. The results show that both the computational cost of our error-concealment scheme and the distortion it introduces are small.
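To illustrate the general kind of prediction an error-concealment decoder can fall back on, the sketch below approximates a missing vertex by the centroid of its already-decoded neighbours. This is a generic predictor offered for intuition only, not the concealment scheme defined in the paper.

```python
# Generic concealment predictor: estimate a lost vertex from its known neighbours.
def conceal_vertex(neighbour_positions):
    if not neighbour_positions:
        return (0.0, 0.0, 0.0)                   # nothing to predict from
    n = len(neighbour_positions)
    return tuple(sum(coord) / n for coord in zip(*neighbour_positions))

if __name__ == "__main__":
    ring = [(0, 0, 0), (2, 0, 0), (1, 2, 0)]
    print(conceal_vertex(ring))                  # (1.0, 0.666..., 0.0)
```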
2006
For PCs and even mobile devices, video and image streaming technologies, such as H.264 and JPEG/JPEG 2000, are already mature. However, 3D model streaming technology is still far from practical use. We therefore ask whether 3D model streaming can directly benefit from current image and video streaming technologies. Hence, in this paper, we propose a mesh streaming method based on geometry images [3] to represent a 3D model or a 3D scene and integrate it into an existing client-server multimedia streaming server. In this method, the mesh data of a 3D model is first converted into a JPEG 2000 (J2K) image. Based on the JPEG 2000 streaming technique, the mesh data can then be transmitted over the Internet as a mesh stream. Furthermore, the view-dependent issue is also taken into account. Moreover, since this method is based on the JPEG 2000 standard, our system is well suited for integration into any existing image and video streaming system.
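A hedged sketch of the packing step only: given vertex positions already resampled onto a regular 2D grid (the geometry-image parameterization itself is the hard part and is omitted), quantize x/y/z into the three channels of an image and write it as JPEG 2000. It assumes NumPy and a Pillow build with OpenJPEG support, uses coarse 8-bit quantization, and is not the paper's encoder.

```python
# Pack a grid of resampled positions into an RGB image and save it as JPEG 2000.
import numpy as np
from PIL import Image

def save_geometry_image(grid_positions, path="geometry.jp2"):
    """grid_positions: float array of shape (H, W, 3) with resampled positions."""
    pos = np.asarray(grid_positions, dtype=np.float64)
    lo, hi = pos.min(axis=(0, 1)), pos.max(axis=(0, 1))
    quantized = ((pos - lo) / np.maximum(hi - lo, 1e-12) * 255).astype(np.uint8)
    Image.fromarray(quantized).save(path, format="JPEG2000")
    return lo, hi              # needed later to de-quantize on the client

if __name__ == "__main__":
    grid = np.random.rand(64, 64, 3)             # stand-in for a resampled mesh
    print(save_geometry_image(grid))
```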
ACM Transactions on Multimedia Computing, Communications, and Applications, 2014
Online galleries of 3D models typically provide two ways to preview a model before the model is downloaded and viewed by the user: (i) by showing a set of thumbnail images of the 3D model taken from representative views (or keyviews); (ii) by showing a video of the 3D model as viewed from a moving virtual camera along a path determined by the content provider. We propose a third approach called preview streaming for mesh-based 3D objects: by streaming and showing parts of the mesh surfaces visible along the virtual camera path. This article focuses on the preview streaming architecture and framework and presents our investigation into how such a system can best handle network congestion. We present three basic methods: (a) STOP-AND-WAIT, where the camera pauses until sufficient data is buffered; (b) REDUCE-SPEED, where the camera slows down in accordance with the reduced network bandwidth; and (c) REDUCE-QUALITY, where the camera continues to move at the same speed but fewer vertices are sent and displayed, leading to lower mesh quality. We further propose two advanced methods: (d) KEYVIEW-AWARE, which trades off mesh quality and camera speed appropriately depending on how close the current view is to the keyviews, and (e) ADAPTIVE-ZOOM, which improves visual quality by moving the virtual camera away from the original path. A user study reveals that our KEYVIEW-AWARE method is preferred over the basic methods. Moreover, the ADAPTIVE-ZOOM scheme compares favorably to the KEYVIEW-AWARE method, showing that path adaptation is a viable approach to handling bandwidth variation.
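The sketch below shows, in the simplest terms, how the three basic policies named above could react to the amount of buffered mesh data each frame. The thresholds, scaling factors, and the per-frame interface are illustrative assumptions, not the article's system.

```python
# Per-frame reaction of the three basic congestion-handling policies.
def step(policy, buffered_bytes, needed_bytes, speed, quality):
    """Return (camera_speed, mesh_quality) for the next frame."""
    starving = buffered_bytes < needed_bytes
    if policy == "STOP-AND-WAIT":
        return (0.0 if starving else speed), quality          # pause until data arrives
    if policy == "REDUCE-SPEED":
        ratio = min(1.0, buffered_bytes / max(needed_bytes, 1))
        return speed * ratio, quality                         # slow down with the bandwidth
    if policy == "REDUCE-QUALITY":
        return speed, (quality * 0.5 if starving else quality)  # keep moving, drop detail
    raise ValueError(policy)

if __name__ == "__main__":
    for p in ("STOP-AND-WAIT", "REDUCE-SPEED", "REDUCE-QUALITY"):
        print(p, step(p, buffered_bytes=500, needed_bytes=1000, speed=1.0, quality=1.0))
```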
Computer Graphics Forum, 2012
The constantly increasing complexity of polygonal models in interactive applications poses two major problems. First, the number of primitives that can be rendered at real-time frame rates is currently limited to a few million. Second, less than 45 million triangles (with vertices and normals) can be stored per gigabyte. Although the rendering time can be reduced using level-of-detail (LOD) algorithms, which represent a model at different complexity levels, these often even increase memory consumption. Out-of-core algorithms solve this problem by transferring the data currently required for rendering from external devices. Compression techniques are commonly used because of the limited bandwidth. The main problem of compression and decompression algorithms is that they allow only coarse-grained random access. A similar problem occurs in view-dependent LOD techniques: because of the interdependency of split operations, the adaptation rate is reduced, leading to visible popping artefacts during fast movements. In this paper, we propose a novel algorithm for real-time view-dependent rendering of gigabyte-sized models. It is based on a neighbourhood-dependency-free progressive mesh data structure. Using a per-operation compression method, it is suitable for parallel random-access decompression and out-of-core memory management without storing decompressed data.
Current mesh compression schemes encode triangles and vertices in an order derived from systematically traversing the connectivity graph. These schemes struggle with gigabyte-sized mesh input where the construction and the usage of the data structures that support topological traversal queries become I/O-inefficient and require large amounts of temporary disk space. Furthermore, they expect the entire mesh as input. Since meshes cannot be compressed until their generation is complete, they have to be stored at least once in uncompressed form. We radically depart from the traditional approach to mesh compression and propose a scheme that incrementally encodes a mesh in the order it is given to the compressor, using only minimal memory resources. This makes the compression process essentially transparent to the user and practically independent of the mesh size. This is especially beneficial for compressing large meshes, where previous approaches spend significant memory, disk, and I/O resources on pre-processing, whereas our scheme starts compressing after receiving the first few triangles.