Cluster analysis is used to construct fluid flow zones from seismic attributes. The steps are: 1. Remove grid points that contain outliers in any seismic attribute; 2. Scale each attribute to zero mean and unit variance; 3. Use principal component analysis to transform the scaled attributes to uncorrelated principal-component attributes; 4. Group the principal-component attributes into categories of similar seismic response using cluster analysis; 5. Upscale the seismic grid to the computational grid scale using a weighted voting procedure (morphing); 6. Spatially filter cluster assignments to remove small, isolated spots using a weighted voting scheme; 7. Assign a seismic zone to spatially connected elements with the same cluster category. In a Gulf of Mexico test case, the zoning procedure produced useful computational zones and reduced the time required for history matching.
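Steps 1-4 of this workflow can be sketched with scikit-learn. The block below uses synthetic attribute values as a stand-in for real seismic data; the 3-sigma outlier rule, the component count, and k-means (standing in for the unspecified clustering method) are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in: 500 grid points x 4 correlated seismic attributes.
attrs = rng.normal(size=(500, 4))
attrs[:, 1] += 0.8 * attrs[:, 0]    # induce correlation between attributes

# Step 1: drop grid points that are outliers (> 3 sigma) in any attribute.
z = np.abs((attrs - attrs.mean(axis=0)) / attrs.std(axis=0))
clean = attrs[(z < 3.0).all(axis=1)]

# Step 2: scale each attribute to zero mean and unit variance.
scaled = StandardScaler().fit_transform(clean)

# Step 3: transform to uncorrelated principal-component attributes.
pcs = PCA(n_components=3).fit_transform(scaled)

# Step 4: group the PC attributes into categories of similar response.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)
```

Steps 5-7 (morphing, spatial filtering, connected-component zoning) operate on the grid geometry and are omitted here.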
Makara Journal of Science, 2021
Cluster analysis is used to determine possible lithology groupings on the basis of information from seismic data. Specifically, k-means is used in the cluster analysis of different lithologies. Cluster centers are initialized randomly and updated through an iterative, unsupervised process. The cluster analysis uses combinations of complex seismic attributes and spectral decomposition as inputs. The complex seismic attributes are reflection strength and cosine phase: reflection strength clearly delineates lithology boundaries, while the cosine phase characterizes the lithologies. Spectral decomposition is used to detect the presence of channels. The resolution of seismic data generally reaches 90 Hz, and spectral decomposition can produce outputs at intervals as fine as 1 Hz. The spectral components are correlated and redundant; to reduce this redundancy and enhance the trends within the data, we use principal-component spectral analysis. We apply and validate the workflow using the seismic data volume acquired over Boonsville, Texas, USA. The results of the cluster analysis method show good consistency with existing lithological maps interpreted from well data correlations.
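The complex-trace attributes named here can be computed from the analytic signal; a minimal sketch with SciPy on a synthetic trace (the 30 Hz burst and 2 ms sampling are assumed values, not taken from the paper):

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic trace: a 30 Hz burst on a 500-sample trace at 2 ms sampling
# (all parameters are assumed values for illustration).
dt = 0.002
t = np.arange(500) * dt
trace = np.sin(2.0 * np.pi * 30.0 * t) * np.exp(-((t - 0.5) / 0.08) ** 2)

analytic = hilbert(trace)                    # complex (analytic) trace
reflection_strength = np.abs(analytic)       # instantaneous amplitude (envelope)
cosine_phase = np.cos(np.angle(analytic))    # cosine of instantaneous phase
```

Reflection strength is the envelope of the analytic trace, so it is insensitive to phase; the cosine phase preserves waveform character at constant amplitude, which is why the two complement each other as clustering inputs.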
SEG Technical Program Expanded Abstracts 2011, 2011
This paper reviews Kohonen Self-Organizing Maps as one of the clustering algorithms that can generate 3D seismic facies volumes and maps using multiple attributes as input. The study area is the Mississippian Barnett Shale of the Fort Worth Basin in Texas. The aim of the study is to visualize the variation within the shale and possible relationships between these rock types and their seismic expression, and to attempt to delineate bypassed pay after hydraulic fracturing.
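Kohonen SOMs are available in several packages; the sketch below is a deliberately minimal NumPy implementation of the core update rule (the grid size, learning-rate schedule, and the two synthetic "facies" are illustrative assumptions):

```python
import numpy as np

def train_som(data, rows=4, cols=4, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal Kohonen SOM; returns a (rows*cols, n_features) codebook."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    w = rng.normal(size=(rows * cols, d))
    # 2-D grid coordinate of each neuron, used by the neighbourhood function.
    gy, gx = np.divmod(np.arange(rows * cols), cols)
    grid = np.column_stack([gy, gx]).astype(float)
    for it in range(iters):
        frac = it / iters
        lr = lr0 * (1.0 - frac)                  # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5      # decaying neighbourhood width
        x = data[rng.integers(n)]                # random training sample
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))     # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)      # grid distance^2
        h = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian neighbourhood
        w += lr * h[:, None] * (x - w)
    return w

def som_labels(data, w):
    """Assign each sample the index of its best-matching unit."""
    return np.argmin(((data[:, None, :] - w[None]) ** 2).sum(-1), axis=1)

# Two synthetic "facies" with well-separated attribute signatures.
rng = np.random.default_rng(1)
facies_a = rng.normal(0.0, 0.1, size=(100, 3))
facies_b = rng.normal(3.0, 0.1, size=(100, 3))
w = train_som(np.vstack([facies_a, facies_b]))
labels_a, labels_b = som_labels(facies_a, w), som_labels(facies_b, w)
```

Each neuron index acts as a facies class; in a real volume the label map is displayed back at the trace positions to produce the facies volume.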
Data Mining VII: Data, Text and Web Mining and their Business Applications, 2006
Seismic attributes are information extracted from seismic data and are important tools for estimating the geological structure of an area, aiding understanding of the subsurface and reducing uncertainty in interpretations. This comprehension is crucial for tasks such as lithology prediction and reservoir characterization. Seismic attributes are generated by transforming data from a seismic line (two-dimensional data) or a seismic volume (three-dimensional data). This work presents a study of clustering algorithms applied to these attributes; the techniques employed follow two distinct approaches: a self-organizing map for crisp clustering and fuzzy c-means for partial (fuzzy) clustering. The partitions are evaluated with the PBM index, which indicates the best number of groups. Data from a Brazilian oil field are used to test the algorithms.
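Fuzzy c-means differs from crisp clustering in that each sample receives a membership grade in every cluster. A minimal NumPy sketch of the standard alternating update (the cluster count and fuzzifier m are common defaults, and the data are synthetic):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means; returns (centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m                             # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))        # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Synthetic attribute samples drawn from two separated groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, size=(60, 2)),
               rng.normal(4.0, 0.2, size=(60, 2))])
centers, U = fuzzy_cmeans(X, c=2)
hard = U.argmax(axis=1)   # crisp labels, if needed, come from the max grade
```

The membership matrix U is what distinguishes this from k-means: samples near a boundary keep intermediate grades rather than being forced into one class.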
This paper shows how geographical science makes it possible to observe seismic zones, cluster highly sensitive earthquake zones, and cluster spatial data during important geographical processes. It presents simple density-based and k-means clustering techniques. Density-based clustering is performed using density estimation: regions denser than a given threshold are found, and clusters are formed from these dense regions using connectivity and density functions. We also define an optimal number of K locations for k-means clustering, such that the sum of the distances from every point to its nearest of the K centers is minimized, which is called global optimization. With this dataset, clusters are formed using density estimation and k-means clustering. The clustering patterns are also correlated by applying correlation and proximity-measure algorithms, which makes it easy to remove noisy data. This scheme can extract clusters efficiently with a reduced number of comparisons.
Engineering Geology, 2018
Mining seismicity is routinely observed to cluster in space and time due to the spatially distinct rock mass failure processes associated with the temporally dependent process of mining. Assessment of clustered seismicity is important to develop an understanding of, and to quantify, the seismic hazard associated with mining. This article presents a density-based clustering method that is applicable to the assessment of 3D spatial distributions of short-term seismicity. The methodology presented in this article is developed from existing approaches that address the general limitations of density-based clustering algorithms. Synthetically generated seismicity allows for the assessment of the methodology with respect to external and internal performance measures. The clustering of a dataset with known attributes allows confidence to be developed in the capability of the clustering method. Additionally, this internal performance evaluation can represent the relative accuracy of outcomes without prior information concerning dataset attributes. The clustering method is applied to two case studies of mining seismicity. These cases illustrate the general applicability of the clustering method along with the value of evaluating internal performance measures when optimising the selection of parameters and understanding the sensitivity of clustering outcomes to these choices.
J. geophys. Res, 2003
JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 108, NO. B10, 2468, doi:10.1029/2002JB002130, 2003. ... 2. Earthquake Clustering Models. Here we give a short outline of the method for modeling the interrelation of any earthquake ...
Intelligent Control and Automation, 2013
A genetic algorithm-based joint inversion method is presented for evaluating hydrocarbon-bearing geological formations. Conventional inversion procedures routinely used in the oil industry perform the inversion processing of borehole geophysical data locally. Because there are barely more types of data than unknowns at each depth, a set of marginally overdetermined inverse problems has to be solved along the borehole, a rather noise-sensitive procedure. To reduce the effect of noise, the amount of overdetermination must be increased. To fulfill this requirement, we suggest the use of our interval inversion method, which simultaneously inverts all data from a greater depth interval to estimate the petrophysical parameters of reservoirs over the same interval. A series-expansion-based discretization scheme ensures many more data than unknowns, which significantly reduces the estimation error of the model parameters. Knowledge of reservoir boundaries is also required for reserve calculation. Well logs contain information about layer thicknesses, but it cannot be extracted by the local inversion approach. We showed earlier that the depth coordinates of layer boundaries can be determined within the interval inversion procedure. The weakness of the method is that the output of the inversion is highly influenced by arbitrary assumptions made about layer thicknesses when creating a starting model (i.e. the number of layers and the search domain of thicknesses). In this study, we apply an automated procedure for the determination of rock interfaces. We perform multidimensional hierarchical cluster analysis on well-logging data before inversion, which separates the measuring points of different layers on a lithological basis. As a result, the vertical distribution of clusters furnishes the coordinates of layer boundaries, which are then used as initial model parameters for the interval inversion procedure.
The improved inversion method gives a fast, automatic and objective estimation to layer-boundaries and petrophysical parameters, which is demonstrated by a hydrocarbon field example.
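The clustering step described above, hierarchical cluster analysis of well-logging data to locate layer boundaries, can be sketched with SciPy. The two "logs" below are synthetic, and the three-layer structure is an illustrative assumption:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Synthetic two-log "well": 60 depth samples in three layers with distinct
# (gamma-ray, resistivity) signatures; all values are illustrative.
rng = np.random.default_rng(0)
logs = np.vstack([rng.normal([30.0, 2.0], 0.5, size=(20, 2)),   # layer 1
                  rng.normal([90.0, 0.5], 0.5, size=(20, 2)),   # layer 2
                  rng.normal([45.0, 5.0], 0.5, size=(20, 2))])  # layer 3

# Ward-linkage hierarchical clustering, cut at three clusters.
labels = fcluster(linkage(logs, method="ward"), t=3, criterion="maxclust")

# Layer boundaries = sample indices where the cluster label changes with depth.
boundaries = np.flatnonzero(np.diff(labels)) + 1
```

The boundary indices would then seed the starting model of the interval inversion in place of manually chosen layer thicknesses.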
2003
A novel technique based on cluster analysis of the multi-resolutional structure of earthquake patterns is developed and applied to observed and synthetic seismic catalogs. The observed data represent seismic activity around the Japanese islands in the 1997-2003 time interval. The synthetic data were generated by numerical simulations for various cases of a heterogeneous fault governed by 3-D elastic dislocation and power-law creep. At the highest resolution, we analyze the local cluster structure in the data space of seismic events for the two types of catalogs by using an agglomerative clustering algorithm. We demonstrate that small-magnitude events produce local spatio-temporal patches corresponding to neighboring large events. Seismic events, quantized in space and time, generate the multi-dimensional feature space of the earthquake parameters. Using a non-hierarchical clustering algorithm and multidimensional scaling, we explore the multitudinous earthquakes by real-time 3-D visualization and inspection of multivariate clusters. At the resolutions characteristic of the earthquake parameters, all of the ongoing seismicity before and after the largest events accumulates into a global structure consisting of a few separate clusters in the feature space. We show that by combining the clustering results from low- and high-resolution spaces, we can recognize precursory events more precisely and decode vital information that cannot be discerned at a single level of resolution.
Journal of Geophysical Research: Solid Earth
Earthquake clustering properties are investigated in relation to fluid balance H(t) (the difference of fluid injection and production rates) using about nine years of data from The Geysers (both the entire field and a local subset), Coso, and Salton Sea geothermal fields in California. Individual earthquake clusters are identified and classified using the nearest-neighbor approach of Zaliapin and Ben-Zion (2013a,
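The nearest-neighbor approach referenced here assigns each event a proximity to every earlier event and links it to the closest one. A sketch under the commonly used form eta = dt * r^d_f * 10^(-b*m), where the fractal dimension d_f and b-value below are assumed typical values rather than the ones fitted in this study:

```python
import numpy as np

def nearest_neighbor_distances(t, x, y, m, d_f=1.6, b=1.0):
    """Nearest-neighbor proximity in the Zaliapin & Ben-Zion style (sketch).

    eta_ij = dt * r**d_f * 10**(-b * m_i), minimized over earlier events i;
    t in years, x/y in km; d_f and b are assumed typical values.
    """
    n = len(t)
    eta = np.full(n, np.inf)
    parent = np.full(n, -1)
    for j in range(1, n):
        dt = t[j] - t[:j]
        r = np.hypot(x[j] - x[:j], y[j] - y[:j])
        cand = dt * np.maximum(r, 1e-6) ** d_f * 10.0 ** (-b * m[:j])
        cand[dt <= 0] = np.inf                  # only earlier events qualify
        i = int(np.argmin(cand))
        eta[j], parent[j] = cand[i], i
    return eta, parent

# Tiny synthetic catalog: event 2 follows the large event 1 closely in
# space and time, so it should link to it rather than to event 0.
t = np.array([0.0, 1.0, 1.001])
x = np.array([0.0, 50.0, 50.1])
y = np.array([0.0, 50.0, 50.1])
m = np.array([3.0, 6.0, 2.0])
eta, parent = nearest_neighbor_distances(t, x, y, m)
```

Thresholding eta (or its rescaled time and distance components) separates clustered events, such as aftershocks, from background seismicity.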
Interpretation, 2014
Seismic facies estimation is a critical component in understanding the stratigraphy and lithology of hydrocarbon reservoirs. With the adoption of 3D technology and increasing survey sizes, manual techniques of facies classification have become increasingly time consuming. In addition, the number of seismic attributes has increased dramatically, providing increasingly accurate measurements of reflector morphology. However, these seismic attributes add multiple "dimensions" to the data, greatly expanding the amount of data to be analyzed. Principal component analysis and self-organizing maps (SOMs) are popular techniques to reduce such dimensionality by projecting the data onto a lower-order space in which clusters can be more readily identified and interpreted. After dimensionality reduction, popular classification algorithms such as neural networks, K-means, and Kohonen SOMs are routinely applied to well-log prediction and seismic facies modeling. Although these clustering methods have been successful in many hydrocarbon exploration projects, they have some inherent limitations. We explored one of the more recent techniques, known as generative topographic mapping (GTM), which addresses the shortcomings of Kohonen SOMs and aids data classification. We applied GTM to perform multiattribute seismic facies classification of a carbonate conglomerate oil field in the Veracruz Basin of southern Mexico. Finally, we introduced supervision into GTM and calculated the probability of occurrence of the seismic facies seen at the wells over the reservoir units. In this manner, we were able to assign a level of confidence (or risk) to encountering facies that corresponded to good and poor production.
The use of seismic classification has been steadily increasing within E&P interpretation workflows over the past 10 years. It is not yet considered a standard procedure but, with knowledge of the advantages (and limitations) of the different seismic classification methods, its role in the interpretation process as a successful hydrocarbon prediction tool is anticipated to grow.
Second International Meeting for Applied Geoscience & Energy
This work applies unsupervised machine learning (ML) clustering algorithms, such as Self-Organizing Maps (SOM), Generative Topographic Maps (GTM), and K-means, to seismic multi-attributes in order to quickly detect high-productivity carbonate reservoirs (based on well drill stem test, DST, data), such as buildups and associated platform facies.
Advances in Knowledge Acquisition, Transfer, and Management, 2019
Seismology, which is a sub-branch of geophysics, is one of the fields in which data mining methods can be effectively applied. In this chapter, data mining techniques are employed on multivariate seismic data: a non-spatial variable is decomposed, then k-means clustering, density-based spatial clustering of applications with noise (DBSCAN), and hierarchical tree clustering algorithms are applied to the decomposed data, and pattern analysis is conducted on the resulting clusters using spatial data. The analysis suggests that the clustering results combined with spatial data are compatible with reality and that characteristic features of earthquake-related regions can be determined by modeling seismic data with clustering algorithms. The baseline metric reported is clustering time for varying input sizes.
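Since the reported baseline metric is clustering time, a sketch of such a timing comparison using scikit-learn implementations on a synthetic catalog (the event layout and algorithm parameters are illustrative):

```python
import time

import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering, KMeans

rng = np.random.default_rng(0)
# Synthetic "catalog": 2100 events around three centres (illustrative layout).
centres = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
events = np.vstack([c + rng.normal(0.0, 0.8, size=(700, 2)) for c in centres])

timings = {}
for name, model in [
    ("k-means", KMeans(n_clusters=3, n_init=10, random_state=0)),
    ("DBSCAN", DBSCAN(eps=0.5, min_samples=10)),
    ("hierarchical", AgglomerativeClustering(n_clusters=3)),
]:
    t0 = time.perf_counter()
    labels = model.fit_predict(events)
    timings[name] = time.perf_counter() - t0
```

On larger catalogs the asymptotics diverge: hierarchical clustering scales roughly quadratically in memory and time, while k-means and index-accelerated DBSCAN remain far cheaper.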
Grid: A virtual metacomputer that uses a network of geographically distributed local networks, computers, computational resources and services; Grid Computing focuses on distributed computing technologies.
OpenGL: A standard specification defining a cross-language, cross-platform API for writing applications that produce 2D and 3D computer graphics.
Sumatra-Andaman earthquake: An undersea earthquake that occurred at 00:58:53 UTC (07:58:53 local time) on December 26, 2004, with an epicenter off the west coast of Sumatra, Indonesia. The earthquake triggered a series of devastating tsunamis along the coasts of most landmasses bordering the Indian Ocean, killing large numbers of people and inundating coastal communities across South and Southeast Asia, including parts of Indonesia, Sri Lanka, India, and Thailand.
Earthquake catalog: A data set consisting of earthquake hypocenters, origin times, and magnitudes. Additional information may include phase and amplitude readings, as well as first-motion mechanisms and moment tensors.
Pattern recognition: The methods, algorithms and tools used to analyze data based on either statistical information or a priori knowledge extracted from the patterns. The patterns for classification are groups of observations, measurements or objects, defining feature vectors in an appropriate multidimensional feature space.
Data mining: Algorithms, tools, methods and systems used in the extraction of knowledge hidden in large amounts of data.
Features: Denoted f_i or F_j (i, j are feature indices), a set of variables which carry discriminating and characterizing information about the objects under consideration. The features can represent raw measurements (data) f_i or can be generated in a non-linear way from the data (features F_j).
Feature space: The multidimensional space in which the feature vectors are defined. Data and feature vectors represent vectors in their respective spaces.
Feature vector: A collection of features ordered in some meaningful way into a multi-dimensional feature vector F_l (l is the feature vector index) that represents the signature of the object to be identified.
Feature extraction: The procedure of mapping a source feature space into an output feature space of lower dimensionality while retaining the minimal value of an error cost function.
Multidimensional scaling: The nonlinear feature-extraction procedure that minimizes the "stress", a function of the differences between all distances between feature vectors in the source space and the corresponding distances in the resulting space of lower dimensionality.
Data space: The multi-dimensional space in which the data vectors f_k exist.
Data vector: A collection of features ordered in some meaningful way into a multi-dimensional vector f_k (k is the data vector index), f_k = [m_k, z_k, x_k, t_k], where m_k is the magnitude and x_k, z_k, t_k are the epicentral coordinates, depth and time of occurrence, respectively.
Cluster: An isolated set of feature (or data) vectors in data and feature spaces.
Clustering: The computational procedure that extracts clusters in multidimensional feature spaces.
Agglomerative (hierarchical) clustering algorithm: A clustering algorithm in which, at the start, each feature vector represents a separate cluster and larger clusters are built up hierarchically; the procedure repeatedly merges the closest clusters until the desired number of clusters is reached.
k-Means clustering: A non-hierarchical clustering algorithm in which randomly generated cluster centers are improved iteratively.
Multi-resolutional clustering analysis: Clustering can yield a hierarchy of clusters; analyzing the clustering results at various resolution levels allows the extraction of knowledge hidden in both the local (small clusters) and global (large clusters) similarity of multidimensional feature vectors.
N-body solver: An algorithm exploiting the concept of the time evolution of an ensemble of mutually interacting particles.
Non-hierarchical clustering algorithm: A clustering algorithm in which clusters are searched for using global optimization algorithms; the most representative algorithm of this type is the k-means procedure.
Definition of the Subject: Earthquakes have a direct societal relevance because of their tremendous impact on human communities [59]. The genesis of earthquakes is an unsolved problem in the earth sciences because the underlying physical mechanisms are still unknown. Unlike the weather, which can be predicted several days in advance by numerically integrating non-linear partial differential equations on massively parallel systems, earthquake forecasting remains an elusive goal, owing to the lack of direct observations and the fact that the governing equations are still unknown. Instead, one must employ statistical approaches (e.g., [61,72,82]) and data-assimilation techniques (e.g., [6,53,81]). The nature of the spatio-temporal evolution of earthquakes has to be assessed from the observed seismicity and geodetic measurements. Problems of this nature can be analyzed by recognizing non-linear patterns hidden in vast amounts of seemingly unrelated information. With the proliferation of large-scale computations, data mining [77], a time-honored and well-understood process, has come into its own for extracting useful patterns from large incoherent data sets in diverse fields such as astronomy, medical imaging, combinatorial chemistry, bio-informatics, seismology, remote sensing and stock markets [75].
Recent advances in information technology, high-performance computing, and satellite imagery have led to the availability of extremely large data sets, exceeding terabytes, that regularly reach physical scientists who need to analyze them quickly. These data sets are non-trivial to analyze without new computer science algorithms that find solutions with minimal computational complexity. With the imminent arrival of petascale computing by 2011 in the USA, we can expect some breakthrough results from clustering analysis. Indeed, clustering has become a widely successful approach for revealing features and patterns in the data-mining process. We describe the method of using clustering as a tool for analyzing complex seismic data sets and the visualization techniques necessary for interpreting the results. Petascale computing will also spur visualization techniques, which are sorely needed to understand the vast amounts of data compressed in many different kinds of spaces, with spatial, temporal and other types of dimensions [78]. Examples of clusters abound in nature: stars in galaxies, hubs in airline routes and centers of various human relationships [5]. Clustering arises from multi-scale, nonlinear interactions due to rock rheology and earthquakes.
Geophysical Journal International
With seismic catalogues becoming progressively larger, extracting information becomes challenging and calls upon using sophisticated statistical analysis. Data are typically clustered by machine learning algorithms to find patterns or identify regions of interest that require further exploration. Here, we investigate two density-based clustering algorithms, DBSCAN and OPTICS, for their capability to analyse the spatial distribution of seismicity and their effectiveness in discovering highly active seismic volumes of arbitrary shapes in large data sets. In particular, we study the influence of varying input parameters on the cluster solutions. By exploring the parameter space, we identify a crossover region with optimal solutions in between two phases with opposite behaviours (i.e. only clustered and only unclustered data points). Using a synthetic case with various geometric structures, we find that solutions in the crossover region consistently have the largest clusters and...
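The crossover behaviour described, from all-noise solutions at small eps to a single merged cluster at large eps, can be reproduced on synthetic data (all values below are illustrative, and DBSCAN stands in for both algorithms):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense synthetic "seismic volumes" in 3-D plus scattered background.
dense = np.vstack([rng.normal(c, 0.3, size=(200, 3))
                   for c in ([0.0, 0.0, 0.0], [5.0, 5.0, 5.0])])
background = rng.uniform(-3.0, 8.0, size=(100, 3))
points = np.vstack([dense, background])

results = {}
for eps in (0.05, 0.4, 10.0):      # too small / crossover / too large
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(points)
    results[eps] = len(set(labels)) - (1 if -1 in labels else 0)
```

At the small eps no point has enough neighbours to be a core point (zero clusters); at the large eps everything density-connects into one cluster; only the intermediate value recovers the two planted volumes.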
2013
Flow unit characterization plays an important role in heterogeneity analysis and reservoir simulation studies. Usually, a correct description of the lateral variations of a reservoir is associated with uncertainties. From this point of view, the well data alone do not cover the reservoir properties. Because of large well spacings, it is difficult to build the model of a heterogeneous reservoir, but 3D seismic data provide regular sampling that can improve the reservoir's spatial description. In this study, seismic attribute analysis was used to predict flow zone indicator (FZI) values of a carbonate reservoir by using seismic and well log data. First, a 3D acoustic impedance volume was created as an external attribute for seismic data analysis. To improve the ability of FZI prediction, the maximum number of attributes from multiattribute analysis was computed by using a step-wise regression technique. To verify the results of the multiattribute technique, the cross plot analysis of multiatt...
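For reference, the flow zone indicator being predicted is conventionally computed from core porosity and permeability via the reservoir quality index (the sample values below are illustrative):

```python
import numpy as np

def flow_zone_indicator(phi, k_md):
    """FZI = RQI / normalized porosity (Amaefule et al. style definition).

    phi is fractional porosity, k_md is permeability in millidarcies;
    RQI = 0.0314 * sqrt(k/phi) (micrometres), phi_z = phi / (1 - phi).
    """
    rqi = 0.0314 * np.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

# Illustrative core points: (phi, k) = (0.20, 100 mD) and (0.25, 400 mD).
fzi = flow_zone_indicator(np.array([0.20, 0.25]), np.array([100.0, 400.0]))
```

Samples with similar FZI belong to the same hydraulic flow unit, which is why FZI is a natural target for seismic attribute prediction.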
2008
International Journal of ADVANCED AND APPLIED SCIENCES, 2019
Reservoir property modeling is often used to determine lateral extent and continuity in targeted multiple wells for the implementation of any enhanced oil recovery (EOR) method. Some works have reported that most EOR techniques are controlled by the static and dynamic lateral extent and continuity within reservoir pay zone successions, as these contribute to poor recovery in fields. Therefore, delineating lateral extent and continuity using a statistical approach could save time, be less cumbersome, and require less data. This paper uses cluster analysis to delineate the lateral extent across 5 exploratory wells by integrating microscopic petrographic and petrophysical data from 132 subsurface samples and 97 core plugs. Results reveal four main petrofacies clusters of good lateral extent and continuity. The Blue cluster is characterized by massive coarse-grained, massive fine-grained, and massive very fine-grained sandstone lithofacies with average porosity of 18-32% and permeability of 400-1189 mD at depths between 859 and 2129 m. The Green cluster is dominated by massive fine-grained and massive medium-grained sandstone lithofacies exhibiting average porosity of 15-30% and permeability of 110-840 mD at depths from 880 to 1805 m, while the Brown cluster comprises massive medium-grained and massive fine-grained sandstone lithofacies exhibiting average porosity of 18-25% and permeability of 210-657 mD. Understanding porosity-permeability continuity can facilitate effective areal sweep efficiency from an injection well to a production well during recovery. Thus, this approach is important for improving field plans for reservoir engineers.
2010 Second International Conference on Computational Intelligence and Natural Computing, 2010
This paper introduces clustering analysis techniques into reservoir flow unit research. Parameters that characterize the reservoir, such as flow zone indicator, porosity, permeability, clay content, and pore-throat radius (R35), are selected as evaluation parameters. Reservoir flow units in the Wei2 block, Jiangsu oilfield, are classified into four types according to evaluation criteria established by clustering analysis theory. The paper then discusses the main features of each flow unit in combination with sedimentological and performance data. Finally, by analyzing the relation between flow units and remaining oil, the authors point out that type II and III flow units are currently the main zones of remaining-oil distribution and the main targets for further adjustment.
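A classification of the kind described, grouping depth samples into four flow-unit types from their evaluation parameters, can be sketched with scikit-learn (k-means stands in for the paper's clustering criterion, and all values are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic evaluation parameters per depth sample (all values illustrative):
# columns = FZI, porosity (frac), log10 permeability (mD), clay content (frac).
rng = np.random.default_rng(0)
unit_means = np.array([[6.0, 0.30, 3.0, 0.05],
                       [3.0, 0.22, 2.0, 0.15],
                       [1.5, 0.15, 1.0, 0.30],
                       [0.5, 0.08, 0.0, 0.45]])
samples = np.vstack([m + rng.normal(0.0, 0.02, size=(50, 4)) for m in unit_means])

# Standardize so no parameter dominates the distance, then cluster into 4 types.
scaled = StandardScaler().fit_transform(samples)
flow_unit = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
```

Standardization (and taking permeability in log units) matters here: raw permeability spans orders of magnitude and would otherwise dominate the Euclidean distance.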
Acta Polytechnica Hungarica, 2022
New and effective approaches for the analysis of seismic data make it possible to identify the distribution of earthquakes, helping further to assess their frequency of occurrence and any associated risks. This paper proposes an effective approach for detecting areas with increased spatial density of seismic events and for zoning territories on the map based on the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The choice of this clustering algorithm is justified by the fact that DBSCAN can detect clusters of complex shapes, including in geographical coordinates. This study uses seismic data from the seismic catalog of the Republic of Kazakhstan from 2011 to 2021 inclusive. The clusters detected over a certain period of time allowed for the presentation of a spatial model of the distribution of earthquakes and the detection of areas with increased spatial density on the map. In general, the results of the study compared well with the general map of the seismic zoning of the Republic of Kazakhstan, showing reliable results of density-based clustering. In addition, the architecture of an intelligent information and analytical system for analyzing seismic data is based on the proposed approach.
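A minimal sketch of DBSCAN zoning on geographic coordinates: scikit-learn's haversine metric takes (latitude, longitude) in radians, so eps expresses a ground distance divided by the Earth's radius (the epicentre coordinates below are synthetic, not from the Kazakhstan catalog):

```python
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_KM = 6371.0
rng = np.random.default_rng(0)

# Synthetic epicentres (degrees): two dense zones plus scattered background.
zone_a = rng.normal([43.2, 76.9], 0.05, size=(80, 2))
zone_b = rng.normal([42.3, 69.6], 0.05, size=(80, 2))
background = np.column_stack([rng.uniform(40.0, 55.0, 40),
                              rng.uniform(46.0, 87.0, 40)])
coords = np.radians(np.vstack([zone_a, zone_b, background]))  # (lat, lon) rad

# eps is an angle in radians: 25 km of ground distance / Earth radius.
db = DBSCAN(eps=25.0 / EARTH_RADIUS_KM, min_samples=10,
            metric="haversine").fit(coords)
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
```

Points labelled -1 are background events outside any dense zone, which is exactly the noise-tolerance property that motivates DBSCAN for seismic zoning.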