Papers by Carlos López-Vázquez

Communications in Computer and Information Science, 2020
The Friedman Test was proposed in 1937 to analyze tables of ranks, like those arising from a wine contest. If we have N judges and k wines, the standard problem is to analyze a table of N rows and k columns holding the opinions of the judges. The Friedman Test is used to accept/reject the null hypothesis that all the wines are equivalent. Friedman offered an asymptotically valid approximation as well as exact tables for low k and N. The accuracy of the asymptotic approximation for moderate k and N was low, and extended tables were required. The published ones were mostly computed using Monte Carlo techniques. The effort required to compute the extended tables for the case without ties was significant (over 100 years of CPU time), and an alternative using many-core processors is described here for the general case with ties. The solution can also be used for other similar tests that still lack large enough tables.
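
A minimal sketch of how such critical values can be tabulated by Monte Carlo for the no-ties case: under the null hypothesis every ordering of each judge's ranks is equally likely, so the statistic is recomputed over random within-row permutations. The values of N, k, alpha and the number of replicates below are illustrative only, not the paper's configuration.

```python
import numpy as np

def friedman_statistic(ranks):
    """Friedman chi-square statistic for an N x k table of within-row ranks."""
    N, k = ranks.shape
    Rj = ranks.sum(axis=0)                      # column rank sums
    return 12.0 / (N * k * (k + 1)) * np.sum(Rj ** 2) - 3.0 * N * (k + 1)

def critical_value(N, k, alpha=0.05, reps=20_000, seed=0):
    """Monte Carlo estimate of the critical value under H0 (no ties)."""
    rng = np.random.default_rng(seed)
    base = np.arange(1, k + 1)
    stats = np.empty(reps)
    for r in range(reps):
        # Each judge's row is an independent random permutation of 1..k.
        table = np.array([rng.permutation(base) for _ in range(N)])
        stats[r] = friedman_statistic(table)
    return np.quantile(stats, 1.0 - alpha)

print(critical_value(N=8, k=5))   # compare against the chi-square(k-1) approximation
```
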
Based on the K-model, an alternative solution to the problem of forecasting atmospheric pollution other than the Gaussian models (either plume or puff) is presented. It is intended to show its possible use for long-term as well as real-time forecasting. The proposed procedure is expected to take computer time similar to that of Gaussian models, thereby eliminating the main objection to its use. Key word index: advection-diffusion solver, atmospheric diffusion, real-time forecast, air pollution.
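
A minimal sketch of the kind of solver a K-model implies: an explicit finite-difference step for a 1D advection-diffusion equation. The grid, time step, wind speed and eddy diffusivity below are placeholders, not the paper's settings.

```python
import numpy as np

# 1D advection-diffusion:  dc/dt + u * dc/dx = K * d2c/dx2
# Explicit upwind/centred step; stability needs u*dt/dx <= 1 and 2*K*dt/dx**2 <= 1.
nx, dx, dt = 200, 100.0, 5.0        # illustrative grid and time step
u, K = 3.0, 50.0                    # wind speed (m/s) and eddy diffusivity (m^2/s)

c = np.zeros(nx)
c[10] = 1.0                          # unit puff released at one cell

def step(c):
    cn = c.copy()
    adv = -u * dt / dx * (c[1:-1] - c[:-2])               # upwind advection (u > 0)
    dif = K * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2])   # centred diffusion
    cn[1:-1] = c[1:-1] + adv + dif
    return cn

for _ in range(100):
    c = step(c)
```
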

Data Fusion, Data Harmonization and Conflation are equivalent terms which denote the process of merging dataset A and dataset B to produce dataset C, which hopefully has better properties than the original ones. Each term is popular within specific communities (Remote Sensing, Computer Science, Cartography, etc.) without a clear preeminence. This paper is devoted to Geometric Conflation, defined as the process of modifying coordinates of objects of (for example) dataset B in order to fit as closely as possible those also available in dataset A, believed to be more accurate. Recent proposals split the process into three steps. The first one identifies all (or most) corresponding objects that appear both in A and B. Depending on the goal, usually such objects are only “well defined points” (like cross-roads), but the set might also include polylines (like roads) and/or polygons (like parcels). The second step is to report the differences found among such homologue objects. The third step is t...
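
A minimal sketch of the three steps applied to point features, under the assumption (not stated in the abstract) that matching is done by nearest neighbour and the correction is a least-squares affine transform; the function name and the distance threshold are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def conflate_points(A, B, max_dist=5.0):
    """A, B: (n,2) and (m,2) arrays of candidate homologue points.
    Step 1: pair each B point with its nearest A point within max_dist.
    Step 2: report the residual vectors of the pairs.
    Step 3: fit an affine transform moving B onto A (least squares)."""
    tree = cKDTree(A)
    d, idx = tree.query(B, distance_upper_bound=max_dist)
    ok = np.isfinite(d)
    Bm, Am = B[ok], A[idx[ok]]

    residuals = Am - Bm                          # step 2: discrepancies found
    X = np.hstack([Bm, np.ones((len(Bm), 1))])
    T, *_ = np.linalg.lstsq(X, Am, rcond=None)   # step 3: 2D affine, shape (3, 2)
    return residuals, T

# Corrected coordinates: np.hstack([B, np.ones((len(B), 1))]) @ T
```
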
The Cartographic Journal, 2013
The great availability of geographic information, due to Spatial Data Infrastructure development, the existence of data collected by volunteers, etc., makes the problems of geometric interoperability of data very conspicuous. Traditionally, conflation is carefully carried out and evaluated by experts. Yet there are practices that involve occasional users who will look up the information on mobile devices without the intention of keeping a copy. Evaluation will then be carried out with different criteria as well, involving the Human Visual System and perhaps even the characteristics of the physical devices. In this paper, we coin the term 'Ephemeral Conflation' to characterize that context and the procedures to evaluate it.
Estadística (Journal of the Inter …, 1994
... The methodology developed also allows real-time quality control of the newly collected data with a minimum of computing resources, which enables its application independently of large computing equipment. ...
thedigitalmap.com
... Provisorios. Ing. Agrim. Rodolfo Méndez Baillo, [email protected], Servicio Geográfico Militar - URUGUAY; Dr. Ing. Carlos López Vázquez, carlos.lopez@thedigitalmap.com, LatinGEO Sede Uruguay. ABSTRACT: This paper presents ...
idee.es
Abstract: Given two cartographic datasets A and B of the same region but of different origin, it is normal for geometric discrepancies to exist between them. The literature contains a multitude of applicable methods for reducing the discrepancies, but there is little or no ...
thedigitalmap.com
A Positional Accuracy Improvement program (ProMEP by its Spanish acronym; PAI in English) is a systematic process in which one attempts to improve the geometric accuracy of a digital cartographic dataset in a single operation. The final objective ...
thedigitalmap.com
This work attempts to compare different methodologies for the missing value problem of daily rain data. A Monte Carlo simulation was designed, randomly choosing both date and place for the missing values, and afterwards different imputation procedures were successively ...
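
A minimal sketch of that Monte Carlo design: withhold known daily values at random, impute them, and score against the withheld truth. The imputer shown (same-day mean of the remaining stations) is only a stand-in for the procedures actually compared in the paper; array shapes and counts are placeholders.

```python
import numpy as np

def monte_carlo_imputation(rain, n_holes=50, reps=100, seed=1):
    """rain: (days, stations) array of observed daily totals with no gaps.
    Randomly withhold values, impute them, and return the mean RMSE."""
    rng = np.random.default_rng(seed)
    days, stations = rain.shape
    errs = []
    for _ in range(reps):
        work = rain.copy()
        d = rng.integers(0, days, n_holes)      # random dates
        s = rng.integers(0, stations, n_holes)  # random places
        truth = work[d, s].copy()
        work[d, s] = np.nan                     # create the artificial gaps
        # Stand-in imputer: same-day mean of the remaining stations.
        fill = np.nanmean(work[d, :], axis=1)
        errs.append(np.sqrt(np.mean((fill - truth) ** 2)))
    return float(np.mean(errs))
```
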

Corrosion Science, 2007
This paper presents a deterministic model for the damage function of carbon steel, expressed in μm of corrosion penetration as a function of cumulated values of environmental variables. Instead of the traditional linear model, we designed an Artificial Neural Network (ANN) to fit the data. The ANN numerical model shows good results regarding goodness of fit and residual distributions. It achieves an RMSE value of 0.8 μm and an R² of 0.9988, while the classical linear regression model produces 2.6 μm and 0.9805, respectively. In addition, the lack-of-fit F statistic for the ANN model was close to the critical value. The improved accuracy provides a chance to identify the most relevant variables of the problem. The procedure was to add or remove the variables one after the other and perform from scratch the corresponding training of the ANN. After some trial and error as well as phenomenological arguments, we were able to show that some popular meteorological variables like mean relative humidity and mean temperature showed no relevance, while the results were clearly improved by including the hours with RH < 40%. The results as such might be valid only for a limited geographical region, but the procedure is completely general and applicable to other regions.
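
A minimal sketch of the fit-and-screen procedure using scikit-learn's MLPRegressor as a stand-in for the paper's ANN; the input columns, layer size, and synthetic data below are placeholders, not the paper's variables or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X columns: cumulated environmental drivers (e.g. time of wetness,
# hours with RH < 40%); y: corrosion penetration in micrometres.
# All shapes, values and hyperparameters are illustrative only.
rng = np.random.default_rng(0)
X = rng.random((120, 4))
y = 5.0 * X[:, 0] + 2.0 * X[:, 3] + 0.1 * rng.standard_normal(120)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)

resid = y - model.predict(X)
print("RMSE:", float(np.sqrt(np.mean(resid ** 2))))
# Variable screening as described in the abstract: drop one column,
# retrain from scratch, and compare the resulting RMSE with the full model.
```
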

In order to increase the overall performance of distributed parallel programs running in a network of non-dedicated workstations, we have researched methods for improving load balancing in loosely coupled heterogeneous distributed systems. Current software designed to handle distributed applications does not address the problem of forecasting the computers' future load: it only dispatches the tasks, assigning them either to an idle CPU (in dedicated networks) or to the lowest-loaded one (in non-dedicated networks). Our approach tries to improve the standard dispatching strategies provided by both parallel languages and libraries by implementing new dispatching criteria. It chooses the most suitable computer after forecasting the load of the individual machines based on current and historical data. Existing applications could take advantage of this new service with no changes other than a recompilation. A fair comparison between different dispatching algorithms can only be made if they run under the same external network load conditions. To that end, a tool was developed to arbitrarily replicate historical observations of load parameters while running the different strategies. In this environment, the new algorithms are being tested and compared to verify the improvement over the dispatching strategy already available. The overall performance of the system was tested with in-house developed numerical models. The project reported here is connected with other efforts at CeCal devoted to making it easier for scientists and developers to take advantage of parallel computing techniques using low-cost components.
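
A minimal sketch of the dispatching idea under an assumed forecasting scheme (exponential smoothing of each host's historical load samples, which the abstract does not specify); class, host names and the smoothing factor are illustrative.

```python
from collections import defaultdict

class LoadAwareDispatcher:
    """Toy dispatcher: forecast each host's load from its history and
    send the next task to the host with the lowest forecast."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.forecast = defaultdict(float)

    def observe(self, host, load_sample):
        # Exponential smoothing of the observed load average.
        f = self.forecast[host]
        self.forecast[host] = self.alpha * load_sample + (1 - self.alpha) * f

    def pick_host(self, hosts):
        return min(hosts, key=lambda h: self.forecast[h])

d = LoadAwareDispatcher()
for host, load in [("node1", 0.9), ("node2", 0.1), ("node1", 1.2), ("node2", 0.2)]:
    d.observe(host, load)
print(d.pick_host(["node1", "node2"]))   # -> node2
```
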
Statistical Methods and Applications

Transactions in GIS
The widespread availability of powerful desktop computers, easy-to-use software tools and geographic datasets has made the quality of input data a crucial problem. Even though accuracy has been a concern in every serious application, there are no general tools for its improvement. Some particular ones exist, however, and we report here results for a particular case of quantitative raster data: Digital Elevation Models (DEM). We tested two procedures designed to detect anomalous values (also named gross errors, outliers or blunders) in DEMs, but valid also for other quantitative raster datasets. A DEM with elevations varying from 181 to 1044 m derived from SPOT data has been used as a contaminated sample, while a manually derived DEM obtained from aerial photogrammetry has been regarded as the ground truth. That allows a direct performance comparison of the methods with real errors. We assumed that once an outlier location is suggested, a "better" value can be measured or obtained through some methodology. The options differ depending upon the user (end users might only interpolate, while DEM producers might go to the original data and make another reading). In this experiment we simply used the ground truth value. Preliminary results show that, for the available dataset, the accuracy might be improved to some extent with very little effort. Effort is defined here as the percentage of points suggested by the methodology in relation to the total number: thus 100 per cent effort implies that all points have been checked. The method proposed by López (1997) gave poor results, because it was designed for errors with low spatial correlation (which is not the case here). A modified version has been designed and compared also against the method suggested by Felicísimo (1994). The three procedures can be applied both for error detection during DEM generation and by end users, and they might be of use for other quantitative raster data. The choice of the best methodology differs depending on the effort involved.
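
A minimal sketch of a generic blunder detector of the same flavour, flagging cells that deviate strongly from the local median; it is not the method of López (1997) nor Felicísimo (1994), and the window size, threshold and synthetic DEM are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_blunders(dem, window=5, k=3.5):
    """Flag cells whose deviation from the local median exceeds k robust
    standard deviations (MAD-based). Returns a boolean mask."""
    local_med = median_filter(dem, size=window)
    dev = dem - local_med
    mad = np.median(np.abs(dev - np.median(dev)))
    sigma = 1.4826 * mad                     # MAD -> sigma under normal errors
    return np.abs(dev) > k * sigma

rng = np.random.default_rng(0)
dem = 500.0 + 10.0 * rng.standard_normal((200, 200))   # synthetic elevations
dem[50, 60] += 90.0                                    # injected gross error
print(np.argwhere(flag_blunders(dem))[:5])             # suggested locations
```
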

Mine wastes from the operation of the As Pontes coal mine in A Coruña (Spain) contain pyrite and have the potential to generate acid waters when rainwater interacts with them. Their runoff waters are collected into two main channels (North and South) and taken to the liquid-effluent treatment plant. After mine closure in December 2007, dump runoff waters are diverted into the open-pit mine to contribute to the filling of the future open-pit lake. The chemistry of dump runoff waters changes seasonally in response to hydrological events. Here we present a coupled hydrological and geochemical model to predict daily stream flows and chemical quality. The coupled model accounts for three end-member waters and reproduces most of the chemical data measured at the South channel during 2006. Once calibrated, the model has been tested and verified with data collected during 2007 and not used for calibration.
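
A minimal sketch of a three end-member mixing calculation: two conservative tracers plus the mass-balance constraint give a 3x3 linear system for the mixing fractions. The tracer choices and all concentration values are invented placeholders, not the paper's data.

```python
import numpy as np

# Rows: end-members; columns: two conservative tracers (e.g. sulphate,
# chloride, in mg/L). All numbers are illustrative placeholders.
E = np.array([
    [2500.0, 40.0],    # end-member 1: acidic dump leachate
    [ 300.0, 25.0],    # end-member 2: shallow runoff
    [  50.0, 15.0],    # end-member 3: rainwater
])
sample = np.array([900.0, 28.0])     # tracer concentrations measured in the channel

A = np.vstack([E.T, np.ones(3)])     # 3x3 system: two tracers + mass balance
b = np.append(sample, 1.0)           # fractions must sum to 1
fractions = np.linalg.solve(A, b)
print(fractions)                     # mixing fraction of each end-member
```
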
International Journal of Geographical Information Science, 2012
Geometric conflation is the process undertaken to modify the coordinates of features in dataset A in order to match corresponding ones in dataset B. The overwhelming majority of the literature considers the use of points as features to define the transformation. In this article we present a procedure to consider one-dimensional curves also, which are commonly available as Global Navigation ...
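
A minimal sketch of one way to bring 1D curves (e.g. GPS tracks) into a point-based conflation workflow, by resampling each polyline at a fixed spacing along its arc length; this is an assumed preprocessing step for illustration, not the article's actual formulation.

```python
import numpy as np

def densify(polyline, spacing=10.0):
    """Resample a polyline (n,2) at roughly constant spacing along its length,
    so curve correspondences can feed a point-based transformation."""
    seg = np.diff(polyline, axis=0)
    seglen = np.hypot(seg[:, 0], seg[:, 1])
    s = np.concatenate([[0.0], np.cumsum(seglen)])   # arc length at vertices
    targets = np.arange(0.0, s[-1], spacing)
    x = np.interp(targets, s, polyline[:, 0])
    y = np.interp(targets, s, polyline[:, 1])
    return np.column_stack([x, y])

track = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0]])
print(densify(track, spacing=25.0))
```
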

12th AGILE International …, 2009
There exist a number of different algorithms devoted to harmonizing the geometry of two vector datasets. We applied some of them to an urban area in Spain, using orthorectified images of higher accuracy as a ground reference. The metric of success was the US National Standard for Spatial Data Accuracy (NSSDA) figure, based upon RMSE statistics. To simulate a realistic situation where control points might or might not be identified over the available vector dataset, we chose at random only 20 well-identified points and afterwards applied all methods. With just a few trials, the results do not show that method A is clearly superior to method B. We discuss the methodological implications of using just one data case to discard method B in favor of method A. We propose to create a statistically sound framework to test existing and new methods by Monte Carlo simulation in order to build confidence while choosing one method as the best.
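
A minimal sketch of the metric and of the Monte Carlo idea: the NSSDA horizontal accuracy at 95% confidence is commonly reported as 1.7308 × RMSE_r (FGDC standard), and repeating the computation over random draws of 20 check points shows how much the figure varies. The residuals below are synthetic placeholders, not the paper's data.

```python
import numpy as np

def nssda_horizontal(errors_xy):
    """errors_xy: (n,2) residuals (dx, dy) at check points, in metres.
    Returns the NSSDA horizontal accuracy at 95% confidence,
    using the usual 1.7308 * RMSE_r rule."""
    rmse_r = np.sqrt(np.mean(np.sum(errors_xy ** 2, axis=1)))
    return 1.7308 * rmse_r

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.5, size=(200, 2))        # placeholder residuals

# Monte Carlo over repeated random draws of 20 check points, mimicking
# the proposed framework for comparing conflation methods.
draws = [nssda_horizontal(residuals[rng.choice(200, 20, replace=False)])
         for _ in range(1000)]
print(np.mean(draws), np.std(draws))
```
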