Papers by Vladik Kreinovich

International Journal of Approximate Reasoning, 2009
In many practical situations, we are not satisfied with the accuracy of the existing measurements. There are two possible ways to improve the measurement accuracy:
• first, instead of a single measurement, we can make repeated measurements; the additional information coming from these additional measurements can improve the accuracy of the result of this series of measurements;
• second, we can replace the current measuring instrument with a more accurate one; correspondingly, we can use a more accurate (and more expensive) measurement procedure provided by a measuring lab, e.g., a procedure that includes the use of a higher quality reagent.
In general, we can combine these two ways, and make repeated measurements with a more accurate measuring instrument. What is the appropriate trade-off between sample size and accuracy? This is the general problem that we address in this paper.
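For intuition on this trade-off, here is a back-of-the-envelope sketch (the cost figures and accuracies are illustrative assumptions, not taken from the paper), under the standard assumption of independent measurement errors, so that averaging n repeated measurements reduces the standard deviation by a factor of sqrt(n):

```python
# Illustrative sketch only: compare reaching a target accuracy by repeating
# measurements with a cheap instrument vs. with a more accurate, more
# expensive one.  Assumes independent errors, so n repetitions reduce the
# standard deviation from sigma to sigma / sqrt(n).
import math

def repetitions_needed(sigma: float, target_sigma: float) -> int:
    """Smallest n with sigma / sqrt(n) <= target_sigma."""
    return math.ceil((sigma / target_sigma) ** 2)

target = 0.1  # desired standard deviation of the final estimate
options = {
    "cheap, less accurate instrument": (1.0, 5.0),        # (sigma, cost per measurement)
    "expensive, more accurate instrument": (0.2, 50.0),   # hypothetical numbers
}
for name, (sigma, cost) in options.items():
    n = repetitions_needed(sigma, target)
    print(f"{name}: n = {n}, total cost = {n * cost:.0f}")
```

Which option wins depends on the ratio of accuracies and costs; the paper asks what the optimal combination is in general.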
This chapter presents a rigorous theory of random fuzzy sets in its most general form. Some applications are included.
After the success of stable model semantics and its generalization for programs with classical negations [GL90], the same ideas have been used in [GL91] to include disjunction into the resulting formalization of commonsense reasoning. The resulting semantics of disjunctive logic programs is relatively new, and therefore, few results are known. An additional problem with this semantics is as follows: since it incorporates a larger number of logical connectives, it is inevitably more complicated, and therefore, usual syntactic proofs are far less intuitive than the ones for stable models.
In many engineering applications, we have to combine probabilistic and interval errors. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we often only know the values with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. Such modifications are described in this paper.
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with the situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general second-order formalism for handling different types of uncertainty.
Reliable Computing, 1997
In many interval computation methods, if we cannot guarantee a solution within a given interval, it often makes sense to "inflate" this interval a little bit. There exist many different "inflation" methods. The authors of PASCAL-XSC, after empirically comparing the behavior of different inflation methods, decided to implement the formula [x-, x+]_e = [(1 + e)·x- - e·x+, (1 + e)·x+ - e·x-]. A natural question is: is this choice really optimal (in some reasonable sense), or is it only an empirical approximation to the truly optimal choice?
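Written out explicitly, the quoted formula simply pushes each endpoint outward by e times the width of the interval; a minimal sketch (the tuple-based interval type is an illustrative stand-in, not PASCAL-XSC itself):

```python
# Sketch of the epsilon-inflation formula quoted above:
#   [x-, x+]_e = [(1+e)*x- - e*x+,  (1+e)*x+ - e*x-]
# i.e. each endpoint is moved outward by e times the interval width.
from typing import Tuple

Interval = Tuple[float, float]  # (x_minus, x_plus); illustrative stand-in type

def inflate(x: Interval, e: float) -> Interval:
    lo, hi = x
    return ((1 + e) * lo - e * hi, (1 + e) * hi - e * lo)

print(inflate((1.0, 2.0), 0.1))  # (0.9, 2.1): width 1.0, each side widened by 0.1
```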
Reliable Computing, 2006
In engineering applications, we need to make decisions under uncertainty. Traditionally, in engin... more In engineering applications, we need to make decisions under uncertainty. Traditionally, in engineering, statistical methods are used, methods assuming that we know the probability distribution of di erent uncertain parameters. Usually, we can safely linearize the dependence of the desired quantities y (e.g., stress at di erent structural points) on the uncertain parameters xi { thus enabling sensitivity analysis. Often, the number n of uncertain parameters is huge, so sensitivity analysis leads to a lot of computation time. To speed up the processing, we propose to use special Monte-Carlo-type simulations.
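One well-known Monte-Carlo-type trick for exactly this linearized interval setting is the Cauchy-deviate method; the sketch below is a generic illustration of that idea under stated assumptions (f approximately linear on the box, each input known as x_i = x0_i ± delta_i) and is not necessarily the exact procedure proposed in the paper. Its key feature is that the number of simulations does not grow with n:

```python
# Generic Cauchy-deviate sketch (illustration, not necessarily the paper's
# exact method).  If each Delta x_i is sampled from a Cauchy distribution with
# scale delta_i and f is linear, then Delta y = sum_i c_i * Delta x_i is
# Cauchy with scale D = sum_i |c_i| * delta_i, which is exactly the half-width
# of the linearized range of y.  The median of |Delta y| samples estimates D.
import numpy as np

def cauchy_halfwidth(f, x0, delta, n_sim=200, seed=0):
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    y0 = f(x0)
    diffs = []
    for _ in range(n_sim):
        dx = delta * rng.standard_cauchy(size=x0.size)
        diffs.append(f(x0 + dx) - y0)
    return float(np.median(np.abs(diffs)))

# Example with a linear f; the exact half-width is 2*0.1 + 3*0.2 + 0.5*0.4 = 1.0.
f = lambda x: 2 * x[0] - 3 * x[1] + 0.5 * x[2]
print(cauchy_halfwidth(f, x0=[1.0, 1.0, 1.0], delta=[0.1, 0.2, 0.4]))  # close to 1.0
```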
Studies in Fuzziness and Soft Computing, 2001
An important part of our knowledge is in the form of images. For example, a large amount of geophysical and environmental data comes from satellite photos, a large amount of the information stored on the Web is in the form of images, etc. It is therefore desirable to use this image information in data mining. Unfortunately, most existing data mining techniques have been designed for mining numerical data and are thus not well suited for image databases. Hence, new methods are needed for image mining. In this paper, we show how data mining can be used to find common patterns in several images.
Journal of Computational and Applied Mathematics, 2007
In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t): e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit DL. We must therefore modify the existing statistical algorithms to process such interval data.
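For the simplest characteristic, the mean, the required modification is easy: the mean is increasing in every x(t), so its smallest and largest possible values are obtained by averaging the lower and upper endpoints, respectively. A minimal sketch of this easy case (variance and covariance are substantially harder; the example values are made up for illustration):

```python
# Minimal sketch: the range of the sample mean under interval uncertainty.
from typing import List, Tuple

Interval = Tuple[float, float]  # (lower, upper)

def mean_range(data: List[Interval]) -> Interval:
    n = len(data)
    return (sum(lo for lo, _ in data) / n, sum(hi for _, hi in data) / n)

# Pollution levels: some readings are below the detection limit DL = 0.5 and
# are therefore only known to lie somewhere in [0, DL].
measurements = [(1.2, 1.4), (0.0, 0.5), (2.0, 2.2), (0.0, 0.5)]
print(mean_range(measurements))  # (0.8, 1.15)
```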

22nd International Conference of the North American Fuzzy Information Processing Society, NAFIPS 2003, 2003
Geospatial databases generally consist of measurements related to points (or pixels in the case of raster data), lines, and polygons. In recent years, the size and complexity of these databases have increased significantly, and they often contain duplicate records, i.e., two or more close records representing the same measurement result. In this paper, we use fuzzy measures to address the problem of detecting duplicates in a database consisting of point measurements. As a test case, we use a database of measurements of anomalies in the Earth's gravity field that we have compiled. We show that a natural duplicate deletion algorithm requires (in the worst case) quadratic time, and we propose a new asymptotically optimal O(n log n) algorithm. These algorithms have been successfully applied to gravity databases. We believe that they will prove to be useful when dealing with many other types of point data.
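As a generic illustration of why it is possible to do better than comparing every pair of records (this is not the algorithm from the paper, which works with fuzzy measures on gravity-anomaly point data): once the data are sorted, duplicate candidates are found by looking only at neighbors in the sorted order, which already brings a one-dimensional version of the problem down to O(n log n):

```python
# Illustration only: near-duplicate detection among 1-D readings in O(n log n)
# time by sorting and comparing adjacent values, instead of all O(n^2) pairs.
def find_near_duplicates(values, eps):
    order = sorted(range(len(values)), key=lambda i: values[i])
    pairs = []
    for a, b in zip(order, order[1:]):
        if abs(values[b] - values[a]) <= eps:
            pairs.append((a, b))
    return pairs

readings = [10.02, 3.50, 10.03, 7.10, 3.51]
print(find_near_duplicates(readings, eps=0.02))  # [(1, 4), (0, 2)]
```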
IEEE International Conference on Fuzzy Systems, 2005
The age of fossil species in samples recovered from a well that penetrates an undisturbed sequence of sedimentary rocks increases with depth. The results of biostratigraphic analysis of such a sequence consist of several age-depth values - both known with interval (or fuzzy) uncertainty - and we would like to find, for each possible depth, the interval of the possible
In many real-life situations, we have several types of uncertainty: measurement uncertainty can lead to probabilistic and/or interval uncertainty, expert estimates come with interval and/or fuzzy uncertainty, etc. In many situations, in addition to measurement uncertainty, we have prior knowledge coming from prior data processing, and prior knowledge coming from prior interval constraints. In this paper, on the
International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, 2006
In many real-life situations, we have several types of uncertainty: measurement uncertainty can lead to probabilistic and/or interval uncertainty, expert estimates come with interval and/or fuzzy uncertainty, etc. In many situations, in addition to measurement uncertainty, we have prior knowledge coming from prior data processing and/or prior knowledge coming from prior interval constraints. In this paper, on the example of
In many practical situations, the measurement result z depends not only on the measured value x, but also on the parameters s describing the experiment's setting and on the values of some auxiliary quantities y; the dependence z = f(x, s, y) of z on x, s, and y is usually known. In the ideal case when we know the exact value

In numerical mathematics, one of the most frequently used ways of gauging the quality of different numerical methods is benchmarking. Specifically, once we have methods that work well on some (but not all) problems from a given problem class, we find the problem that is the toughest for the existing methods. This problem becomes a benchmark for gauging how well different methods solve problems that previous methods could not. Once we have a method that works well in solving this benchmark problem, we repeat the process again: by selecting, as a new benchmark, a problem that is the toughest to solve by the new methods, and by looking for a new method that works the best on this new benchmark. At first glance, this idea sounds like a heuristic, but its success in numerical mathematics indicates that this heuristic is either optimal or at least close to optimality. In this paper, we use the geombinatoric approach to prove that benchmarking is indeed asymptotically optimal. What is...
For many practical applications, it is important to solve the seismic inverse problem, i.e., to measure seismic travel times and reconstruct velocities at different depths from this data. The existing algorithms for solving the seismic inverse problem often take too long and/or produce un-physical results, because they do not take into account the knowledge of geophysicist experts. In this paper, we analyze how expert knowledge can be used in solving the seismic inverse problem.
Lecture Notes in Computer Science, 2006
It is known that in general, statistical analysis of interval data is an NP-hard problem: even computing the variance of interval data is, in general, NP-hard. Until now, only one case was known for which a feasible algorithm can compute the variance of interval data: the case when all the measurements are accurate enough, so that even after the measurement, we can distinguish between different measured values x_i. In this paper, we describe several new cases in which feasible algorithms are possible, e.g., the case when all the measurements are done by using the same (not necessarily very accurate) measurement instrument, or at least a limited number of different measuring instruments.
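One way to see where the difficulty comes from: the variance is a convex function of (x_1, ..., x_n), so its largest possible value over interval data is always attained when every x_i sits at an endpoint of its interval; checking all 2^n endpoint combinations is therefore correct but exponential, which is exactly why polynomial-time special cases matter (the smallest possible variance, being a convex minimization problem, is easier). A brute-force sketch, for illustration only and not one of the feasible algorithms described in the paper:

```python
# Brute-force sketch (exponential in n): the largest possible variance of
# interval data is attained at interval endpoints, so it can be found by
# enumerating all 2^n endpoint combinations.
from itertools import product
from statistics import pvariance

def max_variance(intervals):
    return max(pvariance(point) for point in product(*intervals))

data = [(2.0, 2.3), (2.1, 2.4), (1.9, 2.5)]  # interval-valued measurements
print(max_variance(data))
```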
Numerical Algorithms, 2000
In interval computations, the range of each intermediate result r is described by an interval r. To decrease excess interval width, we can keep some information on how r depends on the input x = (x_1, ..., x_n). There are several successful methods of approximating this dependence; in these methods, the dependence is approximated by linear functions (affine arithmetic) or by general polynomials (Taylor series methods). Why linear functions and polynomials? What other classes can we try? These questions are answered in this paper.
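As a reminder of what the "linear functions" option looks like in practice: in affine arithmetic, each quantity is kept as x_0 + x_1·eps_1 + ... + x_k·eps_k with shared noise symbols eps_i in [-1, 1], which preserves dependencies between intermediate results. A minimal sketch with assumed notation (addition, subtraction, and conversion back to an interval only):

```python
# Minimal affine-arithmetic sketch: a quantity is x0 + sum_i xi * eps_i with
# shared noise symbols eps_i in [-1, 1], so dependencies between intermediate
# results are not lost the way they are in plain interval arithmetic.
class AffineForm:
    def __init__(self, x0, terms=None):
        self.x0 = x0                    # central value
        self.terms = dict(terms or {})  # noise symbol -> partial deviation

    def __add__(self, other):
        terms = dict(self.terms)
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) + v
        return AffineForm(self.x0 + other.x0, terms)

    def __sub__(self, other):
        terms = dict(self.terms)
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) - v
        return AffineForm(self.x0 - other.x0, terms)

    def to_interval(self):
        radius = sum(abs(v) for v in self.terms.values())
        return (self.x0 - radius, self.x0 + radius)

# x in [1, 3] is represented as 2 + 1*eps_1; then x - x is exactly [0, 0],
# whereas naive interval arithmetic would give [-2, 2].
x = AffineForm(2.0, {"eps_1": 1.0})
print((x - x).to_interval())  # (0.0, 0.0)
```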
MLQ, 2004
We show how quantum computing can speed up computations related to processing probabilistic, interval, and fuzzy uncertainty.
2008 IEEE International Conference on Fuzzy Systems (IEEE World Congress on Computational Intelligence), 2008
In the ideal case of complete knowledge, for each property P_i (such as high fever, headache, etc.), we know the exact set S_i of all the objects that satisfy this property. In practice, we usually only have partial knowledge. In this case, we only know a lower bound for S_i (the set of all the objects about which we know that P_i holds) and an upper bound for S_i (the set of all the objects about which we know that P_i may hold, i.e., equivalently, for which we have not yet excluded the possibility of P_i). This pair of sets is called a set interval.
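In code, a set interval is simply a pair of sets (a lower bound and an upper bound on the unknown S_i), and operations on properties translate into componentwise set operations; e.g., for the conjunction "P_i and P_j", the tightest enclosing set interval has the intersection of the lower bounds as its lower bound and the intersection of the upper bounds as its upper bound. A small sketch with assumed names (not notation from the paper):

```python
# Small sketch: a set interval is a pair (definitely, possibly) of sets with
# definitely a subset of possibly; the conjunction of two properties
# corresponds to componentwise intersection of the bounds.
from dataclasses import dataclass

@dataclass(frozen=True)
class SetInterval:
    definitely: frozenset  # objects known to satisfy the property
    possibly: frozenset    # objects not yet excluded

    def both(self, other: "SetInterval") -> "SetInterval":
        """Set interval for 'this property and that property'."""
        return SetInterval(self.definitely & other.definitely,
                           self.possibly & other.possibly)

fever = SetInterval(frozenset({"p1"}), frozenset({"p1", "p2", "p3"}))
headache = SetInterval(frozenset({"p1", "p2"}), frozenset({"p1", "p2"}))
print(fever.both(headache))
# SetInterval(definitely=frozenset({'p1'}), possibly=frozenset({'p1', 'p2'}))
```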