2001, Proceedings of the thirty-third annual ACM symposium on Theory of computing - STOC '01
…
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of Gaussian perturbations.
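To make the definition concrete, one common way to write it is given below; the notation (running time T_A, input x, Gaussian perturbation g) is ours rather than the paper's, and details such as how the perturbation is normalized vary across presentations.

\[
C_A(n,\sigma) \;=\; \max_{x \,:\, |x| = n} \; \mathbb{E}_{g \sim \mathcal{N}(0,\sigma^2 I)}\bigl[\, T_A(x + g) \,\bigr]
\]

As \(\sigma \to 0\) this recovers worst-case complexity, while for large \(\sigma\) the perturbation dominates the input and the measure approaches an average-case analysis over Gaussian inputs.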
2004
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of Gaussian perturbations.
2003
In smoothed analysis, one measures the complexity of algorithms assuming that their inputs are subject to small amounts of random noise. In an earlier work (Spielman and Teng, 2001), we introduced this analysis to explain the good practical behavior of the simplex algorithm. In this paper, we provide further motivation for the smoothed analysis of algorithms, and develop models of noise suitable for analyzing the behavior of discrete algorithms. We then consider the smoothed complexities of testing some simple graph properties in these models.
Applied Mathematics and Computation, 2007
Average-case analysis forms an interesting and intriguing part of algorithm theory, since it explains why some algorithms with bad worst-case complexity can perform better on average. Well-known examples include quicksort, the simplex method, and a wide variety of computer graphics and computational geometry algorithms. Here we make a statistical case study of the robustness of average-complexity measures, which are derived assuming a uniform input distribution, when the inputs are non-uniform (both discrete and continuous).
Applied Mathematics Letters, 2000
In the statistical analysis of a bubble sort program, we compute its execution times for various parameters. The statistical analysis confirms the specific quadratic dependence of the execution time on the number of items to be sorted. Finally, a pointer toward future research directions is given.
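The kind of experiment described here can be sketched in a few lines of Python (a minimal illustration under our own assumptions, not the authors' actual setup): time bubble sort on random inputs of increasing size and fit a quadratic to the measured times.

```python
import random
import time
import numpy as np

def bubble_sort(a):
    """Plain bubble sort; Theta(n^2) comparisons in the worst and average case."""
    a = list(a)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

sizes, times = [200, 400, 800, 1600], []
for n in sizes:
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    bubble_sort(data)
    times.append(time.perf_counter() - start)

# Fit t(n) = b2*n^2 + b1*n + b0 by least squares; the quadratic term should dominate.
coeffs = np.polyfit(sizes, times, deg=2)
print("fitted [b2, b1, b0]:", coeffs)
```

If the quadratic coefficient dominates the fit, the observed growth matches the quadratic pattern the abstract refers to.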
Bulletin of the American Mathematical Society, 1984
It has been a challenge for mathematicians to theoretically confirm the extremely good performance of simplex algorithms for linear programming. We have confirmed that a certain variant of the simplex method solves problems of order m × n in an expected number of steps which is bounded between two quadratic functions of the smaller dimension of the problem. Our probabilistic assumptions are rather weak.
2004
Abstract: Stochastic modelling of deterministic computer experiments was strongly and correctly advocated by Prof. Jerome Sacks and others [see J. Sacks, W. Welch, T. Mitchell and H. Wynn, Design and Analysis of Computer Experiments, Statistical Science, vol. 4. ...
International Journal of Analysis, 2014
We present a new and improved worst-case complexity model for quicksort, y_worst(n, t_d) = b₀ + b₁n² + g(n, t_d) + ε, where the left-hand side gives the worst-case time complexity, n is the input size, t_d is the frequency of sample elements, and g(n, t_d) is a function of both the input size n and the parameter t_d. The remaining terms, arising from linear regression, have their usual meanings. We claim this to be an improvement over the conventional model, namely y_worst(n) = b₀ + b₁n + b₂n² + ε, which stems from the worst-case O(n²) complexity of this algorithm.
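As a rough illustration of how such regression models might be fitted (our own sketch: the data values and the particular choice of g(n, t_d) as an n·t_d interaction term are placeholders, not taken from the paper):

```python
import numpy as np

# Hypothetical measurements: input size n, sample-element frequency t_d,
# and observed worst-case running time y (placeholder values only).
n  = np.array([1000, 2000, 4000, 8000], dtype=float)
td = np.array([1, 4, 2, 8], dtype=float)
y  = np.array([0.9, 3.7, 14.2, 58.0])

# Conventional model: y = b0 + b1*n + b2*n^2 + eps
X_conv = np.column_stack([np.ones_like(n), n, n**2])
b_conv, *_ = np.linalg.lstsq(X_conv, y, rcond=None)

# Improved model: y = b0 + b1*n^2 + g(n, t_d) + eps, with g(n, t_d)
# taken here as an n*t_d interaction term purely for illustration.
X_new = np.column_stack([np.ones_like(n), n**2, n * td])
b_new, *_ = np.linalg.lstsq(X_new, y, rcond=None)

print("conventional coefficients:", b_conv)
print("improved-model coefficients:", b_new)
```

Comparing the residuals of the two fits on real timing data is one way to judge whether the extra g(n, t_d) term earns its keep.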
International Journal of Advanced Computer Science and Applications, 2019
Asymptotic analysis gives a theoretical, approximate measure of an algorithm's time complexity. The worst-case asymptotic complexity assigns an algorithm to a certain class: it returns the degree of the algorithmic cost function while ignoring the lower-order terms. From a programming perspective, the asymptote considers only the number of iterations in a loop, ignoring the statements inside and outside it. However, every statement has some execution time. This paper provides an effective approach to analyzing algorithms that belong to the same asymptotic class. The theoretical analysis of algorithmic cost functions shows that the difference between the theoretical outputs of two such functions depends upon the difference between their coefficients of n and their constant terms. This difference marks the point at which the algorithms' relative behavior changes. The approach is applied to algorithms with linear asymptotic complexity, considering two algorithms with different numbers of statements outside and inside the loop. The results positively indicate the effectiveness of the proposed approach, as the tables and graphs validate the results of the derived formula.
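The behavioral change point for two algorithms in the same linear class can be made concrete with a small example (hypothetical cost functions, chosen only for illustration):

```python
# Two hypothetical linear cost functions in the same O(n) class:
# f1 does more work per iteration, f2 has a larger fixed setup cost.
def f1(n):
    return 5 * n + 10   # coefficient of n: 5, constant term: 10

def f2(n):
    return 3 * n + 50   # coefficient of n: 3, constant term: 50

# The behavioral change point is where the two costs coincide:
# 5n + 10 = 3n + 50  =>  n = (50 - 10) / (5 - 3) = 20
crossover = (50 - 10) / (5 - 3)
print(crossover)                          # 20.0
print(f1(10) < f2(10), f1(30) < f2(30))   # True False: f1 is cheaper below n = 20, f2 above
```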
Sailing Routes in the World of Computation, 2018
Algorithmic statistics studies explanations of observed data that are good in the algorithmic sense: an explanation should be simple, i.e., have small Kolmogorov complexity, and capture all the algorithmically discoverable regularities in the data. However, this idea cannot be used in practice as is, because Kolmogorov complexity is not computable. In recent years, resource-bounded algorithmic statistics has been developed [7, 8]. In this paper we prove a polynomial-time version of the following result of 'classic' algorithmic statistics. Assume that some data were obtained as the result of some unknown experiment. What kind of data should we expect in a similar situation (repeating the same experiment)? It turns out that the answer to this question can be formulated in terms of algorithmic statistics [6]. We prove a polynomial-time version of this result under a reasonable complexity-theoretic assumption; the same assumption was used by Antunes and Fortnow [1].
Applied Mathematics Letters, 1999
In doing the statistical analysis of a bubble-sort program [1], where all computing operations were of the same type, we observed that the statistical results tallied fairly well with the mathematical claim about the algorithm's computational complexity. In our next algorithm, the computing operations are not of the same type. We test and observe that the statistical measure of the algorithm's complexity, arguably more 'realistic,' does not tally with its mathematical counterpart.