2016, Electron. Colloquium Comput. Complex.
Impagliazzo and Wigderson [25] showed that if E = DTIME(2^{O(n)}) requires size 2^{Ω(n)} circuits, then every time-T constant-error randomized algorithm can be simulated deterministically in time poly(T). However, such a polynomial slowdown is a deal breaker when T = 2^{α·n}, for a constant α > 0, as is the case for some randomized algorithms for NP-complete problems. Paturi and Pudlak [30] observed that many such algorithms are obtained from randomized time-T algorithms, for T ≤ 2^{o(n)}, with large one-sided error 1 − ε, for ε = 2^{−α·n}, that are repeated 1/ε times to yield a constant-error randomized algorithm running in time T/ε = 2^{(α+o(1))·n}. We show that if E requires size 2^{Ω(n)} nondeterministic circuits, then there is a poly(n)-time ε-HSG (Hitting-Set Generator) H: {0,1}^{O(log n)+log(1/ε)} → {0,1}^n, implying that time-T randomized algorithms with one-sided error 1 − ε can be simulated in deterministic time poly(T)/ε. In particular, under this hardness assumption, the fastest known constan...
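To make the repetition step above concrete, here is a minimal Python sketch of the standard 1/ε-fold amplification; the names `amplify` and `one_sided_alg`, and the use of ln(1/δ)/ε repetitions, are illustrative assumptions and are not taken from the paper.

```python
import math
import random

def amplify(one_sided_alg, x, eps, target_error=1/3):
    """Boost a one-sided-error randomized algorithm by independent repetition.

    Assumes one_sided_alg(x, coins) accepts a YES instance with probability
    >= eps and never accepts a NO instance (one-sided error 1 - eps).
    About ln(1/target_error)/eps independent runs push the failure probability
    on YES instances below target_error; with eps = 2^{-alpha*n} this is the
    multiplicative 1/eps time overhead discussed in the abstract above.
    """
    repetitions = math.ceil(math.log(1 / target_error) / eps)
    for _ in range(repetitions):
        coins = random.getrandbits(64)   # fresh randomness for each run
        if one_sided_alg(x, coins):      # an accepting run is always correct
            return True
    return False                         # no accepting run: report NO
```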
2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), 2020
In certain complexity-theoretic settings, it is notoriously difficult to prove complexity separations which hold almost everywhere, i.e., for all but finitely many input lengths. For example, a classical open question is whether NEXP ⊂ i.o.-NP; that is, it is open whether nondeterministic exponential time computations can be simulated on infinitely many input lengths by NP algorithms. This difficulty also applies to Williams' algorithmic method for circuit lower bounds [Williams, J. ACM 2014]. In particular, although [Murray and Williams, STOC 2018] proved NTIME[2^{polylog(n)}] ⊄ ACC^0, it has remained an open problem to show that E^NP (2^{O(n)} time with an NP oracle) is not contained in i.o.-ACC^0. In this paper, we show how many infinitely-often circuit lower bounds proved by the algorithmic method can be adapted to establish almost-everywhere lower bounds.
• We show there is a function f ∈ E^NP such that for all sufficiently large input lengths n and ε ≤ o(1), f cannot be (1/2 + 2^{−n^ε})-approximated by 2^{n^ε}-size ACC^0 circuits on inputs of length n, improving lower bounds in [Chen and Ren, STOC 2020] and [Viola, ECCC 2020].
• We construct rigid matrices in P^NP for all but finitely many input lengths, rather than infinitely often as in [Alman and Chen, FOCS 2019] and [Bhangale et al., FOCS 2020].
• We show there are functions in E^NP requiring constant-error probabilistic degree at least Ω(n/log² n) for all large enough n, improving an infinitely-often separation of [Viola, ECCC 2020].
Our key to proving almost-everywhere worst-case lower bounds is a new "constructive" proof of an NTIME hierarchy theorem proved by [Fortnow and Santhanam, CCC 2016], where we show that for every "weak" nondeterministic algorithm (with smaller running time and short witnesses), there exists a "refuter algorithm" that can construct "bad" inputs for the hard language. We use this refuter algorithm to construct an almost-everywhere hard function. To extend our lower bounds to the average case, we prove a new XOR Lemma based on approximate linear sums, and combine it with the PCP-of-proximity applications developed in [Chen and Williams, CCC 2019] and [Chen and Ren, STOC 2020]. As a byproduct of our new XOR Lemma, we obtain a nondeterministic pseudorandom generator for poly-size ACC^0 circuits with seed length polylog(n), which resolves an open question in [Chen and Ren, STOC 2020].
We define a hierarchy of complexity classes that lie between P and RP, yielding a new way of quantifying partial progress towards the derandomization of RP. A standard approach in derandomization is to reduce the number of random bits an algorithm uses. We instead focus on a model of computation that allows us to quantify the extent to which random bits are being used. More specifically, we consider Stack Machines (SMs), which are log-space Turing Machines that have access to an unbounded stack, an input tape of length N, and a random tape of length N^{O(1)}. We parameterize these machines by allowing at most r(N) − 1 reversals on the random tape, thus obtaining the r(N)-th level of our hierarchy, denoted by RPdL[r]. It follows from a result of Cook [Coo71] that RPdL[1] = P, and from one of Ruzzo [Ruz81] that RPdL[exp(N)] = RP. Our main results are the following.
• For every i ≥ 1, derandomizing RPdL[2^{O(log^i N)}] implies the derandomization of RNC^i. Thus, progress towards the P vs RP question along our hierarchy also implies progress towards derandomizing RNC. Perhaps more surprisingly, we also prove a partial converse: pseudorandom generators (PRGs) for RNC^{i+1} are sufficient to derandomize RPdL[2^{O(log^i N)}]; i.e., by derandomizing with PRGs a class believed to be strictly inside P, we derandomize a class containing P. More generally, we introduce Randomness Compilers, a model equivalent to Stack Machines. In this model a polynomial-time algorithm gets an input x and outputs a circuit C_x, which takes random inputs. Acceptance of x is determined by the acceptance probability of C_x. When C_x is of polynomial size and depth O(log^i N), the corresponding class is denoted by P+RNC^i, and we show that RPdL[2^{O(log^i N)}] ⊆ P+RNC^i ⊆ RPdL[2^{O(log^{i+1} N)}].
• We show an unconditional N^{Ω(1)} lower bound on the number of reversals required by an SM for Polynomial Evaluation. This in particular implies that known Schwartz-Zippel-like algorithms for Polynomial Identity Testing cannot be implemented in the lowest levels of our hierarchy.
• We show that in the first level of our hierarchy, machines with one-sided error are as powerful as machines with two-sided and unbounded error.
The starting point of this work is the basic question of whether there exists a formal and meaningful way to limit the computational power that a time-bounded randomized Turing Machine can employ on its randomness. We attack this question using a fascinating connection between space- and time-bounded machines given by Cook [4]: a Turing Machine S running in space s with access to an unbounded stack is equivalent to a Turing Machine T running in time 2^{O(s)}. We extend S with access to a read-only tape containing 2^{O(s)} uniform random bits, and a usual error regime: one-sided or two-sided, and bounded or unbounded. We study the effect of placing a bound p on the number of passes S is allowed over its random tape. It follows from Cook's results that:
• If p = 1 (one-way access) and the error is one-sided unbounded, S is equivalent to deterministic T.
• If p = ∞ (unrestricted access), S is equivalent to randomized T (with the same error).
As our first two contributions, we completely resolve the case of unbounded error. We show that we cannot meaningfully interpolate between deterministic and randomized T by increasing p:
• If p = 1 and the error is two-sided unbounded, S is still equivalent to deterministic T.
• If p = 2 and the error is unbounded, S is already equivalent to randomized T (with the same error).
In the bounded-error case, we consider a logarithmic-space Stack Machine S that is allowed p passes over its randomness. Of particular interest is the case p = 2^{(log n)^i}, where n is the input length and i is a positive integer. Intuitively, we show that S performs polynomial-time computation on its input and parallel (preprocessing plus NC^i) computation on its randomness. Formally, we introduce Randomness Compilers. In this model, a polynomial-time Turing Machine gets an input x and outputs a (polynomial-size, bounded fan-in) circuit C_x that takes random inputs. Acceptance of x is determined by the acceptance probability of C_x. We say that the randomness compiler has depth d if C_x has depth d(|x|). As our third contribution, we show that:
• S simulates, and is in turn simulated by, a randomness compiler with depth O((log n)^i) and O((log n)^{i+1}), respectively.
Randomness Compilers are a formal refinement of polynomial-time randomized Turing Machines that might elicit independent interest.
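As a toy illustration of the Randomness Compiler model just described, here is a minimal Python sketch; `compile_circuit` is a hypothetical stand-in for the polynomial-time compilation step, assumed to return the circuit C_x as a callable together with the number of random bits it reads, and the sampling loop only estimates the acceptance probability that formally defines acceptance.

```python
import random

def compiler_accepts(compile_circuit, x, samples=10_000, threshold=0.5):
    """Estimate acceptance in the Randomness Compiler model (illustrative only).

    compile_circuit(x) is assumed to be the deterministic polynomial-time
    stage that outputs a circuit C_x, represented here as a Python callable
    on a tuple of random bits, plus the number of random bits it reads.
    In the model, acceptance of x is defined by the true acceptance
    probability of C_x; this sketch merely approximates it by sampling.
    """
    c_x, num_random_bits = compile_circuit(x)
    accepting = 0
    for _ in range(samples):
        r = tuple(random.getrandbits(1) for _ in range(num_random_bits))
        accepting += bool(c_x(r))
    return accepting / samples > threshold
```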
Concurrent Computations, 1988
Informally, a randomized algorithm (in the sense of [59] and [82]) is one which bases some of its decisions on the outcomes of coin flips. We can think of the algorithm with one possible sequence of outcomes for the coin flips as being different from the same algorithm with a different sequence of outcomes for the coin flips. Therefore, a randomized algorithm is really a family of algorithms. For a given input, some of the algorithms in this family might run for an indefinitely long time. The objective in the design of a randomized algorithm is to ensure that the number of such bad algorithms in the family is only a small fraction of the total number of algorithms. If for any input we can find at least a (1 − ε) portion of algorithms in the family (ε being very close to 0) that will run quickly on that input, then clearly a random algorithm in the family will run quickly on any input with probability ≥ (1 − ε). In this case we say that this family of algorithms (or this randomized algorithm) runs quickly with probability at least (1 − ε); ε is called the error probability. Observe that this probability is independent of the input and the input distribution.

To give a flavor for the above notions, we now give an example of a randomized algorithm. We are given a polynomial of n variables f(x_1, ..., x_n) over a field F. It is required to check whether f is identically zero. We generate a random n-vector (r_1, ..., r_n) (r_i ∈ F, i = 1, ..., n) and check whether f(r_1, ..., r_n) = 0. We repeat this for k independent random vectors. If there was at least one vector on which f evaluated to a nonzero value, then of course f is nonzero. If f evaluated to zero on all k vectors tried, we conclude that f is zero. It can be shown (see section 2.3.1) that the probability of error in our conclusion will be very small if we choose a sufficiently large k (a concrete sketch of this test appears after this abstract). In comparison, the best known deterministic algorithm for this problem is much more complicated and has a much higher time bound.

1.2 Advantages of Randomization
Randomized algorithms have many advantages. Two extremely important ones are their simplicity and efficiency. A major portion of the randomized algorithms found in the literature are much simpler and easier to understand than the best deterministic algorithms for the same problems. The reader will already have got a feel for this from the above example of testing whether a polynomial is identically zero. Randomized algorithms have also been shown to yield better complexity bounds. Numerous examples could be given to illustrate this fact, but we won't list them all here, since the algorithms described in the rest of the paper will convince the reader. A skeptical reader at this point might ask: how dependable are randomized algorithms in practice, when after all there is a nonzero probability that they might fail? Such a reader must realize that there is a probability (however small it might be) that the hardware itself might fail. Adleman and Manders [1] remark that if we can find a fast algorithm for a problem with an error probability < 2^{−k} for some integer k independent of the problem size, we can reduce the error probability far below the hardware error probability by making k large enough.

1.3 Randomization in Parallel Algorithms
The tremendously low cost of hardware nowadays has prompted computer scientists to design parallel machines and algorithms to solve problems very efficiently. In an early paper, Reif [68] proposed using randomization in parallel computation.
In this paper he also solved many algebraic and graph-theoretic problems in parallel using randomization. Since then a new area of CS research has evolved that tries to exploit the special features offered by both randomization and parallelization. This paper demonstrates the power of randomization in obtaining efficient parallel algorithms for various important computational problems.

1.4 Different types of randomized algorithms
Two types of randomized algorithms can be found in the literature: 1) those that always output the correct answer but whose run-time is a random variable with a specified mean; these are called Las Vegas algorithms; and 2) those that run for a specified amount of time and whose output will be correct with a specified probability; these are called Monte Carlo algorithms. The primality testing algorithm of Rabin [59] is of the second type. The error of a randomized algorithm can be either 1-sided or 2-sided. Consider a randomized algorithm for recognizing a language; its output is either yes or no. There are algorithms which, when outputting yes, will always be correct, but when outputting no will be correct only with high probability. These algorithms are said to have 1-sided error. Algorithms that have nonzero error probability on both possible outputs are said to have 2-sided error.
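The zero-testing example mentioned in this abstract can be written out in a few lines. The following Python sketch is illustrative rather than the survey's own code (the function names and the prime modulus are assumptions); its error guarantee is the standard Schwartz-Zippel bound.

```python
import random

def is_probably_zero(f, num_vars, field_size, trials=20):
    """Randomized test of whether a polynomial is identically zero.

    f is assumed to be a callable evaluating the polynomial at a point given
    as field elements represented by integers mod field_size (a prime).
    If f is identically zero, the test always answers True.  If f is nonzero
    of total degree d, each random point evaluates to nonzero with probability
    at least 1 - d/field_size (Schwartz-Zippel), so after `trials` independent
    points the error probability is at most (d/field_size)**trials.
    """
    for _ in range(trials):
        point = tuple(random.randrange(field_size) for _ in range(num_vars))
        if f(*point) % field_size != 0:
            return False            # witnessed a nonzero value: f is not zero
    return True                     # f was zero at every sampled point


# Example: f(x, y) = (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
zero_poly = lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2
print(is_probably_zero(zero_poly, num_vars=2, field_size=10**9 + 7))  # True
```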
Theoretical Computer Science, 1999
Up to now, the known derandomization methods for BPP have been derived assuming the existence of an EXP function that has "hard" average-case circuit complexity. In this paper we instead present the first construction of a derandomization method for BPP that relies on the existence of an EXP function that is hard only in the worst case.
Journal of Statistical Mechanics: Theory and Experiment, 2011
Random instances of feedforward Boolean circuits are studied both analytically and numerically. Evaluating these circuits is known to be a P-complete problem and thus, in the worst case, believed to be impossible to perform, even given a massively parallel computer, in time much less than the depth of the circuit. Nonetheless, it is found that for some ensembles of random circuits, saturation to a fixed truth value occurs rapidly so that evaluation of the circuit can be accomplished in much less parallel time than the depth of the circuit. For other ensembles saturation does not occur and circuit evaluation is apparently hard. In particular, for some random circuits composed of connectives with five or more inputs, the number of true outputs at each level is a chaotic sequence. Finally, while the average case complexity depends on the choice of ensemble, it is shown that for all ensembles it is possible to simultaneously construct a typical circuit together with its solution in polylogarithmic parallel time.
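A toy simulation of such a layered random circuit, with ensemble parameters chosen by us for illustration (the paper's ensembles may differ), shows the quantity being tracked above: the fraction of true gate outputs at each level, which either saturates or fluctuates as the depth grows.

```python
import random

def simulate_random_circuit(width=1000, depth=200, fan_in=5, seed=0):
    """Toy layered random feedforward circuit (illustrative parameters only).

    Each gate at a level reads `fan_in` randomly chosen outputs of the
    previous level and applies a randomly chosen Boolean connective given
    by a random truth table.  Returns the fraction of true outputs per
    level, the level-to-level sequence discussed in the abstract above.
    """
    rng = random.Random(seed)
    level = [rng.random() < 0.5 for _ in range(width)]        # random inputs
    fractions = [sum(level) / width]
    for _ in range(depth):
        new_level = []
        for _ in range(width):
            inputs = tuple(rng.choice(level) for _ in range(fan_in))
            table = rng.getrandbits(2 ** fan_in)              # random connective
            index = sum(bit << i for i, bit in enumerate(inputs))
            new_level.append(bool((table >> index) & 1))
        level = new_level
        fractions.append(sum(level) / width)
    return fractions

print(simulate_random_circuit()[:10])   # fraction of true outputs, level by level
```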
computational complexity, 2008
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences:
Journal of Computer and System Sciences, 2001
We propose a new approach towards derandomization in the uniform setting, where it is computationally hard to find possible mistakes in the simulation of a given probabilistic algorithm. The approach consists in combining both easiness and hardness complexity assumptions: if a derandomization method based on an easiness assumption fails, then we obtain a certain hardness test that can be used to remove error in BPP algorithms. As an application, we prove that every RP algorithm can be simulated by a zero-error probabilistic algorithm, running in expected subexponential time, that appears correct infinitely often (i.o.) to every efficient adversary. A similar result by Impagliazzo and Wigderson (FOCS'98) states that BPP allows deterministic subexponential-time simulations that appear correct with respect to any efficiently sampleable distribution i.o., under the assumption that EXP ≠ BPP; in contrast, our result does not rely on any unproven assumptions. As another application of our techniques, we get the following gap theorem for ZPP: either every RP algorithm can be simulated by a deterministic subexponential-time algorithm that appears correct i.o. to every efficient adversary, or EXP = ZPP. In particular, this implies that if ZPP is somewhat easy, e.g., ZPP ⊆ DTIME(2^{n^c}) for some fixed constant c, then RP is subexponentially easy in the uniform setting described above.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2012
The notion of probabilistic computation dates back at least to Turing, who also wrestled with the practical problems of how to implement probabilistic algorithms on machines with, at best, very limited access to randomness. A more recent line of research, known as derandomization, studies the extent to which randomness is superfluous. A recurring theme in the literature on derandomization is that probabilistic algorithms can be simulated quickly by deterministic algorithms, if one can obtain impressive (i.e. superpolynomial, or even nearly exponential) circuit size lower bounds for certain problems. In contrast to what is needed for derandomization, existing lower bounds seem rather pathetic. Here, we present two instances where 'pathetic' lower bounds of the form n^{1+ε} would suffice to derandomize interesting classes of probabilistic algorithms. We show the following: — If the word problem over S_5 requires constant-depth threshold circuits of size n^{1+ε} for some ε > 0, then an...
24th Annual Symposium on Foundations of Computer Science (sfcs 1983), 1983
Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad-hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper we provide theoretical treatment of software protection. We reduce the problem of software protection to the problem of efficient simulation on oblivious RAM. A machine is oblivious if the sequence in which it accesses memory locations is equivalent for any two inputs with the same running time. For example, an oblivious Turing Machine is one for which the movement of the heads on the tapes is identical for each computation. (Thus, it is independent of the actual input.) What is the slowdown in the running time of any machine, if it is required to be oblivious? In 1979 Pippenger and Fischer showed how a two-tape oblivious Turing Machine can simulate, on-line, a one-tape Turing Machine, with a logarithmic slowdown in the running time. We show an analogous result for the random-access machine (RAM) model of computation. In particular, we show how to do an on-line simulation of an arbitrary RAM by a probabilistic oblivious RAM with a poly-logarithmic slowdown in the running time. On the other hand, we show that a logarithmic slowdown is a lower bound.
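As a toy illustration of obliviousness (not the paper's polylogarithmic-overhead construction), the following Python sketch makes a single memory read oblivious by touching every cell in a fixed order, so the access sequence is independent of the requested index; the cost is a linear slowdown per read, which is exactly the inefficiency the polylogarithmic simulation avoids.

```python
def oblivious_read(memory, index):
    """Read memory[index] with an access pattern independent of index.

    Toy example only: scan every cell in a fixed order so that the sequence
    of accessed locations is identical for all reads on memories of the
    same length, at the cost of a linear (not polylogarithmic) slowdown.
    """
    value = None
    for i in range(len(memory)):    # fixed scan order, independent of `index`
        cell = memory[i]
        if i == index:
            value = cell
    return value
```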