2015, Lecture Notes in Computer Science
We consider representations of natural numbers as expressions built from 1's using addition, multiplication, and parentheses. ||n|| denotes the minimum number of 1's in the expressions representing n. The logarithmic complexity ||n||_log is defined as ||n||/log_3 n. The values of ||n||_log lie in the segment [3, 4.755], but almost nothing is known with certainty about the structure of this "spectrum" (e.g., whether the values are dense anywhere in the segment). We establish a connection between this problem and another difficult problem: the seemingly "almost random" behaviour of digits in the base-3 representations of the numbers 2^n. We also consider representations of natural numbers by expressions that include subtraction, and the so-called P-algorithms, a family of "deterministic" algorithms for building representations of numbers.
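As an illustrative sketch (not an algorithm from the paper), both the minimum number of 1's and the logarithmic complexity can be computed for small n by straightforward dynamic programming over sums and factorizations:

```python
import math

def complexities(limit):
    """c[n] = least number of 1's needed to write n using + and *."""
    c = [0] * (limit + 1)
    c[1] = 1
    for n in range(2, limit + 1):
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))  # sums a + (n - a)
        d = 2
        while d * d <= n:                                          # products d * (n / d)
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

c = complexities(1000)
# logarithmic complexity ||n||_log = ||n|| / log_3 n; the minimum, 3,
# is attained exactly at the powers of 3 (e.g. 729 = 3^6 needs 18 ones)
log_complexity = [c[n] / math.log(n, 3) for n in range(2, 1001)]
print(min(log_complexity), max(log_complexity))
```

All observed values fall inside the segment [3, 4.755] quoted in the abstract.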
arXiv (Cornell University), 2017
We consider a problem first proposed by Mahler and Popken in 1953 and later developed by Coppersmith, Erdős, Guy, Isbell, Selfridge, and others. Let f(n) be the complexity of n ∈ Z^+, where f(n) is defined as the least number of 1's needed to represent n in conjunction with an arbitrary number of +'s, *'s, and parentheses. Several algorithms have been developed to calculate the complexity of all integers up to n. Currently, the fastest known algorithm runs in time O(n^1.230175) and was given by J. Arias de Reyna and J. van de Lune in 2014. This algorithm makes use of a recursive definition given by Guy and iterates through products, f(d) + f(n/d) for d | n, and sums, f(a) + f(n − a) for a up to some function of n. The rate-limiting factor is iterating through the sums. We discuss potential improvements to this algorithm via a method that provides a strong uniform bound on the number of summands that must be calculated for almost all n. We also develop code to run J. Arias de Reyna and J. van de Lune's analysis in higher bases and thus reduce their runtime of O(n^1.230175) to O(n^1.222911236). All of our code can be found online at: .
Integers, 2012
Define ||n|| to be the complexity of n, the smallest number of 1's needed to write n using an arbitrary combination of addition and multiplication. John Selfridge showed that ||n|| ≥ 3 log_3 n for all n. Define the defect of n, denoted δ(n), to be ||n|| − 3 log_3 n; in this paper we present a method for classifying all n with δ(n) < r for a given r. From this, we derive several consequences. We prove that ||2^m 3^k|| = 2m + 3k for m ≤ 21 with m and k not both zero, and present a method that can, with more computation, potentially prove the same for larger m. Furthermore, defining A_r(x) to be the number of n with δ(n) < r and n ≤ x, we prove that A_r(x) = Θ_r((log x)^(⌊r⌋+1)), allowing us to conclude that the values of ||n|| − 3 log_3 n can be arbitrarily large.
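The identity ||2^m 3^k|| = 2m + 3k and Selfridge's lower bound can be checked numerically for small n by brute-force dynamic programming (a sketch for verification only; the paper's classification method is far more efficient than this):

```python
import math

def complexities(limit):
    """Brute-force table of ||n|| (least number of 1's writing n with + and *)."""
    c = [0] * (limit + 1)
    c[1] = 1
    for n in range(2, limit + 1):
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

LIMIT = 5000
c = complexities(LIMIT)

# Selfridge's bound: delta(n) = ||n|| - 3 log_3 n >= 0 for all n
defects = [c[n] - 3 * math.log(n, 3) for n in range(1, LIMIT + 1)]
assert all(d >= -1e-9 for d in defects)

# ||2^m 3^k|| = 2m + 3k for m and k not both zero (proven for m <= 21)
for m in range(13):
    for k in range(8):
        n = 2**m * 3**k
        if n != 1 and n <= LIMIT:
            assert c[n] == 2 * m + 3 * k
print("checked up to", LIMIT)
```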
Lecture Notes in Computer Science, 2014
Proceedings of the 29th Annual ACM Symposium on Applied Computing, 2014
We study some essential arithmetic properties of a new tree-based number representation, hereditarily binary numbers, defined by applying recursively run-length encoding of bijective base-2 digits. Our representation expresses giant numbers like the largest known prime number and its related perfect number as well as the largest known Woodall, Cullen, Proth, Sophie Germain and twin primes as trees of small sizes. More importantly, our number representation supports novel algorithms that, in the best case, collapse the complexity of various computations by super-exponential factors and in the worst case are within a constant factor of their traditional counterparts. As a result, it opens the door to a new world, where arithmetic operations are limited by the structural complexity of their operands, rather than their bitsizes.
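A simplified illustration of the two ingredients — bijective base-2 digits and run-length encoding — can clarify why numbers like 2^100 compress so well (this is not Tarau's actual tree encoding, which applies the compression recursively to the run lengths themselves):

```python
def to_bijective_base2(n):
    """Digits (least significant first) of n > 0 in bijective base 2,
    where the digit set is {1, 2} and every positive integer has a
    unique representation n = sum d_i * 2^i."""
    digits = []
    while n > 0:
        d = 2 if n % 2 == 0 else 1
        digits.append(d)
        n = (n - d) // 2
    return digits

def run_lengths(digits):
    """Run-length encode a digit list into [digit, count] pairs."""
    runs = []
    for d in digits:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1
        else:
            runs.append([d, 1])
    return runs

# 2^100 has about 100 bits, but only two runs in bijective base 2:
print(run_lengths(to_bijective_base2(2**100)))  # -> [[2, 1], [1, 99]]
```

Applying the same compression again to the run lengths (1 and 99) yields the hereditary, tree-shaped structure the papers describe.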
2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, 2014
Our tree-based hereditarily binary numbers apply a run-length compression mechanism recursively. They enable performing arithmetic computations symbolically and lift tractability so that computations are limited by the representation size of their operands rather than by their bitsizes. This paper describes several new arithmetic algorithms on hereditarily binary numbers that (1) are within constant factors of their traditional counterparts in their average-case behavior, (2) are super-exponentially faster on some "interesting" giant numbers, and (3) make tractable important computations that are impossible with traditional number representations. Paul Tarau (University of North Texas), Arithmetic of Hereditarily Binary Natural Numbers, SYNASC'2014. Outline: Related work; Bijective base-2 numbers as iterated function applications; The arithmetic interpretation of hereditarily binary numbers; Constant average and worst-case constant or log* operations; Arithmetic operations working one o_k or i_k block at a time; Primality tests; Performance evaluation; Compact representation of some record-holder giant numbers; Conclusion and future work.
2007
The first essay discusses, in nontechnical terms, the paradox implicit in defining a random integer as one without remarkable properties, and the resolution of that paradox at the cost of making randomness a property which most integers have but can’t be proved to have. The second essay briefly reviews the search for randomness in the digit sequences of natural irrational numbers like π and artificial ones like Champernowne’s C = 0.12345678910111213 . . ., and discusses at length Chaitin’s definable-but-uncomputable number Ω, whose digit sequence is so random that no betting strategy could succeed against it. Other, Cabalistic properties of Ω are pointed out for the first time.
2007
Abstract. It is well known that the hardest bit of integer multiplication is the middle bit, i.e., MUL_{n−1,n}. This paper contains several new results on its complexity. First, the size s of randomized read-k branching programs, or, equivalently, their space (log s), is investigated. A randomized algorithm for MUL_{n−1,n} with k = O(log n) (implying time O(n log n)), space O(log n) and error probability n^{−c} for arbitrarily chosen constants c is presented. Second, the size of general branching programs and formulas is investigated.
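For concreteness, MUL_{n−1,n} is the Boolean function that outputs bit n−1 (0-indexed) of the 2n-bit product of two n-bit inputs; a direct (non-branching-program) evaluation is trivial to write down:

```python
def middle_bit(x, y, n):
    """MUL_{n-1,n}: bit n-1 (0-indexed) of the product of two n-bit inputs.
    This just evaluates the function; the paper's results concern the size
    and space of branching programs computing it."""
    assert 0 <= x < 2**n and 0 <= y < 2**n
    return (x * y >> (n - 1)) & 1

# n = 4: 13 * 11 = 143 = 0b10001111, whose bit 3 is 1
print(middle_bit(13, 11, 4))  # -> 1
```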
RAIRO - Theoretical Informatics and Applications, 2006
The arithmetical complexity of infinite words, defined by Avgustinovich, Fon-Der-Flaass and the author in 2000, is the number of words of length n which occur in the arithmetical subsequences of the infinite word. This is one of the modifications of the classical function of subword complexity, which is equal to the number of factors of the infinite word of length n. In this paper, we show that the orders of growth of the arithmetical complexity can behave as many sub-polynomial functions. More precisely, for each sequence u of subword complexity f_u(n) and for each prime p ≥ 3 we build a Toeplitz word on the same alphabet whose arithmetical complexity is a(n) = Θ(n f_u(log_p n)).
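The definition can be made concrete on a finite prefix: count the distinct length-n words w[c], w[c+d], ..., w[c+(n−1)d] read along arithmetic progressions. A sketch (on a finite prefix this only lower-bounds the true a(n) of the infinite word; the Thue-Morse prefix is an illustrative choice, not an example from the paper):

```python
def arithmetical_complexity(w, n):
    """Number of distinct length-n words occurring along arithmetic
    progressions c, c+d, ..., c+(n-1)d (d >= 1) inside the finite word w."""
    words = set()
    L = len(w)
    for d in range(1, L):
        for c in range(0, L - (n - 1) * d):
            words.add(tuple(w[c + i * d] for i in range(n)))
    return len(words)

# Thue-Morse prefix: t_i = parity of the number of 1's in binary i
tm = [bin(i).count("1") % 2 for i in range(512)]
print([arithmetical_complexity(tm, n) for n in range(1, 5)])
```

Note how the arithmetic subsequences pick up words (such as 000) that never occur as ordinary factors of the Thue-Morse word.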
Journal of Numerical Cognition, 2020
An important paradigm in modeling the complexity of mathematical tasks relies on computational complexity theory, in which complexity is measured through the resources (time, space) taken by a Turing machine to carry out the task. These complexity measures, however, are asymptotic and as such potentially a problematic fit when descriptively modeling mathematical tasks that involve small inputs. In this paper, we argue that empirical data on human arithmetical cognition implies that a more fine-grained complexity measure is needed to accurately study mental arithmetic tasks. We propose a computational model of mental integer addition that is sensitive to the relevant aspects of human arithmetical ability. We show that this model necessitates a two-part complexity measure, since the addition task consists of two qualitatively different stages: retrieval of addition facts and the (de)composition of multidigit numbers. Finally, we argue that the two-part complexity measure can be devel...
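A toy cost model (an assumption for illustration, not the paper's actual model) shows how the two stages separate: single-digit fact retrievals are counted independently of the steps spent decomposing the operands into digits and composing the result:

```python
from itertools import zip_longest

def addition_cost(x, y):
    """Toy two-part cost of adding x and y digit by digit:
    (fact retrievals, (de)composition steps). Illustrative only."""
    xs, ys = str(x)[::-1], str(y)[::-1]     # least significant digit first
    retrievals = 0
    decomposition = len(xs) + len(ys)       # split both operands into digits
    carry = 0
    result = []
    for dx, dy in zip_longest(xs, ys, fillvalue="0"):
        s = int(dx) + int(dy)
        retrievals += 1                     # retrieve the fact dx + dy
        if carry:
            s += carry
            retrievals += 1                 # extra retrieval to add the carry
        carry, digit = divmod(s, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    decomposition += len(result)            # compose the result digits
    assert int("".join(reversed(result))) == x + y
    return retrievals, decomposition

print(addition_cost(58, 67))  # -> (3, 7): carries cost extra retrievals
```

Problems with the same digit length but different carry structure then get different retrieval counts, which is the kind of sensitivity the abstract argues a descriptive measure needs.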
Human Factors: The Journal of the Human Factors and Ergonomics Society
International Journal of Unconventional Computing, Vol. 12 (5-6), 453-463, 2016
Springer eBooks, 1993
Journal of Computer Science, 2006
Transactions of the American Mathematical Society, 1969
arXiv (Cornell University), 2000
Lecture Notes in Computer Science, 2012
Lecture Notes in Computer Science, 2016
Cryptographic Applications of Analytic Number Theory, 2003
IEEE Transactions on Computers
RAIRO - Theoretical Informatics and Applications, 2006
Lecture Notes in Computer Science, 2016
Physical Review A, 2001
Acta Universitatis Sapientiae, Informatica, 2014