1999, Information Processing Letters
It is shown that an array of n elements can be sorted using O(1) extra space, O(n log n / log log n) element moves, and n log₂ n + O(n log log n) comparisons. This is the first in-place sorting algorithm requiring o(n log n) moves in the worst case while guaranteeing O(n log n) comparisons, but due to the constant factors involved the algorithm is predominantly of theoretical interest.
2008
In-place sorting algorithms play an important role in many fields such as very large database systems, data warehouses, data mining, etc. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the unsorted input array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted. This first phase requires linear time; in the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements of this algorithm, it is of practical interest, because of...
Algorithmica, 1996
Until recently, it was not known whether it was possible to sort stably (i.e., keeping equal elements in their initial order) an array of n elements using only O(n) data moves and O(1) extra space. In [13] an algorithm was given to perform this task in O(n^2) comparisons in the worst case. Here, we develop a new algorithm for the problem that performs only O(n^(1+ε)) comparisons (0 < ε ≤ 1 is any fixed constant) in the worst case. This bound on the number of comparisons matches (asymptotically) the best known bound for the same problem with the stability constraint dropped.
This paper introduces a new, faster sorting algorithm (ARL - Adaptive Left Radix) that does in-place, non-stable sorting. Left Radix, often called MSD (Most Significant Digit) radix, is not new in itself, but the adaptive feature and the in-place sorting ability are new features. ARL does sorting with only internal moves in the array, and uses a dynamically defined radix for each pass. ARL is a recursive algorithm that sorts by first sorting on the most significant 'digits' of the numbers - i.e. going from left to right. Its space requirement is O(N + log M) and its time performance is O(N log M) - where M is the maximum value sorted and N is the number of integers to sort. The performance of ARL is compared both with the built-in Quicksort algorithm in Java, Arrays.sort(), and with ordinary Radix sorting (sorting from right to left). ARL is almost twice as fast as Quicksort if N > 100. This applies to the normal case, a uniformly drawn distribution of the numbers 0:N...
2008
This paper presents two new algorithms for inline transforming an integer array 'a' into its own sorting permutation - that is: after performing either of these algorithms, a[i] is the index in the unsorted input array 'a' of its i'th largest element (i=0,1..n-1). The difference between the two IPS (Inline Permutation Substitution) algorithms is that the first and fastest generates an unstable permutation while the second generates the unique, stable, permutation array. The extra space needed in both algorithms is O(log n) - no extra array of length n is needed! The motivation for using these algorithms is given along with their pseudo code. To evaluate their efficiency, they are tested relative to sorting the same array with Quicksort on 4 different machines and for 14 different distributions of the numbers in the input array, with n=10, 50, 250.. 97M. This evaluation shows that both IPS algorithms are generally faster than Quicksort for values of n less than 10^7, but both degenerate in speed and demand for space for larger values. These are results with 32-bit integers (with 64-bit integers this limit would be proportionally higher, at least 10^14). The two IPS algorithms do a recursive, most significant digit radix sort (left to right) on the input array while substituting the values of the sorted elements with their indexes.
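The output contract of the IPS algorithms can be illustrated with a minimal, non-in-place reference sketch (the paper's point is doing this in place with only O(log n) extra space; this version uses O(n) extra space, and assumes ascending order with stability):

```python
def sorting_permutation(a):
    """Return p such that p[i] is the index, in the unsorted input
    array a, of its i-th smallest element; ties keep input order,
    so this matches the stable variant described in the abstract.

    Note: this reference version allocates a new index array; the
    IPS algorithms compute the permutation inside 'a' itself."""
    return sorted(range(len(a)), key=lambda i: a[i])

a = [30, 10, 20, 10]
p = sorting_permutation(a)
print(p)                  # [1, 3, 2, 0]
print([a[i] for i in p])  # [10, 10, 20, 30] - a read back in sorted order
```

Applying the permutation (`[a[i] for i in p]`) recovers the sorted array, which is what makes the in-place substitution of values by indexes useful downstream.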
2011
Sundararajan and Chakraborty (2007) introduced a new version of Quick sort that removes the interchanges. Khreisat (2007) found this algorithm to compete well with some other versions of Quick sort. However, it uses an auxiliary array, thereby increasing the space complexity. Here, we provide a second version of our new sort in which we have removed the auxiliary array. This second
In the first place, a novel yet straightforward in-place integer value-sorting algorithm is presented. It sorts in linear time using a constant amount of additional memory for storing counters and indices beside the input array. The technique is inspired by the principal idea behind one of the ordinal theories of "serial order in behavior" and explained by analogy with the three main stages in the formation and retrieval of memory in cognitive neuroscience: (i) practicing, (ii) storage and (iii) retrieval. It is further improved in terms of time complexity as well as specialized for distinct integers, though still improper for rank-sorting. Afterwards, another novel yet straightforward technique is introduced which makes this efficient value-sorting technique proper for rank-sorting. Hence, given an array of n elements each having an integer key, the technique sorts the elements according to their integer keys in linear time using only a constant amount of additional mem...
Theoretical Computer Science, 2002
We present an efficient and practical algorithm for the internal sorting problem. Our algorithm works in-place and, on the average, has a running time of O(n log n) in the size n of the input. More specifically, the algorithm performs n log n + 2.996n + o(n) comparisons and n log n + 2.645n + o(n) element moves on the average. An experimental comparison of our proposed algorithm with the most efficient variants of Quicksort and Heapsort is carried out and its results are discussed.
Almost all computers regularly sort data. Many different sorting algorithms have been proposed. It is known that no sorting algorithm based on key comparisons can sort N keys in less than O(N log N) operations, and that many perform O(N^2) operations in the worst case. This paper proposes a new sortie tree data structure, which can be used for the sorting of data. The algorithm that implements this sortie tree data structure is a non-comparative sorting algorithm.
2016
The paper presents a new sorting algorithm that takes input data integer elements and sorts them without any comparison operations between the data—a comparison-free sorting. The algorithm uses a one-hot representation for each input element that is stored in a two-dimensional matrix called a one-hot matrix. Concurrently, each input element is also stored in a one-dimensional matrix in the input element’s integer representation. Subsequently, the transposed one-hot matrix is mapped to a binary matrix producing a sorted matrix with all elements in their sorted order. The algorithm exploits parallelism that is suitable for single instruction multiple thread (SIMT) computing that can harness the resources of these computing machines, such as CPUs with multiple cores and GPUs with large thread blocks. We analyze our algorithm’s sorting time on varying CPU architectures, including single- and multi-threaded implementations on a single CPU. Our results show a fast sorting time for the singl...
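A toy sequential sketch can convey the one-hot idea, under the assumptions that keys are small non-negative integers and that an upper bound on their value is known (the paper's contribution is the SIMT-parallel formulation; this sketch only illustrates the representation, not the parallel mapping):

```python
def one_hot_sort(values, max_value):
    """Comparison-free sorting sketch: each element is encoded as a
    one-hot row, and column sums of the resulting matrix give the
    multiplicity of each key. Reading the columns in index order
    emits the keys sorted, with no element-to-element comparison."""
    # One row per input element; column k is 1 iff the element equals k.
    one_hot = [[1 if v == k else 0 for k in range(max_value + 1)]
               for v in values]
    out = []
    for k in range(max_value + 1):
        # Column sum = how many inputs hold the value k.
        count = sum(row[k] for row in one_hot)
        out.extend([k] * count)
    return out

print(one_hot_sort([3, 1, 4, 1, 5], 5))  # [1, 1, 3, 4, 5]
```

In the SIMT setting described above, both the row construction and the column reductions are embarrassingly parallel, which is what makes the matrix formulation attractive despite its memory cost.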
2011 3rd International Conference on Computer Research and Development, 2011
Today, implementations of sorting aim for lower cost and simpler running time. Our purpose in this article is to introduce an algorithm with a lower cost than Bubble sort and nearly the same cost as Selection sort. The approach is to insert elements, in order, into another array. A sort is an algorithm that arranges all elements of an array in order; this algorithm has a sequential form. The complexity of Selection sort is slightly lower than that of Bubble sort.
A novel integer value-sorting technique is proposed replacing the bucket sort, distribution counting sort and address calculation sort family of algorithms. It requires only a constant amount of additional memory. The technique is inspired by one of the ordinal theories of "serial order in behavior" and explained by analogy with the three main stages in the formation and retrieval of memory in cognitive neuroscience, namely (i) practicing, (ii) storing and (iii) retrieval. Although not suitable for integer rank-sorting, where the problem is to put an array of elements into ascending or descending order by their numeric keys, each of which is an integer, the technique seems to be efficient and applicable to rank-sorting, as well as other problems such as hashing, searching, element distinction, succinct data structures, gaining space, etc.
A novel integer sorting technique was proposed replacing the bucket sort, distribution counting sort and address calculation sort family of algorithms, requiring only a constant amount of additional memory. The technique was inspired by one of the ordinal theories of "serial order in behavior" and explained by analogy with the three main stages in the formation and retrieval of memory in cognitive neuroscience, namely (i) practicing, (ii) storing and (iii) retrieval. In this study, the technique is improved both theoretically and practically, and an algorithm is obtained which is faster than the former, making it more competitive. With the improved version, n integers S[0...n-1], each in the range [0, n-1], are sorted exactly in O(n) time, while the complexity of the former technique was the recursion T(n) = T(n/2) + O(n), yielding T(n) = O(n).
International Journal of Applied Information Systems, 2014
One of the fundamental issues in Computer Science is the ordering of a list of items, known as sorting. Sorting algorithms such as Bubble, Insertion and Selection Sort all have a quadratic time complexity O(N^2) that limits their use when the number of elements is very large. This paper presents Use-Me sort. It sorts a list by making use of already sorted elements present in the list. Moreover, it provides a trade-off between space and time complexity with better performance than the existing sorting algorithms of the O(N^2) class.
An in-place associative integer sorting technique was proposed for integer lists which requires only a constant amount of additional memory, replacing the bucket sort, distribution counting sort and address calculation sort family of algorithms. Afterwards, the technique was further improved and an in-place sorting algorithm was proposed in which n integers S[0...n-1], each in the range [0, n-1], are sorted exactly in O(n) time, while the complexity of the former technique was the recursion T(n) = T(n/2) + O(n), yielding T(n) = O(n). The technique was specialized with two variants, one for read-only distinct integer keys and the other for modifiable distinct integers, as well. Assuming w is the fixed word length, the variant for modifiable distinct integers was capable of sorting n distinct integers S[0...n-1], each in the range [0, m-1], in exactly O(n) time if m < (w - log n)n. Otherwise, it sorts in O(n + m/(w - log n)) time in the worst case and O(m/(w - log n)) time on average (uniformly distributed keys) a...
Abstract: When using the visualizer to compare algorithms, never forget that the visualizer sorts only very small arrays. The effect of quadratic complexity (either a square number of moves or a square number of exchanges) is dramatic as the size of the array grows. For instance, dichotomic insertion, which is only marginally slower than quick sort on 100 items, becomes 10 times slower on 10000 items. We have investigated the complexity values researchers have obtained and observed that there is scope for fine tuning in the present context. Strong evidence to that effect is also presented. We aim to provide a useful and comprehensive note to researchers about how the complexity aspects of sorting algorithms can best be analyzed. Keywords: Algorithm analysis, Sorting algorithm, Empirical analysis, Computational complexity notations. Title: Performance Comparison of Sorting Algorithms on the Basis of Complexity. Authors: Niraj Kumar, Rajesh Singh. International Journal of Computer Science and Information Technology Research, ISSN 2348-120X (online), ISSN 2348-1196 (print), Research Publish Journals.
Journal of Computer and System Sciences, 1998
We show that a unit-cost RAM with a word length of w bits can sort n integers in the range 0..2^w - 1 in O(n log log n) time, for arbitrary w ≥ log n, a significant improvement over the bound of O(n log n / log log n) achieved by the fusion trees of Fredman and Willard. Provided that w ≥ (log n)^(2+ε), for some fixed ε > 0, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of w bits. The first one yields an algorithm that uses O(log n) time and O(n log log n) operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses O(log n) expected time and O(n) expected operations on a randomized EREW PRAM, provided that w ≥ (log n)^(2+ε) for some fixed ε > 0. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words.
In a single iteration of insertion sort, only one element is inserted into its proper position. In the classical technique, that position is found with a linear search over the sorted part of the array, proceeding either forward from the first element or in reverse order; when the array is very large, this search takes a huge amount of time. In the proposed technique, binary search is used instead of linear search to find the proper place of an item. Binary search is applicable because the part of the array into which the new element is to be inserted is already sorted. The average-case running time is further reduced as follows: the new element is first compared with the last element of the sorted part of the array. If it is greater than that last value, no binary search is needed because the element is already in its proper place. If instead it is less than the last element of the sorted part, it is also compared with the very first element of the array; if it is less than the first element, binary search can be skipped and the element is inserted at the first position. The proposed algorithm is compared with insertion sort, binary insertion sort and Shell sort. Simulation results show that it is more efficient than these other techniques.
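The steps described above can be sketched as follows; this is a minimal illustration of the technique as the abstract describes it (boundary checks first, then binary search), not the authors' implementation:

```python
def enhanced_binary_insertion_sort(a):
    """Insertion sort where the insertion point for each element is
    found via two cheap boundary checks, falling back to a binary
    search over the already-sorted prefix a[0..i-1] only when needed."""
    for i in range(1, len(a)):
        key = a[i]
        if key >= a[i - 1]:
            continue          # >= last sorted element: already in place
        if key < a[0]:
            pos = 0           # < first element: insert at the front
        else:
            # Binary search for the insertion point in the sorted prefix.
            lo, hi = 0, i
            while lo < hi:
                mid = (lo + hi) // 2
                if a[mid] <= key:
                    lo = mid + 1
                else:
                    hi = mid
            pos = lo
        a[pos + 1:i + 1] = a[pos:i]  # shift the tail right by one slot
        a[pos] = key
    return a

print(enhanced_binary_insertion_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Note that the element moves are unchanged relative to plain insertion sort (the shift is still linear); only the number of comparisons needed to locate the insertion point is reduced, which is where the claimed speedup comes from.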
2009
Since the dawn of computing, the sorting problem has attracted a great deal of research. In past, many researchers have attempted to optimize it properly using empirical analysis. We have investigated the complexity values researchers have obtained and observed that there is scope for fine tuning in present context. Strong evidence to that effect is also presented. We aim to provide a useful and comprehensive note to researcher about how complexity aspects of sorting algorithms can be best analyzed. It is also intended current researchers to think about whether their own work might be improved by a suggestive fine tuning. Our work is based on the knowledge learned after literature review of experimentation, survey paper analysis being carried out for the performance improvements of sorting algorithms. Although written from the perspective of a theoretical computer scientist, it is intended to be of use to researchers from all fields who want to study sorting algorithms rigorously.
Combinatorics, Probability and Computing, 2014
We describe a general framework for realistic analysis of sorting algorithms, and we apply it to the average-case analysis of three basic sorting algorithms (QuickSort, InsertionSort, BubbleSort). Usually the analysis deals with the mean number of key comparisons, but here we view keys as words produced by the same source, which are compared via their symbols in lexicographic order. The ‘realistic’ cost of the algorithm is now the total number of symbol comparisons performed by the algorithm, and, in this context, the average-case analysis aims to provide estimates for the mean number of symbol comparisons used by the algorithm. For sorting algorithms, and with respect to key comparisons, the average-case complexity of QuickSort is asymptotic to 2n log n, InsertionSort to n²/4 and BubbleSort to n²/2. With respect to symbol comparisons, we prove that their average-case complexity becomes Θ(n log² n), Θ(n²), Θ(n² log n). In these three cases, we describe the dominant constants which ...