2007
Binary search is fundamental to the study and analysis of discrete computational structures. It is an efficient search strategy due to its logarithmic time complexity, and is used to identify the position of a key in a sorted list. Often, database applications require searching for two or more different key elements in the same execution; this is particularly true if the database includes structural layering based on a particular index or field. In this paper, a hybrid algorithm is proposed that performs binary search with 2 to m different keys (m an integer greater than or equal to 2) in a sorted list structure. The m-key version of the algorithm requires considering (2m + 1) individual cases. A correctness proof of the algorithm is established using induction on the size of the list, n. The time complexity of the algorithm is a function of two independent variables, m and n, and is O(m log n) in both the worst and the average cases; the best case complexity is linear in the number of keys, O(m). The performance of the 2- and 3-key versions is compared with the classical single-key version, and possible key index combinations with the multi-key search strategies are explored for database applications. An extension of the algorithm, the Multi-key Binary Insertion Search, is also proposed. Applications of the proposed algorithms are considered together with a model employee database management program with improved efficiency.
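The shrinking-search-space idea behind such a multi-key binary search can be sketched as follows. This is a minimal illustration under the abstract's assumptions (ascending keys, sorted list), not the paper's exact (2m + 1)-case algorithm; the function and variable names are ours.

```python
from bisect import bisect_left

def multi_key_binary_search(arr, keys):
    """Locate each of m ascending keys in the sorted list arr.

    After a key is located, the portion of the list to its left is
    discarded, so each remaining (larger) key searches a shrinking
    range, giving O(m log n) comparisons in the worst case.
    """
    positions = []
    lo = 0  # left boundary of the remaining search space
    for k in keys:
        i = bisect_left(arr, k, lo)
        positions.append(i if i < len(arr) and arr[i] == k else -1)
        lo = i  # discard the searched prefix for the next key
    return positions
```

For example, searching the even numbers 0, 2, ..., 98 for the keys 4, 10, 11, 50 reports indices 2 and 5 for the first two keys, a miss for 11, and index 25 for 50, without ever re-examining the discarded prefix.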
In this paper, two new algorithms for list-segregated multiple-key search strategies are discussed. In block search, a given list of elements in sorted order is subdivided into a number of equal-sized blocks, and the given key element is compared with the upper-bounding element of each block until a bounding element is found that is equal to or greater than the key. If the bounding element is greater, the search is confined to the corresponding block to locate the correct key position. In the proposed algorithms, the block search effort is directed at identifying multiple key elements in a given list, and the search is optimized after each individual key position is identified by discarding a part of the search space (a portion of the list elements). The list of keys is assumed to be sorted in ascending order, though it is possible to consider a descending list of keys with an ascending list of elements. Once a key index position is identified, the succ...
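The block-search strategy described above can be sketched for multiple ascending keys as follows. This is an illustrative sketch, not the paper's algorithm: we assume sqrt(n)-sized blocks (a common choice for jump search) and discard the scanned prefix between keys; all names are ours.

```python
import math

def block_search_multi(arr, keys):
    """Block (jump) search for several ascending keys in a sorted list.

    The list is treated as equal-sized blocks of about sqrt(n) elements;
    each key jumps over blocks whose upper-bounding element is smaller,
    then scans linearly inside the bounding block. Blocks before the
    last hit are discarded for the following (larger) keys.
    """
    n = len(arr)
    step = max(1, math.isqrt(n))
    positions = []
    start = 0
    for k in keys:
        lo = start
        # jump over blocks whose upper-bounding element is below the key
        while lo + step < n and arr[lo + step - 1] < k:
            lo += step
        # linear scan inside the bounding block
        pos = -1
        for i in range(lo, min(lo + step, n)):
            if arr[i] == k:
                pos = i
                break
            if arr[i] > k:
                break
        positions.append(pos)
        start = lo  # later keys are larger; skip the earlier blocks
    return positions
```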
In a single iteration of insertion sort, only one element is inserted into its proper position. That position is usually found with a linear search over the sorted part of the array, proceeding either forward from the first element or in reverse, so the search becomes expensive when the array is very large. In the proposed technique, binary search replaces linear search to find the proper place of an item; binary search is applicable because the part of the array into which the new element is inserted is already sorted. The average-case running time is further reduced by first comparing the new element with the last element of the sorted part of the array: if it is greater, it is already in its proper place and no binary search is needed. Otherwise it is also compared with the very first element of the array; if it is smaller, binary search is again unnecessary and the element is inserted at the first position. The proposed algorithm is compared with insertion sort, binary insertion sort and Shell sort, and simulation results show that it is more efficient than those techniques.
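The two boundary checks described above can be sketched like this. It is a minimal in-place sketch of the idea, not the paper's code; the function and variable names are ours.

```python
from bisect import bisect_right

def guarded_binary_insertion_sort(a):
    """In-place insertion sort with two guard comparisons: the new
    element is first compared with the last and the first elements of
    the sorted prefix, and a binary search for its slot runs only when
    neither boundary check settles the position."""
    for j in range(1, len(a)):
        x = a[j]
        if x >= a[j - 1]:
            continue          # already in place: no search, no shift
        if x < a[0]:
            pos = 0           # belongs before everything: no search
        else:
            pos = bisect_right(a, x, 0, j)  # binary search in sorted prefix
        a[pos + 1:j + 1] = a[pos:j]         # shift right to open the slot
        a[pos] = x
```

On nearly sorted input the first guard fires on almost every iteration, so most elements cost a single comparison and no shift.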
International Journal of Advanced Computer Science and Applications, 2021
Searching algorithms produce results that are central to the operation of data structures, and searching is among the most common operations performed across many kinds of algorithms. Binary and linear search underlie most searching techniques, and each technique has its own built-in limitations and strengths; combining the different techniques in practice gives rise to hybrid search techniques. For any tree representation, sorted order is expected to achieve the best performance. This paper presents a new technique, the biform tree approach, for producing the sorted order of elements and performing efficient searching. Keywords—Time complexity; space complexity; searching algorithm; biform tree; pre-order traversal
International Journal of Information Science and Computing
In computer science, various searching algorithms are available, and we choose among them according to the situation. Efficient searching algorithms such as binary search and jump search already exist. This paper proposes a new searching algorithm, Log Search, which tracks the key element faster in some cases. The algorithm works only on a sorted array: it searches for the key element within a sub-array of the full array; if the key cannot lie in that sub-array, it moves on to the next sub-array, and once a sub-array that may contain the key is found, it recursively applies the same procedure within it.
An efficient digital search algorithm is presented in this paper by introducing a new internal array structure, called a double-array, that combines the fast access of a matrix form with the compactness of a list form. Each arc of a digital search tree, called a DS-tree, can be computed from the double-array in O(1) time; that is to say, the worst-case time complexity for retrieving a key becomes O(k) for the length k of that key. The double-array is modified to make the size compact while maintaining fast access, and algorithms for retrieval, insertion and deletion are presented. Suppose that the size of the double-array is n + cm, where n is the number of nodes of the DS-tree, m is the number of input symbols, and c is a constant depending on each double-array. Then it is theoretically proved that the worst-case times of deletion and insertion are proportional to cm and cm', respectively, independent of n. Experimental results from building the double-array incrementally for various sets of keys show that the constant c has an extremely small value, ranging from 0.17 to 1.13.
Binary search trees are a frequently used data structure for rapid access to stored data. Data structures such as arrays, vectors and linked lists are limited by the trade-off between the ability to perform a fast search and to resize easily; binary search trees are an alternative that is both dynamic in size and easily searchable. For efficiency reasons, complete and nearly complete binary search trees are of particular significance. This paper addresses performance analysis and measurement, collectively known as performance, in binary search tree search applications. Performance measurement is as significant as performance analysis for learning about the deviation from optimality, and to estimate this deviation, new performance criteria for binary search trees are presented. A multi-key search algorithm is proposed together with the related analysis. The algorithm is capable of searching for multiple key elements in the same execution, sacrificing some optimality in the ...
We present theoretical algorithms for sorting and searching multikey data, and derive from them practical C implementations for applications in which keys are character strings. The sorting algorithm blends Quicksort and radix sort; it is competitive with the best known C sort codes. The searching algorithm blends tries and binary search trees; it is faster than hashing and other commonly used search methods. The basic ideas behind the algorithms date back at least to the 1960s, but their practical utility has been overlooked. We also present extensions to more complex string problems, such as partial-match searching.
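The "blend of tries and binary search trees" mentioned above is the ternary search tree: each node holds one character with low/equal/high children, so a lookup is a character-wise binary search. A minimal sketch (names are ours; the paper's implementations are in C):

```python
class TSTNode:
    """One character of a ternary search tree node."""
    __slots__ = ("ch", "lo", "eq", "hi", "end")

    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.end = ch, None, None, None, False

def tst_insert(node, s, i=0):
    """Insert string s (non-empty) into the tree rooted at node."""
    ch = s[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, s, i)        # branch left, same char
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, s, i)        # branch right, same char
    elif i + 1 < len(s):
        node.eq = tst_insert(node.eq, s, i + 1)    # match: advance a char
    else:
        node.end = True                            # whole string consumed
    return node

def tst_contains(node, s, i=0):
    """Return True iff s was inserted into the tree."""
    if node is None:
        return False
    ch = s[i]
    if ch < node.ch:
        return tst_contains(node.lo, s, i)
    if ch > node.ch:
        return tst_contains(node.hi, s, i)
    if i + 1 == len(s):
        return node.end
    return tst_contains(node.eq, s, i + 1)
```

A prefix that was never inserted as a full key ("ca" when only "cat" and "car" are stored) correctly reports a miss, which is what makes the structure suitable for partial-match extensions.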
International Journal of Modern Education and Computer Science, 2017
A search algorithm performs the important task of locating specific data among a collection of data. Often the difference among search algorithms is speed, and the key is to use the appropriate algorithm for the data set. Binary search is a fast search algorithm that works on the principle of divide and conquer; however, it needs the data collection to be in sorted form to work properly. In this paper, we study the efficiency of binary search, in terms of execution time and speed-up, by evaluating the performance improvement of the combined search algorithms under three different strategies: sequential, multithreaded, and parallel using the Message Passing Interface. The experimental code is written in C and run on the IMAN1 supercomputer system. The experimental results, which apply the binary search algorithm to merge-sorted data, vary across the three strategies. The performance improvement gained by the parallel code depends greatly on the size of the data set and on the number of processors: the speed-up of the parallel code on 2, 4, 8, 16, 32, 64, 128, and 143 processors is best for data set sizes between 50,000 and 500,000 elements, and on a large number of processors the parallel code achieves a maximum speed-up of 2.72.
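The partitioned strategy that such parallel binary search benchmarks rely on can be sketched as follows. This is an illustrative sketch only: the paper uses MPI on a supercomputer, while here a thread pool splits the sorted array into equal chunks and each worker binary-searches its own chunk; all names are ours.

```python
from bisect import bisect_left
from concurrent.futures import ThreadPoolExecutor

def parallel_search(arr, key, workers=4):
    """Split the sorted array into equal chunks; each worker runs an
    independent binary search on its chunk, and the first hit wins."""
    n = len(arr)
    chunk = -(-n // workers)  # ceiling division: chunk size per worker

    def search(lo):
        hi = min(lo + chunk, n)
        i = bisect_left(arr, key, lo, hi)
        return i if i < hi and arr[i] == key else -1

    with ThreadPoolExecutor(workers) as ex:
        for r in ex.map(search, range(0, n, chunk)):
            if r != -1:
                return r
    return -1
```

For an array this small the thread overhead dominates, which mirrors the abstract's observation that parallel speed-up only pays off at large data set sizes.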