Parallel programs allow the most efficient use of processors, and efficient processor utilization is the key to maximizing the performance of computing systems. This research paper describes how the computations of the One Time Pad (OTP) cipher, given in the form of a sequential program, can be parallelized. To transform the sequential computations into a parallel program, the control and data dependencies have to be taken into consideration to ensure that the parallel program produces the same results as the sequential program for all possible input values. The main goal is usually to reduce the program execution time as much as possible by using multiple processors or cores.
Next-generation computing systems focus on parallel computing to solve problems at high speed using parallel programming concepts. The running time of a complex algorithm or application can be improved by running more than one task at the same time on multiple processors. Here the performance improvement is measured in terms of the increase in the number of cores per machine and is analyzed for better energy-efficient, optimal workload balance. In this paper we present a review and literature survey of parallel computers with a view to future improvements, and we introduce the theoretical and technical terminology of parallel computing. The focus is a comparative analysis of single-core and multicore systems running an application program, aiming at faster execution time and an optimized scheduler for better performance.
Undergraduate Topics in Computer Science, 2018
Undergraduate Topics in Computer Science (UTiCS) delivers high-quality instructional content for undergraduates studying in all areas of computing and information science. From core foundational and theoretical material to final-year topics and applications, UTiCS books take a fresh, concise, and modern approach and are ideal for self-study or for a one- or two-semester course. The texts are all authored by established experts in their fields, reviewed by an international advisory board, and contain numerous examples and problems. Many include fully worked solutions.
IOSR Journal of Computer Engineering, 2014
Parallel programming represents the next turning point in how software engineers write software. Today, low-cost multi-core processors are widely available for both desktop computers and laptops. As a result, applications will increasingly need to be parallelized to fully exploit the multi-core-processor throughput gains that are becoming available. Unfortunately, writing parallel code is more complex than writing serial code. This is where the OpenMP programming model enters the parallel computing picture. OpenMP helps developers create multi-threaded applications more easily while retaining the look and feel of serial programming. Algorithm performance engineering is a systematic and quantitative approach to constructing software systems that meet performance objectives such as response time, throughput, scalability, and resource utilization. The performance (speedup) of parallel algorithms on a multi-core system is presented in this paper. Experimental results on a multi-core processor show that the proposed parallel algorithms achieve good performance compared to their sequential counterparts.
IEEE Transactions on Education, 1996
A parallel random access machine (PRAM)-oriented programming language called 11 and its implementation on transputer networks are presented. The approach taken is a compromise between efficiency and simplicity. The 11 language has been conceived as a tool for the study, design, analysis, verification, and teaching of parallel algorithms. One of the main features of this Pascal-like language is the ability to mix parallelism and recursion, allowing a simple and elegant formulation of a large number of parallel algorithms. A method for the complexity analysis of 11 programs, called PRSW, is introduced. The current version of the 11 compiler guarantees the conservation of the PRSW complexity of the translated algorithms. Furthermore, the computational results show good behavior of the system for efficient PRAM algorithms.
Acta Informatica, 1976
This paper presents a model of parallel computing. Six examples illustrate the method of programming. An implementation scheme for programs is also presented.
IAEME
Parallel computing here refers to an execution technique in which Java code is parallelized so that each part of the code is executed on a different system, according to the availability of machines. This speeds up the execution of a particular application to a great extent. The parallelized parts of the code must not depend on one another; dependencies among the code should be detected at run time as far as possible. The integration of workstations in a distributed environment enables a more efficient distribution of functions, in which application programs run on workstations, called application servers, while database functions are handled by dedicated computers, called database servers. This has led to the present trend in distributed system architecture, where sites are organized as specialized servers rather than as general-purpose computers. A parallel computer, or multiprocessor, is itself a distributed system made of a number of nodes connected by a fast network within a cabinet [7].
37th Annual Hawaii International Conference on System Sciences, 2004. Proceedings of the, 2004
Security is a challenging aspect of communications today that touches many areas, including memory space, processing speed, and code development and maintenance. When it comes to lightweight computing devices, each of these problems is amplified. In an attempt to address some of them, Sun's Java 2 Standard Edition version 1.4 includes the Java Cryptography Architecture (JCA). The JCA provides a single encryption API for application developers within a framework where multiple service providers may implement different algorithms. To the extent possible, application developers have multiple encryption technologies available through a framework of common classes, interfaces, and methods. The One-Time Pad encryption method is a simple and reliable cryptographic algorithm whose characteristics make it attractive for communication with limited computing devices. The major difficulty of the One-Time Pad is key distribution. In this paper, we present an implementation of the One-Time Pad as a JCA service provider, and demonstrate its usefulness on Palm devices.
2000
Large-scale parallel computations are more common than ever, due to the increasing availability of multi-processor systems. However, writing parallel software is often a complicated and error-prone task. To relieve Diffpack users of the tedious and low-level technical details of parallel programming, we have designed a set of new software modules, tools, and programming rules, which will be the topic of
Parallel programming is a great challenge of modern computing. Its main goal is to improve the performance of computer applications. A well-structured parallel application can achieve better performance than sequential execution on existing and upcoming parallel computer architectures. This paper describes an experimental evaluation of parallel application performance with thread-safe data structures in real-time execution. The performance issues described in this paper help in building efficient parallel applications. Before the experimental evaluation, the paper presents some methodologies relevant to parallel programming. The parallel applications were evaluated through experimental results and performance measurements. Knowledge of proper partitioning, oversubscription, proper workload balancing, and memory sharing helps in building more efficient parallel applications.
In this paper a survey of current trends in parallel computing is presented, covering all aspects of a parallel computing system. A large computational problem that cannot be solved by a single CPU can be divided into small enough subtasks that are processed simultaneously by a parallel computer. The parallel computer consists of parallel computing hardware, a parallel computing model, and software support for parallel programming. Parallel performance measurement parameters and parallel benchmarks are used to measure the performance of a parallel computing system. The hardware and the software are specially designed for parallel algorithms and programming. This paper explores all aspects of parallel computing and its usefulness.