
Academic Journal of Nawroz University (AJNU), Vol. 10, No. 3, 2021
This is an open access article distributed under the Creative Commons Attribution License.
Copyright ©2017. e-ISSN: 2520-789X
https://doi.org/10.25007/ajnu.v10n3a1145

Python Parallel Processing and Multiprocessing: A Review


Zena A. Aziz1, Diler Naseradeen Abdulqader2, Amira B. Sallow3, Herman Khalid Omer4
1 Technical College of Information Akre, Duhok Polytechnic University, Kurdistan Region-Iraq
2,3,4 Department of Computer and Communication, Nawroz University, Kurdistan Region-Iraq

ABSTRACT
Parallel and multiprocessing algorithms break significant numerical problems down into smaller subtasks, reducing the total computing time on multiprocessor and multicore computers. Parallel programming is well supported in proven programming languages such as C and Python, which are well suited to "heavy-duty" computational tasks. Historically, Python has not been regarded as a strong supporter of parallel programming because of the global interpreter lock (GIL). However, times have changed: parallel programming in Python is now supported by a diverse set of libraries and packages. This review focuses on Python libraries that support parallel processing and multiprocessing, with the aim of accelerating computation in various fields, including multimedia, attack detection, supercomputers, and genetic algorithms, and discusses several Python libraries that can be used for this purpose.

KEYWORDS: Parallel processing, multiprocessing, Python, CPU, Multicore CPU, GPU.

1. Introduction
In recent years, the Python programming language has gained momentum for scientific computing, often replacing conventional tools such as MATLAB [1]. Because Python is open source it is available to everyone at no cost, and its portability makes it usable on many platforms. The language itself is lightweight and concise, well suited to quick prototyping, yet strong enough for writing substantial applications. Its usability and flexibility are sometimes underestimated.
Python integrates well with C/C++, so external, performance-critical modules can be invoked easily. It also offers a wide range of scientific libraries, e.g. for processing and analyzing data, plotting, and graphical user interfaces [2][3][4]. All of these features make Python attractive to the scientific community, but it has to keep pace with the languages used in large projects. CPython is the default and most commonly used implementation [5]; because of the global lock in the interpreter, several threads cannot execute Python code at once. A variety of options have therefore been developed to create many Python processes on shared and elastic infrastructure environments, including networks, clusters, and clouds.
Parallel computing, on the other hand, is a computational paradigm in which several instructions are performed concurrently. It is based on the premise that large problems can often be broken into separate parts and solved simultaneously (in parallel). Bit-level parallelism, instruction-level parallelism, data parallelism, and task parallelism are the four types of parallel computation [6]. Parallel computing has been used for many years, especially in high-performance computing; however, interest in this field has recently increased due to physical hardware constraints on CPU frequency, together with the availability of shared-memory and distributed services as well as infrastructure such as networks, clusters, and clouds [7][8]. Furthermore, the use of such resources and the heat generated by computers have become a focus of recent technological advancement. As a result, parallel computing has established itself as a key concept in computer architecture, specifically in multi-core processors.
The Python multiprocessing module [9] allows processes to be spawned on SMP machines with an API similar to the threading module: explicit calls for process creation, argument passing, execution, cooperation, and result collection. The GIL problem is avoided by the multiprocessing module, which launches subprocesses rather than threads through a fork system call. Parallel Python (PP) is a Python module that implements a framework for parallel execution of Python code on SMP machines and clusters. It is based on an API with explicit functions for specifying the number of workers to be used, submitting jobs for execution, obtaining worker results, and so on. As with the multiprocessing module, the programmer is in charge of parallelism management, which is combined with the management of the actual algorithm [10].
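To illustrate the process-based style described above, the following minimal sketch spawns worker processes with the multiprocessing module. The worker function and its arguments are illustrative placeholders, not code taken from any of the reviewed papers.

# Minimal sketch: spawning processes with an API that mirrors the threading module.
# The worker function and its arguments are placeholders for real work.
from multiprocessing import Process

def worker(name, iterations):
    # CPU-bound placeholder work; each process runs in its own interpreter,
    # so the GIL of one process does not block the others.
    total = sum(i * i for i in range(iterations))
    print(f"{name} finished, checksum={total}")

if __name__ == "__main__":
    processes = [Process(target=worker, args=(f"worker-{i}", 1_000_000))
                 for i in range(4)]
    for p in processes:
        p.start()   # explicit process creation, analogous to threading.Thread.start()
    for p in processes:
        p.join()    # wait for all workers to complete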


The paper is organized as follows. Sections 2, 3, and 4 provide background theory; Section 5 reviews related work; Section 6 discusses the analysis; and Section 7 concludes with observations and future work.

2. Parallel Processing
Most modern PCs, workstations, and even handheld devices contain several central processing unit (CPU) cores. These cores are self-contained and can execute different instructions at the same time. Programs that use parallelization to take advantage of multiple cores run faster and make better use of CPU resources. Parallel processing is a way of speeding up a program by dividing it into smaller pieces that can be executed simultaneously on multiple processors [11]. In general, each piece runs on its own processor, and a program running on Q processors can ideally complete up to Q times faster than the same program running on one processor [12].

Figure 1: Structure of Parallel Processing [13]

2.1 Parallel Processing Benefits
Only one program could be run on the original computers at any given time. A compute-intensive program that needed one hour and a tape-copying program that needed one hour would together take two hours when run one after the other. With parallel processing, both programs are started at the same time: the machine issues the input and output instructions first, and while it waits for them to complete, the compute-intensive program executes. Together, the two tasks finish in less than two hours [14].

2.2 Applications of Parallel Processing
▪ Parallel processing systems are used to ensure the safety and reliability of the United States' remaining nuclear weapons arsenal. In the absence of nuclear testing, either above or below ground, very fine-grained numerical simulation is needed to evaluate and forecast potential problems caused by long-term storage of nuclear materials [15].
▪ Parallel processing is used to simulate computer-generated vehicles and guardrails, to assess the strength and endurance of the guardrails in the event of a collision. Executing one model on a single-processor system can take up to five days, while it takes only a few hours on a parallel machine.
▪ Airlines use parallel processing to analyze customer data, estimate demand, and determine the fares to charge.


▪ MRI images and models of bone implant systems are examined using parallel processing equipment in medicine.
▪ Other uses include code breaking, geological research, animated graphics, computational fluid dynamics, chemistry, physics, electronic design, and climatology.

3. Multiprocessing
The capacity of a device to support more than one processor at the same time is referred to as multiprocessing. In a multiprocessing approach, applications are divided into smaller chunks of code that run independently. The operating system assigns these units to the processors, which improves system performance. Multiprocessing helps in two ways: it runs code on multiple CPUs at the same time, and it can achieve speedups on a single CPU by using "wasted" CPU cycles while the software is waiting for external resources such as file loading, API calls, and so on.
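The two situations described above can be sketched with the standard concurrent.futures module. The square-and-sum work and the simulated waits below are placeholder tasks chosen only to illustrate the distinction; they are not taken from the reviewed papers.

# Sketch of the two speedup situations described above (placeholder tasks only).
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n):
    # CPU-bound work: benefits from multiple CPUs, i.e. separate processes.
    return sum(i * i for i in range(n))

def io_bound(delay):
    # I/O-bound work: the wait releases the GIL, so threads on one CPU can
    # reuse the "wasted" cycles (time.sleep stands in for a file or API call).
    time.sleep(delay)
    return delay

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:               # code on multiple CPUs
        cpu_results = list(pool.map(cpu_bound, [2_000_000] * 4))
    with ThreadPoolExecutor(max_workers=4) as pool:   # overlapping waits on one CPU
        io_results = list(pool.map(io_bound, [0.5] * 4))
    print(len(cpu_results), len(io_results))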

Figure 2: Structure of Multiprocessing

3.1 The Benefits of Multiprocessing
• Enhanced throughput: more work can be completed in the same amount of time by increasing the number of processors.
• Saving money by sharing memory, buses, peripherals, and so on: compared with multiple single-processor systems, a multiprocessor system saves resources. Furthermore, if several programs operate on the same data, it is less expensive to store that data on a single disk shared by all processors in the system than to keep several copies of the same data.
• Increased reliability: since the workload is spread over many processors, reliability is increased. If one of the processors fails, the system slows down slightly but continues to function normally.

4. Python
Python is an appropriate language for learning as well as real-world programming. Python is Guido van Rossum's powerful object-oriented programming language. The language design allows users to write simple programs at both large and small scales [5]. Python supports many programming paradigms, including object-oriented, imperative, functional, and procedural styles. It provides automatic, dynamic memory management and has broad and extensive standard libraries. Python interpreters are available for many operating systems.

Figure 3: Python-based Parallel Processing Libraries [16]

4.1 Python Libraries for Parallel Processing and Multiprocessing
In this section, some Python libraries are discussed. The Python programming language provides a standard library as well as a variety of libraries for parallel processing and multiprocessing.
a. The multiprocessing library: it allows parallel processing in which multiple processes with different input arguments can be produced from a single function (a short sketch follows this list). In addition, the subprocess library allows external processes, such as another Python script or a C/C++ executable, to be launched from a Python script [17].
b. JMetalPy: creates an environment for solving multi-objective optimization problems using traditional meta-heuristics, with techniques for preference articulation and dynamic problems, a rich set of features, and real-time, interactive visualization. JMetalPy also supports parallel computing on multicore and cluster systems [18].
c. Parsl: a Python-based parallel scripting library that supports data-oriented workflows that are both asynchronous and implicitly parallel. Using Swift's model as a foundation [12], Parsl extends Python scripts (or applications) with advanced parallel workflow capabilities. Parsl scripting links selected Python functions and external applications (called apps) with shared input/output data objects into versatile parallel workflows. Parsl abstracts the execution environment for multi-core processors, clusters, and supercomputers [19].
d. Ray: an open-source Python parallel and distributed library. Ray provides a unified interface for expressing both task-parallel and actor-based computations on top of a single dynamic execution engine. Ray tracks the system's control state using a distributed scheduler and a fault-tolerant distributed store to meet performance requirements [20].
e. PyWren: an open-source project that runs user-supplied Python code and its dependencies as serverless actions on a serverless platform. PyWren executes serverless actions at large scale and tracks their effects without requiring awareness of how they are invoked and run. It provides a client that operates locally and a runtime deployed as a serverless action in the cloud, and uses object storage to communicate between the client and server sides. On the client side, PyWren serializes the Python code and related data and stores them in object storage; the client then instructs the stored actions to run concurrently and awaits the output. On the server side, PyWren takes the code and the related data from object storage, processes them, and saves the output [21].
f. PyNetLogo: a connector that allows NetLogo to be controlled from the Python general-purpose programming language. Given Python's increasing importance in IT in general, analysts and modelers gain the ability to choose among many different options. PyNetLogo's features include controlling one of NetLogo's example models from an interactive Python environment and conducting a global sensitivity analysis with parallel processing [22].
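The sketch below illustrates item (a): a single placeholder function fanned out over several argument tuples with multiprocessing.Pool, plus a subprocess call that launches an external program. The power() function, the script name, and the arguments are hypothetical and used only for illustration.

# Sketch for item (a): one function, many processes with different arguments,
# plus launching an external process; names and paths are illustrative only.
import subprocess
import sys
from multiprocessing import Pool

def power(base, exponent):
    # Placeholder worker; each call may run in a separate process.
    return base ** exponent

if __name__ == "__main__":
    args = [(2, 10), (3, 7), (5, 4), (7, 3)]
    with Pool(processes=4) as pool:
        results = pool.starmap(power, args)   # same function, different inputs
    print(results)

    # Launching an external process (e.g. another Python script or a compiled
    # binary); "other_script.py" is a hypothetical file name.
    completed = subprocess.run([sys.executable, "other_script.py"],
                               capture_output=True, text=True)
    print(completed.returncode)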


5. Literature Review
This section reviews a number of papers on multiprocessing and parallel processing problems solved with Python, and shows how multiprocessing and parallel processing can significantly reduce computation time through Python libraries.
D. Meunier et al. [23] showed that NetLogo can be controlled through the Python programming language. Given Python's growing popularity in computer science, modelers and analysts now have more choices. PyNetLogo's features include controlling one of NetLogo's example models from an integrated Python environment and performing a global sensitivity analysis with parallel processing.
J. Kready et al. [24] proposed an implementation that uses Python multiprocessing to parallelize data collection from the YouTube Data API. Their tests show that multiprocessing increases throughput by 400 percent for parallel YouTube data collection; these enhancements minimize computation time by using multi-threaded CPUs.
J. Niruthika et al. [25] compared the output of a parallel Aho-Corasick implementation with that of the serial version. Aho-Corasick is a well-known algorithm for exact string matching, a significant problem in computer science. The results show that the parallel Aho-Corasick implementation in Python performs worse than its serial counterpart, while the parallel implementation in C performs better than its serial counterpart. As a result, Python is unsuitable for parallelizing the Aho-Corasick algorithm, since the algorithm's CPU consumption may be significant compared with its I/O usage.


A. Benítez-Hidalgo et al. [26] presented jMetalPy, a Python-based multi-objective optimization framework with meta-heuristics. It is distributed under the MIT license and is freely available on GitHub. They presented and discussed the core architecture, using the NSGA-II algorithm and some of its variants as working examples of how to use the framework. Dynamic optimization, parallelism, and decision-making in data processing are all supported by jMetalPy.
Y. Babuji et al. [27] presented Parsl, a parallel scripting library that extends Python with simple, scalable, and flexible constructs for encoding parallelism. Experimental results on the Blue Waters supercomputer show that Parsl Python scripts can run components with just 5 ms of overhead, scale to over 250,000 workers across more than 8,000 nodes, and process up to 1,200 tasks per second. It has demonstrated interactive, collaborative, web-based, and machine learning use cases in biology, cosmology, and materials science.
H. Han [28] presented the BayesFactorFMRI tool, written in R and Python, to enable Bayesian second-level analysis and Bayesian meta-analysis of fMRI image data with multiprocessing for neuroimaging researchers. This tool accelerates computation-intensive Bayesian fMRI analysis by using multiprocessing, and its graphical user interface (GUI) enables researchers to conduct Bayesian fMRI analysis without computer programming expertise. BayesFactorFMRI can be downloaded for free from Zenodo and GitHub. Neuroimaging researchers who wish to analyze their fMRI data with Bayesian analysis would typically use it, as it is more sensitive than conventional analysis and increases efficiency by spreading analytical tasks across multiple processors.
G. Heine et al. [29] introduced a method for asynchronous streaming, proposing stream subscriptions as a tool for monitoring public opinion. A prototype is presented that integrates Twitter sources, Python text processing, and Cassandra storage, with three main points elaborated: 1) a comparison of database writing techniques; 2) data parallelization and asynchronous concurrent database writes in multiprocessing procedures; and 3) monitoring of public opinion by noun extraction.
D. Datta et al. [30] compared the performance of parallelized CPU cores, using Python's Ray library to parallelize a multicore CPU. The benchmark used in this project is an image classification algorithm based on a convolutional neural network. The authors set out to demonstrate that parallelization across a CPU's cores allows faster training of a model, and a comparative analysis was conducted between three different convolutional neural network models.
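As a rough illustration of the task-parallel style that Ray exposes (a minimal sketch that assumes Ray is installed, not the training setup used in [30]), remote functions are declared with a decorator and dispatched across cores as futures:

# Minimal Ray sketch (assumes `pip install ray`); train_one() is a placeholder
# standing in for per-model training work, not code from the cited study.
import ray

@ray.remote
def train_one(model_id, epochs):
    # Placeholder for a CPU-heavy training loop.
    score = sum(i * model_id for i in range(epochs * 1000))
    return model_id, score

if __name__ == "__main__":
    ray.init()  # starts local workers, one per CPU core by default
    futures = [train_one.remote(m, 10) for m in range(3)]  # dispatched in parallel
    print(ray.get(futures))  # blocks until all remote tasks complete
    ray.shutdown()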


T. Shaffer et al. [31] proposed running native Python functions at scale and introduced techniques for dynamically determining a minimal set of dependencies and assembling a lightweight function monitor (LFM) that captures the software environment and manages resources at the granularity of single functions. The authors test these approaches in various settings, from a campus cluster to a supercomputer, and demonstrate that their dependency management planning and fine-grained resource management strategies outperform the competition.
H. Park et al. [32] introduced mpiPython, a Python binding that connects to the standard MPI communication API. The authors discussed the design issues associated with the implementation of the mpiPython API. The second part of the paper compared the node-level parallel performance of mpiPython with other MPI bindings on a Linux cluster.
J. J. Galvez et al. [33] introduced CharmPy, a parallel programming model and framework based on the Python programming language. It has several distinguishing features, including a simpler model and API, improved flexibility, and the ability to write everything in Python. One example is a general-purpose distributed map function that can run independent jobs on multiple nodes simultaneously and supports load balancing. The authors also demonstrated how to use CharmPy to write parallel Python applications that scale to massive core counts on supercomputers and perform similarly to MPI or C++ versions.
R. Eggen et al. [34] examined the effect of the global interpreter lock (GIL). To show its effect, the authors compared Python threads with Python processes. The GIL leads to sequential execution of threads, whereas processes execute concurrently; processes need more start-up time, but beyond a certain amount of data the work executes faster with processes than with threads.
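A small timing sketch of the kind of comparison summarized above (with a placeholder workload, not the benchmark from [34]) makes the GIL effect visible: the thread pool gains little on a CPU-bound task, while the process pool spreads the work over several cores.

# Toy comparison of threads vs. processes on a CPU-bound task (illustrative only).
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def burn(n):
    # Pure-Python CPU work; the GIL serializes it across threads.
    return sum(i * i for i in range(n))

def timed(executor_cls, n_tasks, n):
    start = time.perf_counter()
    with executor_cls(max_workers=n_tasks) as ex:
        list(ex.map(burn, [n] * n_tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    print("threads:   %.2f s" % timed(ThreadPoolExecutor, 4, 2_000_000))
    print("processes: %.2f s" % timed(ProcessPoolExecutor, 4, 2_000_000))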


M. R. Rizqullah et al. [35] developed middleware using parallel Python and installed it on a Raspberry Pi 3. The console frame was designed to help people learn the basics of IoT through the transmission and reception of control data to access sensors or actuators. This middleware translates command lines for running or accessing the various IoT module features. To improve run-time performance, the Python program employs multiprocessing or multithreading.
V. Skorpil et al. [36] discussed various methods for parallelizing genetic algorithms and their subsequent implementation, using the Python programming language for the examples. Various models of genetic algorithm parallel processing are provided and described. The Python implementations of the models are then defined and compared using iteration count as a criterion. While individual model outputs can only be compared to a certain degree, all parallel models outperform the simple serial model.
H. Jan et al. [37] initially introduced the NetLogo connector, which connects the NetLogo modeling environment to a Python environment. This was illustrated with one of NetLogo's sample models. The SALib Python library was used as an example of more complex analyses, performing Sobol's variance-based sensitivity analysis of the model from a Python GUI. To improve the study's results, the ipyparallel library was used to parallelize sequential simulations.
Zhang et al. [38] proposed QuantCloud, a program that integrates a parallel Python framework with a C++-coded big data system. The big data framework is built in C++, while the user methods are written in Python; a coprocessor-based parallel strategy underpins the automatic parallel execution of the Python code. They applied the program to two popular algorithms, moving average and autoregressive moving average (ARMA), and thoroughly compared Intel Xeon E5 and Xeon Phi processors. The findings show that their approach to parallelization scales almost linearly and is suitable for today's multicore processors.
Sindhu et al. [39] implemented a concurrent version of the Max-p problem with the Python multiprocessing library. The author achieves speedups of up to 12 and 19 times over the best sequential algorithm for the construction and improvement phases, respectively, using an intuitive multi-lock data structure, and provides detailed experimental results to validate the algorithms.
Real et al. [40] presented AutoParallel, a Python module that facilitates parallelism and runs on distributed infrastructures. It is built on top of PyCOMPSs, which makes it easy to scale sequential code up to hundreds of cores with little effort. Instead of rewriting sequential Python code, users annotate methods containing affine loops with the @parallel annotation. As it turns out, the generated codes for the Cholesky, LU, and QR algorithms achieve comparable performance without any effort from the programmer, taking AutoParallel for distributed systems one step further.
Z. Rinkevicius et al. [41] presented an open-source software package named VeloxChem, created to compute complicated real and complex linear response functions for Hartree–Fock and Kohn–Sham density functional theories. Designed as an object-oriented software framework written in a layered Python/C++ fashion, VeloxChem enables time-efficient prototyping of new techniques without compromising computational performance.
V. Canh Vu et al. [42] presented a parallel genetic programming technique for classifying data patterns for wireless attack detection. The authors performed tests with the same computer system configuration, parameters, and datasets in order to compare the performance of Karoo GP and standard GP. Karoo GP was implemented alongside a high-speed GPU processing mechanism, whereas the mainstream GP used multi-core CPUs. According to the performance results, Karoo GP is much faster than the standard GP.
S. Khan and A. Latif [43] proposed a solution that eliminates the single-instance constraint and allows a single machine to run several instances. The single machine equivalent (SIME) method is used for the measurement of critical clearing time (CCT) and rotor angle stability. This method reduces computational time through parallelization and dramatically improves the handling and aggregation of the tasks. The approach is generic and modular.
A. V. M. Barone et al. [44] introduced a broad and diverse parallel corpus of Python functions and their documentation strings ("docstrings"), created by scraping open-source repositories on GitHub and comprising about one hundred thousand Python functions. The paper reports baseline results in neural machine translation for the code documentation and code generation tasks, and describes how the amount of training data can be further increased.


Table 1: Summary of Review Papers Based on Parallel Processing and Multiprocessing in Python

Ref. | Year | Objectives | Methods / Tools | Research Problem | Applied Field
[23] | 2020 | Create parallel processing pipelines that can be shared and keep track of all analyses | NeuroPycon | Multi-modal and reproducible brain connectivity pipelines | Health care (brain pipelines)
[24] | 2020 | Reduce the computation time of YouTube Data API requests by running them in parallel | Python | Requests from the YouTube Data API take time | Multimedia (YouTube)
[25] | 2019 | Check the performance of the parallel version of the Aho-Corasick algorithm against its serial version | Python | Exact string matching (Aho-Corasick) | Electronic dictionary
[26] | 2019 | Create multi-objective optimizations with quick-prototyping facilities and a vast number of libraries, supporting multicore and cluster systems for parallel processing, analysis, and visualization | jMetalPy | Multi-objective optimization with meta-heuristics | Engineering, economics, and logistics
[27] | 2019 | Build a dynamic component dependency graph that can then run effectively on one or more processors | Parsl | Encoding parallelism | Biology, cosmology, and materials science
[28] | 2021 | Compare Bayesian meta-analysis of fMRI image data with multiprocessing against serial analysis | BayesFactorFMRI | Perform Bayesian second-level analysis and Bayesian meta-analysis | Image processing
[29] | 2018 | Combine Twitter streams with Python multiprocessing procedures employing data parallelization | Python | Asynchronous streaming, stream subscriptions | Multimedia (Twitter)
[30] | 2020 | Compare the performance of parallelized CPU cores | Python's Ray library | Large convolutional neural networks | Image processing
[31] | 2021 | Resource management in a distributed system raises issues relating to granular parallelism, management of software environments, and adaptation | Parsl | Dependency management planning and complex resource management strategies | Complex applications at supercomputer scale
[32] | 2020 | Create a message-passing Python linking interface to facilitate parallel computing | Python | Big data | Media, data science, physics, healthcare
[33] | 2018 | Write parallel Python apps with CharmPy, which rely on a distributed, asynchronous, execution-driven model | CharmPy | Distributed asynchronous execution-driven programming | Computation and communication
[34] | 2019 | Examine thread and multi-process efficiency in Python | Python | Thread and process efficiency in Python | Security
[35] | 2019 | Create a console application to help people understand IoT | A Python command-line application | Difficulties of IoT applications for users | Console application (IoT)
[36] | 2019 | Use genetic algorithm parallel processing adapted to a master–slave scheme | Master–slave | Genetic algorithm speed and load distribution | Genetic algorithms
[37] | 2019 | Manage one of NetLogo's examples from a Python interactive environment and perform a parallel global sensitivity analysis | PyNetLogo | Global sensitivity analysis of NetLogo | Controlling, communication, and linking
[38] | 2018 | Code a big data system in C++ | Python | Data analysis and big data | Finance
[39] | 2018 | Parallel implementation of the Max-p problem | Python | Max-p area efficiency and synergy | Geospatial
[40] | 2019 | Facilitate parallelism and run on distributed infrastructures | Python | Users cannot deal with distributed and parallel computing problems directly | Programming language
[41] | 2020 | Calculate real and complex electronic linear responses at the Hartree–Fock and Kohn–Sham density functional levels without sacrificing computational effectiveness | Python | Execution in cluster environments with high efficiency | Spectroscopy
[42] | 2018 | Parallelize the classification of data patterns for the identification of wireless attacks | Karoo GP | Classify data patterns for wireless attacks | Security
[43] | 2019 | Reduce computation time through parallelization and greatly improve handling and aggregation of results | Python | Software instances, modular and scalable | PowerFactory
[44] | 2017 | Introduce an extensive and diverse parallel corpus with the documentation strings ("docstrings") of one hundred thousand Python functions drawn from open-source GitHub repositories | Python | The nature and production of code documentation | Documentation and code generation

6. Discussion
The increased use of Python and other high-level programming languages calls for intuitive interfaces to libraries and application components written in lower-level languages. In combination with the growing need for parallel computation (for example, because of big data and the end of Moore's law), this shift toward orchestration instead of direct execution calls for a revision of how parallelism in programs is expressed. In order to compare the performance of Python libraries in the parallel processing and multiprocessing fields, we reviewed papers that used Python libraries for parallel processing and multiprocessing. As a result, some researchers proposed new Python-based software, such as VeloxChem, for calculating real and complex electronic response functions. Moreover, Parsl, a parallel scripting library, is used by Babuji [19] for constructing a dynamic dependency graph of components. Shaffer [31], on the other hand, worked with Parsl scripting on issues relating to granular parallelism, management of software environments, and adaptation to computing resources. Furthermore, Python parallel scripting has been used for Internet of Things (IoT) console applications. This survey concentrates on open-source Python libraries used in different parallel processing and multiprocessing systems. The feedback on these libraries is extensive, so they can be widely used in the future.

7. Conclusion
This paper demonstrated that Python is a modern, mature, complete, and scalable scripting language that is well suited to scientific research and education in power system analysis. Python provides the resources needed to run parallel code on multicore machines. Throughout the paper, several Python libraries used for parallel processing and multiprocessing in various approaches were discussed.


Multimedia, websites, massive core counts on supercomputers, genetic algorithms, attack detection, and more were all reviewed in the related-work section. It was shown that Python has features for spreading work between multiple processes, allowing it to take advantage of multiple CPU cores and larger quantities of usable machine memory.
8. References
1. S. M. Lim, A. B. M. Sultan, M. N. Sulaiman, A. Mustapha, and K. Y. Leong, "Crossover and mutation operators of genetic algorithms," Int. J. Mach. Learn. Comput., vol. 7, no. 1, pp. 9–12, 2017, doi: 10.18178/ijmlc.2017.7.1.611.
2. I. V. Kotenko, I. B. Saenko, and A. G. Kushnerevich, "Architecture of the parallel big data processing system for security monitoring of internet of things networks," SPIIRAS Proc., vol. 4, no. 59, pp. 5–30, 2018, doi: 10.15622/sp.59.1.
3. A. Ebrahim, J. A. Lerman, B. O. Palsson, and D. R. Hyduke, "COBRApy: COnstraints-Based Reconstruction and Analysis for Python," BMC Syst. Biol., vol. 7, 2013, doi: 10.1186/1752-0509-7-74.
4. L. D. Dalcin, R. R. Paz, P. A. Kler, and A. Cosimo, "Parallel distributed computing using Python," Adv. Water Resour., vol. 34, no. 9, pp. 1124–1139, 2011, doi: 10.1016/j.advwatres.2011.04.013.
5. A. Rogohult, "Benchmarking Python Interpreters," KTH, Skolan för datavetenskap och kommunikation (CSC), p. 119, 2016.
6. W. Y. Je, S. Teh, and H. R. Chern, "A parallelizable chaos-based true random number generator," 2018.
7. W. K. Lee, R. C. W. Phan, W. S. Yap, and B. M. Goi, "SPRING: a novel parallel chaos-based image encryption scheme," Nonlinear Dyn., vol. 92, no. 2, pp. 575–593, 2018, doi: 10.1007/s11071-018-4076-6.
8. T. Wang and Q. Kemao, "Parallel computing in experimental mechanics and optical measurement: A review (II)," Opt. Lasers Eng., vol. 104, pp. 181–191, 2018, doi: 10.1016/j.optlaseng.2017.06.002.
9. E. Tejedor et al., "PyCOMPSs: Parallel computational workflows in Python," Int. J. High Perform. Comput. Appl., vol. 31, no. 1, pp. 66–82, 2017, doi: 10.1177/1094342015594678.
10. R. Filguiera, A. Krause, M. Atkinson, I. Klampanos, and A. Moreno, "Dispel4py: A Python framework for data-intensive scientific computing," Int. J. High Perform. Comput. Appl., vol. 31, no. 4, pp. 316–334, 2017, doi: 10.1177/1094342016649766.
11. S. R. M. Zebari and N. O. Yaseen, "Effects of Parallel Processing Implementation on Balanced Load-Division Depending on Distributed Memory Systems Client/Server Principles," vol. 5, no. 3, 2011.
12. M. Wilde, M. Hategan, J. M. Wozniak, B. Clifford, D. S. Katz, and I. Foster, "Swift: A language for distributed parallel scripting," Parallel Comput., vol. 37, no. 9, pp. 633–652, 2011, doi: 10.1016/j.parco.2011.05.005.
13. S. Y. Yu, S. R. Chhetri, A. Canedo, P. Goyal, and M. A. Al Faruque, "Pykg2vec: A Python library for knowledge graph embedding," arXiv, vol. 22, pp. 1–6, 2019.
14. C. Evangelinos and C. Hill, "Cloud Computing for Parallel Scientific HPC Applications: Feasibility of Running Coupled Atmosphere-Ocean Climate Models on Amazon's EC2," Ratio, vol. 2, Jan. 2008.
15. K. Asanović et al., "The Landscape of Parallel Computing Research: A View from Berkeley," pp. 1–54, 2006.
16. T. Kim, "Survey and Performance Test of Python-based Libraries for Parallel Processing," pp. 1–4, 2020.
17. N. Singh, L. M. Browne, and R. Butler, "Parallel astronomical data processing with Python: Recipes for multicore machines," Astron. Comput., vol. 2, pp. 1–10, 2013, doi: 10.1016/j.ascom.2013.04.002.
18. B. Lewis, I. Smith, M. Fowler, and J. Licato, "The robot mafia: A test environment for deceptive robots," 28th Mod. Artif. Intell. Cogn. Sci. Conf. (MAICS), pp. 189–190, 2017, doi: 10.1145/1235.
19. Y. Babuji et al., "Introducing Parsl: A Python Parallel Scripting Library," pp. 1–2, 2017. [Online]. Available: https://doi.org/10.5281/zenodo.891533
20. P. Moritz et al., "Ray: A Distributed Framework for Emerging AI Applications," USENIX Symp. Oper. Syst. Des. Implement. (OSDI), 2018.
21. J. Sampé, G. Vernik, M. Sánchez-Artigas, and P. García-López, "Serverless data analytics in the IBM cloud," Proc. 2018 ACM/IFIP/USENIX Middleware Conf. (Industrial Track), pp. 1–8, 2018, doi: 10.1145/3284028.3284029.
22. M. Jaxa-Rozen and J. Kwakkel, "PyNetLogo: Linking NetLogo with Python," J. Artif. Soc. Soc. Simul., vol. 21, Mar. 2018, doi: 10.18564/jasss.3668.
23. D. Meunier et al., "NeuroPycon: An open-source Python toolbox for fast multi-modal and reproducible brain connectivity pipelines," Neuroimage, vol. 219, 2020, doi: 10.1016/j.neuroimage.2020.117020.
24. J. Kready, S. A. Shimray, M. N. Hussain, and N. Agarwal, "YouTube data collection using parallel processing," Proc. 2020 IEEE 34th Int. Parallel Distrib. Process. Symp. Workshops (IPDPSW), pp. 1119–1122, 2020, doi: 10.1109/IPDPSW50202.2020.00185.
25. J. Niruthika and S. Pranavan, "Implementation of parallel Aho-Corasick algorithm in Python," pp. 222–233, 2019.
26. A. Benítez-Hidalgo, A. J. Nebro, J. García-Nieto, I. Oregi, and J. Del Ser, "jMetalPy: A Python framework for multi-objective optimization with metaheuristics," arXiv, 2019.
27. Y. Babuji et al., "Parsl: Pervasive parallel programming in Python," HPDC 2019 - Proc. 28th Int. Symp. High-Performance Parallel Distrib. Comput., pp. 25–36, 2019, doi: 10.1145/3307681.3325400.
28. H. Han, "BayesFactorFMRI: Implementing Bayesian Second-Level fMRI Analysis with Multiple Comparison Correction and Bayesian Meta-Analysis of fMRI Images with Multiprocessing," J. Open Res. Softw., vol. 9, no. 1, pp. 1–7, 2021, doi: 10.5334/jors.328.
29. G. Heine, T. Woltron, and W. Alexander, "Towards a Scalable Data-Intensive Text Processing Architecture with Python and Cassandra," 2018.
30. D. Datta, D. Mittal, N. P. Mathew, and J. Sairabanu, "Comparison of Performance of Parallel Computation of CPU Cores on CNN Model," Int. Conf. Emerg. Trends Inf. Technol. Eng. (ic-ETITE), pp. 1–8, 2020, doi: 10.1109/ic-ETITE47903.2020.142.
31. T. Shaffer, Z. Li, B. Tovar, and Y. Babuji, "Lightweight Function Monitors for Fine-Grained Management in Large Scale Python Applications," pp. 1–11.
32. H. Park, J. Denio, J. Choi, and H. Lee, "MpiPython: A robust Python MPI binding," Proc. 3rd Int. Conf. Inf. Comput. Technol. (ICICT), pp. 96–101, 2020, doi: 10.1109/ICICT50521.2020.00023.
33. J. J. Galvez, K. Senthil, and L. Kale, "CharmPy: A Python Parallel Programming Model," Proc. IEEE Int. Conf. Cluster Comput. (CLUSTER), pp. 423–433, 2018, doi: 10.1109/CLUSTER.2018.00059.
34. R. Eggen and E. M. Eggen, "Thread and Process Efficiency in Python," pp. 32–36.
35. M. R. Rizqullah, A. R. Anom Besari, I. Kurnianto Wibowo, R. Setiawan, and D. Agata, "Design and implementation of middleware system for IoT devices based on Raspberry Pi," Int. Electron. Symp. Knowl. Creat. Intell. Comput. (IES-KCIC), pp. 229–234, 2019, doi: 10.1109/KCIC.2018.8628528.
36. V. Skorpil, V. Oujezsky, P. Cika, and M. Tuleja, "Parallel Processing of Genetic Algorithms in Python Language," Prog. Electromagn. Res. Symp. (PIERS), pp. 3727–3731, 2019, doi: 10.1109/PIERS-Spring46901.2019.9017332.
37. M. Jaxa-Rozen and J. H. Kwakkel, "PyNetLogo: Linking NetLogo with Python," vol. 21, no. 2.
38. P. Zhang, Y. Gao, and X. Shi, "QuantCloud: A software with automated parallel Python for quantitative finance applications," Proc. 2018 IEEE 18th Int. Conf. Softw. Qual. Reliab. Secur. (QRS), pp. 388–396, 2018, doi: 10.1109/QRS.2018.00052.
39. V. Sindhu, "Exploring Parallel Efficiency and Synergy for Max-P Region Problem Using Python," ScholarWorks @ Georgia State University, 2018.
40. F. Real, A. Batou, T. Ritto, and C. Desceliers, "Stochastic modeling for hysteretic bit–rock interaction of a drill string under torsional vibrations," J. Vib. Control, p. 107754631982824, 2019, doi: 10.1177/ToBeAssigned.
41. Z. Rinkevicius et al., "VeloxChem: A Python-driven density-functional theory program for spectroscopy simulations in high-performance computing environments," Wiley Interdiscip. Rev. Comput. Mol. Sci., vol. 10, no. 5, pp. 1–14, 2020, doi: 10.1002/wcms.1457.
42. V. Canh Vu and T. H. Hoang, "Detect Wi-Fi Network Attacks Using Parallel Genetic Programming," Proc. 2018 10th Int. Conf. Knowl. Syst. Eng. (KSE), pp. 370–375, 2018, doi: 10.1109/KSE.2018.8573378.
43. S. Khan and A. Latif, "Python based scenario design and parallel simulation method for transient rotor angle stability assessment in PowerFactory," 2019 IEEE Milan PowerTech, pp. 1–6, 2019, doi: 10.1109/PTC.2019.8810949.
44. A. V. M. Barone and R. Sennrich, "A parallel corpus of Python functions and documentation strings for automated code documentation and code generation," arXiv, 2017.
