2010
The Journal of Supercomputing
The optimization of the execution time of a parallel algorithm can be achieved through the use of an analytical cost model function representing the running time. Typically, the cost function includes a set of parameters that model the behavior of the system and the algorithm. To reach an optimal execution, some of these parameters must be fitted according to the input problem and to the target architecture. An optimization problem can be stated in which the modeled execution time of the algorithm is used to estimate these parameters. Because of the large number of variable parameters in the model, analytical minimization techniques are ruled out. Exhaustive search can solve the optimization problem, but as the number of parameters or the size of the computational system grows, it becomes impracticable due to time restrictions. Approximation methods can also be used to guide the search; however, their dependence on the algorithm being modeled and the poor quality of their solutions, caused by the many local optima of the objective functions, are significant drawbacks. The problem becomes particularly difficult in complex systems hosting a large number of heterogeneous processors and solving non-trivial scientific applications. Metaheuristics allow valid approaches to be developed for general problems with a large number of parameters. A well-known advantage of metaheuristic methods is their ability to obtain high-quality solutions in short running times while maintaining generality. We propose combining the parameterized analytical cost model function with metaheuristic minimization methods, yielding a practical new alternative for minimizing the parallel execution time in complex systems. The success of the proposed approach is shown with two different algorithmic schemes on parallel heterogeneous systems.
Furthermore, the development of a general framework allows us to easily develop and experiment with different metaheuristics to adjust them to particular problems.
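As a concrete illustration of coupling a parameterized cost model with a metaheuristic, the sketch below minimizes a hypothetical analytical run-time model with simulated annealing. The model `model_time`, its parameters (processor count `p`, block size `b`), and all constants are invented for the example and are not taken from the paper.

```python
import math
import random

def simulated_annealing(cost, init, neighbor, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Generic simulated-annealing minimizer for a parameter vector."""
    rng = random.Random(seed)
    current = init
    best = current
    t = t0
    for _ in range(iters):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / t) to escape local optima.
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best

# Hypothetical analytical cost model: predicted run time as a function
# of processor count p and block size b (purely illustrative numbers).
def model_time(params):
    p, b = params
    return 1000.0 / (p * b) + 0.01 * p + 0.001 * b * b

def neighbor(params, rng):
    p, b = params
    return (max(1, p + rng.choice([-1, 1])), max(1, b + rng.choice([-4, 4])))

best = simulated_annealing(model_time, (2, 16), neighbor)
```

Because the incumbent `best` starts at the initial point and is only replaced by strictly better solutions, the returned parameters are never worse than the starting guess.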
2012
Abstract: Classical scheduling formulations typically assume static resource requirements and focus on deciding when to start the problem activities so as to optimize some performance metric. In many practical cases, however, the decision maker can choose the resource assignment as well as the starting times: this is a far-from-trivial task, with deep implications for the quality of the final schedule. Joint resource assignment and scheduling problems are extremely challenging from a computational perspective.
Time Management, 2012
Control Engineering Practice, 1996
Complex systems are large applications, typically running on distributed, heterogeneous networks, driven by a number of distinct constraints and desiderata on goals such as performance, real-time behavior, and fault tolerance. These requirements frequently conflict, and satisfaction of these design objectives interacts strongly with the assignment of system tasks to processors. The NSWC design framework DESTINATION provides an assignment module that can be used to optimize the system, as measured by the value of a weighted combination of objective cost functions. For even modest-sized systems and networks, the assignment space is too large to search exhaustively; however, numerous algorithms generate heuristically good assignments. Yet compile-time evaluation of many interesting design factors, even those clearly related to assignment, is impossible without some estimate of the schedule. This paper therefore discusses approaches for determining a reasonable "pseudo-schedule" for a given system, network, and assignment, and the use of this pseudo-schedule to simulate execution when evaluating cost functions.
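The idea of scoring an assignment through a pseudo-schedule and a weighted combination of cost objectives can be sketched as follows. The greedy scheduler, the two objectives (makespan and load imbalance), and the weights are illustrative assumptions, not the DESTINATION framework's actual cost functions.

```python
# Sketch: evaluate a task-to-processor assignment by building a simple
# pseudo-schedule and scoring it as a weighted sum of objective costs.
# All task data and weights below are invented for the example.

def pseudo_schedule(tasks, assignment, n_procs):
    """Greedy pseudo-schedule: each task runs on its assigned processor
    as soon as that processor becomes free, in the given task order."""
    free_at = [0.0] * n_procs
    finish = {}
    for task, duration in tasks:
        p = assignment[task]
        free_at[p] += duration
        finish[task] = free_at[p]
    return finish, max(free_at)

def weighted_cost(tasks, assignment, n_procs, weights):
    """Weighted combination of objective costs computed on the
    pseudo-schedule: here, makespan plus processor-load imbalance."""
    _, makespan = pseudo_schedule(tasks, assignment, n_procs)
    load = [0.0] * n_procs
    for task, duration in tasks:
        load[assignment[task]] += duration
    imbalance = max(load) - min(load)
    return weights["makespan"] * makespan + weights["imbalance"] * imbalance

tasks = [("a", 4.0), ("b", 2.0), ("c", 3.0)]
cost = weighted_cost(tasks, {"a": 0, "b": 1, "c": 1}, 2,
                     {"makespan": 1.0, "imbalance": 0.5})
```

With the assignment above, processor 0 carries 4.0 units and processor 1 carries 5.0, giving a makespan of 5.0, an imbalance of 1.0, and a weighted cost of 5.5.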
Proceedings of International Network Optimization …
Advances in microprocessors and computer networks have made distributed systems a reality. However, exploiting the full potential of these systems requires efficient allocation of the tasks comprising a distributed application to the available processors. This problem is known to be NP-hard and therefore intractable as soon as the number of tasks and/or processors exceeds a few units. This paper presents an optimal, memory-efficient algorithm for allocating an application program onto the processors of a distributed system to minimize the program completion time. The algorithm is derived from the well-known Branch-and-Bound method, with some modifications to reduce its computational time. Some experimental results are given to show the effectiveness of the proposed algorithm.
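A minimal depth-first Branch-and-Bound for this kind of allocation problem might look like the sketch below; the lower bound and the makespan objective are illustrative choices, not the paper's exact formulation. Depth-first search keeps only the current partial assignment and the incumbent, echoing the memory-efficiency goal.

```python
def branch_and_bound(durations, n_procs):
    """Assign each task to a processor, minimizing completion time
    (makespan). Memory-light depth-first B&B: no open-node queue,
    only the current partial assignment and the incumbent are kept."""
    best = [sum(durations)]        # incumbent makespan (trivial upper bound)
    best_assign = [None]
    loads = [0.0] * n_procs
    assign = [0] * len(durations)

    def lower_bound(i):
        # Valid bound: current longest load, or the average load if all
        # remaining work were spread perfectly over the processors.
        remaining = sum(durations[i:])
        return max(max(loads), (sum(loads) + remaining) / n_procs)

    def dfs(i):
        if i == len(durations):
            if max(loads) < best[0]:
                best[0] = max(loads)
                best_assign[0] = assign.copy()
            return
        if lower_bound(i) >= best[0]:
            return                 # prune: this branch cannot beat incumbent
        for p in range(n_procs):
            loads[p] += durations[i]
            assign[i] = p
            dfs(i + 1)
            loads[p] -= durations[i]  # undo and try the next processor

    dfs(0)
    return best[0], best_assign[0]
```

For example, tasks with durations [3, 3, 2, 2] on two processors admit an optimal makespan of 5 (one 3 and one 2 per processor), which the search finds while pruning dominated branches.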
SICE Journal of Control, Measurement, and System Integration
Resource allocation and scheduling under scarce resources and limited time are critical and challenging tasks, not only because of the complex situations and diverse needs involved, but also because of unpredictable occurrences during the dynamic process. This work proposes an agent-based framework that integrates resource allocation and scheduling under a set of limitations and can respond to contingent changes as a dynamic system. We focus on the following research questions and formulate them as a constraint satisfaction problem: how many resources should be assigned and dispatched to which location, in which sequence, and under what process schedule, subject to time, resource-availability, and ability-matching limits. We first give the corresponding formal definition, and then combine a real-coded genetic algorithm with dynamic scheduling of multi-functional resource assignment to tackle these research questions. In addition, we exercise the model on a small made-up case to suggest some preliminary scenarios. In the future, this framework could be applied to real-life emergency situations with empirical data, both for training purposes and to provide insight for relevant policy makers.
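A real-coded genetic algorithm of the kind mentioned can be sketched as follows; the blend-style crossover, Gaussian mutation, tournament selection, and the toy resource-demand fitness are all illustrative assumptions, not the authors' exact operators.

```python
import random

def real_coded_ga(fitness, bounds, pop_size=30, gens=60, pc=0.9, pm=0.1, seed=1):
    """Minimal real-coded GA: chromosomes are real vectors; blend-style
    crossover, Gaussian mutation, tournament selection, and elitism."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, i):
        lo, hi = bounds[i]
        return min(max(x, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = min(pop, key=fitness)
        nxt = [elite[:]]                       # elitism: keep the best
        while len(nxt) < pop_size:
            # Two size-3 tournaments pick the parents.
            a, b = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))
            if rng.random() < pc:              # blend-style crossover
                child = [clip(a[i] + rng.uniform(-0.5, 1.5) * (b[i] - a[i]), i)
                         for i in range(dim)]
            else:
                child = a[:]
            if rng.random() < pm:              # Gaussian mutation on one gene
                i = rng.randrange(dim)
                child[i] = clip(child[i] + rng.gauss(0, 0.1), i)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy resource-assignment surrogate: minimize squared deviation of the
# allocated amounts from a hypothetical demand vector.
demand = [2.0, 5.0]
best = real_coded_ga(lambda x: sum((xi - d) ** 2 for xi, d in zip(x, demand)),
                     bounds=[(0.0, 10.0), (0.0, 10.0)])
```

Because the elite individual survives every generation, the best fitness in the population never degrades as the search proceeds.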
Wireless Communications and Mobile Computing
Resource management efficiency can be a beneficial step toward optimizing power consumption in software-hardware integrated systems. Languages such as C, C++, and Fortran have been extremely popular for dealing with optimization, memory management, and other resource-management concerns. We investigate novel algorithmic architectures capable of optimizing resource requirements and increasing energy efficiency. The experimental results obtained with C++ can be extended to other programming languages as well. We emphasize the inherent drawbacks of the memory management operators. These operators are intended to be extremely generic in their application, just as the concept of dynamic memory is. As a result, they are unable to take advantage of the various optimization techniques and opportunities that specific use cases present. Each source code file exhibits its own distinct memory-usage pattern, which can be used to speed up memory management routines. Such concepts are frequently time-c...
International Journal of Engineering and Advanced Technology, 2019
Optimization problems differ from other mathematical problems in that they seek solutions that are ideal, or near ideal, with respect to stated goals. Such problems are not solved in one step; instead, a sequence of steps is followed to reach the solution: defining the problem, constructing and solving a model, and evaluating and implementing the solution. This paper presents an overall outlook on how an optimization problem can be solved.
2012
The ARTOS research project at the University of Ulm explores the runtime administration of concurrent soft real-time applications. Available resources need to be assigned to applications, taking into account their priorities and utility. The applications support different modes with associated resource requirements; higher modes reflect better quality of the outcome. An algorithm using a dynamic programming approach was designed to find the optimal assignment of resources to applications while achieving the highest possible utility. The algorithm was implemented in Java and in C as part of our preliminary work. Its evaluation shows which factors have the most pronounced impact on the performance of the optimization, and investigates to what extent the algorithm remains suitable for deployment in real-time systems as the number of applications, the number of their modes, and the available resource limits vary.
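The mode-assignment problem described is structurally a multiple-choice knapsack, which admits the kind of dynamic-programming solution sketched below; the data layout and the toy applications are invented for illustration and are not taken from ARTOS.

```python
def assign_modes(apps, capacity):
    """apps: one list per application of (resource_demand, utility) modes,
    where mode (0, 0) means 'not running'. Returns the maximum total
    utility achievable within the resource capacity, via dynamic
    programming over (application index, resources used) -- a
    multiple-choice knapsack: exactly one mode per application."""
    NEG = float("-inf")
    best = [NEG] * (capacity + 1)
    best[0] = 0.0                       # no apps placed, no resources used
    for modes in apps:
        nxt = [NEG] * (capacity + 1)
        for used in range(capacity + 1):
            if best[used] == NEG:
                continue                # unreachable resource level
            for demand, utility in modes:
                if used + demand <= capacity:
                    nxt[used + demand] = max(nxt[used + demand],
                                             best[used] + utility)
        best = nxt
    return max(best)

apps = [
    [(0, 0), (2, 3), (4, 5)],   # app 1: off, low mode, high mode
    [(0, 0), (3, 4)],           # app 2: off, single running mode
]
```

With a capacity of 5, running app 1 in its low mode (2 units, utility 3) plus app 2 (3 units, utility 4) yields the optimal total utility of 7, beating app 1's high mode alone (utility 5).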
The Multiprocessor Task Scheduling Problem (MTSP) is an NP-hard problem and one of the most intensively studied problems in the wide area of optimization. A number of approximation algorithms and heuristics proposed in the literature can yield good solutions, but as the number of tasks increases, so does the complexity of the problem. Several optimization techniques can be used to solve extremely large instances with millions of tasks. In this paper we discuss the use of various optimization techniques, such as heuristic algorithms, genetic algorithms, and particle swarm optimization (PSO), to solve the MTSP.
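As one example of the heuristics discussed, the classical Longest-Processing-Time-first rule for makespan minimization fits in a few lines; this is a generic illustration of the heuristic family, not an algorithm from any specific paper.

```python
def lpt_schedule(durations, n_procs):
    """Longest-Processing-Time-first heuristic: sort tasks by decreasing
    duration and greedily place each on the currently least-loaded
    processor. A classical approximation for makespan minimization."""
    loads = [0.0] * n_procs
    assignment = {}
    for task in sorted(range(len(durations)), key=lambda t: -durations[t]):
        p = min(range(n_procs), key=lambda q: loads[q])
        loads[p] += durations[task]
        assignment[task] = p
    return assignment, max(loads)
```

For durations [2, 2, 3, 3] on two processors, LPT places one 3 and one 2 on each processor for a makespan of 5, which happens to be optimal here; in general LPT only guarantees a constant-factor approximation.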