2006
The document "Advances in Information Technologies for Electromagnetics" explores the evolution of the design and implementation of electromagnetic circuits and antennas, highlighting how the increased complexity of modern applications necessitates computational and numerical methods. The content spans web services architecture, grid computing, complex computational electromagnetics, hybrid techniques, and future work in electromagnetic simulation, with the aim of improving performance and efficiency in electromagnetic engineering.
IFIP Advances in Information and Communication Technology, 1996
2010
Large-scale architectures such as Grid Computing provide resources for handling complex problems: vast collections of stored data, specialized processing, and collaborative interaction between distributed communities. Several scientific applications already run on Grid Computing architectures; in most cases, however, applications must be adapted to better exploit the scalability, heterogeneity, and pervasiveness of Grid Computing. Large-scale Computational Electromagnetics problems strain available hardware capacities. The parallel approaches proposed for Computational Electromagnetics (CEM) aim to tackle both the high complexity of the application and its interaction with these large-scale architectures. In this work, the adaptation and implementation of Computational Electromagnetics solutions in a grid-computing-enabled environment is described in terms of scientific results and of the performance afforded by the architecture.
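To make this kind of adaptation concrete: below is a minimal sketch, not taken from the paper, of the slab-decomposition and ghost-cell-exchange pattern that ports of CEM solvers to distributed architectures typically rely on. It assumes mpi4py and NumPy; the 1-D FDTD update and all sizes are illustrative.

```python
# A minimal sketch (not the paper's code) of slab decomposition with
# ghost-cell exchange for an FDTD-style CEM solver on a cluster/grid.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_LOCAL = 200                          # grid cells owned by this rank
ez = np.zeros(N_LOCAL + 2)             # E-field slab plus 2 ghost cells
hy = np.zeros(N_LOCAL + 2)             # H-field slab plus 2 ghost cells

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(500):
    # H update (Yee scheme, normalised units) on interior cells; the
    # right-hand ez ghost cell was filled at the end of the last step.
    hy[1:-1] += 0.5 * (ez[2:] - ez[1:-1])
    # Send our right edge of hy to the right neighbour; receive the
    # left neighbour's edge into our left ghost cell.
    comm.Sendrecv(hy[-2:-1], dest=right, recvbuf=hy[0:1], source=left)
    # E update needs the left hy ghost cell just received.
    ez[1:-1] += 0.5 * (hy[1:-1] - hy[0:-2])
    # Refresh the right ez ghost cell for the next H update.
    comm.Sendrecv(ez[1:2], dest=left, recvbuf=ez[-1:], source=right)
    if rank == 0:
        ez[1] = np.sin(0.1 * step)     # simple hard source
```

Launched with, say, mpirun -n 4, each rank touches only its own slab plus two ghost cells, which is what lets the computation scale across distributed resources.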
Memorias De La Ula, 2009
Large-scale architectures such as Grid Computing provide resources for handling complex problems: vast collections of stored data, specialized processing, and collaborative interaction between distributed communities. Several scientific applications already run on Grid Computing architectures; in most cases, however, applications must be adapted to better exploit the scalability, heterogeneity, and pervasiveness of Grid Computing.
IETE Technical Review, 2011
Spurious electromagnetic interactions within and between systems endanger, and may disturb, their proper functional performance. The prevention, elimination, or suppression of such electromagnetic interference (EMI) to a satisfactory level constitutes the engineering discipline of electromagnetic compatibility (EMC). Its objective is the preservation or recovery of the expected performance of electronic systems at minimal cost and with minimal disturbance of the systems' development process. The need to provide timely solutions, commensurate with the system's state of development, binds EMC tools, methodology, and management to efficient approximations, modeling, and predictions that rely on incomplete knowledge of the system, and to efficient measurement sequences. Lessons learned on EMC effort management, and the insights gained, are reviewed from the perspective of 50 years of EMC development.
IEEE Communications Magazine, 2004
2006 First European Conference on Antennas and Propagation, 2006
The ACE project initiated several integration activities between European institutions involved in the electromagnetic modeling of antennas with planar or conformal topologies. The goal of these activities was, and is, not to create a global software package integrating the software of all partners, but to initiate a long-term process of antenna software integration within the European antenna community. During the first two years of ACE, the integration activities were performed in several groups, each with a rather small number of partners. The groups were formed by partners who wanted to integrate a specific approach developed by one partner into the software code of another, thereby increasing the capability and efficiency of that code. This paper gives a short overview of all integration activities.
Recent Advances in Computational Science and Engineering, 2002
The development of large-scale multidisciplinary scientific applications for high-performance computers today involves managing the interaction between portions of the application developed by different groups. The CCA (Common Component Architecture) Forum is developing a component architecture specification for high-performance scientific computing, emphasizing scalable (possibly distributed) parallel computations. This paper examines the CCA software in sequential and parallel electromagnetics applications using unstructured adaptive mesh refinement (AMR). The CCA learning curve and the process of turning Fortran 90 code (a driver routine and an AMR library) into two components are described. The performance of the original applications and of the componentized versions is measured and shown to be comparable.
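As an illustration of the componentization the abstract describes, the following schematic Python sketch (the real CCA work uses SIDL/Babel bindings and Fortran 90; all names here are invented) shows the provides/uses-port pattern that lets the driver and the AMR library evolve independently:

```python
# A schematic illustration (not the CCA API itself) of provides/uses
# ports: the AMR library and the driver become components that talk
# only through a declared port, so either side can be swapped.
from abc import ABC, abstractmethod

class MeshPort(ABC):
    """Port interface the AMR component provides."""
    @abstractmethod
    def refine(self, error_estimates): ...
    @abstractmethod
    def current_mesh(self): ...

class AMRComponent(MeshPort):
    """Wraps the (originally Fortran 90) AMR library behind a port."""
    def __init__(self):
        self._mesh = ["level-0 uniform grid"]
    def refine(self, error_estimates):
        # Refine wherever the solver flags a large error (placeholder rule).
        self._mesh += [f"patch@{i}" for i, e in enumerate(error_estimates) if e > 0.1]
    def current_mesh(self):
        return list(self._mesh)

class DriverComponent:
    """The solver driver; it 'uses' a MeshPort without knowing the provider."""
    def __init__(self):
        self._mesh_port = None
    def connect(self, port: MeshPort):   # done by the framework in real CCA
        self._mesh_port = port
    def step(self):
        errors = [0.05, 0.2, 0.01]       # stand-in for a field-error estimate
        self._mesh_port.refine(errors)
        return self._mesh_port.current_mesh()

driver = DriverComponent()
driver.connect(AMRComponent())
print(driver.step())
```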
International Journal of Electrical Engineering Education, 1992
Time saved by the parallel, separate development of hardware and software for a given application is often lost when it comes to marrying the two components, but adequate training can prevent this, says Jonathan Bowen. Hardware and software are often designed separately and in parallel to speed up the design stage. A microprocessor-based project is described which demonstrates some of the possible pitfalls during the integration of these two components. Participants in the project are also introduced to the advanced development equipment which should be used for this purpose. The experiences of a group of postgraduate students who initially undertook the project are described. Keywords: microsystems, system integration, development systems. Because of time constraints, various aspects of industrial development work are often carried out in parallel. In microprocessor-based systems, an obvious breakdown of tasks is that between the production of software and hardware. However, when these components are produced separately, many difficulties can arise when they are integrated into a complete product; for instance, part of the system may have been incompletely or incorrectly specified. The purpose of the project described in this paper is to illustrate some of these problems and to allow hands-on experience of the sophisticated equipment needed for this purpose. The project was developed as part of an intensive one-week UK Science and Engineering Research Council vacation course on robot technology for postgraduate students. Owing to the short timescale of the project, a relatively simple product was designed and partially implemented for the participants. The hardware was completely designed apart from the choice of divider
Future Generation Computer Systems, 2008
The Grid-Enabled Computational Electromagnetics (GECEM) portal is a problem-solving environment that uses grid technologies to support scientists in accessing distributed resources for the solution of computational electromagnetics (CEM) problems. These resources include input files specifying the system geometry, and proprietary software and hardware for mesh generation and CEM simulation. Through a web-based grid portal, a user can access these resources, submit jobs, monitor and execute distributed grid applications, and collaboratively visualize the results of the CEM simulation. The portal thus lets geographically distributed partners share resources over Grid infrastructure in order to execute large-scale computational electromagnetics simulations and to analyze and visualize the results collaboratively. This paper describes a secure, web-based portal, built on the GridSphere framework, composed of a number of JSR-168 compliant portlets that deploy back-end services for the discovery and management of distributed resources and for invoking mesh-generation and CEM-simulation services. The paper also describes how security is achieved through the Grid Security Infrastructure and single sign-on.
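The following hypothetical sketch (invented names, not GECEM code; JSR-168 portlets are Java, so this abstracts only the back-end workflow in Python) illustrates the flow the abstract describes: a single sign-on yields a delegated proxy credential that every subsequent service invocation, mesh generation, CEM simulation, and visualization, reuses.

```python
# A hypothetical orchestration sketch of the portal's back-end workflow.
# The essence of GSI single sign-on: log in once, obtain a delegated
# credential, and let every service call carry it.
class GridSession:
    """Stands in for a GSI proxy credential obtained at portal login."""
    def __init__(self, user):
        self.user = user
        self.proxy = f"proxy-for-{user}"   # delegated short-lived credential

def invoke(service, session, **params):
    # In the real portal each portlet deploys a back-end service; here we
    # just show that every call reuses the same delegated proxy.
    print(f"[{service}] as {session.proxy} with {params}")
    return f"{service}-output"

session = GridSession("alice")                     # single sign-on
geometry = invoke("geometry-store", session, system="aircraft.igs")
mesh = invoke("mesh-generation", session, geometry=geometry)
field = invoke("cem-simulation", session, mesh=mesh, solver="FDTD")
invoke("collaborative-visualization", session, results=field)
```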
Wiley Series in Communications Networking & Distributed Systems, 2003
1995
We describe our development of a "real world" electromagnetic application on distributed computing systems. A computational electromagnetics (CEM) simulation for radar cross-section (RCS) modeling of full-scale airborne systems has been ported to three networked workstation clusters: an IBM RS/6000 cluster with Ethernet connection; a DEC Alpha farm connected by a FDDI-based Gigaswitch; and an ATM-connected SUN IPX testbed.
Grid Computing. Wiley InterScience, Hoboken, NJ, 2003
Part A of this book, chapters 1 to 5, provides an overview and motivation for Grids. Further, chapter 37 is an illuminating discussion from 1992 of Metacomputing, a key early concept on which much of the Grid has been built. Chapter 2 is a short overview of the Grid reprinted from Physics Today [14]. Chapter 3 gives a detailed recent history of the Grid, while chapter 4 describes the software environment of the seminal I-WAY experiment at SC95. This conference project challenged participants, including for instance the Legion activity of chapter 10, to demonstrate Grid-like applications on an OC-3 backbone. Globus [6] grew out of the software needed to support these 60 applications at 17 sites; the human-intensive scheduling and security used by I-WAY showed the way to today's powerful approaches. Many of these applications employed visualization, including CAVE virtual reality stations as demonstrated in the early Metacomputing work of chapter 37. Chapter 5 brings us to 2002 and describes the experience of building Globus-based grids for NASA and DoE.

Metacomputing, and hence the Grid, was born in the High Performance Computing and Communication (HPCC) activities of the 1980s and 1990s. In particular, the multi-agency Grand (application) Challenges brought critical issues to the fore. We realized the importance of coupling programs together to solve multidisciplinary applications. Here we see the beginnings of "science as a team sport", elaborated in section 9. We saw applications like that of figure 2 that linked instruments, visualization, computing, and data [3], often in a pipeline; these were one of the first important classes of successful Grid applications. HPCC of course built on the growing use of parallel computing and of software infrastructure like PVM, MPI, HPF, and OpenMP to support scalable applications. Initially it was thought that the Grid would be most useful in extending these parallel computing approaches from tightly coupled clusters to geographically distributed systems. More important, however, has been the integration of loosely coupled systems, each component of which might itself be running in parallel on a low-latency parallel machine. The critical Grid task of managing these heterogeneous components as distributed systems scale up replaces the tight synchronization of the typically identical (in program but not data, as in the SPMD, single program multiple data, model) parts of a domain-decomposed parallel application.

Networking was injected into the Grid initially with initiatives like the Gigabit testbed program [15], whose projects (Aurora, Blanca, Casa, Nectar, and Vistanet) had dual goals: to investigate potential testbed network architectures, and to explore their usefulness for end users. Computational Grids got their name from analogies with other national or global infrastructure systems [16]. They share properties with electric power grids: both are ubiquitous, and in both cases one does not need to know the source of the (electric or computing) power (transformer or generator, PC or supercomputer) or the supplying organization. Computational Grids also have differences: a wider performance spectrum, more numerous and more heterogeneous services, complex access and security issues, and complex socio-political factors. As we move from distributed to Grid computing, we move from largely addressing geographical separation to a focus on the integration and management of software.

We are developing a new software engineering: the Grid Web services, described starting in chapter 7, provide a component model applicable to any software development project. Interesting features include intrinsic self-documentation of such
In this paper we report on the Grid activities being carried out at the Information and Communication Technologies Division of CIEMAT, together with a summary of its supercomputing activities. Most of these activities have been developed in the framework of the Sixth and Seventh Framework Programmes of the European Commission, some focused mainly on European communities, and others on Latin American ones as well.
Much of today's finite element-based software for electromagnetic field computation has been developed on an ad hoc basis, often as an outgrowth of ongoing research, whether at universities or at agencies like NASA. "Design" is seen as the execution of computational algorithms yielding the computed design, rather than as a holistic process that begins with the design of the software itself and ends in the computations using that software. As a result, finite element software development has not enjoyed the benefits of starting with a rigorous requirements analysis and going through the design process mandated in any rational software engineering methodology. This paper examines such a development and identifies the benefits that arise from it. Of particular note are the ability to maintain a list of software components from which we pick the one most appropriate to the problem being addressed, thereby giving users a choice suited to their particular computational environment, be it in methods or hardware; and the ability to implement a model of the design in UML and then transform it into source code using modern facilities. The latter makes porting to new programming languages easy and ensures that the finite element code is proper rather than simply working. Reengineering legacy software by transforming existing code to UML for analysis becomes possible, and the planning document leads to all goals being realised and is easily transformed into a user manual.

1. THE SOFTWARE ENGINEERING LIFECYCLE

A formal software development process is carefully defined in terms of the software engineering lifecycle for the best design and product outcome. The development of finite element field computation software, however, commenced in an era when the discipline of software engineering had not even begun, let alone matured. Indeed, much of the development was in languages such as FORTRAN that today are not considered the best. The focus was on the computation rather than on a continuous process from software development to computation using the
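As a toy illustration of the component-catalogue idea in the abstract above (the selection criteria and all names are assumptions, not the paper's design), a "design step" might select the finite element solver component best matched to the problem and the target hardware:

```python
# A toy sketch of component selection from a catalogue: pick the first
# solver component compatible with the matrix properties and hardware.
from dataclasses import dataclass

@dataclass
class SolverComponent:
    name: str
    symmetric_only: bool   # requires a symmetric system matrix
    parallel: bool         # can exploit a cluster

CATALOGUE = [
    SolverComponent("conjugate-gradient", symmetric_only=True, parallel=True),
    SolverComponent("sparse-lu", symmetric_only=False, parallel=False),
    SolverComponent("gmres", symmetric_only=False, parallel=True),
]

def pick_solver(matrix_symmetric: bool, have_cluster: bool) -> SolverComponent:
    """Return the first catalogued component compatible with the problem."""
    for comp in CATALOGUE:
        if comp.symmetric_only and not matrix_symmetric:
            continue
        if have_cluster and not comp.parallel:
            continue
        return comp
    raise LookupError("no suitable solver component")

print(pick_solver(matrix_symmetric=True, have_cluster=True).name)
```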
IEEE Transactions on Electromagnetic Compatibility, 1991
A knowledge-based approach for the modeling of electromagnetic interactions in a system is described. The purpose is to determine any unwanted EM effects that could jeopardize the safety and operation of the system. Modeling the interactions in a system requires the examination of the compounded and propagated effects of the electromagnetic fields. A useful EM modeling approach is one that is incremental and constraint based. The approach taken here subdivides the modeling task into two parts: a) the definition of the related electromagnetic topology and b) the propagation of the electromagnetic constraints. A prototype of some of the EM constraints has been implemented in Quintus Prolog under NeWS on a Sun workstation. User interaction is through a topology drawing tool and a stack-based attribute interface similar to the HyperCard(TM) interface of the Apple Macintosh computer.
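The following small Python sketch (illustrative only; the prototype described above was written in Quintus Prolog) captures the two-part approach: an electromagnetic topology, modeled here as zones separated by attenuating barriers, and the propagation of field-level constraints through it to a fixed point, flagging any victim whose susceptibility threshold is exceeded. All numeric values are assumed.

```python
# Illustrative constraint propagation over an EM topology graph.
barriers = {("exterior", "bay"): 40.0, ("bay", "box"): 20.0}  # attenuation, dB
sources = {"exterior": 120.0}          # emitter level, dBuV/m
susceptibility = {"box": 50.0}         # victim threshold, dBuV/m

def propagate(levels, barriers):
    """Relax field levels across barriers until a fixed point is reached."""
    levels = dict(levels)
    changed = True
    while changed:
        changed = False
        for (a, b), atten in barriers.items():
            for src, dst in ((a, b), (b, a)):   # barriers leak both ways
                if src in levels:
                    leaked = levels[src] - atten
                    if leaked > levels.get(dst, float("-inf")):
                        levels[dst] = leaked
                        changed = True
    return levels

levels = propagate(sources, barriers)
for zone, limit in susceptibility.items():
    level = levels.get(zone, float("-inf"))
    verdict = "VIOLATION" if level > limit else "ok"
    print(f"{zone}: {level} dBuV/m vs limit {limit} -> {verdict}")
```

Here the 120 dBuV/m exterior source reaches the box at 60 dBuV/m, exceeding its 50 dBuV/m threshold, so the interaction is flagged.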
The Grid has the potential to fundamentally change the way science and engineering are done. The aggregate power of the computational resources connected by the networks of the Grid surpasses that of any single supercomputer by many orders of magnitude. At the same time, our ability to carry out computations at the scale and level of detail required, for example, to study the Universe or to simulate a rocket engine, is severely constrained by available computing power. Hence, such applications should be one of the main driving forces behind the development of Grid computing. Grid computing is evolving as a new environment for solving hard problems. Linear and nonlinear optimization problems can be computationally costly. Resource access and organization is one of the most significant factors for grid computing; it requires a mechanism for making decisions automatically, with the ability to support the cooperation and scheduling of computing tasks. Grid computing is a dynamic research area which promises to provide a flexible structure for complex, dynamic, and distributed resource sharing and for sophisticated problem-solving environments. The Grid is not only a low-level infrastructure for supporting computation, but can also facilitate and enable information and knowledge sharing at higher semantic levels, supporting knowledge integration and distribution.