Papers by Iulian Grindeanu
REMORA v1.0
OSTI OAI (U.S. Department of Energy Office of Scientific and Technical Information), Oct 11, 2023
Accelerating Multivariate Functional Approximation Computation with Domain Decomposition Techniques
Lecture Notes in Computer Science, 2023

arXiv (Cornell University), Oct 12, 2022
Compactly expressing large-scale datasets through Multivariate Functional Approximations (MFA) can be critically important for analysis and visualization to drive scientific discovery. Tackling such problems requires scalable data-partitioning approaches to compute MFA representations in amenable wall-clock times. We introduce a fully parallel scheme to reduce the total work per task, combined with an overlapping additive-Schwarz-based iterative scheme, to compute an MFA with a tensor expansion of B-spline bases while preserving full-degree continuity across subdomain boundaries. While previous work on MFA has proven effective, the computational complexity of encoding large datasets on a single process can be severely prohibitive. Parallel algorithms for generating reconstructions from the MFA have had to rely on post-processing techniques to blend discontinuities across subdomain boundaries. In contrast, a robust constrained-minimization infrastructure that imposes higher-order continuity directly on the MFA representation is presented here. We demonstrate the effectiveness of the parallel approach with domain-decomposition solvers to minimize the subdomain error residuals of the decoded MFA and, more specifically, to recover continuity across non-matching boundaries at scale. An analysis of the presented scheme on analytical and scientific datasets in one, two, and three dimensions is given. Extensive strong- and weak-scaling results for large-scale datasets demonstrate the parallel speedup of the MPI-based algorithm implementation on leadership computing machines.
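The encoding step at the heart of an MFA — fitting B-spline bases to point data by least squares — can be illustrated in one dimension. The sketch below is not the authors' MFA implementation; it is a minimal NumPy example, assuming a clamped knot vector on [0, 1] and Cox-de Boor basis evaluation, of how a subdomain's samples might be compressed into a small set of spline control points.

```python
import numpy as np

def basis(i, k, t, x):
    """Cox-de Boor recursion: B-spline basis function B_{i,k} evaluated at x."""
    if k == 0:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    out = np.zeros_like(x)
    if t[i + k] > t[i]:
        out += (x - t[i]) / (t[i + k] - t[i]) * basis(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        out += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * basis(i + 1, k - 1, t, x)
    return out

def encode(x, y, n_ctrl, k=3):
    """Least-squares fit of n_ctrl degree-k B-spline control points to (x, y)."""
    # clamped knot vector on [0, 1]: length n_ctrl + k + 1
    t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, n_ctrl - k + 1), np.ones(k)])
    N = np.column_stack([basis(i, k, t, x) for i in range(n_ctrl)])
    coeffs, *_ = np.linalg.lstsq(N, y, rcond=None)
    return t, coeffs, N

# encode 200 samples of a smooth signal with only 10 control points
x = np.linspace(0.0, 0.999, 200)          # stay below 1 (half-open basis support)
y = np.sin(2 * np.pi * x)
t, coeffs, N = encode(x, y, n_ctrl=10)
max_err = np.max(np.abs(N @ coeffs - y))  # decoded-vs-input residual
```

The paper's scheme extends this idea to tensor-product bases in higher dimensions and couples the per-subdomain fits through an overlapping Schwarz iteration rather than solving one global system.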

Geometry and Mesh Representations for Ice Sheet Modeling
AGUFM, Dec 1, 2010
ABSTRACT Timothy J. Tautges, Argonne National Laboratory; Iulian Grindeanu, Argonne National Laboratory. Simulations of land-based ice sheets require definitions of the basal and top surface geometries of the ice sheet; these models are used to generate the discretizations/meshes on which the equations are solved, as well as to provide the tangent and normal evaluations used to formulate boundary conditions. We report on our implementation of a smooth surface representation of ice-bed data. Starting with point-based elevation data provided in the CReSIS dataset, a triangulation is constructed and then decimated to a reasonable number of triangles. Local Bezier patches are computed on the triangles, yielding a C1-continuous surface that provides point-location, tangent, and normal field evaluations. This representation is evaluated directly by unstructured mesh generation algorithms, producing an unstructured mesh of user-specified refinement for the ice sheet. Examples are given of meshes generated for the Jakobshavn ice sheet based on CReSIS bed elevation data. Figures: unstructured quadrilateral mesh of the Jakobshavn ice-sheet bed; close-up of the quadrilateral mesh, showing conformity with surface features.
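Evaluating a triangular Bezier patch of the kind used for the C1 bed surface is a standard de Casteljau recursion over barycentric coordinates. The sketch below is a generic illustration, not the authors' code; the degree-2 control net `ctrl` is hypothetical.

```python
import numpy as np

def eval_bezier_triangle(ctrl, deg, u, v, w):
    """De Casteljau evaluation of a triangular Bezier patch.

    ctrl maps barycentric index triples (i, j, k) with i + j + k == deg
    to control points; (u, v, w) are barycentric coordinates, u + v + w == 1.
    """
    pts = {idx: np.asarray(p, dtype=float) for idx, p in ctrl.items()}
    for r in range(deg, 0, -1):
        nxt = {}
        for i in range(r):
            for j in range(r - i):
                k = r - 1 - i - j
                nxt[(i, j, k)] = (u * pts[(i + 1, j, k)]
                                  + v * pts[(i, j + 1, k)]
                                  + w * pts[(i, j, k + 1)])
        pts = nxt
    return pts[(0, 0, 0)]

# a hypothetical degree-2 patch: a planar triangle with raised edge midpoints
ctrl = {
    (2, 0, 0): [0.0, 0.0, 0.0],
    (0, 2, 0): [1.0, 0.0, 0.0],
    (0, 0, 2): [0.0, 1.0, 0.0],
    (1, 1, 0): [0.5, 0.0, 0.4],
    (1, 0, 1): [0.0, 0.5, 0.4],
    (0, 1, 1): [0.5, 0.5, 0.4],
}
corner = eval_bezier_triangle(ctrl, 2, 1.0, 0.0, 0.0)   # reproduces ctrl[(2, 0, 0)]
center = eval_bezier_triangle(ctrl, 2, 1/3, 1/3, 1/3)
```

Tangents and normals, as needed for the boundary conditions above, follow by differencing the degree-(deg-1) points produced one level before the last recursion step.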

arXiv (Cornell University), Jan 3, 2023
B-spline models are a powerful way to represent scientific data sets with a functional approximation. However, these models can suffer from spurious oscillations when the data to be approximated are not uniformly distributed. Model regularization (i.e., smoothing) has traditionally been used to minimize these oscillations; unfortunately, it is sometimes impossible to sufficiently remove unwanted artifacts without smoothing away key features of the data set. In this article, we present a method of model regularization that preserves significant features of a data set while minimizing artificial oscillations. Our method varies the strength of a smoothing parameter throughout the domain automatically, removing artifacts in poorly constrained regions while leaving other regions unchanged. The proposed method selectively incorporates regularization terms based on first and second derivatives to maintain model accuracy while minimizing numerical artifacts. The behavior of our method is validated on a collection of two- and three-dimensional data sets produced by scientific simulations. In addition, a key tuning parameter is highlighted and its effects are presented in detail. This paper is an extension of our previous conference paper at the 2022 International Conference on Computational Science (ICCS) [1].
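The idea of varying the smoothing strength across the domain can be illustrated outside the B-spline setting with a simple discrete analogue. The sketch below is not the paper's method: it is a Whittaker-style penalized least-squares smoother, assuming a second-difference penalty whose weight `lam` is large only inside an unconstrained (data-free) region.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
truth = np.sin(2 * np.pi * x)
y = truth + 0.05 * rng.standard_normal(n)

w = np.ones(n)            # data weights: 0 where no data constrain the model
w[80:120] = 0.0           # a poorly constrained gap in the middle

# second-difference penalty with spatially varying smoothing strength
D = np.diff(np.eye(n), n=2, axis=0)         # (n-2, n) second-difference operator
lam = np.full(n - 2, 1e-3)                  # light smoothing where data exist
lam[78:118] = 10.0                          # heavy smoothing inside the gap

# minimize ||sqrt(w)*(z - y)||^2 + ||sqrt(lam)*(D z)||^2
A = np.diag(w) + D.T @ (lam[:, None] * D)
z = np.linalg.solve(A, w * y)

roughness_before = np.sum(np.diff(y, 2) ** 2)
roughness_after = np.sum(np.diff(z, 2) ** 2)
```

The data-constrained regions are left nearly untouched, while the gap is filled by a smooth extension — the same qualitative behavior the adaptive B-spline regularization achieves at the level of control-point coefficients.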
Multivariate Functional Approximation of Scientific Data
Mathematics and visualization, 2022

Sizing Design Sensitivity Analysis and Optimization of a Hemispherical Shell With a Nonradial Penetrated Nozzle
Journal of Pressure Vessel Technology-transactions of The Asme, Aug 1, 1998
This paper presents a sizing optimization of a hemispherical shell with a nonradial penetrated nozzle under internal pressure loading. The objective of the optimization is to decrease the stress concentration in the intersection area by varying the thicknesses of the shells, subject to the volume attaining a minimum value. This is accomplished by performing a stress analysis and developing a sizing optimization. The variational equations of elasticity and an adjoint variable method are employed to calculate the sizing design sensitivities of the stresses. This continuum approach, combined with a gradient method, can be applied to broad classes of elastic structural sizing optimization problems.
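The adjoint variable method used above can be demonstrated on a small discrete analogue. The following sketch is not the paper's continuum formulation: it computes the sensitivity of a response J = qᵀu of a hypothetical linear system K(p)u = f with respect to a sizing parameter p, using one adjoint solve Kᵀλ = q so that dJ/dp = -λᵀ(dK/dp)u, and checks the result against a finite difference.

```python
import numpy as np

def stiffness(p):
    """A hypothetical 3x3 'stiffness' matrix depending on a sizing parameter p."""
    K0 = np.array([[4.0, -1.0, 0.0],
                   [-1.0, 4.0, -1.0],
                   [0.0, -1.0, 4.0]])
    K1 = np.array([[1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 1.0]])
    return K0 + p * K1          # so dK/dp == K1

f = np.array([1.0, 0.0, 2.0])   # load vector
q = np.array([0.0, 1.0, 0.0])   # response picks out the middle displacement
p = 0.5

K = stiffness(p)
u = np.linalg.solve(K, f)            # primal solve: K u = f
lam = np.linalg.solve(K.T, q)        # adjoint solve: K^T lam = q
dK_dp = stiffness(1.0) - stiffness(0.0)
dJ_dp_adjoint = -lam @ dK_dp @ u     # one adjoint solve serves every parameter

# finite-difference check of the adjoint sensitivity
eps = 1e-6
J = lambda p_: q @ np.linalg.solve(stiffness(p_), f)
dJ_dp_fd = (J(p + eps) - J(p - eps)) / (2 * eps)
```

The appeal in the sizing context is that the cost of the gradient is one extra solve, independent of how many thickness parameters are varied.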
Customizable adaptive regularization techniques for B-spline modeling
Journal of Computational Science
Advanced Partitioning Strategies for Scalable Remapping in Climate Models
Proposed for presentation at the SIAM Conference on Computational Science and Engineering held March 1-5, 2021 in Fort Worth, TX, US., 2021


Lessons learned from integrating scientific libraries within a plugin architecture
Designing interoperable software is critical to boosting scientific productivity in many research applications, especially when capabilities are exposed through flexible interfaces such as a plugin or component-based framework. Such scalable applications with complex dependency chains often require rigorous source configuration, continuous testing, and flexible deployment processes that cover a wide range of platforms (Linux/OSX/Windows) and environment variations. We discuss hurdles in achieving interoperability using lessons learned from developing a serial/parallel MOAB database plugin for VisIt. The development of this plugin has led us to question some library design choices and has emphasized the need for better processes (versioning and build configuration) that are resilient to software interface and lifecycle changes. Taking the lessons learned from this new VisIt plugin, we present best practices and guidelines that are more broadly applicable to scientific software development in CSE applications.

Poster presented at SIAM CSE17 PP108 Minisymposterium: Software Productivity and Sustainability for CSE and Data Science.


IEEE Transactions on Visualization and Computer Graphics, 2021
We present the Feature Tracking Kit (FTK), a framework that simplifies, scales, and delivers various feature-tracking algorithms for scientific data. The key of FTK is our simplicial spacetime meshing scheme, which generalizes both regular and unstructured spatial meshes to spacetime while tessellating spacetime mesh elements into simplices. The benefits of using simplicial spacetime meshes include (1) reducing ambiguity cases for feature extraction and tracking, (2) simplifying the handling of degeneracies using symbolic perturbations, and (3) enabling scalable and parallel processing. The use of simplicial spacetime meshing simplifies and improves the implementation of several feature-tracking algorithms for critical points, quantum vortices, and isosurfaces. As a software framework, FTK provides end users with VTK/ParaView filters, Python bindings, a command-line interface, and programming interfaces for feature-tracking applications. We demonstrate use cases as well as scalability studies through both synthetic data and scientific applications, including tokamak, fluid dynamics, and superconductivity simulations. We also conduct end-to-end performance studies on the Summit supercomputer. FTK is open sourced under the MIT license: https://github.com/hguo/ftk. Index Terms: feature tracking, spacetime meshing, distributed and parallel processing, critical points, isosurfaces, vortices.
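The core idea — tessellating spacetime into simplices so that feature extraction reduces to sign tests on simplex edges — can be sketched in the lowest-dimensional case: tracking the moving zero crossing of a 1-D scalar field on an (x, t) spacetime grid split into triangles. This is a toy illustration in the spirit of FTK, not FTK's implementation; the field `f` and the grid sizes are assumptions.

```python
import numpy as np

def f(x, t):
    """A hypothetical scalar field whose zero crossing drifts over time."""
    return x - 0.5 - 0.2 * np.sin(2 * np.pi * t)

nx, nt = 64, 64
xs = np.linspace(0.0, 1.0, nx)
ts = np.linspace(0.0, 1.0, nt)

def zero_on_edge(a, b):
    """Linearly interpolate a sign change along one simplex edge; a, b are (x, t)."""
    fa, fb = f(*a), f(*b)
    if fa * fb >= 0.0:
        return None
    s = fa / (fa - fb)
    return (a[0] + s * (b[0] - a[0]), a[1] + s * (b[1] - a[1]))

# tessellate each spacetime quad into two triangles and test their edges
trajectory = []
for j in range(nt - 1):
    for i in range(nx - 1):
        v00 = (xs[i], ts[j]); v10 = (xs[i + 1], ts[j])
        v01 = (xs[i], ts[j + 1]); v11 = (xs[i + 1], ts[j + 1])
        for tri in ((v00, v10, v11), (v00, v11, v01)):
            for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2])):
                p = zero_on_edge(*e)
                if p is not None:
                    trajectory.append(p)

# every recovered point should sit near the true moving zero crossing
errs = [abs(x - (0.5 + 0.2 * np.sin(2 * np.pi * t))) for x, t in trajectory]
max_err = max(errs)
```

Because every intersection test happens on a simplex edge, the ambiguous saddle configurations of quad cells never arise — the property the paper exploits in higher dimensions.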

Computational physics solvers take a continuous domain of interest, the problem geometry, and apply specialized discretizations based on the partial differential equations (PDEs) describing the relevant physical model. The discretization involves two components: computing a discrete mesh of the domain that accurately models the continuous geometry, and defining the solution fields as degrees of freedom on the mesh, which are computed by appropriately solving the discretized operators. The resulting single- or multi-component physics solutions then need to be serialized back to a format amenable to visualization. The Scalable Interfaces for Geometry and Mesh based Applications (SIGMA) toolkit provides interfaces and tools to access geometry data and create high-quality unstructured meshes, along with unified data structures to load and manipulate parallel computational meshes for various applications, enabling efficient physics solver implementations. Mesh generation is a complex problem, since most problem geometries involve complicated curved surfaces that require physics-imposed spatial resolution and optimized elements for good quality. These tools simplify the generation and handling of discrete meshes, with scalable algorithms that support efficient use from desktop to petascale architectures. SIGMA contains several components, including CGM for geometry handling, MOAB for the unstructured mesh data structure, MeshKit with several mesh generation algorithms, and CouPE for driving coupled multiphysics solvers. These components are used tightly within the SHARP toolkit, which is part of the Integrated Product Line for fast reactor simulations.
The goal of the SHARP framework is to perform fully resolved coupled-physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. In this report, we present details on the SIGMA toolkit, including its component structure, capabilities, feature additions in FY15, release cycles, and continuous integration process. These software processes, along with updated documentation, are imperative for successful integration and use in several applications, including the SHARP coupled analysis toolkit for reactor core systems funded under the NEAMS DOE-NE program.
ACES4BGC Project Annual Report
ACES4BGC Project Annual Report 2014

Procedia Computer Science, 2013
Earth science high-performance applications often require extensive analysis of their output in order to complete the scientific goals or produce a visual image or animation. Often this analysis cannot be done in situ, because it requires calculating time-series statistics from state sampled over the entire length of the run, or analyzing the relationship between similar time series from previous simulations or observations. Many of the tools used for this postprocessing are not themselves high-performance applications, but the new Parallel Gridded Analysis Library (ParGAL) provides high-performance, data-parallel versions of several common analysis algorithms for data from structured or unstructured grid simulations. The library builds on several scalable systems, including the Mesh Oriented DataBase (MOAB), a library for representing mesh data that supports structured, unstructured finite element, and polyhedral grids; the Parallel-NetCDF (PNetCDF) library; and Intrepid, an extensible library for computing operators (such as gradient, curl, and divergence) acting on discretized fields. We have used ParGAL to implement a parallel version of the NCAR Command Language (NCL), a scripting language widely used in the climate community for analysis and visualization. The data-parallel algorithms in ParGAL/ParNCL are both higher performing and more flexible than their serial counterparts.
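A data-parallel time-series statistic of the kind ParGAL targets can be sketched without MPI: each "task" computes partial moments of its chunk of the time series, and the partials are merged with a numerically stable pairwise-combination rule. This is a generic illustration, not ParGAL code; the chunking and the `combine` helper are assumptions.

```python
import numpy as np

def partial_moments(chunk):
    """Per-task summary: sample count, mean, and sum of squared deviations (M2)."""
    n = chunk.size
    mean = chunk.mean()
    m2 = np.sum((chunk - mean) ** 2)
    return n, mean, m2

def combine(a, b):
    """Chan et al. pairwise merge of two (n, mean, M2) partials."""
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    delta = mb - ma
    mean = ma + delta * nb / n
    m2 = m2a + m2b + delta ** 2 * na * nb / n
    return n, mean, m2

rng = np.random.default_rng(1)
series = rng.standard_normal(10_000)   # a stand-in for per-timestep samples

# simulate 8 parallel tasks, each owning a contiguous chunk of the run
partials = [partial_moments(c) for c in np.array_split(series, 8)]
total = partials[0]
for p in partials[1:]:
    total = combine(total, p)

n, mean, m2 = total
variance = m2 / n
```

In an MPI setting the same merge would run as a reduction across ranks, so no rank ever needs the full time series in memory.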
Improving climate model coupling through complete mesh representation