Asynchronous Methods in Scientific and Mathematical Computing

Kalyan S. Perumalla

The following is a small selection of publications that illustrate the wide range of areas in which asynchronous methods are being applied:

[1] H. Karimabadi, J. Driscoll, Y. Omelchenko, and N. Omidi, "A New Asynchronous Methodology for Modeling of Physical Systems: Breaking the Curse of Courant Condition," *Journal of Computational Physics*, vol. 205, 2005.

[2] A. Lew, J. E. Marsden, M. Ortiz, and M. West, "Asynchronous Variational Integrators," *Archive for Rational Mechanics and Analysis*, vol. 167, pp. 85-146, 2002.

[3] Y. Tang, K. S. Perumalla, R. M. Fujimoto, H. Karimabadi, J. Driscoll, and Y. Omelchenko, "Optimistic Simulations of Physical Systems using Reverse Computation," *SIMULATION: Transactions of The Society for Modeling and Simulation International*, vol. 82, pp. 61-73, 2006.

[4] A. H. Buss and P. J. Sanchez, "Simple Movement and Detection in Discrete Event Simulation," presented at Winter Simulation Conference, Orlando, FL, 2005.

[5] J. Nutaro, "Parallel Discrete Event Simulation with Application to Continuous Systems," Ph.D. dissertation, Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, 2003, 182 pp.

[6] B. Lubachevsky, "Several Unsolved Problems in Large-Scale Discrete Event Simulations," presented at Workshop on Parallel and Distributed Simulation, 1993.

[7] S. Miller and S. Luding, "Event-driven Molecular Dynamics in Parallel," *Journal of Computational Physics*, vol. 193, pp. 306-316, 2004.

[8] J. M. Esposito and V. Kumar, "An Asynchronous Integration and Event Detection Algorithm for Simulating Multi-Agent Hybrid Systems," *ACM Transactions on Modeling and Computer Simulation*, vol. 14, pp. 363-388, 2004.

The abstracts of the preceding references are reproduced here (from open online sources) as an at-a-glance snapshot for reader convenience.

· Sequential discrete event simulation approaches to electrostatic codes [1]

**Abstract** Computer
simulation of many important complex physical systems has reached a plateau
because most conventional techniques are ill equipped to deal with the
multi-scale nature of such systems. The traditional technique to simulate
physical systems modeled by partial differential equations consists of breaking
the simulation domain into a spatial grid and then advancing the state of the
system synchronously at regular discrete time intervals. This so-called
time-driven (or time-stepped) simulation (TDS) has inherent inefficiencies such
as the time step restriction imposed by a global CFL (Courant–Friedrichs–Lewy)
condition. There is ongoing research on introducing local time refinement
(local versus global CFL) within the time-stepped methodology. Here, we propose
an entirely different (asynchronous) simulation methodology which uses the
spatial grid but the time advance is based on a discrete event-driven (as
opposed to time-driven) approach. This new technique immediately offers several
major advantages over TDS. First, it allows each part of the simulation, that
is the individual cells in case of fluid simulations and individual particles
within a cell in case of particle simulations, to evolve based on their own
physically determined time scales. Second, unlike in the TDS where the system
is updated faithfully in time based on a pre-selected user specified time step,
here the role of time step is replaced by a threshold for local state change. In
our technique, individual parts of the global simulation state are updated on a
“need-to-be-done-only” basis. Individual parts of the simulation domain set
their own time scales for change, which may vary in space as well as during the
run. In a particle-in-cell (PIC) simulation, DES enables a self-adjusting
temporal mesh for each simulation entity down to assigning an individual time
step to each particle. In this paper, we illustrate this new technique via the
example of a spacecraft charging in a neutral plasma due to injection of a
charged beam particle from its surface and compare its performance with the
traditional techniques. We find that even in one-dimension, the new DES
technology can be more than 300 times faster than the traditional TDS. Aside from
sheer performance advantages, the real power of this technique is in its
inherent ability to adapt to the spatial inhomogeneity of the problem. This
enables building intelligent algorithms where interaction of simulation
entities (e.g., cells, particles) follows elementary rules set by the underlying
physics laws. Finally, our extensions of this technique to other problems such
as the solution of diffusion equation and electromagnetic codes are briefly
discussed.
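The contrast the abstract draws between time-stepped and event-driven advancement can be sketched in a few lines (a minimal illustration, not the authors' PIC code; the constant per-cell rates and the function name `simulate_des` are assumptions made for the sketch):

```python
import heapq

def simulate_des(rates, threshold=1.0, t_end=10.0):
    """Each cell i drifts at constant local rate rates[i]; a cell is updated
    only when its state is predicted to have changed by `threshold`, rather
    than every cell advancing by one global CFL-limited time step."""
    state = [0.0] * len(rates)
    # Next-event time per cell: when its local change reaches the threshold.
    pq = [(threshold / r, i) for i, r in enumerate(rates)]
    heapq.heapify(pq)
    n_events = 0
    while pq:
        t, i = heapq.heappop(pq)
        if t > t_end:
            break
        state[i] += threshold          # apply the accumulated local change
        n_events += 1
        heapq.heappush(pq, (t + threshold / rates[i], i))
    return state, n_events

# A fast cell (rate 10.0) fires often; a slow cell (rate 0.1) fires rarely.
# Under time stepping, the fastest cell would instead force the global step.
state, n = simulate_des([10.0, 1.0, 0.1])
```

The priority queue plays the role of the event list: each part of the domain sets its own timescale for change, exactly the "need-to-be-done-only" update policy the abstract describes.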

· Asynchronous update approach to numerical integration, applied to material science [2]

**Abstract** We
describe a new class of asynchronous variational integrators (AVI) for
nonlinear elastodynamics. The AVIs are distinguished by the following
attributes: (i) The algorithms permit the selection of independent time steps
in each element, and the local time steps need not bear an integral relation to
each other; (ii) the algorithms derive from a spacetime form of a discrete
version of [...]
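Attribute (i), independent elemental time steps with no integral relation, can be sketched with a priority queue over elements (a simplified 1D two-spring chain illustrating only the asynchronous drift/kick bookkeeping, not the full variational scheme of the paper; all names and parameters are hypothetical):

```python
import heapq

def avi_chain(x, v, m, k, h, t_end):
    """Nodes 0..len(x)-1; element e joins nodes e and e+1 with stiffness k[e]
    and its own time step h[e]. Elements fire in causal order; each firing
    first drifts its two nodes to the current time, then applies an impulse."""
    t_node = [0.0] * len(x)                    # last update time of each node
    pq = [(h[e], e) for e in range(len(k))]    # next firing time per element
    heapq.heapify(pq)
    while pq:
        t, e = heapq.heappop(pq)
        if t > t_end:
            break
        for n in (e, e + 1):                   # drift nodes up to time t
            x[n] += v[n] * (t - t_node[n])
            t_node[n] = t
        f = k[e] * (x[e + 1] - x[e])           # zero-rest-length spring force
        v[e] += h[e] * f / m[e]                # kick: elemental impulse
        v[e + 1] -= h[e] * f / m[e + 1]
        heapq.heappush(pq, (t + h[e], e))
    return x, v
```

Note that the two elemental steps (0.01 and 0.02, say) need not divide one another; the equal-and-opposite impulses keep total linear momentum conserved, hinting at the structure-preservation the variational derivation guarantees.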

· Parallel simulation of physical systems [3]

**Abstract** Efficient
computer simulation of complex physical phenomena has long been challenging due
to their multiphysics and multiscale nature. In contrast to traditional
time-stepped execution methods, the authors describe an approach using
optimistic parallel discrete event simulation (PDES) and reverse computation
techniques to execute plasma physics codes. They show that reverse
computation-based optimistic parallel execution can significantly reduce the
execution time of an example plasma simulation without requiring a significant
amount of additional memory compared to conservative execution techniques. The
authors describe an application-level reverse computation technique that is
efficient and suitable for complex scientific simulations.
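The core idea of rollback via reverse computation can be illustrated with a trivially reversible event (a hedged sketch, not the authors' plasma code; the event, its inverse, and all names are hypothetical):

```python
# Instead of checkpointing state before each optimistically executed event,
# each event supplies an inverse that undoes it exactly, so rollback after a
# causality violation needs no saved-state memory for reversible operations.

def forward(state, particle, dv):
    state[particle] += dv          # constructive operation: exactly invertible

def reverse(state, particle, dv):
    state[particle] -= dv          # inverse: restores the pre-event state

velocities = {"p1": 2.0, "p2": -1.0}
log = []                           # event parameters kept only until commit

# Optimistically execute two events.
for particle, dv in [("p1", 0.5), ("p2", 0.25)]:
    forward(velocities, particle, dv)
    log.append((particle, dv))

# A straggler event arrives: undo in reverse order by running the inverses.
while log:
    particle, dv = log.pop()
    reverse(velocities, particle, dv)

# velocities is now restored to its initial values.
```

Only the small per-event parameters are logged, which is the memory saving over conservative or state-saving execution that the abstract highlights.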

· Discrete event approach to coverage crossing detection [4]

**Abstract** Many
scenarios involving simulation require modeling movement and sensing.
Traditionally, this has been done in a time-stepped manner, often because of a
mistaken belief that using a pure discrete event approach is infeasible. This
paper discusses how simple motion (linear, uniform, two-dimensional) and simple
sensing can be modeled with a pure Discrete Event approach. We demonstrate that
this approach is not only feasible, it is often more desirable from several
standpoints.
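The key computation that makes pure discrete event movement-and-sensing feasible can be sketched as follows (an illustrative reconstruction, not the authors' code): for linear, uniform, two-dimensional motion, the instant a mover enters a sensor's circular detection range is a root of a quadratic, so detection can be scheduled as a single future event instead of being checked every time step.

```python
import math

def detection_time(p, v, center, R):
    """Earliest t >= 0 at which |p + v*t - center| = R, or None if never."""
    rx, ry = p[0] - center[0], p[1] - center[1]
    a = v[0] ** 2 + v[1] ** 2
    b = 2.0 * (rx * v[0] + ry * v[1])
    c = rx ** 2 + ry ** 2 - R ** 2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                      # stationary, or closest approach > R
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # first range crossing
    return t if t >= 0.0 else None

# Mover starts at (-10, 3) moving at unit speed along +x;
# sensor of radius 5 sits at the origin.
t = detection_time((-10.0, 3.0), (1.0, 0.0), (0.0, 0.0), 5.0)  # → 6.0
```

At t = 6 the mover is at (-4, 3), exactly distance 5 from the origin, so the event fires precisely at the range boundary with no time-step discretization error.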

· Discrete event system specification for asynchronous simulation of continuous and/or hybrid systems [5]

**Abstract** Recent
advances in discrete event modeling of continuous systems have emphasized the
need for high performance simulation engines. This need is particularly acute
when discrete event methods are applied to the numerical solution of partial
differential equations. Accurate approximations can require thousands, or even
millions, of cells. The corresponding requirements for memory and computing
power can readily exceed what is available on a single processor computer.
Discrete event simulations are characterized by asynchronous and irregular,
random, or data dependent behavior. This makes parallel algorithm design
particularly challenging. Known parallel discrete event simulation algorithms
have been developed in terms of event and process oriented world views. In
contrast to this, the Discrete Event System Specification (DEVS) forms the
foundation of research into discrete event approximations of continuous
systems. While event and process oriented models can be expressed in terms of
the DEVS modeling formalism, there are DEVS models that do not seem to have an
equivalent representation in the event or process oriented world views. This
leaves open the question of how existing parallel discrete event simulation
algorithms must be adapted in order to simulate DEVS models. In this
dissertation, discrete simulation algorithms are built up from the basic
definition of a discrete event system. The parallel algorithms that are
developed through this approach are shown to operate correctly. To conclude
this study, these algorithms are applied to producing numerical solutions of a
hyperbolic conservation law (Sod's shock tube problem) and the wave equation.
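The DEVS formalism the dissertation builds on can be illustrated with a minimal atomic model and its event-driven simulation loop (illustrative only; the method names `ta`, `output`, and `delta_int` follow standard DEVS conventions, and the `Counter` model is a hypothetical example, not drawn from the dissertation):

```python
class Counter:
    """Atomic model: emits its count, then waits `period` until the next event."""
    def __init__(self, period):
        self.period = period
        self.count = 0

    def ta(self):                 # time advance: delay until next internal event
        return self.period

    def output(self):             # output function, evaluated just before delta_int
        return self.count

    def delta_int(self):          # internal transition
        self.count += 1

def simulate(model, t_end):
    t, outputs = 0.0, []
    while t + model.ta() <= t_end:
        t += model.ta()           # jump directly to the next event time
        outputs.append((t, model.output()))
        model.delta_int()
    return outputs

trace = simulate(Counter(period=2.5), t_end=10.0)
# trace == [(2.5, 0), (5.0, 1), (7.5, 2), (10.0, 3)]
```

The simulator advances directly from event to event, which is the asynchronous, data-dependent behavior that makes parallelizing DEVS harder than parallelizing time-stepped codes.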

· Optimal packing problems with application to physical systems and communication networks [6]

**Abstract** Several
mathematical and algorithmic problems that have arisen in discrete event
simulations of large systems are described. The simulated systems belong to the
areas of computational physics, queueing networks, and econometric models.

· Event-driven molecular dynamics [7]

**Abstract** Although
event-driven algorithms have been shown to be far more efficient than
time-driven methods such as conventional molecular dynamics, they have not
become as popular. The main obstacle seems to be the difficulty of
parallelizing event-driven molecular dynamics. Several basic ideas have been
discussed in recent years, but to our knowledge no complete implementation has
been published yet. In this paper we present a parallel event-driven algorithm
including dynamic load-balancing, which can be easily implemented on any
computer architecture. To simplify matters our explanations refer to a basic
multi-particle system of hard spheres, but can be extended easily to a wide
variety of possible models.
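The core event computation in event-driven hard-sphere molecular dynamics can be sketched as follows (an illustrative reconstruction of the standard collision-prediction formula, not the authors' parallel implementation): two spheres of diameter sigma collide at the first root of |r + v·t| = sigma, so the simulation advances from one predicted collision directly to the next.

```python
import math

def collision_time(r, v, sigma):
    """r, v: relative position/velocity of sphere j w.r.t. sphere i.
    Returns the time to contact, or None if the spheres never touch."""
    b = sum(ri * vi for ri, vi in zip(r, v))    # r . v
    if b >= 0.0:
        return None                             # spheres are moving apart
    v2 = sum(vi * vi for vi in v)
    r2 = sum(ri * ri for ri in r)
    disc = b * b - v2 * (r2 - sigma ** 2)
    if disc < 0.0:
        return None                             # they miss each other
    return (-b - math.sqrt(disc)) / v2          # earliest contact time

# Head-on approach: separation 4 along x, closing speed 2, diameter 1.
t = collision_time((4.0, 0.0, 0.0), (-2.0, 0.0, 0.0), 1.0)  # → 1.5
```

At t = 1.5 the center separation is 4 - 2·1.5 = 1 = sigma, i.e., exact contact; in a full code these predicted times populate the event queue that the parallel algorithm, with its dynamic load balancing, must process.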

· Hybrid system simulation [8]

**Abstract** A
simulation algorithm is presented for multi-agent hybrid systems---systems
consisting of many sets of nonsmooth differential equations---such as systems
involving multiple rigid bodies, vehicles, or airplanes. The differential
equations are partitioned into coupled subsystems, called "agents";
and the conditions which trigger the discontinuities in the derivatives, called
"events", may depend on the global state vector. Such systems
normally require significant computational resources to simulate because a
global time step is used to ensure the discontinuity is properly handled. When
the number of systems is large, forcing all systems to be simulated at the same
rate creates a computational bottleneck, dramatically decreasing efficiency. By
using a control systems approach for selecting integration step sizes, we avoid
using a global time step. Each subsystem can be simulated asynchronously when
the state is away from the event. As the state approaches the event, the
simulation is able to synchronize each of the local time clocks in such a way
that the discontinuities are properly handled without the need for "roll
back". The algorithm's operation and utility are demonstrated on an example
problem inspired by autonomous highway vehicles. Using a combination of
stochastic modelling and numerical experiments we show that the algorithm
requires significantly less computation time when compared with traditional
simulation techniques for such problems, and scales more favorably with problem
size.
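The step-size-control idea can be sketched as follows (a hedged simplification, not the paper's algorithm: a single agent, forward Euler, and a step chosen proportional to the distance-to-event over the current speed, so that steps shrink smoothly as the event guard g(x) approaches zero and the event is located without rollback; all names are hypothetical):

```python
def integrate_to_event(x0, f, g, h_max=1.0, tol=1e-6):
    """Integrate dx/dt = f(x) until g(x) <= tol, shrinking the step near g = 0."""
    t, x = 0.0, x0
    while g(x) > tol:
        # Step ~ (distance to event) / (current speed), capped at h_max.
        h = min(h_max, 0.5 * g(x) / max(abs(f(x)), 1e-12))
        x += h * f(x)                    # forward Euler step
        t += h
    return t, x

# Agent moving at constant speed 2 toward a boundary at x = 10;
# the event guard is the remaining distance g(x) = 10 - x.
t_event, x_event = integrate_to_event(0.0, f=lambda x: 2.0,
                                      g=lambda x: 10.0 - x)
```

Far from the boundary the agent takes full steps of `h_max`; near it, the step halves the remaining distance each iteration, so the trajectory lands on the discontinuity (here t ≈ 5, x ≈ 10) to within the tolerance rather than overshooting, which is what removes the need for a global time step across agents.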