Contact:

Kate Nelson

CCE Headquarters

77 Massachusetts Avenue

Room 35-434B

Cambridge, MA 02139

617.253.3725 | Email

Other seminar series of interest

Computational Research in Boston and Beyond (CRIBB)

Numerical Methods for Partial Differential Equations

*Thursday, March 23rd | 12:00 PM | 37-212*

**Optimal interpolatory model reduction: Moving from linear to nonlinear dynamics**

*Serkan Gugercin*

Department of Mathematics

Virginia Tech, Blacksburg

ABSTRACT:

Numerical simulation of large-scale dynamical systems plays a crucial role, and may be the only option, in studying a great variety of complex physical phenomena, with applications ranging from heat transfer to fluid dynamics, to signal propagation and interference in electronic circuits, and many more. However, these large-scale dynamical systems present significant computational difficulties when used in numerical simulation. Model reduction aims to reduce this computational burden by constructing simpler (reduced-order) models, which are much easier and faster to simulate yet accurately represent the original system. These reduced-order models can then serve as efficient surrogates for the original, replacing it as a component in larger systems, facilitating rapid development of controllers for real-time applications, and enabling optimal system design and uncertainty analysis.

For linear dynamical systems, model reduction has achieved great success. In the case of linear dynamics, we know how to construct, at a modest cost, (locally) optimal, input-independent reduced models; that is, reduced models that are uniformly good over all inputs having bounded energy. In addition, in some cases we can achieve this goal using only input/output data, without a priori knowledge of the internal dynamics. Even though model reduction has also been successfully and effectively applied to nonlinear dynamical systems, both the reduction process and the reduced models are usually input-dependent, and the high fidelity of the resulting approximation is generically restricted to the training inputs/data. In this talk, we will offer remedies to this situation. First, we will review model reduction for linear systems using rational interpolation as the underlying framework. The concept of the transfer function will prove fundamental in this setting. Then, we will show how rational interpolation and transfer function concepts can be extended to nonlinear dynamics, specifically to bilinear systems and quadratic-in-state systems, allowing us to construct input-independent reduced models in this setting as well. Several numerical examples will be presented to support the discussion.
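As a toy illustration of the rational-interpolation framework described in this abstract, the following NumPy sketch builds a Galerkin projection basis from shifted linear solves and checks that the reduced transfer function matches the full one at the chosen points. The system matrices and the interpolation points are ad-hoc choices for illustration, not the (locally) optimal points of the talk, which would be computed iteratively (e.g., by IRKA).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # full-order dimension (illustrative)

# Stable full-order SISO system: x' = A x + B u,  y = C x
A = -np.diag(rng.uniform(1.0, 10.0, n)) + 0.02 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Ad-hoc interpolation points on the imaginary axis
sigmas = 1j * np.array([0.5, 1.0, 2.0, 4.0])

# Projection basis spanning (sigma*I - A)^{-1} B; stacking real and
# imaginary parts keeps the reduced model real
cols = [np.linalg.solve(s * np.eye(n) - A, B) for s in sigmas]
V, _ = np.linalg.qr(np.hstack([np.hstack([c.real, c.imag]) for c in cols]))

# Galerkin projection gives the reduced-order model (dimension 8 here)
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V

def H(s, a, b, c):
    # transfer function H(s) = C (sI - A)^{-1} B
    return (c @ np.linalg.solve(s * np.eye(a.shape[0]) - a, b))[0, 0]

for s in sigmas:
    # the reduced transfer function interpolates the full one at each sigma
    assert abs(H(s, A, B, C) - H(s, Ar, Br, Cr)) < 1e-8
```

The interpolation property holds because each vector (sigma*I - A)^{-1} B lies in the range of the projection basis; no knowledge of the inputs was used, which is the input-independence the abstract emphasizes.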

*Thursday, December 8th | 12:00 PM | 37-212*

**Explicit parametric solutions: the “extra mile” for Model Order Reduction**

*Antonio Huerta*

Laboratori de Càlcul Numèric (LaCàN)

Universitat Politècnica de Catalunya·BarcelonaTech, Spain

ABSTRACT:

Computational Mechanics techniques are well integrated into today’s applied sciences and engineering practice. However, their cost (man-hours for pre- and post-processing, as well as CPU time) is unaffordable for a large number of applications in a daily industrial/engineering production environment, as well as for many scientific simulations. This is especially true when the simulation requires a large number of possible scenarios, corresponding to different instances of a parametric family of problems (for instance, in the context of support for decision-making, optimization, and uncertainty quantification).

A standard strategy to reduce costs and increase the practical interest of these computational techniques is using pre-computed solutions or the more formal framework of reduced order models. This requires pre-computing offline some representative samples of the parametric family of solutions (viz. snapshots for Reduced Basis methods, principal components for POD, …). Then, any other instance is computed online with a small computational overhead. In the case of the Proper Generalized Decomposition, the offline phase provides an explicit description of the parametric solution, i.e. an *explicit parametric solution*. Thus, the offline phase is more involved but the online phase is a simple functional evaluation with a negligible computational overhead.

This approach makes it easy to embed parameterized solutions in Graphical User Interfaces (GUIs), so that solutions can be readily visualized in fast decision-making processes (supported by human-computer interaction).

More importantly, these explicit parametric solutions can be further employed as ingredients in more sophisticated computational strategies.

*Thursday, November 3rd, 2016 | 12:00 PM | 37-212*

**Bayesian Inference of High-Dimensional Dynamical Models: The Survival of the Fittest**

*Pierre Lermusiaux*

Mechanical Engineering

Massachusetts Institute of Technology

ABSTRACT:

We develop and illustrate a dynamics-based Bayesian inference methodology that assimilates sparse observations for the joint inference not only of the state, parameters, boundary conditions, and initial conditions of dynamical models, but also of the parameterizations, geometry, and model equations themselves. The joint Bayesian inference combines stochastic Dynamically Orthogonal (DO) partial differential equations for reduced-dimension probabilistic prediction with Gaussian Mixture Models for nonlinear filtering and smoothing. The Bayesian model inference is completed by parallelized and analytical computation of marginal likelihoods for multiple candidate models. The classes of models considered correspond to competing scientific hypotheses and differ in complexity and in the representation of specific processes. Within each model class, model equations have unknown parameters and uncertain parameterizations, all of which are estimated. For each model class, the result is a Bayesian update of the joint distribution of the state, parameters, and parameterizations. The combined scientific result is a rigorous Bayesian inference of the marginal distribution of dynamical model equations. Examples are provided for time-dependent fluid and ocean flows. They include the inference of multiscale bottom gravity current dynamics, motivated in part by classic dense water overflows and their relevance to climate monitoring. The Bayesian inference of biogeochemical reaction equations is then presented, illustrating how PDE-based machine learning could rigorously guide the selection and discovery of complex ecosystem or other reaction models. This is joint work with our MSEAS group at MIT.

*Thursday, May 12th | 12:00 PM | 37-212*

**Kernel Approximations for Surrogate Modelling in Simulation Science**

*Bernard Haasdonk*

Institute of Applied Analysis and Numerical Simulation

Professor for Numerical Mathematics

University of Stuttgart

ABSTRACT:

Data-based approaches are gaining increasing attention for generating or improving simulation models in CSE. Application settings include modelling from data, i.e. measurements are given and we aim to find a model that can be used for simulation, and approximate surrogate modelling, where a model is given and a cheap surrogate model is constructed from simulation data of the former.

In this presentation I focus on kernel methods for generating such models. These powerful techniques have proven successful in various applications in data science, such as pattern recognition, machine learning, bioinformatics, etc. In addition to their broad applicability, they also enable elegant mathematical analysis in so-called reproducing kernel Hilbert spaces (RKHS).

In the context of simulation models, kernel methods can be used for sparse vectorial function approximation, for example by vectorial support vector regression or the vectorial kernel orthogonal greedy algorithm (VKOGA). For the VKOGA, theoretical analysis is available in terms of local optimality and convergence rates [2]. The resulting approximants allow efficient complexity reduction in projection-based model order reduction [1] or in multiscale problems, as demonstrated on applications from biomechanics and porous media flow [3].
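To make the greedy kernel idea concrete, here is a minimal scalar sketch (not the vectorial VKOGA of [2]; a P-greedy variant with a Gaussian kernel, and with the target function and all parameters being illustrative assumptions). It repeatedly selects the candidate point where the power function is largest, then interpolates on the selected centers.

```python
import numpy as np

def kmat(X, Y, eps=4.0):
    # Gaussian RBF kernel matrix: k(x, y) = exp(-eps^2 (x - y)^2)
    return np.exp(-eps**2 * (X[:, None] - Y[None, :]) ** 2)

f = lambda x: np.sin(3 * x) + 0.5 * x   # toy target function (an assumption)
X = np.linspace(0.0, 1.0, 200)          # candidate/training points
y = f(X)

centers = [0]                            # start from an arbitrary point
for _ in range(9):
    Kxc = kmat(X, X[centers])
    Kcc = kmat(X[centers], X[centers])
    # squared power function: k(x, x) - k(x, C) Kcc^{-1} k(C, x)
    p2 = 1.0 - np.einsum('ij,ij->i', Kxc @ np.linalg.inv(Kcc), Kxc)
    centers.append(int(np.argmax(p2)))   # greedy: largest power function

# kernel interpolant on the 10 selected centers
alpha = np.linalg.solve(kmat(X[centers], X[centers]), y[centers])
s_approx = kmat(X, X[centers]) @ alpha
err = np.max(np.abs(y - s_approx))
assert err < 1e-2  # a handful of greedily chosen centers already suffice
```

The resulting approximant is sparse in the sense of the abstract: it uses only the few selected centers rather than all candidate points, which is what enables cheap surrogate evaluation.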

References:

[1] Wirtz, D. & Haasdonk, B.: Efficient a-posteriori error estimation for nonlinear kernel-based reduced systems. Systems and Control Letters, 2012, 61, 203–211.

[2] Wirtz, D. & Haasdonk, B.: An Improved Vectorial Kernel Orthogonal Greedy Algorithm. Dolomites Research Notes on Approximation, 2013, 6, 83–100.

[3] Wirtz, D.; Karajan, N. & Haasdonk, B.: Surrogate Modelling of multiscale models using kernel methods. International Journal for Numerical Methods in Engineering, 2015, 101, 1–28.

*Thursday, April 28th | 12:00 PM | 37-212*

**Impact of horizontal resolution (1/12 to 1/50 degree) on Gulf Stream separation and penetration in a series of North Atlantic HYCOM numerical simulations**

*Eric Chassignet*

Director, Center for Ocean-Atmospheric Prediction Studies

Professor of Oceanography, Department of Earth, Ocean and Atmospheric Science

Florida State University

ABSTRACT:

The impact of horizontal resolution (1/12 to 1/50 degree) on Gulf Stream separation and penetration is analyzed in a series of identical North Atlantic HYCOM configurations. The specific questions that will be addressed are as follows: When does a solution converge, or when is it "good enough"? Is the mesoscale and sub-mesoscale eddy activity representative of interior quasigeostrophic (QG) or surface quasigeostrophic (SQG) turbulence? How well do the simulations compare to observations? We will show that the increase in resolution (1/50 degree) does lead to a substantial improvement in the Gulf Stream representation (surface and interior) when compared to observations, and the results will be discussed in terms of ageostrophic contributions and power spectra.

*Thursday, March 31st | 12:00 PM | 37-212*

**Julia: Fast, Flexible and Fun**

*Alan Edelman*

Professor of Applied Mathematics

MIT

ABSTRACT:

With perhaps 150,000 users, Julia’s popularity is growing very fast. One newspaper even listed Julia as one of the skills that can lead to a higher salary.

In this talk, I will explain why Julia is fast, flexible, and fun. Simply put, the older computer languages such as Fortran and C were built for the computer. Modern technical languages have been built for humans. Julia strikes the proper balance for 2016 and the future, enabling humans and computers to cooperate.

With Julia, users get past the notion that a computer language is just a means to an end. Julia’s flexibility gives users power that they did not even know they wanted, but once learned there is no turning back.

*Thursday, February 25th | 12:00 PM | 37-212*

**Analytical Research at Bose Corporation: Enabling Innovation**

*John Wendell*

Analytical Research Group

Bose Corporation

ABSTRACT:

Twenty years ago, numerical tools like finite element modelling had little impact on Bose products. A few problems had begun to crop up which were intractable for traditional modelling approaches used in the audio industry, and we started an advanced analysis project to bring in numerical tools for magnetics, non-linear structures, and acoustics. Today it would be hard to name an important Bose product with no impact from numerical modelling, and we’ve used these tools not just for design assurance but even more often to gain insight for invention and innovation. This talk will cover the current range of activity, from acoustics to non-linear dynamics, and show several specific examples where numerical modelling has contributed to new products and innovative designs.

John Wendell leads the Analytical Research Group at Bose Corporation. At M.I.T. he earned Ph.D. and S.B. degrees in Aeronautics and Astronautics, with a focus on structural dynamics and aeroelasticity.

*Thursday, December 3rd | 12:00 PM | 37-212*

**Multiscale reservoir simulation: from Poisson's equation to real-field models**

*Knut-Andreas Lie*

Professor; Chief Scientist, Department of Applied Mathematics

SINTEF, Oslo, Norway

ABSTRACT:

A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. For the past ten years or so, my research group has worked on adapting and extending these methods so that they can be applied to simulate real petroleum reservoirs. In the first part of the talk, I will explain why this is a difficult undertaking, review key ideas that have been researched, and point out failures and successes.

In the second part of the talk, I present our most recent method, the multiscale restriction-smoothed basis (MsRSB) method, which has been implemented in the INTERSECT R&P simulator. The MsRSB method has three main advantages: First, the input grid and its coarse partition can have general polyhedral geometry and unstructured topology. Second, the method is accurate and robust compared to existing multiscale methods. Finally, the method is formulated on top of a cell-centered, conservative, finite-volume method and is applicable to any flow model in which one can isolate a pressure equation.

*Thursday, November 5th | 12:00 PM | 37-212*

**Spectral Analysis of High Reynolds Number Turbulent Channel Flow**

*Robert D. Moser*

Professor, Mechanical Engineering

W. A. "Tex" Moncrief Jr. Chair in Computational Engineering & Sciences

Deputy Director, Institute for Computational Engineering and Sciences

The University of Texas at Austin

ABSTRACT:

Direct numerical simulations (DNS) of turbulent channel flow at friction Reynolds numbers up to 5200 have recently been performed to study high Reynolds number wall-bounded turbulence. The DNS results have shown that this Reynolds number is high enough to exhibit scale separation between the near-wall and outer regions, and other high-Reynolds-number features (Lee & Moser, J. Fluid Mech. vol. 774, 2015). A spectral analysis of the simulation results, particularly of the terms in the evolution equation for the two-point correlation, has been performed to study the interaction of the near-wall turbulence with that of the outer flow. In this analysis, the turbulent transport terms that arise from the non-linear terms in the Navier-Stokes equations can be decomposed into a part describing transfer between scales at constant distance from the wall and a part describing interactions across the wall-normal direction.

The results show that at Re=5200 there are two distinct peaks in the energy spectra as a function of wall distance, one near the wall at high wavenumber and one far from the wall at low wavenumber, as expected at high Reynolds numbers. Further, there is a distinct difference in the structure of the scale transfer and wall-normal (y) transport terms between the near-wall and outer layers. In the latter (above y+=200) there is Kolmogorov-style transfer of energy from large scales to dissipative small scales and a scale-similar, scale-local and y-local transport in y. This is as would be expected in the overlap (or log) region. In the near-wall region, the structure is more complex. However, it appears that the interaction between the near-wall and outer layers is relatively simple, with evidence of transport of energy from the outer region to the near wall primarily at large scales. These results are consistent with the ideas of autonomous near-wall dynamics as described by Jimenez & Pinelli (J. Fluid Mech. vol. 389, 1999) and of modulation of the near-wall layer by the outer layer as discussed, for example, by Marusic et al. (Science vol. 329, 2010).

This analysis also suggests a path to successful near-wall modeling for large eddy simulation (LES) of wall-bounded turbulent flows. In particular, by ensuring the resolved scales in the horizontal are sufficiently large in wall units and that the near-wall region is excluded from representation via LES, it may be possible to take advantage of the weak interaction discussed above in formulating a near-wall model.

In this talk, details of the spectral analysis and the DNS it is derived from will be discussed, as will the implications for LES wall modeling.

*Thursday, October 22nd | 12:00 PM | 37-212*

**An overview of the development of the HDG methods**

*Bernardo Cockburn*

Distinguished McKnight University Professor, School of Mathematics

University of Minnesota

ABSTRACT:

We provide an overview of the evolution of the so-called hybridizable discontinuous Galerkin (HDG) methods. We motivate the introduction of the methods and describe the main ideas of their development within the framework of steady-state diffusion. We then describe the status of their application to other problems of practical interest. A significant part of this material is joint work with N.C. Nguyen and J. Peraire, from MIT.

*Thursday September 24th | 12:00 PM | 37-212*

**A concurrent multi-scale model for the thermomechanical response of ceramics**

*Julian J. Rimoli*

Assistant Professor, School of Aerospace Engineering

Georgia Institute of Technology

ABSTRACT:

Predictive modeling of material behavior is a problem that necessarily spans several scales: for example, inter-atomic interactions dominate the elastic behavior of materials, point and line defects condition their inelastic response, and planar defects such as grain boundaries could introduce length-dependent macroscopic material properties. With novel manufacturing and material synthesis techniques allowing the manipulation of characteristic material length scales, e.g., grain size and grain size distribution, the possibility of designing microstructures for a desired macroscopic performance is becoming closer to reality. In this context, mesoscale models become key to link the fundamental processes obtained from the lower scales with the continuum models needed by designers and engineers. In this work, we formulate efficient models and numerical schemes for understanding the length-dependent thermomechanical response of ceramics with rich microstructures. Our approach consists of: (i) a sub-micron scale model for the thermal conductivity, (ii) a classic Fourier heat transport model at the mesoscale, and (iii) a continuum model of thermomechanical deformation that explicitly resolves the microscopic geometric features of the material. The capabilities of the model are demonstrated through a series of examples, which highlight the potential that our proposed framework has for designing materials and metamaterials with improved thermomechanical performance. For example, our simulations show that one could tailor the material’s microstructure, e.g. through grading of the grain size, and affect the resulting distribution of thermal stresses within the material.

*Thursday, April 30th | 12:00 PM | 37-212*

**Reachability and Learning for Hybrid Systems**

*Claire Tomlin*

Charles A. Desoer Chair in the College of Engineering

Electrical Engineering and Computer Science

University of California, Berkeley

ABSTRACT:

Hybrid systems are a modeling tool allowing for the composition of continuous and discrete state dynamics. They can be represented as continuous systems with modes of operation modeled by discrete dynamics, with the two kinds of dynamics influencing each other. Hybrid systems have been essential in modeling a variety of important problems, such as aircraft flight management, air and ground transportation systems, robotic vehicles and human-automation systems. These systems use discrete logic in control because discrete abstractions make it easier to manage complexity and discrete representations more naturally accommodate linguistic and qualitative information in controller design.

A great deal of research in recent years has focused on the synthesis of controllers for hybrid systems. For safety specifications on the hybrid system, namely to design a controller that steers the system away from unsafe states, we will present a synthesis and computational technique based on optimal control and game theory. In the first part of the talk, we will review these methods and their application to collision avoidance and avionics design in air traffic management systems, and networks of manned and unmanned aerial vehicles. It is frequently of interest to synthesize controllers with more detailed performance specifications on the closed loop trajectories. For such requirements, we will present a toolbox of methods combining reachability with data-driven techniques inspired by machine learning, to enable performance improvement while maintaining safety. We will illustrate these “safe learning” methods on a quadrotor UAV experimental platform which we have at Berkeley.

*Thursday February 12, 2015 | 12:00 PM | 37-212*

**Assembly of particles in microfluidic devices**

*Nadine Aubry*

Dean, College of Engineering

Distinguished Professor, Mechanical and Industrial Engineering Department

Northeastern University

ABSTRACT:

In this presentation, I will report on progress made regarding the manipulation and assembly of particles suspended in liquids or at fluid-liquid interfaces. Emphasis will be given to the dynamics of suspended neutral particles subjected to electric fields in microfluidic devices. In particular, it will be shown how the combination of fluid and electrostatic forces leads to the judicious assembly of particles at fluid-liquid interfaces into packed and non-packed organized patterns of long-range order.

*Thursday October 30, 2014 | 12:15 PM | 37-212*

**Optimization in Electrical Power Grid Monitoring and Analysis**

*Stephen Wright*

Computer Sciences Department

University of Wisconsin-Madison

ABSTRACT:

In quasi-steady state, the electrical power grid is well modeled by a system of algebraic equations that relate the supplies and demands at nodes of the grid to (complex) voltages at the nodes and currents on the lines. This "AC model" can be used to formulate nonlinear optimization problems to study various issues related to monitoring and security of the grid. We discuss three such issues here. The first is restoration of feasible operation of the grid following a disruption, with minimal shedding of demand loads. The second issue is vulnerability analysis, in which we seek the attack that causes maximum disruption to the grid, as measured by the amount of load that must be shed to return it to feasibility. The third issue is the use of streaming data from phasor measurement units (PMUs) to detect single-line outages rapidly, and optimal placement of these units to maximize reliability of detection. In each case we discuss the optimization models and algorithms that are used to formulate and solve these problems.

*Thursday October 16, 2014 | 4:00 PM | 56-114*

**Inverting high frequency wave equations**

*Lexing Ying*

Department of Mathematics and Institute for Computational and Mathematical Engineering

Stanford University

ABSTRACT:

Waves are ubiquitous; we see them everywhere around us. The numerical solution of high frequency wave propagation has been a longstanding challenge in computational science and engineering. This talk addresses the problem in the time-harmonic regime. We consider a sequence of examples with important applications, and for each we construct an efficient preconditioner (approximate inverse) that allows one to solve the system in a small number of iterations. From these examples emerges a new framework in which sparsity, the geometry of wave phenomena, and highly accurate discretizations are combined to address this challenging topic.

*Thursday September 25, 2014 | 4:00 PM | 56-114*

**Infinite-Dimensional Numerical Analysis**

*Christoph Schwab, PhD*

Seminar for Applied Mathematics

ETH Zurich

ABSTRACT:

Spurred by the emerging engineering discipline of Uncertainty Quantification and the 'big-data, sparse information' issue, engineering and the life sciences have seen an explosive development in the numerics of direct, inverse, and optimization problems for (deterministic or stochastic) differential equations on high- or even infinite-dimensional state and parameter spaces, and for statistical inference on these spaces, conditional on given (possibly large) data.

One objective of this talk is a (biased...) survey of several emerging computational methodologies that allow efficient treatment of high- or infinite-dimensional inputs to partial differential equations in engineering, and to illustrate their performance by computational examples.

We address in particular Multilevel Monte-Carlo (MLMC) and Multilevel Quasi-Monte-Carlo (MLQMC) Methods, adaptive Smolyak and generalized polynomial chaos (gpc), of Galerkin and collocation type, and tensor compression techniques.
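To illustrate the multilevel idea on a deliberately tiny problem (my own construction, not an example from the talk): estimate E[u(1)] for u' = -λu, u(0) = 1, with random λ ~ U(0,1), using explicit Euler with 2^(l+1) steps on level l. The standard telescoping sum spends many samples on the cheap coarse level and few on the expensive fine levels, while the same λ is reused on both levels of each correction so its variance is small.

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_u1(lam, n_steps):
    # Explicit Euler for u' = -lam*u, u(0) = 1, on [0, 1]:
    # u(1) is approximated by (1 - lam/n_steps)**n_steps
    return (1.0 - lam / n_steps) ** n_steps

L = 4                                       # finest level
N = [40000 // 4**l for l in range(L + 1)]   # fewer samples on finer levels

est = 0.0
for l in range(L + 1):
    lam = rng.uniform(0.0, 1.0, N[l])
    fine = euler_u1(lam, 2 ** (l + 1))
    if l == 0:
        est += fine.mean()                  # coarse-level estimator
    else:
        # same lam on both levels -> low-variance correction term
        coarse = euler_u1(lam, 2 ** l)
        est += (fine - coarse).mean()

truth = 1.0 - np.exp(-1.0)   # exact E[exp(-lam)] for lam ~ U(0,1)
assert abs(est - truth) < 0.02
```

The sample sizes here are an ad-hoc geometric schedule; in practice MLMC chooses them from estimated per-level variances and costs to balance statistical and discretization error.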

A second objective is to indicate elements of a mathematical basis for these methods that has emerged in recent years and has made it possible to prove dimension-independent rates of convergence. The rates are shown to be limited only by the order of the method and by certain sparsity measures of the uncertain inputs' KL, gpc, or ANOVA decompositions.

Examples include stochastic elliptic and parabolic PDE, their Bayesian inversion, control and optimization, reaction rate models in biological systems engineering, shape inversion in acoustic and electromagnetic scattering, and nonlinear hyperbolic conservation laws.

Despite favourable scaling, massively parallel computation is, as a rule, required for online simulations of realistic problems. Scalability and Fault Tolerance in an exascale compute environment become crucial issues in their practical deployment.

Acknowledgements:

Grant support by Swiss National Science Foundation (SNF), ETH High Performance Computing Grant, and the European Research Council (ERC).

*Thursday September 18, 2014 | 4:00 PM | 56-114*

**Stochastic optimization in electricity systems**

*Andy Philpott, PhD*

Electric Power Optimization Centre

University of Auckland

ABSTRACT:

Methods for optimization under uncertainty are becoming increasingly important in models of electricity systems. Recent interest has been driven by the growth in disruptive technologies (e.g. wind power, solar power and energy storage) which contribute to or ameliorate the volatility of demand that must be met by conventional electricity generation and transmission. In such systems, enough capacity and ramping plant must be on hand to deal with sudden increases in net demand.

On the other hand, in systems dominated by hydro power with uncertain inflows, stochastic optimization models have been studied for many years. Rather than planning capacity, these models construct generation policies to minimize some measure of total thermal fuel costs and risks of future energy shortages. Such models are indispensable in benchmarking the performance of hydro-dominated electricity markets.

We present two classes of model that show how uncertainty is incorporated in these two settings. In the first class, net demand is modeled as a time-inhomogeneous Markov chain, and we seek least-cost investments in thermal generation and transmission capacity that will cover random demand variations. We solve the investment problem using a combination of Benders and Dantzig-Wolfe decomposition. The investment solutions are contrasted with those from conventional screening-curve models.

The second class of model is a stochastic dynamic program that treats reservoir inflows as random. This is solved by DOASA, our implementation of the stochastic dual dynamic programming method of Pereira and Pinto (1991), which has been the subject of some recent interest in the stochastic programming community. We give some examples of the application of DOASA to the New Zealand electricity system by the electricity market regulator.

(Joint work with Golbon Zakeri, Geoff Pritchard, Athena Wu)

*Thursday May 1, 2014 | 4:00 PM | 4-237*

**Hurricane Storm Surge Models Using Integrated Ocean Basin to Shelf to Inland Floodplain Unstructured Grids**

*Joannes J. Westerink, Ph.D.*

Departments of Civil & Environmental Engineering and Earth Sciences

University of Notre Dame

ABSTRACT:

Hurricane wind wave, storm surge, and current environments in the coastal ocean and adjacent coastal floodplain are characterized by their high energy and by their spatial variability. These processes impact offshore energy assets, navigation, ports and harbors, deltas, wetlands, and coastal communities. The potential for an enormous catastrophic impact in terms of loss of life and economic losses is substantial.

Computational models for wind waves and storm driven currents and surge must provide a high level of grid resolution, fully couple the energetic processes, and perform quickly for risk assessment, flood mitigation system design, and forecasting purposes. In order to accomplish this, high performance scalable codes are essential. To this end, we have developed an MPI based domain decomposed unstructured grid framework that minimizes global communications, efficiently handles localized sub-domain to sub-domain communication, applies a local inter-model paradigm with all model to model communications being kept on identical cores for sub-domains, and carefully manages output by assigning specialized cores for this purpose. Continuous Galerkin (CG) and Discontinuous Galerkin (DG) implementations are examined. Performance of explicit and implicit implementations of the wave-current coupled system on up to 32,000 cores for various platforms is evaluated.

The system has been extensively validated with an ever increasing amount of wave, water level, and current data that has been collected for recent storms including Hurricanes Katrina (2005), Rita (2005), Gustav (2008), Ike (2008), and Sandy (2012). The modeling system helps to understand the physics of hurricane storm surges, including processes such as the geostrophically driven forerunner, shelf waves that propagate far away from the storm, wind wave – surge interaction, surge capture and propagation by protruding deltaic river systems, the influence of storm size and forward speed, and frictionally controlled inland penetration.

These models are being applied by the US Army Corps of Engineers (USACE) in the development of the recently completed hurricane risk reduction system in Southern Louisiana as well as for the development of FEMA Digital Flood Insurance Rate Maps (DFIRMS) for Texas, Louisiana, Mississippi, and other Gulf and Atlantic coast states. NOAA applies the models in extra-tropical and tropical storm surge forecasting.

Current algorithmic development is focused on DG solvers, ideally suited for the associated strongly advective flows. Due to the larger numbers of degrees of freedom for a specific grid, DG solutions have traditionally been more costly than CG solutions. It is demonstrated that high order implementations of DG lead to several orders of magnitude improvement in cost-per-accuracy performance as compared to lower order methods. In addition, loop level optimization further improves the efficiency of DG solutions by a factor of 4 to 5. It is noted that curved boundaries must be treated using super-parametric elements for p=1 and p=2 and iso-parametric elements for p=3 in order to achieve anticipated convergence rates.

*Thursday October 17, 2013 | 4:00 PM | 56-114*

**Software in Computational Science**

*Wolfgang Bangerth, Ph.D.*

Department of Mathematics

Texas A&M University

ABSTRACT:

Software is where the rubber hits the road in Computational Science and Engineering: this is where we ultimately have to implement the numerical methods we describe in our papers, and where we have to implement the concrete physical setting we wish to simulate.

Over the past decade, many high-quality libraries have been developed that make writing advanced computational software simpler. In this talk, I will briefly introduce the deal.II finite element library (http://www.dealii.org), whose development I lead, and show how it has enabled us to develop the ASPECT (http://aspect.dealii.org) code for simulations of convection in the Earth's mantle. I will also discuss some of the results obtained with this code.

*Thursday, September 26, 2013 | 4:00-5:00 PM | Room 56-114*

**Simulating Denmark Strait Overflow, the Greatest Waterfall You've Never Heard Of**

*Thomas W. N. Haine, Morton K. Blaustein Professor and Chair*

Department of Earth & Planetary Sciences

The Johns Hopkins University

ABSTRACT:

The North Atlantic is a special place where the ocean circulation has an important influence on Earth's climate. The deep flow in particular plays a critical role and the fluid dynamics of the surface-to-sea-floor circulation is subtle and sensitive to extrinsic forcing. We focus on the Denmark Strait where the flow is channeled by bathymetry and dense Arctic waters overflow a shallow gap into the deep North Atlantic. We simulate the fluid dynamics of this region using numerical circulation models. We also compute the trajectories of 10,000 Lagrangian particles to elucidate the fluid kinematics and diabatic mixing processes. This approach illuminates the nature of the overflow and suggests ways in which it can be efficiently monitored. The role of scientific computing is central to these efforts, and is emphasized throughout.
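Lagrangian particle trajectories of the kind described are obtained by integrating each particle through the model's velocity field. A minimal sketch, assuming a steady solid-body-rotation field as a stand-in for the circulation model's output:

```python
import math

def velocity(x, y):
    """Steady solid-body rotation: a toy stand-in for a model velocity field."""
    return -y, x

def rk4_step(x, y, dt):
    """One classical RK4 step advecting a particle through the velocity field."""
    k1x, k1y = velocity(x, y)
    k2x, k2y = velocity(x + 0.5*dt*k1x, y + 0.5*dt*k1y)
    k3x, k3y = velocity(x + 0.5*dt*k2x, y + 0.5*dt*k2y)
    k4x, k4y = velocity(x + dt*k3x, y + dt*k3y)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    y += dt * (k1y + 2*k2y + 2*k3y + k4y) / 6
    return x, y

# Advect one particle for a full rotation period (2*pi); it should return
# to its start with its radius conserved to high accuracy.
x, y, dt = 1.0, 0.0, 2*math.pi/1000
for _ in range(1000):
    x, y = rk4_step(x, y, dt)
print(round(math.hypot(x, y), 6))  # → 1.0
```

In practice the velocity field would be interpolated in space and time from model output, and tens of thousands of particles would be advected in parallel.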

*Thursday, May 9, 2013 | 4:00-5:00 PM | Room 4-237*

**The Potential Impacts of Climate Change on the Delaware Bay Oyster**

*Dale Haidvogel*

Institute of Marine & Coastal Sciences

Rutgers University

Abstract:

Oysters are a particularly good model organism for understanding marine diseases, the environmental drivers that influence them, and the likely impact of climate change on these diseases. Here, we consider the oyster populations in Delaware Bay, an urbanized estuary on the mid-Atlantic coast dominated by a single freshwater input and strong tidal forcing. Two diseases, MSX and dermo, both caused by introduced protozoan pathogens, have been the principal factors affecting oyster populations in Delaware Bay. These diseases have been monitored since 1953, and the oyster population responses have been documented.

A series of numerical simulations has been conducted to explore the relationships between observed/simulated environmental variability and the observed prevalence of the oyster diseases in Delaware Bay in the contemporary period 1953–2009. In addition, a sequence of climate sensitivity studies has been performed to anticipate the potential for future impacts on the oyster populations. We describe the conclusions drawn from these model studies, in particular the role of local water properties and circulation patterns in regulating disease prevalence in the estuary, and the potentially significant consequences of, for example, rising sea levels.

*Thursday, April 11, 2013 | 4:00-5:00 pm | Room 4-237*

**It’s all about energy! The impact of computational materials science**

*Giulia Galli*

Department of Chemistry

University of California at Davis

ABSTRACT:

We will introduce several problems involved in understanding and controlling energy conversion processes, including solar, photo-electrochemical, and thermoelectric energy conversion. Using examples from recent studies based on ab initio and atomistic simulations, we will discuss two intertwined issues: What is the impact of computational materials science on energy-related problems? How do we take up the challenge of building much-needed tighter connections between computational and laboratory experiments?

http://angstrom.ucdavis.edu/

*Thursday, March 14, 2013* | 4:00-5:00 pm | Room 4-237

**From CFD to computational finance and back again**

*Mike Giles*

Mathematical Institute

University of Oxford

ABSTRACT:

After 25 years working on CFD (including 11 years at MIT in Aero/Astro) the speaker switched research areas to Monte Carlo simulation in computational finance. This talk will describe how he brought and adapted two well-established CFD methodologies (adjoint design methods and multigrid) to this new area, and how he is now bringing multilevel Monte Carlo back to CFD for uncertainty quantification.
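Multilevel Monte Carlo combines many cheap coarse-level samples with a few expensive fine-level corrections through a telescoping sum, E[P_L] = E[P_0] + Σ E[P_l − P_{l−1}]. A minimal sketch (not the speaker's code), using a driftless geometric Brownian motion whose exact mean is 1; the volatility and per-level sample counts are arbitrary illustrative choices:

```python
import math
import random

def coupled_sample(level, rng):
    """One coupled (fine, coarse) pair of Euler-Maruyama payoffs for
    dS = sigma*S*dW, S(0) = 1, using shared Brownian increments."""
    sigma, T = 0.2, 1.0
    n_fine = 2 ** level
    dt = T / n_fine
    s_fine, s_coarse, dw_coarse = 1.0, 1.0, 0.0
    for i in range(n_fine):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s_fine += sigma * s_fine * dw
        dw_coarse += dw
        if level > 0 and i % 2 == 1:       # two fine steps = one coarse step
            s_coarse += sigma * s_coarse * dw_coarse
            dw_coarse = 0.0
    return s_fine, s_coarse

def mlmc(levels, samples_per_level, seed=0):
    """Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level, n in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n):
            fine, coarse = coupled_sample(level, rng)
            acc += fine - (coarse if level > 0 else 0.0)
        total += acc / n
    return total

est = mlmc(levels=[0, 1, 2, 3], samples_per_level=[4000, 1000, 500, 250])
# E[S(1)] = 1 exactly for this driftless model; est is close to 1
# (within Monte Carlo error).
```

The key point is the coupling: the fine and coarse paths share Brownian increments, so the level corrections have small variance and need few samples, which is where the overall cost saving comes from.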

http://people.maths.ox.ac.uk/gilesm/

*Monday, February 11, 2013* | 4:00-5:00 pm | Room 32-124

**Uncertainty quantification of ice sheet mass balance projections using ISSM**

*Eric Larour*

Thermal & Cryogenics Engineering

Jet Propulsion Lab, CA

ABSTRACT:

Understanding and modeling the evolution of continental ice sheets such as Antarctica and Greenland is difficult because many of the inputs used in transient ice flow models, whether inferred from satellite or in-situ observations, carry large measurement errors that propagate forward and affect projection assessments. Here, we aim to comprehensively quantify error margins on model diagnostics such as mass outflux at the grounding line, maximum surface velocity, and overall ice-sheet volume, applied to major outlet glaciers in Antarctica and Greenland.

Our analysis relies on uncertainty quantification methods implemented in the Ice Sheet System Model (ISSM), developed at the Jet Propulsion Laboratory in collaboration with the University of California at Irvine.

We focus in particular on sensitivity analysis, to understand the local influence of specific inputs on model results, and on sampling analysis, to quantify error margins on model diagnostics. Our results demonstrate the expected influence of measurement errors in surface altimetry, bedrock position, and basal friction. They also demonstrate the influence of model inputs such as surface mass balance, which can contribute significant errors to projections of ice sheet mass balance within a time horizon of 20-30 years.

*Thursday, December 13, 2012* | 4:00-5:00 pm | Room 32-124

**Simulation-based Inference from Second-Order Diffusions on Tangent and Co-Tangent Bundles of Riemann and Hilbert Manifolds**

*Mark A Girolami*

Department of Statistical Science

University College London

ABSTRACT:

The requirement for systematic quantification of uncertainty in the predictions made from mathematical models of engineering systems is driving a new research agenda to address the statistical and computational challenges presented. Obtaining calibrated predictive distributions from mathematical models of physical engineering systems places extraordinary demands on the current capabilities of simulation-based inference methods such as Markov chain Monte Carlo, demanding a revolution in the development of such statistical methods and their algorithmic implementations. In this lecture, a class of Markov chain Monte Carlo methods will be introduced by defining second-order diffusions on the tangent and co-tangent bundles of manifolds and characterising the invariant distributions of the diffusions. Both finite- and infinite-dimensional manifolds will be considered, with diffusion- and Hamiltonian-based constructions emerging that define a class of sampling methods able to address some of the difficulties presented when characterising uncertainty in mathematical models. Some examples will be provided, and discussion of the many open issues that remain will conclude the lecture.
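The Hamiltonian constructions mentioned rest on volume-preserving integrators such as leapfrog, whose near-conservation of energy is what keeps acceptance rates high in the resulting samplers. A minimal sketch for a one-dimensional Gaussian target, a toy stand-in for the manifold setting discussed in the talk:

```python
import math

def leapfrog(q, p, eps, n_steps, grad_U):
    """Leapfrog integration of Hamiltonian dynamics H(q, p) = U(q) + p**2/2,
    the deterministic core of Hamiltonian Monte Carlo samplers."""
    p -= 0.5 * eps * grad_U(q)          # half momentum step
    for _ in range(n_steps - 1):
        q += eps * p                    # full position step
        p -= eps * grad_U(q)            # full momentum step
    q += eps * p
    p -= 0.5 * eps * grad_U(q)          # final half momentum step
    return q, p

# Standard Gaussian target: U(q) = q**2/2, so grad_U(q) = q.
U = lambda q: 0.5 * q * q
grad_U = lambda q: q

q0, p0 = 1.0, 0.5
q1, p1 = leapfrog(q0, p0, eps=0.1, n_steps=50, grad_U=grad_U)
h0 = U(q0) + 0.5 * p0 * p0
h1 = U(q1) + 0.5 * p1 * p1
# The energy error stays O(eps**2) over long trajectories, which is what
# makes the Metropolis correction cheap.
print(abs(h1 - h0) < 1e-2)  # → True
```

The methods in the talk generalize this picture, replacing the flat momentum term with a metric on the manifold and the deterministic dynamics with second-order diffusions.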

*Thursday, November 8, 2012* | 4:00-5:00 pm | Room 32-124

**Statistical Approaches to Combining Models and Observations**

*L. Mark Berliner*

Department of Statistics

Ohio State University

ABSTRACT:

Numerical models and observational data are critical in modern science and engineering. Since both of these information sources involve uncertainty, the use of statistical, probabilistic methods plays a fundamental role. I discuss a general Bayesian framework for combining uncertain information and indicate how various approaches (ensemble forecasting, UQ, etc.) fit in this framework. I then develop the framework in the context of using output from very large computer models. Two illustrations are presented: use of climate system models in the context of climate change analysis and incorporating various analyses in ocean forecasting.
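In the simplest Gaussian case, the Bayesian machinery for merging a model forecast with an observation reduces to precision weighting. A minimal sketch with made-up numbers (not from the talk):

```python
def combine(model_mean, model_var, obs_mean, obs_var):
    """Precision-weighted (conjugate Gaussian) combination of a model
    forecast and an observation: the simplest instance of a Bayesian
    framework for merging uncertain information sources."""
    w_model = 1.0 / model_var           # precision = inverse variance
    w_obs = 1.0 / obs_var
    post_var = 1.0 / (w_model + w_obs)
    post_mean = post_var * (w_model * model_mean + w_obs * obs_mean)
    return post_mean, post_var

# A model forecast of 20.0 (variance 4.0) combined with an observation
# of 22.0 (variance 1.0): the posterior leans toward the more certain source,
# and its variance is smaller than either input's.
mean, var = combine(20.0, 4.0, 22.0, 1.0)
print(round(mean, 2), round(var, 2))  # → 21.6 0.8
```

Ensemble forecasting and data assimilation schemes can be read as high-dimensional, sequential versions of this same update.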

*Thursday, October 18, 2012* | 4:00-5:00 pm | Room 32-124

**From Sorcery to Science: How Hollywood Physics Advances Computational Engineering**

*Eitan Grinspun*

Computer Science Department

Columbia University, NY

ABSTRACT:

Blockbuster films depend on computational physics. The focus is on models that capture the qualitative, characteristic behavior of a mechanical system. Visual effects employ mathematical and computational models of hair, fur, skin, cloth, fire, granular media, and liquids. This is scientific computing with a twist. But techniques developed originally for film can also advance consumer products, biomedical research, and basic physical understanding. Here at MIT, they are used to investigate geometric nonlinearities in the mechanics of thin structures.

I will describe computational models based on discrete differential geometry (DDG). The focus is on the formal geometric structure of a mechanical system. We build a discrete (hence readily computable) geometry from the ground up, mimicking the axioms, structures, and symmetries of the smooth setting. Problems addressed via DDG include dynamic evolution of thin visco-elastic structures, granular media, and the tying of tight knots.

As a concrete example, I will present a unified treatment of viscous fluid threads and elastic Kirchhoff rods, demonstrating canonical coiling, folding, and breakup in dynamic simulations. The method is used in the latest Hollywood films, Adobe Photoshop paintbrushes, surgical training software, and physics labs. The computations are in quantitative agreement with experimental data for a variety of nonlinear physical phenomena, including hysteretic transitions between coiling regimes and the first elasto-mechanical sewing machine.

*Thursday, September 20, 2012* | 4:00-5:00 pm | Room 32-124

**Solving Fluid Structure Interaction Problems on Overlapping Grids**

*Bill Henshaw*

Center for Applied Scientific Computing

Lawrence Livermore National Laboratory, CA

ABSTRACT:

This talk will discuss the numerical solution of fluid structure interaction (FSI) problems on overlapping grids. Overlapping grids are an efficient, flexible, and accurate way to represent complex, possibly moving, geometry using a collection of overlapping structured grids. For incompressible flows with moving geometry, we have been developing a scheme based on an approximately factored compact scheme for the momentum equations together with a multigrid solver for the pressure equation. The overall scheme is fourth-order accurate in space and (currently) second-order accurate in time. The scheme will be described, and results will be presented for some three-dimensional (parallel) computations of flows with moving rigid bodies. In recent work, we have also been developing an FSI scheme based on the use of deforming composite grids (DCG). In the DCG approach, moving boundary-fitted grids are used near the deforming solid interfaces, and these overlap non-moving grids that cover the majority of the domain. For compressible flows and elastic solids, we have derived a new interface projection scheme, based on the solution of a fluid-solid Riemann problem, that overcomes the well-known “added-mass” instability for light solids. The FSI-DCG approach is described and validated for some fluid structure problems involving high-speed compressible flows coupled to rigid and elastic solids. The interesting case of a shock hitting an ellipse of zero mass is also presented.

*Wednesday, May 16, 2012* | 4:00-5:00 pm | Room 4-237

**Large-scale Modeling for Complex Systems: Bridging VLSI CAD Algorithms with Clinical Applications**

*Xin Li*

Center for Silicon System Implementation

Department of Electrical and Computer Engineering

Carnegie Mellon University

ABSTRACT:

This talk presents several novel methodologies to facilitate large-scale modeling of complex systems. Our work is motivated by the emerging need for large-scale statistical performance modeling of analog and mixed-signal integrated circuits (ICs). The objective is to capture the impact of process and environmental variations on today's nanoscale circuits. In particular, we explore a number of novel statistical techniques (e.g., sparse regression and model fusion) to address the modeling challenges posed by high dimensionality and strong nonlinearity. As such, the parametric yield of nanoscale ICs can be predicted both accurately and efficiently. This talk also discusses how the proposed modeling techniques can be further applied to adaptive post-silicon tuning of analog and mixed-signal circuits.

In addition, our algorithms originally developed for VLSI CAD problems have been successfully extended to other, non-CAD applications. The second part of this talk briefly discusses a clinical application of brain-computer interfaces (BCI) based on magnetoencephalography (MEG). The objective of BCI is to provide a direct control pathway from the brain to external devices. We will show how statistical modeling algorithms can be applied to improve the signal-to-noise ratio of MEG recordings.

*This presentation is co-sponsored by CCE and The Computational Prototyping Group.*

*Wednesday, May 9, 2012* | 4:00-5:00 pm | Room 4-237

**Statistical Approaches to Combining Models and Observations**

*Mark Berliner*

Department of Statistics

Ohio State University

ABSTRACT:

Numerical models and observational data are critical in modern science and engineering. Since both of these information sources involve uncertainty, the use of statistical, probabilistic methods plays a fundamental role. I discuss a general Bayesian framework for combining uncertain information and indicate how various approaches (ensemble forecasting, UQ, etc.) fit in this framework. I then develop the framework in the context of using output from very large computer models. Two illustrations are presented: use of climate system models in the context of climate change analysis and incorporating various analyses in ocean forecasting.

*Wednesday, April 4, 2012* | 4:00-5:00 pm | Room 4-237

**Computer Experiments on Fermions and the Onset of Turbulence**

Berni Alder

University of California, Davis

Lawrence Livermore National Laboratory

ABSTRACT:

We have examined permutation space in a homogeneous system (He3) using the efficient grand canonical ensemble (worm algorithm). We found that knowing only the pair permutation is sufficient to construct all higher-order permutations and, furthermore, that alternating higher-order permutations beyond about 10 exactly cancel. This eliminates the non-polynomial sign problem and allows the properties of helium to be determined numerically exactly down to low temperatures.

We developed a conservative adaptive mesh refinement (AMR) scheme for the lattice Boltzmann method to study the effect of small-amplitude, distributed wall roughness on flow stability, including implementation of an H-function for numerical stability. Preliminary results indicate that microscopic boundary roughness can initiate macroscopic turbulence.

*Wednesday, March 21, 2012* | 4:00-5:00 pm | Room 4-237

**Towards High Throughput Composable Multilevel Solvers for Implicit Multiphysics Simulation**

Jed Brown

Argonne National Laboratory

ABSTRACT:

Multiphysics problems present unique challenges for algorithms and software. The best methods are usually not known in advance and change with physical regime, problem size, and available hardware. The presence of disparate temporal scales often demands implicit or semi-implicit time integration. Multilevel methods are necessary for algorithmic scalability, placing new demands on hardware and on the components necessary for extensible software. Composable libraries attempt to manage this complex design space by decoupling problem specification and reusable algorithmic components from the composition of a specific algorithm.

Implicit and IMEX solution methods for multiphysics problems are either monolithic (treating the coupled problem directly) or split (solving reduced systems independently). Software for monolithic multigrid is historically more intrusive, requiring a monolithic hierarchy instead of independent hierarchies, but the required number of iterations may be smaller due to coupling on all levels. I will describe tools developed in the PETSc library for composing algebraic solvers, including “multigrid inside splitting” and “splitting inside multigrid” using the same user specification. I will also explore variants of these methods and future directions for adapting implicit multilevel algorithms to deliver more “science per watt” by better utilizing emerging hardware, for which the cost of memory bandwidth and communication is increasing relative to floating-point performance.
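The multilevel building block underneath these compositions is the multigrid cycle: smooth, restrict the residual, correct from a coarse solve, prolong, and smooth again. A minimal two-grid sketch for the 1D Poisson problem, in plain Python rather than PETSc:

```python
def residual(u, f, h):
    """r = f - A u for -u'' on interior nodes with zero Dirichlet ends."""
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i-1] if i > 0 else 0.0
        right = u[i+1] if i < n - 1 else 0.0
        r[i] = f[i] - (2*u[i] - left - right) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi smoothing sweeps (damps high-frequency error)."""
    for _ in range(sweeps):
        r = residual(u, f, h)
        u = [u[i] + omega * h**2 / 2 * r[i] for i in range(len(u))]
    return u

def solve_direct(f, h):
    """Thomas-algorithm solve of the tridiagonal coarse system."""
    n = len(f)
    a, b = -1.0 / h**2, 2.0 / h**2
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = a / b, f[0] / b
    for i in range(1, n):
        den = b - a * cp[i-1]
        cp[i] = a / den
        dp[i] = (f[i] - a * dp[i-1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

def two_grid_cycle(u, f, h):
    """Pre-smooth, full-weighting restriction, exact coarse solve,
    linear prolongation, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    m = (len(u) - 1) // 2                       # coarse interior points
    rc = [0.25*r[2*j] + 0.5*r[2*j+1] + 0.25*r[2*j+2] for j in range(m)]
    ec = solve_direct(rc, 2 * h)
    e = [0.0] * len(u)
    for j in range(m):
        e[2*j+1] = ec[j]                        # coarse nodes inject directly
    for j in range(m + 1):                      # midpoints interpolate linearly
        e[2*j] = 0.5 * ((ec[j-1] if j > 0 else 0.0) + (ec[j] if j < m else 0.0))
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h, sweeps=3)

n = 31                                          # fine interior points, n = 2m + 1
h = 1.0 / (n + 1)
f = [1.0] * n
u = [0.0] * n
r0 = max(abs(x) for x in residual(u, f, h))
u = two_grid_cycle(u, f, h)
r1 = max(abs(x) for x in residual(u, f, h))
print(r1 < r0 / 5)  # → True: one cycle reduces the residual substantially
```

Recursing on the coarse solve instead of solving it directly gives the usual V-cycle; the composable-solver tools described in the talk let cycles like this be nested with field splits in either order.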

I will present applications of these tools to multiphysics problems in glaciology, geodynamics, and plasma physics.

• A high order accurate finite element model for 3D polythermal ice flow on unstructured meshes using matrix-free operators and embedded low order discretizations for preconditioning.

• Performance and time accuracy of fully implicit methods for mantle and lithosphere dynamics, preconditioned using nested splitting and multigrid.

• IMEX time integration and composition of nonlinear solvers for stiff wave problems in magnetohydrodynamics.

*Wednesday, March 7, 2012* | 4:00-5:00 pm | Room 4-237

**V&V or the Schizophrenia of Prediction Science: from Diagnosis to Therapy**

Roger G. Ghanem

University of Southern California

ABSTRACT:

Scientific discourse about Uncertainty Quantification, Verification and Validation has been on a high note for the past decade, driven to a large extent by the demand from various constituencies that computational resources finally deliver on their promise of reproducing, if not predicting, physical reality. Clearly, the value of such an achievement would be enormous, ranging from superior product design to disaster mitigation and the management of complex systems such as financial markets, SmartGrids, and society itself. A number of challenges are quickly identified on the path to delivering this computational surrogate. First, reality itself is elusive and is not always described with commensurate topologies. This problem is somewhat mitigated by experimental techniques that can simultaneously measure physical phenomena at multiple scales. Second, mathematical models introduce additional assumptions that are often manifested as modeling and parametric uncertainties. Third, even when these models are accurate and well-resolved numerically, uncertainties are introduced at the physical realization stage, for instance when a device is actually built, reflecting additional tolerances and imperfections.

Physical reality, the mathematical model, its implementation into software components, and the as-built device, describe an equivalence class of objects that are expected to result in similar decisions. These can be viewed as multiple personalities of the same entity, and V&V can be viewed as the art and science of tackling this divergence while enabling its successful resolution.

In this talk, I will describe our efforts at the “psycho-analysis” and “psycho-therapy” of models, which rely on the astute packaging of information in accordance with the axioms of probability theory and on a function-analytic approach for treating randomness.

*Wednesday, February 22, 2012* | 4:00-5:00 pm | Room 4-237

**Tracking Multiphase Physics: Geometry, Foams, and Thin Films**

James A Sethian

Professor of Mathematics, University of California at Berkeley

ABSTRACT:

Many scientific and engineering problems involve interconnected moving interfaces separating different regions, including dry foams, crystal grain growth and multi-cellular structures in man-made and biological materials. Producing consistent and well-posed mathematical models that capture the motion of these interfaces, especially at degeneracies, such as triple points and triple lines where multiple interfaces meet, is challenging.

In joint work with Robert Saye of UC Berkeley, we introduce an efficient and robust mathematical and computational methodology for computing the solution to two- and three-dimensional multi-interface problems involving complex junctions and topological changes in an evolving general multiphase system. We demonstrate the method on a collection of problems, including geometric coarsening flows under curvature and incompressible flow coupled to multi-fluid interface problems. Finally, we compute the dynamics of unstable foams, such as soap bubbles, evolving under the combined effects of gas-fluid interactions, thin-film lamella drainage, and topological bursting.

*Wednesday, December 7, 2011* | 4:00-5:00 pm | Room 1-390

**Fast Tree Algorithms for Inverse Medium Problems with Multiple Excitations**

George Biros

Professor of Mechanical Engineering, Institute for Computational Engineering and Science, University of Texas, Austin

ABSTRACT:

I will consider the inverse medium problem for the time-harmonic wave equation with broadband and multi-point illumination in the low-frequency regime. Such problems find many applications in geosciences (e.g., ground-penetrating radar), non-destructive evaluation (acoustics), and medicine (optical tomography). The main challenge is the computational cost associated with the multiple excitations. On the one hand, multiple excitations are crucial for improving the quality of the reconstruction of the unknown medium. On the other hand, solving the inverse medium problem becomes computationally intractable for existing algorithms, as it is common to have hundreds to thousands of excitations.

I will describe a tree algorithm that, for a class of problems, provides orders of magnitude speedups over existing algorithms. The new methodology is closely related to integral equation formulations, the fast multipole method, and recent developments on direct solvers based on hierarchical matrix decompositions. I will conclude with numerical results for the inverse medium problem for the scalar Helmholtz problem.
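Tree algorithms of this family group distant sources into clusters and replace each well-separated cluster with a low-order representation. A minimal one-dimensional, Barnes-Hut-style sketch using only a monopole approximation (the algorithm in the talk is substantially more sophisticated, with higher-order expansions and hierarchical matrix structure):

```python
def build_tree(points, charges, lo, hi):
    """Binary cluster tree; each node stores its total charge and
    charge center (assumes positive charges)."""
    idx = [i for i, p in enumerate(points) if lo <= p < hi]
    if not idx:
        return None
    q = sum(charges[i] for i in idx)
    center = sum(points[i] * charges[i] for i in idx) / q
    node = {"lo": lo, "hi": hi, "q": q, "center": center, "idx": idx, "kids": []}
    if len(idx) > 4:
        mid = 0.5 * (lo + hi)
        node["kids"] = [t for t in (build_tree(points, charges, lo, mid),
                                    build_tree(points, charges, mid, hi)) if t]
    return node

def evaluate(node, x, points, charges, theta=0.5):
    """Tree evaluation of sum_j q_j / |x - y_j|: a well-separated cluster
    is replaced by a single monopole at its charge center."""
    size = node["hi"] - node["lo"]
    dist = abs(x - node["center"])
    if dist > 0 and size / dist < theta:
        return node["q"] / dist                  # far field: monopole
    if not node["kids"]:                         # leaf: direct summation
        return sum(charges[i] / abs(x - points[i]) for i in node["idx"])
    return sum(evaluate(k, x, points, charges, theta) for k in node["kids"])

# 200 unit charges on a grid in [0, 1); evaluate the potential at a
# well-separated target and compare against the direct O(N) sum.
pts = [i / 200 for i in range(200)]
qs = [1.0] * 200
tree = build_tree(pts, qs, 0.0, 1.0)
x = 4.0
approx = evaluate(tree, x, pts, qs)
exact = sum(q / abs(x - p) for q, p in zip(qs, pts))
print(abs(approx - exact) / exact < 1e-2)  # → True
```

The cost saving comes from the same mechanism as in the fast multipole method: each target interacts with O(log N) clusters instead of N sources, which is what makes many-excitation problems tractable.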

*Wednesday, November 30, 2011* | 4:00-5:00 pm | Room 1-390

**Quantum Simulations of Materials at the Mesoscale: Physics, Algorithms, and Applications**

Emily A. Carter

Gerhard R. Andlinger Professor in Energy & the Environment, Princeton University

ABSTRACT:

Many materials phenomena are controlled by features at the mesoscale, i.e., a length scale above atoms but below the micron scale. In addition to the fashionable example of nanostructures, other practical examples abound. For instance, the mechanical properties of metals are largely controlled by the nucleation and motion of dislocations and their interaction with other defects (e.g., grain boundaries, solutes, precipitates) in crystals. Experiments (e.g., electron microscopy) provide post-mortem examination of these features. By contrast, computer simulations can interrogate these defects *in situ*. Of course, reliability of the simulations is always an issue. Our research aims to develop predictive simulation tools that do not rely on any experimental input, such that they produce a truly independent source of data for comparison with experiment. This assumption/empirical-input-free approach requires going back to basic physical laws, namely those of quantum mechanics, to describe electron distributions in materials. Normally such techniques are prohibitively expensive for simulating more than a few hundred atoms on supercomputers. But because of algorithmic improvements that make a quantum mechanics method (orbital-free density functional theory) scale fully linearly with system size, we are now able to simulate fully quantum mechanically and accurately large-scale defects that play key roles in plastic deformation and ductile fracture of main group metals and metal alloys, with accuracy rivaling the most accurate solid state quantum mechanics methods available. Recent advances in the physical approximations used to evaluate the electron kinetic energy (via new nonlocal kinetic energy density functionals) and the electron-ion interaction have extended the range of reliability of this technique beyond main group metals to semiconductor materials and, very recently, to transition metals.
Current applications are focused on predicting the behavior of dislocations in aluminum and magnesium and their alloys; ultimately we hope to guide optimization of the composition and microstructure of lightweight metal alloys (by finding a sweet spot in the ductility-strength tradeoff) that can be used to improve the fuel efficiency of vehicles.

*Wednesday, November 16, 2011*


**High-order Simulation of Problems with Evolving Geometries**

Adrian Lew

Assistant Professor of Mechanical Engineering, Stanford University

4:00-5:00 pm | Room 1-390

Abstract:

Multiple problems in engineering involve geometries that evolve with the problem. Fluid-structure interaction, phase transformation, and shape optimization problems are the most common, but crack propagation problems and solids undergoing extreme deformations need such strategies as well.

Three types of approaches are typically adopted for these problems: periodic remeshing, arbitrary Lagrangian-Eulerian kinematic descriptions, and embedded or immersed boundary methods. The first one is generally considered computationally expensive, the second one breaks down under very large deformations, and the last one often leads to low-order methods because of a poor representation of the geometry.

In this talk, I will introduce the concept of “Universal Meshes,” which combines the best of each of the above strategies. In a nutshell, a Universal Mesh for a class of domains is a triangulation that can mesh any of the domains in the class upon minor perturbations of the positions of its nodes. Hence, as the domain evolves, the perturbed universal mesh provides an exact triangulation of the geometry. It is then possible to formulate high-order methods for problems with evolving geometries in a standard way.

I will show the application of these ideas to hydraulic fracturing and ballistic penetration problems. In the former, in which a crack in an elastic medium advances due to a pressurized fluid in its interior, the universal mesh is used to exactly mesh the faces of the evolving crack. This enables the coupled solution of the elasticity equations in the domain, and the lubrication equations for the motion of the fluid on the crack manifold. In contrast, for ballistic penetration problems, these ideas are used to periodically remesh the domain of the deforming solid. Along the way, I will briefly highlight other ideas related to discontinuous Galerkin and time-integration methods, which we created for these two problems as well.

*Wednesday, November 2, 2011*

**The Ocean Sink of Anthropogenic Carbon**

Samar Khatiwala

Lamont Associate Research Professor, Lamont-Doherty Earth Observatory, Columbia University

4:00-5:00 pm | Room 1-390

Abstract:

The release of fossil fuel CO2 to the atmosphere by human activity has been implicated as the predominant cause of global climate change. The ocean plays a crucial role in mitigating the effects of this perturbation to the climate system, sequestering 20 to 35% of anthropogenic CO2 emissions from the atmosphere. While much progress has been made in recent years in understanding and quantifying this sink, considerable uncertainty remains as to the distribution of anthropogenic CO2 in the ocean, its rate of uptake over the industrial era, and the partitioning of fossil fuel CO2 between the ocean and land biosphere.

In this talk, I will present the first observationally-based reconstruction of the 3-dimensional, time-evolving history of anthropogenic carbon in the ocean over the industrial era. The reconstruction is based on a novel inverse method that allows us to deconvolve the ocean’s transport Green function from oceanographic data. We show that ocean uptake of anthropogenic CO2 has increased sharply since the 1950s and is currently at 2.5+/-0.6 PgC/y, with a total inventory of 150+/-26 PgC. The Southern Ocean is the primary conduit by which this CO2 enters the ocean. Our results also suggest that the terrestrial biosphere was a source of CO2 until the 1940s, subsequently turning into a sink. To better understand the effectiveness of the ocean as a long term sink of CO2 emission, I will also describe model simulations of the path density, a diagnostic of advective-diffusive flows that allows us to track surface-to-surface transport of water and tracers, and thus quantify the time scales and pathways between uptake of CO2 and subsequent re-exposure to the atmosphere.

*Wednesday, September 14, 2011*

**Active Gels — When Mechanics Meets Chemistry**

Zhigang Suo

Allen E. & Marilyn M. Puckett Professor of Mechanics and Materials, School of Engineering and Applied Sciences, Harvard University

4:00-5:00 pm | Room 1-390

Abstract:

Long and flexible polymers can be covalently crosslinked to form a three-dimensional network. The network can imbibe a solvent and swell, forming a gel. Gels have many uses, including personal care, drug delivery, tissue engineering, microfluidic regulation, and oilfield management. Mixtures of macromolecular networks and mobile molecules also constitute most tissues of plants and animals. The amount of swelling can be large and reversible. The gels are active in that the amount of swelling can be regulated by environmental stimuli, such as force, electric field, pH, salinity, and light. This talk describes a theory that combines the mechanics of large deformation and the chemistry of molecular mixtures. The theory is illustrated with examples of swelling-induced large deformation, contact, and bifurcation. The theory is further illustrated with recent experiments.

*Wednesday, May 4, 2011*

**Nano-scale hybrid systems for efficient solar energy conversion**

Efthimios Kaxiras

John Hasbrouck Van Vleck Professor of Pure and Applied Physics, and Head of Computational Physics and Materials Science Research Group, Harvard University

4:00-5:00 pm | Room 3-370

Abstract:

The design and optimization of low-cost photovoltaic (PV) materials is a key component toward achieving energy sufficiency and reducing greenhouse gas emissions. Dye-sensitized solar cells (DSCs) are often cited as one of the promising third-generation cells that could contribute significantly to the solution of this problem. Better design of these systems, by carefully selecting the appropriate organic molecules, semiconductor substrates, and surfaces through the use of theoretical models with predictive power, is required to speed up the development of successful DSC devices. To this end, we have developed a new theoretical tool in the context of time-dependent density functional theory (TD-DFT) that allows us to propagate the coupled ion and electron dynamics in real time. This approach provides a realistic description of charge injection mechanisms between the organic molecule and the semiconducting substrate. This unique capability opens new possibilities for comprehensive studies of the factors that affect the stability and efficiency of such organic-inorganic composite systems.

*Wednesday, April 20, 2011*

**Finite Elements in Coastal Ocean Circulation Modeling**

Clint Dawson

Professor, Aerospace Engineering and Engineering Mechanics, and Head of the Computational Hydraulics Group, Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin

4:00-5:00 pm | Room 3-370

Abstract:

Finite element methods of various types have been implemented by a number of researchers for modeling global and regional ocean circulation. In this talk, we will discuss and compare two approaches investigated by the author and several collaborators for modeling circulation in the coastal ocean: a continuous Galerkin finite element method based on the generalized wave continuity equation (GWCE) [1], and a discontinuous Galerkin (DG) formulation [2, 3] based on the primitive form of the shallow water equations.

The GWCE is the basis for the Advanced Circulation (ADCIRC) model [4], which is a widely used quasi-operational shallow water simulator. While this model has been shown to work well in many “real-world” situations, such as in modeling hurricane storm surges in the Gulf coast of the U.S. [5–7], it suffers from some drawbacks, including mass conservation errors, stability issues, and a restriction to low-order approximations. The DG formulation has certain potential advantages, including local mass conservation, the use of numerical fluxes and slope-limiters to prevent spurious oscillations, and the ability to use higher-order approximations locally within an element [8].
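For reference, the primitive (conservative) form of the two-dimensional shallow water equations that the DG formulation discretizes can be written, with source terms (bottom friction, Coriolis, wind stress) collected into S:

```latex
\frac{\partial \zeta}{\partial t} + \nabla\cdot(H\mathbf{u}) = 0,
\qquad
\frac{\partial (H\mathbf{u})}{\partial t}
  + \nabla\cdot(H\mathbf{u}\otimes\mathbf{u})
  + gH\,\nabla\zeta = \mathbf{S},
```

where ζ is the free-surface elevation, H the total water depth, u the depth-averaged velocity, and g the gravitational acceleration. The GWCE of [1] is obtained by combining the time derivative of the continuity equation with the momentum equation, which suppresses the spurious oscillations that plague continuous Galerkin discretizations of the primitive form.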

We will compare both methods for standard model problems, examining both accuracy and parallel efficiency. We will also discuss how features particular to coastal regions, such as wetting/drying and levees, are handled in each model. Finally, we will discuss results for both models applied to actual hurricanes along the Gulf of Mexico coast, including Hurricane Ike, which hit Texas in 2008. We will conclude with a discussion of current challenges and future research directions.

References

[1] D.R. Lynch and W.R. Gray, A wave equation model for finite element computations, Computers and Fluids, 7, pp. 207-228, 1979.

[2] V. Aizinger and C. Dawson, A discontinuous Galerkin method for two-dimensional flow and transport in shallow water, Advances in Water Resources, 25, pp. 67-84, 2002.

[3] E.J. Kubatko, J.J. Westerink and C. Dawson, hp Discontinuous Galerkin methods for advection dominated problems in shallow water flow, Comput. Meth. Appl. Mech. Engrg., 196, pp. 437–451, 2006.

[4] R.A. Luettich, J.J. Westerink and N.W. Scheffner, ADCIRC: An advanced three-dimensional circulation model for shelves, coasts and estuaries, Report 1: Theory and methodology of ADCIRC-2DDI and ADCIRC-3DL, Dredging Research Program Technical Report DRP-92-6, U.S. Army Engineers Waterways Experiment Station, Vicksburg, MS, 1992.

[5] S. Bunya, J.C. Dietrich, J.J. Westerink, B.A. Ebersole, J.M. Smith, J.H. Atkinson, R. Jensen, D.T. Resio, R.A. Luettich, C. Dawson, V.J. Cardone, A.T. Cox, M.D. Powell, H.J. Westerink, H.J. Roberts, A high-resolution coupled riverine flow, tide, wind, wind wave and storm surge model for Southern Louisiana and Mississippi: Part I - Model development and validation, Monthly Weather Review, 138, pp. 345–377, DOI: 10.1175/2009MWR2906.1, 2010.

[6] J.C. Dietrich, S. Bunya, J.J. Westerink, B.A. Ebersole, J.M. Smith, J.H. Atkinson, R. Jensen, D.T. Resio, R.A. Luettich, C. Dawson, V.J. Cardone, A.T. Cox, M.D. Powell, H.J. Westerink and H.J. Roberts, A high-resolution coupled riverine flow, tide, wind, wind wave and storm surge model for Southern Louisiana and Mississippi: Part II - Synoptic description and analyses of Hurricanes Katrina and Rita, Monthly Weather Review, 138, pp. 378–404, DOI: 10.1175/2009MWR2907.1, 2010.

[7] J.C. Dietrich, J.J. Westerink, A.B. Kennedy, J.M. Smith, R. Jensen, M. Zijlema, L.H. Holthuijsen, C. Dawson, R.A. Luettich, Jr., M.D. Powell, V.J. Cardone, A.T. Cox, G.W. Stone, M.E. Hope, S. Tanaka, L.G. Westerink, H.J. Westerink, and Z. Cobell, Hurricane Gustav (2008) waves, storm surge and currents: Hindcast and synoptic analysis in Southern Louisiana, submitted to Monthly Weather Review.

[8] E. Kubatko, S. Bunya, C. Dawson and J.J. Westerink, Dynamic p-adaptive Runge-Kutta discontinuous Galerkin methods for the shallow water equations, Comput. Methods Appl. Mech. Engrg., 198, pp. 1766–1774, 2009.

*Wednesday, March 9, 2011*

**Reduced-order modeling and interpolation on matrix manifolds for time-critical applications in engineering**

Charbel Farhat, Professor, Aeronautics & Astronautics and Mechanical Engineering, Institute for Computational Mathematics & Engineering, Stanford University

4:00-5:00 pm | Room 3-370

Abstract:

In many applications, high-fidelity time-dependent numerical simulations remain so computationally intensive that they cannot be used as often as needed, or are more often used in special circumstances than routinely. This is the case, for example, for turbulent computational fluid dynamics (CFD) computations at high Reynolds numbers. Consequently, in many engineering fields, the impact of computational sciences on time-critical operations such as design, design optimization, and control has not yet fully materialized. For these operations, model reduction methods that can faithfully reproduce the essential features of the larger computational models at a fraction of their computational cost offer a promising alternative. This presentation will cover some recent advances in this field and their applications to the support of the aerodynamic design of a Formula 1 car and the flutter flight testing of a fighter aircraft. It will also demonstrate their real-time performance on a mobile device such as a tablet computer or a smartphone, for the CFD-based flutter analysis of a wing-store configuration parameterized by altitude, free-stream Mach number and fuel fill level.

*Wednesday, February 9, 2011*

**A new approach to molecular simulation**

Vijay Pande, Associate Professor of Chemistry, Structural Biology, and Computer Science, Stanford University

4:00-5:00 pm | Room 3-370

Abstract:

In order to rationally design molecular systems for dynamical properties, an important step is the ability to quantitatively predict the dynamics of molecules at the atomic scale. The traditional approach to molecular simulation, e.g., running molecular dynamics simulations and analyzing the resulting trajectories, has numerous shortcomings. In particular, such methods typically yield only anecdotal results and reach timescales thousands to millions of times too short to connect directly with experiment. I will present a new approach to molecular simulation. Through a combination of molecular dynamics simulation, Bayesian inference, and worldwide distributed computing, one can directly simulate, in chemical detail, many complex, experimentally relevant systems on length scales and timescales previously unimaginable. I will give examples of how these new methods can lead to significant breakthroughs in classically challenging problems, such as protein folding, as well as biological and biomedical applications, including Aβ protein misfolding in Alzheimer’s disease.
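One common ingredient of such trajectory-ensemble approaches is a Markov state model, which stitches many short simulations into long-timescale kinetics by estimating transition probabilities between discrete conformational states. A minimal sketch (my illustration of the general technique, not the speaker's specific method):

```python
import numpy as np

def transition_matrix(dtraj, n_states, lag=1):
    """Estimate a row-stochastic transition matrix from a discrete
    state trajectory at a given lag time."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1
    # Row-normalize; a tiny pseudocount guards against empty rows.
    counts += 1e-12
    return counts / counts.sum(axis=1, keepdims=True)

# Toy two-state trajectory: stays put with probability 0.9, else hops.
rng = np.random.default_rng(0)
dtraj = [0]
for _ in range(9999):
    dtraj.append(dtraj[-1] if rng.random() < 0.9 else 1 - dtraj[-1])

T = transition_matrix(np.array(dtraj), n_states=2)
```

Eigenvalues of T then give the slow relaxation timescales, which is how many short, independently run trajectories can be combined to predict dynamics far beyond the length of any single one.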

*Wednesday, December 8, 2010*

**Fluid-structure interaction: Models, algorithms, applications**

Alfio Quarteroni, Chair of Modelling and Scientific Computing, Mathematics Institute of Computational Science and Engineering, École Polytechnique Fédérale de Lausanne

4:00-5:00 pm | Room 3-370

Abstract:

In this talk I will present a general numerical framework aimed at the solution of partial differential equations that model the interaction between fluids and solids. Several families of numerical algorithms will be addressed, including those based on dimensional reduction. Finally, a few applications will be shown, in particular in the field of cardiovascular flow modeling and in that of yacht design.

*Wednesday, November 3, 2010*

**Adaptive Multiscale Finite-Volume Formulation for Multiphase Flow in Heterogeneous Porous Media**

Hamdi Tchelepi, Associate Professor of Energy Resources Engineering, School of Earth Sciences, and Co-Director, Center for Computational Earth and Environmental Sciences, Stanford University

4:00-5:00 pm | Room 3-370

Abstract:

Multiscale methods for subsurface flow modeling have received significant attention in recent years, but multiscale formulations are not yet robust enough for general-purpose reservoir simulation practice. Here, we address two features associated with highly heterogeneous models that pose serious challenges to the multiscale finite-volume formulation, namely, permeability channels and strongly anisotropic transmissibility (tensor permeability, large grid aspect ratio). A two-stage Algebraic Multiscale Solver (TAMS) is used for the elliptic pressure system. In the first stage, a global coarse-scale problem, which employs the (dual) multiscale basis functions, is constructed and solved using algebraic multigrid. Galerkin and locally conservative operators are investigated. The second stage uses a local preconditioner (e.g., Block Incomplete LU) on the fine grid. TAMS is combined with adaptive reconstruction of the saturation field to solve nonlinear immiscible two-phase displacements in highly heterogeneous formations.
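Generically, a two-stage solver of the kind TAMS instantiates combines a global coarse-scale correction with a local fine-scale preconditioner. A minimal two-level sketch on a 1-D Poisson system (my illustration: piecewise-constant aggregation stands in for multiscale basis functions, and a damped Jacobi sweep stands in for Block Incomplete LU):

```python
import numpy as np

n, m = 64, 8                       # fine size, aggregate size
nc = n // m                        # number of coarse cells
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Poisson

# Piecewise-constant prolongation P and Galerkin coarse operator.
P = np.zeros((n, nc))
for j in range(nc):
    P[j * m:(j + 1) * m, j] = 1.0
Ac = P.T @ A @ P

def precond(r):
    """Stage 1: global coarse-scale correction.
       Stage 2: one damped-Jacobi sweep on the remaining residual."""
    x = P @ np.linalg.solve(Ac, P.T @ r)
    x += (2.0 / 3.0) * (r - A @ x) / np.diag(A)
    return x

# Stationary iteration on A x = b with the two-stage preconditioner.
b = np.ones(n)
x = np.zeros(n)
for _ in range(400):
    x += precond(b - A @ x)

res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The division of labor mirrors the abstract: the coarse stage captures the global (low-frequency) part of the error, and the local stage removes what the coarse space cannot represent.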

*Wednesday, October 13, 2010*

**The Well-Tempered Ensemble**

Michele Parrinello, ETH Zurich, Department of Chemistry and Applied Biosciences, and Director, Swiss Center for Scientific Computing (CSCS)

4:00-5:00 pm | Room 3-370

Abstract:

In order to alleviate the sampling problem in complex systems characterized by metastable states separated by large barriers, we introduce the well-tempered ensemble defined by the partition function Z_γ = ∫ dU [e^(−βU) N(U)]^(1/γ), where U is the potential energy and N(U) the density of states. As γ is varied, Z_γ spans the range from the canonical (γ = 1) to the multicanonical (γ = ∞) ensemble. We show that, to a first approximation, at intermediate values of γ the average energy is close to its canonical value while its fluctuations are strongly enhanced. These properties can be used to enhance sampling, especially in combination with parallel tempering. We show that Z_γ is the ensemble sampled in a well-tempered metadynamics run [A. Barducci, G. Bussi, and M. Parrinello, Phys. Rev. Lett. 100, 020603 (2008)] that uses the energy as a collective variable. Canonical ensemble averages are then recovered by applying a recently developed reweighting scheme [M. Bonomi, A. Barducci, and M. Parrinello, J. Comput. Chem. 30, 1615 (2009)]. In a series of applications as varied as the Ising model, a Gō-like model for HIV protease, and the freezing of a Lennard-Jones liquid, we demonstrate orders-of-magnitude gains in sampling efficiency.
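Restating the abstract's definition in LaTeX and making the two limits explicit:

```latex
Z_\gamma = \int dU \left[ e^{-\beta U} N(U) \right]^{1/\gamma},
\qquad
P_\gamma(U) \propto \left[ e^{-\beta U} N(U) \right]^{1/\gamma}.
```

For γ = 1 this is the canonical energy distribution P(U) ∝ N(U) e^(−βU); as γ → ∞ the bracket is raised to the power zero, so P_γ(U) becomes flat in U, which is the multicanonical limit.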

*Wednesday, October 6, 2010*

**Implicit Filters for Data Assimilation**

Alexandre Chorin, Department of Mathematics, University of California, Berkeley

4:00-5:00 pm | Room 3-370

Abstract:

There are many problems in science, e.g., in meteorology and economics, where one needs to estimate the current state of a system on the basis of an uncertain model supplemented by a stream of noisy data. This is often done by a particle filter, which samples the probability density generated by the uncertain equation as it is conditioned by the data. The resulting algorithm can be very expensive: it is difficult to sample a many-dimensional probability density, except when the problem is linear and Gaussian, because there is typically an immense number of states to sample, with all but a minute fraction having negligible probability, so that one needs to conduct some kind of expensive search.

Implicit sampling offers a solution to this problem. The idea is to pick a desired probability, and then find a sample that assumes this probability by solving an appropriate characteristic equation. It is of course important to solve this equation efficiently, and I will also explain how this can be done, with examples, including applications to oceanography.

The difficulty in sampling large-dimensional non-Gaussian probability densities is a major roadblock in other areas of science as well, and I will briefly discuss some other applications. I will develop all methods from scratch.
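As a point of reference, a minimal bootstrap particle filter for a 1-D linear-Gaussian model looks like the following. This is the standard method whose cost the talk addresses, not the implicit filter itself; the model coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: x_k = 0.9 x_{k-1} + process noise;  y_k = x_k + observation noise.
a, q, r, n_particles, n_steps = 0.9, 0.5, 0.5, 1000, 50

# Simulate a ground-truth trajectory and noisy observations of it.
x_true = np.zeros(n_steps)
y = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = a * x_true[k - 1] + np.sqrt(q) * rng.standard_normal()
    y[k] = x_true[k] + np.sqrt(r) * rng.standard_normal()

particles = rng.standard_normal(n_particles)
estimates = []
for k in range(1, n_steps):
    # Propagate each particle through the model dynamics (the proposal).
    particles = a * particles + np.sqrt(q) * rng.standard_normal(n_particles)
    # Weight each particle by the likelihood of the new observation.
    w = np.exp(-0.5 * (y[k] - particles) ** 2 / r)
    w /= w.sum()
    # Resample to avoid weight degeneracy.
    particles = particles[rng.choice(n_particles, n_particles, p=w)]
    estimates.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - x_true[1:]) ** 2))
```

In high dimensions the weights w collapse onto a handful of particles, which is exactly the "immense number of states with negligible probability" problem described above.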

*Wednesday, September 15, 2010*

**Simulation and Plasticity Across Length- and Timescales: Irradiated Materials and Metallic Glasses**

David Rodney, Grenoble Institute of Technology

4:00-5:00 pm | Room 3-370

Abstract:

Understanding the plasticity of materials, whether crystalline or amorphous, requires accounting for multiple length- and timescales. So far, emphasis has mostly been placed on length-scales in crystals, by coupling or linking molecular dynamics, dislocation dynamics and the finite element method. Multiple time-scale methods are less advanced, although they are essential to account for thermally-activated processes that control, for instance, plasticity in high-Peierls-stress crystals or the nucleation of shear bands in glasses. An overview of our work in that field will be presented, starting with irradiated metals where novel atomistic interaction mechanisms between dislocations and irradiation defects, once incorporated in dislocation dynamics simulations, explain the microstructures observed at the micron scale in electron microscopy. For multiple time-scale methods, we will show how determining the distributions of thermally-activated events in a glass helps us understand its plasticity. Finally, we will present recent results on how quantum tunneling and zero-point vibration affect the thermally-activated glide of dislocations at low temperatures.

*Wednesday, May 12, 2010*

**A Stochastic Newton Method for Large-Scale Statistical Inverse Problems**

Omar Ghattas, Institute for Computational Engineering and Sciences, University of Texas at Austin

4:00-5:00 pm | Room 1-390

Abstract:

We are interested in the solution of several inverse problems in solid earth geophysics, including the inference of mantle constitutive parameters from observed plate motions, earth seismic velocities from surface seismograms, and polar ice sheet basal friction from satellite observations. Each of these inverse problems is most naturally cast as a large-scale statistical inverse problem in the framework of Bayesian inference: given observations and their uncertainty, the governing PDEs and their uncertainty, and a prior model of the parameter field and corresponding uncertainty, find the posterior probability density of the parameters.

The posterior density is a surface in high dimensions, and the standard approach is to sample it via a Markov-chain Monte Carlo (MCMC) method and then compute statistics of the samples. However, standard MCMC methods view the underlying parameter-to-observable map as a black box, and thus do not exploit PDE structure. As such, these methods become prohibitive for high dimensional parameter spaces and expensive-to-solve PDEs.

Here, we present a Langevin-accelerated MCMC method for sampling high-dimensional, expensive-to-evaluate probability densities. The method builds on previous work in Metropolized Langevin dynamics, which uses gradient information to guide the sampling in useful directions, improving acceptance probabilities and convergence rates.
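A minimal Metropolized Langevin sampler, the building block the abstract refers to, can be sketched as follows for a generic log-density. This is an illustration of the base method, not the authors' stochastic Newton code:

```python
import numpy as np

def mala(grad_logp, logp, x0, step, n_samples, rng):
    """Metropolis-adjusted Langevin algorithm: propose a gradient-guided
    step plus Gaussian noise, then accept/reject to preserve the target."""
    x = np.asarray(x0, dtype=float)
    samples, accepted = [], 0
    for _ in range(n_samples):
        noise = rng.standard_normal(x.shape)
        prop = x + 0.5 * step * grad_logp(x) + np.sqrt(step) * noise
        # Log proposal densities: q(prop|x) forward, q(x|prop) reverse.
        fwd = -np.sum((prop - x - 0.5 * step * grad_logp(x)) ** 2) / (2 * step)
        rev = -np.sum((x - prop - 0.5 * step * grad_logp(prop)) ** 2) / (2 * step)
        log_alpha = logp(prop) - logp(x) + rev - fwd
        if np.log(rng.random()) < log_alpha:
            x, accepted = prop, accepted + 1
        samples.append(x.copy())
    return np.array(samples), accepted / n_samples

# Toy target: standard normal, logp(x) = -x^2/2, gradient -x.
rng = np.random.default_rng(2)
chain, acc_rate = mala(lambda x: -x, lambda x: -0.5 * np.sum(x**2),
                       np.array([3.0]), step=0.5, n_samples=5000, rng=rng)
```

The gradient drift term is what "guides the sampling in useful directions"; the stochastic Newton idea goes further by also using (approximate) Hessian information of the posterior.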

**CANCELLED** due to volcano

*Wednesday, April 21, 2010*

**Simulation and Plasticity Across Length- and Timescales: Irradiated Materials and Metallic Glasses**

David Rodney, Grenoble Institute of Technology

4:00-5:00 pm | Room 1-390

Abstract:

Understanding the plasticity of materials, whether crystalline or amorphous, requires accounting for multiple length- and timescales. So far, emphasis has mostly been placed on the length-scale issue in crystals, by coupling or linking molecular dynamics, dislocation dynamics and the finite element method. Multiple time-scale methods are less advanced, although they are essential to account for thermally-activated processes that control, for instance, plasticity in high-Peierls-stress crystals or shear band nucleation in glasses. An overview of our work in that field will be presented, starting with irradiated metals where novel atomistic interaction mechanisms between dislocations and irradiation defects, once incorporated in dislocation dynamics simulations, explain the microstructures observed at the micron scale in electron microscopy. For multiple time-scale methods, a promising approach is “on-the-fly” kinetic Monte Carlo, where thermally-activated processes are determined directly on atomistic systems using saddle-point search methods. This method will be illustrated on the transition between slow creep and fast plastic deformation in metallic glasses.

*April 7, 2010*

**Micro and Macro Models of Gradient-flow Type for Crowd Motion in Emergency Situations**

Bertrand Maury, Laboratoire de Mathématiques, Université Paris-Sud

4:00-5:00 pm | Room 1-390

Abstract:

We are interested in modeling the evacuation of a building in emergency situations. We propose a class of models based on the following considerations: individuals tend to minimize their own dissatisfaction, and the global instantaneous behaviour results from a balance between fulfillment of individualistic tendencies and respect of the congestion constraint.

This modelling approach can be carried out microscopically (each individual is represented by a disc) and macroscopically (the population is described by a diffuse density). The microscopic model has a natural gradient flow structure (it can be formulated as a steepest descent phenomenon associated to some global dissatisfaction function).

We will describe how the Wasserstein setting, which in some way consists in following people in their motion (in a Lagrangian way), makes it possible to extend the gradient flow structure to the macroscopic model, and provides a natural framework for this type of unilateral evolution problem.
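The macroscopic gradient-flow structure alluded to here is usually formalized through the Jordan-Kinderlehrer-Otto (JKO) time discretization (standard background, not spelled out in the abstract): each time step solves a minimization in the Wasserstein metric W_2,

```latex
\rho^{k+1} \in \operatorname*{argmin}_{\rho}\;
\frac{1}{2\tau}\, W_2^2\!\left(\rho,\rho^{k}\right) + \mathcal{F}(\rho),
```

where F plays the role of the global dissatisfaction functional, τ is the time step, and the congestion constraint enters as an upper bound on the admissible density ρ.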

We shall address the numerical issues raised by those models, and illustrate their behaviour in some typical situations.

*March 3, 2010*

**Development and Application of the ReaxFF Reactive Force Field in Atomistic-scale Simulations on Combustion, Catalysis and Material Failure**

Adri Van Duin, Mechanical and Nuclear Engineering, Pennsylvania State University

4:00-5:00 pm | Room 1-390

Abstract:

While quantum mechanical (QM) methods allow for highly accurate atomistic-scale simulations, their high computational expense limits applications to fairly small systems (generally smaller than 100 atoms) and mostly to static, rather than dynamical, approaches. Force field (FF) methods are orders of magnitude faster than QM methods, and as such can be applied to perform nanosecond-scale dynamics simulations on large (>>1000 atoms) systems. However, these FF methods can usually only describe a material close to its equilibrium state and as such cannot properly simulate bond dissociation and formation.

This lecture will describe how the traditional, non-reactive FF concept can be extended to include reactive events by introducing bond order/bond distance concepts. Furthermore, it will address how these reactive force fields can be trained against QM data, thus greatly enhancing their reliability and transferability. Finally, this lecture will describe recent applications of the ReaxFF reactive force fields to a wide range of different materials, focusing specifically on applications associated with combustion, catalysis and material failure.
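Schematically, the bond order/bond distance idea makes each pairwise bond order a smooth, rapidly decaying function of interatomic distance, e.g. a term of the form (a single-term sketch for illustration only; the actual ReaxFF expression uses separate σ and π contributions plus correction terms):

```latex
BO'_{ij} = \exp\!\left[\,p_{1}\left(\frac{r_{ij}}{r_{0}}\right)^{p_{2}}\right],
\qquad p_{1} < 0,\; p_{2} > 1,
```

so that the bond order tends to one at short separation and decays smoothly to zero at dissociation, allowing bonded energy terms to switch off continuously as bonds break.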

*February 10, 2010*

**Color Printers, Mailboxes, Fish, Climate Change, and Homer Simpson or Centroidal Voronoi Tessellations: Algorithms and Applications**

Max Gunzburger, Department of Scientific Computing, Florida State University

4:00-5:00 pm | Room 1-390

Abstract:

Centroidal Voronoi tessellations (CVTs) are special Voronoi diagrams for which the generators of the diagrams are also the centers of mass (with respect to a given density function) of the Voronoi cells. CVTs have many uses and applications, a non-exhaustive list of which includes data compression, image segmentation and edge detection, clustering, cell biology, territorial behavior of animals, resource allocation, stippling, grid generation in volumes and on surfaces, meshless computing, hypercube sampling, and reduced-order modeling. We discuss mathematical features of CVTs (that give an indication of why they are so effective) as well as deterministic and probabilistic methods for their construction. Our main focus, however, is on considering as many applications of CVTs as time permits.
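The most common deterministic construction is Lloyd's method: alternately compute the Voronoi regions of the generators and move each generator to the centroid of its region, iterating to a fixed point. A discrete sketch over sample points (my illustration, with a uniform density):

```python
import numpy as np

def lloyd(points, generators, n_iters=50):
    """Lloyd iteration for a discrete CVT: assign each sample point to
    its nearest generator (a discrete Voronoi diagram), then move each
    generator to the centroid (mean) of its assigned points."""
    g = generators.copy()
    for _ in range(n_iters):
        d = np.linalg.norm(points[:, None, :] - g[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(g)):
            cell = points[labels == k]
            if len(cell):
                g[k] = cell.mean(axis=0)
    return g, labels

rng = np.random.default_rng(3)
pts = rng.random((2000, 2))        # uniform samples in the unit square
gen0 = rng.random((10, 2))         # random initial generators
gen, labels = lloyd(pts, gen0)
```

At convergence the generators coincide with the centroids of their own Voronoi cells, which is precisely the CVT defining property; a nonuniform density is handled by weighting the centroid computation.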

*December 2, 2009*

**Plumber’s Wonderland Found on Graphene**

Ju Li, Material Theories Group, Department of Materials Science

Abstract:

Curvy nanostructures such as carbon nanotubes and fullerenes have extraordinary properties but are difficult to pick up, handle and assemble into devices after synthesis. We have performed experimental and modeling research into how to construct curvy nanostructures directly integrated on graphene, taking advantage of the fact that graphene, an atomically thin two-dimensional sheet, bends easily after open edges have been cut on it, which can then fuse with other open edges, like a plumber connecting metal fittings. By applying electrical current heating to few-layer graphene inside an electron microscope, we observed the in situ creation of many interconnected, curved carbon nanostructures, such as graphene bilayer edges (BLEs), aka “fractional nanotubes”; BLE polygons equivalent to “squashed fullerenes” and “anti quantum-dots”; and nanotube-BLE junctions connecting multiple layers of graphene. Further theoretical research has indicated that multiple-layer graphene offers unique opportunities for tailoring carbon-based structures and engineering novel nano-devices with complex topologies.

*November 4, 2009*

**Large-Scale Nonlinear Optimization**

Andreas Wächter, Mathematical Sciences Department, T.J. Watson Research Center, IBM

4:00-5:00 pm | Room 1-390

Abstract:

In this talk, we present an algorithm for nonlinear continuous optimization which uses a primal-dual interior point approach to facilitate the handling of up to millions of equality and inequality constraints. A filter line-search method ensures convergence to local solutions of the optimization problem (under standard assumptions). This algorithm has been implemented as the open source software package Ipopt and has been used to solve a number of real-life industrial and scientific applications. Some recent improvements will be discussed, such as the adaptation of the method for the use of iterative linear solvers (e.g., for the solution of 3D PDE-constrained optimization problems), as well as a distributed memory implementation for massively parallel computing environments.
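The interior-point idea can be illustrated on a toy problem (a primal log-barrier sketch, far simpler than Ipopt's primal-dual filter line-search method): minimize x² subject to x ≥ 1 by solving a sequence of unconstrained barrier problems min x² − μ log(x − 1) with μ driven to zero.

```python
def barrier_solve(mu, x):
    """Newton's method on f(x) = x^2 - mu*log(x - 1), for x > 1."""
    for _ in range(100):
        grad = 2 * x - mu / (x - 1)
        hess = 2 + mu / (x - 1) ** 2
        step = grad / hess
        # Damp the step so the iterate stays strictly feasible (x > 1).
        while x - step <= 1:
            step *= 0.5
        x -= step
        if abs(grad) < 1e-12:
            break
    return x

x = 2.0
for mu in [1.0, 0.1, 0.01, 1e-4, 1e-6, 1e-8]:
    x = barrier_solve(mu, x)   # warm-start each solve from the last
```

Each barrier subproblem is smooth and unconstrained, so Newton's method applies; the constrained minimizer x = 1 is approached from the strictly feasible side as μ shrinks. Primal-dual methods such as Ipopt's instead solve for primal and dual variables simultaneously, which scales to the millions of constraints mentioned above.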

*October 7, 2009*

**New Discontinuous Petrov-Galerkin Techniques for Designing Numerical Schemes**

Jay Gopalakrishnan, Department of Mathematics, University of Florida

4:00-5:00 pm | Room 1-390

Abstract:

Finite element methods of the Petrov-Galerkin type have different trial and test spaces. While approximate solutions lie in the trial space, the equations are satisfied weakly up to the test space. Asserting the general principle that, although one must choose trial spaces with good approximation properties, one may pick test spaces solely for their stability properties, I will exemplify it by designing specific novel schemes. The presentation will focus largely on our initial studies of the transport equation, but the potential for applying the technique in greater generality will be conveyed.
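Abstractly, a Petrov-Galerkin scheme with trial space U_h and test space V_h (the general setting the talk starts from) reads:

```latex
\text{find } u_h \in U_h \ \text{such that} \quad
b(u_h, v_h) = \ell(v_h) \qquad \forall\, v_h \in V_h,
```

where b(·,·) is the bilinear form of the weak formulation and ℓ the load functional; the classical (Bubnov-)Galerkin method is the special case V_h = U_h. The principle above amounts to choosing V_h so that the discrete inf-sup stability constant of b is as favorable as possible.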

*September 16, 2009*

**Nanomaterials for Energy Conversion and Storage: Insight from Theoretical Calculations**

Jeffrey Grossman, Department of Materials Science and Engineering, MIT

4:00-5:00 pm | Room 1-390

Abstract:

Materials for energy conversion and storage can be greatly improved by taking advantage of unique effects that occur at the nanoscale. In our work, we develop and apply classical and quantum mechanical calculations to predict key properties that govern the conversion and storage efficiencies in these materials, including structural and electronic effects, interfacial charge separation, excited state phenomena, band level alignment, confinement effects, reaction pathway energetics, and novel synthesis approaches. An overview of our work will be presented, with examples in solar photovoltaics, thermoelectrics, hydrogen storage, and solar fuels. We use these examples to illustrate how computational approaches can improve our understanding and lead to more efficient materials and ultimately devices.