Complex Systems Modeling and Computation
In the Focus Group Complex Systems Modeling and Computation, Hans Fischer Senior Fellow Prof. Yannis Kevrekidis collaborates with his hosts Prof. Katharina Krischer (Non-Equilibrium Chemical Physics) and Prof. Oliver Junge (Numerics of Complex Systems).
Complex multiscale systems, and especially complex processes involving cascades of scales, are ubiquitous in current natural science research. Such processes feature more than two characteristic scales, their smallest and largest scales are widely separated, and much of the intervening scale range participates in the process interactions. They are often too complex for experimental studies, but with the steady increase in computing power there is hope that they can be understood through computational simulation. Such simulations remain very challenging, however: the wide range of scales is associated with very large numbers of degrees of freedom, which in many cases will prohibit brute-force, all-detail computational modeling far into the future. Moreover, interactions of the smallest, largest, and intermediate scales often render most established theoretical or computational tools ineffective or inapplicable, because most of them are well founded only for two-scale problems.
The Problem: In our university textbooks, the equations describing a phenomenon (fluid flow, chemotaxis, mechanics) are typically written at the same level at which we want the information (macroscopic velocity fields, bacterial concentrations, macroscopic deformations). But these days, increasingly, the level at which the physics is understood (molecular, cellular, agent-based) is much finer than the macroscopic, human, systems level at which we want to get information. We do not have the time/intelligence/experience to obtain good macroscopic descriptions (good closures), so we are “stuck” with simulating and observing very detailed models of great complexity at great cost. Sometimes we do not even know what the right macroscopic variables (the right observables, the right reaction coordinates, the right order parameters) are. If we had macro equations, getting information from the models – designing, controlling, optimizing – would be “easy”: we have great computational tools for macro-level, continuum models, based on calculus and numerical analysis.
We do not have accurate macroscopic equations, but we do have “fine-scale” descriptions. Yet we do not want to simulate molecules over long times, large spaces, and many parameter values. Instead we ask: if we had macroscopic PDEs, what would our numerical subroutines do with them? They would use the PDEs to evaluate residuals, Jacobians, time derivatives, and Hessians at specific moments in time and specific grid points in space. So, lacking the macro equations, we set up and run brief bursts of fine-scale simulation wherever these macro numbers are needed; instead of getting the numbers from a closed formula, we get them from a brief numerical experiment (possibly an ensemble of experiments) with the fine-scale model.

We call this equation-free not because we have no equations (we have the micro equations) – it is the macro equations that we do not have. We use calculus, Taylor series, and traditional continuum numerical analysis to design the right micro computational experiment from which to obtain the numbers needed for macro computations. Traditional numerical algorithms (initial value solvers, PDE discretizations, eigenvalue solvers) thus become protocols for the design of computational experiments with the fine-scale code – protocols for where to iteratively collect computational data. This drastically accelerates the extraction of macro information from micro models of complex systems; we know how to do this for problems ranging from macromolecular folding to micelle formation, and from chemotaxis and multiscale flow to agent-based simulations in the social sciences. There are nontrivial technical issues, but we can do this well.
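As a concrete caricature of this “brief bursts where macro numbers are needed” idea, the following is a minimal Python sketch of coarse projective integration, one standard equation-free template. Everything in it is an illustrative assumption rather than the Focus Group’s actual code: the “fine-scale model” is a toy ensemble of noisy particles in a double-well potential, the macro observable is simply the ensemble mean, and the names fine_scale_burst, lift, restrict and all step sizes are invented for the example.

import numpy as np

def fine_scale_burst(particles, dt, n_steps):
    # Toy "micro" model (illustrative only): overdamped Langevin
    # particles in the double-well potential V(x) = x**4 - 2*x**2.
    for _ in range(n_steps):
        drift = -(4.0 * particles**3 - 4.0 * particles)
        noise = np.sqrt(2.0 * dt) * np.random.randn(particles.size)
        particles = particles + drift * dt + noise
    return particles

def restrict(particles):
    # Restriction: micro state -> macro observable (here: the mean).
    return particles.mean()

def lift(macro_value, n_particles, spread=0.1):
    # Lifting: macro observable -> a consistent micro ensemble.
    return macro_value + spread * np.random.randn(n_particles)

def coarse_projective_step(macro, dt_micro=1e-4, burst_steps=200,
                           n_particles=5000, dt_macro=0.05):
    # One coarse step: lift, run a short micro burst, restrict at two
    # times to estimate d(macro)/dt, then take a projective Euler leap.
    ensemble = lift(macro, n_particles)
    ensemble = fine_scale_burst(ensemble, dt_micro, burst_steps // 2)
    m1 = restrict(ensemble)
    ensemble = fine_scale_burst(ensemble, dt_micro, burst_steps // 2)
    m2 = restrict(ensemble)
    dmdt = (m2 - m1) / (dt_micro * (burst_steps // 2))
    return m2 + dt_macro * dmdt

macro = 0.3
for _ in range(20):
    macro = coarse_projective_step(macro)
print(macro)  # drifts toward the nearby well at x = +1

The point of the sketch is the division of labor: the micro code only ever runs in short bursts, while an ordinary forward-Euler “projective” leap, plain numerical analysis, decides how far the macro state can be advanced before the next burst is needed.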
The Catch: To do equation-free computations we may not need macro equations, but we do need to know the right macro observables – the right macroscopic variables. If we do not know these variables, we cannot design the right fine-scale computational experiments, and we do not know how to observe the simulations in order to extract the right macro information. This is the “variable-free” part: use manifold learning to process the fine-scale simulation data streams on the fly, obtaining the right variables with which to carry out equation-free computations.
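For concreteness, here is a bare-bones sketch of Diffusion Maps (the specific manifold-learning tool named in the next paragraph) in Python. The kernel scale epsilon, the density normalization, and the toy data set are assumptions chosen for brevity; this is an illustration of the idea, not a reference implementation.

import numpy as np

def diffusion_maps(X, epsilon, n_coords=2):
    # X: (n_samples, n_features) array of fine-scale snapshots.
    # Returns the leading nontrivial diffusion coordinates, which serve
    # as data-driven candidates for the macroscopic variables.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    K = np.exp(-sq_dists / epsilon)          # Gaussian kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                   # factor out sampling density
    P = K / K.sum(axis=1)[:, None]           # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)           # leading eigenvalues first
    keep = order[1:n_coords + 1]             # skip the trivial constant mode
    return vecs.real[:, keep], vals.real[keep]

# Toy usage: snapshots scattered along a one-dimensional curve in 3-d;
# the first diffusion coordinate should recover the curve parameter.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 3.0 * np.pi, 300)
X = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
X = X + 0.05 * rng.normal(size=X.shape)
coords, evals = diffusion_maps(X, epsilon=0.5)

Applied to streams of simulation snapshots instead of a toy curve, the few leading diffusion coordinates play the role of the “right variables” in which the equation-free computations are then performed.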
The Ambition is then to functionally integrate machine learning with equation-free algorithms in order to model complex systems “the best way possible”: (a) run the fine-scale codes intelligently to find the right macro variables (using, for example, Diffusion Maps); (b) use traditional numerical algorithms and the right observables to obtain local Taylor series – the backbone of traditional continuum numerics; and (c) use these local derivatives, residuals, Jacobians, and Hessians in the right observables to design the next fine-scale simulation – the next place where interesting macro information needs to be collected. Joint reduction of the state variables and of the multitude of parameters in complex dynamical systems is also a current research focus.
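A minimal sketch of how steps (a) through (c) can close the loop: a coarse time-stepper wraps the micro code behind lifting and restriction, and a textbook Newton iteration on the resulting coarse map decides where the next fine-scale experiment is run. The lift/evolve/restrict callables, the finite-difference step h, and the ensemble size are all assumptions for illustration (compatible with the toy model sketched earlier), not the group’s methodology.

import numpy as np

def coarse_timestepper(u, T, lift, evolve, restrict, n_copies=5000):
    # Phi_T(u): lift the macro state, run the micro code for time T,
    # restrict back; ensemble averaging tames the micro-level noise.
    ensemble = lift(u, n_copies)
    ensemble = evolve(ensemble, T)
    return restrict(ensemble)

def coarse_newton(u0, T, lift, evolve, restrict,
                  tol=1e-3, max_iter=20, h=1e-2):
    # Newton on F(u) = Phi_T(u) - u: each residual and each divided
    # difference is one more short micro experiment, so the classical
    # algorithm literally schedules where fine-scale data is collected.
    u = float(u0)
    for _ in range(max_iter):
        F = coarse_timestepper(u, T, lift, evolve, restrict) - u
        if abs(F) < tol:          # tolerance must respect estimator noise
            break
        Fp = coarse_timestepper(u + h, T, lift, evolve, restrict) - (u + h)
        u = u - F * h / (Fp - F)  # finite-difference Newton update
    return u

# Usage with the toy double-well model from the earlier sketch:
#   evolve = lambda e, T: fine_scale_burst(e, 1e-4, int(round(T / 1e-4)))
#   u_star = coarse_newton(0.3, 0.02, lift, evolve, restrict)

Because Newton iteration does not care about dynamic stability, such a scheme can also locate unstable coarse steady states that direct simulation would never settle onto; replacing the scalar u by learned diffusion coordinates is what turns this equation-free loop into a variable-free one.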
Publications of the Focus Group