Hans-Werner van Wyk

Assistant Professor, Mathematics and Statistics, Auburn University

Projects

Flow and Transport Models

(Collaborators: Yanzhao Cao, Song Chen, Olcay Ciftci (graduate student), Dmitry Glotov)

During my time at Auburn, my co-authors and I have investigated two coupled flow and transport models: the bioconvective flow equations, used to model shallow suspensions of micro-organisms in a fluid, and the variable density flow and transport equations, used to model saltwater intrusion in coastal aquifers. In both cases, variations in the concentration of the suspension/solution influence the flow parameters, which in turn determine the solute's spatial distribution. In [1], Yanzhao Cao, Song Chen, and I consider the well-posedness and finite element approximation of a generalized bioconvective flow model in which micro-organisms with a tendency to swim up toward the surface aggregate there and, upon reaching a critical mass, drop down under the influence of gravity, thereby creating a convective pattern observed both in nature and in experiment. The model couples a transport equation with a Navier-Stokes-type equation that allows for a constitutive relation between the viscosity and the concentration. We show the existence and uniqueness of the weak solution of the system in two dimensions and construct numerical approximations based on the finite element method, for which we obtain error estimates. In joint work with Y. Cao, Dmitry Glotov, and graduate student Olcay Ciftci (graduated in Summer 2020), we are currently investigating the well-posedness, numerical approximation, and reduced order modeling of the variable density flow and transport equations related to saltwater intrusion. Here, the salt concentration affects the coupled Darcy flow via a constitutive relation with the fluid density.
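Schematically (with notation assumed here for exposition rather than taken from [1]), such a bioconvection model couples an incompressible Navier-Stokes equation, with viscosity \nu(c) depending on the concentration c and a buoyancy term driven by c, to an advection-diffusion equation with an upward swimming speed U:

\[
u_t - \nabla \cdot \bigl( \nu(c)\, \nabla u \bigr) + (u \cdot \nabla) u + \nabla p = -g\,(1 + \gamma c)\, \hat{e}_d, \qquad \nabla \cdot u = 0,
\]
\[
c_t + u \cdot \nabla c - \kappa\, \Delta c + U\, \partial_{x_d} c = 0,
\]

where \gamma measures the relative density of the organisms, \kappa their diffusivity, and \hat{e}_d is the upward unit vector.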

Fractional Laplacian

(Collaborators: John Burkardt, Siwei Duo, Serge Guerngar, Max Gunzburger, Mariam Khatchatryan (graduate student), Erkan Nane, Miroslav Stoyanov, Suleyman Ulusoy, Yanzhi Zhang)

The fractional Laplacian is a nonlocal generalization of the classical Laplacian that is often used to model diffusion processes. My interest in this operator dates back to my work on generative models for power-law noises (see [4]), and has continued during my time at Auburn. Numerical approximations of the fractional Laplacian are challenging, due both to its nonlocality and to the presence of a hypersingular kernel. In [3], Siwei Duo (graduate student), Yanzhi Zhang, and I developed a novel finite difference method to discretize the fractional Laplacian in its hypersingular integral form. By introducing a splitting parameter, we formulated the fractional Laplacian as the weighted integral of a weakly singular function, which we then approximated by the weighted trapezoidal rule. Compared to other existing methods, ours is more accurate and simpler to implement, and moreover it closely resembles the central difference scheme for the classical Laplace operator. Specifically, we obtained the same second-order convergence rate under sufficient regularity, regardless of the operator's fractional power. Together with Erkan Nane, Serge Guerngar, and Suleyman Ulusoy ([10], [11]), I investigated the estimation of fractional spatial and temporal scales related to Lévy processes generated by a double-fractional Laplacian. We showed that these parameters can in principle be uniquely determined and developed optimization schemes to identify them numerically from observations. In ongoing work, E. Nane, Mariam Khatchatryan (graduate student), and I are developing numerical methods to determine blow-up times in nonlinear fractional heat equations.
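In one spatial dimension, the operator and the splitting idea can be sketched as follows (the normalization C_{1,\alpha} and the admissible range of the splitting parameter \gamma follow standard conventions; the precise scheme in [3] differs in its details):

\[
(-\Delta)^{\alpha/2} u(x) \;=\; -\,C_{1,\alpha} \int_0^\infty \frac{u(x-\xi) - 2u(x) + u(x+\xi)}{\xi^{1+\alpha}}\, d\xi, \qquad 0 < \alpha < 2,
\]

and, introducing a splitting parameter \gamma \in (\alpha, 2],

\[
(-\Delta)^{\alpha/2} u(x) \;=\; -\,C_{1,\alpha} \int_0^\infty \frac{u(x-\xi) - 2u(x) + u(x+\xi)}{\xi^{\gamma}} \;\xi^{\gamma - 1 - \alpha}\, d\xi,
\]

so that the first factor is only weakly singular and the second can be absorbed into the weights of a weighted trapezoidal rule.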

Statistical Sampling

(Collaborators: Max Gunzburger, Baris Kopruluoglu (graduate student), Fauziya Yakasai (graduate student))

In physical systems that operate under uncertain conditions, statistical sampling provides a means of obtaining information about the distribution of various physical quantities of interest related to the underlying model output, such as their means, variances, or even empirical distributions. The computational cost of obtaining reliable estimates depends largely on (i) the cost of each sample simulation and (ii) the statistical complexity of the underlying parameter space/output. The computation of a physical quantity's statistics amounts to an integration problem over the appropriate probability space. For low-complexity quantities of interest, i.e., those that depend only on a small number of random variables, there are efficient interpolative numerical quadrature methods, such as sparse grid stochastic collocation, that require only a relatively small number of samples to converge. In [13] (submitted), Max Gunzburger and I consider extensions of the well-known Chebyshev-type integration rules that incorporate the underlying probability density explicitly in the rule by modifying either the quadrature weights or the nodes. As parameter complexity increases, the convergence rates of these interpolative schemes deteriorate until, for sufficiently high stochastic dimensions, the Monte Carlo method, whose convergence rate depends only on the quantity's variance, becomes the only viable sampling scheme. In ongoing work, graduate student Fauziya Yakasai and I propose a hybrid sampling scheme for high-complexity systems, in which the quantity of interest is conditioned on a low-dimensional approximation of the underlying parameter. Integration over the low-dimensional projected parameter space is carried out by efficient collocation methods, whereas the high-dimensional conditional integrals (which have lower conditional variance) are estimated by Monte Carlo sampling. This leads to overall gains in efficiency and provides a means of exploiting local sample-path clustering, for example through reduced order models or sensitivity-based covariates.
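A minimal sketch of the hybrid idea, under assumptions of my own choosing (a Gaussian parameter, a one-dimensional projection along an assumed dominant direction w, and a toy quantity of interest f; none of this is taken from the ongoing work): the outer integral over the projected variable is computed by Gauss-Hermite collocation, and each conditional expectation by Monte Carlo.

```python
# Hybrid collocation/Monte Carlo sketch (toy problem; f, w, and all
# parameters are illustrative assumptions, not the scheme in development).
import numpy as np

rng = np.random.default_rng(0)
d = 20                       # total stochastic dimension
w = np.ones(d) / np.sqrt(d)  # assumed dominant direction in parameter space

def f(xi):
    # toy quantity of interest depending on all d Gaussian inputs
    return np.exp(0.1 * xi @ w) + 0.01 * np.sum(xi**2, axis=-1)

# Gauss-Hermite rule for the standard normal projected variable y = w^T xi
nodes, weights = np.polynomial.hermite_e.hermegauss(9)
weights = weights / np.sqrt(2 * np.pi)   # normalize to a probability rule

n_mc = 200
estimate = 0.0
for y, wt in zip(nodes, weights):
    # sample xi conditioned on w^T xi = y: Gaussian noise in the
    # w-orthogonal complement plus the enforced projection y * w
    z = rng.standard_normal((n_mc, d))
    z -= np.outer(z @ w, w)              # remove component along w
    xi = z + y * w
    estimate += wt * f(xi).mean()        # conditional mean by Monte Carlo

print("hybrid estimate of E[Q]:", estimate)
```

The point of the construction is that the inner Monte Carlo estimates see only the conditional variance of f given the projection, which is small when f varies mostly along w.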

Astronomical Spectroscopy

(Collaborators: Adam Foster, Stuart Loch, Kyle Stewart (graduate student))

In recent years, I have had the opportunity to apply my expertise in statistical sampling to the quantification of uncertainties in atomic spectra. The spectra of electromagnetic radiation emitted by distant astrophysical objects often provide the only means of obtaining information about their chemical composition, temperature, density, mass, distance, luminosity, and relative motion. These properties are inferred with reference to predictions made by the underlying atomic models that describe the atomic structure of the relevant ions as well as various photon-emissive/absorptive atomic processes. Atomic energies, cross-sections, rates, and line intensities have been computed and measured experimentally over decades for a multitude of ions and are widely available in astrophysical databases, such as NIST or ATOMDB. However, significant discrepancies exist in the literature, which complicates the use of the resulting spectral diagnostics. Although the physical mechanisms that govern atomic structure and processes are theoretically well-established, modeled by the Schrödinger equation and associated quantum-mechanical algebraic constraints, these systems are potentially highly complex and computationally intensive to solve, with many possible processes and channels contributing to the computed spectra. In joint work with atomic physicist Stuart Loch, astrophysicist Adam Foster, and graduate student Kyle Stewart, funded by NASA grant 16-APRA16-0092, we set out to establish a systematic framework for quantifying uncertainties/variations arising from (i) numerical/convergence errors, (ii) uncertainties in estimated parameters, and (iii) variations due to different modeling choices, and for propagating them via collisional radiative models to uncertainties in predicted diagnostic reference values. The resulting error bars will likely help explain discrepancies in computed astrophysical predictions, provide confidence levels for measured spectra, and help determine the reliability of various spectral diagnostic quantities.
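As an illustrative toy example of the propagation step (a two-level collisional-radiative balance with assumed rates and assumed log-normal error bars; not the models or values used in the project):

```python
# Monte Carlo propagation of atomic-rate uncertainties through a toy
# two-level collisional-radiative balance; all values are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_e = 1.0e10                 # electron density [cm^-3], assumed
A21 = 1.0e8                  # spontaneous emission rate [s^-1], assumed
q12, q21 = 2.0e-8, 1.0e-8    # excitation/de-excitation rate coeffs [cm^3 s^-1], assumed

def line_intensity(q12, q21, A21, n_e):
    # steady-state balance: n1 * q12 * n_e = n2 * (A21 + q21 * n_e)
    n2_over_n1 = q12 * n_e / (A21 + q21 * n_e)
    return n2_over_n1 * A21  # photon emissivity per ground-state ion

# propagate assumed 20% log-normal uncertainties in the collisional rates
n_samples = 10_000
s = np.log(1.2)
samples = line_intensity(q12 * rng.lognormal(0, s, n_samples),
                         q21 * rng.lognormal(0, s, n_samples), A21, n_e)

lo, med, hi = np.percentile(samples, [16, 50, 84])
print(f"line intensity: {med:.3e} (+{hi - med:.2e} / -{med - lo:.2e})")
```

The actual framework involves far larger level systems and correlated sources of error, but the principle is the same: sample the uncertain inputs, solve the collisional-radiative balance for each sample, and read off error bars on the predicted diagnostic.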

Optimization Under Uncertainty

(Collaborators: Yanzhao Cao, Somak Das, Junshan Lin, Luke Oeding)

Questions of design and control in systems subject to uncertainty can often be formulated as deterministic optimization problems in which the quantity to be minimized is some statistic related to the model output. In [2], Y. Cao, Junshan Lin, and I considered the optimal design of a thin-film solar cell to maximize the expected absorption of light over a range of frequencies. As the design variable, we chose the parameters that determine the statistical properties of the random rough interface between the transparent conductive oxide layer and the absorptive layer. We derived a steepest descent method, constrained by the Helmholtz equation as a model for light scattering, which resulted in an optimal interface with increased average light absorption compared to currently used interfaces. Stochastic sampling often presents a significant computational bottleneck in optimization under uncertainty, especially when cost functionals and gradients are computed via Monte Carlo sampling. This can lead to slow overall convergence, since statistical quantities must be computed at each iteration within the optimization loop, even if the optimization method itself converges quickly. Stochastic optimization methods, such as the stochastic gradient method, efficiently incorporate stochastic sampling into the optimization iteration itself, thereby improving the convergence rate; such methods are widely used in machine learning. In [12] (submitted), Y. Cao, Somak Das (graduate student), Luke Oeding, and I analyze the convergence of the stochastic alternating least squares (SALS) method for the decomposition of random tensors. Tensors are multi-dimensional arrays that appear frequently in data science applications. Their decomposition into canonical factors, i.e., a sum of rank-one tensors, provides important insights, since the rank-one factors capture meaningful structure in the data, much as principal component analysis does for a matrix. Unlike for matrices, however, tensor decompositions can only be computed indirectly, through optimization. In [12] we prove convergence of the SALS algorithm under mild assumptions on the observed tensor samples. In ongoing work, Y. Cao, S. Das, and I are analyzing the application of AdaGrad stochastic optimization to the distributed control of a parabolic heat equation.
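A minimal sketch of the deterministic core of this method, an alternating least squares sweep for a rank-R CP decomposition (tensor shape, rank, and data below are toy assumptions; the stochastic variant analyzed in [12] would instead see a fresh random tensor sample, whose expectation carries the low-rank structure, at each sweep):

```python
# CP decomposition by alternating least squares (ALS); toy data.
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 15, 12, 10, 3

# synthetic low-rank tensor plus noise
A0 = rng.standard_normal((I, R))
B0 = rng.standard_normal((J, R))
C0 = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((I, J, K))

A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

for sweep in range(50):
    # each factor update is a linear least-squares problem with the
    # other two factors held fixed (Gram matrices are Hadamard products)
    A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

rel_err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
print(f"relative fit error after ALS: {rel_err:.3e}")
```

The appeal of the alternating scheme is that each factor update is an ordinary linear least-squares solve; this is also what makes the stochastic counterpart amenable to convergence analysis.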