Papers by Karl Meerbergen
We discuss a new method for handling random acoustic excitations. The idea is to approximate the cross power spectral density matrix of the response by a low-rank matrix. We illustrate that the low-rank approximation can be computed efficiently by the implicitly restarted block Lanczos method, and we give a theoretical explanation.
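As a minimal illustration of the low-rank idea, the sketch below approximates a symmetric positive semidefinite matrix (standing in for a cross power spectral density matrix) by its leading eigenpairs. It uses SciPy's implicitly restarted Lanczos routine eigsh rather than the block variant discussed above; the matrix, its size, and the target rank are all invented for the example.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)

# Stand-in for a cross power spectral density matrix: symmetric positive
# semidefinite with rapidly decaying eigenvalues, hence well approximated
# by a low-rank matrix.
n = 500
G = rng.standard_normal((n, n))
decay = np.diag(2.0 ** -np.arange(n))
S = G @ decay @ G.T            # SPSD with fast spectral decay

k = 10                          # target rank
# eigsh uses an implicitly restarted Lanczos method (ARPACK); the paper uses
# a *block* variant, which this sketch does not reproduce.
vals, vecs = eigsh(S, k=k, which='LM')

S_k = (vecs * vals) @ vecs.T    # rank-k approximation
rel_err = np.linalg.norm(S - S_k) / np.linalg.norm(S)
print(f"relative error of rank-{k} approximation: {rel_err:.2e}")
```

The rank-k truncation is accurate precisely when the eigenvalues decay quickly.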
Numerical Linear Algebra with Applications, 2015
SIAM Journal on Matrix Analysis and Applications, 2015
We propose a new uniform framework of compact rational Krylov (CORK) methods for solving large-scale nonlinear eigenvalue problems A(λ)x = 0. For many years, linearizations were used for solving polynomial and rational eigenvalue problems. On the other hand, for the general nonlinear case, A(λ) can first be approximated by a (rational) matrix polynomial and then a convenient linearization is used. However, the major disadvantage of linearization-based methods is the growing memory and orthogonalization costs with the iteration count, i.e., in general they are proportional to the degree of the polynomial. Therefore, the CORK family of rational Krylov methods exploits the structure of the linearization pencils by using a generalization of the compact Arnoldi decomposition. In this way, the extra memory and orthogonalization costs due to the linearization of the original eigenvalue problem are negligible for large-scale problems. Furthermore, we prove that each CORK step breaks down into an orthogonalization step of the original problem dimension and a rational Krylov step on small matrices. We also briefly discuss implicit restarting of the CORK method and how to exploit low rank structure. The CORK method is illustrated with two large-scale examples.
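The linearization idea that CORK builds on can be seen in its simplest, non-compact form for a quadratic eigenvalue problem: rewrite (λ²M + λC + K)x = 0 as a linear pencil of twice the size and hand it to a standard solver. The sketch below does exactly that for small random matrices; it is only an illustration of linearization, not of the CORK algorithm, and the matrices are invented for the example.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 50
M = np.eye(n)
C = 0.1 * rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# First companion linearization of (lambda^2 M + lambda C + K) x = 0:
#   A - lambda B  with  A = [[0, I], [-K, -C]],  B = [[I, 0], [0, M]],
# acting on the stacked vector [x; lambda*x].
I = np.eye(n)
Z = np.zeros((n, n))
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])

lam, V = eig(A, B)

# Check one finite eigenpair against the original quadratic problem.
idx = np.argmin(np.abs(lam))
l, x = lam[idx], V[:n, idx]          # the top block carries the eigenvector
res = np.linalg.norm((l**2 * M + l * C + K) @ x) / np.linalg.norm(x)
print(f"residual of recovered eigenpair: {res:.2e}")
```

CORK's point is that, for large problems, the Krylov basis of such a pencil has structure that can be stored and orthogonalized compactly, so the factor-of-degree blow-up visible here is avoided.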
IEEE Conference on Decision and Control and European Control Conference, 2011
We present a model order reduction method which allows the construction of a reduced, delay-free model of a given dimension for linear time-delay systems. The method builds on the equivalent representation of the time-delay system as an infinite-dimensional linear problem. It combines ideas from a finite-dimensional approximation via a spectral discretization on the one hand, and a Krylov-Padé model reduction approach on the other hand. The method exhibits a good spectral approximation of the original model, in the sense that the smallest characteristic roots are well approximated and the nonconverged eigenvalues of the reduced model have a favorable location, and it preserves moments at zero and at infinity. The model reduction approach is illustrated by means of a PDE model for a heated rod with delay in the boundary control.
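The Krylov-Padé ingredient can be sketched for an ordinary (delay-free) LTI system: projecting onto a Krylov subspace built with A⁻¹ matches moments of the transfer function at s = 0. The code below is such a sketch on a random system; it does not include the spectral discretization of the delay system, and every matrix and dimension is illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n, k = 300, 10

# Random stable test system  x' = A x + b u,  y = c^T x
A = -np.eye(n) + rng.standard_normal((n, n)) / (2 * np.sqrt(n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Basis for K_k(A^{-1}, A^{-1} b): a one-sided Galerkin projection onto this
# subspace matches the first k moments of the transfer function at s = 0.
lu = lu_factor(A)
V = np.zeros((n, k))
t = lu_solve(lu, b)
V[:, 0] = t / np.linalg.norm(t)
for j in range(1, k):
    t = lu_solve(lu, V[:, j - 1])
    t -= V[:, :j] @ (V[:, :j].T @ t)   # one Gram-Schmidt pass (enough for a sketch)
    V[:, j] = t / np.linalg.norm(t)

Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c   # reduced model of order k

def H(s, A, b, c):
    """Transfer function c^T (s I - A)^{-1} b."""
    return c @ np.linalg.solve(s * np.eye(A.shape[0]) - A, b)

for s in [0.0, 0.05, 0.1]:
    print(s, abs(H(s, A, b, c) - H(s, Ar, br, cr)))
```

In the paper, a projection of this type is combined with a spectral discretization of the time-delay system.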
Recent Advances in Optimization and its Applications in Engineering, 2010
Optimization problems such as the parameter design of dynamical systems are often computationally expensive. In this paper, we apply Krylov based model order reduction techniques to the parameter design problem of an acoustic cavity to accelerate the computation of both function values and derivatives, and therefore, drastically improve the performance of the optimization algorithms. Two types of model reduction techniques are explored: conventional model reduction and parameterized model reduction. The moment matching properties of derivative computation via the reduced model are discussed. Numerical results show that both methods are efficient in reducing the optimization time.
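A toy version of the "conventional" approach is sketched below: for each parameter value a small reduced model is built by Krylov projection (as in the previous sketch), and the objective, here the peak of the frequency response, is evaluated on the reduced model inside a 1-D optimizer. The system, the parameterization A(p) = A0 + p·A1, the objective, and all sizes are invented for the illustration; derivative computation and parameterized reduction are not shown.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n = 400

A0 = -np.eye(n) + rng.standard_normal((n, n)) / (2 * np.sqrt(n))
A1 = -np.diag(rng.random(n))        # the design parameter enters as A(p) = A0 + p * A1
b = rng.standard_normal(n)
c = rng.standard_normal(n)
freqs = np.linspace(0.1, 10.0, 200)

def reduced_peak(p, k=10, s0=2.0j):
    """Peak of |c^T (i w I - A(p))^{-1} b| over the frequency grid, evaluated on a
    k-dimensional Krylov-reduced model built at the single expansion point s0
    ("conventional" reduction: the basis is rebuilt for every parameter value)."""
    A = A0 + p * A1
    lu = lu_factor(s0 * np.eye(n) - A)           # one large factorization per p
    V = np.zeros((n, k), dtype=complex)
    t = lu_solve(lu, b.astype(complex))
    V[:, 0] = t / np.linalg.norm(t)
    for j in range(1, k):
        t = lu_solve(lu, V[:, j - 1])
        t -= V[:, :j] @ (V[:, :j].conj().T @ t)  # one Gram-Schmidt pass
        V[:, j] = t / np.linalg.norm(t)
    Ar, br, cr = V.conj().T @ A @ V, V.conj().T @ b, V.conj().T @ c
    Ik = np.eye(k)
    # Many cheap frequency evaluations on the k x k reduced model.
    return max(abs(cr @ np.linalg.solve(1j * w * Ik - Ar, br)) for w in freqs)

res = minimize_scalar(reduced_peak, bounds=(0.0, 5.0), method='bounded')
print("parameter minimizing the response peak:", res.x, "peak value:", res.fun)
```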
Lecture Notes in Control and Information Sciences, 2012
9th IFAC Workshop on Time Delay Systems (2010), 2010
SIAM Journal on Scientific Computing, 2015
Lecture Notes in Computational Science and Engineering, 2005
Lecture Notes in Computational Science and Engineering, 2005
IMA Journal of Numerical Analysis, 2014
SIAM Journal on Scientific Computing, 2014
A new rational Krylov method for the efficient solution of nonlinear eigenvalue problems, A(λ)x = 0, is proposed. This iterative method, called the fully rational Krylov method for nonlinear eigenvalue problems (abbreviated as NLEIGS), is based on linear rational interpolation and generalizes the Newton rational Krylov method proposed in [R. Van Beeumen, K. Meerbergen, and W. Michiels, SIAM J. Sci. Comput., 35 (2013), pp. A327-A350]. NLEIGS utilizes a dynamically constructed rational interpolant of the nonlinear function A(λ) and a new companion-type linearization for obtaining a generalized eigenvalue problem with special structure. This structure is particularly suited for the rational Krylov method. A new approach for the computation of rational divided differences using matrix functions is presented. It is shown that NLEIGS has a computational cost comparable to the Newton rational Krylov method but converges more reliably, in particular if the nonlinear function A(λ) has singularities near the target set. Moreover, NLEIGS implements an automatic scaling procedure which makes it work robustly independently of the location and shape of the target set, and it also features low-rank approximation techniques for increased computational efficiency. Small- and large-scale numerical examples are included. From the numerical experiments we can recommend two variants of the algorithm for solving the nonlinear eigenvalue problem.
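The interpolation ingredient can be shown at scalar level: NLEIGS builds divided-difference coefficients of A(λ) in a rational (pole-aware) basis, with matrix-valued coefficients. The sketch below only computes classical Newton divided differences of a scalar delay-type function on an interval and checks the interpolant; the function, nodes, and interval are chosen purely for illustration.

```python
import numpy as np

def divided_differences(nodes, f):
    """Newton divided differences f[x0], f[x0,x1], ... of a scalar function f.
    NLEIGS builds the matrix-valued, *rational* analogue of these coefficients;
    this scalar, polynomial version only illustrates the interpolation idea."""
    xs = np.asarray(nodes, dtype=complex)
    d = np.array([f(x) for x in xs], dtype=complex)
    coeffs = [d[0]]
    for j in range(1, len(xs)):
        d = (d[1:] - d[:-1]) / (xs[j:] - xs[:len(xs) - j])
        coeffs.append(d[0])
    return np.array(coeffs)

def newton_eval(coeffs, nodes, x):
    """Evaluate the Newton-form interpolant at x."""
    p, basis = 0.0, 1.0
    for c, xj in zip(coeffs, nodes):
        p += c * basis
        basis *= (x - xj)
    return p

# Interpolate f(l) = l + exp(-l) (a typical delay-type nonlinearity) on [0, 2].
f = lambda l: l + np.exp(-l)
nodes = 1.0 + np.cos(np.pi * (2 * np.arange(12) + 1) / 24)   # Chebyshev nodes on [0, 2]
coeffs = divided_differences(nodes, f)
xs = np.linspace(0, 2, 7)
print(max(abs(newton_eval(coeffs, nodes, x) - f(x)) for x in xs))
```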
International Journal of Parallel Programming, 2013
The bulk synchronous parallel (BSP) model, as well as parallel programming interfaces based on BSP, classically target distributed-memory parallel architectures. In earlier work, Yzelman and Bisseling designed the MulticoreBSP for Java library specifically for shared-memory architectures. In the present article, we further investigate this concept and introduce the new high-performance MulticoreBSP for C library. Among other features, this library supports nested BSP runs. We show that existing BSP software performs well regardless of whether it runs on distributed-memory or shared-memory architectures, and that applications written in MulticoreBSP can attain high performance. The paper details BSP implementations of the fast Fourier transform and of sparse matrix–vector multiplication, both of which outperform state-of-the-art implementations written in other shared-memory parallel programming interfaces. We furthermore study the applicability of BSP when working on highly non-uniform memory access architectures.
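The sparse matrix–vector multiplication kernel mentioned above is easy to sketch in a BSP flavour: each process owns a contiguous block of rows of a CSR matrix and computes its part of y = Ax in one superstep. The Python sketch below runs those "processes" sequentially and is only a model of the data distribution, not of the MulticoreBSP for C library.

```python
import numpy as np
from scipy.sparse import random as sprandom

def csr_spmv(indptr, indices, data, x, row_start, row_end):
    """y[row_start:row_end] = A[row_start:row_end, :] @ x for a CSR matrix.
    In a BSP program each process owns such a row block and computes it in one
    superstep; here the "processes" simply run one after another."""
    y = np.zeros(row_end - row_start)
    for i in range(row_start, row_end):
        lo, hi = indptr[i], indptr[i + 1]
        y[i - row_start] = data[lo:hi] @ x[indices[lo:hi]]
    return y

n, p = 1000, 4
A = sprandom(n, n, density=0.01, format='csr', random_state=0)
x = np.ones(n)

# Split the rows into p contiguous blocks (a simple 1-D BSP distribution).
bounds = np.linspace(0, n, p + 1, dtype=int)
y = np.concatenate([csr_spmv(A.indptr, A.indices, A.data, x, bounds[k], bounds[k + 1])
                    for k in range(p)])
print(np.allclose(y, A @ x))
```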
SIAM Journal on Matrix Analysis and Applications, 2014
The Schmidt-Eckart-Young theorem for matrices states that the optimal rank-r approximation of a matrix is obtained by retaining the first r terms of the singular value decomposition of that matrix. This paper considers a generalization of this optimal truncation property to the rank decomposition (Candecomp/Parafac) of tensors and establishes a necessary orthogonality condition. We prove that this condition is not satisfied on at least an open set of positive Lebesgue measure in complex tensor spaces. It is proved, moreover, that for complex tensors of small rank this condition can be satisfied only on a set of tensors of Lebesgue measure zero. Finally, we demonstrate that generic tensors in cubic tensor spaces are not optimally truncatable.
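For matrices, the theorem in question is easy to verify numerically: the best rank-r approximation in the Frobenius norm comes from truncating the SVD, and its error equals the norm of the discarded singular values. The sketch below checks this on a random matrix chosen purely for illustration; the paper shows that the analogous truncation property generally fails for tensor rank decompositions.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 40))
r = 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]          # keep the first r SVD terms

# Schmidt-Eckart-Young: this truncation is optimal, and its Frobenius error
# equals the norm of the discarded singular values.
err = np.linalg.norm(A - A_r)
print(err, np.sqrt(np.sum(s[r:] ** 2)))
```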
Journal of Computational Electronics, 2014
We present a nonlinear eigenvalue solver enabling the calculation of bound solutions of the Schrödinger equation in a system with contacts. We discuss how the imposition of contacts leads to a nonlinear eigenvalue problem and discuss the numerics for one- and two-dimensional potentials. We reformulate the problem so that the eigenvalue problem can be efficiently solved by the recently proposed rational Krylov method for nonlinear eigenvalue problems known as NLEIGS. In order to improve the convergence of the method, we propose a holomorphic extension such that we can easily deal with the branch points introduced by a square root. We use our method to determine the bound states of the one-dimensional Pöschl-Teller potential, a two-dimensional potential describing a particle in a canyon, and the multi-band Hamiltonian of a topological insulator.
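For orientation, the closed (no-contact) version of the first test case is a plain linear eigenvalue problem. The sketch below discretizes the one-dimensional Schrödinger equation with a Pöschl-Teller potential by finite differences and computes its two bound states with a sparse eigensolver; the open-boundary formulation treated in the paper, which is what makes the problem nonlinear, is not reproduced. The grid, box size, and potential strength are illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D Schroedinger equation  -1/2 psi'' + V(x) psi = E psi  (hbar = m = 1)
# with the Poeschl-Teller potential  V(x) = -lam*(lam+1)/2 * sech(x)^2,
# on a large box with Dirichlet (closed) boundaries.
lam = 2.0
L, N = 20.0, 2000
x = np.linspace(-L, L, N)
h = x[1] - x[0]
V = -0.5 * lam * (lam + 1) / np.cosh(x) ** 2

# Central finite differences: -1/2 psi'' -> (-psi[i-1] + 2 psi[i] - psi[i+1]) / (2 h^2)
H = diags([np.full(N - 1, -0.5 / h**2),
           1.0 / h**2 + V,
           np.full(N - 1, -0.5 / h**2)], offsets=[-1, 0, 1])

# The two eigenvalues nearest the shift are the bound states; for lam = 2 the
# textbook energies E_n = -(lam - n)^2 / 2 are approximately -2 and -0.5.
E, _ = eigsh(H.tocsc(), k=2, sigma=-2.5, which='LM')
print(E)
```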
International Journal of Dynamics and Control, 2014
A continuous dynamical system is stable if all eigenvalues lie strictly in the left half of the complex plane. However, this is not a robust measure, because stability is no longer guaranteed when the system parameters are slightly perturbed. Therefore, the pseudospectrum of a matrix and its pseudospectral abscissa are studied. In practice, one is often interested in computing the distance to instability, because it is a robust measure of stability against perturbations. As a first contribution, this paper presents two iterative methods for computing the distance to instability, considering complex perturbations. The first one is based on locating a zero of the pseudospectral abscissa function. This method is particularly suitable for large sparse matrices, as it is based on repeated eigenvalue computations where the original matrix is perturbed with a rank-one matrix. The second method is based on a recently proposed global optimization technique. The advantages of both methods can be combined in a hybrid algorithm. As a second contribution, we show that the methods apply to a broad class of nonlinear eigenvalue problems, in particular eigenvalue problems inferred from linear delay-differential equations, and are therefore useful for a wide range of problems. In the numerical examples the standard eigenvalue problem, the quadratic eigenvalue problem, and the delay eigenvalue problem are addressed.
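For the standard eigenvalue problem and unstructured complex perturbations, the distance to instability of a stable matrix A equals the minimum over real ω of the smallest singular value of A − iωI. The brute-force sketch below evaluates that quantity by a grid scan plus local refinement; it is a stand-in used only to make the quantity concrete, not either of the iterative methods described in the abstract. The test matrix is random.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n = 100
A = -2.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)   # stable test matrix
assert np.max(np.linalg.eigvals(A).real) < 0

def smin(omega):
    """Smallest singular value of A - i*omega*I.  For complex perturbations, the
    distance to instability of a stable A is the minimum of this over real omega."""
    return np.linalg.svd(A - 1j * omega * np.eye(n), compute_uv=False)[-1]

# Coarse grid scan followed by a bounded 1-D refinement: a brute-force stand-in
# for the iterative methods of the paper.
grid = np.linspace(-4.0, 4.0, 161)
w0 = grid[np.argmin([smin(w) for w in grid])]
res = minimize_scalar(smin, bounds=(w0 - 0.1, w0 + 0.1), method='bounded')
print("distance to instability ~", res.fun, "attained near omega =", res.x)
```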
Mathematics of Computation, 2004