2004, PAMM
Existence theorems and fixed point theorems, combined with interval analytic methods, provide a means to computationally prove the existence of a zero of a nonlinear system in a given interval vector. One such test is based on Borsuk's existence theorem. We discuss preconditioning techniques aimed at improving the effectiveness of this test.
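As a minimal sketch of the preconditioning idea (not of the Borsuk boundary test itself), the Python snippet below preconditions a hypothetical two-dimensional system by the inverse of its point Jacobian at the box midpoint. The system, the box, and the names F, jacobian, Y, and G are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def F(x):
    # Hypothetical 2-D test system (illustrative only): circle intersected with a parabola.
    return np.array([x[0]**2 + x[1]**2 - 1.0,
                     x[1] - x[0]**2])

def jacobian(x):
    # Hand-coded Jacobian of F.
    return np.array([[ 2.0 * x[0], 2.0 * x[1]],
                     [-2.0 * x[0], 1.0]])

# Interval box given by its lower and upper corner vectors.
lower = np.array([0.5, 0.4])
upper = np.array([1.0, 0.9])
mid = 0.5 * (lower + upper)

# Precondition with the inverse of the point Jacobian at the midpoint:
# an existence test is then applied to G(x) = Y F(x), whose Jacobian is
# close to the identity near the midpoint, which tends to make
# Borsuk-type boundary conditions easier to verify.
Y = np.linalg.inv(jacobian(mid))
G = lambda x: Y @ F(x)

print(G(mid))   # small residual when the box is centred near a zero
```

The actual test would then check Borsuk's boundary condition for G over pairs of opposite faces of the box using interval evaluations.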
International Journal of Physical Sciences, 2011
We discuss the Hansen-Sengupta operator in the context of circular interval arithmetic for the algebraic inclusion of zeros of interval nonlinear systems of equations. We demonstrate the approach by showing the effect of repeatedly applying, at each iteration cycle, preconditioners formed from inverses of the midpoint interval matrices to the well-known Trapezoidal Newton method, with the work of Shokri (2008) as our main tool of investigation. We show that the Trapezoidal interval Newton method with the inverse midpoint interval matrix as preconditioner is not an H-continuous map and that Baire category fails to hold in the sense of Aguelov et al. (2007). This is borne out by our numerical example, which produced not only overestimated results but also results that are not finitely bounded; we compare these with results computed previously by Uwamusi.
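The circular interval arithmetic referred to here works with discs in the complex plane. The sketch below, assuming the standard Gargantini-Henrici disc operations (it is not code from the paper, and it uses plain floating point with no directed rounding), shows disc addition, multiplication, and inversion.

```python
from dataclasses import dataclass

@dataclass
class Disc:
    """Circular complex interval {c; r}: all z with |z - c| <= r."""
    c: complex
    r: float

    def __add__(self, other):
        return Disc(self.c + other.c, self.r + other.r)

    def __mul__(self, other):
        # Gargantini-Henrici product: an enclosing disc, not the exact image set.
        return Disc(self.c * other.c,
                    abs(self.c) * other.r + abs(other.c) * self.r + self.r * other.r)

    def inverse(self):
        # Defined only when the disc excludes zero, i.e. |c| > r.
        d = abs(self.c) ** 2 - self.r ** 2
        if d <= 0:
            raise ZeroDivisionError("disc contains zero")
        return Disc(self.c.conjugate() / d, self.r / d)

# Example: an enclosing disc for 1 / (z1 * z2), with z1 and z2 ranging over discs.
z1 = Disc(2 + 1j, 0.10)
z2 = Disc(1 - 1j, 0.05)
print((z1 * z2).inverse())
```

Operators of Hansen-Sengupta type divide by disc-valued diagonal entries of this kind, which is why the zero-free requirement in the inversion matters.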
1990
Interval Newton methods in conjunction with generalized bisection can form the basis of algorithms that find all real roots within a specified box X ⊆ R^n of a system of nonlinear equations F(X) = 0 with mathematical certainty, even in finite-precision arithmetic. In such methods, the system F(X) = 0 is transformed into a linear interval system 0 = F(M) + F′(X)(X − M); if interval arithmetic is then used to bound the solutions of this system, the resulting box X̃ contains all roots of the nonlinear system. We may use the interval Gauss–Seidel method to find these solution bounds. In order to increase the overall efficiency of the interval Newton / generalized bisection algorithm, the linear interval system is multiplied by a preconditioner matrix Y before the interval Gauss–Seidel method is applied. Here, we review results we have obtained over the past few years concerning computation of such preconditioners. We emphasize importance and connecting relationships, and we cite references for the underlying elementary theory and other details.
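A toy Python sketch of one preconditioned interval Gauss–Seidel sweep follows. The interval type Iv uses plain floating point (no outward rounding), so it only illustrates the update x̃_i = m_i − (b_i + Σ_{j≠i} A_ij (x_j − m_j)) / A_ii with A = Y·F′(X) and b = Y·F(M); the class and function names are my own, not from the paper.

```python
class Iv:
    """Toy interval [lo, hi]; no directed rounding, so this only sketches the idea."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)

    def __add__(self, o):
        return Iv(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Iv(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Iv(min(p), max(p))

    def __truediv__(self, o):
        if o.lo <= 0.0 <= o.hi:
            raise ZeroDivisionError("extended division needed: interval contains 0")
        return self * Iv(1.0 / o.hi, 1.0 / o.lo)

    def intersect(self, o):
        return Iv(max(self.lo, o.lo), min(self.hi, o.hi))

    def __repr__(self):
        return f"[{self.lo:.6g}, {self.hi:.6g}]"


def gauss_seidel_sweep(A, b, X, m):
    """One sweep of the preconditioned interval Gauss-Seidel method.

    A : n x n list of Iv    -- preconditioned interval Jacobian Y * F'(X)
    b : length-n list of Iv -- preconditioned residual Y * F(M)
    X : length-n list of Iv -- current box
    m : length-n list of float -- midpoint M of X
    """
    n = len(X)
    X = list(X)
    for i in range(n):
        s = b[i]
        for j in range(n):
            if j != i:
                s = s + A[i][j] * (X[j] - Iv(m[j]))
        X[i] = X[i].intersect(Iv(m[i]) - s / A[i][i])
        # An empty intersection (lo > hi) would prove there is no root in X.
    return X
```

Taking Y to be the inverse of the midpoint of F′(X) recovers the classical inverse-midpoint preconditioner; the review concerns alternatives to that choice.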
Mathematics of Computation, 2002
Traditional computational fixed point theorems, such as the Kantorovich theorem (made rigorous with directed roundings), Krawczyk's method, or interval Newton methods use a computer's floating-point hardware computations to mathematically prove existence and uniqueness of a solution to a nonlinear system of equations within a given region of n-space. Such computations require the Jacobi matrix of the system to be nonsingular in a neighborhood of a solution. However, in previous work we showed how we could mathematically verify existence of singular solutions in a small region of complex n-space containing an approximate real solution. We verified existence of such singular solutions by verifying that the topological degree of a small region is nonzero; a nonzero topological degree implies existence of a solution in the interior of the region. Here, we show that, when the actual topological degree in complex space is odd and the rank defect of the Jacobi matrix is one, the topological degree of a small region containing the singular solution can be verified to be plus or minus one in real space. The algorithm for verification in real space is significantly simpler and more efficient. We demonstrate this efficiency with numerical experiments.
1999
Computational fixed point theorems can be used to automatically verify existence and uniqueness of a solution to a nonlinear system of equations F(x) = 0, F: R^n → R^n, within a given region x of n-space. But such computations succeed only when the Jacobi matrix F′(x) is nonsingular everywhere in x. However, in many practical problems, the Jacobi matrix is singular, or nearly so, at the solution x*, F(x*) = 0. In such cases, arbitrarily small perturbations of the problem result in problems F(x) = 0 either with no solutions in x or with ...
1991
Interval Newton methods in conjunction with generalized bisection can form the basis of algorithms that find all real roots within a specified box X ⊆ R^n of a system of nonlinear equations F(X) = 0 with mathematical certainty, even in finite-precision arithmetic. In such methods, the system F(X) = 0 is transformed into a linear interval system 0 = F(M) + F′(X)(X − M) ...
SIAM Journal on Numerical Analysis, 2000
Computational fixed point theorems can be used to automatically verify existence and uniqueness of a solution to a nonlinear system of n equations in n variables ranging within a given region of n-space. Such computations succeed, however, only when the Jacobi matrix is nonsingular everywhere in this region. Yet in problems such as bifurcation problems or surface intersection problems, the Jacobi matrix can be singular, or nearly so, at the solution. For n real variables, when the Jacobi matrix is singular, tiny perturbations of ...
Computing, 2008
Finding bounding sets to solutions to systems of algebraic equations with uncertainties in the coefficients, as well as rapidly but rigorously locating all solutions to nonlinear systems or global optimization problems, involves bounding the solution sets to systems of equations with wide interval coefficients. In many cases, singular systems are admitted within the intervals of uncertainty of the coefficients, leading to unbounded solution sets with more than one disconnected component. This, combined with the fact that computing exact bounds on the solution set is NP-hard, limits the range of techniques available for bounding the solution sets for such systems. However, its componentwise nature and other properties make the interval Gauss-Seidel method well suited to computing meaningful bounds in a predictable amount of computing time. For this reason, we focus on the interval Gauss-Seidel method. In particular, we study and compare various preconditioning techniques that we have developed over the years but had not fully investigated. Based on a detailed study of the preconditioners on some simple, specially designed small systems, we propose two heuristic algorithms, then study the behavior of the preconditioners on some larger, randomly generated systems, as well as on a small selection of systems from the Matrix Market collection.
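The unbounded, disconnected solution components mentioned above arise when an interval Gauss-Seidel update divides by a coefficient interval that contains zero. The sketch below assumes a Kahan/Hansen-style extended interval division; the function name and the plain floating-point arithmetic are my own simplifications, not the paper's code.

```python
import math

def extended_divide(a, b, c, d):
    """Return the pieces of [a, b] / [c, d] when 0 may lie in [c, d].

    Each piece is a (lo, hi) pair, where lo or hi may be +-inf.  This is the
    extended division used inside interval Gauss-Seidel when a diagonal
    coefficient straddles zero; no directed rounding is applied here.
    """
    if c > 0 or d < 0:                      # denominator excludes zero: ordinary division
        q = [a / c, a / d, b / c, b / d]
        return [(min(q), max(q))]
    if c == 0 and d == 0:                   # denominator is exactly {0}
        return [(-math.inf, math.inf)] if a <= 0 <= b else []
    if a <= 0 <= b:                         # numerator contains zero
        return [(-math.inf, math.inf)]
    if a > 0:
        pieces = []
        if c < 0:
            pieces.append((-math.inf, a / c))
        if d > 0:
            pieces.append((a / d, math.inf))
        return pieces
    # b < 0
    pieces = []
    if d > 0:
        pieces.append((-math.inf, b / d))
    if c < 0:
        pieces.append((b / c, math.inf))
    return pieces

# A diagonal interval coefficient straddling zero splits the Gauss-Seidel update
# into two semi-infinite pieces; intersecting them with the current coordinate
# interval is what produces the disconnected components mentioned above.
print(extended_divide(1.0, 2.0, -1.0, 1.0))   # [(-inf, -1.0), (1.0, inf)]
```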
2011
The problem of enclosing all solutions of an underdetermined system of equations is considered. A few variants of an algorithm for solving this problem are compared; some of their features come from the literature and some are original. The paper discusses both implementational and theoretical aspects of the problem, including a useful theorem that is proved. Shared-memory parallelization using OpenMP is also considered, and numerical results for suitable test problems are presented.
Numerical Algorithms, 2020
We deal with interval parametric systems of linear equations, and the goal is to solve such systems, which basically comes down to finding an enclosure for the parametric solution set. Obviously we want this enclosure to be as tight as possible. The review of the available literature shows that, in order to make a system more tractable, most solution methods use left preconditioning of the system by the midpoint inverse. Surprisingly, and in contrast to standard interval linear systems, our investigations have shown that double preconditioning can be more efficient than a single one, both in terms of checking the regularity of the system matrix and enclosing the solution set. Consequently, right preconditioning (hitherto mentioned only in the context of checking the regularity of interval parametric matrices) and double preconditioning, together with the p-solution concept, enable us to solve a larger class of interval parametric linear systems than most existing methods. The applicability of the proposed approach to solving interval parametric linear systems is illustrated by several numerical examples.
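As a small illustration of the baseline the paper starts from, the snippet below left-preconditions an interval parametric system A(p)x = b(p), with A(p) = A0 + Σ_k p_k A_k and b(p) = b0 + Σ_k p_k b_k, by the inverse of the midpoint matrix. The data are made up for illustration; right or double preconditioning (the paper's contribution) would additionally multiply by a matrix on the right and work with the transformed variable.

```python
import numpy as np

# Parametric system A(p) x = b(p):
#   A(p) = A0 + sum_k p_k * A_k,  b(p) = b0 + sum_k p_k * b_k,
# with each parameter p_k ranging in [p_lo[k], p_hi[k]].  Illustrative data only.
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A_k = [np.array([[1.0, 0.0], [0.0, 0.0]]),
       np.array([[0.0, 1.0], [1.0, 0.0]])]
b0 = np.array([1.0, 2.0])
b_k = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]
p_lo = np.array([-0.1, -0.2])
p_hi = np.array([ 0.1,  0.2])

# Left preconditioning by the inverse of the midpoint matrix A(mid(p)):
# the parametric structure is preserved, so each coefficient slice is
# preconditioned separately.
p_mid = 0.5 * (p_lo + p_hi)
A_mid = A0 + sum(pm * Ak for pm, Ak in zip(p_mid, A_k))
R = np.linalg.inv(A_mid)

A0_pre = R @ A0
A_k_pre = [R @ Ak for Ak in A_k]
b0_pre = R @ b0
b_k_pre = [R @ bk for bk in b_k]

# After preconditioning, R @ A(p) is close to the identity for p near p_mid,
# which is what makes the subsequent enclosure step effective.
print(A0_pre + sum(pm * Ak for pm, Ak in zip(p_mid, A_k_pre)))  # ~ identity
```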
2005
Finding bounding sets to solutions to systems of algebraic equations with uncertainties in the coefficients, as well as finding mathematically rigorous but rapid locations of all solutions to nonlinear systems or finding global optima, involves bounding the solution sets to systems of equations with wide interval coefficients. The interval Gauss–Seidel algorithm has various properties that make it suited to this task. However, the system must in general be preconditioned for the interval Gauss–Seidel method to be effective. The most common preconditioner has been the “inverse midpoint” preconditioner; however, we have proposed other classes of preconditioners that obey certain optimality conditions, and we have empirically demonstrated advantages of their use. In this paper, we revisit similar preconditioners, previously applied only in the context of interval Newton methods, that are appropriate when the solution set may have more than one semi-infinite component. We first review our previous...