2020, arXiv (Cornell University)
The paper is devoted to the construction of an optimal interpolation formula in the Hilbert space $K_2(P_2)$. Here the interpolation formula consists of a linear combination $\sum_{\beta=0}^{N} C_\beta(z)\,\varphi(x_\beta)$ of given values of a function $\varphi$ from the space $K_2(P_2)$. The difference between the function and the interpolation formula is considered as a linear functional called the error functional. The error of the interpolation formula is estimated by the norm of the error functional. We obtain the optimal interpolation formula by minimizing the norm of the error functional with respect to the coefficients $C_\beta(z)$ of the interpolation formula. The obtained optimal interpolation formula is exact for the trigonometric functions $\sin\omega x$ and $\cos\omega x$. At the end of the paper we give some numerical results which confirm our theoretical results.
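For orientation, the objects named in this abstract can be written out as follows; the symbols $P_\varphi$ for the interpolant and $\ell$ for the error functional are notational assumptions for illustration, not taken from the abstract itself:

$$
P_\varphi(z) = \sum_{\beta=0}^{N} C_\beta(z)\,\varphi(x_\beta), \qquad
(\ell,\varphi) = \varphi(z) - P_\varphi(z),
$$
$$
|\varphi(z) - P_\varphi(z)| \le \|\ell\|_{K_2^*(P_2)}\,\|\varphi\|_{K_2(P_2)},
$$

so minimizing $\|\ell\|_{K_2^*(P_2)}$ over the coefficients $C_\beta(z)$ yields the optimal formula.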
In the present paper, using S.L. Sobolev's method, interpolation splines minimizing the semi-norm in the space $K_2(P_2)$ are constructed. Explicit formulas for the coefficients of the interpolation splines are obtained. The obtained interpolation spline is exact for the functions $e^{-x/2}\sin\frac{\sqrt{3}}{2}x$ and $e^{-x/2}\cos\frac{\sqrt{3}}{2}x$. Also we give some numerical results in which we show the connection between the optimal quadrature formula and the obtained interpolation spline in the space $K_2(P_2)$.
In this paper we are interested in polynomial interpolation of irregular functions, namely those elements of $L^2(\mathbb{R},\mu)$ for $\mu$ a given probability measure. Of course, this does not make sense except for $L^2$ functions that, at least, admit a continuous version. To characterize those functions we have, first, constructed, in an abstract fashion, a chain of Sobolev-like subspaces of a given Hilbert space $H_0$. Then we have proved that the chain of Sobolev-like subspaces controls the existence of a continuous version for $L^2$ functions and gives a pointwise polynomial approximation with a quite accurate error estimate.
Calcolo, 2019
In the present paper we investigate the problem of constructing optimal interpolation formulas in the space $W_2^{(m,m-1)}(0,1)$. We find the norm of the error functional, which gives an upper bound for the error of the interpolation formulas in the space $W_2^{(m,m-1)}(0,1)$. Further, we obtain the system of linear equations for the coefficients of the optimal interpolation formulas. Using the discrete analogue of the differential operator $\frac{d^{2m}}{dx^{2m}} - \frac{d^{2m-2}}{dx^{2m-2}}$ and its properties, we find explicit formulas for the coefficients of the optimal interpolation formulas. Finally, we give some numerical results which confirm the theoretical results of the paper.
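The role of the error-functional norm mentioned here is the standard Cauchy–Schwarz-type bound; writing $\ell$ for the error functional of an interpolation formula (again a notational assumption, not fixed by the abstract), one has

$$
|(\ell,\varphi)| \;\le\; \|\ell\|_{W_2^{(m,m-1)*}(0,1)}\;\|\varphi\|_{W_2^{(m,m-1)}(0,1)},
$$

so the formula whose coefficients minimize $\|\ell\|_{W_2^{(m,m-1)*}(0,1)}$ is optimal in this sense.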
2014
In the present paper, using S.L. Sobolev's method, interpolation splines minimizing the semi-norm in the space $K_2(P_2)$ are constructed. Explicit formulas for the coefficients of the interpolation splines are obtained. The obtained interpolation spline is exact for the functions $e^{-x/2}\sin\frac{\sqrt{3}}{2}x$ and $e^{-x/2}\cos\frac{\sqrt{3}}{2}x$.
Arkiv för matematik, 1984
Given an interpolation couple $(A_0, A_1)$, the approximation functional is defined
Computational Optimization and Applications, 2017
In this paper, interpolating a curve or surface with linear inequality constraints is considered as a general convex optimization problem in a Reproducing Kernel Hilbert Space (RKHS). We propose a new approximation method based on a discretized optimization problem in a finite-dimensional Hilbert space under the same set of constraints. We prove that the approximate solution converges uniformly to the optimal constrained interpolating function. An algorithm is derived, and numerical examples with boundedness and monotonicity constraints in one and two dimensions are given. Keywords: Optimization; RKHS; Interpolation; Inequality constraints. 1 Introduction. Let $X$ be a nonempty subset of $\mathbb{R}^d$ ($d \ge 1$) and $E = C^0(X)$ the linear (topological) space of real-valued continuous functions on $X$. Given $n$ distinct points $x^{(1)},\dots,x^{(n)} \in X$ and $y_1,\dots,y_n \in \mathbb{R}$, we define the set $I$ of interpolating functions by $I := \{f \in E : f(x^{(i)}) = y_i,\ i=1,\dots,n\}$.
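As a point of reference for the RKHS setting described above, the following minimal Python sketch computes the unconstrained minimum-norm interpolant in the RKHS of a Gaussian kernel by solving the Gram system; the kernel choice, the function names, and the regularization jitter are assumptions for illustration, and the paper's discretized handling of inequality constraints is not reproduced here.

import numpy as np

def gaussian_kernel(a, b, length_scale=0.3):
    # k(a, b) = exp(-|a - b|^2 / (2 l^2)): reproducing kernel of a Gaussian RKHS.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def rkhs_interpolant(x_nodes, y_vals, length_scale=0.3, jitter=1e-10):
    # Minimum-RKHS-norm interpolant f(x) = sum_i c_i k(x, x_i),
    # with coefficients c solving (K + jitter*I) c = y.
    K = gaussian_kernel(x_nodes, x_nodes, length_scale)
    c = np.linalg.solve(K + jitter * np.eye(len(x_nodes)), y_vals)
    return lambda x_new: gaussian_kernel(x_new, x_nodes, length_scale) @ c

# Usage: interpolate three points in one dimension, then evaluate in between.
x = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 1.0, 0.5])
f = rkhs_interpolant(x, y)
print(f(np.array([[0.25], [0.75]])))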
Calcolo, 2013
Using S.L. Sobolev's method, we construct the interpolation splines minimizing the semi-norm in $K_2(P_2)$, where $K_2(P_2)$ is the space of functions $\varphi$ such that $\varphi'$ is absolutely continuous, $\varphi''$ belongs to $L_2(0,1)$, and $\int_0^1 (\varphi''(x)+\varphi(x))^2\,dx < \infty$. Explicit formulas for the coefficients of the interpolation splines are obtained. The resulting interpolation spline is exact for the trigonometric functions $\sin x$ and $\cos x$. Finally, in a few numerical examples the qualities of the defined splines and $D^2$-splines are compared. Furthermore, the relationship of the defined splines with an optimal quadrature formula is shown. Keywords: Interpolation splines; Hilbert space; a semi-norm minimizing property; S.L. Sobolev's method; discrete argument function; discrete analogue of a differential operator; coefficients of interpolation splines. Mathematics Subject Classification (2000): 41A05, 41A15. The work of the second author was supported in part by the Serbian Ministry of Education, Science and Technological Development.
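The exactness claim can be read directly off the semi-norm above: $\sin x$ and $\cos x$ are annihilated by the operator $\varphi \mapsto \varphi'' + \varphi$ that defines it, which is the usual reason such minimizing splines reproduce them exactly. A one-line check:

$$
(\sin x)'' + \sin x = 0, \qquad (\cos x)'' + \cos x = 0
\;\;\Longrightarrow\;\;
\int_0^1 \big(\varphi''(x)+\varphi(x)\big)^2\,dx = 0 \quad\text{for } \varphi \in \operatorname{span}\{\sin x,\cos x\}.
$$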
HAL (Le Centre pour la Communication Scientifique Directe), 2020
Iskanadjiev I. M. On Pontryagin's lower operator in nonlinear differential games with fixed terminal time. Rasulov T. H., Bahronov B. I. Structure of the numerical range of the Friedrichs model: the one-dimensional case with a rank-two perturbation.
Interpolation. In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate (i.e., estimate) the value of that function for an intermediate value of the independent variable. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complex to evaluate efficiently. A few known data points from the original function can be used to create an interpolation based on a simpler function. Of course, when a simple function is used to estimate data points from the original, interpolation errors are usually present; however, depending on the problem domain and the interpolation method used, the gain in simplicity may be of greater value than the resultant loss in precision. In the examples below, if we consider the underlying set as a topological space and the functions form different kinds of Banach spaces, then the problem is treated as "interpolation of operators". The classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem.
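As a concrete instance of the estimation task described above, the short Python sketch below estimates a function value at an intermediate abscissa from sampled data by piecewise-linear interpolation; the sample values and the choice of np.interp are illustrative only.

import numpy as np

# Sampled values of an unknown function at a few values of the independent variable.
x_known = np.array([0.0, 1.0, 2.0, 3.0])
y_known = np.array([0.0, 0.8415, 0.9093, 0.1411])  # measurements resembling sin(x)

# Estimate the function at an intermediate point by piecewise-linear interpolation.
x_query = 1.5
y_estimate = np.interp(x_query, x_known, y_known)
print(y_estimate)  # ~0.8754, while the "true" sin(1.5) ~ 0.9975: the interpolation error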
In this paper we investigate some aspects of the partial realization problem formulated for Schur functions considered on the right half-plane of $\mathbb{C}$. This analysis can be considered to be partially complementary to the results of A. Lindquist, C. Byrnes et al. on Caratheodory functions, [2], [4], [3]. 1 Preliminaries and notation. Let $F$ be a rational $p \times m$ matrix of McMillan degree $N$, whose entries lie in the Hardy space of the right half-plane. We shall denote by $\mathbb{C}_+$ the right half-plane, and by $H^2_+$ the corresponding Hardy space of vector- or matrix-valued functions (the proper dimension will be understood from the context). The space $H^2_+$ is naturally endowed with the scalar product $\langle F, G\rangle = \frac{1}{2\pi}\int_{-\infty}^{+\infty} \operatorname{Tr}\big(F(iy)\,G(iy)^{\#}\big)\,dy$, (1.1) and we shall denote by $\|\cdot\|_2$ the associated norm. Note that if $M$ is a complex matrix, $\operatorname{Tr}$ stands for its trace, $M^T$ for its transpose and $M^{\#}$ for its transpose conjugate. Similarly, we define $H^\infty_+$ to be the Hardy space of essentially bounded functions analytic on the right half-plane.
2000
Lorentz and Shimogaki [2] have characterized those pairs of Lorentz $\Lambda$ spaces which satisfy the interpolation property with respect to two other pairs of $\Lambda$ spaces. Their proof is long and technical and does not easily admit of generalization. In this paper we present a short proof of this result whose spirit may be traced to Lemma 4.3 of [4] or perhaps more accurately to the theorem of Marcinkiewicz [5, p. 112]. The proof involves only elementary properties of these spaces and does allow for generalization to interpolation for n pairs and for M spaces, but these topics will be reported on elsewhere. The Banach space $\Lambda_\phi$ [1, p. 65] is the space of all Lebesgue measurable functions $f$ on the interval $(0, l)$ for which the norm $\|f\| = \int_0^l f^*(t)\,\phi(t)\,dt$ is finite, where $\phi$ is an integrable, positive, decreasing function on $(0, l)$ and $f^*$ (the decreasing rearrangement of $|f|$) is the almost-everywhere unique, positive, decreasing function which is equimeasurable with $|f|$. A pair of spaces $(\Lambda_\phi, \Lambda_\psi)$ is called an interpolation pair for the two pairs $(\Lambda_{\phi_1}, \Lambda_{\psi_1})$ and $(\Lambda_{\phi_2}, \Lambda_{\psi_2})$ if each linear operator which is bounded from $\Lambda_{\phi_i}$ to $\Lambda_{\psi_i}$ (both $i = 1, 2$) has a unique extension to a bounded operator from $\Lambda_\phi$ to $\Lambda_\psi$. THEOREM (LORENTZ–SHIMOGAKI). A necessary and sufficient condition that $(\Lambda_\phi, \Lambda_\psi)$ be an interpolation pair for $(\Lambda_{\phi_1}, \Lambda_{\psi_1})$ and $(\Lambda_{\phi_2}, \Lambda_{\psi_2})$ is that there exist a constant $A$, independent of $s$ and $t$, so that (*) $\Psi(t)/\Phi(s) \le A \max_{i=1,2} \Psi_i(t)/\Phi_i(s)$ holds, where $\Phi(s) = \int_0^s \phi(r)\,dr,\ \dots,\ \Psi_2(t) = \int_0^t \psi_2(r)\,dr$.
2007
… $\int_0^1 u^2(t)\,dt$ (1.1) over the admissible set $A = \{u \in L^2[0,1] : J(u) = C\}$, (1.2)
Journal of Mathematical Analysis and Applications, 1999
Journal of Approximation Theory, 1996
A theory of best approximation with interpolatory constraints from a finite-dimensional subspace M of a normed linear space X is developed. In particular, to each $x \in X$, best approximations are sought from a subset $M(x)$ of M which depends on the element x being approximated. It is shown that this "parametric approximation" problem can be essentially reduced to the "usual" one involving a certain fixed subspace $M_0$ of M. More detailed results can be obtained when (1) X is a Hilbert space, or (2) M is an "interpolating subspace" of X (in the sense of [1]).
2016
… is compact, and show a positive answer under a variety of conditions. For example it suffices that $X_0$ be a UMD-space, or that $X_0$ is reflexive and there is a Banach space $W$ so that $X_0 = [W, X_1]_\alpha$ for some $0 < \alpha < 1$. 1991 Mathematics Subject Classification: 46M35.
Bulletin of the American Mathematical Society
Journal of Approximation Theory, 2010
In this paper we prove that the existence of an error formula of a form suggested in [2] leads to some very specific restrictions on an ideal basis that can be used in such formulas. As an application, we provide a negative answer to one version of the question posed by Carl de Boor (cf.
SIAM Journal on Matrix Analysis and Applications
New contributions are offered to the theory and numerical implementation of the Discrete Empirical Interpolation Method (DEIM). A substantial tightening of the error bound for the DEIM oblique projection is achieved by index selection via a strong rank-revealing QR factorization. This removes the exponential factor in the dimension of the search space from the DEIM projection error, and allows sharper a priori error bounds. The well-known canonical structure of pairs of projections is used to reveal the canonical structure of DEIM. Further, the DEIM approximation is formulated in a weighted inner product defined by a real symmetric positive-definite matrix W. The weighted DEIM (W-DEIM) can be interpreted as a numerical implementation of the Generalized Empirical Interpolation Method (GEIM) and the more general Parametrized-Background Data-Weak (PBDW) approach. Also, it can be naturally deployed in the framework in which the POD Galerkin projection is formulated in a discretization of a suitable energy (weighted) inner product, such that the projection preserves important physical properties, e.g. stability. While the theoretical foundations of weighted POD and the GEIM are available in the more general setting of function spaces, this paper focuses on the gap between sound functional analysis and the core numerical linear algebra. The new proposed algorithms allow different forms of W-DEIM for point-wise and generalized interpolation. For the generalized interpolation, our bounds show that the condition number of W does not affect the accuracy, and for point-wise interpolation the condition number of the weight matrix W enters the bound essentially as $\min_{D=\mathrm{diag}} \kappa_2(DWD)$, where $\kappa_2(W) = \|W\|_2\,\|W^{-1}\|_2$ is the spectral condition number.
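To make the DEIM ingredients named above concrete, here is a minimal Python sketch of unweighted DEIM with index selection by column-pivoted QR (in the spirit of the QR-based selection discussed in the abstract, though scipy's qr performs ordinary column pivoting rather than a strong rank-revealing factorization); the function names and random test data are illustrative assumptions, and the weighted (W-DEIM) variants are not reproduced.

import numpy as np
from scipy.linalg import qr

def deim_select(U):
    # U: n-by-k matrix whose columns span the approximation space (e.g. POD modes).
    # Column-pivoted QR of U^T picks k rows of U at which to interpolate.
    _, _, piv = qr(U.T, pivoting=True)
    return np.sort(piv[: U.shape[1]])

def deim_project(f, U, idx):
    # Oblique DEIM projection: f ~ U c, where c solves U[idx, :] c = f[idx],
    # so the approximation matches f exactly at the selected indices.
    c = np.linalg.solve(U[idx, :], f[idx])
    return U @ c

# Usage: build a basis from snapshots and approximate a new vector from k samples.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 20))
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
U = U[:, :5]                                # keep 5 POD modes
idx = deim_select(U)
f = snapshots @ rng.standard_normal(20)     # a vector roughly in the snapshot range
print(np.linalg.norm(f - deim_project(f, U, idx)) / np.linalg.norm(f))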
Computers & Mathematics with Applications, 1995
We investigate convergence of interpolation projections for an arbitrary but fixed set of functions.
arXiv (Cornell University), 2018
In the present paper optimal interpolation formulas are constructed in the space $W_2^{(m,m-1)}(0,1)$. Explicit formulas for the coefficients of the optimal interpolation formulas are obtained. Some numerical results are presented.