1. Introduction

Meshfree methods, in particular those based on Moving Least Squares (MLS) schemes, have been used to solve a wide range of partial differential and ordinary differential equations. These methods have been around for a number of years, with many researchers publishing in a variety of fields. They have sparked growing interest with the recognition that the expense of mesh generation can be avoided in simulations of complex problems and applications in science and engineering. A fully developed meshless method could provide an economic advantage over the more traditional finite element or finite volume schemes. This holds especially for applications where mesh refinement is needed to handle boundary value problems (BVPs) with singular solution behavior or complicated geometric domains, for example in cases where localized mesh refinement is desired (^{2}).

Despite their great applicability and recognized advantages, several drawbacks have been identified for meshless methods over the last three decades. Most of them have a common root: computational cost. Like the finite element method, meshless methods based on moving least squares approximate an unknown function as a linear combination of a set of basis functions with coefficients depending on the function of interest; however, in the finite element method this basis is polynomial, while in meshfree methods it is typically rational, so the calculation of derivatives in meshfree methods is not as straightforward as in finite element methods and has proved to be time consuming. An alternative to the direct calculation of the derivatives was introduced in (^{13}) and called diffuse derivatives or pseudo-derivatives; these are much easier to compute and reduce the total computational time.

The diffuse- or pseudo-derivative approach has been used in many papers (^{6}, ^{8}, ^{11}, ^{13}, ^{3}, ^{17}, ^{9} and ^{16}).

In (^{9}), the simplicity of the differentiation of the diffuse derivatives was exploited to develop spaces of functions that were approximately divergence-free. In most of these MLS/PD papers, the approximation is defined via collocation with the full partial differential equation (PDE) instead of a weak form, which has been shown to lead to ill-conditioned matrices very quickly, and no full error analysis has been furnished yet.

In (^{6}) and (^{14}), a convergence theorem was provided for a stabilized MLS/PD approach for second-order boundary value problems (BVPs), but based on a Galerkin formulation. In this paper, we provide error estimates for a least squares MLS/PD approach, based on the minimization of a convenient functional, which is different from both the collocation and Galerkin approaches mentioned above, and constitutes the first work with diffuse derivatives in that direction.

The outline of this paper is as follows. In section 2 we introduce the prototype problem as well as notation and ideas behind MLS schemes and PDs. We also provide a regularity estimate for the differential operator. In section 3 we introduce the MLS/PD least squares method, provide a key approximation property and show uniqueness and existence for the method. In section 4 the error analysis is given, and section 5 has a short computational example. Section 6 gives some commentary and future directions.

2. Preliminaries and Prototype BVP

In this section we introduce our model second-order BVP and preliminary information on the problem as well as background on MLS methods and PDs. In this paper we will study the following one-dimensional BVP:

(2.1) *LU* ≡ -*U*’’ + κ*U* = *f* on Ω = (0,1), *U*(0) = *U*(1) = 0.

Here κ and *f* are given functions and 0 < κ_{0} ≤ κ(*x*) ≤ κ_{1} < ∞ for all *x* ∈ Ω. We will place restrictions on the size of κ_{2} = max_{x∈Ω} |κ’(*x*)|. This problem is somewhat simple, but we note that the theoretical error analysis for a similar stabilized Galerkin MLS/PD method was performed in (^{6}) and extended to two dimensions in (^{14}).

Our MLS approach uses the set of distinct nodes Λ_{N} = {*x*_{1},…,*x*_{N}} on the interval Ω. A direct MLS approximation *U*_{R} = *U*_{R}(*x*) is

*U*_{R}(*x*) = *P*_{U}(*x*,*x*), where *P*_{U}(*x*,*y*) = Σ_{i=0}^{m} *c*_{i}(*x*)*y*^{i}.

In this approximation of *U*, the coefficient vector **c**(*x*) = (*c*_{0}(*x*),…,*c*_{m}(*x*))^{T} is chosen so it minimizes, thinking of *x* as fixed, the functional

Σ_{j=1}^{N} *W*((*x* - *x*_{j})/*R*)(*P*_{U}(*x*,*x*_{j}) - *U*(*x*_{j}))^{2}.

Typically, we have 0 < *R* ≪ 1 and, so that there are at least 2*m* nodes (the *x*_{j}’s) in each subinterval of length *R*, we set *R* ≅ 2*m*/*N*. (Precise details on this as well as the other grid assumptions can be found in (^{6}), (^{5}) and (^{15}).) The weight function *W* is smooth and nonnegative, has maximum value 1, has support in [-1,1] and is symmetric about 0. A short calculation shows that

**c**(*x*) = *M*(*x*)^{-1}*B*(*x*)**U**, where **U** = (*U*(*x*_{1}),…,*U*(*x*_{N}))^{T}.

Here *M* is the (*m* + 1) × (*m* + 1) matrix given by

*M*(*x*) = Σ_{j=1}^{N} *W*((*x* - *x*_{j})/*R*) **p**(*x*_{j})**p**(*x*_{j})^{T},

where we used the polynomial vector **p**(*y*) = (1, *y*,…,*y*^{m})^{T}, and *B*(*x*) is a (*m* + 1) × *N* matrix with *j*th column *W*((*x* - *x*_{j})/*R*)**p**(*x*_{j}).

*P*_{U}(*x*,*y*) provides a local *m*th degree polynomial approximation for *U* in variable *y* near fixed position *x*. Note that *P*_{U} can also be written as follows:

*P*_{U}(*x*,*y*) = **p**(*y*)^{T}*M*(*x*)^{-1}*B*(*x*)**U**.

Observe that the proper derivative of *U*_{R} involves differentiation of the complicated function *M*(*x*)^{-1}*B*(*x*) (a time consuming task that has been described as one of the drawbacks of meshfree methods). The *pseudo-derivative*, denoted by the operator δ, is simpler and only involves differentiation of polynomials:

(δ*U*_{R})(*x*) = ∂_{y}*P*_{U}(*x*,*y*)|_{y=x}.
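To make the construction concrete, here is a minimal numerical sketch of the direct MLS approximation and its pseudo-derivative. The function names, the particular polynomial bump weight, and the test grid are our own illustrative choices (the paper only lists the properties the weight must satisfy), not part of the paper.

```python
import numpy as np

def weight(t):
    # A smooth, nonnegative weight with maximum value 1, support in [-1, 1],
    # and symmetry about 0 (one admissible choice among many).
    t = np.abs(t)
    return np.where(t < 1.0, (1.0 - t**2)**3, 0.0)

def mls_coeffs(x, nodes, U, R, m):
    # Coefficient vector c(x) = M(x)^{-1} B(x) U of the local degree-m
    # polynomial P_U(x, y) fitted to nodal values U by weighted least squares.
    w = weight((x - nodes) / R)                   # W((x - x_j)/R)
    P = np.vander(nodes, m + 1, increasing=True)  # row j is (1, x_j, ..., x_j^m)
    M = (P * w[:, None]).T @ P                    # (m+1) x (m+1) moment matrix
    b = (P * w[:, None]).T @ U                    # B(x) U
    return np.linalg.solve(M, b)

def mls_value_and_pd(x, nodes, U, R, m):
    # U_R(x) = P_U(x, x); the pseudo-derivative differentiates only the
    # polynomial in y, holding the coefficients c(x) fixed.
    c = mls_coeffs(x, nodes, U, R, m)
    i = np.arange(m + 1)
    value = np.sum(c * x**i)
    pseudo = np.sum(i[1:] * c[1:] * x ** (i[1:] - 1))
    return value, pseudo
```

Because the weighted least squares fit reproduces polynomials of degree ≤ *m* exactly, feeding nodal values of *x*^{2} with *m* = 2 returns the exact value and exact derivative at any evaluation point.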

For an approximation space, we let *V*_{R} denote the family of spaces of MLS functions defined from values *v*_{1},…,*v*_{N}.

We are especially interested in the stabilization functional

As is customary with error analyses, we rely heavily on Sobolev spaces (see, for instance, (^{4})). We use the notation *W*^{m,p}(Ω) to denote the space consisting of functions with *m* derivatives in *L*^{p}(Ω) (1 ≤ *p* ≤ ∞) and *H*^{m}(Ω) for the special case where *p* = 2. For the seminorms and norms we use the following notation:

For *Z* ∈ *H*^{1}(Ω), we use the Sobolev inequality along with the fact that Ω = (0,1) to show that

(2.2) ||Z||_{0,∞,Ω} ≤ ||Z||_{1,2,Ω}.

We will further use the standard Cauchy-Schwarz inequality as well as the arithmetic-geometric mean inequality; for 0 < δ < 1 and positive constants *a* and *b*, there exists *C*_{δ} > 0 so that

(2.3) *ab* ≤ δ*a*^{2} + *C*_{δ}*b*^{2}.

We will assume good approximation properties for *U*_{R}.

**Lemma 2.1.** (Approximability of *U*_{R}) There exists *C* > 0 so that

For the proof see (^{1}) or restatements in (^{6}), (^{5}) or (^{15}).

We also assume the following generalization of the theorem in (^{6}).

**Lemma 2.2.** (Stabilization by Θ) For *v* ∈ *V*_{R} there exists a constant *C* > 0, independent of *R* and *v*, such that

The *j* = 1 case for the first inequality is provided in (^{6}) and the *j* = 2 case is proved in (^{5}). They are both based on the analysis in (^{1}). Finally,

**Lemma 2.3.** For an MLS approximation derived from a smooth function *Z*, there is *C* > 0 independent of *R* and norms of *Z* so

Again see (^{6}) or (^{5}) for the result based on work in (^{1}).

We now provide a regularity/stability estimate for the differential operator *L*.

**Lemma 2.4.** (*L*-Regularity) For *w* ∈ *H*^{2}(Ω) and assuming that κ_{2} < min{2κ_{0}, κ_{0}^{2}}, there exist *C*_{1} > 0 and *C*_{2} > 0 such that

(2.7) ||*w*||^{2}_{2,2,Ω} ≤ *C*_{1}||*Lw*||^{2}_{0,2,Ω} + *C*_{2} max{|*w*(0)|,|*w*(1)|} ||*w*||_{2,2,Ω}.

**Proof.** We first integrate the square of *Lw*:

∫_{Ω}(*Lw*)^{2} *dx* = ∫_{Ω} (*w*’’)^{2} - 2κ*ww*’’ + κ^{2}*w*^{2} *dx*.

Now, using the fact that

κ*ww*’’ = (κ*ww*’)’ - κ(*w*’)^{2} - κ’*ww*’

and the Cauchy-Schwarz inequality as well as our upper and lower bounds on κ, we have

From the Sobolev inequality (2.2) we know that

|*w*’(0)| ≤ ||*w*’||_{0,∞,Ω} ≤ ||*w*||_{2,2,Ω} and |*w*’(1)| ≤ ||*w*’||_{0,∞,Ω} ≤ ||*w*||_{2,2,Ω}.

From this we conclude

-2(κ(1)*w*(1)*w*’(1) - κ(0)*w*(0)*w*’(0)) ≥ -4κ_{1} max{|*w*(0)|,|*w*(1)|} ||*w*||_{2,2,Ω}.

We also know that

where we used the Cauchy-Schwarz inequality and arithmetic-geometric mean inequality (2.3). So

Using the assumption that κ_{2} < min{2κ_{0}, κ_{0}^{2}} and choosing δ < 1 sufficiently small gives the desired estimate (2.7).
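The product-rule identity at the heart of this proof can also be verified symbolically; below is a quick check with sympy (our tooling choice, not part of the paper).

```python
import sympy as sp

x = sp.symbols('x')
kappa = sp.Function('kappa')(x)
w = sp.Function('w')(x)
wp = sp.diff(w, x)

# Identity used in the proof:
# kappa*w*w'' = (kappa*w*w')' - kappa*(w')^2 - kappa'*w*w'
lhs = kappa * w * sp.diff(w, x, 2)
rhs = (sp.diff(kappa * w * wp, x)
       - kappa * wp**2
       - sp.diff(kappa, x) * w * wp)

# simplify reduces the difference to exactly zero
identity_holds = sp.simplify(lhs - rhs) == 0
```

Expanding the derivative in `rhs` with the product rule cancels every term except κ*ww*’’, which is how the identity is obtained in the first place.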

3. Least Squares MLS/PD Approximation Scheme

In this section we introduce our least squares MLS/PD method and show that it has a unique solution. Let, for *v* ∈ *V*_{R}, *L*_{δ}*v* = -δ^{2}*v* + κ*v*. The MLS/PD least squares scheme is as follows: find *u* ∈ *V*_{R} minimizing

(3.1) Ф(*v*) = ||*L*_{δ}*v* - *f*||^{2}_{0,2,Ω} + *v*(0)^{2} + *v*(1)^{2} + ηΘ(*v*).

Note that *η* > 0 is a constant parameter. We first show that this problem has a unique solution.

**Theorem 3.1.** *(Existence and uniqueness for (3.1)) There exists η*_{0} > 0 *such that for each η* ≥ η_{0} *the problem (3.1) has a unique solution u.*

**Proof.** Note that the problem (3.1) is really the minimization of a finite-dimensional quadratic form. It is known that this kind of minimization problem is equivalent to a matrix-vector system of equations, and if the matrix is nonsingular there will be a unique solution to the minimization problem.

The functional derivative of Ф with respect to *v* in the direction of a test function *ϕ* will be zero at the minimum point *u*:

In (^{5}) we show that

Again, since (3.1) is finite dimensional and a quadratic form, we know that if we can show uniqueness then the existence of a solution follows.

So, we suppose there are two solutions *u*_{1} and *u*_{2} for which

and then let γ = *u*_{1} - *u*_{2}. Subtracting the equations for *i* = 1 and *i* = 2 and letting *ϕ* = γ, we find

Also, from the triangle inequality, we can show that

||*L*_{δ}γ||_{0,2,Ω} ≥ | ||*L*γ||_{0,2,Ω} - ||*L*γ - *L*_{δ}γ||_{0,2,Ω} | = | ||*L*γ||_{0,2,Ω} - ||γ’’ - δ^{2}γ||_{0,2,Ω} |.

So, squaring the inequality above and dropping the ||γ’’ - δ^{2}γ||^{2}_{0,2,Ω} term, we have

||*L*_{δ}γ||^{2}_{0,2,Ω} ≥ ||*L*γ||^{2}_{0,2,Ω} - 2||*L*γ||_{0,2,Ω} ||γ’’ - δ^{2}γ||_{0,2,Ω}.

Substituting this into (3.3), we have

From the arithmetic-geometric mean inequality (2.3) we find

Now choosing *δ* = 1/2 and using the *j* = 2 case of (2.5) we have that there is a *C* > 0 so

This shows that *L*γ = 0, γ(0) = 0 and γ(1) = 0. From Lemma 2.4 and (2.7) we have ||γ||_{2,2,Ω} = 0, from which we can now conclude γ ≡ 0.

4. Theoretical Error Analysis

In this section we provide a theoretical error estimate for this method. We first examine Ф(*U*_{R}). Since *LU* = *f* we have

The triangle inequality along with Lemmas 2.1-2.3 can be used to show that there is a *C* > 0 so

Recall from Lemma 2.3 and equation (2.6) we also have that there is a *C* > 0 so

where we used the Sobolev inequality (2.2) to bound the point values and Lemma 2.1, equation (2.4).

Putting these conclusions together, we now have that Ф(*U*_{R}) ≤ *CR*^{2m-2}. Since *u* is the minimum over *V*_{R} we then have

(4.1) Ф(*u*) ≤ Ф(*U*_{R}) ≤ *CR*^{2m-2}.

So we now consider the error *e* = *u* - *U*_{R}. Then

where we used the approximation (2.4), the stabilization estimate (2.5) and estimate (4.1). For the boundary (endpoint) errors we have

and, in the same manner, we can show that

So, now, from the regularity theorem for *L*, which is equation (2.7), and knowing that *u* ∈ *H*^{2}(Ω) and *U*_{R} ∈ *H*^{2}(Ω), we have

Thus, from the estimates above and the approximation of *U* by *U*_{R} in Lemma 2.1, we have now proved:

**Theorem 4.1.** (Error Rate) There exists *C* > 0 independent of *R* so that

5. Computations with MLS/PD Method for Example Elliptic BVP

In this section we investigate the computational performance of our least squares MLS/PD method. The schemes studied computationally in the pseudo-derivative papers listed in the introduction are mostly collocation approaches, while we use numerical integration. However, none of those PD collocation schemes are stabilized as ours is. We are especially interested in the behavior of this approach with and without the stabilizing term.

For our computations we used the following example problem. If we let *κ* ≡ 1 and define

*U*(*x*) = (1 - *x*)sinh(*x*),

which satisfies the Dirichlet BCs *U*(0) = *U*(1) = 0, then we find that

*LU = -U’’* + *U* = 2cosh(*x*).

So we set *f*(*x*) = 2cosh(*x*) to create a test problem with a known closed-form solution.
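The manufactured solution can be sanity-checked numerically; the sketch below uses a centered finite difference for *U*’’ (the grid, step size and tolerance are illustrative choices of ours).

```python
import numpy as np

def U(x):
    # Manufactured solution satisfying U(0) = U(1) = 0.
    return (1.0 - x) * np.sinh(x)

def f(x):
    # Right-hand side so that LU = -U'' + U = f with kappa = 1.
    return 2.0 * np.cosh(x)

# Centered second difference of U at interior points of (0, 1).
h = 1e-4
x = np.linspace(0.05, 0.95, 19)
Upp = (U(x + h) - 2.0 * U(x) + U(x - h)) / h**2

# Residual of LU - f; O(h^2) discretization error only.
residual = np.max(np.abs(-Upp + U(x) - f(x)))
```

The residual is dominated by the O(*h*^{2}) truncation error of the difference quotient, so it is far below any discretization error in the table.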

The minimization problem (3.1) was solved from the variational equation Ф’(*u*)*ϕ* = 0 for all *ϕ* ∈ *V*_{R}. The resulting integrals were evaluated by numerical integration, where we broke the domain Ω = (0,1) into *N*_{S} = 8 subintervals and used *N*_{L} = 6 Gauss-Lobatto integration nodes on each of these subintervals.
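A composite Gauss-Lobatto rule of the kind just described can be sketched as follows. The construction of the nodes from the endpoints plus the roots of *P*’_{n-1}, with weights 2/(*n*(*n*-1)*P*_{n-1}(*x*_{i})^{2}), is the standard one, but the function names and interfaces are our own.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gauss_lobatto(n):
    # n-point Gauss-Lobatto rule on [-1, 1]: the endpoints plus the roots
    # of P'_{n-1}; weights w_i = 2 / (n*(n-1)*P_{n-1}(x_i)^2).
    Pm1 = leg.Legendre.basis(n - 1)
    interior = np.sort(Pm1.deriv().roots())
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (n * (n - 1) * Pm1(x) ** 2)
    return x, w

def composite_gl(f, a, b, n_sub, n_pts):
    # Split [a, b] into n_sub equal subintervals and apply the n_pts-point
    # Gauss-Lobatto rule, mapped affinely, on each one.
    edges = np.linspace(a, b, n_sub + 1)
    xr, wr = gauss_lobatto(n_pts)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        total += half * np.sum(wr * f(mid + half * xr))
    return total
```

A 6-point rule is exact for polynomials of degree 2·6 - 3 = 9 on each subinterval, so with 8 subintervals smooth integrands such as the ones in this example are integrated essentially to machine precision.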

In our sample computations, we used polynomial degree *m* = 2 and obtained the errors and rates for the stabilized and unstabilized methods; these are displayed in the table below. The errors are the maximum of absolute values of differences between the true and approximate solutions at 100 points on Ω.

The “rate” was computed from the log of the ratio of successive errors divided by the log of the ratio of successive *R*’s. As seen, the rates in the table are better than predicted by Theorem 4.1.
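That rate computation can be written compactly; a small sketch follows (the synthetic errors in the usage check are illustrative, following an exact power law rather than reproducing the paper's table).

```python
import numpy as np

def observed_rates(Rs, errors):
    # rate_k = log(e_{k+1}/e_k) / log(R_{k+1}/R_k) for successive runs.
    Rs = np.asarray(Rs, dtype=float)
    errors = np.asarray(errors, dtype=float)
    return np.log(errors[1:] / errors[:-1]) / np.log(Rs[1:] / Rs[:-1])
```

For errors behaving like *C*·*R*^{p}, every entry of the returned array equals *p*, so the printed rates estimate the observed convergence order between consecutive runs.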

We found that the condition numbers for the non-stabilized scheme increased very quickly; for instance, in the *R* = 0.1 case the condition number was 1.80e17. We believe this was the cause of the significant loss of accuracy. The stabilized scheme was much better conditioned; in the case when *R* = 0.1 the matrix had condition number 8.69e5. Other computer experiments with increased numbers of Gauss-Lobatto points and subintervals still led to results very similar to those shown.

6. Conclusions and Extensions

In this paper we have introduced a least squares MLS/PD method with a special stabilization term that controls the errors made in the pseudo-derivative approximation of the true derivatives. We have proved that the method has a unique solution and provided a theoretical error analysis. Our computations confirm the accuracy of the scheme and provide some evidence that the stabilization is helpful in keeping the matrix conditioning at a moderate level.

Although we have not done so here, the work could be extended to multiple dimensions as was done in (^{14}).