
Detailed Notes on Numerical Computation (2): Solving Nonlinear Equations


Preface

If you are interested in this article, you can click 「【访客必读 - 指引页】一文囊括主页内所有高质量博客」 to see the full list of blog categories and the corresponding links.

Table of Contents

- Preface
- 2. Nonlinear Equation f(x) = 0
  - 2.1 Bisection Method
    - 2.1.1 Root-Finding Problem
    - 2.1.2 Bisection Algorithm
    - 2.1.3 Convergence Analysis
  - 2.2 Fixed Point Method
    - 2.2.1 Introduction
    - 2.2.2 Algorithm
    - 2.2.3 Existence and Uniqueness
    - 2.2.4 Convergence Analysis
    - 2.2.5 Bounds for the Error
  - 2.3 Newton's Method
    - 2.3.1 Introduction
    - 2.3.2 Algorithm
    - 2.3.3 Convergence
    - 2.3.4 Secant Method
    - 2.3.5 False Position Method
  - 2.4 Error Analysis for Iteration Methods
    - 2.4.1 The Rate of Sequence Convergence
    - 2.4.2 Convergent Order of Fixed-Point Iteration (Improved)
    - 2.4.3 Zero of Multiplicity
    - 2.4.4 Convergence of Newton's Method
    - 2.4.5 Convergence rate of Secant Method
  - 2.5 Accelerating Convergence
    - 2.5.1 Aitken's method
    - 2.5.2 Steffensen's method
  - 2.6 Zeros of Polynomials and Muller's Method
    - 2.6.1 Polynomial Theorem
    - 2.6.2 Horner's Method
    - 2.6.3 Deflation Method
    - 2.6.4 Muller's Algorithm

2. Nonlinear Equation f(x) = 0

2.1 Bisection Method

2.1.1 Root-Finding Problem

Root-Finding or Zero-Finding Problem

Given a function $f(x)$ in one variable $x$, find a root $x$ of an equation of the form $f(x) = 0$.

A solution $x$ is called a root of the equation $f(x) = 0$, or a zero of the function $f(x)$.

2.1.2 Bisection Algorithm

Prejudgment

By the Intermediate Value Theorem, if

$$f \in C[a,b] \quad \text{and} \quad f(a)\,f(b) < 0,$$

then there exists at least one point $x^* \in (a,b)$ such that $f(x^*) = 0$.

Algorithm Description

INPUT: endpoints $a, b$; tolerance $TOL$; maximum number of iterations $N$.

OUTPUT: approximate solution $c$ or message of failure.

Step 1: Set $k = 1$, $FA = f(a)$.

Step 2: While $k \leq N$, do Steps 3~6.

Step 3: Set $c = \dfrac{a+b}{2}$ and compute $FC = f(c)$.
Step 4: If $FC = 0$ or $\dfrac{|b-a|}{2} < TOL$, then OUTPUT $c$ (procedure completed successfully); STOP.
Step 5: If $FA \cdot FC < 0$, then set $b = c$; else set $a = c$ and $FA = FC$.
Step 6: Set $k = k + 1$.

Step 7: OUTPUT "Method failed after $N$ iterations." STOP.
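
A minimal Python sketch of this bisection pseudo-code (the function name and the `tol`/`max_iter` defaults are illustrative choices, not part of the notes):

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Bisection method: f must be continuous with f(a)*f(b) < 0."""
    fa = f(a)
    if fa * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = a + (b - a) / 2                 # Step 3: midpoint
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:    # Step 4: stopping criterion
            return c
        if fa * fc < 0:                     # Step 5: root lies in [a, c]
            b = c
        else:                               # root lies in [c, b]
            a, fa = c, fc
    raise RuntimeError("Method failed after max_iter iterations")

# e.g. bisect(lambda x: x**2 - x - 2, 1, 3) approximates the root x* = 2
```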

Other Stopping Criteria

Other stopping criteria for iteration procedures with a given tolerance $\varepsilon > 0$:

$$|p_n - p_{n-1}| < \varepsilon, \qquad \frac{|p_n - p_{n-1}|}{|p_n|} < \varepsilon, \qquad |f(p_n)| < \varepsilon.$$

2.1.3 Convergence Analysis

Theorem

Suppose that $f \in C[a,b]$ and $f(a)\,f(b) < 0$. The bisection method generates a sequence $\{p_n\}_{n=1}^{\infty}$ approximating a zero $p$ of $f$ with

$$|p_n - p| \leq \frac{b - a}{2^n}, \quad n \geq 1.$$

Proof

By the procedure, we know that

$$|b_1 - a_1| = |b - a|, \quad |b_2 - a_2| = \frac{|b_1 - a_1|}{2} = \frac{|b - a|}{2}, \quad \dots, \quad |b_n - a_n| = \frac{|b_{n-1} - a_{n-1}|}{2} = \frac{|b - a|}{2^{n-1}}.$$

Since $p_n = \dfrac{a_n + b_n}{2}$ and $p \in (a_n, p_n]$ or $p \in [p_n, b_n)$ for all $n \geq 1$, it follows that

$$|p_n - p| \leq \frac{|b_n - a_n|}{2} = \frac{b - a}{2^n}.$$

Convergence Rate

Since

$$|p_n - p| \leq \frac{|b_n - a_n|}{2} = \frac{|b - a|}{2^n},$$

the sequence $\{p_n\}_{n=1}^{\infty}$ converges to $p$ with rate of convergence $O\!\left(\frac{1}{2^n}\right)$, that is,

$$p_n = p + O\!\left(\frac{1}{2^n}\right).$$

Other Property

Bisection is certain to converge, but does so slowly.

Given a starting interval $[a,b]$, the length of the interval after $k$ iterations is $\dfrac{b-a}{2^k}$, so achieving an error tolerance of $\varepsilon$ (i.e. $\dfrac{b-a}{2^k} < \varepsilon$) requires $k \approx \left\lceil \log_2 \dfrac{b-a}{\varepsilon} \right\rceil$ iterations, regardless of the function $f$ involved.
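
For example, for $[a,b] = [0,1]$ and $\varepsilon = 10^{-6}$, this gives $k \approx \lceil \log_2 10^{6} \rceil = 20$ iterations.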

2.2 Fixed Point Method

2.2.1 Introduction

A fixed point of a given function $g: \mathbb{R} \rightarrow \mathbb{R}$ is a value $x^*$ such that $x^* = g(x^*)$. Many iterative methods for solving nonlinear equations use a fixed-point iteration scheme of the form

$$x_{k+1} = g(x_k),$$

where the fixed points of $g$ are solutions of $f(x) = 0$.

Example

If $f(x) = x^2 - x - 2$, it has two roots, $x^* = 2$ and $x^* = -1$. The fixed points of each of the functions

$$g(x) = x^2 - 2, \qquad g(x) = \sqrt{x + 2}, \qquad g(x) = 1 + \frac{2}{x}, \qquad g(x) = \frac{x^2 + 2}{2x - 1}$$

are solutions of the equation $f(x) = 0$.

2.2.2 Algorithm

Definition

To approximate the fixed point of a function $g(x)$, we choose an initial approximation $p_0$ and generate the sequence $\{p_n\}_{n=0}^{\infty}$ by letting

$$p_n = g(p_{n-1}), \quad n = 1, 2, \dots$$

If the sequence $\{p_n\}_{n=0}^{\infty}$ converges to $p$ and $g(x)$ is continuous, then we have

$$p = \lim_{n\rightarrow\infty} p_{n+1} = \lim_{n\rightarrow\infty} g(p_n) = g\Big(\lim_{n\rightarrow\infty} p_n\Big) = g(p),$$

and a solution to $x = g(x)$ is obtained. The technique described above is called fixed-point iteration (or functional iteration).

Example

Pseudo-Code

INPUT: initial approximation $p_0$, tolerance $TOL$, maximum number of iterations $N$.

OUTPUT: approximate solution $p$ or message of failure.

Step 1: Set $n = 1$.

Step 2: While $n \leq N$, do Steps 3~5.

Step 3: Set $p = g(p_0)$.
Step 4: If $|p - p_0| < TOL$, then OUTPUT $p$ (procedure completed successfully); STOP.
Step 5: Set $n = n + 1$, $p_0 = p$.

Step 6: OUTPUT "Method failed after $N$ iterations." STOP.
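
A minimal Python sketch of this fixed-point iteration pseudo-code (names and defaults are illustrative):

```python
def fixed_point(g, p0, tol=1e-8, max_iter=100):
    """Fixed-point iteration p_n = g(p_{n-1})."""
    for _ in range(max_iter):
        p = g(p0)                 # Step 3
        if abs(p - p0) < tol:     # Step 4: stopping criterion
            return p
        p0 = p                    # Step 5
    raise RuntimeError("Method failed after max_iter iterations")

# Using g(x) = sqrt(x + 2) from the example above, the iterates converge
# to the fixed point p = 2, which is a root of f(x) = x^2 - x - 2:
print(fixed_point(lambda x: (x + 2) ** 0.5, 1.5))
```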

2.2.3 Existence and Uniqueness

Theorem: Sufficient Conditions for the Existence and Uniqueness of a Fixed Point

【Existence】If $g(x) \in C[a,b]$ and $g(x) \in [a,b]$ for all $x \in [a,b]$, then $g(x)$ has a fixed point in $[a,b]$. (Here $g(x) \in [a,b]$ means $\max\{g(x)\} \leq b$ and $\min\{g(x)\} \geq a$.)

【Uniqueness】If, in addition, $g'(x)$ exists on $(a,b)$ and a positive constant $k < 1$ exists with $|g'(x)| \leq k$ for all $x \in (a,b)$, then the fixed point in $[a,b]$ is unique.

Proof for Existence

If $g(a) = a$ or $g(b) = b$, then $g(x)$ has a fixed point at an endpoint. Suppose not; then it must be true that $g(a) > a$ and $g(b) < b$. The function $h(x) = g(x) - x$ is continuous on $[a,b]$, and we have

$$h(a) = g(a) - a > 0, \qquad h(b) = g(b) - b < 0.$$

The Intermediate Value Theorem implies that there exists $p \in (a,b)$ with $h(p) = 0$. Thus $g(p) - p = 0$, and $p$ is a fixed point of $g(x)$.

The proof above replaces the equation $g(x) = x$ by $h(x) = g(x) - x$, turning the fixed-point problem into a zero-existence problem.

Proof for Uniqueness

We prove uniqueness by contradiction.

Suppose, in addition, that $|g'(x)| \leq k < 1$ and that $p$ and $q$ are both fixed points in $[a,b]$ with $p \neq q$. Then by the Mean Value Theorem there exists a number $\xi$ between $p$ and $q$, hence in $[a,b]$, such that

$$\frac{g(p) - g(q)}{p - q} = g'(\xi).$$

Then

$$|p - q| = |g(p) - g(q)| = |g'(\xi)|\,|p - q| \leq k\,|p - q| < |p - q|,$$

which is a contradiction. So $p = q$, and the fixed point in $[a,b]$ is unique.

2.2.4 Convergence Analysis

Theorem

If the two conditions above are met, then for any number $p_0 \in [a,b]$, the sequence $\{p_n\}_{n=0}^{\infty}$ defined by

$$p_n = g(p_{n-1}), \quad n \geq 1,$$

converges to the unique fixed point $p$ in $[a,b]$.

Proof

Existence and uniqueness of the fixed point $p$ follow from the previous theorem (via the Intermediate Value Theorem).

Using the fact that $|g'(x)| \leq k$ and the Mean Value Theorem, we have

$$|p_n - p| = |g(p_{n-1}) - g(p)| = |g'(\xi)|\,|p_{n-1} - p| \leq k\,|p_{n-1} - p|,$$

where $\xi \in (a,b)$.

Applying this inequality inductively shows

$$|p_n - p| \leq k\,|p_{n-1} - p| \leq k^2\,|p_{n-2} - p| \leq \dots \leq k^n\,|p_0 - p|.$$

Since $k < 1$,

$$\lim_{n\rightarrow\infty} |p_n - p| \leq \lim_{n\rightarrow\infty} k^n\,|p_0 - p| = 0,$$

and $\{p_n\}_{n=0}^{\infty}$ converges to $p$.

Convergence Rate

Since

$$|p_n - p| \leq k^n\,|p_0 - p| \leq k^n\,|b - a|,$$

the sequence $\{p_n\}_{n=0}^{\infty}$ converges to $p$ with rate of convergence $O(k^n)$, $k < 1$; that is,

$$p_n = p + O(k^n), \quad k < 1.$$

2.2.5 Bounds for the Error

Corollary

If the solution of $g(x) = x$ exists and is unique, then the bounds for the error involved in using $p_n$ to approximate $p$ are given by

$$|p_n - p| \leq k^n \max\{p_0 - a,\; b - p_0\}$$

and

$$|p_n - p| \leq \frac{k^n}{1 - k}\,|p_1 - p_0|,$$

for all $n \geq 1$.

Proof

$$|p_n - p_{n-1}| = |g(p_{n-1}) - g(p_{n-2})| \leq k\,|p_{n-1} - p_{n-2}| \leq \dots \leq k^{n-1}\,|p_1 - p_0|.$$

Let $m > n$. Using the triangle inequality $|a + b| \leq |a| + |b|$, we have

$$|p_m - p_n| \leq |p_m - p_{m-1}| + |p_{m-1} - p_{m-2}| + \dots + |p_{n+1} - p_n| \leq (k^{m-1} + k^{m-2} + \dots + k^{n})\,|p_1 - p_0|,$$

$$|p_m - p_n| \leq k^n (1 + k + \dots + k^{m-n-1})\,|p_1 - p_0| = k^n\,\frac{1 - k^{m-n}}{1 - k}\,|p_1 - p_0| \leq \frac{k^n}{1 - k}\,|p_1 - p_0|.$$

Letting $m \rightarrow \infty$, we have

$$\lim_{m\rightarrow\infty} |p_m - p_n| = |p - p_n| \leq \frac{k^n}{1 - k}\,|p_1 - p_0|.$$

2.3 Newton’s Method

2.3.1 Introduction

Status

The Newton-Raphson (or simply Newton's) method is one of the most powerful and well-known numerical methods for solving a root-finding problem

$$f(x) = 0.$$

Rough Description

(1) Suppose that $f \in C^2[a,b]$ and $x^*$ is a solution of $f(x) = 0$.

(2) Let $\hat{x} \in [a,b]$ be an approximation to $x^*$ such that $f'(\hat{x}) \neq 0$ and $|\hat{x} - x^*|$ is "small".

Consider the first Taylor polynomial for $f(x)$ expanded about $\hat{x}$,

$$f(x) = f(\hat{x}) + (x - \hat{x})\,f'(\hat{x}) + \frac{(x - \hat{x})^2}{2}\,f''(\xi(x)),$$

where $\xi(x)$ lies between $x$ and $\hat{x}$.

Setting $x = x^*$ and using $f(x^*) = 0$ (the quadratic term is negligible when $|\hat{x} - x^*|$ is small) gives

$$0 = f(x^*) \approx f(\hat{x}) + (x^* - \hat{x})\,f'(\hat{x}).$$

Solving for $x^*$ gives

$$x^* \approx \hat{x} - \frac{f(\hat{x})}{f'(\hat{x})}.$$

2.3.2 Algorithm

Definition

Start with an initial approximation $x_0$ and define the iteration scheme by

$$x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}, \quad \forall n \geq 1.$$

This scheme generates the sequence $\{x_n\}_{n=0}^{\infty}$.

Example

Pseudo-Code

The function $f$ is differentiable and $p_0$ is an initial approximation.

INPUT: initial approximation $p_0$, tolerance $TOL$, maximum number of iterations $N$.

OUTPUT: approximate solution $p$ or message of failure.

Step 1: Set $n = 1$.

Step 2: While $n \leq N$, do Steps 3~5.

Step 3: Set $p = p_0 - \dfrac{f(p_0)}{f'(p_0)}$.
Step 4: If $|p - p_0| < TOL$, then OUTPUT $p$ (procedure completed successfully); STOP.
Step 5: Set $n = n + 1$, $p_0 = p$.

Step 6: OUTPUT "Method failed after $N$ iterations." STOP.
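
A minimal Python sketch of Newton's method as given in the pseudo-code (names and defaults are illustrative; `df` is the derivative supplied by the caller):

```python
def newton(f, df, p0, tol=1e-10, max_iter=100):
    """Newton's method: p = p0 - f(p0)/f'(p0), repeated until |p - p0| < tol."""
    for _ in range(max_iter):
        p = p0 - f(p0) / df(p0)   # Step 3: Newton update
        if abs(p - p0) < tol:     # Step 4: stopping criterion
            return p
        p0 = p                    # Step 5
    raise RuntimeError("Method failed after max_iter iterations")

# e.g. newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0) approximates sqrt(2)
```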

2.3.3 Convergence

Theorem

(1) $f \in C^2[a,b]$.

(2) $p \in [a,b]$ is such that $f(p) = 0$ and $f'(p) \neq 0$.

Then there exists a $\delta > 0$ such that Newton's method generates a sequence $\{p_n\}_{n=1}^{\infty}$ converging to $p$ for any initial approximation $p_0 \in [p - \delta, p + \delta]$.

Proof

Newton's method is fixed-point iteration $p_n = g(p_{n-1})$ with

$$g(x) = x - \frac{f(x)}{f'(x)}.$$

What needs to be proved: following the proof of the fixed-point theorem, we need an interval $[p - \delta, p + \delta]$ that $g$ maps into itself, with $|g'(x)| \leq k < 1$ for all $x \in [p - \delta, p + \delta]$ (existence and uniqueness).

Proving process:

$\exists\,\delta_1 > 0$ such that $g(x) \in C[p - \delta_1, p + \delta_1]$:

Since $f'(p) \neq 0$ and $f'$ is continuous, there exists $\delta_1 > 0$ such that $f'(x) \neq 0$ for all $x \in [p - \delta_1, p + \delta_1] \subseteq [a,b]$. Thus $g$ is defined and continuous on $[p - \delta_1, p + \delta_1]$.

$g'(x) \in C[p - \delta_1, p + \delta_1]$:

$$g'(x) = \frac{f(x)\,f''(x)}{(f'(x))^2}$$

for all $x \in [p - \delta_1, p + \delta_1]$. Since $f \in C^2[a,b]$, $g' \in C[p - \delta_1, p + \delta_1]$.

$|g'(x)| \leq k < 1$:

Since $f(p) = 0$, we have $g'(p) = 0$. Since $g' \in C[p - \delta_1, p + \delta_1]$, there exists a $\delta$ with $0 < \delta < \delta_1$ and

$$|g'(x)| \leq k < 1, \quad \forall x \in [p - \delta, p + \delta].$$

$g$ maps $[p - \delta, p + \delta]$ into $[p - \delta, p + \delta]$:

By the Mean Value Theorem, if $x \in [p - \delta, p + \delta]$, there exists $\xi$ between $x$ and $p$ with $|g(x) - g(p)| = |g'(\xi)|\,|x - p|$, so

$$|g(x) - p| = |g(x) - g(p)| = |g'(\xi)|\,|x - p| \leq k\,|x - p| \leq \delta.$$

Thus $g: [p - \delta, p + \delta] \mapsto [p - \delta, p + \delta]$.

By the above, all the hypotheses of the Fixed-Point Theorem are satisfied for $g(x) = x - \dfrac{f(x)}{f'(x)}$. Therefore the sequence $\{p_n\}_{n=1}^{\infty}$ defined by

$$p_n = g(p_{n-1}), \quad \forall n \geq 1,$$

converges to $p$ for any $p_0 \in [p - \delta, p + \delta]$.

2.3.4 Secant Method

Background

For Newton's method, each iteration requires evaluation of both the function $f(x_k)$ and its derivative $f'(x_k)$, which may be inconvenient or expensive.

Improvement

The derivative is approximated by a finite difference of two successive iterates, so the iteration becomes

$$x_{k+1} = x_k - f(x_k)\,\frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}.$$

This method is known as the secant method.

Example

Pseudo-Code

INPUT: initial approximations $p_0, p_1$, tolerance $TOL$, maximum number of iterations $N$.

OUTPUT: approximate solution $p$ or message of failure.

Step 1: Set $n = 2$, $q_0 = f(p_0)$, $q_1 = f(p_1)$.

Step 2: While $n \leq N$, do Steps 3~5.

Step 3: Set $p = p_1 - q_1\,\dfrac{p_1 - p_0}{q_1 - q_0}$. (Compute $p_n$.)
Step 4: If $|p - p_1| < TOL$, then OUTPUT $p$ (procedure completed successfully); STOP.
Step 5: Set $n = n + 1$, $p_0 = p_1$, $p_1 = p$, $q_0 = q_1$, $q_1 = f(p)$.

Step 6: OUTPUT "Method failed after $N$ iterations." STOP.
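
A minimal Python sketch of this secant pseudo-code (names and defaults are illustrative):

```python
def secant(f, p0, p1, tol=1e-10, max_iter=100):
    """Secant method: Newton's derivative is replaced by a difference quotient."""
    q0, q1 = f(p0), f(p1)
    for _ in range(max_iter):
        p = p1 - q1 * (p1 - p0) / (q1 - q0)    # Step 3: x-intercept of the secant line
        if abs(p - p1) < tol:                  # Step 4: stopping criterion
            return p
        p0, q0, p1, q1 = p1, q1, p, f(p)       # Step 5: shift the two most recent iterates
    raise RuntimeError("Method failed after max_iter iterations")

# e.g. secant(lambda x: x**2 - 2, 1.0, 2.0) approximates sqrt(2) without using f'
```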

2.3.5 False Position Method

Definition

To find a solution of $f(x) = 0$ for a given continuous function $f$ on the interval $[p_0, p_1]$, where $f(p_0)$ and $f(p_1)$ have opposite signs:

$$f(p_0)\,f(p_1) < 0.$$

The approximation $p_2$ is chosen in the same manner as in the secant method, as the $x$-intercept of the line joining $(p_0, f(p_0))$ and $(p_1, f(p_1))$.

To decide which secant line to use to compute $p_3$, we check the sign of $f(p_2)\,f(p_1)$ (or, equivalently, of $f(p_2)\,f(p_0)$).

If $f(p_2)\,f(p_1)$ is negative, we choose $p_3$ as the $x$-intercept of the line joining $(p_1, f(p_1))$ and $(p_2, f(p_2))$; otherwise we use the line joining $(p_0, f(p_0))$ and $(p_2, f(p_2))$.

In a similar manner, we obtain a sequence $\{p_n\}_{n=2}^{\infty}$ that approximates the root, while always keeping the root bracketed.

Example

Pseudo-Code

INPUT: initial approximations $p_0, p_1$, tolerance $TOL$, maximum number of iterations $N$.

OUTPUT: approximate solution $p$ or message of failure.

Step 1: Set $n = 2$, $q_0 = f(p_0)$, $q_1 = f(p_1)$.

Step 2: While $n \leq N$, do Steps 3~7.

Step 3: Set $p = p_1 - q_1\,\dfrac{p_1 - p_0}{q_1 - q_0}$. (Compute $p_n$.)
Step 4: If $|p - p_1| < TOL$, then OUTPUT $p$ (procedure completed successfully); STOP.
Step 5: Set $n = n + 1$, $q = f(p)$.
Step 6: If $q \cdot q_1 < 0$, set $p_0 = p_1$, $q_0 = q_1$ (the root lies between $p$ and $p_1$, so the old $p_1$ becomes the new $p_0$).
Step 7: Set $p_1 = p$, $q_1 = q$.

Step 8: OUTPUT "Method failed after $N$ iterations." STOP.
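
A minimal Python sketch of this false position pseudo-code (names and defaults are illustrative); unlike the secant method, the two stored points always bracket the root:

```python
def false_position(f, p0, p1, tol=1e-10, max_iter=100):
    """False position: secant-style updates that keep the root bracketed."""
    q0, q1 = f(p0), f(p1)
    if q0 * q1 >= 0:
        raise ValueError("f(p0) and f(p1) must have opposite signs")
    for _ in range(max_iter):
        p = p1 - q1 * (p1 - p0) / (q1 - q0)   # Step 3: x-intercept of the bracketing secant line
        if abs(p - p1) < tol:                 # Step 4: stopping criterion
            return p
        q = f(p)
        if q * q1 < 0:                        # Step 6: root lies between p and p1,
            p0, q0 = p1, q1                   #         so the old p1 becomes the new p0
        p1, q1 = p, q                         # Step 7: p is always the newest iterate
    raise RuntimeError("Method failed after max_iter iterations")

# e.g. false_position(lambda x: x**2 - x - 2, 1.0, 3.0) approximates the root x* = 2
```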

2.4 Error Analysis for Iteration Methods

2.4.1 The Rate of Sequence Convergence

Definition

Suppose $\{p_n\}_{n=0}^{\infty}$ is a sequence that converges to $p$, with $p_n \neq p$ for all $n$.

If positive constants $\lambda$ and $\alpha$ exist with

$$\lim_{n\rightarrow\infty} \frac{|p_{n+1} - p|}{|p_n - p|^{\alpha}} = \lambda,$$

then $\{p_n\}_{n=0}^{\infty}$ converges to $p$ of order $\alpha$, with asymptotic error constant $\lambda$.

Properties

A sequence with a higher order of convergence converges more rapidly than a sequence with a lower order. The asymptotic error constant affects the speed of convergence but is not as important as the order.

Example

If $\alpha = 1$, the sequence is linearly convergent. If $\alpha = 2$, the sequence is quadratically convergent.

Summary

Use the Mean Value Theorem to prove linear convergence, and Taylor's Theorem to prove quadratic convergence when $g'(p) = 0$.

2.4.2 Convergent Order of Fixed-Point Iteration (Improved)

Convergent Order of Fixed-Point Iteration

(1) $g \in C[a,b]$ and $g(x) \in [a,b]$ for all $x \in [a,b]$.

(2) $g'(x)$ is continuous on $(a,b)$, and a positive constant $k < 1$ exists with $|g'(x)| \leq k$ for all $x \in (a,b)$.

If $g'(p) \neq 0$, then for any number $p_0$ in $[a,b]$, the sequence $p_n = g(p_{n-1})$, for $n \geq 1$, converges only linearly to the unique fixed point $p$ in $[a,b]$.

Proof

$$p_{n+1} - p = g(p_n) - g(p) = g'(\xi_n)(p_n - p),$$

where $\xi_n$ is between $p_n$ and $p$.

Since $\{p_n\}_{n=0}^{\infty}$ converges to $p$ and $\xi_n$ is between $p_n$ and $p$, the sequence $\{\xi_n\}_{n=0}^{\infty}$ also converges to $p$.

Thus,

$$\lim_{n\rightarrow\infty} \frac{|p_{n+1} - p|}{|p_n - p|} = \lim_{n\rightarrow\infty} |g'(\xi_n)| = |g'(p)|,$$

so fixed-point iteration exhibits linear convergence with asymptotic error constant $|g'(p)|$ whenever $g'(p) \neq 0$. This also implies that higher-order convergence for fixed-point methods can occur only when $g'(p) = 0$.

Quadratic Convergence

Let $p$ be a solution of the equation $x = g(x)$.

(1) $g'(p) = 0$.

(2) $g''$ is continuous and strictly bounded by $M$ on an open interval $I$ containing $p$.

Then there exists a $\delta > 0$ such that, for $p_0 \in [p - \delta, p + \delta]$, the sequence defined by $p_n = g(p_{n-1})$, for $n \geq 1$, converges at least quadratically to $p$.

Moreover, for sufficiently large values of $n$,

$$|p_{n+1} - p| < \frac{M}{2}\,|p_n - p|^2.$$

Proof

By the two conditions above, expanding $g$ in a Taylor polynomial about $p$ gives

$$g(x) = g(p) + g'(p)(x - p) + \frac{g''(\xi)}{2}(x - p)^2 = p + \frac{g''(\xi)}{2}(x - p)^2,$$

where $\xi$ lies between $x$ and $p$.

Thus,

$$p_{n+1} = g(p_n) = p + \frac{g''(\xi_n)}{2}(p_n - p)^2,$$

$$p_{n+1} - p = \frac{g''(\xi_n)}{2}(p_n - p)^2,$$

$$\lim_{n\rightarrow\infty} g''(\xi_n) = g''(p),$$

$$\lim_{n\rightarrow\infty} \frac{|p_{n+1} - p|}{|p_n - p|^2} = \lim_{n\rightarrow\infty} \frac{|g''(\xi_n)|}{2} = \frac{|g''(p)|}{2}.$$

Since $g''$ is strictly bounded by $M$ on the interval $[p - \delta, p + \delta]$, for sufficiently large values of $n$,

$$|p_{n+1} - p| < \frac{M}{2}\,|p_n - p|^2$$

is also derived.

Construct a quadratically convergent fixed-point problem

Let

$$g(x) = x - \phi(x)\,f(x),$$

$$g'(x) = 1 - \phi'(x)\,f(x) - \phi(x)\,f'(x).$$

We require $g'(p) = 0$; since $f(p) = 0$, this means $\phi(p) = \dfrac{1}{f'(p)}$.

A reasonable approach is to let $\phi(x) = \dfrac{1}{f'(x)}$, which gives Newton's method.

Remarks

The convergence rate of fixed-point iteration is usually linear, with constant $C = |g'(p)|$. But if $g'(p) = 0$, then the convergence rate is at least quadratic.

2.4.3 Zero of Multiplicity

Definition

A solution $p$ of $f(x) = 0$ is a zero of multiplicity $m$ of $f(x)$ if for $x \neq p$ we can write

$$f(x) = (x - p)^m\,q(x),$$

where

$$\lim_{x\rightarrow p} q(x) \neq 0.$$

Theorem

$f \in C^1[a,b]$ has a simple zero at $p$ in $(a,b)$ if and only if $f(p) = 0$ but $f'(p) \neq 0$.

The function $f \in C^m[a,b]$ has a zero of multiplicity $m$ at $p$ if and only if

$$0 = f(p) = f'(p) = f''(p) = \dots = f^{(m-1)}(p),$$

but $f^{(m)}(p) \neq 0$.

Proof

If $f$ has a simple zero at $p$, then

$$f(p) = 0, \qquad f(x) = (x - p)\,q(x), \qquad \lim_{x\rightarrow p} q(x) \neq 0.$$

Since $f \in C^1[a,b]$,

$$f'(p) = \lim_{x\rightarrow p} f'(x) = \lim_{x\rightarrow p}\bigl[q(x) + (x - p)\,q'(x)\bigr] = \lim_{x\rightarrow p} q(x) \neq 0.$$

2.4.4 Convergence of Newton’s Method

Property

Newton's method transforms the nonlinear equation $f(x) = 0$ into the fixed-point problem $x = g(x)$ with $g(x) = x - \dfrac{f(x)}{f'(x)}$.

If $p$ is a simple root, then $f(p) = 0$, $f'(p) \neq 0$, and $g'(p) = 0$, so the convergence rate is quadratic. (Iterations must start close enough to the root.)

If $p$ is a root of multiplicity $m > 1$,

$$f(x) = (x - p)^m\,q(x), \qquad g'(p) = 1 - \frac{1}{m} \neq 0,$$

so the convergence rate is only linear.

Method for handling multiple roots

Define $u(x) = \dfrac{f(x)}{f'(x)}$. With $f(x) = (x - p)^m\,q(x)$,

$$u(x) = \frac{f(x)}{f'(x)} = (x - p)\,\frac{q(x)}{m\,q(x) + (x - p)\,q'(x)},$$

and the second factor tends to $\dfrac{1}{m} \neq 0$ as $x \rightarrow p$.

Thus $p$ is a simple zero of $u(x)$. Applying Newton's method to $u(x)$ gives

$$g(x) = x - \frac{u(x)}{u'(x)} = x - \frac{f(x)\,f'(x)}{[f'(x)]^2 - f(x)\,f''(x)},$$

whose convergence rate is again quadratic.
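
A minimal Python sketch of this modified iteration (names are illustrative; `df` and `ddf` are the first and second derivatives supplied by the caller), tried on $f(x) = (x-1)^2$, which has a zero of multiplicity $m = 2$ at $p = 1$:

```python
def newton_modified(f, df, ddf, p0, tol=1e-12, max_iter=100):
    """Newton's method applied to u(x) = f(x)/f'(x):
    g(x) = x - f(x)f'(x) / (f'(x)^2 - f(x)f''(x))."""
    for _ in range(max_iter):
        fp, dfp, ddfp = f(p0), df(p0), ddf(p0)
        if fp == 0.0:                 # exact zero reached
            return p0
        p = p0 - fp * dfp / (dfp ** 2 - fp * ddfp)
        if abs(p - p0) < tol:
            return p
        p0 = p
    return p0

# Standard Newton only halves the error each step on this double root (linear),
# while the modified iteration recovers quadratic convergence.
f, df, ddf = lambda x: (x - 1) ** 2, lambda x: 2 * (x - 1), lambda x: 2.0
print(newton_modified(f, df, ddf, 3.0))   # -> 1.0
```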

2.4.5 Convergence rate of Secant Method

The convergence rate of the secant method is normally superlinear, with order $r \approx 1.618$, which is lower than that of Newton's method. The secant method reuses the two most recent function values, so only one new function evaluation is needed per iteration and the derivative is never required. Its disadvantage is that it needs two starting guesses close enough to the solution in order to converge.

2.5 Accelerating Convergence

2.5.1 Aitken’s method

Background

The goal is to accelerate the convergence of a sequence that is linearly convergent, regardless of its origin or application:

$$\lim_{n\rightarrow\infty} \frac{p_{n+1} - p}{p_n - p} = \lambda, \quad \lambda \neq 0.$$

Thus, when $n$ is sufficiently large,

$$\frac{p_{n+1} - p}{p_n - p} \approx \frac{p_{n+2} - p}{p_{n+1} - p},$$

$$p \approx \frac{p_n\,p_{n+2} - p_{n+1}^2}{p_{n+2} - 2p_{n+1} + p_n},$$

$$p \approx p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}.$$

Aitken's $\Delta^2$ method defines a new sequence $\{\hat{p}_n\}_{n=0}^{\infty}$:

$$\hat{p}_n = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n},$$

which converges faster than the original sequence $\{p_n\}_{n=0}^{\infty}$.

Definition

Given the sequence $\{p_n\}_{n=0}^{\infty}$, the forward difference $\Delta p_n$ is defined by

$$\Delta p_n = p_{n+1} - p_n, \quad n \geq 0.$$

Higher powers $\Delta^k p_n$ are defined recursively by

$$\Delta^k p_n = \Delta(\Delta^{k-1} p_n), \quad k \geq 2.$$

For example,

$$\Delta^2 p_n = \Delta(\Delta p_n) = \Delta(p_{n+1} - p_n) = \Delta p_{n+1} - \Delta p_n = p_{n+2} - 2p_{n+1} + p_n.$$

Thus, $\hat{p}_n = p_n - \dfrac{(\Delta p_n)^2}{\Delta^2 p_n}$.
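
A minimal Python sketch of Aitken's $\Delta^2$ formula applied to a list of iterates (names are illustrative); the fixed-point iterates of $g(x) = \cos x$ are used only as a convenient linearly convergent test sequence:

```python
import math

def aitken(p):
    """Return the accelerated sequence p_hat_n = p_n - (Δp_n)^2 / Δ²p_n."""
    p_hat = []
    for n in range(len(p) - 2):
        d1 = p[n + 1] - p[n]                    # Δp_n
        d2 = p[n + 2] - 2 * p[n + 1] + p[n]     # Δ²p_n (assumed nonzero)
        p_hat.append(p[n] - d1 * d1 / d2)
    return p_hat

# Linearly convergent iterates of g(x) = cos(x), fixed point ≈ 0.7390851:
p = [0.5]
for _ in range(6):
    p.append(math.cos(p[-1]))
print(aitken(p))   # noticeably closer to the fixed point than p itself
```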

Theorem

Conditions:

$$\lim_{n\rightarrow\infty} \frac{p_{n+1} - p}{p_n - p} = \lambda, \quad \lambda \neq 0, \qquad (p_n - p)(p_{n+1} - p) > 0.$$

Result: the sequence $\{\hat{p}_n\}_{n=0}^{\infty}$ converges to $p$ faster than $\{p_n\}_{n=0}^{\infty}$ in the sense that

$$\lim_{n\rightarrow\infty} \frac{\hat{p}_n - p}{p_n - p} = 0.$$

Proof:

2.5.2 Steffensen’s method

Definition

The equation is $p = g(p)$, the initial approximation is $p_0$, and $\hat{p}_0 = p_0 - \dfrac{(\Delta p_0)^2}{\Delta^2 p_0}$.

Assume that $\hat{p}_0$ is a better approximation than $p_2$, so fixed-point iteration is applied to $\hat{p}_0$ instead of $p_2$; the computation proceeds as in the pseudo-code below.

Theorem

Suppose that $x = g(x)$ has the solution $p$ with $g'(p) \neq 1$.

If there exists a $\delta > 0$ such that $g \in C^3[p - \delta, p + \delta]$, then Steffensen's method gives quadratic convergence for any $p_0 \in [p - \delta, p + \delta]$.

Pseudo-Code

INPUT: initial approximation $p_0$, tolerance $TOL$, maximum number of iterations $N$.

OUTPUT: approximate solution $p$ or message of failure.

Step 1: Set $n = 1$.

Step 2: While $n \leq N$, do Steps 3~5.

Step 3: Set $p_1 = g(p_0)$, $p_2 = g(p_1)$, $p = p_0 - \dfrac{(p_1 - p_0)^2}{p_2 - 2p_1 + p_0}$.
Step 4: If $|p - p_0| < TOL$, then OUTPUT $p$ (procedure completed successfully); STOP.
Step 5: Set $n = n + 1$, $p_0 = p$.

Step 6: OUTPUT "Method failed after $N$ iterations." STOP.
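
A minimal Python sketch of this Steffensen pseudo-code (names and defaults are illustrative):

```python
import math

def steffensen(g, p0, tol=1e-12, max_iter=100):
    """Steffensen's method: one Aitken Δ² step per iteration, restarted from the accelerated value."""
    for _ in range(max_iter):
        p1 = g(p0)                           # Step 3
        p2 = g(p1)
        denom = p2 - 2 * p1 + p0
        if denom == 0:                       # Δ² vanished: already at the fixed point
            return p0
        p = p0 - (p1 - p0) ** 2 / denom
        if abs(p - p0) < tol:                # Step 4: stopping criterion
            return p
        p0 = p                               # Step 5
    raise RuntimeError("Method failed after max_iter iterations")

# e.g. steffensen(math.cos, 0.5) converges quadratically to the fixed point of cos(x)
print(steffensen(math.cos, 0.5))
```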

2.6 Zeros of Polynomials and Muller’s Method

2.6.1 Polynomial Theorem

2.6.2 Horner’s Method

Background

A more efficient way to calculate $P(x_0)$ and $P'(x_0)$ for a polynomial $P(x)$.

Theorem

Let

$$P(x) = \sum_{i=0}^{n} a_i x^i.$$

Construction process for $P(x_0)$ (substitute the formulas one by one to verify):

If $b_n = a_n$ and

$$b_k = a_k + b_{k+1} x_0, \quad k \in [0, n-1],$$

then $b_0 = P(x_0)$.

Moreover, if

$$Q(x) = \sum_{i=1}^{n} b_i x^{i-1},$$

then

$$P(x) = (x - x_0)\,Q(x) + b_0.$$

Construction process for $P'(x_0)$ (substitute the formulas one by one to verify):

$$P(x) = (x - x_0)\,Q(x) + b_0,$$

$$P'(x) = Q(x) + (x - x_0)\,Q'(x),$$

$$P'(x_0) = Q(x_0).$$

Let $Q(x) = \sum\limits_{i=1}^{n} b_i x^{i-1} = (x - x_0)\,R(x) + c_1$, where $R(x) = \sum\limits_{i=2}^{n} c_i x^{i-2}$. Thus

$$c_n = b_n, \qquad c_k = b_k + c_{k+1} x_0, \quad k \in [1, n-1], \qquad Q(x_0) = c_1 = P'(x_0).$$

Pseudo-Code

To compute the values $P(x_0)$ and $P'(x_0)$ for the polynomial $P(x) = \sum\limits_{i=0}^{n} a_i x^i$:

INPUT: degree $n$, coefficients $a_0, a_1, \dots, a_n$ of the polynomial $P(x)$, point $x_0$.

OUTPUT: values of $P(x_0)$ and $P'(x_0)$.

Step 1: Set $y = a_n$ ($b_n$ for $Q$), $z = 0$ ($c_{n+1}$ for $R$).

Step 2: For $j = n-1, n-2, \dots, 0$, set

$z = y + z\,x_0$ ($c_{j+1}$ for $R$),
$y = a_j + y\,x_0$ ($b_j$ for $Q$).

Step 3: OUTPUT $y = P(x_0)$ and $z = P'(x_0)$.
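
A minimal Python sketch of this pseudo-code (names are illustrative); the sample polynomial is $P(x) = 2x^4 - 3x^2 + 3x - 4$:

```python
def horner(coeffs, x0):
    """Horner's method.
    coeffs = [a_n, a_{n-1}, ..., a_0] (highest degree first); returns (P(x0), P'(x0))."""
    y = coeffs[0]          # y = b_n = a_n
    z = 0.0                # z = c_{n+1} = 0
    for a in coeffs[1:]:   # j = n-1, ..., 0
        z = y + z * x0     # c_{j+1}: accumulates Q(x0) = P'(x0)
        y = a + y * x0     # b_j:     accumulates P(x0)
    return y, z

# P(x) = 2x^4 - 3x^2 + 3x - 4 at x0 = -2: P(-2) = 10, P'(-2) = -49
print(horner([2, 0, -3, 3, -4], -2))
```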

2.6.3 Deflation Method

Newton's method combined with Horner's method

Deflation Method

Suppose that $x_N$, obtained in the $N$th iteration of the Newton-Raphson procedure, is an approximate zero of $P(x)$. Then

$$P(x) = (x - x_N)\,Q(x) + b_0 = (x - x_N)\,Q(x) + P(x_N) \approx (x - x_N)\,Q(x).$$

Let $\hat{x}_1 = x_N$ be the approximate zero of $P$, and let $Q_1(x) = Q(x)$ be the approximate factor; then we have

$$P(x) \approx (x - \hat{x}_1)\,Q_1(x).$$

To find a second approximate zero of $P(x)$, we apply the same procedure to $Q_1(x)$, giving $Q_1(x) \approx (x - \hat{x}_2)\,Q_2(x)$, where $Q_2(x)$ is a polynomial of degree $n - 2$. Thus $P(x) \approx (x - \hat{x}_1)\,Q_1(x) \approx (x - \hat{x}_1)(x - \hat{x}_2)\,Q_2(x)$.

Repeating this procedure until we reach $Q_{n-2}(x)$, which is a quadratic polynomial and can be solved by the quadratic formula, we obtain all approximate zeros of $P(x)$. This method is called the deflation method.
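
A minimal Python sketch of one deflation step, i.e. the synthetic division $P(x) \approx (x - \hat{x})\,Q(x)$ using the $b_k$ recurrence from Horner's method (names are illustrative):

```python
def deflate(coeffs, root):
    """Divide P by (x - root): return the coefficients of Q,
    dropping the remainder b_0 = P(root) ≈ 0.
    coeffs = [a_n, ..., a_0] (highest degree first)."""
    q = [coeffs[0]]                      # b_n = a_n
    for a in coeffs[1:-1]:
        q.append(a + q[-1] * root)       # b_k = a_k + b_{k+1} x_0
    return q

# P(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); deflating by the root 1:
print(deflate([1, -6, 11, -6], 1.0))     # -> [1, -5.0, 6.0], i.e. Q(x) = x^2 - 5x + 6
```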

2.6.4 Muller’s Algorithm
