In many cases it is not necessary to provide anything except the
Lagrangian; the remaining arguments will be auto-detected (and an error
raised if this cannot be done).
funcs (Function or an iterable of Functions) – The functions that the Lagrangian depends on. The Euler equations
are differential equations for each of these functions.
vars (Symbol or an iterable of Symbols) – The Symbols that are the independent variables of the functions.
Returns:
eqns – The list of differential equations, one for each function.
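A brief usage sketch (a simple harmonic oscillator Lagrangian; the exact printed form of the output may vary between versions):
>>> from sympy import symbols, Function, diff
>>> from sympy.calculus.euler import euler_equations
>>> t = symbols('t')
>>> x = Function('x')
>>> L = diff(x(t), t)**2/2 - x(t)**2/2
>>> euler_equations(L, x(t), t)
[Eq(-x(t) - Derivative(x(t), (t, 2)), 0)]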
This module implements an algorithm for efficient generation of finite
difference weights for ordinary derivatives of functions, from order 0
(interpolation) up to arbitrary order.
The core algorithm is provided in the finite difference weight generating
function (finite_diff_weights), and two convenience functions are provided
for:
- estimating a derivative (or interpolating) directly from a series of points (apply_finite_diff),
- differentiating terms in SymPy expressions by replacing them with finite difference approximations (differentiate_finite).
Order = 0 corresponds to interpolation.
Only supply as many points around x0 as you think make sense when
extracting the derivative (the function needs to be well behaved within
that region). Also beware of Runge’s phenomenon.
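A short sketch of estimating a first derivative from three sampled points (illustrative values; the result is the standard central difference):
>>> from sympy import S
>>> from sympy.calculus.finite_diff import apply_finite_diff
>>> apply_finite_diff(1, [-1, 0, 1], [S(3)/2, S(7)/2, S(9)/2], 0)
3/2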
See also
sympy.calculus.finite_diff.finite_diff_weights
References
Fortran 90 implementation with Python interface for numerics: finitediff
Returns an approximation of a derivative of a function in
the form of a finite difference formula. The expression is a
weighted sum of the function at a number of discrete values of
(one of) the independent variable(s).
Parameters:
derivative (a Derivative instance) –
points (sequence or coefficient, optional) – If sequence: discrete values (length >= order+1) of the
independent variable used for generating the finite
difference weights.
If it is a coefficient, it will be used as the step-size
for generating an equidistant sequence of length order+1
centered around x0. Default: 1 (step-size 1).
x0 (number or Symbol, optional) – the value of the independent variable (wrt) at which the
derivative is to be approximated. Default: same as wrt.
wrt (Symbol, optional) – “with respect to” the variable for which the (partial)
derivative is to be approximated. If not provided it
is required that the Derivative is ordinary. Default: None.
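A short sketch of the kind of formula produced, shown via the Derivative.as_finite_difference method, which exposes the same parameters (first with the default step size of 1, then with an explicit three-point grid):
>>> from sympy import symbols, Function
>>> x, h = symbols('x h')
>>> f = Function('f')
>>> f(x).diff(x).as_finite_difference()
-f(x - 1/2) + f(x + 1/2)
>>> f(x).diff(x).as_finite_difference([x, x + h, x + 2*h])
-3*f(x)/(2*h) + 2*f(h + x)/h - f(2*h + x)/(2*h)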
The algorithm is not restricted to equidistant spacing, nor
do we need to make the approximation around x0; we can also get
an expression estimating the derivative at an offset, as sketched below:
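(A hypothetical two-point example; the offset x0 = x + h/2 is chosen purely for illustration, and the printed term order may differ between versions.)
>>> f(x).diff(x).as_finite_difference([x, x + h], x + h/2)
-f(x)/h + f(h + x)/h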
Note that the default, unevaluated form returned by differentiate_finite
preserves the product rule in discrete form. If we want, we can pass
evaluate=True to get another form (which is usually not what we want):
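A sketch contrasting the two forms of differentiate_finite (expected outputs shown; exact printing may vary between versions):
>>> from sympy import symbols, Function, differentiate_finite
>>> x = symbols('x')
>>> f, g = Function('f'), Function('g')
>>> differentiate_finite(f(x)*g(x), x)
-f(x - 1/2)*g(x - 1/2) + f(x + 1/2)*g(x + 1/2)
>>> differentiate_finite(f(x)*g(x), x, evaluate=True)
(-f(x - 1/2) + f(x + 1/2))*g(x) + (-g(x - 1/2) + g(x + 1/2))*f(x)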
Calculates the finite difference weights for an arbitrarily spaced
one-dimensional grid (x_list) for derivatives at x0 of order
0, 1, …, up to order using a recursive formula. Order of accuracy
is at least len(x_list)-order, if x_list is defined correctly.
Parameters:
order (int) – Up to what derivative order weights should be calculated.
0 corresponds to interpolation.
x_list (sequence) – Sequence of (unique) values for the independent variable.
It is useful (but not necessary) to order x_list from
nearest to furthest from x0; see examples below.
x0 (Number or Symbol) – Root or value of the independent variable for which the finite
difference weights should be generated. Default is S.One.
Returns:
A list of sublists, each corresponding to coefficients for
increasing derivative order, and each containing lists of
coefficients for increasing subsets of x_list.
>>> from sympy import S
>>> from sympy.calculus import finite_diff_weights
>>> res = finite_diff_weights(1, [-S(1)/2, S(1)/2, S(3)/2, S(5)/2], 0)
>>> res
[[[1, 0, 0, 0], [1/2, 1/2, 0, 0], [3/8, 3/4, -1/8, 0], [5/16, 15/16, -5/16, 1/16]],
 [[0, 0, 0, 0], [-1, 1, 0, 0], [-1, 1, 0, 0], [-23/24, 7/8, 1/8, -1/24]]]
>>> res[0][-1]  # FD weights for 0th derivative, using full x_list
[5/16, 15/16, -5/16, 1/16]
>>> res[1][-1]  # FD weights for 1st derivative
[-23/24, 7/8, 1/8, -1/24]
>>> res[1][-2]  # FD weights for 1st derivative, using x_list[:-1]
[-1, 1, 0, 0]
>>> res[1][-1][0]  # FD weight for 1st deriv. for x_list[0]
-23/24
>>> res[1][-1][1]  # FD weight for 1st deriv. for x_list[1], etc.
7/8
Each sublist contains the most accurate formula at the end.
Note that in the above example res[1][1] is the same as res[1][2].
Since res[1][2] has an order of accuracy of
len(x_list[:3]) - order = 3 - 1 = 2, the same is true for res[1][1]!
If weights for a finite difference approximation of the 3rd order
derivative are wanted, weights for the 0th, 1st and 2nd order are
calculated “for free”, as are formulae using subsets of x_list.
This is something one can take advantage of to save computational cost.
Be aware that one should define x_list from nearest to furthest from
x0. If not, subsets of x_list will yield poorer approximations,
which might not attain an order of accuracy of len(x_list) - order.
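As an illustrative sketch of that point, requesting 2nd-order weights on a standard grid also yields the lower-order weights (expected output shown):
>>> from sympy.calculus import finite_diff_weights
>>> res = finite_diff_weights(2, [-1, 0, 1], 0)
>>> res[2][-1]  # weights for the 2nd derivative, using the full x_list
[1, -2, 1]
>>> res[1][-1]  # weights for the 1st derivative come "for free"
[-1/2, 0, 1/2]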
This module implements algorithms for finding the singularities of a function
and for identifying types of functions.
The differential calculus methods in this module include methods to identify
the following function types in the given Interval:
- Increasing
- Strictly Increasing
- Decreasing
- Strictly Decreasing
- Monotonic
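A brief sketch of these helpers on illustrative inputs (expected output shown, assuming a recent SymPy):
>>> from sympy import Symbol, Interval, oo
>>> from sympy.calculus.singularities import singularities, is_strictly_increasing
>>> x = Symbol('x')
>>> singularities(1/(x - 1), x)
{1}
>>> is_strictly_increasing(x**3 + 3*x, Interval(0, oo))
True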
Note: AccumulationBounds has an alias: AccumBounds.
An AccumulationBounds object represents an interval [a, b], which is always
closed at both ends. Here a and b can be any value from the extended real numbers.
The intended meaning of AccumulationBounds is to give an approximate
location of the accumulation points of a real function at a limit point.
Let a and b be reals such that a <= b.
\langle a, b\rangle = \{x \in \mathbb{R} \mid a \le x \le b\}

\langle -\infty, b\rangle = \{x \in \mathbb{R} \mid x \le b\} \cup \{-\infty, \infty\}

\langle a, \infty\rangle = \{x \in \mathbb{R} \mid a \le x\} \cup \{-\infty, \infty\}

\langle -\infty, \infty\rangle = \mathbb{R} \cup \{-\infty, \infty\}
oo and -oo are added to the second and third definition respectively,
since if either -oo or oo is an argument, then the other one should
be included (though not as an end point). This is forced, since we have,
for example, 1/AccumBounds(0, 1) = AccumBounds(1, oo), and the limit at
0 is not one-sided. As x tends to 0-, then 1/x -> -oo, so -oo
should be interpreted as belonging to AccumBounds(1, oo) though it need
not appear explicitly.
In many cases it suffices to know that the limit set is bounded.
However, in some other cases more exact information could be useful.
For example, all accumulation values of cos(x) + 1 are non-negative.
(AccumBounds(-1, 1) + 1 = AccumBounds(0, 2))
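For instance (a sketch; outputs are shown in the angle-bracket form used on this page, while newer versions may print AccumBounds(...) instead):
>>> from sympy import AccumBounds
>>> AccumBounds(-1, 1) + 1
<0, 2>
>>> AccumBounds(1, 2) + AccumBounds(2, 3)
<3, 5>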
An AccumulationBounds object is defined to be a real AccumulationBounds
if its end points are finite reals.
Let X and Y be real AccumulationBounds; then their sum, difference and
product are defined to be the following sets:

X + Y = \{ x + y \mid x \in X, y \in Y \}

X - Y = \{ x - y \mid x \in X, y \in Y \}

X \cdot Y = \{ x \cdot y \mid x \in X, y \in Y \}
There is, however, no consensus on Interval division.
X / Y = \{ z \mid \exists\, x \in X,\ y \in Y \text{ with } y \neq 0,\ z = x/y \}
Note: According to this definition the quotient of two AccumulationBounds
may not be an AccumulationBounds object but rather a union of
AccumulationBounds objects.
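A brief sketch of a division in which the denominator excludes zero (output shown in the angle-bracket form used on this page):
>>> from sympy import AccumBounds
>>> AccumBounds(1, 2)/AccumBounds(2, 4)
<1/4, 1>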
Note
The main focus of interval arithmetic is on the simplest way to
calculate upper and lower endpoints for the range of values of a
function in one or more variables. These bounds are not necessarily
the supremum or infimum, since the precise calculation of those values
can be difficult or impossible.
The exponentiation of AccumulationBounds is defined
as follows:
If 0 does not belong to X or n > 0, then

X^n = \{ x^n \mid x \in X \}

otherwise

X^n = \{ x^n \mid x \neq 0, x \in X \} \cup \{-\infty, \infty\}
Here for fractional n, the part of X resulting in a complex
AccumulationBounds object is neglected.
>>> from sympy import AccumBounds, S, oo
>>> AccumBounds(-1, 4)**(S(1)/2)
<0, 2>
>>> AccumBounds(1, 2)**2
<1, 4>
>>> AccumBounds(-1, oo)**(-1)
<-oo, oo>

Note: <a, b>^2 is not the same as <a, b>*<a, b>

>>> AccumBounds(-1, 1)**2
<0, 1>
>>> AccumBounds(1, 3) < 4
True
>>> AccumBounds(1, 3) < -1
False
Some elementary functions can also take AccumulationBounds as input.
A function f evaluated for some real AccumulationBounds <a, b>
is defined as f(\langle a, b\rangle) = \{ f(x) \mid a \le x \le b \}
>>> from sympy import sin, exp, log, pi, E
>>> sin(AccumBounds(pi/6, pi/3))
<1/2, sqrt(3)/2>
>>> exp(AccumBounds(0, 1))
<1, E>
>>> log(AccumBounds(1, E))
<0, 1>
A symbol in an expression can be substituted with an AccumulationBounds
object, but doing so does not necessarily evaluate the AccumulationBounds
for that expression.
The same expression can evaluate to different values depending upon
the form in which it is written when the substitution is made. For example:
>>> from sympy.abc import x
>>> (x**2 + 2*x + 1).subs(x, AccumBounds(-1, 1))
<-1, 4>
>>> ((x + 1)**2).subs(x, AccumBounds(-1, 1))
<0, 4>
Notes
Do not use AccumulationBounds for floating point interval arithmetic
calculations, use mpmath.iv instead.
Returns the difference between the maximum and the minimum possible
values attained by the AccumulationBounds object.
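A short sketch, assuming this describes the delta property of AccumBounds:
>>> from sympy import AccumBounds
>>> AccumBounds(1, 3).delta
2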
Returns the intervals in the given domain for which the function
is continuous.
This method is limited by the ability to determine the various
singularities and discontinuities of the given function.
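A brief usage sketch (the printed form of the resulting set may differ slightly between versions):
>>> from sympy import Symbol, S
>>> from sympy.calculus.util import continuous_domain
>>> x = Symbol('x')
>>> continuous_domain(1/x, x, S.Reals)
Union(Interval.open(-oo, 0), Interval.open(0, oo))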
Tests the given function for periodicity in the given symbol.
Parameters:
f (Expr) – The function to be checked for periodicity.
symbol (Symbol) – The variable for which the period is to be determined.
check (Boolean) – The flag to verify whether the value being returned is a period or not.
Returns:
The period of the function is returned.
None is returned when the function is aperiodic or has a complex period.
The value of 0 is returned as the period of a constant function.
Currently, we do not support functions with a complex period.
The period of functions having a complex period, such
as exp and sinh, is evaluated to None.
The value returned might not be the “fundamental” period of the given
function, i.e. it may not be the smallest period of the function.
The verification of the period through the check flag is not reliable
due to internal simplification of the given expression. Hence, it is set
to False by default.
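A brief usage sketch (expected outputs shown, assuming a recent SymPy):
>>> from sympy import Symbol, sin, tan, exp
>>> from sympy.calculus.util import periodicity
>>> x = Symbol('x')
>>> periodicity(sin(x) + sin(2*x), x)
2*pi
>>> periodicity(tan(x), x)
pi
>>> periodicity(exp(x), x) is None
True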