modelparameters.sympy.solvers package

Submodules

modelparameters.sympy.solvers.bivariate module

modelparameters.sympy.solvers.bivariate.bivariate_type(f, x, y, **kwargs)[source]

Given an expression f, three tests will be done to see what type of composite bivariate it might be; the options for u(x, y) are:

x*y
x+y
x*y+x
x*y+y

If it matches one of these types, u(x, y), P(u) and the dummy variable u will be returned. Solving P(u) for u and equating the solutions to u(x, y) and then solving for x or y is equivalent to solving the original expression for x or y. If x and y represent two functions in the same variable, e.g. x = g(t) and y = h(t), then if u(x, y) - p can be solved for t, these represent the solutions to the original expression when p are the solutions of P(u) = 0.

Only positive values of u are considered.

Examples

>>> from .solvers import solve
>>> from .bivariate import bivariate_type
>>> from ..abc import x, y
>>> eq = (x**2 - 3).subs(x, x + y)
>>> bivariate_type(eq, x, y)
(x + y, _u**2 - 3, _u)
>>> uxy, pu, u = _
>>> usol = solve(pu, u); usol
[sqrt(3)]
>>> [solve(uxy - s) for s in solve(pu, u)]
[[{x: -y + sqrt(3)}]]
>>> all(eq.subs(s).equals(0) for sol in _ for s in sol)
True

modelparameters.sympy.solvers.decompogen module

modelparameters.sympy.solvers.decompogen.compogen(g_s, symbol)[source]

Returns the composition of functions. Given a list of functions g_s, returns their composition f, where:

f = g_1 o g_2 o ... o g_n

Note: This is a general composition function; it also composes polynomials. For polynomial-only composition, see compose in polys.

Examples

>>> from .decompogen import compogen
>>> from ..abc import x
>>> from .. import sqrt, sin, cos
>>> compogen([sin(x), cos(x)], x)
sin(cos(x))
>>> compogen([x**2 + x + 1, sin(x)], x)
sin(x)**2 + sin(x) + 1
>>> compogen([sqrt(x), 6*x**2 - 5], x)
sqrt(6*x**2 - 5)
>>> compogen([sin(x), sqrt(x), cos(x), x**2 + 1], x)
sin(sqrt(cos(x**2 + 1)))
>>> compogen([x**2 - x - 1, x**2 + x], x)
-x**2 - x + (x**2 + x)**2 - 1
modelparameters.sympy.solvers.decompogen.decompogen(f, symbol)[source]

Computes the general functional decomposition of f. Given an expression f, returns a list [f_1, f_2, ..., f_n], where:

f = f_1 o f_2 o ... o f_n = f_1(f_2(... f_n))

Note: This is a general decomposition function; it also decomposes polynomials. For polynomial-only decomposition, see decompose in polys.

Examples

>>> from .decompogen import decompogen
>>> from ..abc import x
>>> from .. import sqrt, sin, cos
>>> decompogen(sin(cos(x)), x)
[sin(x), cos(x)]
>>> decompogen(sin(x)**2 + sin(x) + 1, x)
[x**2 + x + 1, sin(x)]
>>> decompogen(sqrt(6*x**2 - 5), x)
[sqrt(x), 6*x**2 - 5]
>>> decompogen(sin(sqrt(cos(x**2 + 1))), x)
[sin(x), sqrt(x), cos(x), x**2 + 1]
>>> decompogen(x**4 + 2*x**3 - x - 1, x)
[x**2 - x - 1, x**2 + x]

modelparameters.sympy.solvers.deutils module

Utility functions for classifying and solving ordinary and partial differential equations.

Contains

_preprocess, ode_order, _desolve

modelparameters.sympy.solvers.deutils.ode_order(expr, func)[source]

Returns the order of a given differential equation with respect to func.

This function is implemented recursively.

Examples

>>> from .. import Function
>>> from .deutils import ode_order
>>> from ..abc import x
>>> f, g = map(Function, ['f', 'g'])
>>> ode_order(f(x).diff(x, 2) + f(x).diff(x)**2 +
... f(x).diff(x), f(x))
2
>>> ode_order(f(x).diff(x, 2) + g(x).diff(x, 3), f(x))
2
>>> ode_order(f(x).diff(x, 2) + g(x).diff(x, 3), g(x))
3

modelparameters.sympy.solvers.diophantine module

modelparameters.sympy.solvers.diophantine.classify_diop(eq, _dict=True)[source]
modelparameters.sympy.solvers.diophantine.diophantine(eq, param=t, syms=None, permute=False)[source]

Simplify the solution procedure of the diophantine equation eq by converting it into a product of terms which should equal zero.

For example, when solving x^2 - y^2 = 0, this is treated as (x + y)(x - y) = 0, and x + y = 0 and x - y = 0 are solved independently and combined. Each term is solved by calling diop_solve().

Output of diophantine() is a set of tuples. The elements of the tuple are the solutions for each variable in the equation and are arranged according to the alphabetic ordering of the variables. e.g. For an equation with two variables, a and b, the first element of the tuple is the solution for a and the second for b.

Usage

diophantine(eq, t, syms): Solve the diophantine equation eq. t is the optional parameter to be used by diop_solve(). syms is an optional list of symbols which determines the order of the elements in the returned tuple.

By default, only the base solution is returned. If permute is set to True then permutations of the base solution and/or permutations of the signs of the values will be returned when applicable.

>>> from .diophantine import diophantine
>>> from ..abc import a, b
>>> eq = a**4 + b**4 - (2**4 + 3**4)
>>> diophantine(eq)
{(2, 3)}
>>> diophantine(eq, permute=True)
{(-3, -2), (-3, 2), (-2, -3), (-2, 3), (2, -3), (2, 3), (3, -2), (3, 2)}

Details

eq should be an expression which is assumed to be zero. t is the parameter to be used in the solution.

Examples

>>> from ..abc import x, y, z
>>> diophantine(x**2 - y**2)
{(t_0, -t_0), (t_0, t_0)}
>>> diophantine(x*(2*x + 3*y - z))
{(0, n1, n2), (t_0, t_1, 2*t_0 + 3*t_1)}
>>> diophantine(x**2 + 3*x*y + 4*x)
{(0, n1), (3*t_0 - 4, -t_0)}

See also

diop_solve, sympy.utilities.iterables.permute_signs, sympy.utilities.iterables.signed_permutations

modelparameters.sympy.solvers.inequalities module

Tools for solving inequalities and systems of inequalities.

modelparameters.sympy.solvers.inequalities.reduce_abs_inequalities(exprs, gen)[source]

Reduce a system of inequalities with nested absolute values.

Examples

>>> from .. import Abs, Symbol
>>> from ..abc import x
>>> from .inequalities import reduce_abs_inequalities
>>> x = Symbol('x', real=True)
>>> reduce_abs_inequalities([(Abs(3*x - 5) - 7, '<'),
... (Abs(x + 25) - 13, '>')], x)
(-2/3 < x) & (x < 4) & (((-oo < x) & (x < -38)) | ((-12 < x) & (x < oo)))
>>> reduce_abs_inequalities([(Abs(x - 4) + Abs(3*x - 5) - 7, '<')], x)
(1/2 < x) & (x < 4)
modelparameters.sympy.solvers.inequalities.reduce_abs_inequality(expr, rel, gen)[source]

Reduce an inequality with nested absolute values.

Examples

>>> from .. import Abs, Symbol
>>> from .inequalities import reduce_abs_inequality
>>> x = Symbol('x', real=True)
>>> reduce_abs_inequality(Abs(x - 5) - 3, '<', x)
(2 < x) & (x < 8)
>>> reduce_abs_inequality(Abs(x + 2)*3 - 13, '<', x)
(-19/3 < x) & (x < 7/3)
modelparameters.sympy.solvers.inequalities.reduce_inequalities(inequalities, symbols=[])[source]

Reduce a system of inequalities with rational coefficients.

Examples

>>> from .. import sympify as S, Symbol
>>> from ..abc import x, y
>>> from .inequalities import reduce_inequalities
>>> reduce_inequalities(0 <= x + 3, [])
(-3 <= x) & (x < oo)
>>> reduce_inequalities(0 <= x + y*2 - 1, [x])
x >= -2*y + 1
modelparameters.sympy.solvers.inequalities.reduce_rational_inequalities(exprs, gen, relational=True)[source]

Reduce a system of rational inequalities with rational coefficients.

Examples

>>> from .. import Poly, Symbol
>>> from .inequalities import reduce_rational_inequalities
>>> x = Symbol('x', real=True)
>>> reduce_rational_inequalities([[x**2 <= 0]], x)
Eq(x, 0)
>>> reduce_rational_inequalities([[x + 2 > 0]], x)
(-2 < x) & (x < oo)
>>> reduce_rational_inequalities([[(x + 2, ">")]], x)
(-2 < x) & (x < oo)
>>> reduce_rational_inequalities([[x + 2]], x)
Eq(x, -2)
modelparameters.sympy.solvers.inequalities.solve_poly_inequalities(polys)[source]

Solve polynomial inequalities with rational coefficients.

Examples

>>> from .inequalities import solve_poly_inequalities
>>> from ..polys import Poly
>>> from ..abc import x
>>> solve_poly_inequalities(((
... Poly(x**2 - 3), ">"), (
... Poly(-x**2 + 1), ">")))
Union(Interval.open(-oo, -sqrt(3)), Interval.open(-1, 1), Interval.open(sqrt(3), oo))
modelparameters.sympy.solvers.inequalities.solve_poly_inequality(poly, rel)[source]

Solve a polynomial inequality with rational coefficients.

Examples

>>> from .. import Poly
>>> from ..abc import x
>>> from .inequalities import solve_poly_inequality
>>> solve_poly_inequality(Poly(x, x, domain='ZZ'), '==')
[{0}]
>>> solve_poly_inequality(Poly(x**2 - 1, x, domain='ZZ'), '!=')
[Interval.open(-oo, -1), Interval.open(-1, 1), Interval.open(1, oo)]
>>> solve_poly_inequality(Poly(x**2 - 1, x, domain='ZZ'), '==')
[{-1}, {1}]
modelparameters.sympy.solvers.inequalities.solve_rational_inequalities(eqs)[source]

Solve a system of rational inequalities with rational coefficients.

Examples

>>> from ..abc import x
>>> from .. import Poly
>>> from .inequalities import solve_rational_inequalities
>>> solve_rational_inequalities([[
... ((Poly(-x + 1), Poly(1, x)), '>='),
... ((Poly(-x + 1), Poly(1, x)), '<=')]])
{1}
>>> solve_rational_inequalities([[
... ((Poly(x), Poly(1, x)), '!='),
... ((Poly(-x + 1), Poly(1, x)), '>=')]])
Union(Interval.open(-oo, 0), Interval.Lopen(0, 1))
modelparameters.sympy.solvers.inequalities.solve_univariate_inequality(expr, gen, relational=True, domain=S.Reals, continuous=False)[source]

Solves a real univariate inequality.

Parameters:
  • expr (Relational) – The target inequality

  • gen (Symbol) – The variable for which the inequality is solved

  • relational (bool) – Whether the output should be returned as a Relational expression

  • domain (Set) – The domain over which the equation is solved

  • continuous (bool) – True if expr is known to be continuous over the given domain (and so continuous_domain() doesn’t need to be called on it)

Raises:

NotImplementedError – The solution of the inequality cannot be determined due to limitations in solvify.

Notes

Currently, we cannot solve all inequalities due to limitations in solvify. Also, the solutions returned for trigonometric inequalities are restricted to their periodic interval.

See also

solvify

solver returning solveset solutions with solve’s output API

Examples

>>> from .inequalities import solve_univariate_inequality
>>> from .. import Symbol, sin, Interval, S
>>> x = Symbol('x')
>>> solve_univariate_inequality(x**2 >= 4, x)
((2 <= x) & (x < oo)) | ((x <= -2) & (-oo < x))
>>> solve_univariate_inequality(x**2 >= 4, x, relational=False)
Union(Interval(-oo, -2), Interval(2, oo))
>>> domain = Interval(0, S.Infinity)
>>> solve_univariate_inequality(x**2 >= 4, x, False, domain)
Interval(2, oo)
>>> solve_univariate_inequality(sin(x) > 0, x, relational=False)
Interval.open(0, pi)

modelparameters.sympy.solvers.ode module

This module contains dsolve() and different helper functions that it uses.

dsolve() solves ordinary differential equations. See the docstring on the various functions for their uses. Note that support for partial differential equations is in pde.py. Note that hint functions have docstrings describing their various methods, but they are intended for internal use. Use dsolve(ode, func, hint=hint) to solve an ODE using a specific hint. See also the docstring on dsolve().

Functions in this module

These are the user functions in this module:

  • dsolve() - Solves ODEs.

  • classify_ode() - Classifies ODEs into possible hints for dsolve().

  • checkodesol() - Checks if an equation is the solution to an ODE.

  • homogeneous_order() - Returns the homogeneous order of an expression.

  • infinitesimals() - Returns the infinitesimals of the Lie group of point transformations of an ODE, such that it is invariant.

  • ode_checkinfsol() - Checks if the given infinitesimals are the actual infinitesimals of a first order ODE.

These are the non-solver helper functions that are for internal use. The user should use the various options to dsolve() to obtain the functionality provided by these functions:

  • odesimp() - Does all forms of ODE simplification.

  • ode_sol_simplicity() - A key function for comparing solutions by simplicity.

  • constantsimp() - Simplifies arbitrary constants.

  • constant_renumber() - Renumber arbitrary constants.

  • _handle_Integral() - Evaluate unevaluated Integrals.

See also the docstrings of these functions.

Currently implemented solver methods

The following methods are implemented for solving ordinary differential equations. See the docstrings of the various hint functions for more information on each (run help(ode)):

  • 1st order separable differential equations.

  • 1st order differential equations whose coefficients or dx and dy are functions homogeneous of the same order.

  • 1st order exact differential equations.

  • 1st order linear differential equations.

  • 1st order Bernoulli differential equations.

  • Power series solutions for first order differential equations.

  • Lie Group method of solving first order differential equations.

  • 2nd order Liouville differential equations.

  • Power series solutions for second order differential equations at ordinary and regular singular points.

  • nth order linear homogeneous differential equation with constant coefficients.

  • nth order linear inhomogeneous differential equation with constant coefficients using the method of undetermined coefficients.

  • nth order linear inhomogeneous differential equation with constant coefficients using the method of variation of parameters.

Philosophy behind this module

This module is designed to make it easy to add new ODE solving methods without having to mess with the solving code for other methods. The idea is that there is a classify_ode() function, which takes in an ODE and tells you what hints, if any, will solve the ODE. It does this without attempting to solve the ODE, so it is fast. Each solving method is a hint, and it has its own function, named ode_<hint>. That function takes in the ODE and any match expression gathered by classify_ode() and returns a solved result. If this result has any integrals in it, the hint function will return an unevaluated Integral class. dsolve(), which is the user wrapper function around all of this, will then call odesimp() on the result, which, among other things, will attempt to solve the equation for the dependent variable (the function we are solving for), simplify the arbitrary constants in the expression, and evaluate any integrals, if the hint allows it.
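
As a brief sketch of this flow (written against the standard sympy namespace, which this vendored module mirrors; the specific ODE is only an illustration):

from sympy import Function, classify_ode, dsolve
from sympy.abc import x

f = Function('f')
ode = f(x).diff(x) + f(x)            # a simple first order ODE

# classify_ode() only matches; it does not solve, so it is fast
hints = classify_ode(ode, f(x))      # e.g. ('separable', '1st_linear', ...)

# ask dsolve() to use a particular matched hint; odesimp() is then applied
# to the raw result (unless simplify=False is passed)
sol = dsolve(ode, f(x), hint=hints[0])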

How to add new solution methods

If you have an ODE that you want dsolve() to be able to solve, try to avoid adding special case code here. Instead, try finding a general method that will solve your ODE, as well as others. This way, the ode module will become more robust, and unhindered by special case hacks. Wolfram|Alpha and Maple’s DETools[odeadvisor] function are two resources you can use to classify a specific ODE. It is also better for a method to work with an nth order ODE instead of only with specific orders, if possible.

To add a new method, there are a few things that you need to do. First, you need a hint name for your method. Try to name your hint so that it is unambiguous with all other methods, including ones that may not be implemented yet. If your method uses integrals, also include a hint_Integral hint. If there is more than one way to solve ODEs with your method, include a hint for each one, as well as a <hint>_best hint. Your ode_<hint>_best() function should choose the best using min with ode_sol_simplicity as the key argument. See ode_1st_homogeneous_coeff_best(), for example. The function that uses your method will be called ode_<hint>(), so the hint must only use characters that are allowed in a Python function name (alphanumeric characters and the underscore ‘_’ character). Include a function for every hint, except for _Integral hints (dsolve() takes care of those automatically). Hint names should be all lowercase, unless a word is commonly capitalized (such as Integral or Bernoulli). If you have a hint that you do not want to run with all_Integral that doesn’t have an _Integral counterpart (such as a best hint that would defeat the purpose of all_Integral), you will need to remove it manually in the dsolve() code. See also the classify_ode() docstring for guidelines on writing a hint name.

Determine in general how the solutions returned by your method compare with other methods that can potentially solve the same ODEs. Then, put your hints in the allhints tuple in the order that they should be called. The ordering of this tuple determines which hints are default. Note that exceptions are ok, because it is easy for the user to choose individual hints with dsolve(). In general, _Integral variants should go at the end of the list, and _best variants should go before the various hints they apply to. For example, the undetermined_coefficients hint comes before the variation_of_parameters hint because, even though variation of parameters is more general than undetermined coefficients, undetermined coefficients generally returns cleaner results for the ODEs that it can solve than variation of parameters does, and it does not require integration, so it is much faster.

Next, you need to have a match expression or a function that matches the type of the ODE, which you should put in classify_ode() (if the match function is more than just a few lines, like _undetermined_coefficients_match(), it should go outside of classify_ode()). It should match the ODE without solving for it as much as possible, so that classify_ode() remains fast and is not hindered by bugs in solving code. Be sure to consider corner cases. For example, if your solution method involves dividing by something, make sure you exclude the case where that division will be 0.

In most cases, the matching of the ODE will also give you the various parts that you need to solve it. You should put that in a dictionary (.match() will do this for you), and add that as matching_hints['hint'] = matchdict in the relevant part of classify_ode(). classify_ode() will then send this to dsolve(), which will send it to your function as the match argument. Your function should be named ode_<hint>(eq, func, order, match). If you need to send more information, put it in the match dictionary. For example, if you had to substitute in a dummy variable in classify_ode() to match the ODE, you will need to pass it to your function using the match dict to access it. You can access the independent variable using func.args[0], and the dependent variable (the function you are trying to solve for) as func.func. If, while trying to solve the ODE, you find that you cannot, raise NotImplementedError. dsolve() will catch this error with the all meta-hint, rather than causing the whole routine to fail.
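
A schematic, purely hypothetical hint function might look like the sketch below; the hint name my_new_hint and the matched keys 'P' and 'Q' are illustrative placeholders, not part of the actual module:

from sympy import Eq, Integral, Symbol

def ode_my_new_hint(eq, func, order, match):
    # func is e.g. f(x): the independent variable is func.args[0],
    # the dependent function (what we solve for) is func.func
    x = func.args[0]
    f = func.func

    # quantities gathered by the matcher in classify_ode() arrive via `match`
    P = match['P']   # hypothetical matched coefficient
    Q = match['Q']   # hypothetical matched right-hand side

    if P.is_zero:
        # if the method turns out not to apply, let dsolve() try other hints
        raise NotImplementedError("my_new_hint cannot solve this equation")

    # arbitrary constants are named C1, C2, ...; return an Equality and leave
    # the integral unevaluated so the _Integral variant can skip integration
    C1 = Symbol('C1')
    return Eq(f(x), C1 + Integral(Q/P, x))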

Add a docstring to your function that describes the method employed. Like with anything else in SymPy, you will need to add a doctest to the docstring, in addition to real tests in test_ode.py. Try to maintain consistency with the other hint functions’ docstrings. Add your method to the list at the top of this docstring. Also, add your method to ode.rst in the docs/src directory, so that the Sphinx docs will pull its docstring into the main SymPy documentation. Be sure to make the Sphinx documentation by running make html from within the doc directory to verify that the docstring formats correctly.

If your solution method involves integrating, use Integral() instead of integrate(). This allows the user to bypass hard/slow integration by using the _Integral variant of your hint. In most cases, calling sympy.core.basic.Basic.doit() will integrate your solution. If this is not the case, you will need to write special code in _handle_Integral(). Arbitrary constants should be symbols named C1, C2, and so on. All solution methods should return an equality instance. If you need an arbitrary number of arbitrary constants, you can use constants = numbered_symbols(prefix='C', cls=Symbol, start=1). If it is possible to solve for the dependent function in a general way, do so. Otherwise, do as best as you can, but do not call solve in your ode_<hint>() function. odesimp() will attempt to solve the solution for you, so you do not need to do that. Lastly, if your ODE has a common simplification that can be applied to your solutions, you can add a special case in odesimp() for it. For example, solutions returned from the 1st_homogeneous_coeff hints often have many log() terms, so odesimp() calls logcombine() on them (it also helps to write the arbitrary constant as log(C1) instead of C1 in this case). Also consider common ways that you can rearrange your solution to have constantsimp() take better advantage of it. It is better to put simplification in odesimp() than in your method, because it can then be turned off with the simplify flag in dsolve(). If you have any extraneous simplification in your function, be sure to only run it using if match.get('simplify', True):, especially if it can be slow or if it can reduce the domain of the solution.
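
For instance, the constant-generation idiom mentioned above looks like this in isolation (a minimal sketch, nothing module-specific assumed):

from sympy import Symbol
from sympy.utilities.iterables import numbered_symbols

constants = numbered_symbols(prefix='C', cls=Symbol, start=1)
C1 = next(constants)   # Symbol('C1')
C2 = next(constants)   # Symbol('C2')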

Finally, as with every contribution to SymPy, your method will need to be tested. Add a test for each method in test_ode.py. Follow the conventions there, i.e., test the solver using dsolve(eq, f(x), hint=your_hint), and also test the solution using checkodesol() (you can put these in separate tests and skip/XFAIL if it runs too slow/doesn’t work). Be sure to call your hint specifically in dsolve(), that way the test won’t be broken simply by the introduction of another matching hint. If your method works for higher order (>1) ODEs, you will need to run sol = constant_renumber(sol, 'C', 1, order) for each solution, where order is the order of the ODE. This is because constant_renumber renumbers the arbitrary constants by printing order, which is platform dependent. Try to test every corner case of your solver, including a range of orders if it is an nth order solver, but if your solver is slow, such as if it involves hard integration, try to keep the test run time down.
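
A test following these conventions might be structured roughly as follows (a sketch only; an existing hint, 'separable', stands in for the hypothetical new hint, and the equation is arbitrary):

from sympy import Function, dsolve, checkodesol
from sympy.abc import x

f = Function('f')

def test_my_new_hint():
    eq = f(x).diff(x) - f(x)
    # call the hint explicitly so a newly introduced matching hint
    # cannot silently change which solver this test exercises
    sol = dsolve(eq, f(x), hint='separable')
    # verify the solution by substituting it back into the ODE
    assert checkodesol(eq, sol, f(x))[0]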

Feel free to refactor existing hints to avoid duplicating code or creating inconsistencies. If you can show that your method exactly duplicates an existing method, including in the simplicity and speed of obtaining the solutions, then you can remove the old, less general method. The existing code is tested extensively in test_ode.py, so if anything is broken, one of those tests will surely fail.

modelparameters.sympy.solvers.ode.allhints = ('separable', '1st_exact', '1st_linear', 'Bernoulli', 'Riccati_special_minus2', '1st_homogeneous_coeff_best', '1st_homogeneous_coeff_subs_indep_div_dep', '1st_homogeneous_coeff_subs_dep_div_indep', 'almost_linear', 'linear_coefficients', 'separable_reduced', '1st_power_series', 'lie_group', 'nth_linear_constant_coeff_homogeneous', 'nth_linear_euler_eq_homogeneous', 'nth_linear_constant_coeff_undetermined_coefficients', 'nth_linear_euler_eq_nonhomogeneous_undetermined_coefficients', 'nth_linear_constant_coeff_variation_of_parameters', 'nth_linear_euler_eq_nonhomogeneous_variation_of_parameters', 'Liouville', '2nd_power_series_ordinary', '2nd_power_series_regular', 'separable_Integral', '1st_exact_Integral', '1st_linear_Integral', 'Bernoulli_Integral', '1st_homogeneous_coeff_subs_indep_div_dep_Integral', '1st_homogeneous_coeff_subs_dep_div_indep_Integral', 'almost_linear_Integral', 'linear_coefficients_Integral', 'separable_reduced_Integral', 'nth_linear_constant_coeff_variation_of_parameters_Integral', 'nth_linear_euler_eq_nonhomogeneous_variation_of_parameters_Integral', 'Liouville_Integral')

This is a list of hints in the order that they should be preferred by classify_ode(). In general, hints earlier in the list should produce simpler solutions than those later in the list (for ODEs that fit both). For now, the order of this list is based on empirical observations by the developers of SymPy.

The hint used by dsolve() for a specific ODE can be overridden (see the docstring).

In general, _Integral hints are grouped at the end of the list, unless there is a method that returns an unevaluable integral most of the time (which goes near the end of the list anyway). default, all, best, and all_Integral meta-hints should not be included in this list, but _best and _Integral hints should be included.

modelparameters.sympy.solvers.ode.check_linear_2eq_order1(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.check_linear_2eq_order2(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.check_linear_3eq_order1(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.check_linear_neq_order1(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.check_nonlinear_2eq_order1(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.check_nonlinear_2eq_order2(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.check_nonlinear_3eq_order1(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.check_nonlinear_3eq_order2(eq, func, func_coef)[source]
modelparameters.sympy.solvers.ode.checkinfsol(eq, infinitesimals, func=None, order=None)[source]

This function is used to check if the given infinitesimals are the actual infinitesimals of the given first order differential equation. This method is specific to the Lie Group Solver of ODEs.

As of now, it simply checks by substituting the infinitesimals into the partial differential equation.

\[\frac{\partial \eta}{\partial x} + \left(\frac{\partial \eta}{\partial y} - \frac{\partial \xi}{\partial x}\right)*h - \frac{\partial \xi}{\partial y}*h^{2} - \xi\frac{\partial h}{\partial x} - \eta\frac{\partial h}{\partial y} = 0\]

where \(\eta\) and \(\xi\) are the infinitesimals and \(h(x, y) = \frac{dy}{dx}\)

The infinitesimals should be given in the form of a list of dicts [{xi(x, y): inf, eta(x, y): inf}], corresponding to the output of the function infinitesimals. It returns a list of values of the form [(True/False, sol)] where sol is the value obtained after substituting the infinitesimals in the PDE. If it is True, then sol would be 0.
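
A minimal usage sketch, reusing the ODE from the infinitesimals() example further below and the standard sympy namespace (the commented values are indicative only):

from sympy import Function
from sympy.solvers.ode import infinitesimals, checkinfsol
from sympy.abc import x

f = Function('f')
eq = f(x).diff(x) - x**2*f(x)

infs = infinitesimals(eq)        # e.g. [{eta(x, f(x)): exp(x**3/3), xi(x, f(x)): 0}]
checks = checkinfsol(eq, infs)   # list of (True/False, substituted value) pairs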

modelparameters.sympy.solvers.ode.checkodesol(ode, sol, func=None, order='auto', solve_for_func=True)[source]

Substitutes sol into ode and checks that the result is 0.

This only works when func is one function, like f(x). sol can be a single solution or a list of solutions. Each solution may be an Equality that the solution satisfies, e.g. Eq(f(x), C1), Eq(f(x) + C1, 0); or simply an Expr, e.g. f(x) - C1. In most cases it will not be necessary to explicitly identify the function, but if the function cannot be inferred from the original equation it can be supplied through the func argument.

If a sequence of solutions is passed, the same sort of container will be used to return the result for each solution.

It tries the following methods, in order, until it finds zero equivalence:

  1. Substitute the solution for f in the original equation. This only works if ode is solved for f. It will attempt to solve it first unless solve_for_func == False.

  2. Take n derivatives of the solution, where n is the order of ode, and check to see if that is equal to the solution. This only works on exact ODEs.

  3. Take the 1st, 2nd, …, nth derivatives of the solution, each time solving for the derivative of f of that order (this will always be possible because f is a linear operator). Then back substitute each derivative into ode in reverse order.

This function returns a tuple. The first item in the tuple is True if the substitution results in 0, and False otherwise. The second item in the tuple is what the substitution results in. It should always be 0 if the first item is True. Note that sometimes this function will return False, but with an expression that is identically equal to 0, instead of returning True. This is because simplify() cannot reduce the expression to 0. If an expression returned by this function vanishes identically, then sol really is a solution to ode.

If this function seems to hang, it is probably because of a hard simplification.

To use this function to test, test the first item of the tuple.

Examples

>>> from .. import Eq, Function, checkodesol, symbols
>>> x, C1 = symbols('x,C1')
>>> f = Function('f')
>>> checkodesol(f(x).diff(x), Eq(f(x), C1))
(True, 0)
>>> assert checkodesol(f(x).diff(x), C1)[0]
>>> assert not checkodesol(f(x).diff(x), x)[0]
>>> checkodesol(f(x).diff(x, 2), x**2)
(False, 2)
modelparameters.sympy.solvers.ode.checksysodesol(eqs, sols, func=None)[source]

Substitutes the corresponding sols for each function into each equation in eqs and checks that the result of each substitution is 0. The equations and solutions passed can be any iterable.

This only works when each solution involves one function only, like x(t) or y(t). For each function, sols can contain a single solution or a list of solutions. In most cases it will not be necessary to explicitly identify the function, but if the function cannot be inferred from the original equation it can be supplied through the func argument.

When a sequence of equations is passed, the same sequence is used to return the result for each equation, with each function substituted with the corresponding solution.

It tries the following method to find zero equivalence for each equation:

Substitute the solutions for the functions, like x(t) and y(t), into the original equations containing those functions. This function returns a tuple. The first item in the tuple is True if the substitution result for each equation is 0, and False otherwise. The second item in the tuple is what the substitution results in. Each element of the list should always be 0, corresponding to each equation, if the first item is True. Note that sometimes this function may return False, but with an expression that is identically equal to 0, instead of returning True. This is because simplify() cannot reduce the expression to 0. If an expression returned by this function vanishes identically, then sols really is a solution to eqs.

If this function seems to hang, it is probably because of a difficult simplification.

Examples

>>> from .. import Eq, diff, symbols, sin, cos, exp, sqrt, S
>>> from .ode import checksysodesol
>>> C1, C2 = symbols('C1:3')
>>> t = symbols('t')
>>> x, y = symbols('x, y', function=True)
>>> eq = (Eq(diff(x(t),t), x(t) + y(t) + 17), Eq(diff(y(t),t), -2*x(t) + y(t) + 12))
>>> sol = [Eq(x(t), (C1*sin(sqrt(2)*t) + C2*cos(sqrt(2)*t))*exp(t) - S(5)/3),
... Eq(y(t), (sqrt(2)*C1*cos(sqrt(2)*t) - sqrt(2)*C2*sin(sqrt(2)*t))*exp(t) - S(46)/3)]
>>> checksysodesol(eq, sol)
(True, [0, 0])
>>> eq = (Eq(diff(x(t),t),x(t)*y(t)**4), Eq(diff(y(t),t),y(t)**3))
>>> sol = [Eq(x(t), C1*exp(-1/(4*(C2 + t)))), Eq(y(t), -sqrt(2)*sqrt(-1/(C2 + t))/2),
... Eq(x(t), C1*exp(-1/(4*(C2 + t)))), Eq(y(t), sqrt(2)*sqrt(-1/(C2 + t))/2)]
>>> checksysodesol(eq, sol)
(True, [0, 0])
modelparameters.sympy.solvers.ode.classify_ode(eq, func=None, dict=False, ics=None, **kwargs)[source]

Returns a tuple of possible dsolve() classifications for an ODE.

The tuple is ordered so that the first item is the classification that dsolve() uses to solve the ODE by default. In general, classifications near the beginning of the list will produce better solutions faster than those near the end, though there are always exceptions. To make dsolve() use a different classification, use dsolve(ODE, func, hint=<classification>). See also the dsolve() docstring for different meta-hints you can use.

If dict is true, classify_ode() will return a dictionary of hint:match expression terms. This is intended for internal use by dsolve(). Note that because dictionaries are ordered arbitrarily, this will most likely not be in the same order as the tuple.

You can get help on different hints by executing help(ode.ode_hintname), where hintname is the name of the hint without _Integral.

See allhints or the ode docstring for a list of all supported hints that can be returned from classify_ode().

Notes

These are remarks on hint names.

_Integral

If a classification has _Integral at the end, it will return the expression with an unevaluated Integral class in it. Note that a hint may do this anyway if integrate() cannot do the integral, though just using an _Integral will do so much faster. Indeed, an _Integral hint will always be faster than its corresponding hint without _Integral because integrate() is an expensive routine. If dsolve() hangs, it is probably because integrate() is hanging on a tough or impossible integral. Try using an _Integral hint or all_Integral to get it to return something.

Note that some hints do not have _Integral counterparts. This is because integrate() is not used in solving the ODE for those methods. For example, nth order linear homogeneous ODEs with constant coefficients do not require integration to solve, so there is no nth_linear_homogeneous_constant_coeff_Integrate hint. You can easily evaluate any unevaluated Integrals in an expression by doing expr.doit().
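
A small sketch of this workflow (the equation is arbitrary; '1st_linear_Integral' is one of the documented _Integral hints):

from sympy import Function, dsolve
from sympy.abc import x

f = Function('f')
eq = f(x).diff(x) + f(x)/x - x     # a first order linear ODE

# keep the integral unevaluated (fast), then evaluate it explicitly later
sol = dsolve(eq, f(x), hint='1st_linear_Integral')
evaluated = sol.doit()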

Ordinals

Some hints contain an ordinal such as 1st_linear. This is to help differentiate them from other hints, as well as from other methods that may not be implemented yet. If a hint has nth in it, such as the nth_linear hints, this means that the method used applies to ODEs of any order.

indep and dep

Some hints contain the words indep or dep. These reference the independent variable and the dependent function, respectively. For example, if an ODE is in terms of f(x), then indep will refer to x and dep will refer to f.

subs

If a hint has the word subs in it, it means that the ODE is solved by substituting the expression given after the word subs for a single dummy variable. This is usually in terms of indep and dep as above. The substituted expression will be written only in characters allowed for names of Python objects, meaning operators will be spelled out. For example, indep/dep will be written as indep_div_dep.

coeff

The word coeff in a hint refers to the coefficients of something in the ODE, usually of the derivative terms. See the docstring for the individual methods for more info (help(ode)). This is in contrast to coefficients, as in undetermined_coefficients, which refers to the common name of a method.

_best

Methods that have more than one fundamental way to solve will have a hint for each sub-method and a _best meta-classification. This will evaluate all hints and return the best, using the same considerations as the normal best meta-hint.

Examples

>>> from .. import Function, classify_ode, Eq
>>> from ..abc import x
>>> f = Function('f')
>>> classify_ode(Eq(f(x).diff(x), 0), f(x))
('separable', '1st_linear', '1st_homogeneous_coeff_best',
'1st_homogeneous_coeff_subs_indep_div_dep',
'1st_homogeneous_coeff_subs_dep_div_indep',
'1st_power_series', 'lie_group',
'nth_linear_constant_coeff_homogeneous',
'separable_Integral', '1st_linear_Integral',
'1st_homogeneous_coeff_subs_indep_div_dep_Integral',
'1st_homogeneous_coeff_subs_dep_div_indep_Integral')
>>> classify_ode(f(x).diff(x, 2) + 3*f(x).diff(x) + 2*f(x) - 4)
('nth_linear_constant_coeff_undetermined_coefficients',
'nth_linear_constant_coeff_variation_of_parameters',
'nth_linear_constant_coeff_variation_of_parameters_Integral')
modelparameters.sympy.solvers.ode.classify_sysode(eq, funcs=None, **kwargs)[source]

Returns a dictionary of parameter names and values that define the system of ordinary differential equations in eq. The parameters are further used in dsolve() for solving that system.

The parameter names and values are:

‘is_linear’ (boolean), which tells whether the given system is linear. Note that “linear” here refers to the operator: terms such as x*diff(x,t) are nonlinear, whereas terms like sin(t)*diff(x,t) are still linear operators.

‘func’ (list) contains the Functions that appear with a derivative in the ODE, i.e. those that we are trying to solve the ODE for.

‘order’ (dict) with the maximum derivative for each element of the ‘func’ parameter.

‘func_coeff’ (dict) with the coefficient for each triple (equation number, function, order). The coefficients are those subexpressions that do not appear in ‘func’, and hence can be considered constant for purposes of ODE solving.

‘eq’ (list) with the equations from eq, sympified and transformed into expressions (we are solving for these expressions to be zero).

‘no_of_equations’ (int) is the number of equations (same as len(eq)).

‘type_of_equation’ (string) is an internal classification of the type of ODE.

References

  • http://eqworld.ipmnet.ru/en/solutions/sysode/sode-toc1.htm

  • A. D. Polyanin and A. V. Manzhirov, Handbook of Mathematics for Engineers and Scientists

Examples

>>> from .. import Function, Eq, symbols, diff
>>> from .ode import classify_sysode
>>> from ..abc import t
>>> f, x, y = symbols('f, x, y', function=True)
>>> k, l, m, n = symbols('k, l, m, n', Integer=True)
>>> x1 = diff(x(t), t) ; y1 = diff(y(t), t)
>>> x2 = diff(x(t), t, t) ; y2 = diff(y(t), t, t)
>>> eq = (Eq(5*x1, 12*x(t) - 6*y(t)), Eq(2*y1, 11*x(t) + 3*y(t)))
>>> classify_sysode(eq)
{'eq': [-12*x(t) + 6*y(t) + 5*Derivative(x(t), t), -11*x(t) - 3*y(t) + 2*Derivative(y(t), t)],
'func': [x(t), y(t)], 'func_coeff': {(0, x(t), 0): -12, (0, x(t), 1): 5, (0, y(t), 0): 6,
(0, y(t), 1): 0, (1, x(t), 0): -11, (1, x(t), 1): 0, (1, y(t), 0): -3, (1, y(t), 1): 2},
'is_linear': True, 'no_of_equation': 2, 'order': {x(t): 1, y(t): 1}, 'type_of_equation': 'type1'}
>>> eq = (Eq(diff(x(t),t), 5*t*x(t) + t**2*y(t)), Eq(diff(y(t),t), -t**2*x(t) + 5*t*y(t)))
>>> classify_sysode(eq)
{'eq': [-t**2*y(t) - 5*t*x(t) + Derivative(x(t), t), t**2*x(t) - 5*t*y(t) + Derivative(y(t), t)],
'func': [x(t), y(t)], 'func_coeff': {(0, x(t), 0): -5*t, (0, x(t), 1): 1, (0, y(t), 0): -t**2,
(0, y(t), 1): 0, (1, x(t), 0): t**2, (1, x(t), 1): 0, (1, y(t), 0): -5*t, (1, y(t), 1): 1},
'is_linear': True, 'no_of_equation': 2, 'order': {x(t): 1, y(t): 1}, 'type_of_equation': 'type4'}
modelparameters.sympy.solvers.ode.constant_renumber(expr, symbolname, startnumber, endnumber)[source]

Renumber arbitrary constants in expr to have numbers 1 through N where N is endnumber - startnumber + 1 at most. In the process, this reorders expression terms in a standard way.

This is a simple function that goes through and renumbers any Symbol with a name in the form symbolname + num where num is in the range from startnumber to endnumber.

Symbols are renumbered based on .sort_key(), so they should be numbered roughly in the order that they appear in the final, printed expression. Note that this ordering is based in part on hashes, so it can produce different results on different machines.

The structure of this function is very similar to that of constantsimp().

Examples

>>> from .. import symbols, Eq, pprint
>>> from .ode import constant_renumber
>>> x, C0, C1, C2, C3, C4 = symbols('x,C:5')

Only constants in the given range (inclusive) are renumbered; the renumbering always starts from 1:

>>> constant_renumber(C1 + C3 + C4, 'C', 1, 3)
C1 + C2 + C4
>>> constant_renumber(C0 + C1 + C3 + C4, 'C', 2, 4)
C0 + 2*C1 + C2
>>> constant_renumber(C0 + 2*C1 + C2, 'C', 0, 1)
C1 + 3*C2
>>> pprint(C2 + C1*x + C3*x**2)
                2
C1*x + C2 + C3*x
>>> pprint(constant_renumber(C2 + C1*x + C3*x**2, 'C', 1, 3))
                2
C1 + C2*x + C3*x
modelparameters.sympy.solvers.ode.constantsimp(expr, constants)[source]

Simplifies an expression with arbitrary constants in it.

This function is written specifically to work with dsolve(), and is not intended for general use.

Simplification is done by “absorbing” the arbitrary constants into other arbitrary constants, numbers, and symbols that they are not independent of.

The symbols must all have the same name with numbers after them, for example, C1, C2, C3. The symbolname here would be ‘C’, the startnumber would be 1, and the endnumber would be 3. If the arbitrary constants are independent of the variable x, then the independent symbol would be x. There is no need to specify the dependent function, such as f(x), because it already has the independent symbol, x, in it.

Because terms are “absorbed” into arbitrary constants and because constants are renumbered after simplifying, the arbitrary constants in expr are not necessarily equal to the ones of the same name in the returned result.

If two or more arbitrary constants are added, multiplied, or raised to the power of each other, they are first absorbed together into a single arbitrary constant. Then the new constant is combined into other terms if necessary.

Absorption of constants is done with limited assistance:

  1. terms of Adds are collected to try to join constants so e^x (C_1 cos(x) + C_2 cos(x)) will simplify to e^x C_1 cos(x);

  2. powers with exponents that are Adds are expanded so e^{C_1 + x} will be simplified to C_1 e^x.

Use constant_renumber() to renumber constants after simplification or else arbitrary numbers on constants may appear, e.g. C_1 + C_3 x.

In rare cases, a single constant can be “simplified” into two constants. Every differential equation solution should have as many arbitrary constants as the order of the differential equation. The result here will be technically correct, but it may, for example, have C_1 and C_2 in an expression, when C_1 is actually equal to C_2. Use your discretion in such situations, and also take advantage of the ability to use hints in dsolve().

Examples

>>> from .. import symbols
>>> from .ode import constantsimp
>>> C1, C2, C3, x, y = symbols('C1, C2, C3, x, y')
>>> constantsimp(2*C1*x, {C1, C2, C3})
C1*x
>>> constantsimp(C1 + 2 + x, {C1, C2, C3})
C1 + x
>>> constantsimp(C1*C2 + 2 + C2 + C3*x, {C1, C2, C3})
C1 + C3*x
modelparameters.sympy.solvers.ode.dsolve(eq, func=None, hint='default', simplify=True, ics=None, xi=None, eta=None, x0=0, n=6, **kwargs)[source]

Solves any (supported) kind of ordinary differential equation and system of ordinary differential equations.

This classification is used when the number of equations in eq is one.

Usage

dsolve(eq, f(x), hint) -> Solve ordinary differential equation eq for function f(x), using method hint.

Details

eq can be any supported ordinary differential equation (see the ode docstring for supported methods). This can either be an Equality, or an expression, which is assumed to be equal to 0.

f(x) is a function of one variable whose derivatives in that variable make up the ordinary differential equation eq. In many cases it is not necessary to provide this; it will be autodetected (and an error raised if it couldn’t be detected).

hint is the solving method that you want dsolve to use. Use classify_ode(eq, f(x)) to get all of the possible hints for an ODE. The default hint, default, will use whatever hint is returned first by classify_ode(). See Hints below for more options that you can use for hint.

simplify enables simplification by odesimp(). See its docstring for more information. Turn this off, for example, to disable solving of solutions for func or simplification of arbitrary constants. It will still integrate with this hint. Note that the solution may contain more arbitrary constants than the order of the ODE with this option enabled.

xi and eta are the infinitesimal functions of an ordinary differential equation. They are the infinitesimals of the Lie group of point transformations for which the differential equation is invariant. The user can specify values for the infinitesimals. If nothing is specified, xi and eta are calculated using infinitesimals() with the help of various heuristics.

ics is the set of boundary conditions for the differential equation. It should be given in the form of {f(x0): x1, f(x).diff(x).subs(x, x2): x3} and so on. For now, initial conditions are implemented only for power series solutions of first-order differential equations, which should be given in the form of {f(x0): x1} (see issue 4720). If nothing is specified for this case, f(0) is assumed to be C0 and the power series solution is calculated about 0.

x0 is the point about which the power series solution of a differential equation is to be evaluated.

n gives the exponent of the dependent variable up to which the power series solution of a differential equation is to be evaluated.

Hints

Aside from the various solving methods, there are also some meta-hints that you can pass to dsolve():

default:

This uses whatever hint is returned first by classify_ode(). This is the default argument to dsolve().

all:

To make dsolve() apply all relevant classification hints, use dsolve(ODE, func, hint="all"). This will return a dictionary of hint:solution terms. If a hint causes dsolve to raise the NotImplementedError, the value of that hint’s key will be the exception object raised. The dictionary will also include some special keys:

  • order: The order of the ODE. See also ode_order() in deutils.py.

  • best: The simplest hint; what would be returned by best below.

  • best_hint: The hint that would produce the solution given by best. If more than one hint produces the best solution, the first one in the tuple returned by classify_ode() is chosen.

  • default: The solution that would be returned by default. This is the one produced by the hint that appears first in the tuple returned by classify_ode().

all_Integral:

This is the same as all, except if a hint also has a corresponding _Integral hint, it only returns the _Integral hint. This is useful if all causes dsolve() to hang because of a difficult or impossible integral. This meta-hint will also be much faster than all, because integrate() is an expensive routine.

best:

To have dsolve() try all methods and return the simplest one. This takes into account whether the solution is solvable in the function, whether it contains any Integral classes (i.e. unevaluatable integrals), and which one is the shortest in size.

See also the classify_ode() docstring for more info on hints, and the ode docstring for a list of all supported hints.
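
A brief sketch of the all meta-hint described above (the special keys in the comments are the ones listed in this docstring):

from sympy import Function, dsolve
from sympy.abc import x

f = Function('f')
eq = f(x).diff(x) + f(x)

results = dsolve(eq, f(x), hint='all')

order = results['order']            # order of the ODE
best = results['best']              # simplest solution found
best_hint = results['best_hint']    # hint that produced 'best'
# every remaining key is a matched hint mapping to its solution
# (or to the NotImplementedError it raised)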

Tips

  • You can declare the derivative of an unknown function this way:

    >>> from .. import Function, Derivative
    >>> from ..abc import x # x is the independent variable
    >>> f = Function("f")(x) # f is a function of x
    >>> # f_ will be the derivative of f with respect to x
    >>> f_ = Derivative(f, x)
    
  • See test_ode.py for many tests, which also serve as a set of examples for how to use dsolve().

  • dsolve() always returns an Equality class (except for the case when the hint is all or all_Integral). If possible, it solves the solution explicitly for the function being solved for. Otherwise, it returns an implicit solution.

  • Arbitrary constants are symbols named C1, C2, and so on.

  • Because all solutions should be mathematically equivalent, some hints may return the exact same result for an ODE. Often, though, two different hints will return the same solution formatted differently. The two should be equivalent. Also note that sometimes the values of the arbitrary constants in two different solutions may not be the same, because one constant may have “absorbed” other constants into it.

  • Do help(ode.ode_<hintname>) to get more information on a specific hint, where <hintname> is the name of a hint without _Integral.

Usage

dsolve(eq, func) -> Solve a system of ordinary differential equations eq for func being a list of functions, e.g. x(t), y(t), z(t), where the number of functions in the list depends upon the number of equations provided in eq.

Details

eq can be any supported system of ordinary differential equations. This can either be an Equality, or an expression, which is assumed to be equal to 0.

func holds x(t) and y(t), which are functions of one variable that together with some of their derivatives make up the system of ordinary differential equations eq. It is not necessary to provide this; it will be autodetected (and an error raised if it couldn’t be detected).

Hints

The hints are formed from the parameters returned by classify_sysode; combining them gives the hint name that is later used to form the method name.

>>> from .. import Function, dsolve, Eq, Derivative, sin, cos, symbols
>>> from ..abc import x
>>> f = Function('f')
>>> dsolve(Derivative(f(x), x, x) + 9*f(x), f(x))
Eq(f(x), C1*sin(3*x) + C2*cos(3*x))
>>> eq = sin(x)*cos(f(x)) + cos(x)*sin(f(x))*f(x).diff(x)
>>> dsolve(eq, hint='1st_exact')
[Eq(f(x), -acos(C1/cos(x)) + 2*pi), Eq(f(x), acos(C1/cos(x)))]
>>> dsolve(eq, hint='almost_linear')
[Eq(f(x), -acos(C1/sqrt(-cos(x)**2)) + 2*pi), Eq(f(x), acos(C1/sqrt(-cos(x)**2)))]
>>> t = symbols('t')
>>> x, y = symbols('x, y', function=True)
>>> eq = (Eq(Derivative(x(t),t), 12*t*x(t) + 8*y(t)), Eq(Derivative(y(t),t), 21*x(t) + 7*t*y(t)))
>>> dsolve(eq)
[Eq(x(t), C1*x0 + C2*x0*Integral(8*exp(Integral(7*t, t))*exp(Integral(12*t, t))/x0**2, t)),
Eq(y(t), C1*y0 + C2(y0*Integral(8*exp(Integral(7*t, t))*exp(Integral(12*t, t))/x0**2, t) +
exp(Integral(7*t, t))*exp(Integral(12*t, t))/x0))]
>>> eq = (Eq(Derivative(x(t),t),x(t)*y(t)*sin(t)), Eq(Derivative(y(t),t),y(t)**2*sin(t)))
>>> dsolve(eq)
{Eq(x(t), -exp(C1)/(C2*exp(C1) - cos(t))), Eq(y(t), -1/(C1 - cos(t)))}
modelparameters.sympy.solvers.ode.get_numbered_constants(eq, num=1, start=1, prefix='C')[source]

Returns a list of constants that do not occur in eq already.
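
A small usage sketch (assuming the standard sympy namespace; with num greater than one a tuple of fresh constants is returned, skipping any C-named symbols already present in eq):

from sympy import Function, Symbol
from sympy.solvers.ode import get_numbered_constants
from sympy.abc import x

f = Function('f')
C1 = Symbol('C1')
eq = f(x).diff(x) + C1*f(x)              # C1 already occurs in eq

newconsts = get_numbered_constants(eq, num=2)   # e.g. (C2, C3)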

modelparameters.sympy.solvers.ode.homogeneous_order(eq, *symbols)[source]

Returns the order n if eq is homogeneous and None if it is not homogeneous.

Determines if a function is homogeneous and if so of what order. A function \(F(x, y, \cdots)\) is homogeneous of order \(n\) if \(F(tx, ty, \cdots) = t^n F(x, y, \cdots)\).

If the function is of two variables, F(x, y), then F being homogeneous of any order is equivalent to being able to rewrite F(x, y) as G(x/y) or H(y/x). This fact is used to solve 1st order ordinary differential equations whose coefficients are homogeneous of the same order (see the docstrings of ode_1st_homogeneous_coeff_subs_dep_div_indep() and ode_1st_homogeneous_coeff_subs_indep_div_dep()).

Symbols can be functions, but every argument of the function must be a symbol, and the arguments of the function that appear in the expression must match those given in the list of symbols. If a declared function appears with different arguments than given in the list of symbols, None is returned.

Examples

>>> from .. import Function, homogeneous_order, sqrt
>>> from ..abc import x, y
>>> f = Function('f')
>>> homogeneous_order(f(x), f(x)) is None
True
>>> homogeneous_order(f(x,y), f(y, x), x, y) is None
True
>>> homogeneous_order(f(x), f(x), x)
1
>>> homogeneous_order(x**2*f(x)/sqrt(x**2+f(x)**2), x, f(x))
2
>>> homogeneous_order(x**2+f(x), x, f(x)) is None
True
modelparameters.sympy.solvers.ode.infinitesimals(eq, func=None, order=None, hint='default', match=None)[source]

The infinitesimal functions of an ordinary differential equation, \(\xi(x,y)\) and \(\eta(x,y)\), are the infinitesimals of the Lie group of point transformations for which the differential equation is invariant. So, the ODE \(y'=f(x,y)\) would admit a Lie group \(x^*=X(x,y;\varepsilon)=x+\varepsilon\xi(x,y)\), \(y^*=Y(x,y;\varepsilon)=y+\varepsilon\eta(x,y)\) such that \((y^*)'=f(x^*, y^*)\). A change of coordinates, to \(r(x,y)\) and \(s(x,y)\), can be performed so this Lie group becomes the translation group, \(r^*=r\) and \(s^*=s+\varepsilon\). They are tangents to the coordinate curves of the new system.

Consider the transformation \((x, y) \to (X, Y)\) such that the differential equation remains invariant. \(\xi\) and \(\eta\) are the tangents to the transformed coordinates \(X\) and \(Y\), at \(\varepsilon=0\).

\[\left(\frac{\partial X(x,y;\varepsilon)}{\partial\varepsilon }\right)|_{\varepsilon=0} = \xi, \left(\frac{\partial Y(x,y;\varepsilon)}{\partial\varepsilon }\right)|_{\varepsilon=0} = \eta,\]

The infinitesimals can be found by solving the following PDE:

>>> from .. import Function, diff, Eq, pprint
>>> from ..abc import x, y
>>> xi, eta, h = map(Function, ['xi', 'eta', 'h'])
>>> h = h(x, y)  # dy/dx = h
>>> eta = eta(x, y)
>>> xi = xi(x, y)
>>> genform = Eq(eta.diff(x) + (eta.diff(y) - xi.diff(x))*h
... - (xi.diff(y))*h**2 - xi*(h.diff(x)) - eta*(h.diff(y)), 0)
>>> pprint(genform)
/d               d           \                     d              2       d
|--(eta(x, y)) - --(xi(x, y))|*h(x, y) - eta(x, y)*--(h(x, y)) - h (x, y)*--(x
\dy              dx          /                     dy                     dy

                    d             d
i(x, y)) - xi(x, y)*--(h(x, y)) + --(eta(x, y)) = 0
                    dx            dx

The above-mentioned PDE is not trivial, and can be solved only by making intelligent assumptions for xi and eta (heuristics). Once an infinitesimal is found, the attempt to find more heuristics stops. This is done to optimise the speed of solving the differential equation. If a list of all the infinitesimals is needed, hint should be flagged as all, which gives the complete list of infinitesimals. If the infinitesimals for a particular heuristic need to be found, they can be passed as a flag to hint.

Examples

>>> from .. import Function, diff
>>> from .ode import infinitesimals
>>> from ..abc import x
>>> f = Function('f')
>>> eq = f(x).diff(x) - x**2*f(x)
>>> infinitesimals(eq)
[{eta(x, f(x)): exp(x**3/3), xi(x, f(x)): 0}]

References

  • Solving differential equations by Symmetry Groups, John Starrett, pp. 1 - pp. 14

modelparameters.sympy.solvers.ode.lie_heuristic_abaco1_product(match, comp=False)[source]

The second heuristic uses the following two assumptions on xi and eta

\[\eta = 0, \xi = f(x)*g(y)\]
\[\eta = f(x)*g(y), \xi = 0\]

The first assumption of this heuristic holds good if \(\frac{1}{h^{2}}\frac{\partial^2}{\partial x \partial y}\log(h)\) is separable in x and y; then the separated factor containing x is f(x), and g(y) is obtained by

\[e^{\int f\frac{\partial}{\partial x}\left(\frac{1}{f*h}\right)\,dy}\]

provided \(f\frac{\partial}{\partial x}\left(\frac{1}{f \cdot h}\right)\) is a function of y only.

The second assumption holds good if \(\frac{dy}{dx} = h(x, y)\) is rewritten as \(\frac{dy}{dx} = \frac{1}{h(y, x)}\) and the same properties of the first assumption are satisfied. After obtaining f(x) and g(y), the coordinates are again interchanged, to get \(\eta\) as \(f(x) g(y)\).

References

  • E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 7 - pp. 8

modelparameters.sympy.solvers.ode.lie_heuristic_abaco1_simple(match, comp=False)[source]

The first heuristic uses the following four sets of assumptions on xi and eta

\[\xi = 0, \eta = f(x)\]
\[\xi = 0, \eta = f(y)\]
\[\xi = f(x), \eta = 0\]
\[\xi = f(y), \eta = 0\]

The success of this heuristic is determined by algebraic factorisation. For the first assumption, where \(\xi = 0\) and \(\eta\) is a function of x, the PDE

\[\frac{\partial \eta}{\partial x} + (\frac{\partial \eta}{\partial y} - \frac{\partial \xi}{\partial x})*h - \frac{\partial \xi}{\partial y}*h^{2} - \xi*\frac{\partial h}{\partial x} - \eta*\frac{\partial h}{\partial y} = 0\]

reduces to \(f'(x) - f\frac{\partial h}{\partial y} = 0\). If \(\frac{\partial h}{\partial y}\) is a function of x, then this can usually be integrated easily. A similar idea is applied to the other 3 assumptions as well.

References

  • E.S. Cheb-Terrab, L.G.S. Duarte and L.A.C.P. da Mota, Computer Algebra Solving of First Order ODEs Using Symmetry Methods, pp. 8

modelparameters.sympy.solvers.ode.lie_heuristic_abaco2_similar(match, comp=False)[source]

This heuristic uses the following two assumptions on xi and eta

\[\eta = g(x), \xi = f(x)\]
\[\eta = f(y), \xi = g(y)\]

For the first assumption,

  1. First \(\frac{\frac{\partial h}{\partial y}}{\frac{\partial^{2} h}{\partial y^{2}}}\) is calculated. Let us say this value is A

  2. If this is constant, then h is matched to the form \(A(x) + B(x)e^{\frac{y}{C}}\); then \(\frac{e^{\int \frac{A(x)}{C}\,dx}}{B(x)}\) gives f(x) and \(A(x) f(x)\) gives g(x)

  3. Otherwise \(\frac{\frac{\partial A}{\partial X}}{\frac{\partial A}{\partial Y}} = \gamma\) is calculated. If

    a] \(\gamma\) is a function of x alone

    b] \(\frac{\gamma\frac{\partial h}{\partial y} - \gamma'(x) - \frac{\partial h}{\partial x}}{h + \gamma} = G\) is a function of x alone, then \(e^{\int G\,dx}\) gives f(x) and \(-\gamma f(x)\) gives g(x)

The second assumption holds good if \(\frac{dy}{dx} = h(x, y)\) is rewritten as \(\frac{dy}{dx} = \frac{1}{h(y, x)}\) and the same properties of the first assumption are satisfied. After obtaining f(x) and g(x), the coordinates are again interchanged, to get \(\xi\) as \(f(x^*)\) and \(\eta\) as \(g(y^*)\).

References

  • E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10 - pp. 12

modelparameters.sympy.solvers.ode.lie_heuristic_abaco2_unique_general(match, comp=False)[source]

This heuristic determines whether infinitesimals of the form \(\eta = f(x)\), \(\xi = g(y)\) exist, without making any assumptions on h.

The complete sequence of steps is given in the paper mentioned below.

References

  • E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10 - pp. 12

modelparameters.sympy.solvers.ode.lie_heuristic_abaco2_unique_unknown(match, comp=False)[source]

This heuristic assumes the presence of unknown functions or known functions with non-integer powers.

  1. A list of all functions and non-integer powers containing x and y is built.

  2. Loop over each element f in the list and find \(\frac{\frac{\partial f}{\partial x}}{\frac{\partial f}{\partial y}} = R\)

    If R is separable in x and y, let X be the factor containing x. Then

    a] Check if \(\xi = X\) and \(\eta = -\frac{X}{R}\) satisfy the PDE. If yes, then return \(\xi\) and \(\eta\).

    b] Check if \(\xi = \frac{-R}{X}\) and \(\eta = -\frac{1}{X}\) satisfy the PDE. If yes, then return \(\xi\) and \(\eta\).

    If not, then check if

    a] \(\xi = -R,\eta = 1\)

    b] \(\xi = 1, \eta = -\frac{1}{R}\)

    are solutions.

References

  • E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10 - pp. 12

modelparameters.sympy.solvers.ode.lie_heuristic_bivariate(match, comp=False)[source]

The third heuristic assumes the infinitesimals \(\xi\) and \(\eta\) to be bivariate polynomials in x and y. The assumption made here is that h is a rational function in x and y, though that may not be necessary for the infinitesimals to be bivariate polynomials. The coefficients of the infinitesimals are found by substituting them into the PDE and grouping polynomial terms of the same order; since the coefficients form a linear system, that system is solved and checked for non-trivial solutions. The degree of the assumed bivariate polynomials is increased until a certain maximum value is reached.

References

  • Lie Groups and Differential Equations pp. 327 - pp. 329

modelparameters.sympy.solvers.ode.lie_heuristic_chi(match, comp=False)[source]

The aim of the fourth heuristic is to find the function \(\chi(x, y)\) that satisfies the PDE \(\frac{d\chi}{dx} + h\frac{d\chi}{dy} - \frac{\partial h}{\partial y}\chi = 0\).

This assumes \(\chi\) to be a bivariate polynomial in x and y. By intuition, h should be a rational function in x and y. The method used here is to substitute a general bivariate polynomial for \(\chi\), increasing its degree until a certain maximum degree is reached. The coefficients of the polynomials are calculated by collecting terms of the same order in x and y.

After finding \(\chi\), the next step is to use \(\eta = \xi*h + \chi\) to determine \(\xi\) and \(\eta\). This can be done by dividing \(\chi\) by h, which gives \(-\xi\) as the quotient and \(\eta\) as the remainder.

References

  • E.S. Cheb-Terrab, L.G.S. Duarte and L.A.C.P. da Mota, Computer Algebra Solving of First Order ODEs Using Symmetry Methods, pp. 8

modelparameters.sympy.solvers.ode.lie_heuristic_function_sum(match, comp=False)[source]

This heuristic uses the following two assumptions on xi and eta

\[\eta = 0, \xi = f(x) + g(y)\]
\[\eta = f(x) + g(y), \xi = 0\]

The first assumption of this heuristic holds good if

\[\frac{\partial}{\partial y}[(h\frac{\partial^{2}}{ \partial x^{2}}(h^{-1}))^{-1}]\]

is separable in x and y,

  1. The separated factor containing y is \(\frac{\partial g}{\partial y}\). From this g(y) can be determined.

  2. The separated factor containing x is f''(x).

  3. \(h\frac{\partial^{2}}{\partial x^{2}}(h^{-1})\) equals \(\frac{f''(x)}{f(x) + g(y)}\). From this f(x) can be determined.

The second assumption holds good if \(\frac{dy}{dx} = h(x, y)\) is rewritten as \(\frac{dy}{dx} = \frac{1}{h(y, x)}\) and the same properties of the first assumption are satisfied. After obtaining f(x) and g(y), the coordinates are again interchanged to get \(\eta\) as f(x) + g(y).

For both assumptions, the constant factors are separated among g(y) and f’’(x), such that f’’(x) obtained from 3] is the same as that obtained from 2]. If not possible, then this heuristic fails.

References

  • E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 7 - pp. 8

modelparameters.sympy.solvers.ode.lie_heuristic_linear(match, comp=False)[source]

This heuristic assumes

  1. \(\xi = ax + by + c\) and

  2. \(\eta = fx + gy + h\)

After substituting these assumptions into the determining PDE, it reduces to

\[f + (g - a)h - bh^{2} - (ax + by + c)\frac{\partial h}{\partial x} - (fx + gy + c)\frac{\partial h}{\partial y}\]

Solving the reduced PDE using the method of characteristics becomes impractical. The method followed instead is to group similar terms and solve the resulting system of linear equations. The difference from the bivariate heuristic is that h need not be a rational function in this case.

References

  • E.S. Cheb-Terrab, A.D. Roche, Symmetries and First Order ODE Patterns, pp. 10 - pp. 12

modelparameters.sympy.solvers.ode.ode_1st_exact(eq, func, order, match)[source]

Solves 1st order exact ordinary differential equations.

A 1st order differential equation is called exact if it is the total differential of a function. That is, the differential equation

\[P(x, y) \,\partial{}x + Q(x, y) \,\partial{}y = 0\]

is exact if there is some function F(x, y) such that \(P(x, y) = \partial{}F/\partial{}x\) and \(Q(x, y) = \partial{}F/\partial{}y\). It can be shown that a necessary and sufficient condition for a first order ODE to be exact is that \(\partial{}P/\partial{}y = \partial{}Q/\partial{}x\). Then, the solution will be as given below:

>>> from .. import Function, Eq, Integral, symbols, pprint
>>> x, y, t, x0, y0, C1= symbols('x,y,t,x0,y0,C1')
>>> P, Q, F= map(Function, ['P', 'Q', 'F'])
>>> pprint(Eq(Eq(F(x, y), Integral(P(t, y), (t, x0, x)) +
... Integral(Q(x0, t), (t, y0, y))), C1))
            x                y
            /                /
           |                |
F(x, y) =  |  P(t, y) dt +  |  Q(x0, t) dt = C1
           |                |
          /                /
          x0               y0

Where the first partials of P and Q exist and are continuous in a simply connected region.

A note: SymPy currently has no way to represent inert substitution on an expression, so the hint 1st_exact_Integral will return an integral with dy. This is supposed to represent the function that you are solving for.

Examples

>>> from .. import Function, dsolve, cos, sin
>>> from ..abc import x
>>> f = Function('f')
>>> dsolve(cos(f(x)) - (x*sin(f(x)) - f(x)**2)*f(x).diff(x),
... f(x), hint='1st_exact')
Eq(x*cos(f(x)) + f(x)**3/3, C1)
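
As a quick sketch of the exactness test for the equation just solved, with y standing for f(x) so that P = cos(y) and Q = y**2 - x*sin(y) (this check is only illustrative and is not part of the solver):

>>> from .. import symbols, sin, cos, diff
>>> x, y = symbols('x y')
>>> P, Q = cos(y), y**2 - x*sin(y)
>>> diff(P, y) == diff(Q, x)    # dP/dy == dQ/dx, so the equation is exact
True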

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_1st_homogeneous_coeff_best(eq, func, order, match)[source]

Returns the best solution to an ODE from the two hints 1st_homogeneous_coeff_subs_dep_div_indep and 1st_homogeneous_coeff_subs_indep_div_dep.

This is as determined by ode_sol_simplicity().

See the ode_1st_homogeneous_coeff_subs_indep_div_dep() and ode_1st_homogeneous_coeff_subs_dep_div_indep() docstrings for more information on these hints. Note that there is no ode_1st_homogeneous_coeff_best_Integral hint.

Examples

>>> from .. import Function, dsolve, pprint
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(2*x*f(x) + (x**2 + f(x)**2)*f(x).diff(x), f(x),
... hint='1st_homogeneous_coeff_best', simplify=False))
                         /    2    \
                         | 3*x     |
                      log|----- + 1|
                         | 2       |
                         \f (x)    /
log(f(x)) = log(C1) - --------------
                            3

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_1st_homogeneous_coeff_subs_dep_div_indep(eq, func, order, match)[source]

Solves a 1st order differential equation with homogeneous coefficients using the substitution \(u_1 = \frac{\text{<dependent variable>}}{\text{<independent variable>}}\).

This is a differential equation

\[P(x, y) + Q(x, y) dy/dx = 0\]

such that P and Q are homogeneous and of the same order. A function F(x, y) is homogeneous of order n if F(x t, y t) = t^n F(x, y). Equivalently, F(x, y) can be rewritten as G(y/x) or H(x/y). See also the docstring of homogeneous_order().
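
For instance, homogeneous_order() can be used to confirm the order n directly; a minimal sketch (the expression is chosen only for illustration):

>>> from .. import symbols
>>> from .ode import homogeneous_order
>>> x, y = symbols('x y')
>>> homogeneous_order(x**2 + y**2, x, y)
2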

If the coefficients P and Q in the differential equation above are homogeneous functions of the same order, then it can be shown that the substitution y = u_1 x (i.e. u_1 = y/x) will turn the differential equation into an equation separable in the variables x and u_1. If h(u_1) is the function that results from making the substitution u_1 = f(x)/x on P(x, f(x)) and g(u_1) is the function that results from the substitution on Q(x, f(x)) in the differential equation P(x, f(x)) + Q(x, f(x)) f'(x) = 0, then the general solution is:

>>> from .. import Function, dsolve, pprint
>>> from ..abc import x
>>> f, g, h = map(Function, ['f', 'g', 'h'])
>>> genform = g(f(x)/x) + h(f(x)/x)*f(x).diff(x)
>>> pprint(genform)
 /f(x)\    /f(x)\ d
g|----| + h|----|*--(f(x))
 \ x  /    \ x  / dx
>>> pprint(dsolve(genform, f(x),
... hint='1st_homogeneous_coeff_subs_dep_div_indep_Integral'))
               f(x)
               ----
                x
                 /
                |
                |       -h(u1)
log(x) = C1 +   |  ---------------- d(u1)
                |  u1*h(u1) + g(u1)
                |
               /

Where \(u_1 h(u_1) + g(u_1) \ne 0\) and \(x \ne 0\).

See also the docstrings of ode_1st_homogeneous_coeff_best() and ode_1st_homogeneous_coeff_subs_indep_div_dep().

Examples

>>> from .. import Function, dsolve
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(2*x*f(x) + (x**2 + f(x)**2)*f(x).diff(x), f(x),
... hint='1st_homogeneous_coeff_subs_dep_div_indep', simplify=False))
                      /          3   \
                      |3*f(x)   f (x)|
                   log|------ + -----|
                      |  x         3 |
                      \           x  /
log(x) = log(C1) - -------------------
                            3

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_1st_homogeneous_coeff_subs_indep_div_dep(eq, func, order, match)[source]

Solves a 1st order differential equation with homogeneous coefficients using the substitution \(u_2 = \frac{\text{<independent variable>}}{\text{<dependent variable>}}\).

This is a differential equation

\[P(x, y) + Q(x, y) dy/dx = 0\]

such that P and Q are homogeneous and of the same order. A function F(x, y) is homogeneous of order n if F(x t, y t) = t^n F(x, y). Equivalently, F(x, y) can be rewritten as G(y/x) or H(x/y). See also the docstring of homogeneous_order().

If the coefficients P and Q in the differential equation above are homogeneous functions of the same order, then it can be shown that the substitution x = u_2 y (i.e. u_2 = x/y) will turn the differential equation into an equation separable in the variables y and u_2. If h(u_2) is the function that results from making the substitution u_2 = x/f(x) on P(x, f(x)) and g(u_2) is the function that results from the substitution on Q(x, f(x)) in the differential equation P(x, f(x)) + Q(x, f(x)) f’(x) = 0, then the general solution is:

>>> from .. import Function, dsolve, pprint
>>> from ..abc import x
>>> f, g, h = map(Function, ['f', 'g', 'h'])
>>> genform = g(x/f(x)) + h(x/f(x))*f(x).diff(x)
>>> pprint(genform)
 / x  \    / x  \ d
g|----| + h|----|*--(f(x))
 \f(x)/    \f(x)/ dx
>>> pprint(dsolve(genform, f(x),
... hint='1st_homogeneous_coeff_subs_indep_div_dep_Integral'))
             x
            ----
            f(x)
              /
             |
             |       -g(u2)
             |  ---------------- d(u2)
             |  u2*g(u2) + h(u2)
             |
            /

f(x) = C1*e

Where \(u_2 g(u_2) + h(u_2) \ne 0\) and \(f(x) \ne 0\).

See also the docstrings of ode_1st_homogeneous_coeff_best() and ode_1st_homogeneous_coeff_subs_dep_div_indep().

Examples

>>> from .. import Function, pprint, dsolve
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(2*x*f(x) + (x**2 + f(x)**2)*f(x).diff(x), f(x),
... hint='1st_homogeneous_coeff_subs_indep_div_dep',
... simplify=False))
                         /    2    \
                         | 3*x     |
                      log|----- + 1|
                         | 2       |
                         \f (x)    /
log(f(x)) = log(C1) - --------------
                            3

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_1st_linear(eq, func, order, match)[source]

Solves 1st order linear differential equations.

These are differential equations of the form

\[dy/dx + P(x) y = Q(x)\text{.}\]

These kinds of differential equations can be solved in a general way. The integrating factor \(e^{\int P(x) \,dx}\) will turn the equation into a separable equation. The general solution is:

>>> from .. import Function, dsolve, Eq, pprint, diff, sin
>>> from ..abc import x
>>> f, P, Q = map(Function, ['f', 'P', 'Q'])
>>> genform = Eq(f(x).diff(x) + P(x)*f(x), Q(x))
>>> pprint(genform)
            d
P(x)*f(x) + --(f(x)) = Q(x)
            dx
>>> pprint(dsolve(genform, f(x), hint='1st_linear_Integral'))
       /       /                   \
       |      |                    |
       |      |         /          |     /
       |      |        |           |    |
       |      |        | P(x) dx   |  - | P(x) dx
       |      |        |           |    |
       |      |       /            |   /
f(x) = |C1 +  | Q(x)*e           dx|*e
       |      |                    |
       \     /                     /

Examples

>>> f = Function('f')
>>> pprint(dsolve(Eq(x*diff(f(x), x) - f(x), x**2*sin(x)),
... f(x), '1st_linear'))
f(x) = x*(C1 - cos(x))

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_1st_power_series(eq, func, order, match)[source]

The power series solution is a method which gives the Taylor series expansion to the solution of a differential equation.

For a first order differential equation frac{dy}{dx} = h(x, y), a power series solution exists at a point x = x_{0} if h(x, y) is analytic at x_{0}. The solution is given by

\[y(x) = y(x_{0}) + \sum_{n = 1}^{\infty} \frac{F_{n}(x_{0},b)(x - x_{0})^n}{n!},\]

where y(x_{0}) = b is the value of y at the initial value x_{0}. To compute the values of F_{n}(x_{0},b), the following algorithm is followed until the required number of terms has been generated (a minimal sketch is given after the list).

  1. F_1 = h(x_{0}, b)

  2. \(F_{n+1} = \frac{\partial F_{n}}{\partial x} + \frac{\partial F_{n}}{\partial y}F_{1}\)
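
A minimal sketch of this recurrence (the right-hand side h(x, y) = y - x and the initial data x0 = 0, b = 1 are chosen only for illustration; for this choice the series terminates and reproduces the exact solution x + 1):

>>> from .. import symbols, diff, factorial
>>> x, y = symbols('x y')
>>> h = y - x                          # illustrative right-hand side of dy/dx = h(x, y)
>>> x0, b = 0, 1                       # expansion point and initial value y(x0) = b
>>> F = [h]                            # F_1 = h
>>> for _ in range(3):                 # F_{n+1} = dF_n/dx + dF_n/dy * F_1
...     F.append(diff(F[-1], x) + diff(F[-1], y)*F[0])
>>> b + sum(Fn.subs({x: x0, y: b})*(x - x0)**(n + 1)/factorial(n + 1)
...         for n, Fn in enumerate(F))
x + 1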

Examples

>>> from .. import Function, Derivative, pprint, exp
>>> from .ode import dsolve
>>> from ..abc import x
>>> f = Function('f')
>>> eq = exp(x)*(f(x).diff(x)) - f(x)
>>> pprint(dsolve(eq, hint='1st_power_series'))
                       3       4       5
                   C1*x    C1*x    C1*x     / 6\
f(x) = C1 + C1*x - ----- + ----- + ----- + O\x /
                     6       24      60

References

  • Travis W. Walker, Analytic power series technique for solving first-order differential equations, p.p 17, 18

modelparameters.sympy.solvers.ode.ode_2nd_power_series_ordinary(eq, func, order, match)[source]

Gives a power series solution to a second order homogeneous differential equation with polynomial coefficients at an ordinary point. A homogeneous differential equation is of the form

\[P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x) y = 0\]

For simplicity it is assumed that P(x), Q(x) and R(x) are polynomials; it is sufficient that \(\frac{Q(x)}{P(x)}\) and \(\frac{R(x)}{P(x)}\) exist at x_{0}. A recurrence relation is obtained by substituting \(y = \sum_{n=0}^\infty a_{n}x^{n}\) in the differential equation and equating terms of the same order. Using this relation various terms can be generated.

Examples

>>> from .. import dsolve, Function, pprint
>>> from ..abc import x, y
>>> f = Function("f")
>>> eq = f(x).diff(x, 2) + f(x)
>>> pprint(dsolve(eq, hint='2nd_power_series_ordinary'))
          / 4    2    \        /   2    \
          |x    x     |        |  x     |    / 6\
f(x) = C2*|-- - -- + 1| + C1*x*|- -- + 1| + O\x /
          \24   2     /        \  6     /

References

modelparameters.sympy.solvers.ode.ode_2nd_power_series_regular(eq, func, order, match)[source]

Gives a power series solution to a second order homogeneous differential equation with polynomial coefficients at a regular point. A second order homogeneous differential equation is of the form

\[P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x) y = 0\]

A point x0 is said to be a regular singular point if \((x - x0)\frac{Q(x)}{P(x)}\) and \((x - x0)^{2}\frac{R(x)}{P(x)}\) are analytic at x0. For simplicity P(x), Q(x) and R(x) are assumed to be polynomials. The algorithm for finding the power series solutions is:

  1. Try expressing \((x - x0)\frac{Q(x)}{P(x)}\) and \((x - x0)^{2}\frac{R(x)}{P(x)}\) as power series about x0. Find p0 and q0, the constant terms of these power series expansions.

  2. Solve the indicial equation f(m) = m(m - 1) + m*p0 + q0, to obtain the roots m1 and m2 of the indicial equation.

  3. If m1 - m2 is a non-integer, there exist two series solutions. If m1 = m2, there exists only one solution. If m1 - m2 is an integer, then the existence of one solution is confirmed. The other solution may or may not exist.

The power series solution is of the form \(x^{m}\sum_{n=0}^\infty a_{n}x^{n}\). The coefficients are determined by the recurrence relation \(a_{n} = -\frac{\sum_{k=0}^{n-1} \left(q_{n-k} + (m + k)p_{n-k}\right) a_{k}}{f(m + n)}\). For the case in which m1 - m2 is an integer, it can be seen from the recurrence relation that for the lower root m, when n equals the difference of the roots, the denominator becomes zero. So if the numerator is not equal to zero, a second series solution exists.
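
As a minimal sketch of step 2 for the equation used in the example below, \(x f'' + 2 f' + x f = 0\) about \(x_0 = 0\): here \((x - x_0)Q(x)/P(x) = 2\) and \((x - x_0)^{2}R(x)/P(x) = x^2\), so p0 = 2 and q0 = 0, and the indicial roots differ by an integer, which is why one of the two series in the example carries a 1/x factor.

>>> from .. import symbols, solve
>>> m = symbols('m')
>>> p0, q0 = 2, 0                     # from x*f'' + 2*f' + x*f = 0 about x0 = 0
>>> solve(m*(m - 1) + m*p0 + q0, m)   # roots of the indicial equation f(m) = 0
[-1, 0]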

Examples

>>> from .. import dsolve, Function, pprint
>>> from ..abc import x, y
>>> f = Function("f")
>>> eq = x*(f(x).diff(x, 2)) + 2*(f(x).diff(x)) + x*f(x)
>>> pprint(dsolve(eq))
                              /    6    4    2    \
                              |   x    x    x     |
          /  4    2    \   C1*|- --- + -- - -- + 1|
          | x    x     |      \  720   24   2     /    / 6\
f(x) = C2*|--- - -- + 1| + ------------------------ + O\x /
          \120   6     /              x

References

  • George E. Simmons, “Differential Equations with Applications and Historical Notes”, p.p 176 - 184

modelparameters.sympy.solvers.ode.ode_Bernoulli(eq, func, order, match)[source]

Solves Bernoulli differential equations.

These are equations of the form

\[dy/dx + P(x) y = Q(x) y^n\text{, }n \ne 1\text{.}\]

The substitution \(w = 1/y^{n-1}\) will transform an equation of this form into one that is linear (see the docstring of ode_1st_linear()). The general solution is:

>>> from .. import Function, dsolve, Eq, pprint
>>> from ..abc import x, n
>>> f, P, Q = map(Function, ['f', 'P', 'Q'])
>>> genform = Eq(f(x).diff(x) + P(x)*f(x), Q(x)*f(x)**n)
>>> pprint(genform)
            d                n
P(x)*f(x) + --(f(x)) = Q(x)*f (x)
            dx
>>> pprint(dsolve(genform, f(x), hint='Bernoulli_Integral')) 
                                                                               1
                                                                              ----
                                                                             1 - n
       //                /                            \                     \
       ||               |                             |                     |
       ||               |                  /          |             /       |
       ||               |                 |           |            |        |
       ||               |        (1 - n)* | P(x) dx   |  (-1 + n)* | P(x) dx|
       ||               |                 |           |            |        |
       ||               |                /            |           /         |
f(x) = ||C1 + (-1 + n)* | -Q(x)*e                   dx|*e                   |
       ||               |                             |                     |
       \\               /                            /                     /

Note that the equation is separable when n = 1 (see the docstring of ode_separable()).

>>> pprint(dsolve(Eq(f(x).diff(x) + P(x)*f(x), Q(x)*f(x)), f(x),
... hint='separable_Integral'))
 f(x)
   /
  |                /
  |  1            |
  |  - dy = C1 +  | (-P(x) + Q(x)) dx
  |  y            |
  |              /
 /

Examples

>>> from .. import Function, dsolve, Eq, pprint, log
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(Eq(x*f(x).diff(x) + f(x), log(x)*f(x)**2),
... f(x), hint='Bernoulli'))
                1
f(x) = -------------------
         /     log(x)   1\
       x*|C1 + ------ + -|
         \       x      x/

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_Liouville(eq, func, order, match)[source]

Solves 2nd order Liouville differential equations.

The general form of a Liouville ODE is

\[\frac{d^2 y}{dx^2} + g(y) \left(\! \frac{dy}{dx}\!\right)^2 + h(x) \frac{dy}{dx}\text{.}\]

The general solution is:

>>> from .. import Function, dsolve, Eq, pprint, diff
>>> from ..abc import x
>>> f, g, h = map(Function, ['f', 'g', 'h'])
>>> genform = Eq(diff(f(x),x,x) + g(f(x))*diff(f(x),x)**2 +
... h(x)*diff(f(x),x), 0)
>>> pprint(genform)
                  2                    2
        /d       \         d          d
g(f(x))*|--(f(x))|  + h(x)*--(f(x)) + ---(f(x)) = 0
        \dx      /         dx           2
                                      dx
>>> pprint(dsolve(genform, f(x), hint='Liouville_Integral'))
                                  f(x)
          /                     /
         |                     |
         |     /               |     /
         |    |                |    |
         |  - | h(x) dx        |    | g(y) dy
         |    |                |    |
         |   /                 |   /
C1 + C2* | e            dx +   |  e           dy = 0
         |                     |
        /                     /

Examples

>>> from .. import Function, dsolve, Eq, pprint
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(diff(f(x), x, x) + diff(f(x), x)**2/f(x) +
... diff(f(x), x)/x, f(x), hint='Liouville'))
           ________________           ________________
[f(x) = -\/ C1 + C2*log(x) , f(x) = \/ C1 + C2*log(x) ]

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_Riccati_special_minus2(eq, func, order, match)[source]

The general Riccati equation has the form

\[dy/dx = f(x) y^2 + g(x) y + h(x)\text{.}\]

While it does not have a general solution [1], the “special” form, dy/dx = a y^2 - b x^c, does have solutions in many cases [2]. This routine returns a solution for a(dy/dx) = b y^2 + c y/x + d/x^2 that is obtained by using a suitable change of variables to reduce it to the special form and is valid when neither a nor b are zero and either c or d is zero.

>>> from ..abc import x, y, a, b, c, d
>>> from .ode import dsolve, checkodesol
>>> from .. import pprint, Function
>>> f = Function('f')
>>> y = f(x)
>>> genform = a*y.diff(x) - (b*y**2 + c*y/x + d/x**2)
>>> sol = dsolve(genform, y)
>>> pprint(sol, wrap_line=False)
        /                                 /        __________________       \\
        |           __________________    |       /                2        ||
        |          /                2     |     \/  4*b*d - (a + c)  *log(x)||
       -|a + c - \/  4*b*d - (a + c)  *tan|C1 + ----------------------------||
        \                                 \                 2*a             //
f(x) = ------------------------------------------------------------------------
                                        2*b*x
>>> checkodesol(genform, sol, order=1)[0]
True

References

  1. http://www.maplesoft.com/support/help/Maple/view.aspx?path=odeadvisor/Riccati

  2. http://eqworld.ipmnet.ru/en/solutions/ode/ode0106.pdf - http://eqworld.ipmnet.ru/en/solutions/ode/ode0123.pdf

modelparameters.sympy.solvers.ode.ode_almost_linear(eq, func, order, match)[source]

Solves an almost-linear differential equation.

The general form of an almost linear differential equation is

\[f(x) g(y) y' + k(x) l(y) + m(x) = 0 \text{, where } l'(y) = g(y)\text{.}\]

This can be solved by substituting l(y) = u(y). Making the given substitution reduces it to a linear differential equation of the form u’ + P(x) u + Q(x) = 0.

The general solution is

>>> from .. import Function, dsolve, Eq, pprint
>>> from ..abc import x, y, n
>>> f, g, k, l = map(Function, ['f', 'g', 'k', 'l'])
>>> genform = Eq(f(x)*(l(y).diff(y)) + k(x)*l(y) + g(x))
>>> pprint(genform)
     d
f(x)*--(l(y)) + g(x) + k(x)*l(y) = 0
     dy
>>> pprint(dsolve(genform, hint = 'almost_linear'))
       /     //   -y*g(x)                  \\
       |     ||   --------     for k(x) = 0||
       |     ||     f(x)                   ||  -y*k(x)
       |     ||                            ||  --------
       |     ||       y*k(x)               ||    f(x)
l(y) = |C1 + |<       ------               ||*e
       |     ||        f(x)                ||
       |     ||-g(x)*e                     ||
       |     ||--------------   otherwise  ||
       |     ||     k(x)                   ||
       \     \\                            //

See also

sympy.solvers.ode.ode_1st_linear()

Examples

>>> from .. import Function, Derivative, pprint
>>> from .ode import dsolve, classify_ode
>>> from ..abc import x
>>> f = Function('f')
>>> d = f(x).diff(x)
>>> eq = x*d + x*f(x) + 1
>>> dsolve(eq, f(x), hint='almost_linear')
Eq(f(x), (C1 - Ei(x))*exp(-x))
>>> pprint(dsolve(eq, f(x), hint='almost_linear'))
                     -x
f(x) = (C1 - Ei(x))*e

References

  • Joel Moses, “Symbolic Integration - The Stormy Decade”, Communications of the ACM, Volume 14, Number 8, August 1971, pp. 558

modelparameters.sympy.solvers.ode.ode_lie_group(eq, func, order, match)[source]

This hint implements the Lie group method of solving first order differential equations. The aim is to convert the given differential equation from the given coordinate system into another coordinate system where it becomes invariant under the one-parameter Lie group of translations. The converted ODE can then be solved by quadrature. It makes use of the sympy.solvers.ode.infinitesimals() function which returns the infinitesimals of the transformation.

The coordinates r and s can be found by solving the following Partial Differential Equations.

\[\xi\frac{\partial r}{\partial x} + \eta\frac{\partial r}{\partial y} = 0\]
\[\xi\frac{\partial s}{\partial x} + \eta\frac{\partial s}{\partial y} = 1\]

The differential equation becomes separable in the new coordinate system

\[\frac{ds}{dr} = \frac{\frac{\partial s}{\partial x} + h(x, y)\frac{\partial s}{\partial y}}{ \frac{\partial r}{\partial x} + h(x, y)\frac{\partial r}{\partial y}}\]

After finding the solution by integration, it is then converted back to the original coordinate system by substituting r and s in terms of x and y again.

Examples

>>> from .. import Function, dsolve, Eq, exp, pprint
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(f(x).diff(x) + 2*x*f(x) - x*exp(-x**2), f(x),
... hint='lie_group'))
       /      2\    2
       |     x |  -x
f(x) = |C1 + --|*e
       \     2 /

References

  • Solving differential equations by Symmetry Groups, John Starrett, pp. 1 - pp. 14

modelparameters.sympy.solvers.ode.ode_linear_coefficients(eq, func, order, match)[source]

Solves a differential equation with linear coefficients.

The general form of a differential equation with linear coefficients is

\[y' + F\left(\!\frac{a_1 x + b_1 y + c_1}{a_2 x + b_2 y + c_2}\!\right) = 0\text{,}\]

where a_1, b_1, c_1, a_2, b_2, c_2 are constants and \(a_1 b_2 - a_2 b_1 \ne 0\).

This can be solved by substituting:

\[ \begin{align}\begin{aligned}x = x' + \frac{b_2 c_1 - b_1 c_2}{a_2 b_1 - a_1 b_2}\\y = y' + \frac{a_1 c_2 - a_2 c_1}{a_2 b_1 - a_1 b_2}\text{.}\end{aligned}\end{align} \]

This substitution reduces the equation to a homogeneous differential equation.

See also

sympy.solvers.ode.ode_1st_homogeneous_coeff_best(), sympy.solvers.ode.ode_1st_homogeneous_coeff_subs_indep_div_dep(), sympy.solvers.ode.ode_1st_homogeneous_coeff_subs_dep_div_indep()

Examples

>>> from .. import Function, Derivative, pprint
>>> from .ode import dsolve, classify_ode
>>> from ..abc import x
>>> f = Function('f')
>>> df = f(x).diff(x)
>>> eq = (x + f(x) + 1)*df + (f(x) - 6*x + 1)
>>> dsolve(eq, hint='linear_coefficients')
[Eq(f(x), -x - sqrt(C1 + 7*x**2) - 1), Eq(f(x), -x + sqrt(C1 + 7*x**2) - 1)]
>>> pprint(dsolve(eq, hint='linear_coefficients'))
                  ___________                     ___________
               /         2                     /         2
[f(x) = -x - \/  C1 + 7*x   - 1, f(x) = -x + \/  C1 + 7*x   - 1]

References

  • Joel Moses, “Symbolic Integration - The Stormy Decade”, Communications of the ACM, Volume 14, Number 8, August 1971, pp. 558

modelparameters.sympy.solvers.ode.ode_nth_linear_constant_coeff_homogeneous(eq, func, order, match, returns='sol')[source]

Solves an nth order linear homogeneous differential equation with constant coefficients.

This is an equation of the form

\[a_n f^{(n)}(x) + a_{n-1} f^{(n-1)}(x) + \cdots + a_1 f'(x) + a_0 f(x) = 0\text{.}\]

These equations can be solved in a general manner, by taking the roots of the characteristic equation \(a_n m^n + a_{n-1} m^{n-1} + \cdots + a_1 m + a_0 = 0\). The solution will then be a sum of \(C_n x^i e^{r x}\) terms, where C_n is an arbitrary constant, r is a root of the characteristic equation and i runs from 0 to one less than the multiplicity of the root (for example, a root 3 of multiplicity 2 would create the terms \(C_1 e^{3 x} + C_2 x e^{3 x}\)). The exponential is usually expanded for complex roots using Euler's equation \(e^{I x} = \cos(x) + I \sin(x)\). Complex roots always come in conjugate pairs in polynomials with real coefficients, so the two roots will be represented (after simplifying the constants) as \(e^{a x} \left(C_1 \cos(b x) + C_2 \sin(b x)\right)\).

If SymPy cannot find exact roots to the characteristic equation, a CRootOf instance will be returned instead.

>>> from .. import Function, dsolve, Eq
>>> from ..abc import x
>>> f = Function('f')
>>> dsolve(f(x).diff(x, 5) + 10*f(x).diff(x) - 2*f(x), f(x),
... hint='nth_linear_constant_coeff_homogeneous')
... 
Eq(f(x), C1*exp(x*CRootOf(_x**5 + 10*_x - 2, 0)) +
C2*exp(x*CRootOf(_x**5 + 10*_x - 2, 1)) +
C3*exp(x*CRootOf(_x**5 + 10*_x - 2, 2)) +
C4*exp(x*CRootOf(_x**5 + 10*_x - 2, 3)) +
C5*exp(x*CRootOf(_x**5 + 10*_x - 2, 4)))

Note that because this method does not involve integration, there is no nth_linear_constant_coeff_homogeneous_Integral hint.

The following is for internal use:

  • returns = 'sol' returns the solution to the ODE.

  • returns = 'list' returns a list of linearly independent solutions, for use with non homogeneous solution methods like variation of parameters and undetermined coefficients. Note that, though the solutions should be linearly independent, this function does not explicitly check that. You can do assert simplify(wronskian(sollist)) != 0 to check for linear independence. Also, assert len(sollist) == order will need to pass.

  • returns = 'both', return a dictionary {'sol': <solution to ODE>, 'list': <list of linearly independent solutions>}.

Examples

>>> from .. import Function, dsolve, pprint
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(f(x).diff(x, 4) + 2*f(x).diff(x, 3) -
... 2*f(x).diff(x, 2) - 6*f(x).diff(x) + 5*f(x), f(x),
... hint='nth_linear_constant_coeff_homogeneous'))
                    x                            -2*x
f(x) = (C1 + C2*x)*e  + (C3*sin(x) + C4*cos(x))*e

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_nth_linear_constant_coeff_undetermined_coefficients(eq, func, order, match)[source]

Solves an nth order linear differential equation with constant coefficients using the method of undetermined coefficients.

This method works on differential equations of the form

\[a_n f^{(n)}(x) + a_{n-1} f^{(n-1)}(x) + \cdots + a_1 f'(x) + a_0 f(x) = P(x)\text{,}\]

where P(x) is a function that has a finite number of linearly independent derivatives.

Functions that fit this requirement are finite sums of functions of the form \(a x^i e^{b x} \sin(c x + d)\) or \(a x^i e^{b x} \cos(c x + d)\), where i is a non-negative integer and a, b, c, and d are constants. For example any polynomial in x, functions like \(x^2 e^{2 x}\), \(x \sin(x)\), and \(e^x \cos(x)\) can all be used. Products of sines and cosines have a finite number of derivatives, because they can be expanded into \(\sin(a x)\) and \(\cos(b x)\) terms. However, SymPy currently cannot do that expansion, so you will need to manually rewrite the expression in terms of the above to use this method. So, for example, you will need to manually convert \(\sin^2(x)\) into \((1 - \cos(2 x))/2\) to properly apply the method of undetermined coefficients on it.

This method works by creating a trial function from the expression and all of its linearly independent derivatives and substituting them into the original ODE. The coefficients for each term form a system of linear equations, which is solved and substituted back, giving the solution. If any of the trial functions are linearly dependent on the solution to the homogeneous equation, they are multiplied by sufficient powers of x to make them linearly independent.
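
A minimal sketch of the manual rewrite mentioned above (the equation is chosen only for illustration): a forcing term sin(x)**2 is first replaced by the equivalent (1 - cos(2*x))/2, after which the hint can be applied.

>>> from .. import Function, dsolve, cos
>>> from ..abc import x
>>> f = Function('f')
>>> # sin(x)**2 on the right-hand side rewritten by hand as (1 - cos(2*x))/2
>>> eq = f(x).diff(x, 2) + f(x) - (1 - cos(2*x))/2
>>> sol = dsolve(eq, f(x),
...     hint='nth_linear_constant_coeff_undetermined_coefficients')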

Examples

>>> from .. import Function, dsolve, pprint, exp, cos
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(f(x).diff(x, 2) + 2*f(x).diff(x) + f(x) -
... 4*exp(-x)*x**2 + cos(2*x), f(x),
... hint='nth_linear_constant_coeff_undetermined_coefficients'))
       /             4\
       |            x |  -x   4*sin(2*x)   3*cos(2*x)
f(x) = |C1 + C2*x + --|*e   - ---------- + ----------
       \            3 /           25           25

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_nth_linear_constant_coeff_variation_of_parameters(eq, func, order, match)[source]

Solves an nth order linear differential equation with constant coefficients using the method of variation of parameters.

This method works on any differential equations of the form

\[f^{(n)}(x) + a_{n-1} f^{(n-1)}(x) + \cdots + a_1 f'(x) + a_0 f(x) = P(x)\text{.}\]

This method works by assuming that the particular solution takes the form

\[\sum_{i=1}^{n} c_i(x) y_i(x)\text{,}\]

where y_i is the ith solution to the homogeneous equation. The coefficients c_i(x) are then found using Wronskians and Cramer's rule. The particular solution is given by

\[\sum_{i=1}^n \left( \int \frac{W_i(x)}{W(x)} \,dx \right) y_i(x) \text{,}\]

where W(x) is the Wronskian of the fundamental system (the system of n linearly independent solutions to the homogeneous equation), and W_i(x) is the Wronskian of the fundamental system with the ith column replaced with \([0, 0, \cdots, 0, P(x)]\).

This method is general enough to solve any nth order inhomogeneous linear differential equation with constant coefficients, but sometimes SymPy cannot simplify the Wronskian well enough to integrate it. If this method hangs, try using the nth_linear_constant_coeff_variation_of_parameters_Integral hint and simplifying the integrals manually. Also, prefer using nth_linear_constant_coeff_undetermined_coefficients when it applies, because it doesn’t use integration, making it faster and more reliable.

Warning, using simplify=False with ‘nth_linear_constant_coeff_variation_of_parameters’ in dsolve() may cause it to hang, because it will not attempt to simplify the Wronskian before integrating. It is recommended that you only use simplify=False with ‘nth_linear_constant_coeff_variation_of_parameters_Integral’ for this method, especially if the solution to the homogeneous equation has trigonometric functions in it.

Examples

>>> from .. import Function, dsolve, pprint, exp, log
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(f(x).diff(x, 3) - 3*f(x).diff(x, 2) +
... 3*f(x).diff(x) - f(x) - exp(x)*log(x), f(x),
... hint='nth_linear_constant_coeff_variation_of_parameters'))
       /                     3                \
       |                2   x *(6*log(x) - 11)|  x
f(x) = |C1 + C2*x + C3*x  + ------------------|*e
       \                            36        /

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_nth_linear_euler_eq_homogeneous(eq, func, order, match, returns='sol')[source]

Solves an nth order linear homogeneous variable-coefficient Cauchy-Euler equidimensional ordinary differential equation.

This is an equation with form \(0 = a_0 f(x) + a_1 x f'(x) + a_2 x^2 f''(x) \cdots\).

These equations can be solved in a general manner, by substituting solutions of the form \(f(x) = x^r\), and deriving a characteristic equation for r. When there are repeated roots, we include extra terms of the form \(C_{r k} \ln^k(x) x^r\), where C_{r k} is an arbitrary integration constant, r is a root of the characteristic equation, and k ranges over the multiplicity of r. In the cases where the roots are complex, solutions of the form \(C_1 x^a \sin(b \log(x)) + C_2 x^a \cos(b \log(x))\) are returned, based on expansions with Euler's formula. The general solution is the sum of the terms found. If SymPy cannot find exact roots to the characteristic equation, a CRootOf instance will be returned instead.

>>> from .. import Function, dsolve, Eq
>>> from ..abc import x
>>> f = Function('f')
>>> dsolve(4*x**2*f(x).diff(x, 2) + f(x), f(x),
... hint='nth_linear_euler_eq_homogeneous')
... 
Eq(f(x), sqrt(x)*(C1 + C2*log(x)))

Note that because this method does not involve integration, there is no nth_linear_euler_eq_homogeneous_Integral hint.

The following is for internal use:

  • returns = 'sol' returns the solution to the ODE.

  • returns = 'list' returns a list of linearly independent solutions, corresponding to the fundamental solution set, for use with non homogeneous solution methods like variation of parameters and undetermined coefficients. Note that, though the solutions should be linearly independent, this function does not explicitly check that. You can do assert simplify(wronskian(sollist)) != 0 to check for linear independence. Also, assert len(sollist) == order will need to pass.

  • returns = 'both', return a dictionary {'sol': <solution to ODE>, 'list': <list of linearly independent solutions>}.

Examples

>>> from .. import Function, dsolve, pprint
>>> from ..abc import x
>>> f = Function('f')
>>> eq = f(x).diff(x, 2)*x**2 - 4*f(x).diff(x)*x + 6*f(x)
>>> pprint(dsolve(eq, f(x),
... hint='nth_linear_euler_eq_homogeneous'))
        2
f(x) = x *(C1 + C2*x)

References

# indirect doctest

modelparameters.sympy.solvers.ode.ode_nth_linear_euler_eq_nonhomogeneous_undetermined_coefficients(eq, func, order, match, returns='sol')[source]

Solves an nth order linear non homogeneous Cauchy-Euler equidimensional ordinary differential equation using undetermined coefficients.

This is an equation with form \(g(x) = a_0 f(x) + a_1 x f'(x) + a_2 x^2 f''(x) \cdots\).

These equations can be solved in a general manner by the substitution \(x = \exp(t)\), deriving a characteristic equation of the form \(g(\exp(t)) = b_0 f(t) + b_1 f'(t) + b_2 f''(t) \cdots\), which can then be solved by nth_linear_constant_coeff_undetermined_coefficients if \(g(\exp(t))\) has a finite number of linearly independent derivatives.

Functions that fit this requirement are finite sums of functions of the form \(a x^i e^{b x} \sin(c x + d)\) or \(a x^i e^{b x} \cos(c x + d)\), where i is a non-negative integer and a, b, c, and d are constants. For example any polynomial in x, functions like \(x^2 e^{2 x}\), \(x \sin(x)\), and \(e^x \cos(x)\) can all be used. Products of sines and cosines have a finite number of derivatives, because they can be expanded into \(\sin(a x)\) and \(\cos(b x)\) terms. However, SymPy currently cannot do that expansion, so you will need to manually rewrite the expression in terms of the above to use this method. So, for example, you will need to manually convert \(\sin^2(x)\) into \((1 - \cos(2 x))/2\) to properly apply the method of undetermined coefficients on it.

After replacement of x by \(\exp(t)\), this method works by creating a trial function from the expression and all of its linearly independent derivatives and substituting them into the original ODE. The coefficients for each term form a system of linear equations, which is solved and substituted back, giving the solution. If any of the trial functions are linearly dependent on the solution to the homogeneous equation, they are multiplied by sufficient powers of x to make them linearly independent.

Examples

>>> from .. import dsolve, Function, Derivative, log
>>> from ..abc import x
>>> f = Function('f')
>>> eq = x**2*Derivative(f(x), x, x) - 2*x*Derivative(f(x), x) + 2*f(x) - log(x)
>>> dsolve(eq, f(x),
... hint='nth_linear_euler_eq_nonhomogeneous_undetermined_coefficients').expand()
Eq(f(x), C1*x + C2*x**2 + log(x)/2 + 3/4)
modelparameters.sympy.solvers.ode.ode_nth_linear_euler_eq_nonhomogeneous_variation_of_parameters(eq, func, order, match, returns='sol')[source]

Solves an nth order linear non homogeneous Cauchy-Euler equidimensional ordinary differential equation using variation of parameters.

This is an equation with form \(g(x) = a_0 f(x) + a_1 x f'(x) + a_2 x^2 f''(x) \cdots\).

This method works by assuming that the particular solution takes the form

\[\sum_{i=1}^{n} c_i(x) y_i(x) {a_n} {x^n} \text{,}\]

where y_i is the ith solution to the homogeneous equation. The coefficients c_i(x) are then found using Wronskians and Cramer's rule. The particular solution is obtained by multiplying the expression given below by \(a_n x^{n}\)

\[\sum_{i=1}^n \left( \int \frac{W_i(x)}{W(x)} \,dx \right) y_i(x) \text{,}\]

where W(x) is the Wronskian of the fundamental system (the system of n linearly independent solutions to the homogeneous equation), and W_i(x) is the Wronskian of the fundamental system with the ith column replaced with \([0, 0, \cdots, 0, \frac{x^{-n}}{a_n} g(x)]\).

This method is general enough to solve any nth order inhomogeneous linear differential equation, but sometimes SymPy cannot simplify the Wronskian well enough to integrate it. If this method hangs, try using the nth_linear_constant_coeff_variation_of_parameters_Integral hint and simplifying the integrals manually. Also, prefer using nth_linear_constant_coeff_undetermined_coefficients when it applies, because it doesn’t use integration, making it faster and more reliable.

Warning, using simplify=False with ‘nth_linear_constant_coeff_variation_of_parameters’ in dsolve() may cause it to hang, because it will not attempt to simplify the Wronskian before integrating. It is recommended that you only use simplify=False with ‘nth_linear_constant_coeff_variation_of_parameters_Integral’ for this method, especially if the solution to the homogeneous equation has trigonometric functions in it.

Examples

>>> from .. import Function, dsolve, Derivative
>>> from ..abc import x
>>> f = Function('f')
>>> eq = x**2*Derivative(f(x), x, x) - 2*x*Derivative(f(x), x) + 2*f(x) - x**4
>>> dsolve(eq, f(x),
... hint='nth_linear_euler_eq_nonhomogeneous_variation_of_parameters').expand()
Eq(f(x), C1*x + C2*x**2 + x**4/6)
modelparameters.sympy.solvers.ode.ode_separable(eq, func, order, match)[source]

Solves separable 1st order differential equations.

This is any differential equation that can be written as \(P(y) \frac{dy}{dx} = Q(x)\). The solution can then just be found by rearranging terms and integrating: \(\int P(y) \,dy = \int Q(x) \,dx\). This hint uses sympy.simplify.simplify.separatevars() as its back end, so if a separable equation is not caught by this solver, it is most likely the fault of that function. separatevars() is smart enough to do most expansion and factoring necessary to convert a separable equation F(x, y) into the proper form \(P(x)\cdot{}Q(y)\). The general solution is:

>>> from .. import Function, dsolve, Eq, pprint
>>> from ..abc import x
>>> a, b, c, d, f = map(Function, ['a', 'b', 'c', 'd', 'f'])
>>> genform = Eq(a(x)*b(f(x))*f(x).diff(x), c(x)*d(f(x)))
>>> pprint(genform)
             d
a(x)*b(f(x))*--(f(x)) = c(x)*d(f(x))
             dx
>>> pprint(dsolve(genform, f(x), hint='separable_Integral'))
     f(x)
   /                  /
  |                  |
  |  b(y)            | c(x)
  |  ---- dy = C1 +  | ---- dx
  |  d(y)            | a(x)
  |                  |
 /                  /

Examples

>>> from .. import Function, dsolve, Eq
>>> from ..abc import x
>>> f = Function('f')
>>> pprint(dsolve(Eq(f(x)*f(x).diff(x) + x, 3*x*f(x)**2), f(x),
... hint='separable', simplify=False))
   /   2       \         2
log\3*f (x) - 1/        x
---------------- = C1 + --
       6                2

References

  • M. Tenenbaum & H. Pollard, “Ordinary Differential Equations”, Dover 1963, pp. 52

# indirect doctest

modelparameters.sympy.solvers.ode.ode_separable_reduced(eq, func, order, match)[source]

Solves a differential equation that can be reduced to the separable form.

The general form of this equation is

\[y' + (y/x) H(x^n y) = 0\text{.}\]

This can be solved by substituting \(u(y) = x^n y\). The equation then reduces to the separable form \(\frac{u'}{u (\mathrm{power} - H(u))} - \frac{1}{x} = 0\).

The general solution is:

>>> from .. import Function, dsolve, Eq, pprint
>>> from ..abc import x, n
>>> f, g = map(Function, ['f', 'g'])
>>> genform = f(x).diff(x) + (f(x)/x)*g(x**n*f(x))
>>> pprint(genform)
                 / n     \
d          f(x)*g\x *f(x)/
--(f(x)) + ---------------
dx                x
>>> pprint(dsolve(genform, hint='separable_reduced'))
 n
x *f(x)
  /
 |
 |         1
 |    ------------ dy = C1 + log(x)
 |    y*(n - g(y))
 |
 /

See also

sympy.solvers.ode.ode_separable()

Examples

>>> from .. import Function, Derivative, pprint
>>> from .ode import dsolve, classify_ode
>>> from ..abc import x
>>> f = Function('f')
>>> d = f(x).diff(x)
>>> eq = (x - x**2*f(x))*d - f(x)
>>> dsolve(eq, hint='separable_reduced')
[Eq(f(x), (-sqrt(C1*x**2 + 1) + 1)/x), Eq(f(x), (sqrt(C1*x**2 + 1) + 1)/x)]
>>> pprint(dsolve(eq, hint='separable_reduced'))
             ___________                ___________
            /     2                    /     2
        - \/  C1*x  + 1  + 1         \/  C1*x  + 1  + 1
[f(x) = --------------------, f(x) = ------------------]
                 x                           x

References

  • Joel Moses, “Symbolic Integration - The Stormy Decade”, Communications of the ACM, Volume 14, Number 8, August 1971, pp. 558

modelparameters.sympy.solvers.ode.ode_sol_simplicity(sol, func, trysolving=True)[source]

Returns an extended integer representing how simple a solution to an ODE is.

The following things are considered, in order from most simple to least:

  • sol is solved for func.

  • sol is not solved for func, but can be if passed to solve (e.g., a solution returned by dsolve(ode, func, simplify=False)).

  • If sol is not solved for func, then base the result on the length of sol, as computed by len(str(sol)).

  • If sol has any unevaluated Integrals, this will automatically be considered less simple than any of the above.

This function returns an integer such that if solution A is simpler than solution B by above metric, then ode_sol_simplicity(sola, func) < ode_sol_simplicity(solb, func).

Currently, the following are the numbers returned, but if the heuristic is ever improved, this may change. Only the ordering is guaranteed.

  • sol solved for func: -2

  • sol not solved for func but can be: -1

  • sol is not solved nor solvable for func: len(str(sol))

  • sol contains an Integral: oo

oo here means the SymPy infinity, which should compare greater than any integer.

If you already know solve() cannot solve sol, you can use trysolving=False to skip that step, which is the only potentially slow step. For example, dsolve() with the simplify=False flag should do this.

If sol is a list of solutions, then if the worst solution in the list returns oo, this function returns oo; otherwise it returns len(str(sol)), that is, the length of the string representation of the whole list.

Examples

This function is designed to be passed to min as the key argument, such as min(listofsolutions, key=lambda i: ode_sol_simplicity(i, f(x))).

>>> from .. import symbols, Function, Eq, tan, cos, sqrt, Integral
>>> from .ode import ode_sol_simplicity
>>> x, C1, C2 = symbols('x, C1, C2')
>>> f = Function('f')
>>> ode_sol_simplicity(Eq(f(x), C1*x**2), f(x))
-2
>>> ode_sol_simplicity(Eq(x**2 + f(x), C1), f(x))
-1
>>> ode_sol_simplicity(Eq(f(x), C1*Integral(2*x, x)), f(x))
oo
>>> eq1 = Eq(f(x)/tan(f(x)/(2*x)), C1)
>>> eq2 = Eq(f(x)/tan(f(x)/(2*x) + f(x)), C2)
>>> [ode_sol_simplicity(eq, f(x)) for eq in [eq1, eq2]]
[28, 35]
>>> min([eq1, eq2], key=lambda i: ode_sol_simplicity(i, f(x)))
Eq(f(x)/tan(f(x)/(2*x)), C1)
modelparameters.sympy.solvers.ode.odesimp(eq, func, order, constants, hint)[source]

Simplifies ODEs, including trying to solve for func and running constantsimp().

It may use knowledge of the type of solution that the hint returns to apply additional simplifications.

It also attempts to integrate any Integrals in the expression, if the hint is not an _Integral hint.

This function should have no effect on expressions returned by dsolve(), as dsolve() already calls odesimp(), but the individual hint functions do not call odesimp() (because the dsolve() wrapper does). Therefore, this function is designed for mainly internal use.

Examples

>>> from .. import sin, symbols, dsolve, pprint, Function
>>> from .ode import odesimp
>>> x , u2, C1= symbols('x,u2,C1')
>>> f = Function('f')
>>> eq = dsolve(x*f(x).diff(x) - f(x) - x*sin(f(x)/x), f(x),
... hint='1st_homogeneous_coeff_subs_indep_div_dep_Integral',
... simplify=False)
>>> pprint(eq, wrap_line=False)
                        x
                       ----
                       f(x)
                         /
                        |
                        |   /        1   \
                        |  -|u2 + -------|
                        |   |        /1 \|
                        |   |     sin|--||
                        |   \        \u2//
log(f(x)) = log(C1) +   |  ---------------- d(u2)
                        |          2
                        |        u2
                        |
                       /
>>> pprint(odesimp(eq, f(x), 1, {C1},
... hint='1st_homogeneous_coeff_subs_indep_div_dep'
... )) 
    x
--------- = C1
   /f(x)\
tan|----|
   \2*x /
modelparameters.sympy.solvers.ode.sub_func_doit(eq, func, new)[source]

When replacing the func with something else, we usually want the derivative evaluated, so this function helps in making that happen.

To keep subs from having to look through all derivatives, we mask them off with dummy variables, do the func sub, and then replace masked-off derivatives with their doit values.

Examples

>>> from .. import Derivative, symbols, Function
>>> from .ode import sub_func_doit
>>> x, z = symbols('x, z')
>>> y = Function('y')
>>> sub_func_doit(3*Derivative(y(x), x) - 1, y(x), x)
2
>>> sub_func_doit(x*Derivative(y(x), x) - y(x)**2 + y(x), y(x),
... 1/(x*(z + 1/x)))
x*(-1/(x**2*(z + 1/x)) + 1/(x**3*(z + 1/x)**2)) + 1/(x*(z + 1/x))
...- 1/(x**2*(z + 1/x)**2)
modelparameters.sympy.solvers.ode.sysode_linear_2eq_order1(match_)[source]
modelparameters.sympy.solvers.ode.sysode_linear_2eq_order2(match_)[source]
modelparameters.sympy.solvers.ode.sysode_linear_3eq_order1(match_)[source]
modelparameters.sympy.solvers.ode.sysode_linear_neq_order1(match_)[source]
modelparameters.sympy.solvers.ode.sysode_nonlinear_2eq_order1(match_)[source]
modelparameters.sympy.solvers.ode.sysode_nonlinear_3eq_order1(match_)[source]

modelparameters.sympy.solvers.pde module

This module contains pdsolve() and different helper functions that it uses. It is heavily inspired by the ode module and hence the basic infrastructure remains the same.

Functions in this module

These are the user functions in this module:

  • pdsolve() - Solves PDEs

  • classify_pde() - Classifies PDEs into possible hints for pdsolve().

  • pde_separate() - Separate variables in a partial differential equation either by an additive or a multiplicative separation approach.

These are the helper functions in this module:

  • pde_separate_add() - Helper function for searching additive separable solutions.

  • pde_separate_mul() - Helper function for searching multiplicative separable solutions (a short sketch follows).
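
As a hedged sketch of the separation helpers (the trial PDE below is chosen only for illustration), pde_separate() with strategy='mul' splits the PDE into one expression depending only on x and one depending only on t; equating both to a constant then yields two ODEs. The printed result follows the upstream SymPy docstring and its exact form may vary between versions.

>>> from .pde import pde_separate
>>> from .. import Eq, Function, Derivative as D
>>> from ..abc import x, t
>>> u, X, T = map(Function, ['u', 'X', 'T'])
>>> eq = Eq(D(u(x, t), x, 2), D(u(x, t), t))
>>> pde_separate(eq, u(x, t), [X(x), T(t)], strategy='mul')
[Derivative(X(x), x, x)/X(x), Derivative(T(t), t)/T(t)]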

Currently implemented solver methods

The following methods are implemented for solving partial differential equations. See the docstrings of the various pde_hint() functions for more information on each (run help(pde)):

  • 1st order linear homogeneous partial differential equations with constant coefficients.

  • 1st order linear general partial differential equations with constant coefficients.

  • 1st order linear partial differential equations with variable coefficients.

modelparameters.sympy.solvers.pde.checkpdesol(pde, sol, func=None, solve_for_func=True)[source]

Checks if the given solution satisfies the partial differential equation.

pde is the partial differential equation which can be given in the form of an equation or an expression. sol is the solution for which the pde is to be checked. This can also be given in an equation or an expression form. If the function is not provided, the helper function _preprocess from deutils is used to identify the function.

If a sequence of solutions is passed, the same sort of container will be used to return the result for each solution.

The following methods are currently being implemented to check if the solution satisfies the PDE:

  1. Directly substitute the solution in the PDE and check. If the solution hasn’t been solved for f, then it will solve for f provided solve_for_func hasn’t been set to False.

If the solution satisfies the PDE, then a tuple (True, 0) is returned. Otherwise a tuple (False, expr) where expr is the value obtained after substituting the solution in the PDE. However if a known solution returns False, it may be due to the inability of doit() to simplify it to zero.

Examples

>>> from .. import Function, symbols, diff
>>> from .pde import checkpdesol, pdsolve
>>> x, y = symbols('x y')
>>> f = Function('f')
>>> eq = 2*f(x,y) + 3*f(x,y).diff(x) + 4*f(x,y).diff(y)
>>> sol = pdsolve(eq)
>>> assert checkpdesol(eq, sol)[0]
>>> eq = x*f(x,y) + f(x,y).diff(x)
>>> checkpdesol(eq, sol)
(False, (x*F(4*x - 3*y) - 6*F(4*x - 3*y)/25 + 4*Subs(Derivative(F(_xi_1), _xi_1), (_xi_1,), (4*x - 3*y,)))*exp(-6*x/25 - 8*y/25))
modelparameters.sympy.solvers.pde.classify_pde(eq, func=None, dict=False, **kwargs)[source]

Returns a tuple of possible pdsolve() classifications for a PDE.

The tuple is ordered so that first item is the classification that pdsolve() uses to solve the PDE by default. In general, classifications near the beginning of the list will produce better solutions faster than those near the end, though there are always exceptions. To make pdsolve use a different classification, use pdsolve(PDE, func, hint=<classification>). See also the pdsolve() docstring for different meta-hints you can use.

If dict is true, classify_pde() will return a dictionary of hint:match expression terms. This is intended for internal use by pdsolve(). Note that because dictionaries are ordered arbitrarily, this will most likely not be in the same order as the tuple.

You can get help on different hints by doing help(pde.pde_hintname), where hintname is the name of the hint without “_Integral”.

See sympy.pde.allhints or the sympy.pde docstring for a list of all supported hints that can be returned from classify_pde.

Examples

>>> from .pde import classify_pde
>>> from .. import Function, diff, Eq
>>> from ..abc import x, y
>>> f = Function('f')
>>> u = f(x, y)
>>> ux = u.diff(x)
>>> uy = u.diff(y)
>>> eq = Eq(1 + (2*(ux/u)) + (3*(uy/u)))
>>> classify_pde(eq)
('1st_linear_constant_coeff_homogeneous',)
modelparameters.sympy.solvers.pde.pde_1st_linear_constant_coeff(eq, func, order, match, solvefun)[source]

Solves a first order linear partial differential equation with constant coefficients.

The general form of this partial differential equation is

\[a \frac{df(x,y)}{dx} + b \frac{df(x,y)}{dy} + c f(x,y) = G(x,y)\]

where a, b and c are constants and G(x, y) can be an arbitrary function in x and y.

The general solution of the PDE is:

>>> from ..solvers import pdsolve
>>> from ..abc import x, y, a, b, c
>>> from .. import Function, pprint
>>> f = Function('f')
>>> G = Function('G')
>>> u = f(x,y)
>>> ux = u.diff(x)
>>> uy = u.diff(y)
>>> genform = a*u + b*ux + c*uy - G(x,y)
>>> pprint(genform)
              d               d
a*f(x, y) + b*--(f(x, y)) + c*--(f(x, y)) - G(x, y)
              dx              dy
>>> pprint(pdsolve(genform, hint='1st_linear_constant_coeff_Integral'))
          //          b*x + c*y                                             \
          ||              /                                                 |
          ||             |                                                  |
          ||             |                                       a*xi       |
          ||             |                                     -------      |
          ||             |                                      2    2      |
          ||             |      /b*xi + c*eta  -b*eta + c*xi\  b  + c       |
          ||             |     G|------------, -------------|*e        d(xi)|
          ||             |      |   2    2         2    2   |               |
          ||             |      \  b  + c         b  + c    /               |
          ||             |                                                  |
          ||            /                                                   |
          ||                                                                |
f(x, y) = ||F(eta) + -------------------------------------------------------|*
          ||                                  2    2                        |
          \\                                 b  + c                         /

        \|
        ||
        ||
        ||
        ||
        ||
        ||
        ||
        ||
  -a*xi ||
 -------||
  2    2||
 b  + c ||
e       ||
        ||
        /|eta=-b*y + c*x, xi=b*x + c*y

Examples

>>> from .pde import pdsolve
>>> from .. import Function, diff, pprint, exp
>>> from ..abc import x,y
>>> f = Function('f')
>>> eq = -2*f(x,y).diff(x) + 4*f(x,y).diff(y) + 5*f(x,y) - exp(x + 3*y)
>>> pdsolve(eq)
Eq(f(x, y), (F(4*x + 2*y) + exp(x/2 + 4*y)/15)*exp(x/2 - y))

References

  • Viktor Grigoryan, “Partial Differential Equations” Math 124A - Fall 2010, pp.7

modelparameters.sympy.solvers.pde.pde_1st_linear_constant_coeff_homogeneous(eq, func, order, match, solvefun)[source]

Solves a first order linear homogeneous partial differential equation with constant coefficients.

The general form of this partial differential equation is

\[a \frac{df(x,y)}{dx} + b \frac{df(x,y)}{dy} + c f(x,y) = 0\]

where a, b and c are constants.

The general solution is of the form:

>>> from ..solvers import pdsolve
>>> from ..abc import x, y, a, b, c
>>> from .. import Function, pprint
>>> f = Function('f')
>>> u = f(x,y)
>>> ux = u.diff(x)
>>> uy = u.diff(y)
>>> genform = a*ux + b*uy + c*u
>>> pprint(genform)
  d               d
a*--(f(x, y)) + b*--(f(x, y)) + c*f(x, y)
  dx              dy

>>> pprint(pdsolve(genform))
                         -c*(a*x + b*y)
                         ---------------
                              2    2
                             a  + b
f(x, y) = F(-a*y + b*x)*e

Examples

>>> from .pde import (
... pde_1st_linear_constant_coeff_homogeneous)
>>> from .. import pdsolve
>>> from .. import Function, diff, pprint
>>> from ..abc import x,y
>>> f = Function('f')
>>> pdsolve(f(x,y) + f(x,y).diff(x) + f(x,y).diff(y))
Eq(f(x, y), F(x - y)*exp(-x/2 - y/2))
>>> pprint(pdsolve(f(x,y) + f(x,y).diff(x) + f(x,y).diff(y)))
                      x   y
                    - - - -
                      2   2
f(x, y) = F(x - y)*e

References

  • Viktor Grigoryan, “Partial Differential Equations” Math 124A - Fall 2010, pp.7

modelparameters.sympy.solvers.pde.pde_1st_linear_variable_coeff(eq, func, order, match, solvefun)[source]

Solves a first order linear partial differential equation with variable coefficients. The general form of this partial differential equation is

\[a(x, y) \frac{df(x, y)}{dx} + b(x, y) \frac{df(x, y)}{dy} + c(x, y) f(x, y) - G(x, y) = 0\]

where a(x, y), b(x, y), c(x, y) and G(x, y) are arbitrary functions in x and y. This PDE is converted into an ODE by making the following transformation.

  1. xi as x

  2. eta as the constant in the solution to the differential equation dy/dx = -b/a

Making the following substitutions reduces it to the linear ODE

\[a(\xi, \eta)\frac{du}{d\xi} + c(\xi, \eta)u - d(\xi, \eta) = 0\]

which can be solved using dsolve.

The general form of this PDE is:

>>> from .pde import pdsolve
>>> from ..abc import x, y
>>> from .. import Function, pprint
>>> a, b, c, G, f= [Function(i) for i in ['a', 'b', 'c', 'G', 'f']]
>>> u = f(x,y)
>>> ux = u.diff(x)
>>> uy = u.diff(y)
>>> genform = a(x, y)*u + b(x, y)*ux + c(x, y)*uy - G(x,y)
>>> pprint(genform)
                                     d                     d
-G(x, y) + a(x, y)*f(x, y) + b(x, y)*--(f(x, y)) + c(x, y)*--(f(x, y))
                                     dx                    dy

Examples

>>> from .pde import pdsolve
>>> from .. import Function, diff, pprint, exp
>>> from ..abc import x,y
>>> f = Function('f')
>>> u = f(x, y)
>>> eq = x*(u.diff(x)) - y*(u.diff(y)) + y**2*u - y**2
>>> pdsolve(eq)
Eq(f(x, y), F(x*y)*exp(y**2/2) + 1)
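
The arbitrary function F(x*y) in this result reflects the transformation described above: for this equation the characteristic ODE is dy/dx = -y/x, whose general solution y = C1/x has x*y as its constant, so eta = x*y. A small illustrative check with dsolve (a sketch; h is just a helper name used here):

>>> from .. import dsolve
>>> h = Function('h')
>>> dsolve(h(x).diff(x) + h(x)/x, h(x))
Eq(h(x), C1/x)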

References

  • Viktor Grigoryan, “Partial Differential Equations” Math 124A - Fall 2010, pp.7

modelparameters.sympy.solvers.pde.pde_separate(eq, fun, sep, strategy='mul')[source]

Separate variables in a partial differential equation either by an additive or a multiplicative separation approach. It tries to rewrite the equation so that one of the specified variables occurs on a different side of the equation than the others.

Parameters:
  • eq – Partial differential equation

  • fun – Original function F(x, y, z)

  • sep – List of separated functions [X(x), u(y, z)]

  • strategy – Separation strategy. You can choose between additive separation (‘add’) and multiplicative separation (‘mul’) which is default.

Examples

>>> from .. import E, Eq, Function, pde_separate, Derivative as D
>>> from ..abc import x, t
>>> u, X, T = map(Function, 'uXT')
>>> eq = Eq(D(u(x, t), x), E**(u(x, t))*D(u(x, t), t))
>>> pde_separate(eq, u(x, t), [X(x), T(t)], strategy='add')
[exp(-X(x))*Derivative(X(x), x), exp(T(t))*Derivative(T(t), t)]
>>> eq = Eq(D(u(x, t), x, 2), D(u(x, t), t, 2))
>>> pde_separate(eq, u(x, t), [X(x), T(t)], strategy='mul')
[Derivative(X(x), x, x)/X(x), Derivative(T(t), t, t)/T(t)]
modelparameters.sympy.solvers.pde.pde_separate_add(eq, fun, sep)[source]

Helper function for searching additive separable solutions.

Consider an equation of two independent variables x, y and a dependent variable w; we look for the sum of two functions depending on different arguments:

w(x, y, z) = X(x) + u(y, z)

Examples

>>> from .. import E, Eq, Function, pde_separate_add, Derivative as D
>>> from ..abc import x, t
>>> u, X, T = map(Function, 'uXT')
>>> eq = Eq(D(u(x, t), x), E**(u(x, t))*D(u(x, t), t))
>>> pde_separate_add(eq, u(x, t), [X(x), T(t)])
[exp(-X(x))*Derivative(X(x), x), exp(T(t))*Derivative(T(t), t)]
modelparameters.sympy.solvers.pde.pde_separate_mul(eq, fun, sep)[source]

Helper function for searching multiplicative separable solutions.

Consider an equation of two independent variables x, y and a dependent variable w; we look for the product of two functions depending on different arguments:

w(x, y, z) = X(x)*u(y, z)

Examples

>>> from .. import Function, Eq, pde_separate_mul, Derivative as D
>>> from ..abc import x, y
>>> u, X, Y = map(Function, 'uXY')
>>> eq = Eq(D(u(x, y), x, 2), D(u(x, y), y, 2))
>>> pde_separate_mul(eq, u(x, y), [X(x), Y(y)])
[Derivative(X(x), x, x)/X(x), Derivative(Y(y), y, y)/Y(y)]
modelparameters.sympy.solvers.pde.pdsolve(eq, func=None, hint='default', dict=False, solvefun=None, **kwargs)[source]

Solves any (supported) kind of partial differential equation.

Usage

pdsolve(eq, f(x,y), hint) -> Solve partial differential equation eq for function f(x,y), using method hint.

Details

eq can be any supported partial differential equation (see the pde docstring for supported methods). This can either be an Equality, or an expression, which is assumed to be equal to 0.

f(x,y) is a function of two variables whose derivatives in that variable make up the partial differential equation. In many cases it is not necessary to provide this; it will be autodetected (and an error raised if it couldn’t be detected).

hint is the solving method that you want pdsolve to use. Use classify_pde(eq, f(x,y)) to get all of the possible hints for a PDE. The default hint, ‘default’, will use whatever hint is returned first by classify_pde(). See Hints below for more options that you can use for hint.

solvefun is the convention used for arbitrary functions returned by the PDE solver. If not set by the user, it is set by default to be F.

Hints

Aside from the various solving methods, there are also some meta-hints that you can pass to pdsolve():

“default”:

This uses whatever hint is returned first by classify_pde(). This is the default argument to pdsolve().

“all”:

To make pdsolve apply all relevant classification hints, use pdsolve(PDE, func, hint=”all”). This will return a dictionary of hint:solution terms. If a hint causes pdsolve to raise NotImplementedError, the value of that hint’s key will be the exception object raised. The dictionary will also include some special keys:

  • order: The order of the PDE. See also ode_order() in deutils.py

  • default: The solution that would be returned by default. This is the one produced by the hint that appears first in the tuple returned by classify_pde().
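
As a small sketch of this dictionary (using a simple first-order equation chosen for illustration; the exact set of hint keys may vary), the special keys described above can be accessed directly:

>>> from .pde import pdsolve
>>> from .. import Function
>>> from ..abc import x, y
>>> f = Function('f')
>>> eq = f(x, y) + f(x, y).diff(x) + f(x, y).diff(y)
>>> sols = pdsolve(eq, hint='all')
>>> sols['order']
1
>>> sols['default'] == pdsolve(eq)
True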

“all_Integral”:

This is the same as “all”, except if a hint also has a corresponding “_Integral” hint, it only returns the “_Integral” hint. This is useful if “all” causes pdsolve() to hang because of a difficult or impossible integral. This meta-hint will also be much faster than “all”, because integrate() is an expensive routine.

See also the classify_pde() docstring for more info on hints, and the pde docstring for a list of all supported hints.

Tips
  • You can declare the derivative of an unknown function this way:

    >>> from .. import Function, Derivative
    >>> from ..abc import x, y # x and y are the independent variables
    >>> f = Function("f")(x, y) # f is a function of x and y
    >>> # fx will be the partial derivative of f with respect to x
    >>> fx = Derivative(f, x)
    >>> # fy will be the partial derivative of f with respect to y
    >>> fy = Derivative(f, y)
    
  • See test_pde.py for many tests, which serves also as a set of examples for how to use pdsolve().

  • pdsolve always returns an Equality class (except for the case when the hint is “all” or “all_Integral”). Note that it is not possible to get an explicit solution for f(x, y) as in the case of ODEs.

  • Do help(pde.pde_hintname) to get more information on a specific hint

Examples

>>> from .pde import pdsolve
>>> from .. import Function, diff, Eq
>>> from ..abc import x, y
>>> f = Function('f')
>>> u = f(x, y)
>>> ux = u.diff(x)
>>> uy = u.diff(y)
>>> eq = Eq(1 + (2*(ux/u)) + (3*(uy/u)))
>>> pdsolve(eq)
Eq(f(x, y), F(3*x - 2*y)*exp(-2*x/13 - 3*y/13))
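
The solvefun keyword described above controls the name used for the arbitrary function in the result. A minimal sketch reusing eq from above (G is just an alternative name chosen for illustration):

>>> pdsolve(eq, solvefun=Function('G'))
Eq(f(x, y), G(3*x - 2*y)*exp(-2*x/13 - 3*y/13))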

modelparameters.sympy.solvers.polysys module

Solvers of systems of polynomial equations.

exception modelparameters.sympy.solvers.polysys.SolveFailed[source]

Bases: Exception

Raised when solver’s conditions weren’t met.

modelparameters.sympy.solvers.polysys.solve_biquadratic(f, g, opt)[source]

Solve a system of two bivariate quadratic polynomial equations.

Examples

>>> from ..polys import Options, Poly
>>> from ..abc import x, y
>>> from .polysys import solve_biquadratic
>>> NewOption = Options((x, y), {'domain': 'ZZ'})
>>> a = Poly(y**2 - 4 + x, y, x, domain='ZZ')
>>> b = Poly(y*2 + 3*x - 7, y, x, domain='ZZ')
>>> solve_biquadratic(a, b, NewOption)
[(1/3, 3), (41/27, 11/9)]
>>> a = Poly(y + x**2 - 3, y, x, domain='ZZ')
>>> b = Poly(-y + x - 4, y, x, domain='ZZ')
>>> solve_biquadratic(a, b, NewOption)
[(-sqrt(29)/2 + 7/2, -sqrt(29)/2 - 1/2), (sqrt(29)/2 + 7/2, -1/2 + sqrt(29)/2)]
modelparameters.sympy.solvers.polysys.solve_generic(polys, opt)[source]

Solve a generic system of polynomial equations.

Returns all possible solutions over C[x_1, x_2, …, x_m] of a set F = { f_1, f_2, …, f_n } of polynomial equations, using a Groebner basis approach. For now only zero-dimensional systems are supported, which means F can have at most a finite number of solutions.

The algorithm works by the fact that, supposing G is the basis of F with respect to an elimination order (here lexicographic order is used), G and F generate the same ideal and therefore have the same set of solutions. By the elimination property, if G is a reduced, zero-dimensional Groebner basis, then it contains a univariate polynomial (in its last variable). This can be solved by computing its roots. Substituting all computed roots for the last (eliminated) variable into the other elements of G, a new polynomial system is generated. Applying the above procedure recursively, a finite number of solutions can be found.

The ability to find all solutions by this procedure depends on the root finding algorithms. If no solutions are found, it only means that roots() failed; it does not mean the system is unsolvable. To overcome this difficulty, use numerical algorithms instead.

References

[Buchberger01] B. Buchberger, Groebner Bases: A Short Introduction for Systems Theorists, In: R. Moreno-Diaz, B. Buchberger, J.L. Freire, Proceedings of EUROCAST’01, February, 2001

[Cox97] D. Cox, J. Little, D. O’Shea, Ideals, Varieties and Algorithms, Springer, Second Edition, 1997, pp. 112

Examples

>>> from ..polys import Poly, Options
>>> from .polysys import solve_generic
>>> from ..abc import x, y
>>> NewOption = Options((x, y), {'domain': 'ZZ'})
>>> a = Poly(x - y + 5, x, y, domain='ZZ')
>>> b = Poly(x + y - 3, x, y, domain='ZZ')
>>> solve_generic([a, b], NewOption)
[(-1, 4)]
>>> a = Poly(x - 2*y + 5, x, y, domain='ZZ')
>>> b = Poly(2*x - y - 3, x, y, domain='ZZ')
>>> solve_generic([a, b], NewOption)
[(11/3, 13/3)]
>>> a = Poly(x**2 + y, x, y, domain='ZZ')
>>> b = Poly(x + y*4, x, y, domain='ZZ')
>>> solve_generic([a, b], NewOption)
[(0, 0), (1/4, -1/16)]
modelparameters.sympy.solvers.polysys.solve_poly_system(seq, *gens, **args)[source]

Solve a system of polynomial equations.

Examples

>>> from .. import solve_poly_system
>>> from ..abc import x, y
>>> solve_poly_system([x*y - 2*y, 2*y**2 - x**2], x, y)
[(0, 0), (2, -sqrt(2)), (2, sqrt(2))]
modelparameters.sympy.solvers.polysys.solve_triangulated(polys, *gens, **args)[source]

Solve a polynomial system using Gianni-Kalkbrenner algorithm.

The algorithm proceeds by computing one Groebner basis in the ground domain and then by iteratively computing polynomial factorizations in appropriately constructed algebraic extensions of the ground domain.

Examples

>>> from .polysys import solve_triangulated
>>> from ..abc import x, y, z
>>> F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]
>>> solve_triangulated(F, x, y, z)
[(0, 0, 1), (0, 1, 0), (1, 0, 0)]

References

1. Patrizia Gianni, Teo Mora, Algebraic Solution of System of Polynomial Equations using Groebner Bases, AAECC-5 on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, LNCS 356 247–257, 1989

modelparameters.sympy.solvers.recurr module

This module is intended for solving recurrences or, in other words, difference equations. Currently supported are linear, inhomogeneous equations with polynomial or rational coefficients.

The solutions are obtained among polynomials, rational functions, hypergeometric terms, or combinations of hypergeometric terms which are pairwise dissimilar.

rsolve_X functions were meant as a low level interface for rsolve which would use Mathematica’s syntax.

Given a recurrence relation:

\[a_{k}(n) y(n+k) + a_{k-1}(n) y(n+k-1) + ... + a_{0}(n) y(n) = f(n)\]

where k > 0 and a_{i}(n) are polynomials in n. To use rsolve_X we need to put all coefficients into a list L of k+1 elements in the following way:

L = [ a_{0}(n), ..., a_{k-1}(n), a_{k}(n) ]

where L[i], for i = 0, ..., k, maps to a_{i}(n) y(n+i) (y(n+i) is implicit).

For example, if we would like to compute the m-th Bernoulli polynomial up to a constant (the example is taken from the rsolve_poly docstring), then we would use the recurrence b(n+1) - b(n) = m n^{m-1}, which has the solution b(n) = B_m + C.

Then L = [-1, 1] and f(n) = m n^(m-1), and finally, for m = 4:

>>> from .. import Symbol, bernoulli, rsolve_poly
>>> n = Symbol('n', integer=True)
>>> rsolve_poly([-1, 1], 4*n**3, n)
C0 + n**4 - 2*n**3 + n**2
>>> bernoulli(4, n)
n**4 - 2*n**3 + n**2 - 1/30

For the sake of completeness, f(n) can be:

  1. a polynomial -> rsolve_poly

  2. a rational function -> rsolve_ratio

  3. a hypergeometric function -> rsolve_hyper

modelparameters.sympy.solvers.recurr.rsolve(f, y, init=None)[source]

Solve univariate recurrence with rational coefficients.

Given a k-th order linear recurrence L y = f, or equivalently:

\[a_{k}(n) y(n+k) + a_{k-1}(n) y(n+k-1) + \cdots + a_{0}(n) y(n) = f(n)\]

where a_{i}(n), for i = 0, ..., k, are polynomials or rational functions in n, and f is a hypergeometric function or a sum of a fixed number of pairwise dissimilar hypergeometric terms in n, finds all solutions or returns None, if none were found.

Initial conditions can be given as a dictionary in two forms:

  1. {   n_0  : v_0,   n_1  : v_1, ...,   n_m  : v_m }

  2. { y(n_0) : v_0, y(n_1) : v_1, ..., y(n_m) : v_m }

or as a list L of values:

L = [ v_0, v_1, ..., v_m ]

where L[i] = v_i, for i = 0, ..., m, maps to y(n_i).

Examples

Let's consider the following recurrence:

\[(n - 1) y(n + 2) - (n^2 + 3 n - 2) y(n + 1) + 2 n (n + 1) y(n) = 0\]
>>> from .. import Function, rsolve
>>> from ..abc import n
>>> y = Function('y')
>>> f = (n - 1)*y(n + 2) - (n**2 + 3*n - 2)*y(n + 1) + 2*n*(n + 1)*y(n)
>>> rsolve(f, y(n))
2**n*C0 + C1*factorial(n)
>>> rsolve(f, y(n), { y(0):0, y(1):3 })
3*2**n - 3*factorial(n)
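
The initial conditions may equivalently be given as a list of values, as described above. A minimal sketch (assuming the list form maps positionally to y(0), y(1), ...; it reuses f and y from this example):

>>> rsolve(f, y(n), [0, 3])
3*2**n - 3*factorial(n)
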
modelparameters.sympy.solvers.recurr.rsolve_hyper(coeffs, f, n, **hints)[source]

Given a linear recurrence operator L of order k with polynomial coefficients and the inhomogeneous equation L y = f, we seek all hypergeometric solutions over a field K of characteristic zero.

The inhomogeneous part can be either hypergeometric or a sum of a fixed number of pairwise dissimilar hypergeometric terms.

The algorithm performs three basic steps:

  1. Group together similar hypergeometric terms in the inhomogeneous part of L y = f, and find a particular solution using Abramov’s algorithm.

  2. Compute a generating set of L and find a basis in it, so that all solutions are linearly independent.

  3. Form the final solution with the number of arbitrary constants equal to the dimension of the basis of L.

A term a(n) is hypergeometric if it is annihilated by a first order linear difference equation with polynomial coefficients or, in simpler words, if the ratio of consecutive terms is a rational function.

The output of this procedure is a linear combination of a fixed number of hypergeometric terms. However, the underlying method can generate a larger class of solutions - D’Alembertian terms.

Note also that this method not only computes the kernel of the inhomogeneous equation, but also reduces it to a basis so that the solutions generated by this procedure are linearly independent.

Examples

>>> from ..solvers import rsolve_hyper
>>> from ..abc import x
>>> rsolve_hyper([-1, -1, 1], 0, x)
C0*(1/2 + sqrt(5)/2)**x + C1*(-sqrt(5)/2 + 1/2)**x
>>> rsolve_hyper([-1, 1], 1 + x, x)
C0 + x*(x + 1)/2

modelparameters.sympy.solvers.recurr.rsolve_poly(coeffs, f, n, **hints)[source]

Given a linear recurrence operator L of order k with polynomial coefficients and the inhomogeneous equation L y = f, where f is a polynomial, we seek all polynomial solutions over a field K of characteristic zero.

The algorithm performs two basic steps:

  1. Compute degree N of the general polynomial solution.

  2. Find all polynomials of degree N or less of L y = f.

There are two methods for computing the polynomial solutions. If the degree bound is relatively small, i.e. it's smaller than or equal to the order of the recurrence, then the naive method of undetermined coefficients is used. This gives a system of algebraic equations with N+1 unknowns.

In the other case, the algorithm performs a transformation of the initial equation to an equivalent one for which the system of algebraic equations has only r indeterminates. This method is quite sophisticated (in comparison with the naive one) and was invented jointly by Abramov, Bronstein and Petkovsek.

It is possible to generalize the algorithm implemented here to the case of linear q-difference and differential equations.

Let's say that we would like to compute the m-th Bernoulli polynomial up to a constant. For this we can use the recurrence b(n+1) - b(n) = m n^{m-1}, which has the solution b(n) = B_m + C. For example:

>>> from .. import Symbol, rsolve_poly
>>> n = Symbol('n', integer=True)
>>> rsolve_poly([-1, 1], 4*n**3, n)
C0 + n**4 - 2*n**3 + n**2

modelparameters.sympy.solvers.recurr.rsolve_ratio(coeffs, f, n, **hints)[source]

Given a linear recurrence operator L of order k with polynomial coefficients and the inhomogeneous equation L y = f, where f is a polynomial, we seek all rational solutions over a field K of characteristic zero.

This procedure accepts only polynomials; however, if you are interested in solving a recurrence with rational coefficients, then use rsolve, which will pre-process the given equation and run this procedure with polynomial arguments.

The algorithm performs two basic steps:

  1. Compute a polynomial v(n) which can be used as a universal denominator of any rational solution of the equation L y = f.

  2. Construct a new linear difference equation by the substitution y(n) = u(n)/v(n) and solve it for u(n), finding all its polynomial solutions. Return None if none were found.

The algorithm implemented here is a revised version of Abramov's original algorithm, developed in 1989. The new approach is much simpler to implement and has better overall efficiency. This method can be easily adapted to the case of q-difference equations.

Besides finding rational solutions alone, this function is an important part of the Hyper algorithm, where it is used to find a particular solution of the inhomogeneous part of a recurrence.

Examples

>>> from ..abc import x
>>> from .recurr import rsolve_ratio
>>> rsolve_ratio([-2*x**3 + x**2 + 2*x - 1, 2*x**3 + x**2 - 6*x,
... - 2*x**3 - 11*x**2 - 18*x - 9, 2*x**3 + 13*x**2 + 22*x + 8], 0, x)
C2*(2*x - 3)/(2*(x**2 - 1))

See also

rsolve_hyper

modelparameters.sympy.solvers.solvers module

This module contains solvers for all kinds of equations:

  • algebraic or transcendental, use solve()

  • recurrence, use rsolve()

  • differential, use dsolve()

  • nonlinear (numerically), use nsolve() (you will need a good starting point)

modelparameters.sympy.solvers.solvers.check_assumptions(expr, against=None, **assumptions)[source]

Checks whether expression expr satisfies all assumptions.

assumptions is a dict of assumptions: {‘assumption’: True|False, …}.

Examples

>>> from .. import Symbol, pi, I, exp, check_assumptions
>>> check_assumptions(-5, integer=True)
True
>>> check_assumptions(pi, real=True, integer=False)
True
>>> check_assumptions(pi, real=True, negative=True)
False
>>> check_assumptions(exp(I*pi/7), real=False)
True
>>> x = Symbol('x', real=True, positive=True)
>>> check_assumptions(2*x + 1, real=True, positive=True)
True
>>> check_assumptions(-2*x - 5, real=True, positive=True)
False

To check assumptions of expr against another variable or expression, pass the expression or variable as against.

>>> check_assumptions(2*x + 1, x)
True

None is returned if check_assumptions() could not conclude.

>>> check_assumptions(2*x - 1, real=True, positive=True)
>>> z = Symbol('z')
>>> check_assumptions(z, real=True)
modelparameters.sympy.solvers.solvers.checksol(f, symbol, sol=None, **flags)[source]

Checks whether sol is a solution of equation f == 0.

Input can be either a single symbol and corresponding value or a dictionary of symbols and values. When given as a dictionary and flag simplify=True, the values in the dictionary will be simplified. f can be a single equation or an iterable of equations. A solution must satisfy all equations in f to be considered valid; if a solution does not satisfy any equation, False is returned; if one or more checks are inconclusive (and none are False) then None is returned.

Examples

>>> from .. import symbols
>>> from ..solvers import checksol
>>> x, y = symbols('x,y')
>>> checksol(x**4 - 1, x, 1)
True
>>> checksol(x**4 - 1, x, 0)
False
>>> checksol(x**2 + y**2 - 5**2, {x: 3, y: 4})
True

To check if an expression is zero using checksol, pass it as f and send an empty dictionary for symbol:

>>> checksol(x**2 + x - x*(x + 1), {})
True

None is returned if checksol() could not conclude.

flags:
‘numerical=True (default)’

do a fast numerical check if f has only one symbol.

‘minimal=True (default is False)’

a very fast, minimal testing.

‘warn=True (default is False)’

show a warning if checksol() could not conclude.

‘simplify=True (default)’

simplify solution before substituting into function and simplify the function before trying specific simplifications

‘force=True (default is False)’

make positive all symbols without assumptions regarding sign.
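
As an illustrative sketch of flag usage (reusing x and y from the examples above; minimal=True only changes how the check is performed, not the result for this exact solution):

>>> checksol(x**2 + y**2 - 5**2, {x: 3, y: 4}, minimal=True)
True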

modelparameters.sympy.solvers.solvers.denoms(eq, *symbols)[source]

Return (recursively) set of all denominators that appear in eq that contain any symbol in symbols; if symbols are not provided then all denominators will be returned.

Examples

>>> from .solvers import denoms
>>> from ..abc import x, y, z
>>> from .. import sqrt
>>> denoms(x/y)
{y}
>>> denoms(x/(y*z))
{y, z}
>>> denoms(3/x + y/z)
{x, z}
>>> denoms(x/2 + y/z)
{2, z}

If symbols are provided then only denominators containing those symbols will be returned

>>> denoms(1/x + 1/y + 1/z, y, z)
{y, z}
modelparameters.sympy.solvers.solvers.det_minor(M)[source]

Return the det(M) computed from minors without introducing new nesting in products.

See also

det_perm, det_quick

modelparameters.sympy.solvers.solvers.det_perm(M)[source]

Return the det(M) by using permutations to select factors. For sizes larger than 8, where the number of permutations becomes prohibitively large, or if there are no symbols in the matrix, it is better to use the standard determinant routines, e.g. M.det().

See also

det_minor, det_quick

modelparameters.sympy.solvers.solvers.det_quick(M, method=None)[source]

Return det(M) assuming that either there are lots of zeros or the size of the matrix is small. If this assumption is not met, then the normal Matrix.det function will be used with method = method.
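
A small illustrative sketch (the symbols here are chosen only for illustration):

>>> from .solvers import det_quick
>>> from .. import Matrix, symbols
>>> a, b, c, d = symbols('a b c d')
>>> det_quick(Matrix([[a, b], [c, d]]))
a*d - b*c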

See also

det_minor, det_perm

modelparameters.sympy.solvers.solvers.inv_quick(M)[source]

Return the inverse of M, assuming that either there are lots of zeros or the size of the matrix is small.
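
A small illustrative sketch with a simple numeric matrix:

>>> from .solvers import inv_quick
>>> from .. import Matrix
>>> inv_quick(Matrix([[2, 0], [0, 4]]))
Matrix([
[1/2,   0],
[  0, 1/4]])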

modelparameters.sympy.solvers.solvers.minsolve_linear_system(system, *symbols, **flags)[source]

Find a particular solution to a linear system.

In particular, try to find a solution with the minimal possible number of non-zero variables using a naive algorithm with exponential complexity. If quick=True, a heuristic is used.
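
The docstring gives no example; the following is a minimal sketch under the stated interface (an augmented matrix, as for solve_linear_system), where the heuristic is expected to assign zero to all but one variable:

>>> from .solvers import minsolve_linear_system
>>> from .. import Matrix
>>> from ..abc import x, y, z
>>> sol = minsolve_linear_system(Matrix([[1, 1, 1, 3]]), x, y, z, quick=True)  # e.g. {x: 3, y: 0, z: 0}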

modelparameters.sympy.solvers.solvers.nsolve(*args, **kwargs)[source]

Solve a nonlinear equation system numerically:

nsolve(f, [args,] x0, modules=['mpmath'], **kwargs)

f is a vector function of symbolic expressions representing the system. args are the variables. If there is only one variable, this argument can be omitted. x0 is a starting vector close to a solution.

Use the modules keyword to specify which modules should be used to evaluate the function and the Jacobian matrix. Make sure to use a module that supports matrices. For more information on the syntax, please see the docstring of lambdify.

If the keyword arguments contain ‘dict’=True (default is False) nsolve will return a list (perhaps empty) of solution mappings. This might be especially useful if you want to use nsolve as a fallback to solve since using the dict argument for both methods produces return values of consistent type structure. Please note: to keep this consistency with solve, the solution will be returned in a list even though nsolve (currently at least) only finds one solution at a time.

Overdetermined systems are supported.

>>> from .. import Symbol, nsolve
>>> import sympy
>>> import mpmath
>>> mpmath.mp.dps = 15
>>> x1 = Symbol('x1')
>>> x2 = Symbol('x2')
>>> f1 = 3 * x1**2 - 2 * x2**2 - 1
>>> f2 = x1**2 - 2 * x1 + x2**2 + 2 * x2 - 8
>>> print(nsolve((f1, f2), (x1, x2), (-1, 1)))
Matrix([[-1.19287309935246], [1.27844411169911]])

For one-dimensional functions the syntax is simplified:

>>> from .. import sin, nsolve
>>> from ..abc import x
>>> nsolve(sin(x), x, 2)
3.14159265358979
>>> nsolve(sin(x), 2)
3.14159265358979

To solve with higher precision than the default, use the prec argument.

>>> from .. import cos
>>> nsolve(cos(x) - x, 1)
0.739085133215161
>>> nsolve(cos(x) - x, 1, prec=50)
0.73908513321516064165531208767387340401341175890076
>>> cos(_)
0.73908513321516064165531208767387340401341175890076

To solve for complex roots of real functions, a nonreal initial point must be specified:

>>> from .. import I
>>> nsolve(x**2 + 2, I)
1.4142135623731*I

mpmath.findroot is used, and you can find more extensive documentation there, especially concerning keyword parameters and available solvers. Note, however, that for functions which are very steep near the root, the verification of the solution may fail. In this case you should use the flag verify=False and independently verify the solution.

>>> from .. import cos, cosh
>>> from ..abc import i
>>> f = cos(x)*cosh(x) - 1
>>> nsolve(f, 3.14*100)
Traceback (most recent call last):
...
ValueError: Could not find root within given tolerance. (1.39267e+230 > 2.1684e-19)
>>> ans = nsolve(f, 3.14*100, verify=False); ans
312.588469032184
>>> f.subs(x, ans).n(2)
2.1e+121
>>> (f/f.diff(x)).subs(x, ans).n(2)
7.4e-15

One might safely skip the verification if bounds of the root are known and a bisection method is used:

>>> bounds = lambda i: (3.14*i, 3.14*(i + 1))
>>> nsolve(f, bounds(100), solver='bisect', verify=False)
315.730061685774

Alternatively, a function may be better behaved when the denominator is ignored. Since this is not always the case, however, the decision of what function to use is left to the discretion of the user.

>>> eq = x**2/(1 - x)/(1 - 2*x)**2 - 100
>>> nsolve(eq, 0.46)
Traceback (most recent call last):
...
ValueError: Could not find root within given tolerance. (10000 > 2.1684e-19)
Try another starting point or tweak arguments.
>>> nsolve(eq.as_numer_denom()[0], 0.46)
0.46792545969349058
modelparameters.sympy.solvers.solvers.solve(f, *symbols, **flags)[source]

Algebraically solves equations and systems of equations.

Currently supported are:
  • polynomial,

  • transcendental

  • piecewise combinations of the above

  • systems of linear and polynomial equations

  • systems containing relational expressions.

Input is formed as:

  • f
    • a single Expr or Poly that must be zero,

    • an Equality

    • a Relational expression or boolean

    • iterable of one or more of the above

  • symbols (object(s) to solve for) specified as
    • none given (other non-numeric objects will be used)

    • single symbol

    • denested list of symbols e.g. solve(f, x, y)

    • ordered iterable of symbols e.g. solve(f, [x, y])

  • flags
    ‘dict’=True (default is False)

    return list (perhaps empty) of solution mappings

    ‘set’=True (default is False)

    return list of symbols and set of tuple(s) of solution(s)

    ‘exclude=[] (default)’

    don’t try to solve for any of the free symbols in exclude; if expressions are given, the free symbols in them will be extracted automatically.

    ‘check=True (default)’

    If False, don’t do any testing of solutions. This can be useful if one wants to include solutions that make any denominator zero.

    ‘numerical=True (default)’

    do a fast numerical check if f has only one symbol.

    ‘minimal=True (default is False)’

    a very fast, minimal testing.

    ‘warn=True (default is False)’

    show a warning if checksol() could not conclude.

    ‘simplify=True (default)’

    simplify all but polynomials of order 3 or greater before returning them and (if check is not False) use the general simplify function on the solutions and the expression obtained when they are substituted into the function which should be zero

    ‘force=True (default is False)’

    make positive all symbols without assumptions regarding sign.

    ‘rational=True (default)’

    recast Floats as Rational; if this option is not used, the system containing floats may fail to solve because of issues with polys. If rational=None, Floats will be recast as rationals but the answer will be recast as Floats. If the flag is False then nothing will be done to the Floats.

    ‘manual=True (default is False)’

    do not use the polys/matrix method to solve a system of equations, solve them one at a time as you might “manually”

    ‘implicit=True (default is False)’

    allows solve to return a solution for a pattern in terms of other functions that contain that pattern; this is only needed if the pattern is inside of some invertible function like cos, exp, ….

    ‘particular=True (default is False)’

    instructs solve to try to find a particular solution to a linear system with as many zeros as possible; this is very expensive

    ‘quick=True (default is False)’

    when using particular=True, use a fast heuristic instead to find a solution with many zeros (instead of using the very slow method guaranteed to find the largest number of zeros possible)

    ‘cubics=True (default)’

    return explicit solutions when cubic expressions are encountered

    ‘quartics=True (default)’

    return explicit solutions when quartic expressions are encountered

    ‘quintics=True (default)’

    return explicit solutions (if possible) when quintic expressions are encountered

Examples

The output varies according to the input and can be seen by example:

>>> from .. import solve, Poly, Eq, Function, exp
>>> from ..abc import x, y, z, a, b
>>> f = Function('f')
  • boolean or univariate Relational

    >>> solve(x < 3)
    (-oo < x) & (x < 3)
    
  • to always get a list of solution mappings, use flag dict=True

    >>> solve(x - 3, dict=True)
    [{x: 3}]
    >>> sol = solve([x - 3, y - 1], dict=True)
    >>> sol
    [{x: 3, y: 1}]
    >>> sol[0][x]
    3
    >>> sol[0][y]
    1
    
  • to get a list of symbols and set of solution(s) use flag set=True

    >>> solve([x**2 - 3, y - 1], set=True)
    ([x, y], {(-sqrt(3), 1), (sqrt(3), 1)})
    
  • single expression and single symbol that is in the expression

    >>> solve(x - y, x)
    [y]
    >>> solve(x - 3, x)
    [3]
    >>> solve(Eq(x, 3), x)
    [3]
    >>> solve(Poly(x - 3), x)
    [3]
    >>> solve(x**2 - y**2, x, set=True)
    ([x], {(-y,), (y,)})
    >>> solve(x**4 - 1, x, set=True)
    ([x], {(-1,), (1,), (-I,), (I,)})
    
  • single expression with no symbol that is in the expression

    >>> solve(3, x)
    []
    >>> solve(x - 3, y)
    []
    
  • single expression with no symbol given

    In this case, all free symbols will be selected as potential symbols to solve for. If the equation is univariate then a list of solutions is returned; otherwise – as is the case when symbols are given as an iterable of length > 1 – a list of mappings will be returned.

    >>> solve(x - 3)
    [3]
    >>> solve(x**2 - y**2)
    [{x: -y}, {x: y}]
    >>> solve(z**2*x**2 - z**2*y**2)
    [{x: -y}, {x: y}, {z: 0}]
    >>> solve(z**2*x - z**2*y**2)
    [{x: y**2}, {z: 0}]
    
  • when an object other than a Symbol is given as a symbol, it is isolated algebraically and an implicit solution may be obtained. This is mostly provided as a convenience to save one from replacing the object with a Symbol and solving for that Symbol. It will only work if the specified object can be replaced with a Symbol using the subs method.

    >>> solve(f(x) - x, f(x))
    [x]
    >>> solve(f(x).diff(x) - f(x) - x, f(x).diff(x))
    [x + f(x)]
    >>> solve(f(x).diff(x) - f(x) - x, f(x))
    [-x + Derivative(f(x), x)]
    >>> solve(x + exp(x)**2, exp(x), set=True)
    ([exp(x)], {(-sqrt(-x),), (sqrt(-x),)})
    
    >>> from .. import Indexed, IndexedBase, Tuple, sqrt
    >>> A = IndexedBase('A')
    >>> eqs = Tuple(A[1] + A[2] - 3, A[1] - A[2] + 1)
    >>> solve(eqs, eqs.atoms(Indexed))
    {A[1]: 1, A[2]: 2}
    
    • To solve for a symbol implicitly, use ‘implicit=True’:

      >>> solve(x + exp(x), x)
      [-LambertW(1)]
      >>> solve(x + exp(x), x, implicit=True)
      [-exp(x)]
      
    • It is possible to solve for anything that can be targeted with subs:

      >>> solve(x + 2 + sqrt(3), x + 2)
      [-sqrt(3)]
      >>> solve((x + 2 + sqrt(3), x + 4 + y), y, x + 2)
      {y: -2 + sqrt(3), x + 2: -sqrt(3)}
      
    • Nothing heroic is done in this implicit solving so you may end up with a symbol still in the solution:

      >>> eqs = (x*y + 3*y + sqrt(3), x + 4 + y)
      >>> solve(eqs, y, x + 2)
      {y: -sqrt(3)/(x + 3), x + 2: (-2*x - 6 + sqrt(3))/(x + 3)}
      >>> solve(eqs, y*x, x)
      {x: -y - 4, x*y: -3*y - sqrt(3)}
      
    • if you attempt to solve for a number remember that the number you have obtained does not necessarily mean that the value is equivalent to the expression obtained:

      >>> solve(sqrt(2) - 1, 1)
      [sqrt(2)]
      >>> solve(x - y + 1, 1)  # /!\ -1 is targeted, too
      [x/(y - 1)]
      >>> [_.subs(z, -1) for _ in solve((x - y + 1).subs(-1, z), 1)]
      [-x + y]
      
    • To solve for a function within a derivative, use dsolve.

  • single expression and more than 1 symbol

    • when there is a linear solution

      >>> solve(x - y**2, x, y)
      [{x: y**2}]
      >>> solve(x**2 - y, x, y)
      [{y: x**2}]
      
    • when undetermined coefficients are identified

      • that are linear

        >>> solve((a + b)*x - b + 2, a, b)
        {a: -2, b: 2}
        
      • that are nonlinear

        >>> solve((a + b)*x - b**2 + 2, a, b, set=True)
        ([a, b], {(-sqrt(2), sqrt(2)), (sqrt(2), -sqrt(2))})
        
    • if there is no linear solution then the first successful attempt for a nonlinear solution will be returned

      >>> solve(x**2 - y**2, x, y)
      [{x: -y}, {x: y}]
      >>> solve(x**2 - y**2/exp(x), x, y)
      [{x: 2*LambertW(y/2)}]
      >>> solve(x**2 - y**2/exp(x), y, x)
      [{y: -x*sqrt(exp(x))}, {y: x*sqrt(exp(x))}]
      
  • iterable of one or more of the above

    • involving relationals or bools

      >>> solve([x < 3, x - 2])
      Eq(x, 2)
      >>> solve([x > 3, x - 2])
      False
      
    • when the system is linear

      • with a solution

        >>> solve([x - 3], x)
        {x: 3}
        >>> solve((x + 5*y - 2, -3*x + 6*y - 15), x, y)
        {x: -3, y: 1}
        >>> solve((x + 5*y - 2, -3*x + 6*y - 15), x, y, z)
        {x: -3, y: 1}
        >>> solve((x + 5*y - 2, -3*x + 6*y - z), z, x, y)
        {x: -5*y + 2, z: 21*y - 6}
        
      • without a solution

        >>> solve([x + 3, x - 3])
        []
        
    • when the system is not linear

      >>> solve([x**2 + y -2, y**2 - 4], x, y, set=True)
      ([x, y], {(-2, -2), (0, 2), (2, -2)})
      
    • if no symbols are given, all free symbols will be selected and a list of mappings returned

      >>> solve([x - 2, x**2 + y])
      [{x: 2, y: -4}]
      >>> solve([x - 2, x**2 + f(x)], {f(x), x})
      [{x: 2, f(x): -4}]
      
    • if any equation doesn’t depend on the symbol(s) given it will be eliminated from the equation set and an answer may be given implicitly in terms of variables that were not of interest

      >>> solve([x - y, y - 3], x)
      {x: y}
      

Notes

solve() with check=True (default) will run through the symbol tags to eliminate unwanted solutions. If no assumptions are included, all possible solutions will be returned.

>>> from .. import Symbol, solve
>>> x = Symbol("x")
>>> solve(x**2 - 1)
[-1, 1]

By using the positive tag only one solution will be returned:

>>> pos = Symbol("pos", positive=True)
>>> solve(pos**2 - 1)
[1]

Assumptions aren’t checked when solve() input involves relationals or bools.

When the solutions are checked, those that make any denominator zero are automatically excluded. If you do not want to exclude such solutions then use the check=False option:

>>> from .. import sin, limit
>>> solve(sin(x)/x)  # 0 is excluded
[pi]

If check=False, then a solution to the numerator being zero is found: x = 0. In this case, this is a spurious solution since sin(x)/x has the well known limit (without discontinuity) of 1 at x = 0:

>>> solve(sin(x)/x, check=False)
[0, pi]

In the following case, however, the limit exists and is equal to the value of x = 0 that is excluded when check=True:

>>> eq = x**2*(1/x - z**2/x)
>>> solve(eq, x)
[]
>>> solve(eq, x, check=False)
[0]
>>> limit(eq, x, 0, '-')
0
>>> limit(eq, x, 0, '+')
0

Disabling high-order, explicit solutions

When solving polynomial expressions, one might not want explicit solutions (which can be quite long). If the expression is univariate, CRootOf instances will be returned instead:

>>> solve(x**3 - x + 1)
[-1/((-1/2 - sqrt(3)*I/2)*(3*sqrt(69)/2 + 27/2)**(1/3)) - (-1/2 -
sqrt(3)*I/2)*(3*sqrt(69)/2 + 27/2)**(1/3)/3, -(-1/2 +
sqrt(3)*I/2)*(3*sqrt(69)/2 + 27/2)**(1/3)/3 - 1/((-1/2 +
sqrt(3)*I/2)*(3*sqrt(69)/2 + 27/2)**(1/3)), -(3*sqrt(69)/2 +
27/2)**(1/3)/3 - 1/(3*sqrt(69)/2 + 27/2)**(1/3)]
>>> solve(x**3 - x + 1, cubics=False)
[CRootOf(x**3 - x + 1, 0),
 CRootOf(x**3 - x + 1, 1),
 CRootOf(x**3 - x + 1, 2)]

If the expression is multivariate, no solution might be returned:

>>> solve(x**3 - x + a, x, cubics=False)
[]

Sometimes solutions will be obtained even when a flag is False because the expression could be factored. In the following example, the equation can be factored as the product of a linear and a quadratic factor so explicit solutions (which did not require solving a cubic expression) are obtained:

>>> eq = x**3 + 3*x**2 + x - 1
>>> solve(eq, cubics=False)
[-1, -1 + sqrt(2), -sqrt(2) - 1]

Solving equations involving radicals

Because of SymPy’s use of the principal root (issue #8789), some solutions to radical equations will be missed unless check=False:

>>> from .. import root
>>> eq = root(x**3 - 3*x**2, 3) + 1 - x
>>> solve(eq)
[]
>>> solve(eq, check=False)
[1/3]

In the above example there is only a single solution to the equation. Other expressions will yield spurious roots which must be checked manually; roots which give a negative argument to odd-powered radicals will also need special checking:

>>> from .. import real_root, S
>>> eq = root(x, 3) - root(x, 5) + S(1)/7
>>> solve(eq)  # this gives 2 solutions but misses a 3rd
[CRootOf(7*_p**5 - 7*_p**3 + 1, 1)**15,
CRootOf(7*_p**5 - 7*_p**3 + 1, 2)**15]
>>> sol = solve(eq, check=False)
>>> [abs(eq.subs(x,i).n(2)) for i in sol]
[0.48, 0.e-110, 0.e-110, 0.052, 0.052]

The first solution is negative so real_root must be used to see that it satisfies the expression:

>>> abs(real_root(eq.subs(x, sol[0])).n(2))
0.e-110

If the roots of the equation are not real then more care will be necessary to find the roots, especially for higher order equations. Consider the following expression:

>>> expr = root(x, 3) - root(x, 5)

We will construct a known value for this expression at x = 3 by selecting the 1-th root for each radical:

>>> expr1 = root(x, 3, 1) - root(x, 5, 1)
>>> v = expr1.subs(x, -3)

The solve function is unable to find any exact roots to this equation:

>>> eq = Eq(expr, v); eq1 = Eq(expr1, v)
>>> solve(eq, check=False), solve(eq1, check=False)
([], [])

The function unrad, however, can be used to get a form of the equation for which numerical roots can be found:

>>> from .solvers import unrad
>>> from .. import nroots
>>> e, (p, cov) = unrad(eq)
>>> pvals = nroots(e)
>>> inversion = solve(cov, x)[0]
>>> xvals = [inversion.subs(p, i) for i in pvals]

Although eq or eq1 could have been used to find xvals, the solution can only be verified with expr1:

>>> z = expr - v
>>> [xi.n(chop=1e-9) for xi in xvals if abs(z.subs(x, xi).n()) < 1e-9]
[]
>>> z1 = expr1 - v
>>> [xi.n(chop=1e-9) for xi in xvals if abs(z1.subs(x, xi).n()) < 1e-9]
[-3.0]
modelparameters.sympy.solvers.solvers.solve_linear(lhs, rhs=0, symbols=[], exclude=[])[source]

Return a tuple derived from f = lhs - rhs that is one of the following:

(0, 1) meaning that f is independent of the symbols in symbols that aren’t in exclude, e.g:

>>> from .solvers import solve_linear
>>> from ..abc import x, y, z
>>> from .. import cos, sin
>>> eq = y*cos(x)**2 + y*sin(x)**2 - y  # = y*(1 - 1) = 0
>>> solve_linear(eq)
(0, 1)
>>> eq = cos(x)**2 + sin(x)**2  # = 1
>>> solve_linear(eq)
(0, 1)
>>> solve_linear(x, exclude=[x])
(0, 1)

(0, 0) meaning that there is no solution to the equation amongst the symbols given.

(If the first element of the tuple is not zero then the function is guaranteed to be dependent on a symbol in symbols.)

(symbol, solution) where symbol appears linearly in the numerator of f, is in symbols (if given) and is not in exclude (if given). No simplification is done to f other than a mul=True expansion, so the solution will correspond strictly to a unique solution.

(n, d) where n and d are the numerator and denominator of f when the numerator was not linear in any symbol of interest; n will never be a symbol unless a solution for that symbol was found (in which case the second element is the solution, not the denominator).

Examples

>>> from ..core.power import Pow
>>> from ..polys.polytools import cancel

The variable x appears as a linear variable in each of the following:

>>> solve_linear(x + y**2)
(x, -y**2)
>>> solve_linear(1/x - y**2)
(x, y**(-2))

When not linear in x or y then the numerator and denominator are returned.

>>> solve_linear(x**2/y**2 - 3)
(x**2 - 3*y**2, y**2)

If the numerator of the expression is a symbol then (0, 0) is returned if the solution for that symbol would have set any denominator to 0:

>>> eq = 1/(1/x - 2)
>>> eq.as_numer_denom()
(x, -2*x + 1)
>>> solve_linear(eq)
(0, 0)

But automatic rewriting may cause a symbol in the denominator to appear in the numerator so a solution will be returned:

>>> (1/x)**-1
x
>>> solve_linear((1/x)**-1)
(x, 0)

Use an unevaluated expression to avoid this:

>>> solve_linear(Pow(1/x, -1, evaluate=False))
(0, 0)

If x is allowed to cancel in the following expression, then it appears to be linear in x, but this sort of cancellation is not done by solve_linear so the solution will always satisfy the original expression without causing a division by zero error.

>>> eq = x**2*(1/x - z**2/x)
>>> solve_linear(cancel(eq))
(x, 0)
>>> solve_linear(eq)
(x**2*(-z**2 + 1), x)

A list of symbols for which a solution is desired may be given:

>>> solve_linear(x + y + z, symbols=[y])
(y, -x - z)

A list of symbols to ignore may also be given:

>>> solve_linear(x + y + z, exclude=[x])
(y, -x - z)

(A solution for y is obtained because it is the first variable from the canonically sorted list of symbols that had a linear solution.)

modelparameters.sympy.solvers.solvers.solve_linear_system(system, *symbols, **flags)[source]

Solve system of N linear equations with M variables, which means both under- and overdetermined systems are supported. The possible number of solutions is zero, one or infinite. Respectively, this procedure will return None or a dictionary with solutions. In the case of underdetermined systems, all arbitrary parameters are skipped. This may cause a situation in which an empty dictionary is returned. In that case, all symbols can be assigned arbitrary values.

Input to this function is an Nx(M+1) matrix, which means it has to be in augmented form. If you prefer to enter N equations and M unknowns then use solve(Neqs, *Msymbols) instead. Note: a local copy of the matrix is made by this routine so the matrix that is passed will not be modified.

The algorithm used here is fraction-free Gaussian elimination, which results, after elimination, in an upper-triangular matrix. Then solutions are found using back-substitution. This approach is more efficient and compact than the Gauss-Jordan method.

>>> from .. import Matrix, solve_linear_system
>>> from ..abc import x, y

Solve the following system:

   x + 4 y ==  2
-2 x +   y == 14
>>> system = Matrix(( (1, 4, 2), (-2, 1, 14)))
>>> solve_linear_system(system, x, y)
{x: -6, y: 2}

A degenerate system returns an empty dictionary.

>>> system = Matrix(( (0,0,0), (0,0,0) ))
>>> solve_linear_system(system, x, y)
{}
modelparameters.sympy.solvers.solvers.solve_linear_system_LU(matrix, syms)[source]

Solves the augmented matrix system using LUsolve and returns a dictionary in which solutions are keyed to the symbols of syms as ordered.

The matrix must be invertible.

Examples

>>> from .. import Matrix
>>> from ..abc import x, y, z
>>> from .solvers import solve_linear_system_LU
>>> solve_linear_system_LU(Matrix([
... [1, 2, 0, 1],
... [3, 2, 2, 1],
... [2, 0, 0, 1]]), [x, y, z])
{x: 1/2, y: 1/4, z: -1/2}

See also

sympy.matrices.LUsolve

modelparameters.sympy.solvers.solvers.solve_undetermined_coeffs(equ, coeffs, sym, **flags)[source]

Solve an equation of the type p(x; a_1, …, a_k) == q(x) where both p and q are univariate polynomials and p depends on k parameters. The result of this function is a dictionary with symbolic values of those parameters with respect to coefficients in q.

This function accepts both Equality (Eq) instances and ordinary SymPy expressions. Specification of parameters and the variable is obligatory for efficiency and simplicity reasons.

>>> from .. import Eq
>>> from ..abc import a, b, c, x
>>> from ..solvers import solve_undetermined_coeffs
>>> solve_undetermined_coeffs(Eq(2*a*x + a+b, x), [a, b], x)
{a: 1/2, b: -1/2}
>>> solve_undetermined_coeffs(Eq(a*c*x + a+b, x), [a, b], x)
{a: 1/c, b: -1/c}
modelparameters.sympy.solvers.solvers.unrad(eq, *syms, **flags)[source]

Remove radicals with symbolic arguments and return (eq, cov), None or raise an error:

None is returned if there are no radicals to remove.

NotImplementedError is raised if there are radicals and they cannot be removed or if the relationship between the original symbols and the change of variable needed to rewrite the system as a polynomial cannot be solved.

Otherwise the tuple, (eq, cov), is returned where:

``eq``, ``cov``
    ``eq`` is an equation without radicals (in the symbol(s) of
    interest) whose solutions are a superset of the solutions to the
    original expression. ``eq`` might be re-written in terms of a new
    variable; the relationship to the original variables is given by
    ``cov`` which is a list containing ``v`` and ``v**p - b`` where
    ``p`` is the power needed to clear the radical and ``b`` is the
    radical now expressed as a polynomial in the symbols of interest.
    For example, for sqrt(2 - x) the tuple would be
    ``(c, c**2 - 2 + x)``. The solutions of ``eq`` will contain
    solutions to the original equation (if there are any).
``syms``
    an iterable of symbols which, if provided, will limit the focus of
    radical removal: only radicals with one or more of the symbols of
    interest will be cleared. All free symbols are used if ``syms`` is
    not set.

flags are used internally for communication during recursive calls. Two options are also recognized:

``take``, when defined, is interpreted as a single-argument function
that returns True if a given Pow should be handled.

Radicals can be removed from an expression if:

*   all bases of the radicals are the same; a change of variables is
    done in this case.
*   if all radicals appear in one term of the expression
*   there are only 4 terms with sqrt() factors or there are less than
    four terms having sqrt() factors
*   there are only two terms with radicals

Examples

>>> from .solvers import unrad
>>> from ..abc import x
>>> from .. import sqrt, Rational, root, real_roots, solve
>>> unrad(sqrt(x)*x**Rational(1, 3) + 2)
(x**5 - 64, [])
>>> unrad(sqrt(x) + root(x + 1, 3))
(x**3 - x**2 - 2*x - 1, [])
>>> eq = sqrt(x) + root(x, 3) - 2
>>> unrad(eq)
(_p**3 + _p**2 - 2, [_p, _p**6 - x])

modelparameters.sympy.solvers.solveset module

This module contains functions to:

  • solve a single equation for a single variable, in any domain, either real or complex.

  • solve a system of linear equations with N variables and M equations.

  • solve a system of nonlinear equations with N variables and M equations.

modelparameters.sympy.solvers.solveset.domain_check(f, symbol, p)[source]

Returns False if point p is infinite or any subexpression of f is infinite or becomes so after replacing symbol with p. If none of these conditions is met then True will be returned.

Examples

>>> from .. import Mul, oo
>>> from ..abc import x
>>> from .solveset import domain_check
>>> g = 1/(1 + (1/(x + 1))**2)
>>> domain_check(g, x, -1)
False
>>> domain_check(x**2, x, 0)
True
>>> domain_check(1/x, x, oo)
False
  • The function relies on the assumption that the original form of the equation has not been changed by automatic simplification.

>>> domain_check(x/x, x, 0) # x/x is automatically simplified to 1
True
  • To deal with automatic evaluations use evaluate=False:

>>> domain_check(Mul(x, 1/x, evaluate=False), x, 0)
False
modelparameters.sympy.solvers.solveset.invert_complex(f_x, y, x, domain=S.Complexes)

Reduce the complex valued equation f(x) = y to a set of equations {g(x) = h_1(y), g(x) = h_2(y), ..., g(x) = h_n(y) } where g(x) is a simpler function than f(x). The return value is a tuple (g(x), set_h), where g(x) is a function of x and set_h is the set of function {h_1(y), h_2(y), ..., h_n(y)}. Here, y is not necessarily a symbol.

The set_h contains the functions, along with the information about the domain in which they are valid, through set operations. For instance, if y = Abs(x) - n is inverted in the real domain, then set_h is not simply {-n, n} as the nature of n is unknown; rather, it is: Intersection([0, oo) {n}) U Intersection((-oo, 0], {-n})

By default, the complex domain is used which means that inverting even seemingly simple functions like exp(x) will give very different results from those obtained in the real domain. (In the case of exp(x), the inversion via log is multi-valued in the complex domain, having infinitely many branches.)

If you are working with real values only (or you are not sure which function to use) you should probably set the domain to S.Reals (or use invert_real which does that automatically).

Examples

>>> from .solveset import invert_complex, invert_real
>>> from ..abc import x, y
>>> from .. import exp, log

When does exp(x) == y?

>>> invert_complex(exp(x), y, x)
(x, ImageSet(Lambda(_n, I*(2*_n*pi + arg(y)) + log(Abs(y))), S.Integers))
>>> invert_real(exp(x), y, x)
(x, Intersection(S.Reals, {log(y)}))

When does exp(x) == 1?

>>> invert_complex(exp(x), 1, x)
(x, ImageSet(Lambda(_n, 2*_n*I*pi), S.Integers))
>>> invert_real(exp(x), 1, x)
(x, {0})
modelparameters.sympy.solvers.solveset.invert_real(f_x, y, x, domain=S.Reals)[source]

Inverts a real-valued function. Same as _invert, but sets the domain to S.Reals before inverting.
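
A minimal sketch (output shown as typically printed; the exact set representation may differ by version):

>>> from .solveset import invert_real
>>> from ..abc import x
>>> invert_real(x**2, 4, x)
(x, {-2, 2})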

modelparameters.sympy.solvers.solveset.linear_eq_to_matrix(equations, *symbols)[source]

Converts a given system of equations into matrix form. Here, equations must be a linear system of equations in symbols. The order of symbols in the input symbols will determine the order of coefficients in the returned Matrix.

The Matrix form corresponds to the augmented matrix form. For example:

\[4x + 2y + 3z = 1\]
\[3x + y + z = -6\]
\[2x + 4y + 9z = 2\]

This system would return A & b as given below:

    [ 4  2  3 ]          [ 1 ]
A = [ 3  1  1 ]   b  =   [-6 ]
    [ 2  4  9 ]          [ 2 ]

Examples

>>> from .. import linear_eq_to_matrix, symbols
>>> x, y, z = symbols('x, y, z')
>>> eqns = [x + 2*y + 3*z - 1, 3*x + y + z + 6, 2*x + 4*y + 9*z - 2]
>>> A, b = linear_eq_to_matrix(eqns, [x, y, z])
>>> A
Matrix([
[1, 2, 3],
[3, 1, 1],
[2, 4, 9]])
>>> b
Matrix([
[ 1],
[-6],
[ 2]])
>>> eqns = [x + z - 1, y + z, x - y]
>>> A, b = linear_eq_to_matrix(eqns, [x, y, z])
>>> A
Matrix([
[1,  0, 1],
[0,  1, 1],
[1, -1, 0]])
>>> b
Matrix([
[1],
[0],
[0]])
  • Symbolic coefficients are also supported

>>> a, b, c, d, e, f = symbols('a, b, c, d, e, f')
>>> eqns = [a*x + b*y - c, d*x + e*y - f]
>>> A, B = linear_eq_to_matrix(eqns, x, y)
>>> A
Matrix([
[a, b],
[d, e]])
>>> B
Matrix([
[c],
[f]])
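
The matrices produced this way can be passed directly to linsolve in its (A, b) form. A minimal sketch reusing A and B from above (the result matches the symbolic-coefficient example in the linsolve documentation below):

>>> from .. import linsolve
>>> linsolve((A, B), x, y)
{((-b*f + c*e)/(a*e - b*d), (a*f - c*d)/(a*e - b*d))}
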
modelparameters.sympy.solvers.solveset.linsolve(system, *symbols)[source]

Solve a system of N linear equations with M variables, which means both under- and overdetermined systems are supported. The possible number of solutions is zero, one or infinite. Zero solutions throws a ValueError, whereas infinite solutions are represented parametrically in terms of the given symbols. For a unique solution, a FiniteSet of an ordered tuple is returned.

All standard input formats are supported. For the given set of equations, the respective input types are given below:

\[3x + 2y - z = 1\]
\[2x - 2y + 4z = -2\]
\[2x - y + 2z = 0\]
  • Augmented Matrix Form, system given below:

         [3   2  -1  1]
system = [2  -2   4 -2]
         [2  -1   2  0]
  • List Of Equations Form

system = [3x + 2y - z - 1, 2x - 2y + 4z + 2, 2x - y + 2z]

  • Input A & b Matrix Form (from Ax = b) are given as below:

    [3   2  -1 ]         [  1 ]
A = [2  -2   4 ]    b =  [ -2 ]
    [2  -1   2 ]         [  0 ]

system = (A, b)

Symbols to solve for should be given as input in all cases, either in an iterable or as comma-separated arguments. This is done to maintain consistency in returning solutions in the form of the variables input by the user.

The algorithm used here is Gauss-Jordan elimination, which results, after elimination, in a row echelon form matrix.
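
As a rough illustration of the elimination step (a sketch added here for clarity, not part of the original docstring: it uses Matrix.rref(), which performs Gauss-Jordan reduction, on the augmented matrix of the system shown above; linsolve's internals may differ in detail):

>>> from .. import Matrix
>>> aug = Matrix([[3, 2, -1, 1], [2, -2, 4, -2], [2, -1, 2, 0]])
>>> aug.rref()[0]  # reduced row echelon form; the last column is the solution (1, -2, -2)
Matrix([
[1, 0, 0,  1],
[0, 1, 0, -2],
[0, 0, 1, -2]])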

Returns:

  • A FiniteSet of ordered tuples of values of symbols for which the system has a solution.

    Please note that a general FiniteSet is unordered; the solution returned here is not simply a FiniteSet of solutions. Rather, it is a FiniteSet whose first and only argument is a tuple of solutions, and since that tuple is ordered, the returned solution is ordered.

    Also note that the solution could equally have been returned as a plain ordered tuple; FiniteSet is just a wrapper {} around the tuple. It has no other significance except that it is used to maintain a consistent output format throughout solveset.

  • Returns EmptySet() if the linear system is inconsistent.

Raises:

ValueError – The input is not valid. The symbols are not given.

Examples

>>> from .. import Matrix, S, linsolve, symbols
>>> x, y, z = symbols("x, y, z")
>>> A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
>>> b = Matrix([3, 6, 9])
>>> A
Matrix([
[1, 2,  3],
[4, 5,  6],
[7, 8, 10]])
>>> b
Matrix([
[3],
[6],
[9]])
>>> linsolve((A, b), [x, y, z])
{(-1, 2, 0)}
  • Parametric Solution: If the system is underdetermined, the function will return a parametric solution in terms of the given symbols. Free symbols in the system are returned as-is. For example, in the system below, z is returned as the solution for the variable z; it is a free symbol and can take arbitrary values.

>>> A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> b = Matrix([3, 6, 9])
>>> linsolve((A, b), [x, y, z])
{(z - 1, -2*z + 2, z)}
  • List of Equations as input

>>> Eqns = [3*x + 2*y - z - 1, 2*x - 2*y + 4*z + 2, - x + S(1)/2*y - z]
>>> linsolve(Eqns, x, y, z)
{(1, -2, -2)}
  • Augmented Matrix as input

>>> aug = Matrix([[2, 1, 3, 1], [2, 6, 8, 3], [6, 8, 18, 5]])
>>> aug
Matrix([
[2, 1,  3, 1],
[2, 6,  8, 3],
[6, 8, 18, 5]])
>>> linsolve(aug, x, y, z)
{(3/10, 2/5, 0)}
  • Solve for symbolic coefficients

>>> a, b, c, d, e, f = symbols('a, b, c, d, e, f')
>>> eqns = [a*x + b*y - c, d*x + e*y - f]
>>> linsolve(eqns, x, y)
{((-b*f + c*e)/(a*e - b*d), (a*f - c*d)/(a*e - b*d))}
  • A degenerate system returns the solution as a set of the given symbols.

>>> system = Matrix(([0,0,0], [0,0,0], [0,0,0]))
>>> linsolve(system, x, y)
{(x, y)}
  • For an empty system linsolve returns an empty set

>>> linsolve([ ], x)
EmptySet()
modelparameters.sympy.solvers.solveset.nonlinsolve(system, *symbols)[source]

Solve a system of N nonlinear equations in M variables; both under- and overdetermined systems are supported. Positive-dimensional systems are also supported (a system with infinitely many solutions is said to be positive-dimensional); in a positive-dimensional system the solution depends on at least one symbol. Both the real and the complex solutions are returned (if the system has them). The possible number of solutions is zero, one or infinitely many.

Parameters:
  • system (list of equations) – The target system of equations

  • symbols (list of Symbols) – symbols should be given as a sequence, e.g. a list

Returns:

  • A FiniteSet of ordered tuples of values of symbols for which the system has a solution. The order of values in each tuple matches the order of the symbols in the parameter symbols.

    Please note that a general FiniteSet is unordered; the solution returned here is not simply a FiniteSet of solutions. Rather, it is a FiniteSet whose first and only argument is a tuple of solutions, and since that tuple is ordered, the returned solution is ordered.

    Also note that the solution could equally have been returned as a plain ordered tuple; FiniteSet is just a wrapper {} around the tuple. It has no other significance except that it is used to maintain a consistent output format throughout solveset.

  • For the following set of equations, the respective input types are given below:

\[xy - 1 = 0\]
\[4x^2 + y^2 - 5 = 0\]

system = [x*y - 1, 4*x**2 + y**2 - 5]

symbols = [x, y]

Raises:
  • ValueError – The input is not valid. The symbols are not given.

  • AttributeError – The input symbols are not Symbol type.

Examples

>>> from ..core.symbol import symbols
>>> from .solveset import nonlinsolve
>>> x, y, z = symbols('x, y, z', real=True)
>>> nonlinsolve([x*y - 1, 4*x**2 + y**2 - 5], [x, y])
{(-1, -1), (-1/2, -2), (1/2, 2), (1, 1)}
  1. Positive-dimensional system and complements:

>>> from .. import pprint
>>> from ..polys.polytools import is_zero_dimensional
>>> a, b, c, d = symbols('a, b, c, d', real=True)
>>> eq1 =  a + b + c + d
>>> eq2 = a*b + b*c + c*d + d*a
>>> eq3 = a*b*c + b*c*d + c*d*a + d*a*b
>>> eq4 = a*b*c*d - 1
>>> system = [eq1, eq2, eq3, eq4]
>>> is_zero_dimensional(system)
False
>>> pprint(nonlinsolve(system, [a, b, c, d]), use_unicode=False)
  -1       1               1      -1
{(---, -d, -, {d} \ {0}), (-, -d, ---, {d} \ {0})}
   d       d               d       d
>>> nonlinsolve([(x+y)**2 - 4, x + y - 2], [x, y])
{(-y + 2, y)}

2. If some of the equations are non-polynomial, then nonlinsolve will call the substitution function and return real and complex solutions, if present.

>>> from .. import exp, sin
>>> nonlinsolve([exp(x) - sin(y), y**2 - 4], [x, y])
{(log(sin(2)), 2), (ImageSet(Lambda(_n, I*(2*_n*pi + pi) +
    log(sin(2))), S.Integers), -2), (ImageSet(Lambda(_n, 2*_n*I*pi +
    Mod(log(sin(2)), 2*I*pi)), S.Integers), 2)}

3. If the system is a nonlinear, zero-dimensional polynomial system, then it returns both the real and the complex solutions (if present), using solve_poly_system:

>>> from .. import sqrt
>>> nonlinsolve([x**2 - 2*y**2 -2, x*y - 2], [x, y])
{(-2, -1), (2, 1), (-sqrt(2)*I, sqrt(2)*I), (sqrt(2)*I, -sqrt(2)*I)}

4. nonlinsolve can solve some linear (zero- or positive-dimensional) systems, because it uses the groebner function to get the Groebner basis and the substitution function then uses that basis as the new system. But it is not recommended to solve a linear system with nonlinsolve, because linsolve is better for all kinds of linear systems.

>>> nonlinsolve([x + 2*y -z - 3, x - y - 4*z + 9 , y + z - 4], [x, y, z])
{(3*z - 5, -z + 4, z)}

5. A system of polynomial equations for which only real solutions are present (solved using solve_poly_system):

>>> e1 = sqrt(x**2 + y**2) - 10
>>> e2 = sqrt(y**2 + (-x + 10)**2) - 3
>>> nonlinsolve((e1, e2), (x, y))
{(191/20, -3*sqrt(391)/20), (191/20, 3*sqrt(391)/20)}
>>> nonlinsolve([x**2 + 2/y - 2, x + y - 3], [x, y])
{(1, 2), (1 + sqrt(5), -sqrt(5) + 2), (-sqrt(5) + 1, 2 + sqrt(5))}
>>> nonlinsolve([x**2 + 2/y - 2, x + y - 3], [y, x])
{(2, 1), (2 + sqrt(5), -sqrt(5) + 1), (-sqrt(5) + 2, 1 + sqrt(5))}

6. It is better to use symbols instead of trigonometric functions or undefined functions (e.g. replace sin(x) with a symbol, replace f(x) with a symbol and so on; get the solution from nonlinsolve and then use solveset to get the value of x). A small sketch of this workflow follows.
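
The sketch below is an illustration added for clarity (not part of the original docstring): the symbol u is introduced here as a stand-in for sin(x), and the exact printed form of the final ImageSet may differ between versions.

>>> from .solveset import solveset
>>> from .. import S, sin
>>> u = symbols('u', real=True)        # u plays the role of sin(x)
>>> nonlinsolve([u + y - 2, u - y], [u, y])
{(1, 1)}
>>> solveset(sin(x) - 1, x, S.Reals)   # recover x from sin(x) = 1
ImageSet(Lambda(_n, 2*_n*pi + pi/2), S.Integers)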

How nonlinsolve is better than the old solver _solve_system:

1. A positive-dimensional system solver: nonlinsolve can return solutions for positive-dimensional systems. It finds the Groebner basis of the positive-dimensional system (call it basis), then starts solving the equation with the fewest variables first in the basis using solveset, and substitutes the solved values into the other equations of the basis to obtain the solution in terms of a minimal number of variables. The important points are how the known values are substituted and into which equations. (A small Groebner-basis sketch is given after this list.)

2. Both real and complex solutions: nonlinsolve returns both real and complex solutions. If all the equations in the system are polynomial, then both real and complex solutions are returned using solve_poly_system. If not all equations are polynomial, it goes to the substitution method with the polynomial and non-polynomial equation(s) to solve for the unsolved variables. To solve for a particular variable, solveset_real and solveset_complex are used. For both real and complex solutions the function _solve_using_known_values is used inside the substitution function (the substitution function is called whenever any non-polynomial equation is present). When a solution is valid, its general solution is added to the final result.

3. Complements and intersections are added if any: nonlinsolve maintains dicts for complements and intersections. If solveset finds complements and/or intersections with any interval or set during the execution of the substitution function, then the complement and/or intersection for that variable is added before returning the final solution.
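
To illustrate point 1, here is a minimal sketch (added for clarity, not the actual nonlinsolve code path) of the Groebner-basis-plus-back-substitution idea, applied to the zero-dimensional system from the first example; it assumes a lexicographic monomial order, and the exact basis returned may vary.

>>> from .. import groebner, S
>>> from .solveset import solveset
>>> basis = list(groebner([x*y - 1, 4*x**2 + y**2 - 5], x, y, order='lex')); basis
[4*x + y**3 - 5*y, y**4 - 5*y**2 + 4]
>>> solveset(basis[1], y, S.Reals)             # univariate in y: solve it first
{-2, -1, 1, 2}
>>> solveset(basis[0].subs(y, 2), x, S.Reals)  # back-substitute one of the roots
{1/2}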

modelparameters.sympy.solvers.solveset.solve_decomposition(f, symbol, domain)[source]

Function to solve equations via the principle of “Decomposition and Rewriting”. A manual sketch of this principle is given after the examples below.

Examples

>>> from .. import exp, sin, Symbol, pprint, S
>>> from .solveset import solve_decomposition as sd
>>> x = Symbol('x')
>>> f1 = exp(2*x) - 3*exp(x) + 2
>>> sd(f1, x, S.Reals)
{0, log(2)}
>>> f2 = sin(x)**2 + 2*sin(x) + 1
>>> pprint(sd(f2, x, S.Reals), use_unicode=False)
          3*pi
{2*n*pi + ---- | n in S.Integers}
           2
>>> f3 = sin(x + 2)
>>> pprint(sd(f3, x, S.Reals), use_unicode=False)
{2*n*pi - 2 | n in S.Integers} U {pi*(2*n + 1) - 2 | n in S.Integers}
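
To see what “Decomposition and Rewriting” means in practice, here is a manual sketch of the second example above (added for clarity; it mirrors the idea rather than the actual implementation, and the printed representative of the periodic solution may differ between versions):

>>> from .decompogen import decompogen
>>> from .solveset import solveset
>>> decompogen(f2, x)                       # f2 = sin(x)**2 + 2*sin(x) + 1
[x**2 + 2*x + 1, sin(x)]
>>> solveset(x**2 + 2*x + 1, x, S.Reals)    # solve the outer component first
{-1}
>>> solveset(sin(x) + 1, x, S.Reals)        # rewrite: sin(x) must equal -1
ImageSet(Lambda(_n, 2*_n*pi + 3*pi/2), S.Integers)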
modelparameters.sympy.solvers.solveset.solveset(f, symbol=None, domain=S.Complexes)[source]

Solves a given inequality or equation with set as output

Parameters:
  • f (Expr or a relational.) – The target equation or inequality

  • symbol (Symbol) – The variable for which the equation is solved

  • domain (Set) – The domain over which the equation is solved

Returns:

  • Set – A set of values for symbol for which f is True or is equal to zero. An EmptySet is returned if f is False or nonzero. A ConditionSet is returned as an unsolved object if algorithms to evaluate the complete solution are not yet implemented.

  • solveset claims to be complete in the solution set that it returns.

Raises:
  • NotImplementedError – The algorithms to solve inequalities in complex domain are not yet implemented.

  • ValueError – The input is not valid.

  • RuntimeError – It is a bug, please report to the github issue tracker.

Notes

Python interprets 0 and 1 as False and True, respectively, but in this function they refer to solutions of an expression. So 0 and 1 return the Domain and EmptySet, respectively, while True and False return the opposite (as they are assumed to be solutions of relational expressions).
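
The following small sketch (added for illustration; the printed names of the returned sets may vary between versions) shows this behaviour:

>>> from .solveset import solveset
>>> from .. import S, Symbol
>>> x = Symbol('x')
>>> solveset(0, x, S.Reals)        # the zero expression vanishes everywhere
S.Reals
>>> solveset(1, x, S.Reals)        # a nonzero constant is never zero
EmptySet()
>>> solveset(S.true, x, S.Reals)   # a true relational holds on the whole domain
S.Reals
>>> solveset(S.false, x, S.Reals)
EmptySet()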

See also

solveset_real

solver for real domain

solveset_complex

solver for complex domain

Examples

>>> from .. import exp, sin, Symbol, pprint, S
>>> from .solveset import solveset, solveset_real
  • The default domain is complex. Not specifying a domain will lead to the solving of the equation in the complex domain (and this is not affected by the assumptions on the symbol):

>>> x = Symbol('x')
>>> pprint(solveset(exp(x) - 1, x), use_unicode=False)
{2*n*I*pi | n in S.Integers}
>>> x = Symbol('x', real=True)
>>> pprint(solveset(exp(x) - 1, x), use_unicode=False)
{2*n*I*pi | n in S.Integers}
  • If you want to use solveset to solve the equation in the real domain, provide a real domain. (Using solveset_real does this automatically.)

>>> R = S.Reals
>>> x = Symbol('x')
>>> solveset(exp(x) - 1, x, R)
{0}
>>> solveset_real(exp(x) - 1, x)
{0}

The solution is mostly unaffected by assumptions on the symbol, but there may be some slight difference:

>>> pprint(solveset(sin(x)/x,x), use_unicode=False)
({2*n*pi | n in S.Integers} \ {0}) U ({2*n*pi + pi | n in S.Integers} \ {0})
>>> p = Symbol('p', positive=True)
>>> pprint(solveset(sin(p)/p, p), use_unicode=False)
{2*n*pi | n in S.Integers} U {2*n*pi + pi | n in S.Integers}
  • Inequalities can be solved over the real domain only. Use of a complex domain leads to a NotImplementedError.

>>> solveset(exp(x) > 1, x, R)
Interval.open(0, oo)
modelparameters.sympy.solvers.solveset.solveset_complex(f, symbol)[source]
modelparameters.sympy.solvers.solveset.solveset_real(f, symbol)[source]
modelparameters.sympy.solvers.solveset.solvify(f, symbol, domain)[source]

Solves an equation using solveset and returns the solution in accordance with the solve output API.

Returns:

  We classify the output based on the type of solution returned by solveset.

  Solution    |    Output
  ----------------------------------------
  FiniteSet   |    list
  ImageSet,   |    list (if f is periodic)
  Union       |
  EmptySet    |    empty list
  Others      |    None

Raises:

NotImplementedError – A ConditionSet is the input.

Examples

>>> from .solveset import solvify, solveset
>>> from ..abc import x
>>> from .. import S, tan, sin, exp
>>> solvify(x**2 - 9, x, S.Reals)
[-3, 3]
>>> solvify(sin(x) - 1, x, S.Reals)
[pi/2]
>>> solvify(tan(x), x, S.Reals)
[0]
>>> solvify(exp(x) - 1, x, S.Complexes)
>>> solvify(exp(x) - 1, x, S.Reals)
[0]
modelparameters.sympy.solvers.solveset.substitution(system, symbols, result=[{}], known_symbols=[], exclude=[], all_symbols=None)[source]

Solves the system using the substitution method. It is used in nonlinsolve, which calls this function when any of the equations is a non-polynomial equation.

Parameters:
  • system (list of equations) – The target system of equations

  • symbols (list of symbols to be solved for) – The variable(s) for which the system is solved

  • known_symbols (list of solved symbols) – Values are known for these variable(s)

  • result (an empty list or a list of dicts) – If no symbol values are known, an empty list; otherwise a list of dicts with symbols as keys and the corresponding known values.

  • exclude (set of expressions) – Mostly denominator expression(s) of the equations of the system. The final solution should not satisfy these expressions.

  • all_symbols (known_symbols + symbols(unsolved).) –

Returns:

  • A FiniteSet of ordered tuples of values of all_symbols for which the system has a solution. The order of values in each tuple matches the order of the symbols in the parameter all_symbols; if the parameter all_symbols is None, it matches the order of the symbols in the parameter symbols.

    Please note that a general FiniteSet is unordered; the solution returned here is not simply a FiniteSet of solutions. Rather, it is a FiniteSet whose first and only argument is a tuple of solutions, and since that tuple is ordered, the returned solution is ordered.

    Also note that the solution could equally have been returned as a plain ordered tuple; FiniteSet is just a wrapper {} around the tuple. It has no other significance except that it is used to maintain a consistent output format throughout solveset.

Raises:
  • ValueError – The input is not valid. The symbols are not given.

  • AttributeError – The input symbols are not Symbol type.

Examples

>>> from ..core.symbol import symbols
>>> x, y = symbols('x, y', real=True)
>>> from .solveset import substitution
>>> substitution([x + y], [x], [{y: 1}], [y], set([]), [x, y])
{(-1, 1)}
  • When you want the solution to not satisfy the equation x + 1 = 0:

>>> substitution([x + y], [x], [{y: 1}], [y], set([x + 1]), [y, x])
EmptySet()
>>> substitution([x + y], [x], [{y: 1}], [y], set([x - 1]), [y, x])
{(1, -1)}
>>> substitution([x + y - 1, y - x**2 + 5], [x, y])
{(-3, 4), (2, -1)}
  • Returns both real and complex solutions

>>> x, y, z = symbols('x, y, z')
>>> from .. import exp, sin
>>> substitution([exp(x) - sin(y), y**2 - 4], [x, y])
{(log(sin(2)), 2), (ImageSet(Lambda(_n, I*(2*_n*pi + pi) +
    log(sin(2))), S.Integers), -2), (ImageSet(Lambda(_n, 2*_n*I*pi +
    Mod(log(sin(2)), 2*I*pi)), S.Integers), 2)}
>>> eqs = [z**2 + exp(2*x) - sin(y), -3 + exp(-y)]
>>> substitution(eqs, [y, z])
{(-log(3), -sqrt(-exp(2*x) - sin(log(3)))),
(-log(3), sqrt(-exp(2*x) - sin(log(3)))),
(ImageSet(Lambda(_n, 2*_n*I*pi + Mod(-log(3), 2*I*pi)), S.Integers),
ImageSet(Lambda(_n, -sqrt(-exp(2*x) + sin(2*_n*I*pi +
Mod(-log(3), 2*I*pi)))), S.Integers)),
(ImageSet(Lambda(_n, 2*_n*I*pi + Mod(-log(3), 2*I*pi)), S.Integers),
ImageSet(Lambda(_n, sqrt(-exp(2*x) + sin(2*_n*I*pi +
    Mod(-log(3), 2*I*pi)))), S.Integers))}

Module contents

A module for solving all kinds of equations.

Examples

>>> from ..solvers import solve
>>> from ..abc import x
>>> solve(x**5+5*x**4+10*x**3+10*x**2+5*x+1,x)
[-1]