modelparameters.sympy.utilities package

Subpackages

Submodules

modelparameters.sympy.utilities.autowrap module

Module for compiling codegen output and wrapping the binary for use in Python.

Note

To use the autowrap module it must first be imported

>>> from .autowrap import autowrap

This module provides a common interface for different external backends, such as f2py, fwrap, Cython and SWIG (currently only f2py and Cython are implemented). The goal is to provide access to compiled binaries of acceptable performance with a one-button user interface, i.e.

>>> from ..abc import x,y
>>> expr = ((x - y)**(25)).expand()
>>> binary_callable = autowrap(expr)
>>> binary_callable(1, 2)
-1.0

The callable returned from autowrap() is a binary Python function, not a SymPy object. To use the compiled function in symbolic expressions, use binary_function() instead, which returns a SymPy Function object. The binary callable is attached as the _imp_ attribute and invoked when a numerical evaluation is requested with evalf() or through lambdify().

>>> from .autowrap import binary_function
>>> f = binary_function('f', expr)
>>> 2*f(x, y) + y
y + 2*f(x, y)
>>> (2*f(x, y) + y).evalf(2, subs={x: 1, y:2})
0.e-110

The idea is that a SymPy user will primarily be interested in working with mathematical expressions, and should not have to learn details about wrapping tools in order to evaluate expressions numerically, even if they are computationally expensive.

When is this useful?

  1. For computations on large arrays, Python iterations may be too slow, and depending on the mathematical expression, it may be difficult to exploit the advanced index operations provided by NumPy.

  2. For really long expressions that will be called repeatedly, the compiled binary should be significantly faster than SymPy’s .evalf()

  3. If you are generating code with the codegen utility in order to use it in another project, the automatic Python wrappers let you test the binaries immediately from within SymPy.

  4. To create customized ufuncs for use with numpy arrays. See ufuncify.

When is this module NOT the best approach?

  1. If you are really concerned about speed or memory optimizations, you will probably get better results by working directly with the wrapper tools and the low level code. However, the files generated by this utility may provide a useful starting point and reference code. Temporary files will be left intact if you supply the keyword tempdir="path/to/files/".

  2. If the array computation can be handled easily by numpy, and you don’t need the binaries for another project.

exception modelparameters.sympy.utilities.autowrap.CodeWrapError[source]

Bases: Exception

class modelparameters.sympy.utilities.autowrap.CodeWrapper(generator, filepath=None, flags=[], verbose=False)[source]

Bases: object

Base Class for code wrappers

property filename
property include_empty
property include_header
property module_name
wrap_code(routine, helpers=[])[source]
class modelparameters.sympy.utilities.autowrap.CythonCodeWrapper(*args, **kwargs)[source]

Bases: CodeWrapper

Wrapper that uses Cython

property command
dump_pyx(routines, f, prefix)[source]

Write a Cython file with python wrappers

This file contains all the definitions of the routines in c code and refers to the header file.

Parameters:
  • routines – List of Routine instances

  • f – File-like object to write the file to

  • prefix – The filename prefix, used to refer to the proper header file. Only the basename of the prefix is used.

pyx_func = 'def {name}_c({arg_string}):\n\n{declarations}{body}'
pyx_header = "cdef extern from '{header_file}.h':\n    {prototype}\n\n"
pyx_imports = 'import numpy as np\ncimport numpy as np\n\n'
setup_template = 'try:\n    from setuptools import setup\n    from setuptools import Extension\nexcept ImportError:\n    from distutils.core import setup\n    from distutils.extension import Extension\nfrom Cython.Build import cythonize\ncy_opts = {cythonize_options}\n{np_import}\next_mods = [Extension(\n    {ext_args},\n    include_dirs={include_dirs},\n    library_dirs={library_dirs},\n    libraries={libraries},\n    extra_compile_args={extra_compile_args},\n    extra_link_args={extra_link_args}\n)]\nsetup(ext_modules=cythonize(ext_mods, **cy_opts))\n'
std_compile_flag = '-std=c99'
class modelparameters.sympy.utilities.autowrap.DummyWrapper(generator, filepath=None, flags=[], verbose=False)[source]

Bases: CodeWrapper

Class used for testing independent of backends

template = '# dummy module for testing of SymPy\ndef %(name)s():\n    return "%(expr)s"\n%(name)s.args = "%(args)s"\n%(name)s.returns = "%(retvals)s"\n'
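The %-style template above is filled with a routine's metadata and the result is loaded as an ordinary Python module. A minimal sketch of that fill-and-load cycle (the field names come from the template shown; the exec-based loading is only an illustration of the idea, not the wrapper's actual mechanism):

```python
# Fill the DummyWrapper template shown above and load the result as
# if it were the generated "dummy module".
template = (
    '# dummy module for testing of SymPy\n'
    'def %(name)s():\n'
    '    return "%(expr)s"\n'
    '%(name)s.args = "%(args)s"\n'
    '%(name)s.returns = "%(retvals)s"\n'
)

source = template % {
    "name": "f",
    "expr": "x + y",
    "args": "x, y",
    "retvals": "f_result",
}

namespace = {}
exec(source, namespace)  # load the generated source into a dict namespace
f = namespace["f"]

print(f())        # the stringified expression: "x + y"
print(f.args)     # "x, y"
print(f.returns)  # "f_result"
```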
class modelparameters.sympy.utilities.autowrap.F2PyCodeWrapper(*args, **kwargs)[source]

Bases: CodeWrapper

Wrapper that uses f2py

property command
class modelparameters.sympy.utilities.autowrap.UfuncifyCodeWrapper(*args, **kwargs)[source]

Bases: CodeWrapper

Wrapper for Ufuncify

property command
dump_c(routines, f, prefix, funcname=None)[source]

Write a C file with python wrappers

This file contains all the definitions of the routines in c code.

Parameters:
  • routines – List of Routine instances

  • f – File-like object to write the file to

  • prefix – The filename prefix, used to name the imported module.

  • funcname – Name of the main function to be returned.

dump_setup(f)[source]
wrap_code(routines, helpers=None)[source]
modelparameters.sympy.utilities.autowrap.autowrap(expr, language=None, backend='f2py', tempdir=None, args=None, flags=None, verbose=False, helpers=None, code_gen=None, **kwargs)[source]

Generates Python callable binaries based on the math expression.

Parameters:
  • expr – The SymPy expression that should be wrapped as a binary routine.

  • language (string, optional) – If supplied, (options: ‘C’ or ‘F95’), specifies the language of the generated code. If None [default], the language is inferred based upon the specified backend.

  • backend (string, optional) – Backend used to wrap the generated code. Either ‘f2py’ [default], or ‘cython’.

  • tempdir (string, optional) – Path to directory for temporary files. If this argument is supplied, the generated code and the wrapper input files are left intact in the specified path.

  • args (iterable, optional) – An ordered iterable of symbols. Specifies the argument sequence for the function.

  • flags (iterable, optional) – Additional option flags that will be passed to the backend.

  • verbose (bool, optional) – If True, autowrap will not mute the command line backends. This can be helpful for debugging.

  • helpers (iterable, optional) – Used to define auxiliary expressions needed for the main expr. If the main expression needs to call a specialized function, it should be put in the helpers iterable. Autowrap will then make sure that the compiled main expression can link to the helper routine. Items should be tuples of (<function_name>, <sympy_expression>, <arguments>). It is mandatory to supply an argument sequence to helper routines.

  • code_gen (CodeGen instance) – An instance of a CodeGen subclass. Overrides language.

  • include_dirs ([string]) – A list of directories to search for C/C++ header files (in Unix form for portability).

  • library_dirs ([string]) – A list of directories to search for C/C++ libraries at link time.

  • libraries ([string]) – A list of library names (not filenames or paths) to link against.

  • extra_compile_args ([string]) – Any extra platform- and compiler-specific information to use when compiling the source files in ‘sources’. For platforms and compilers where “command line” makes sense, this is typically a list of command-line arguments, but for other platforms it could be anything.

  • extra_link_args ([string]) – Any extra platform- and compiler-specific information to use when linking object files together to create the extension (or to create a new static Python interpreter). Similar interpretation as for ‘extra_compile_args’.

Examples

>>> from ..abc import x, y, z
>>> from .autowrap import autowrap
>>> expr = ((x - y + z)**(13)).expand()
>>> binary_func = autowrap(expr)
>>> binary_func(1, 4, 2)
-1.0
modelparameters.sympy.utilities.autowrap.binary_function(symfunc, expr, **kwargs)[source]

Returns a SymPy function with expr as its binary implementation

This is a convenience function that automates the steps needed to autowrap the SymPy expression and attach it to a Function object with implemented_function().

Parameters:
  • symfunc (sympy Function) – The function to bind the callable to.

  • expr (sympy Expression) – The expression used to generate the function.

  • kwargs (dict) – Any kwargs accepted by autowrap.

Examples

>>> from ..abc import x, y
>>> from .autowrap import binary_function
>>> expr = ((x - y)**(25)).expand()
>>> f = binary_function('f', expr)
>>> type(f)
<class 'sympy.core.function.UndefinedFunction'>
>>> 2*f(x, y)
2*f(x, y)
>>> f(x, y).evalf(2, subs={x: 1, y: 2})
-1.0
modelparameters.sympy.utilities.autowrap.ufuncify(args, expr, language=None, backend='numpy', tempdir=None, flags=None, verbose=False, helpers=None, **kwargs)[source]

Generates a binary function that supports broadcasting on numpy arrays.

Parameters:
  • args (iterable) – Either a Symbol or an iterable of symbols. Specifies the argument sequence for the function.

  • expr – A SymPy expression that defines the element-wise operation.

  • language (string, optional) – If supplied, (options: ‘C’ or ‘F95’), specifies the language of the generated code. If None [default], the language is inferred based upon the specified backend.

  • backend (string, optional) – Backend used to wrap the generated code. Either ‘numpy’ [default], ‘cython’, or ‘f2py’.

  • tempdir (string, optional) – Path to directory for temporary files. If this argument is supplied, the generated code and the wrapper input files are left intact in the specified path.

  • flags (iterable, optional) – Additional option flags that will be passed to the backend.

  • verbose (bool, optional) – If True, autowrap will not mute the command line backends. This can be helpful for debugging.

  • helpers (iterable, optional) – Used to define auxiliary expressions needed for the main expr. If the main expression needs to call a specialized function, it should be put in the helpers iterable. Autowrap will then make sure that the compiled main expression can link to the helper routine. Items should be tuples of (<function_name>, <sympy_expression>, <arguments>). It is mandatory to supply an argument sequence to helper routines.

  • kwargs (dict) – These kwargs will be passed to autowrap if the f2py or cython backend is used and ignored if the numpy backend is used.

Note

The default backend (‘numpy’) will create actual instances of numpy.ufunc. These support n-dimensional broadcasting and implicit type conversion. Use of the other backends will result in a “ufunc-like” function, which requires equal-length 1-dimensional arrays for all arguments and will not perform any type conversions.
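The distinction can be illustrated without a compiler. numpy.frompyfunc builds a genuine numpy.ufunc from a Python callable (far slower than ufuncify's compiled output, but with the same broadcasting semantics), while a hand-written wrapper restricted to equal-length 1-D arrays behaves like the other backends. The names f and f_like below are illustrative, not part of this module:

```python
import numpy as np

# A true numpy.ufunc (as the 'numpy' backend would produce, but
# interpreted rather than compiled): broadcasting just works.
# Note frompyfunc returns object-dtype results.
f = np.frompyfunc(lambda x, y: y + x**2, 2, 1)
print(f(np.arange(3), 3))  # scalar y is broadcast against the array

# A "ufunc-like" callable in the spirit of the other backends:
# equal-length 1-D arrays only, no broadcasting, no type conversion.
def f_like(x, y):
    x, y = np.asarray(x), np.asarray(y)
    if x.ndim != 1 or x.shape != y.shape:
        raise TypeError("equal length 1-dimensional arrays required")
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        out[i] = y[i] + x[i] ** 2
    return out

print(f_like(np.array([1.0, 2.0]), np.array([2.0, 2.0])))  # [3. 6.]
```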

References

[1] http://docs.scipy.org/doc/numpy/reference/ufuncs.html

Examples

>>> from .autowrap import ufuncify
>>> from ..abc import x, y
>>> import numpy as np
>>> f = ufuncify((x, y), y + x**2)
>>> type(f)
<class 'numpy.ufunc'>
>>> f([1, 2, 3], 2)
array([  3.,   6.,  11.])
>>> f(np.arange(5), 3)
array([  3.,   4.,   7.,  12.,  19.])

For the ‘f2py’ and ‘cython’ backends, inputs are required to be equal-length 1-dimensional arrays. The ‘f2py’ backend will perform type conversion, but the Cython backend will raise an error if the inputs are not of the expected type.

>>> f_fortran = ufuncify((x, y), y + x**2, backend='f2py')
>>> f_fortran(1, 2)
array([ 3.])
>>> f_fortran(np.array([1, 2, 3]), np.array([1.0, 2.0, 3.0]))
array([  2.,   6.,  12.])
>>> f_cython = ufuncify((x, y), y + x**2, backend='Cython')
>>> f_cython(1, 2)  
Traceback (most recent call last):
  ...
TypeError: Argument '_x' has incorrect type (expected numpy.ndarray, got int)
>>> f_cython(np.array([1.0]), np.array([2.0]))
array([ 3.])

modelparameters.sympy.utilities.benchmarking module

modelparameters.sympy.utilities.codegen module

Module for generating C, C++, Fortran77, Fortran90, Julia, Rust and Octave/Matlab routines that evaluate SymPy expressions. This module is a work in progress; only some of the milestones in the list below have been completed.

— How is sympy.utilities.codegen different from ..printing.ccode? —

We considered extending the printing routines for SymPy functions in such a way that they print complete compilable code, but this leads to a few insurmountable issues that can only be tackled with a dedicated code generator:

  • For C, one needs both a source file and a header file, while the printing routines generate just one string. This code generator can be extended to support .pyf files for f2py.

  • SymPy functions are not concerned with programming-technical issues, such as input, output and input-output arguments. Other examples are contiguous or non-contiguous arrays, and headers of other libraries such as gsl.

  • It is often desirable to evaluate several SymPy functions in one C routine, possibly sharing common intermediate results with the help of the cse routine. This is more than just printing.

  • From the programming perspective, expressions with constants should be evaluated in the code generator as much as possible. This is different for printing.

— Basic assumptions —

  • A generic Routine data structure describes the routine that must be translated into C/Fortran/… code. This data structure covers all features present in one or more of the supported languages.

  • Descendants from the CodeGen class transform multiple Routine instances into compilable code. Each derived class translates into a specific language.

  • In many cases, one wants a simple workflow. The friendly functions in the last part are a simple API on top of the Routine/CodeGen machinery. They are easier to use but less powerful.

— Milestones —

  • First working version with scalar input arguments, generating C code, tests

  • Friendly functions that are easier to use than the rigorous Routine/CodeGen workflow.

  • Integer and Real numbers as input and output

  • Output arguments

  • InputOutput arguments

  • Sort input/output arguments properly

  • Contiguous array arguments (numpy matrices)

  • Also generate .pyf code for f2py (in autowrap module)

  • Isolate constants and evaluate them beforehand in double precision

  • Fortran 90

  • Octave/Matlab

  • Common Subexpression Elimination

  • User defined comments in the generated code

  • Optional extra include lines for libraries/objects that can eval special functions

  • Test other C compilers and libraries: gcc, tcc, libtcc, gcc+gsl, …

  • Contiguous array arguments (sympy matrices)

  • Non-contiguous array arguments (sympy matrices)

  • ccode must raise an error when it encounters something that cannot be translated into C. ccode(integrate(sin(x)/x, x)) does not make sense.

  • Complex numbers as input and output

  • A default complex datatype

  • Include extra information in the header: date, user, hostname, sha1 hash, …

  • Fortran 77

  • C++

  • Python

  • Julia

  • Rust

class modelparameters.sympy.utilities.codegen.Argument(name, datatype=None, dimensions=None, precision=None)[source]

Bases: Variable

An abstract Argument data structure: a name and a data type.

This structure is refined in the descendants below.

class modelparameters.sympy.utilities.codegen.CCodeGen(project='project', printer=None, preprocessor_statements=None)[source]

Bases: CodeGen

Generator for C code.

The .write() method inherited from CodeGen will output a code file and an interface file, <prefix>.c and <prefix>.h respectively.

code_extension = 'c'
dump_c(routines, f, prefix, header=True, empty=True)[source]

Write the code by calling language specific methods.

The generated file contains all the definitions of the routines in low-level code and refers to the header file if appropriate.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix, used to refer to the proper header file. Only the basename of the prefix is used.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

dump_fns = [<function CCodeGen.dump_c>, <function CCodeGen.dump_h>]
dump_h(routines, f, prefix, header=True, empty=True)[source]

Writes the C header file.

This file contains all the function declarations.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix, used to construct the include guards. Only the basename of the prefix is used.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

get_prototype(routine)[source]

Returns a string for the function prototype of the routine.

If the routine has multiple result objects, a CodeGenError is raised.

See: http://en.wikipedia.org/wiki/Function_prototype

interface_extension = 'h'
standard = 'c99'
class modelparameters.sympy.utilities.codegen.CodeGen(project='project')[source]

Bases: object

Abstract class for the code generators.

dump_code(routines, f, prefix, header=True, empty=True)[source]

Write the code by calling language specific methods.

The generated file contains all the definitions of the routines in low-level code and refers to the header file if appropriate.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix, used to refer to the proper header file. Only the basename of the prefix is used.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

printer = None
routine(name, expr, argument_sequence, global_vars)[source]

Creates a Routine object that is appropriate for this language.

This implementation is appropriate for at least C/Fortran. Subclasses can override this if necessary.

Here, we assume at most one return value (the l-value), which must be scalar. Additional outputs are OutputArguments (e.g., pointers on the right-hand side or pass-by-reference). Matrices are always returned via OutputArguments. If argument_sequence is None, arguments will be ordered alphabetically, but with all InputArguments first, then OutputArguments and InOutArguments.
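The default ordering described above can be sketched with plain strings (a simplified stand-in; the real implementation orders Argument instances, not names):

```python
# Simplified stand-in for the default argument ordering: all input
# arguments first, then output and in-out arguments, each group
# sorted alphabetically. Hypothetical helper, not part of the API.
def default_argument_order(args):
    # args: list of (name, kind) with kind in {"in", "out", "inout"}
    inputs = sorted(name for name, kind in args if kind == "in")
    others = sorted(name for name, kind in args if kind != "in")
    return inputs + others

args = [("g", "inout"), ("z", "in"), ("f", "out"), ("x", "in")]
print(default_argument_order(args))  # ['x', 'z', 'f', 'g']
```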

write(routines, prefix, to_files=False, header=True, empty=True)[source]

Writes all the source code files for the given routines.

The generated source is returned as a list of (filename, contents) tuples, or is written to files (see below). Each filename consists of the given prefix, appended with an appropriate extension.

Parameters:
  • routines (list) – A list of Routine instances to be written

  • prefix (string) – The prefix for the output files

  • to_files (bool, optional) – When True, the output is written to files. Otherwise, a list of (filename, contents) tuples is returned. [default: False]

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default: True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default: True]

class modelparameters.sympy.utilities.codegen.DataType(cname, fname, pyname, jlname, octname, rsname)[source]

Bases: object

Holds strings for a certain datatype in different languages.

class modelparameters.sympy.utilities.codegen.FCodeGen(project='project', printer=None)[source]

Bases: CodeGen

Generator for Fortran 95 code

The .write() method inherited from CodeGen will output a code file and an interface file, <prefix>.f90 and <prefix>.h respectively.

code_extension = 'f90'
dump_f95(routines, f, prefix, header=True, empty=True)[source]

Write the code by calling language specific methods.

The generated file contains all the definitions of the routines in low-level code and refers to the header file if appropriate.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix, used to refer to the proper header file. Only the basename of the prefix is used.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

dump_fns = [<function FCodeGen.dump_f95>, <function FCodeGen.dump_h>]
dump_h(routines, f, prefix, header=True, empty=True)[source]

Writes the interface to a header file.

This file contains all the function declarations.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

get_interface(routine)[source]

Returns a string for the function interface.

The routine should have a single result object, which can be None. If the routine has multiple result objects, a CodeGenError is raised.

See: http://en.wikipedia.org/wiki/Function_prototype

interface_extension = 'h'
class modelparameters.sympy.utilities.codegen.InputArgument(name, datatype=None, dimensions=None, precision=None)[source]

Bases: Argument

class modelparameters.sympy.utilities.codegen.JuliaCodeGen(project='project', printer=None)[source]

Bases: CodeGen

Generator for Julia code.

The .write() method inherited from CodeGen will output a code file <prefix>.jl.

code_extension = 'jl'
dump_fns = [<function JuliaCodeGen.dump_jl>]
dump_jl(routines, f, prefix, header=True, empty=True)[source]

Write the code by calling language specific methods.

The generated file contains all the definitions of the routines in low-level code and refers to the header file if appropriate.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix, used to refer to the proper header file. Only the basename of the prefix is used.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

routine(name, expr, argument_sequence, global_vars)[source]

Specialized Routine creation for Julia.

class modelparameters.sympy.utilities.codegen.OctaveCodeGen(project='project', printer=None)[source]

Bases: CodeGen

Generator for Octave code.

The .write() method inherited from CodeGen will output a code file <prefix>.m.

Octave .m files usually contain one function. That function name should match the filename (prefix). If you pass multiple name_expr pairs, the latter ones are presumed to be private functions accessed by the primary function.

You should only pass inputs to argument_sequence: outputs are ordered according to their order in name_expr.

code_extension = 'm'
dump_fns = [<function OctaveCodeGen.dump_m>]
dump_m(routines, f, prefix, header=True, empty=True, inline=True)[source]

Write the code by calling language specific methods.

The generated file contains all the definitions of the routines in low-level code and refers to the header file if appropriate.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix, used to refer to the proper header file. Only the basename of the prefix is used.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

routine(name, expr, argument_sequence, global_vars)[source]

Specialized Routine creation for Octave.

class modelparameters.sympy.utilities.codegen.Result(expr, name=None, result_var=None, datatype=None, dimensions=None, precision=None)[source]

Bases: Variable, ResultBase

An expression for a return value.

The name result is used to avoid conflicts with the reserved word “return” in the Python language. It is also shorter than ReturnValue.

These may or may not need a name in the destination (e.g., “return(x*y)” might return a value without ever naming it).

class modelparameters.sympy.utilities.codegen.Routine(name, arguments, results, local_vars, global_vars)[source]

Bases: object

Generic description of an evaluation routine for a set of expressions.

A CodeGen class can translate instances of this class into code in a particular language. The routine specification covers all the features present in these languages. The CodeGen part must raise an exception when certain features are not present in the target language. For example, multiple return values are possible in Python, but not in C or Fortran. Another example: Fortran and Python support complex numbers, while C does not.
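The kind of feature check described here can be sketched as follows (hypothetical names; in the real code such checks live inside the language-specific generators, e.g. CCodeGen.get_prototype raises CodeGenError when a routine has multiple result objects):

```python
# Sketch of a language-feature check: a C or Fortran generator must
# reject routines with multiple return values, while a Python target
# could accept them. Hypothetical helper, not the module's actual code.
class CodeGenError(Exception):
    pass

def check_single_result(results, language="C"):
    if language in ("C", "F95") and len(results) > 1:
        raise CodeGenError(
            "%s does not support multiple return values" % language)

check_single_result(["f_result"])  # fine: single result
try:
    check_single_result(["a", "b"])
except CodeGenError as e:
    print(e)  # C does not support multiple return values
```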

property result_variables

Returns a list of OutputArgument, InOutArgument and Result.

If return values are present, they are at the end of the list.

property variables

Returns a set of all variables possibly used in the routine.

For routines with unnamed return values, the dummies that may or may not be used will be included in the set.

class modelparameters.sympy.utilities.codegen.RustCodeGen(project='project', printer=None)[source]

Bases: CodeGen

Generator for Rust code.

The .write() method inherited from CodeGen will output a code file <prefix>.rs

code_extension = 'rs'
dump_fns = [<function RustCodeGen.dump_rs>]
dump_rs(routines, f, prefix, header=True, empty=True)[source]

Write the code by calling language specific methods.

The generated file contains all the definitions of the routines in low-level code and refers to the header file if appropriate.

Parameters:
  • routines (list) – A list of Routine instances.

  • f (file-like) – Where to write the file.

  • prefix (string) – The filename prefix, used to refer to the proper header file. Only the basename of the prefix is used.

  • header (bool, optional) – When True, a header comment is included on top of each source file. [default : True]

  • empty (bool, optional) – When True, empty lines are included to structure the source files. [default : True]

get_prototype(routine)[source]

Returns a string for the function prototype of the routine.

If the routine has multiple result objects, a CodeGenError is raised.

See: http://en.wikipedia.org/wiki/Function_prototype

routine(name, expr, argument_sequence, global_vars)[source]

Specialized Routine creation for Rust.

modelparameters.sympy.utilities.codegen.codegen(name_expr, language=None, prefix=None, project='project', to_files=False, header=True, empty=True, argument_sequence=None, global_vars=None, standard=None, code_gen=None)[source]

Generate source code for expressions in a given language.

Parameters:
  • name_expr (tuple, or list of tuples) – A single (name, expression) tuple or a list of (name, expression) tuples. Each tuple corresponds to a routine. If the expression is an equality (an instance of class Equality) the left hand side is considered an output argument. If expression is an iterable, then the routine will have multiple outputs.

  • language (string,) – A string that indicates the source code language. This is case insensitive. Currently, ‘C’, ‘F95’ and ‘Octave’ are supported. ‘Octave’ generates code compatible with both Octave and Matlab.

  • prefix (string, optional) – A prefix for the names of the files that contain the source code. Language-dependent suffixes will be appended. If omitted, the name of the first name_expr tuple is used.

  • project (string, optional) – A project name, used for making unique preprocessor instructions. [default: “project”]

  • to_files (bool, optional) – When True, the code will be written to one or more files with the given prefix, otherwise strings with the names and contents of these files are returned. [default: False]

  • header (bool, optional) – When True, a header is written on top of each source file. [default: True]

  • empty (bool, optional) – When True, empty lines are used to structure the code. [default: True]

  • argument_sequence (iterable, optional) – Sequence of arguments for the routine in a preferred order. A CodeGenError is raised if required arguments are missing. Redundant arguments are used without warning. If omitted, arguments will be ordered alphabetically, but with all input arguments first, and then output or in-out arguments.

  • global_vars (iterable, optional) – Sequence of global variables used by the routine. Variables listed here will not show up as function arguments.

  • standard (string) –

  • code_gen (CodeGen instance) – An instance of a CodeGen subclass. Overrides language.

Examples

>>> from .codegen import codegen
>>> from ..abc import x, y, z
>>> [(c_name, c_code), (h_name, c_header)] = codegen(
...     ("f", x+y*z), "C89", "test", header=False, empty=False)
>>> print(c_name)
test.c
>>> print(c_code)
#include "test.h"
#include <math.h>
double f(double x, double y, double z) {
   double f_result;
   f_result = x + y*z;
   return f_result;
}

>>> print(h_name)
test.h
>>> print(c_header)
#ifndef PROJECT__TEST__H
#define PROJECT__TEST__H
double f(double x, double y, double z);
#endif
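Judging from the doctest output above (project ‘project’, prefix ‘test’ → PROJECT__TEST__H), the include guard is derived from the project name and the file prefix. A hypothetical reconstruction (the actual sanitization in CCodeGen may differ):

```python
# Hypothetical reconstruction of the include-guard naming seen above;
# the real CCodeGen code may sanitize names differently.
def include_guard(project, prefix):
    return "%s__%s__H" % (project.replace(".", "_").upper(),
                          prefix.replace(".", "_").upper())

print(include_guard("project", "test"))  # PROJECT__TEST__H
```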

Another example using Equality objects to give named outputs. Here the filename (prefix) is taken from the first (name, expr) pair.

>>> from ..abc import f, g
>>> from .. import Eq
>>> [(c_name, c_code), (h_name, c_header)] = codegen(
...      [("myfcn", x + y), ("fcn2", [Eq(f, 2*x), Eq(g, y)])],
...      "C99", header=False, empty=False)
>>> print(c_name)
myfcn.c
>>> print(c_code)
#include "myfcn.h"
#include <math.h>
double myfcn(double x, double y) {
   double myfcn_result;
   myfcn_result = x + y;
   return myfcn_result;
}
void fcn2(double x, double y, double *f, double *g) {
   (*f) = 2*x;
   (*g) = y;
}

If the generated function(s) will be part of a larger project where various global variables have been defined, the ‘global_vars’ option can be used to remove the specified variables from the function signature.

>>> from .codegen import codegen
>>> from ..abc import x, y, z
>>> [(f_name, f_code), header] = codegen(
...     ("f", x+y*z), "F95", header=False, empty=False,
...     argument_sequence=(x, y), global_vars=(z,))
>>> print(f_code)
REAL*8 function f(x, y)
implicit none
REAL*8, intent(in) :: x
REAL*8, intent(in) :: y
f = x + y*z
end function
modelparameters.sympy.utilities.codegen.get_default_datatype(expr)[source]

Derives an appropriate datatype based on the expression.

modelparameters.sympy.utilities.codegen.make_routine(name, expr, argument_sequence=None, global_vars=None, language='F95')[source]

A factory that makes an appropriate Routine from an expression.

Parameters:
  • name (string) – The name of this routine in the generated code.

  • expr (expression or list/tuple of expressions) – A SymPy expression that the Routine instance will represent. If given a list or tuple of expressions, the routine will be considered to have multiple return values and/or output arguments.

  • argument_sequence (list or tuple, optional) – List arguments for the routine in a preferred order. If omitted, the results are language dependent, for example, alphabetical order or in the same order as the given expressions.

  • global_vars (iterable, optional) – Sequence of global variables used by the routine. Variables listed here will not show up as function arguments.

  • language (string, optional) – Specify a target language. The Routine itself should be language-agnostic but the precise way one is created, error checking, etc depend on the language. [default: “F95”].

Notes

A decision about whether to use output arguments or return values is made depending on both the language and the particular mathematical expressions. For an expression of type Equality, the left hand side is typically made into an OutputArgument (or perhaps an InOutArgument if appropriate). Otherwise, the calculated expression is typically made a return value of the routine.

Examples

>>> from .codegen import make_routine
>>> from ..abc import x, y, f, g
>>> from .. import Eq
>>> r = make_routine('test', [Eq(f, 2*x), Eq(g, x + y)])
>>> [arg.result_var for arg in r.results]
[]
>>> [arg.name for arg in r.arguments]
[x, y, f, g]
>>> [arg.name for arg in r.result_variables]
[f, g]
>>> r.local_vars
set()

Another more complicated example with a mixture of specified and automatically-assigned names. Also has Matrix output.

>>> from .. import Matrix
>>> r = make_routine('fcn', [x*y, Eq(f, 1), Eq(g, x + g), Matrix([[x, 2]])])
>>> [arg.result_var for arg in r.results]  
[result_5397460570204848505]
>>> [arg.expr for arg in r.results]
[x*y]
>>> [arg.name for arg in r.arguments]  
[x, y, f, g, out_8598435338387848786]

We can examine the various arguments more closely:

>>> from .codegen import (InputArgument, OutputArgument,
...                                      InOutArgument)
>>> [a.name for a in r.arguments if isinstance(a, InputArgument)]
[x, y]
>>> [a.name for a in r.arguments if isinstance(a, OutputArgument)]  
[f, out_8598435338387848786]
>>> [a.expr for a in r.arguments if isinstance(a, OutputArgument)]
[1, Matrix([[x, 2]])]
>>> [a.name for a in r.arguments if isinstance(a, InOutArgument)]
[g]
>>> [a.expr for a in r.arguments if isinstance(a, InOutArgument)]
[g + x]

modelparameters.sympy.utilities.decorator module

Useful utility decorators.

modelparameters.sympy.utilities.decorator.conserve_mpmath_dps(func)[source]

After the function finishes, resets the value of mpmath.mp.dps to the value it had before the function was run.

modelparameters.sympy.utilities.decorator.doctest_depends_on(exe=None, modules=None, disable_viewers=None)[source]

Adds metadata about the dependencies which need to be met for doctesting the docstrings of the decorated objects.

modelparameters.sympy.utilities.decorator.memoize_property(storage)[source]

Create a property, where the lookup is stored in storage

class modelparameters.sympy.utilities.decorator.no_attrs_in_subclass(cls, f)[source]

Bases: object

Don’t ‘inherit’ certain attributes from a base class

>>> from .decorator import no_attrs_in_subclass
>>> class A(object):
...     x = 'test'
>>> A.x = no_attrs_in_subclass(A, A.x)
>>> class B(A):
...     pass
>>> hasattr(A, 'x')
True
>>> hasattr(B, 'x')
False
modelparameters.sympy.utilities.decorator.public(obj)[source]

Append obj’s name to global __all__ variable (call site).

By using this decorator on functions or classes you achieve the same goal as by filling __all__ variables manually, you just don’t have to repeat yourself (object’s name). You also know if object is public at definition site, not at some random location (where __all__ was set).

Note that in multiple decorator setup (in almost all cases) @public decorator must be applied before any other decorators, because it relies on the pointer to object’s global namespace. If you apply other decorators first, @public may end up modifying the wrong namespace.

Examples

>>> from .decorator import public
>>> __all__
Traceback (most recent call last):
...
NameError: name '__all__' is not defined
>>> @public
... def some_function():
...     pass
>>> __all__
['some_function']
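The behavior shown above can be sketched as a hypothetical re-implementation (not the library's actual code): locate the decorated object's module namespace and append the object's name to `__all__` there.

```python
import sys

def public(obj):
    """Hypothetical sketch of the @public decorator described above."""
    # Functions carry a pointer to their defining module's namespace;
    # classes do not, so fall back to looking the module up by name.
    ns = getattr(obj, '__globals__', None)
    if ns is None:
        ns = vars(sys.modules[obj.__module__])
    ns.setdefault('__all__', []).append(obj.__name__)
    return obj
```

This also illustrates why @public must be applied before other decorators: a wrapper function created by another decorator may carry a different `__globals__` and hence point at the wrong namespace.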
modelparameters.sympy.utilities.decorator.threaded(func)[source]

Apply func to sub-elements of an object, including Add.

This decorator is intended to make it uniformly possible to apply a function to all elements of composite objects, e.g. matrices, lists, tuples and other iterable containers, or just expressions.

This version of threaded() decorator allows threading over elements of Add class. If this behavior is not desirable use xthreaded() decorator.

Functions using this decorator must have the following signature:

@threaded
def function(expr, *args, **kwargs):
modelparameters.sympy.utilities.decorator.threaded_factory(func, use_add)[source]

A factory for threaded decorators.
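A hedged sketch of what such a factory can look like without SymPy types available: this version threads only over lists and tuples, whereas the real factory also handles matrices and, when use_add is true, the terms of an Add.

```python
from functools import wraps

def threaded_factory(func, use_add):
    """Illustrative sketch of a threaded-decorator factory (not the
    library's code): map func over elements of composite containers."""
    @wraps(func)
    def wrapper(expr, *args, **kwargs):
        if isinstance(expr, (list, tuple)):
            # recurse so nested containers are threaded as well
            return type(expr)(wrapper(e, *args, **kwargs) for e in expr)
        # the real factory would also check for Add here when use_add is True
        return func(expr, *args, **kwargs)
    return wrapper
```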

modelparameters.sympy.utilities.decorator.xthreaded(func)[source]

Apply func to sub-elements of an object, excluding Add.

This decorator is intended to make it uniformly possible to apply a function to all elements of composite objects, e.g. matrices, lists, tuples and other iterable containers, or just expressions.

This version of threaded() decorator disallows threading over elements of Add class. If this behavior is not desirable use threaded() decorator.

Functions using this decorator must have the following signature:

@xthreaded
def function(expr, *args, **kwargs):

modelparameters.sympy.utilities.enumerative module

class modelparameters.sympy.utilities.enumerative.MultisetPartitionTraverser[source]

Bases: object

Has methods to enumerate and count the partitions of a multiset.

This implements a refactored and extended version of Knuth’s algorithm 7.1.2.5M [AOCP].

The enumeration methods of this class are generators and return data structures which can be interpreted by the same visitor functions used for the output of multiset_partitions_taocp.

See also

multiset_partitions_taocp, sympy.utilities.iterables.multiset_partitions

Examples

>>> from .enumerative import MultisetPartitionTraverser
>>> m = MultisetPartitionTraverser()
>>> m.count_partitions([4,4,4,2])
127750
>>> m.count_partitions([3,3,3])
686

References

[AOCP] (1,2,3)

Algorithm 7.1.2.5M in Volume 4A, Combinatorial Algorithms, Part 1, of The Art of Computer Programming, by Donald Knuth.

[Factorisatio]

On a Problem of Oppenheim concerning “Factorisatio Numerorum” E. R. Canfield, Paul Erdos, Carl Pomerance, JOURNAL OF NUMBER THEORY, Vol. 17, No. 1. August 1983. See section 7 for a description of an algorithm similar to Knuth’s.

[Yorgey]

Generating Multiset Partitions, Brent Yorgey, The Monad.Reader, Issue 8, September 2007.

count_partitions(multiplicities)[source]

Returns the number of partitions of a multiset whose components have the multiplicities given in multiplicities.

For larger counts, this method is much faster than calling one of the enumerators and counting the result. Uses dynamic programming to cut down on the number of nodes actually explored. The dictionary used in order to accelerate the counting process is stored in the MultisetPartitionTraverser object and persists across calls. If the user does not expect to call count_partitions for any additional multisets, the object should be cleared to save memory. On the other hand, the cache built up from one count run can significantly speed up subsequent calls to count_partitions, so it may be advantageous not to clear the object.

Examples

>>> from .enumerative import MultisetPartitionTraverser
>>> m = MultisetPartitionTraverser()
>>> m.count_partitions([9,8,2])
288716
>>> m.count_partitions([2,2])
9
>>> del m

Notes

If one looks at the workings of Knuth’s algorithm M [AOCP], it can be viewed as a traversal of a binary tree of parts. A part has (up to) two children, the left child resulting from the spread operation, and the right child from the decrement operation. The ordinary enumeration of multiset partitions is an in-order traversal of this tree, with the partitions corresponding to paths from the root to the leaves. The mapping from paths to partitions is a little complicated, since the partition contains only those parts which are leaves or the parents of a spread link, not those which are parents of a decrement link.

For counting purposes, it is sufficient to count leaves, and this can be done with a recursive in-order traversal. The number of leaves of a subtree rooted at a particular part is a function only of that part itself, so memoizing has the potential to speed up the counting dramatically.

This method follows a computational approach which is similar to the hypothetical memoized recursive function, but with two differences:

  1. This method is iterative, borrowing its structure from the other enumerations and maintaining an explicit stack of parts which are in the process of being counted. (There may be multisets which can be counted reasonably quickly by this implementation, but which would overflow the default Python recursion limit with a recursive implementation.)

  2. Instead of using the part data structure directly, a more compact key is constructed. This saves space, but more importantly coalesces some parts which would remain separate with physical keys.

Unlike the enumeration functions, there is currently no _range version of count_partitions. If someone wants to stretch their brain, it should be possible to construct one by memoizing with a histogram of counts rather than a single count, and combining the histograms.
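The memoization idea can be illustrated on the simpler problem of counting ordinary integer partitions, where the count for a pair (n, largest-part bound k) depends only on that pair and is therefore a natural cache key. This is an analogy only, not the part_key scheme itself.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_parts(n, k):
    """Number of partitions of n with every part <= k, memoized on (n, k)."""
    if n == 0:
        return 1          # the empty partition
    if k == 0:
        return 0          # positive n but no parts allowed
    # partitions either use a part of size k, or are bounded by k - 1
    with_k = count_parts(n - k, k) if k <= n else 0
    return count_parts(n, k - 1) + with_k
```

As in count_partitions, the cache persists across calls and can be cleared (here via `count_parts.cache_clear()`) when no further counts are needed.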

count_partitions_slow(multiplicities)[source]

Returns the number of partitions of a multiset whose elements have the multiplicities given in multiplicities.

Primarily for comparison purposes. It follows the same path as enumerate, and counts, rather than generates, the partitions.

See also

count_partitions

Has the same calling interface, but is much faster.

db_trace(msg)[source]

Useful for understanding/debugging the algorithms. Not generally activated in end-user code.

decrement_part(part)[source]

Decrements part (a subrange of pstack), if possible, returning True iff the part was successfully decremented.

If you think of the v values in the part as a multi-digit integer (least significant digit on the right) this is basically decrementing that integer, but with the extra constraint that the leftmost digit cannot be decremented to 0.

Parameters:

part – The part, represented as a list of PartComponent objects, which is to be decremented.

decrement_part_large(part, amt, lb)[source]

Decrements part, while respecting size constraint.

A part can have no children which are of sufficient size (as indicated by lb) unless that part has sufficient unallocated multiplicity. When enforcing the size constraint, this method will decrement the part (if necessary) by an amount needed to ensure sufficient unallocated multiplicity.

Returns True iff the part was successfully decremented.

Parameters:
  • part – part to be decremented (topmost part on the stack)

  • amt – Can only take values 0 or 1. A value of 1 means that the part must be decremented, and then the size constraint is enforced. A value of 0 means just to enforce the lb size constraint.

  • lb – The partitions produced by the calling enumeration must have more parts than this value.

decrement_part_range(part, lb, ub)[source]

Decrements part (a subrange of pstack), if possible, returning True iff the part was successfully decremented.

Parameters:
  • part – part to be decremented (topmost part on the stack)

  • ub – the maximum number of parts allowed in a partition returned by the calling traversal.

  • lb – The partitions produced by the calling enumeration must have more parts than this value.

Notes

Combines the constraints of the _small and _large decrement methods. If it returns success, part has been decremented at least once, but perhaps by quite a bit more if needed to meet the lb constraint.

decrement_part_small(part, ub)[source]

Decrements part (a subrange of pstack), if possible, returning True iff the part was successfully decremented.

Parameters:
  • part – part to be decremented (topmost part on the stack)

  • ub – the maximum number of parts allowed in a partition returned by the calling traversal.

Notes

The goal of this modification of the ordinary decrement method is to fail (meaning that the subtree rooted at this part is to be skipped) when it can be proved that this part can only have child partitions which are larger than allowed by ub. If a decision is made to fail, it must be accurate, otherwise the enumeration will miss some partitions. But, it is OK not to capture all the possible failures – if a part is passed that shouldn’t be, the resulting too-large partitions are filtered by the enumeration one level up. However, as is usual in constrained enumerations, failing early is advantageous.

The tests used by this method catch the most common cases, although this implementation is by no means the last word on this problem. The tests include:

  1. lpart must be less than ub by at least 2. This is because once a part has been decremented, the partition will gain at least one child in the spread step.

  2. If the leading component of the part is about to be decremented, check for how many parts will be added in order to use up the unallocated multiplicity in that leading component, and fail if this number is greater than allowed by ub. (See code for the exact expression.) This test is given in the answer to Knuth’s problem 7.2.1.5.69.

  3. If there is exactly enough room to expand the leading component by the above test, check the next component (if it exists) once decrementing has finished. If this has v == 0, this next component will push the expansion over the limit by 1, so fail.

enum_all(multiplicities)[source]

Enumerate the partitions of a multiset.

Examples

>>> from .enumerative import list_visitor
>>> from .enumerative import MultisetPartitionTraverser
>>> m = MultisetPartitionTraverser()
>>> states = m.enum_all([2,2])
>>> list(list_visitor(state, 'ab') for state in states)
[[['a', 'a', 'b', 'b']],
[['a', 'a', 'b'], ['b']],
[['a', 'a'], ['b', 'b']],
[['a', 'a'], ['b'], ['b']],
[['a', 'b', 'b'], ['a']],
[['a', 'b'], ['a', 'b']],
[['a', 'b'], ['a'], ['b']],
[['a'], ['a'], ['b', 'b']],
[['a'], ['a'], ['b'], ['b']]]

See also

multiset_partitions_taocp

which provides the same result as this method, but is about twice as fast. Hence, enum_all is primarily useful for testing. Also see the function for a discussion of states and visitors.

enum_large(multiplicities, lb)[source]

Enumerate the partitions of a multiset with lb < num(parts)

Equivalent to enum_range(multiplicities, lb, sum(multiplicities))

Parameters:
  • multiplicities – list of multiplicities of the components of the multiset.

  • lb – Number of parts in the partition must be greater than this lower bound.

Examples

>>> from .enumerative import list_visitor
>>> from .enumerative import MultisetPartitionTraverser
>>> m = MultisetPartitionTraverser()
>>> states = m.enum_large([2,2], 2)
>>> list(list_visitor(state, 'ab') for state in states)
[[['a', 'a'], ['b'], ['b']],
[['a', 'b'], ['a'], ['b']],
[['a'], ['a'], ['b', 'b']],
[['a'], ['a'], ['b'], ['b']]]
enum_range(multiplicities, lb, ub)[source]

Enumerate the partitions of a multiset with lb < num(parts) <= ub.

In particular, if partitions with exactly k parts are desired, call with (multiplicities, k - 1, k). This method generalizes enum_all, enum_small, and enum_large.

Examples

>>> from .enumerative import list_visitor
>>> from .enumerative import MultisetPartitionTraverser
>>> m = MultisetPartitionTraverser()
>>> states = m.enum_range([2,2], 1, 2)
>>> list(list_visitor(state, 'ab') for state in states)
[[['a', 'a', 'b'], ['b']],
[['a', 'a'], ['b', 'b']],
[['a', 'b', 'b'], ['a']],
[['a', 'b'], ['a', 'b']]]
enum_small(multiplicities, ub)[source]

Enumerate multiset partitions with no more than ub parts.

Equivalent to enum_range(multiplicities, 0, ub)

Parameters:
  • multiplicities – list of multiplicities of the components of the multiset.

  • ub – Maximum number of parts

Examples

>>> from .enumerative import list_visitor
>>> from .enumerative import MultisetPartitionTraverser
>>> m = MultisetPartitionTraverser()
>>> states = m.enum_small([2,2], 2)
>>> list(list_visitor(state, 'ab') for state in states)
[[['a', 'a', 'b', 'b']],
[['a', 'a', 'b'], ['b']],
[['a', 'a'], ['b', 'b']],
[['a', 'b', 'b'], ['a']],
[['a', 'b'], ['a', 'b']]]

The implementation is based, in part, on the answer given to exercise 69, in Knuth [AOCP].

spread_part_multiplicity()[source]

Returns True if a new part has been created, and adjusts pstack, f and lpart as needed.

Notes

Spreads unallocated multiplicity from the current top part into a new part created above the current on the stack. This new part is constrained to be less than or equal to the old in terms of the part ordering.

This call does nothing (and returns False) if the current top part has no unallocated multiplicity.

top_part()[source]

Return current top part on the stack, as a slice of pstack.

class modelparameters.sympy.utilities.enumerative.PartComponent[source]

Bases: object

Internal class used in support of the multiset partitions enumerators and the associated visitor functions.

Represents one component of one part of the current partition.

A stack of these, plus an auxiliary frame array, f, represents a partition of the multiset.

Knuth’s pseudocode makes c, u, and v separate arrays.

c
u
v
modelparameters.sympy.utilities.enumerative.factoring_visitor(state, primes)[source]

Use with multiset_partitions_taocp to enumerate the ways a number can be expressed as a product of factors. For this usage, the exponents of the prime factors of a number are arguments to the partition enumerator, while the corresponding prime factors are input here.

Examples

To enumerate the factorings of a number we can think of the elements of the partition as being the prime factors and the multiplicities as being their exponents.

>>> from .enumerative import factoring_visitor
>>> from .enumerative import multiset_partitions_taocp
>>> from .. import factorint
>>> primes, multiplicities = zip(*factorint(24).items())
>>> primes
(2, 3)
>>> multiplicities
(3, 1)
>>> states = multiset_partitions_taocp(multiplicities)
>>> list(factoring_visitor(state, primes) for state in states)
[[24], [8, 3], [12, 2], [4, 6], [4, 2, 3], [6, 2, 2], [2, 2, 2, 3]]
modelparameters.sympy.utilities.enumerative.list_visitor(state, components)[source]

Return a list of lists to represent the partition.

Examples

>>> from .enumerative import list_visitor
>>> from .enumerative import multiset_partitions_taocp
>>> states = multiset_partitions_taocp([1, 2, 1])
>>> s = next(states)
>>> list_visitor(s, 'abc')  # for multiset 'a b b c'
[['a', 'b', 'b', 'c']]
>>> s = next(states)
>>> list_visitor(s, [1, 2, 3])  # for multiset '1 2 2 3'
[[1, 2, 2], [3]]
modelparameters.sympy.utilities.enumerative.multiset_partitions_taocp(multiplicities)[source]

Enumerates partitions of a multiset.

Parameters:

multiplicities – list of integer multiplicities of the components of the multiset.

Yields:

state – Internal data structure which encodes a particular partition. This output is then usually processed by a visitor function which combines the information from this data structure with the components themselves to produce an actual partition.

Unless they wish to create their own visitor function, users will have little need to look inside this data structure. But, for reference, it is a 3-element list with components:

f

is a frame array, which is used to divide pstack into parts.

lpart

points to the base of the topmost part.

pstack

is an array of PartComponent objects.

The state output offers a peek into the internal data structures of the enumeration function. The client should treat this as read-only; any modification of the data structure will cause unpredictable (and almost certainly incorrect) results. Also, the components of state are modified in place at each iteration. Hence, the visitor must be called at each loop iteration. Accumulating the state instances and processing them later will not work.

Examples

>>> from .enumerative import list_visitor
>>> from .enumerative import multiset_partitions_taocp
>>> # variables components and multiplicities represent the multiset 'abb'
>>> components = 'ab'
>>> multiplicities = [1, 2]
>>> states = multiset_partitions_taocp(multiplicities)
>>> list(list_visitor(state, components) for state in states)
[[['a', 'b', 'b']],
[['a', 'b'], ['b']],
[['a'], ['b', 'b']],
[['a'], ['b'], ['b']]]

See also

sympy.utilities.iterables.multiset_partitions

Takes a multiset as input and directly yields multiset partitions. It dispatches to a number of functions, including this one, for implementation. Most users will find it more convenient to use than multiset_partitions_taocp.

modelparameters.sympy.utilities.enumerative.part_key(part)[source]

Helper for MultisetPartitionTraverser.count_partitions that creates a key for part that only includes information which can affect the count for that part. (Any irrelevant information just reduces the effectiveness of dynamic programming.)

Notes

This member function is a candidate for future exploration. There are likely symmetries that can be exploited to coalesce some part_key values, and thereby save space and improve performance.

modelparameters.sympy.utilities.exceptions module

General SymPy exceptions and warnings.

exception modelparameters.sympy.utilities.exceptions.SymPyDeprecationWarning(value=None, feature=None, last_supported_version=None, useinstead=None, issue=None, deprecated_since_version=None)[source]

Bases: DeprecationWarning

A warning for deprecated features of SymPy.

This class is expected to be used with the warnings.warn function (note that one has to explicitly turn on deprecation warnings):

>>> import warnings
>>> from .exceptions import SymPyDeprecationWarning
>>> warnings.simplefilter(
...     "always", SymPyDeprecationWarning)
>>> warnings.warn(
...     SymPyDeprecationWarning(feature="Old deprecated thing",
...     issue=1065, deprecated_since_version="1.0")) 
__main__:3: SymPyDeprecationWarning:

Old deprecated thing has been deprecated since SymPy 1.0. See https://github.com/sympy/sympy/issues/1065 for more info.

>>> SymPyDeprecationWarning(feature="Old deprecated thing",
... issue=1065, deprecated_since_version="1.1").warn() 
__main__:1: SymPyDeprecationWarning:

Old deprecated thing has been deprecated since SymPy 1.1. See https://github.com/sympy/sympy/issues/1065 for more info.

Three arguments to this class are required: feature, issue and deprecated_since_version.

The issue flag should be an integer referencing a “Deprecation Removal” issue in the SymPy issue tracker. See https://github.com/sympy/sympy/wiki/Deprecating-policy.

>>> SymPyDeprecationWarning(
...    feature="Old feature",
...    useinstead="new feature",
...    issue=5241,
...    deprecated_since_version="1.1")
Old feature has been deprecated since SymPy 1.1. Use new feature
instead. See https://github.com/sympy/sympy/issues/5241 for more info.

Every formal deprecation should have an associated issue in the GitHub issue tracker. All such issues should have the DeprecationRemoval tag.

Additionally, each formal deprecation should mark the first release for which it was deprecated. Use the deprecated_since_version flag for this.

>>> SymPyDeprecationWarning(
...    feature="Old feature",
...    useinstead="new feature",
...    deprecated_since_version="0.7.2",
...    issue=1065)
Old feature has been deprecated since SymPy 0.7.2. Use new feature
instead. See https://github.com/sympy/sympy/issues/1065 for more info.

To provide additional information, create an instance of this class in this way:

>>> SymPyDeprecationWarning(
...     feature="Such and such",
...     last_supported_version="1.2.3",
...     useinstead="this other feature",
...     issue=1065,
...     deprecated_since_version="1.1")
Such and such has been deprecated since SymPy 1.1. It will be last
supported in SymPy version 1.2.3. Use this other feature instead. See
https://github.com/sympy/sympy/issues/1065 for more info.

Note that the text in feature begins a sentence, so if it begins with a plain English word, the first letter of that word should be capitalized.

Either (or both) of the arguments last_supported_version and useinstead can be omitted. In this case the corresponding sentence will not be shown:

>>> SymPyDeprecationWarning(feature="Such and such",
...     useinstead="this other feature", issue=1065,
...     deprecated_since_version="1.1")
Such and such has been deprecated since SymPy 1.1. Use this other
feature instead. See https://github.com/sympy/sympy/issues/1065 for
more info.

You can still provide the argument value. If it is a string, it will be appended to the end of the message:

>>> SymPyDeprecationWarning(
...     feature="Such and such",
...     useinstead="this other feature",
...     value="Contact the developers for further information.",
...     issue=1065,
...     deprecated_since_version="1.1")
Such and such has been deprecated since SymPy 1.1. Use this other
feature instead. See https://github.com/sympy/sympy/issues/1065 for
more info.  Contact the developers for further information.

If, however, the argument value does not hold a string, a string representation of the object will be appended to the message:

>>> SymPyDeprecationWarning(
...     feature="Such and such",
...     useinstead="this other feature",
...     value=[1,2,3],
...     issue=1065,
...     deprecated_since_version="1.1")
Such and such has been deprecated since SymPy 1.1. Use this other
feature instead. See https://github.com/sympy/sympy/issues/1065 for
more info.  ([1, 2, 3])

Note that it may be necessary to go back through all the deprecations before a release to make sure that the version number is correct. So just use what you believe will be the next release number (this usually means bumping the minor number by one).

To mark a function as deprecated, you can use the decorator @deprecated.

See also

sympy.core.decorators.deprecated

warn(stacklevel=2)[source]

modelparameters.sympy.utilities.iterables module

modelparameters.sympy.utilities.iterables.binary_partitions(n)[source]

Generates the binary partitions of n.

A binary partition consists only of numbers that are powers of two. Each step reduces a 2**(k+1) to 2**k and 2**k. Thus 16 is converted to 8 and 8.

Reference: TAOCP 4, section 7.2.1.5, problem 64

Examples

>>> from .iterables import binary_partitions
>>> for i in binary_partitions(5):
...     print(i)
...
[4, 1]
[2, 2, 1]
[2, 1, 1, 1]
[1, 1, 1, 1, 1]
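The splitting rule above can be sketched as a recursive generator (an illustrative re-implementation, not the library's code). It emits partitions with the largest power-of-two parts first, matching the order in the doctest.

```python
def binary_partitions(n, largest=None):
    """Yield partitions of n into powers of two, parts nonincreasing."""
    if largest is None:
        # largest power of two not exceeding n
        largest = 1
        while largest * 2 <= n:
            largest *= 2
    if n == 0:
        yield []
        return
    p = largest
    while p >= 1:
        if p <= n:
            # use a part of size p, then partition the remainder with
            # parts no larger than p (keeps output nonincreasing)
            for rest in binary_partitions(n - p, p):
                yield [p] + rest
        p //= 2
```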
modelparameters.sympy.utilities.iterables.bracelets(n, k)[source]

Wrapper to necklaces to return a free (unrestricted) necklace.

modelparameters.sympy.utilities.iterables.capture(func)[source]

Return the printed output of func().

func should be a function without arguments that produces output with print statements.

>>> from .iterables import capture
>>> from .. import pprint
>>> from ..abc import x
>>> def foo():
...     print('hello world!')
...
>>> 'hello' in capture(foo) # foo, not foo()
True
>>> capture(lambda: pprint(2/x))
'2\n-\nx\n'
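A minimal sketch of what capture() does, using only the standard library (not the library's actual implementation): run the function with stdout redirected into a buffer and return the buffer's contents.

```python
import io
from contextlib import redirect_stdout

def capture(func):
    """Return whatever func() prints to stdout, as a string."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        func()
    return buf.getvalue()
```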
modelparameters.sympy.utilities.iterables.common_prefix(*seqs)[source]

Return the subsequence that is a common start of sequences in seqs.

>>> from .iterables import common_prefix
>>> common_prefix(list(range(3)))
[0, 1, 2]
>>> common_prefix(list(range(3)), list(range(4)))
[0, 1, 2]
>>> common_prefix([1, 2, 3], [1, 2, 5])
[1, 2]
>>> common_prefix([1, 2, 3], [1, 3, 5])
[1]
modelparameters.sympy.utilities.iterables.common_suffix(*seqs)[source]

Return the subsequence that is a common ending of sequences in seqs.

>>> from .iterables import common_suffix
>>> common_suffix(list(range(3)))
[0, 1, 2]
>>> common_suffix(list(range(3)), list(range(4)))
[]
>>> common_suffix([1, 2, 3], [9, 2, 3])
[2, 3]
>>> common_suffix([1, 2, 3], [9, 7, 3])
[3]
modelparameters.sympy.utilities.iterables.dict_merge(*dicts)[source]

Merge dictionaries into a single dictionary.
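A minimal sketch of the merge, assuming later dictionaries take precedence on key collisions (the docstring does not state the precedence, so that is an assumption):

```python
def dict_merge(*dicts):
    """Merge any number of dictionaries into a single new dictionary."""
    merged = {}
    for d in dicts:
        merged.update(d)  # assumption: later dicts override earlier keys
    return merged
```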

modelparameters.sympy.utilities.iterables.filter_symbols(iterator, exclude)[source]

Only yield elements from iterator that do not occur in exclude.

Parameters:
  • iterator (iterable) – iterator to take elements from

  • exclude (iterable) – elements to exclude

Returns:

iterator (iterable) – filtered iterator

modelparameters.sympy.utilities.iterables.flatten(iterable, levels=None, cls=None)[source]

Recursively denest iterable containers.

>>> from .iterables import flatten
>>> flatten([1, 2, 3])
[1, 2, 3]
>>> flatten([1, 2, [3]])
[1, 2, 3]
>>> flatten([1, [2, 3], [4, 5]])
[1, 2, 3, 4, 5]
>>> flatten([1.0, 2, (1, None)])
[1.0, 2, 1, None]

If you want to denest only a specified number of levels of nested containers, then set levels flag to the desired number of levels:

>>> ls = [[(-2, -1), (1, 2)], [(0, 0)]]
>>> flatten(ls, levels=1)
[(-2, -1), (1, 2), (0, 0)]

If cls argument is specified, it will only flatten instances of that class, for example:

>>> from ..core import Basic
>>> class MyOp(Basic):
...     pass
...
>>> flatten([MyOp(1, MyOp(2, 3))], cls=MyOp)
[1, 2, 3]

adapted from http://kogs-www.informatik.uni-hamburg.de/~meine/python_tricks

modelparameters.sympy.utilities.iterables.generate_bell(n)[source]

Return permutations of [0, 1, …, n - 1] such that each permutation differs from the last by the exchange of a single pair of neighbors. The n! permutations are returned as an iterator. In order to obtain the next permutation from a random starting permutation, use the next_trotterjohnson method of the Permutation class (which generates the same sequence in a different manner).

Examples

>>> from itertools import permutations
>>> from .iterables import generate_bell
>>> from .. import zeros, Matrix

This is the sort of permutation used in the ringing of physical bells, and does not produce permutations in lexicographical order. Rather, the permutations differ from each other by exactly one inversion, and the position at which the swapping occurs varies periodically in a simple fashion. Consider the first few permutations of 4 elements generated by permutations and generate_bell:

>>> list(permutations(range(4)))[:5]
[(0, 1, 2, 3), (0, 1, 3, 2), (0, 2, 1, 3), (0, 2, 3, 1), (0, 3, 1, 2)]
>>> list(generate_bell(4))[:5]
[(0, 1, 2, 3), (0, 1, 3, 2), (0, 3, 1, 2), (3, 0, 1, 2), (3, 0, 2, 1)]

Notice how the 2nd and 3rd lexicographical permutations have 3 elements out of place whereas each “bell” permutation always has only two elements out of place relative to the previous permutation (and so the signature (+/-1) of a permutation is opposite of the signature of the previous permutation).

How the position of inversion varies across the elements can be seen by tracing out where the largest number appears in the permutations:

>>> m = zeros(4, 24)
>>> for i, p in enumerate(generate_bell(4)):
...     m[:, i] = Matrix([j - 3 for j in list(p)])  # make largest zero
>>> m.print_nonzero('X')
[XXX  XXXXXX  XXXXXX  XXX]
[XX XX XXXX XX XXXX XX XX]
[X XXXX XX XXXX XX XXXX X]
[ XXXXXX  XXXXXX  XXXXXX ]
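The plain-changes pattern traced out above can be produced by a Steinhaus-Johnson-Trotter style generator. The following is an illustrative re-implementation, not the library's code: at each step the largest "mobile" element swaps with its neighbor, so consecutive permutations differ by exactly one adjacent transposition.

```python
def generate_bell(n):
    """Yield all n! permutations of range(n) as plain changes."""
    perm = list(range(n))
    direction = [-1] * n          # every element starts looking left
    yield tuple(perm)
    while True:
        # find the largest mobile element: one whose neighbor in its
        # current direction is smaller than itself
        mobile, idx = -1, -1
        for i, x in enumerate(perm):
            j = i + direction[x]
            if 0 <= j < n and perm[j] < x and x > mobile:
                mobile, idx = x, i
        if mobile < 0:
            return                # no mobile element: all permutations seen
        j = idx + direction[mobile]
        perm[idx], perm[j] = perm[j], perm[idx]
        # every element larger than the one just moved reverses direction
        for x in range(mobile + 1, n):
            direction[x] = -direction[x]
        yield tuple(perm)
```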

See also

sympy.combinatorics.Permutation.next_trotterjohnson

modelparameters.sympy.utilities.iterables.generate_derangements(perm)[source]

Routine to generate unique derangements.

TODO: This will be rewritten to use the ECO operator approach once the permutations branch is in master.

Examples

>>> from .iterables import generate_derangements
>>> list(generate_derangements([0, 1, 2]))
[[1, 2, 0], [2, 0, 1]]
>>> list(generate_derangements([0, 1, 2, 3]))
[[1, 0, 3, 2], [1, 2, 3, 0], [1, 3, 0, 2], [2, 0, 3, 1],
 [2, 3, 0, 1], [2, 3, 1, 0], [3, 0, 1, 2], [3, 2, 0, 1],
 [3, 2, 1, 0]]
>>> list(generate_derangements([0, 1, 1]))
[]
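A brute-force sketch of the same behavior (illustrative only; the library's routine is more sophisticated): filter the unique permutations that leave no element in its original position.

```python
from itertools import permutations

def generate_derangements(perm):
    """Yield unique derangements of perm, in the order permutations
    produces them."""
    seen = set()
    for p in permutations(perm):
        if p not in seen:
            seen.add(p)
            # keep p only if every position changed
            if all(a != b for a, b in zip(p, perm)):
                yield list(p)
```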

See also

sympy.functions.combinatorial.factorials.subfactorial

modelparameters.sympy.utilities.iterables.generate_involutions(n)[source]

Generates involutions.

An involution is a permutation that when multiplied by itself equals the identity permutation. In this implementation the involutions are generated using fixed points.

Alternatively, an involution can be described as a permutation that contains no cycles of length greater than two.

Reference: http://mathworld.wolfram.com/PermutationInvolution.html

Examples

>>> from .iterables import generate_involutions
>>> list(generate_involutions(3))
[(0, 1, 2), (0, 2, 1), (1, 0, 2), (2, 1, 0)]
>>> len(list(generate_involutions(4)))
10
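The defining property can be checked directly: a tuple p (read as a map i -> p[i]) is an involution exactly when p[p[i]] == i for every i. A brute-force cross-check, not the fixed-point construction used by the generator:

```python
from itertools import permutations

def is_involution(p):
    """True if applying the permutation p (tuple of images) twice gives the identity."""
    return all(p[p[i]] == i for i in range(len(p)))

invs = [p for p in permutations(range(3)) if is_involution(p)]
print(invs)  # [(0, 1, 2), (0, 2, 1), (1, 0, 2), (2, 1, 0)]
print(len([p for p in permutations(range(4)) if is_involution(p)]))  # 10
```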
modelparameters.sympy.utilities.iterables.generate_oriented_forest(n)[source]

This algorithm generates oriented forests.

An oriented graph is a directed graph having no symmetric pair of directed edges. A forest is an acyclic graph, i.e., it has no cycles. A forest can also be described as a disjoint union of trees, which are graphs in which any two vertices are connected by exactly one simple path.

Reference: [1] T. Beyer and S.M. Hedetniemi: constant time generation of rooted trees, SIAM J. Computing Vol. 9, No. 4, November 1980 [2] http://stackoverflow.com/questions/1633833/oriented-forest-taocp-algorithm-in-python

Examples

>>> from .iterables import generate_oriented_forest
>>> list(generate_oriented_forest(4))
[[0, 1, 2, 3], [0, 1, 2, 2], [0, 1, 2, 1], [0, 1, 2, 0],     [0, 1, 1, 1], [0, 1, 1, 0], [0, 1, 0, 1], [0, 1, 0, 0], [0, 0, 0, 0]]
modelparameters.sympy.utilities.iterables.group(seq, multiple=True)[source]

Splits a sequence into a list of lists of equal, adjacent elements.

Examples

>>> from .iterables import group
>>> group([1, 1, 1, 2, 2, 3])
[[1, 1, 1], [2, 2], [3]]
>>> group([1, 1, 1, 2, 2, 3], multiple=False)
[(1, 3), (2, 2), (3, 1)]
>>> group([1, 1, 3, 2, 2, 1], multiple=False)
[(1, 2), (3, 1), (2, 2), (1, 1)]
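Both forms correspond directly to itertools.groupby; a sketch of the same behavior:

```python
from itertools import groupby

def group_sketch(seq, multiple=True):
    """Group equal adjacent elements; multiple=False gives (element, count) pairs."""
    if multiple:
        return [list(g) for _, g in groupby(seq)]
    return [(k, len(list(g))) for k, g in groupby(seq)]

print(group_sketch([1, 1, 1, 2, 2, 3]))                  # [[1, 1, 1], [2, 2], [3]]
print(group_sketch([1, 1, 3, 2, 2, 1], multiple=False))  # [(1, 2), (3, 1), (2, 2), (1, 1)]
```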

See also

multiset

modelparameters.sympy.utilities.iterables.has_dups(seq)[source]

Return True if there are any duplicate elements in seq.

Examples

>>> from .iterables import has_dups
>>> from .. import Dict, Set
>>> has_dups((1, 2, 1))
True
>>> has_dups(range(3))
False
>>> all(has_dups(c) is False for c in (set(), Set(), dict(), Dict()))
True
modelparameters.sympy.utilities.iterables.has_variety(seq)[source]

Return True if there are any different elements in seq.

Examples

>>> from .iterables import has_variety
>>> has_variety((1, 2, 1))
True
>>> has_variety((1, 1, 1))
False
modelparameters.sympy.utilities.iterables.ibin(n, bits=0, str=False)[source]

Return a list of length bits corresponding to the binary value of n with small bits to the right (last). If bits is omitted, the length will be the number required to represent n. If the bits are desired in reversed order, use the [::-1] slice of the returned list.

If a sequence of all bits-length lists, starting from [0, 0, ..., 0] through [1, 1, ..., 1], is desired, pass a non-integer for bits, e.g. 'all'.

If the bit string is desired pass str=True.

Examples

>>> from .iterables import ibin
>>> ibin(2)
[1, 0]
>>> ibin(2, 4)
[0, 0, 1, 0]
>>> ibin(2, 4)[::-1]
[0, 1, 0, 0]

If all lists corresponding to 0 through 2**n - 1 are desired, pass a non-integer for bits:

>>> bits = 2
>>> for i in ibin(2, 'all'):
...     print(i)
(0, 0)
(0, 1)
(1, 0)
(1, 1)

If a bit string is desired of a given length, use str=True:

>>> n = 123
>>> bits = 10
>>> ibin(n, bits, str=True)
'0001111011'
>>> ibin(n, bits, str=True)[::-1]  # small bits left
'1101111000'
>>> list(ibin(3, 'all', str=True))
['000', '001', '010', '011', '100', '101', '110', '111']
modelparameters.sympy.utilities.iterables.interactive_traversal(expr)[source]

Traverse a tree asking a user which branch to choose.

modelparameters.sympy.utilities.iterables.kbins(l, k, ordered=None)[source]

Return sequence l partitioned into k bins.

Examples

>>> from .iterables import kbins

The default is to give the items in the same order, but grouped into k partitions without any reordering:

>>> from __future__ import print_function
>>> for p in kbins(list(range(5)), 2):
...     print(p)
...
[[0], [1, 2, 3, 4]]
[[0, 1], [2, 3, 4]]
[[0, 1, 2], [3, 4]]
[[0, 1, 2, 3], [4]]

The ordered flag is either None (to give the simple partition of the elements) or a 2-digit integer indicating whether the order of the bins and the order of the items in the bins matter. Given:

A = [[0], [1, 2]]
B = [[1, 2], [0]]
C = [[2, 1], [0]]
D = [[0], [2, 1]]

the following values for ordered have the shown meanings:

00 means A == B == C == D
01 means A == B
10 means A == D
11 means A == A
>>> for ordered in [None, 0, 1, 10, 11]:
...     print('ordered = %s' % ordered)
...     for p in kbins(list(range(3)), 2, ordered=ordered):
...         print('     %s' % p)
...
ordered = None
     [[0], [1, 2]]
     [[0, 1], [2]]
ordered = 0
     [[0, 1], [2]]
     [[0, 2], [1]]
     [[0], [1, 2]]
ordered = 1
     [[0], [1, 2]]
     [[0], [2, 1]]
     [[1], [0, 2]]
     [[1], [2, 0]]
     [[2], [0, 1]]
     [[2], [1, 0]]
ordered = 10
     [[0, 1], [2]]
     [[2], [0, 1]]
     [[0, 2], [1]]
     [[1], [0, 2]]
     [[0], [1, 2]]
     [[1, 2], [0]]
ordered = 11
     [[0], [1, 2]]
     [[0, 1], [2]]
     [[0], [2, 1]]
     [[0, 2], [1]]
     [[1], [0, 2]]
     [[1, 0], [2]]
     [[1], [2, 0]]
     [[1, 2], [0]]
     [[2], [0, 1]]
     [[2, 0], [1]]
     [[2], [1, 0]]
     [[2, 1], [0]]
modelparameters.sympy.utilities.iterables.minlex(seq, directed=True, is_set=False, small=None)[source]

Return a tuple where the smallest element appears first; if directed is True (default) then the order is preserved, otherwise the sequence will be reversed if that gives a smaller ordering.

If every element appears only once then is_set can be set to True for more efficient processing.

If the smallest element is known at the time of calling, it can be passed and the calculation of the smallest element will be omitted.

Examples

>>> from ..combinatorics.polyhedron import minlex
>>> minlex((1, 2, 0))
(0, 1, 2)
>>> minlex((1, 0, 2))
(0, 2, 1)
>>> minlex((1, 0, 2), directed=False)
(0, 1, 2)
>>> minlex('11010011000', directed=True)
'00011010011'
>>> minlex('11010011000', directed=False)
'00011001011'
modelparameters.sympy.utilities.iterables.multiset(seq)[source]

Return the hashable sequence in multiset form with values being the multiplicity of the item in the sequence.

Examples

>>> from .iterables import multiset
>>> multiset('mississippi')
{'i': 4, 'm': 1, 'p': 2, 's': 4}
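This is essentially collections.Counter restricted to hashable sequences; a sketch of equivalent behavior:

```python
from collections import Counter

def multiset_sketch(seq):
    """Map each item of a hashable sequence to its multiplicity."""
    return dict(Counter(seq))

# sort the items only so the printed form matches the sorted docstring output
print(dict(sorted(multiset_sketch('mississippi').items())))  # {'i': 4, 'm': 1, 'p': 2, 's': 4}
```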

See also

group

modelparameters.sympy.utilities.iterables.multiset_combinations(m, n, g=None)[source]

Return the unique combinations of size n from multiset m.

Examples

>>> from .iterables import multiset_combinations
>>> from itertools import combinations
>>> [''.join(i) for i in  multiset_combinations('baby', 3)]
['abb', 'aby', 'bby']
>>> def count(f, s): return len(list(f(s, 3)))

The number of combinations depends on the number of letters; the number of unique combinations depends on how the letters are repeated.

>>> s1 = 'abracadabra'
>>> s2 = 'banana tree'
>>> count(combinations, s1), count(multiset_combinations, s1)
(165, 23)
>>> count(combinations, s2), count(multiset_combinations, s2)
(165, 54)
modelparameters.sympy.utilities.iterables.multiset_partitions(multiset, m=None)[source]

Return unique partitions of the given multiset (in list form). If m is None, all multisets will be returned, otherwise only partitions with m parts will be returned.

If multiset is an integer, a range [0, 1, …, multiset - 1] will be supplied.

Examples

>>> from .iterables import multiset_partitions
>>> list(multiset_partitions([1, 2, 3, 4], 2))
[[[1, 2, 3], [4]], [[1, 2, 4], [3]], [[1, 2], [3, 4]],
[[1, 3, 4], [2]], [[1, 3], [2, 4]], [[1, 4], [2, 3]],
[[1], [2, 3, 4]]]
>>> list(multiset_partitions([1, 2, 3, 4], 1))
[[[1, 2, 3, 4]]]

Only unique partitions are returned and these will be returned in a canonical order regardless of the order of the input:

>>> a = [1, 2, 2, 1]
>>> ans = list(multiset_partitions(a, 2))
>>> a.sort()
>>> list(multiset_partitions(a, 2)) == ans
True
>>> a = range(3, 1, -1)
>>> (list(multiset_partitions(a)) ==
...  list(multiset_partitions(sorted(a))))
True

If m is omitted then all partitions will be returned:

>>> list(multiset_partitions([1, 1, 2]))
[[[1, 1, 2]], [[1, 1], [2]], [[1, 2], [1]], [[1], [1], [2]]]
>>> list(multiset_partitions([1]*3))
[[[1, 1, 1]], [[1], [1, 1]], [[1], [1], [1]]]

Counting

The number of partitions of a set is given by the bell number:

>>> from .. import bell
>>> len(list(multiset_partitions(5))) == bell(5) == 52
True

The number of partitions of length k from a set of size n is given by the Stirling Number of the 2nd kind:

>>> def S2(n, k):
...     from .. import Dummy, binomial, factorial, Sum
...     if k > n:
...         return 0
...     j = Dummy()
...     arg = (-1)**(k-j)*j**n*binomial(k,j)
...     return 1/factorial(k)*Sum(arg,(j,0,k)).doit()
...
>>> S2(5, 2) == len(list(multiset_partitions(5, 2))) == 15
True

These comments on counting apply to sets, not multisets.
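The same Stirling numbers can also be computed without symbolic machinery, via the standard recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1); a plain-Python sketch for cross-checking the counts above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind via
    S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1        # includes S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(5, 2))                         # 15
print(sum(stirling2(5, k) for k in range(6)))  # 52, the Bell number B(5)
```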

Notes

When all the elements are the same in the multiset, the order of the returned partitions is determined by the partitions routine. If one is counting partitions then it is better to use the nT function.

See also

partitions, sympy.combinatorics.partitions.Partition, sympy.combinatorics.partitions.IntegerPartition, sympy.functions.combinatorial.numbers.nT

modelparameters.sympy.utilities.iterables.multiset_permutations(m, size=None, g=None)[source]

Return the unique permutations of multiset m.

Examples

>>> from .iterables import multiset_permutations
>>> from .. import factorial
>>> [''.join(i) for i in multiset_permutations('aab')]
['aab', 'aba', 'baa']
>>> factorial(len('banana'))
720
>>> len(list(multiset_permutations('banana')))
60
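The count 60 is the multinomial coefficient 6!/(3!*2!*1!) determined by the letter multiplicities of 'banana'; a pure-Python cross-check:

```python
from collections import Counter
from itertools import permutations
from math import factorial

word = 'banana'
counts = Counter(word)                  # multiplicities: a: 3, n: 2, b: 1
multinomial = factorial(len(word))
for c in counts.values():
    multinomial //= factorial(c)
print(multinomial)                      # 60
print(len(set(permutations(word))))     # 60 unique orderings, same count
```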
modelparameters.sympy.utilities.iterables.necklaces(n, k, free=False)[source]

A routine to generate necklaces that may (free=True) or may not (free=False) be turned over to be viewed. The “necklaces” returned are comprised of n integers (beads) with k different values (colors). Only unique necklaces are returned.

Examples

>>> from .iterables import necklaces, bracelets
>>> def show(s, i):
...     return ''.join(s[j] for j in i)

The “unrestricted necklace” is sometimes also referred to as a “bracelet” (an object that can be turned over, a sequence that can be reversed) and the term “necklace” is used to imply a sequence that cannot be reversed. So ACB == ABC for a bracelet (rotate and reverse) while the two are different for a necklace since rotation alone cannot make the two sequences the same.

(mnemonic: Bracelets can be viewed Backwards, but Not Necklaces.)

>>> B = [show('ABC', i) for i in bracelets(3, 3)]
>>> N = [show('ABC', i) for i in necklaces(3, 3)]
>>> set(N) - set(B)
{'ACB'}
>>> list(necklaces(4, 2))
[(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1),
 (0, 1, 0, 1), (0, 1, 1, 1), (1, 1, 1, 1)]
>>> [show('.o', i) for i in bracelets(4, 2)]
['....', '...o', '..oo', '.o.o', '.ooo', 'oooo']
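The six necklaces of 4 beads in 2 colors agree with the classical counting formula (1/n) * sum over d | n of phi(d) * k**(n/d), a consequence of Burnside's lemma. A small sketch (euler_phi and necklace_count are illustrative helpers, not part of this module):

```python
from math import gcd

def euler_phi(n):
    """Euler's totient: count of 1 <= m <= n coprime to n."""
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def necklace_count(n, k):
    """Number of k-colored necklaces of n beads, by Burnside's lemma."""
    return sum(euler_phi(d) * k ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

print(necklace_count(4, 2))  # 6, matching the six necklaces listed above
```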

References

http://mathworld.wolfram.com/Necklace.html

modelparameters.sympy.utilities.iterables.numbered_symbols(prefix='x', cls=None, start=0, exclude=[], *args, **assumptions)[source]

Generate an infinite stream of Symbols consisting of a prefix and increasing subscripts provided that they do not occur in exclude.

Parameters:
  • prefix (str, optional) – The prefix to use. By default, this function will generate symbols of the form “x0”, “x1”, etc.

  • cls (class, optional) – The class to use. By default, it uses Symbol, but you can also use Wild or Dummy.

  • start (int, optional) – The start number. By default, it is 0.

Returns:

sym – The subscripted symbols.

Return type:

Symbol

modelparameters.sympy.utilities.iterables.ordered_partitions(n, m=None, sort=True)[source]

Generates ordered partitions of integer n.

Parameters:
  • m (integer, optional) – the default gives partitions of all sizes; otherwise only those with size m are given. In addition, if m is not None then partitions are generated in place (see examples).

  • sort (bool, default True) – controls whether partitions are returned in sorted order when m is not None; when False, the partitions are returned as fast as possible with elements sorted, but when m | n the partitions will not be in ascending lexicographical order.

Examples

>>> from .iterables import ordered_partitions

All partitions of 5 in ascending lexicographical:

>>> for p in ordered_partitions(5):
...     print(p)
[1, 1, 1, 1, 1]
[1, 1, 1, 2]
[1, 1, 3]
[1, 2, 2]
[1, 4]
[2, 3]
[5]

Only partitions of 5 with two parts:

>>> for p in ordered_partitions(5, 2):
...     print(p)
[1, 4]
[2, 3]

When m is given, the same list object will be used more than once for speed reasons, so you will not see the correct partitions unless you make a copy of each as it is generated:

>>> [p for p in ordered_partitions(7, 3)]
[[1, 1, 1], [1, 1, 1], [1, 1, 1], [2, 2, 2]]
>>> [list(p) for p in ordered_partitions(7, 3)]
[[1, 1, 5], [1, 2, 4], [1, 3, 3], [2, 2, 3]]

When n is a multiple of m, the elements are still sorted but the partitions themselves will be unordered if sort is False; the default is to return them in ascending lexicographical order.

>>> for p in ordered_partitions(6, 2):
...     print(p)
[1, 5]
[2, 4]
[3, 3]

But if speed is more important than ordering, sort can be set to False:

>>> for p in ordered_partitions(6, 2, sort=False):
...     print(p)
[1, 5]
[3, 3]
[2, 4]

modelparameters.sympy.utilities.iterables.partitions(n, m=None, k=None, size=False)[source]

Generate all partitions of a positive integer n.

Parameters:
  • m (integer (default gives partitions of all sizes)) – limits number of parts in partition (mnemonic: m, maximum parts)

  • k (integer (default gives partitions number from 1 through n)) – limits the numbers that are kept in the partition (mnemonic: k, keys)

  • size (bool (default False, only partition is returned)) – when True then (M, P) is returned where M is the sum of the multiplicities and P is the generated partition.

Each partition is represented as a dictionary, mapping an integer to the number of copies of that integer in the partition. For example, the first partition of 4 returned is {4: 1}, "4: one of them".

Examples

>>> from .iterables import partitions

The numbers appearing in the partition (the key of the returned dict) are limited with k:

>>> for p in partitions(6, k=2):  
...     print(p)
{2: 3}
{1: 2, 2: 2}
{1: 4, 2: 1}
{1: 6}

The maximum number of parts in the partition (the sum of the values in the returned dict) are limited with m (default value, None, gives partitions from 1 through n):

>>> for p in partitions(6, m=2):  
...     print(p)
...
{6: 1}
{1: 1, 5: 1}
{2: 1, 4: 1}
{3: 2}

Note that the _same_ dictionary object is returned each time. This is for speed: generating each partition goes quickly, taking constant time, independent of n.

>>> [p for p in partitions(6, k=2)]
[{1: 6}, {1: 6}, {1: 6}, {1: 6}]

If you want to build a list of the returned dictionaries then make a copy of them:

>>> [p.copy() for p in partitions(6, k=2)]  
[{2: 3}, {1: 2, 2: 2}, {1: 4, 2: 1}, {1: 6}]
>>> [(M, p.copy()) for M, p in partitions(6, k=2, size=True)]  
[(3, {2: 3}), (4, {1: 2, 2: 2}), (5, {1: 4, 2: 1}), (6, {1: 6})]
Reference:

Modified from Tim Peters’ version to allow for k and m values: code.activestate.com/recipes/218332-generator-for-integer-partitions/

See also

sympy.combinatorics.partitions.Partition, sympy.combinatorics.partitions.IntegerPartition

modelparameters.sympy.utilities.iterables.permute_signs(t)[source]

Return iterator in which the signs of non-zero elements of t are permuted.

Examples

>>> from .iterables import permute_signs
>>> list(permute_signs((0, 1, 2)))
[(0, 1, 2), (0, -1, 2), (0, 1, -2), (0, -1, -2)]
modelparameters.sympy.utilities.iterables.postfixes(seq)[source]

Generate all postfixes of a sequence.

Examples

>>> from .iterables import postfixes
>>> list(postfixes([1,2,3,4]))
[[4], [3, 4], [2, 3, 4], [1, 2, 3, 4]]
modelparameters.sympy.utilities.iterables.postorder_traversal(node, keys=None)[source]

Do a postorder traversal of a tree.

This generator recursively yields nodes that it has visited in a postorder fashion. That is, it descends through the tree depth-first to yield all of a node’s children’s postorder traversal before yielding the node itself.

Parameters:
  • node (sympy expression) – The expression to traverse.

  • keys ((default None) sort key(s)) – The key(s) used to sort args of Basic objects. When None, args of Basic objects are processed in arbitrary order. If key is defined, it will be passed along to ordered() as the only key(s) to use to sort the arguments; if key is simply True then the default keys of ordered will be used (node count and default_sort_key).

Yields:

subtree (sympy expression) – All of the subtrees in the tree.

Examples

>>> from .iterables import postorder_traversal
>>> from ..abc import w, x, y, z

The nodes are returned in the order that they are encountered unless key is given; simply passing key=True will guarantee that the traversal is unique.

>>> list(postorder_traversal(w + (x + y)*z)) 
[z, y, x, x + y, z*(x + y), w, w + z*(x + y)]
>>> list(postorder_traversal(w + (x + y)*z, keys=True))
[w, z, x, y, x + y, z*(x + y), w + z*(x + y)]
modelparameters.sympy.utilities.iterables.prefixes(seq)[source]

Generate all prefixes of a sequence.

Examples

>>> from .iterables import prefixes
>>> list(prefixes([1,2,3,4]))
[[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
modelparameters.sympy.utilities.iterables.reshape(seq, how)[source]

Reshape the sequence according to the template in how.

Examples

>>> from ..utilities import reshape
>>> seq = list(range(1, 9))
>>> reshape(seq, [4]) # lists of 4
[[1, 2, 3, 4], [5, 6, 7, 8]]
>>> reshape(seq, (4,)) # tuples of 4
[(1, 2, 3, 4), (5, 6, 7, 8)]
>>> reshape(seq, (2, 2)) # tuples of 4
[(1, 2, 3, 4), (5, 6, 7, 8)]
>>> reshape(seq, (2, [2])) # (i, i, [i, i])
[(1, 2, [3, 4]), (5, 6, [7, 8])]
>>> reshape(seq, ((2,), [2])) # etc....
[((1, 2), [3, 4]), ((5, 6), [7, 8])]
>>> reshape(seq, (1, [2], 1))
[(1, [2, 3], 4), (5, [6, 7], 8)]
>>> reshape(tuple(seq), ([[1], 1, (2,)],))
(([[1], 2, (3, 4)],), ([[5], 6, (7, 8)],))
>>> reshape(tuple(seq), ([1], 1, (2,)))
(([1], 2, (3, 4)), ([5], 6, (7, 8)))
>>> reshape(list(range(12)), [2, [3], {2}, (1, (3,), 1)])
[[0, 1, [2, 3, 4], {5, 6}, (7, (8, 9, 10), 11)]]
modelparameters.sympy.utilities.iterables.rotate_left(x, y)[source]

Left rotates a list x by the number of steps specified in y.

Examples

>>> from .iterables import rotate_left
>>> a = [0, 1, 2]
>>> rotate_left(a, 1)
[1, 2, 0]
modelparameters.sympy.utilities.iterables.rotate_right(x, y)[source]

Right rotates a list x by the number of steps specified in y.

Examples

>>> from .iterables import rotate_right
>>> a = [0, 1, 2]
>>> rotate_right(a, 1)
[2, 0, 1]
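Both rotations amount to simple list slicing; a plain-Python sketch of the same behavior:

```python
def rotate_left(x, y):
    """Rotate list x left by y steps using slicing."""
    if not x:
        return x[:]
    y = y % len(x)
    return x[y:] + x[:y]

def rotate_right(x, y):
    """Rotate list x right by y steps: a left rotation by len(x) - y."""
    if not x:
        return x[:]
    y = len(x) - y % len(x)
    return x[y:] + x[:y]

print(rotate_left([0, 1, 2], 1))   # [1, 2, 0]
print(rotate_right([0, 1, 2], 1))  # [2, 0, 1]
```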
modelparameters.sympy.utilities.iterables.runs(seq, op=<built-in function gt>)[source]

Group the sequence into lists in which successive elements all compare the same with the comparison operator, op: op(seq[i + 1], seq[i]) is True for all elements in a run.

Examples

>>> from .iterables import runs
>>> from operator import ge
>>> runs([0, 1, 2, 2, 1, 4, 3, 2, 2])
[[0, 1, 2], [2], [1, 4], [3], [2], [2]]
>>> runs([0, 1, 2, 2, 1, 4, 3, 2, 2], op=ge)
[[0, 1, 2, 2], [1, 4], [3], [2, 2]]
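The grouping logic can be sketched in plain Python: a new run starts whenever op(next element, last element of the current run) fails:

```python
from operator import gt

def runs_sketch(seq, op=gt):
    """Split seq into runs where op(seq[i + 1], seq[i]) holds within each run."""
    out = []
    for x in seq:
        if out and op(x, out[-1][-1]):
            out[-1].append(x)   # continue the current run
        else:
            out.append([x])     # start a new run
    return out

print(runs_sketch([0, 1, 2, 2, 1, 4, 3, 2, 2]))
# [[0, 1, 2], [2], [1, 4], [3], [2], [2]]
```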
modelparameters.sympy.utilities.iterables.sift(seq, keyfunc)[source]

Sift the sequence seq into a dictionary according to keyfunc.

OUTPUT: each element in seq is stored in a list keyed to the value of keyfunc for that element.

Examples

>>> from ..utilities import sift
>>> from ..abc import x, y
>>> from .. import sqrt, exp
>>> sift(range(5), lambda x: x % 2)
{0: [0, 2, 4], 1: [1, 3]}

sift() returns a defaultdict() object, so any key that has no matches will give [].

>>> sift([x], lambda x: x.is_commutative)
{True: [x]}
>>> _[False]
[]

Sometimes you won’t know how many keys you will get:

>>> sift([sqrt(x), exp(x), (y**x)**2],
...      lambda x: x.as_base_exp()[0])
{E: [exp(x)], x: [sqrt(x)], y: [y**(2*x)]}

If you need to sort the sifted items it might be better to use ordered, which can economically apply multiple sort keys to a sequence while sorting.

See also

ordered

modelparameters.sympy.utilities.iterables.signed_permutations(t)[source]

Return iterator in which the signs of non-zero elements of t and the order of the elements are permuted.

Examples

>>> from .iterables import signed_permutations
>>> list(signed_permutations((0, 1, 2)))
[(0, 1, 2), (0, -1, 2), (0, 1, -2), (0, -1, -2), (0, 2, 1),
(0, -2, 1), (0, 2, -1), (0, -2, -1), (1, 0, 2), (-1, 0, 2),
(1, 0, -2), (-1, 0, -2), (1, 2, 0), (-1, 2, 0), (1, -2, 0),
(-1, -2, 0), (2, 0, 1), (-2, 0, 1), (2, 0, -1), (-2, 0, -1),
(2, 1, 0), (-2, 1, 0), (2, -1, 0), (-2, -1, 0)]
modelparameters.sympy.utilities.iterables.subsets(seq, k=None, repetition=False)[source]

Generates all k-subsets (combinations) from an n-element set, seq.

A k-subset of an n-element set is any subset of length exactly k. The number of k-subsets of an n-element set is given by binomial(n, k), whereas there are 2**n subsets all together. If k is None then all 2**n subsets will be returned from shortest to longest.

Examples

>>> from .iterables import subsets

subsets(seq, k) will return the n!/k!/(n - k)! k-subsets (combinations) without repetition, i.e. once an item has been removed, it can no longer be “taken”:

>>> list(subsets([1, 2], 2))
[(1, 2)]
>>> list(subsets([1, 2]))
[(), (1,), (2,), (1, 2)]
>>> list(subsets([1, 2, 3], 2))
[(1, 2), (1, 3), (2, 3)]

subsets(seq, k, repetition=True) will return the (n - 1 + k)!/k!/(n - 1)! combinations with repetition:

>>> list(subsets([1, 2], 2, repetition=True))
[(1, 1), (1, 2), (2, 2)]

If you ask for more items than are in the set you get the empty set unless you allow repetitions:

>>> list(subsets([0, 1], 3, repetition=False))
[]
>>> list(subsets([0, 1], 3, repetition=True))
[(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]
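The counting claims (binomial(n, k) subsets of size k, 2**n subsets in total) can be cross-checked with itertools.combinations; a sketch of the k=None behavior:

```python
from itertools import chain, combinations
from math import comb

def all_subsets(seq):
    """All subsets from shortest to longest, like subsets(seq) with k=None."""
    return chain.from_iterable(combinations(seq, k) for k in range(len(seq) + 1))

seq = [1, 2, 3, 4]
assert len(list(combinations(seq, 2))) == comb(4, 2) == 6
assert len(list(all_subsets(seq))) == 2 ** 4 == 16
print(list(all_subsets([1, 2])))  # [(), (1,), (2,), (1, 2)]
```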
modelparameters.sympy.utilities.iterables.take(iter, n)[source]

Return the first n items from the iterator iter.

modelparameters.sympy.utilities.iterables.topological_sort(graph, key=None)[source]

Topological sort of graph’s vertices.

Parameters:
  • graph (tuple[list, list[tuple[T, T]]) – A tuple consisting of a list of vertices and a list of edges of a graph to be sorted topologically.

  • key (callable[T] (optional)) – Ordering key for vertices on the same level. By default the natural (e.g. lexicographic) ordering is used (in this case the base type must implement ordering relations).

Examples

Consider a graph:

+---+     +---+     +---+
| 7 |\    | 5 |     | 3 |
+---+ \   +---+     +---+
  |   _\___/ ____   _/ |
  |  /  \___/    \ /   |
  V  V           V V   |
 +----+         +---+  |
 | 11 |         | 8 |  |
 +----+         +---+  |
  | | \____   ___/ _   |
  | \      \ /    / \  |
  V  \     V V   /  V  V
+---+ \   +---+ |  +----+
| 2 |  |  | 9 | |  | 10 |
+---+  |  +---+ |  +----+
       \________/

where vertices are integers. This graph can be encoded using elementary Python’s data structures as follows:

>>> V = [2, 3, 5, 7, 8, 9, 10, 11]
>>> E = [(7, 11), (7, 8), (5, 11), (3, 8), (3, 10),
...      (11, 2), (11, 9), (11, 10), (8, 9)]

To compute a topological sort for graph (V, E) issue:

>>> from .iterables import topological_sort

>>> topological_sort((V, E))
[3, 5, 7, 8, 11, 2, 9, 10]

If specific tie breaking approach is needed, use key parameter:

>>> topological_sort((V, E), key=lambda v: -v)
[7, 5, 11, 3, 10, 8, 9, 2]

Only acyclic graphs can be sorted. If the input graph has a cycle, then ValueError will be raised:

>>> topological_sort((V, E + [(10, 7)]))
Traceback (most recent call last):
...
ValueError: cycle detected
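The behavior above, including the cycle detection, matches Kahn's algorithm: repeatedly emit a vertex with no remaining incoming edges. A minimal pure-Python sketch (not the sympy implementation), using natural ordering as the tie-breaker:

```python
def toposort(vertices, edges):
    """Kahn's algorithm: repeatedly remove a vertex with in-degree zero."""
    indeg = {v: 0 for v in vertices}
    for u, v in edges:
        indeg[v] += 1
    order = []
    while indeg:
        ready = sorted(v for v, d in indeg.items() if d == 0)
        if not ready:
            raise ValueError("cycle detected")
        v = ready[0]                 # natural ordering as tie-breaker
        order.append(v)
        del indeg[v]
        for a, b in edges:
            if a == v:
                indeg[b] -= 1
    return order

V = [2, 3, 5, 7, 8, 9, 10, 11]
E = [(7, 11), (7, 8), (5, 11), (3, 8), (3, 10),
     (11, 2), (11, 9), (11, 10), (8, 9)]
print(toposort(V, E))  # [3, 5, 7, 8, 11, 2, 9, 10]
```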
modelparameters.sympy.utilities.iterables.unflatten(iter, n=2)[source]

Group iter into tuples of length n. Raise an error if the length of iter is not a multiple of n.

modelparameters.sympy.utilities.iterables.uniq(seq, result=None)[source]

Yield unique elements from seq as an iterator. The second parameter result is used internally; it is not necessary to pass anything for this.

Examples

>>> from .iterables import uniq
>>> dat = [1, 4, 1, 5, 4, 2, 1, 2]
>>> type(uniq(dat)) in (list, tuple)
False
>>> list(uniq(dat))
[1, 4, 5, 2]
>>> list(uniq(x for x in dat))
[1, 4, 5, 2]
>>> list(uniq([[1], [2, 1], [1]]))
[[1], [2, 1]]
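The behavior — first occurrences preserved in order, with a fallback for unhashable elements such as lists — can be sketched as:

```python
def uniq_sketch(seq):
    """Yield first occurrences, falling back to a list for unhashable elements."""
    seen = set()
    unhashable = []
    for x in seq:
        try:
            if x not in seen:
                seen.add(x)
                yield x
        except TypeError:            # e.g. lists are not hashable
            if x not in unhashable:
                unhashable.append(x)
                yield x

print(list(uniq_sketch([1, 4, 1, 5, 4, 2, 1, 2])))  # [1, 4, 5, 2]
print(list(uniq_sketch([[1], [2, 1], [1]])))        # [[1], [2, 1]]
```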
modelparameters.sympy.utilities.iterables.variations(seq, n, repetition=False)[source]

Returns a generator of the n-sized variations of seq (size N). repetition controls whether items in seq can appear more than once.

Examples

variations(seq, n) will return N! / (N - n)! permutations without repetition of seq’s elements:

>>> from .iterables import variations
>>> list(variations([1, 2], 2))
[(1, 2), (2, 1)]

variations(seq, n, True) will return the N**n permutations obtained by allowing repetition of elements:

>>> list(variations([1, 2], 2, repetition=True))
[(1, 1), (1, 2), (2, 1), (2, 2)]

If you ask for more items than are in the set you get the empty set unless you allow repetitions:

>>> list(variations([0, 1], 3, repetition=False))
[]
>>> list(variations([0, 1], 3, repetition=True))[:4]
[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]

See also

sympy.core.compatibility.permutations, sympy.core.compatibility.product

modelparameters.sympy.utilities.lambdify module

This module provides convenient functions to transform sympy expressions to lambda functions which can be used to calculate numerical values very fast.

modelparameters.sympy.utilities.lambdify.implemented_function(symfunc, implementation)[source]

Add numerical implementation to function symfunc.

symfunc can be an UndefinedFunction instance, or a name string. In the latter case we create an UndefinedFunction instance with that name.

Be aware that this is a quick workaround, not a general method to create special symbolic functions. If you want to create a symbolic function to be used by all the machinery of SymPy you should subclass the Function class.

Parameters:
  • symfunc (str or UndefinedFunction instance) – If str, then create new UndefinedFunction with this as name. If symfunc is a sympy function, attach implementation to it.

  • implementation (callable) – numerical implementation to be called by evalf() or lambdify

Returns:

afunc – function with attached implementation

Return type:

sympy.FunctionClass instance

Examples

>>> from ..abc import x
>>> from .lambdify import lambdify, implemented_function
>>> from .. import Function
>>> f = implemented_function(Function('f'), lambda x: x+1)
>>> lam_f = lambdify(x, f(x))
>>> lam_f(4)
5
modelparameters.sympy.utilities.lambdify.lambdastr(args, expr, printer=None, dummify=False)[source]

Returns a string that can be evaluated to a lambda function.

Examples

>>> from ..abc import x, y, z
>>> from .lambdify import lambdastr
>>> lambdastr(x, x**2)
'lambda x: (x**2)'
>>> lambdastr((x,y,z), [z,y,x])
'lambda x,y,z: ([z, y, x])'

Although tuples may not appear as arguments to lambda in Python 3, lambdastr will create a lambda function that will unpack the original arguments so that nested arguments can be handled:

>>> lambdastr((x, (y, z)), x + y)
'lambda _0,_1: (lambda x,y,z: (x + y))(*list(__flatten_args__([_0,_1])))'
modelparameters.sympy.utilities.lambdify.lambdify(args, expr, modules=None, printer=None, use_imps=True, dummify=True)[source]

Returns a lambda function for fast calculation of numerical values.

If not specified differently by the user, modules defaults to ["numpy"] if NumPy is installed, and ["math", "mpmath", "sympy"] if it isn’t; that is, SymPy functions are replaced as far as possible by numpy functions if available, or otherwise by functions from Python’s standard math library or mpmath. To change this behavior, the modules argument can be used. It accepts:

  • the strings “math”, “mpmath”, “numpy”, “numexpr”, “sympy”, “tensorflow”

  • any modules (e.g. math)

  • dictionaries that map names of sympy functions to arbitrary functions

  • lists that contain a mix of the arguments above, with higher priority given to entries appearing first.

Warning

Note that this function uses eval, and thus shouldn’t be used on unsanitized input.

The default behavior is to substitute all arguments in the provided expression with dummy symbols. This allows for applied functions (e.g. f(t)) to be supplied as arguments. Call the function with dummify=False if dummy substitution is unwanted (and args is not a string). If you want to view the lambdified function or provide “sympy” as the module, you should probably set dummify=False.

For functions involving large array calculations, numexpr can provide a significant speedup over numpy. Please note that the available functions for numexpr are more limited than numpy but can be expanded with implemented_function and user defined subclasses of Function. If specified, numexpr may be the only option in modules. The official list of numexpr functions can be found at: https://github.com/pydata/numexpr#supported-functions

In previous releases lambdify replaced Matrix with numpy.matrix by default. As of release 1.0 numpy.array is the default. To get the old default behavior you must pass in [{'ImmutableDenseMatrix': numpy.matrix}, 'numpy'] to the modules kwarg.

>>> from .. import lambdify, Matrix
>>> from ..abc import x, y
>>> import numpy
>>> array2mat = [{'ImmutableDenseMatrix': numpy.matrix}, 'numpy']
>>> f = lambdify((x, y), Matrix([x, y]), modules=array2mat)
>>> f(1, 2)
matrix([[1],
        [2]])

Usage

  1. Use one of the provided modules:

    >>> from .. import sin, tan, gamma
    >>> from .lambdify import lambdastr
    >>> from ..abc import x, y
    >>> f = lambdify(x, sin(x), "math")
    
    Attention: Functions that are not in the math module will throw a name error when the lambda function is evaluated! So this would be better:

    >>> f = lambdify(x, sin(x)*gamma(x), ("math", "mpmath", "sympy"))
    
  2. Use some other module:

    >>> import numpy
    >>> f = lambdify((x,y), tan(x*y), numpy)
    
    Attention: There are naming differences between numpy and sympy. So if you simply take the numpy module, e.g. sympy.atan will not be translated to numpy.arctan. Use the modified module instead by passing the string “numpy”:

    >>> f = lambdify((x,y), tan(x*y), "numpy")
    >>> f(1, 2)
    -2.18503986326
    >>> from numpy import array
    >>> f(array([1, 2, 3]), array([2, 3, 5]))
    [-2.18503986 -0.29100619 -0.8559934 ]
    
  3. Use a dictionary defining custom functions:

    >>> def my_cool_function(x): return 'sin(%s) is cool' % x
    >>> myfuncs = {"sin" : my_cool_function}
    >>> f = lambdify(x, sin(x), myfuncs); f(1)
    'sin(1) is cool'
    

Examples

>>> from .lambdify import implemented_function
>>> from .. import sqrt, sin, Matrix
>>> from .. import Function
>>> from ..abc import w, x, y, z
>>> f = lambdify(x, x**2)
>>> f(2)
4
>>> f = lambdify((x, y, z), [z, y, x])
>>> f(1,2,3)
[3, 2, 1]
>>> f = lambdify(x, sqrt(x))
>>> f(4)
2.0
>>> f = lambdify((x, y), sin(x*y)**2)
>>> f(0, 5)
0.0
>>> row = lambdify((x, y), Matrix((x, x + y)).T, modules='sympy')
>>> row(1, 2)
Matrix([[1, 3]])

Tuple arguments are handled and the lambdified function should be called with the same type of arguments as were used to create the function:

>>> f = lambdify((x, (y, z)), x + y)
>>> f(1, (2, 4))
3

A more robust way of handling this is to always work with flattened arguments:

>>> from .iterables import flatten
>>> args = w, (x, (y, z))
>>> vals = 1, (2, (3, 4))
>>> f = lambdify(flatten(args), w + x + y + z)
>>> f(*flatten(vals))
10

Functions present in expr can also carry their own numerical implementations, in a callable attached to the _imp_ attribute. Usually you attach this using the implemented_function factory:

>>> f = implemented_function(Function('f'), lambda x: x+1)
>>> func = lambdify(x, f(x))
>>> func(4)
5

lambdify always prefers _imp_ implementations to implementations in other namespaces, unless the use_imps input parameter is False.

Usage with Tensorflow module:

>>> import tensorflow as tf
>>> f = Max(x, sin(x))
>>> func = lambdify(x, f, 'tensorflow')
>>> result = func(tf.constant(1.0))
>>> result # a tf.Tensor representing the result of the calculation
<tf.Tensor 'Maximum:0' shape=() dtype=float32>
>>> sess = tf.Session()
>>> sess.run(result) # compute result
1.0
>>> var = tf.Variable(1.0)
>>> sess.run(tf.global_variables_initializer())
>>> sess.run(func(var)) # also works for tf.Variable and tf.Placeholder
1.0
>>> tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]]) # works with any shape tensor
>>> sess.run(func(tensor))
array([[ 1.,  2.],
       [ 3.,  4.]], dtype=float32)

modelparameters.sympy.utilities.magic module

Functions that involve magic.

modelparameters.sympy.utilities.magic.pollute(names, objects)[source]

Pollute the global namespace with symbols -> objects mapping.
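
The documented behavior can be sketched with a frame hack. This is a minimal sketch under the assumption that pollute simply injects each name -> object pair into the caller's global namespace (it relies on CPython's sys._getframe):

```python
import sys

def pollute(names, objects):
    # Sketch: inject each name -> object pair into the caller's
    # global namespace (assumes CPython's sys._getframe).
    frame = sys._getframe(1)
    for name, obj in zip(names, objects):
        frame.f_globals[name] = obj

pollute(["answer"], [42])
print(answer)  # the injected global is now visible
```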

modelparameters.sympy.utilities.memoization module

modelparameters.sympy.utilities.memoization.assoc_recurrence_memo(base_seq)[source]

Memo decorator for associated sequences defined by recurrence starting from base

base_seq(n) – callable to get base sequence elements

Note: this works only for Pn0 = base_seq(0) cases, and only for m <= n.

modelparameters.sympy.utilities.memoization.recurrence_memo(initial)[source]

Memo decorator for sequences defined by recurrence

See usage examples e.g. in the specfun/combinatorial module
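
A minimal sketch of such a decorator, under the assumption that the decorated function receives the index n and the list of previously computed terms:

```python
def recurrence_memo(initial):
    # Sketch: cache every term computed so far; the decorated
    # function f(n, prev) sees the list of earlier terms and
    # returns term number n.
    cache = list(initial)
    def decorator(f):
        def fetch(n):
            while len(cache) <= n:
                cache.append(f(len(cache), cache))
            return cache[n]
        return fetch
    return decorator

@recurrence_memo([1, 1])
def fib(n, prev):
    return prev[-1] + prev[-2]

print(fib(10))  # 89 (terms run 1, 1, 2, 3, 5, ...)
```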

modelparameters.sympy.utilities.misc module

Miscellaneous stuff that doesn’t really fit anywhere else.

exception modelparameters.sympy.utilities.misc.Undecidable[source]

Bases: ValueError

modelparameters.sympy.utilities.misc.debug(*args)[source]

Print *args if SYMPY_DEBUG is True, else do nothing.

modelparameters.sympy.utilities.misc.debug_decorator(func)[source]

If SYMPY_DEBUG is True, it will print a nice execution tree with arguments and results of all decorated functions, else do nothing.

modelparameters.sympy.utilities.misc.filldedent(s, w=70)[source]

Strips leading and trailing empty lines from a copy of s, then dedents, fills and returns it.

Empty line stripping serves to deal with docstrings like this one that start with a newline after the initial triple quote, inserting an empty line at the beginning of the string.
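
The described behavior can be approximated with the standard textwrap tools; this is a sketch, not the exact implementation:

```python
from textwrap import dedent, fill

def filldedent(s, w=70):
    # Sketch: drop leading/trailing empty lines, dedent the block,
    # then re-fill it as a single paragraph of width w.
    return '\n' + fill(dedent(str(s)).strip('\n'), width=w)

print(filldedent("""
    a docstring that
    starts after a newline
"""))
```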

modelparameters.sympy.utilities.misc.find_executable(executable, path=None)[source]

Try to find ‘executable’ in the directories listed in ‘path’ (a string listing directories separated by ‘os.pathsep’; defaults to os.environ[‘PATH’]). Returns the complete filename or None if not found
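
The search is roughly equivalent to the following sketch (shutil.which is the modern stdlib analogue; platform details such as Windows ".exe" suffixes are omitted):

```python
import os

def find_executable(executable, path=None):
    # Sketch: walk each directory listed in path (default $PATH)
    # and return the first matching file, or None.
    if path is None:
        path = os.environ.get('PATH', os.defpath)
    for d in path.split(os.pathsep):
        candidate = os.path.join(d, executable)
        if os.path.isfile(candidate):
            return candidate
    return None

print(find_executable('surely-not-a-real-program-xyz') is None)  # True
```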

modelparameters.sympy.utilities.misc.func_name(x)[source]

Return the function name of x (if defined), else type(x). See also sympy.core.compatibility.

modelparameters.sympy.utilities.misc.rawlines(s)[source]

Return a cut-and-pastable string that, when printed, is equivalent to the input. The string returned is formatted so it can be indented nicely within tests; in some cases it is wrapped in the dedent function which has to be imported from textwrap.

Examples

Note: because there are characters in the examples below that need to be escaped because they are themselves within a triple quoted docstring, expressions below look more complicated than they would be if they were printed in an interpreter window.

>>> from .misc import rawlines
>>> from .. import TableForm
>>> s = str(TableForm([[1, 10]], headings=(None, ['a', 'bee'])))
>>> print(rawlines(s))
(
    'a bee\n'
    '-----\n'
    '1 10 '
)
>>> print(rawlines('''this
... that'''))
dedent('''\
    this
    that''')
>>> print(rawlines('''this
... that
... '''))
dedent('''\
    this
    that
    ''')
>>> s = """this
... is a triple '''
... """
>>> print(rawlines(s))
dedent("""\
    this
    is a triple '''
    """)
>>> print(rawlines('''this
... that
...     '''))
(
    'this\n'
    'that\n'
    '    '
)
modelparameters.sympy.utilities.misc.replace(string, *reps)[source]

Return string with all keys in reps replaced with their corresponding values, longer strings first, irrespective of the order they are given. reps may be passed as tuples or a single mapping.

Examples

>>> from .misc import replace
>>> replace('foo', {'oo': 'ar', 'f': 'b'})
'bar'
>>> replace("spamham sha", ("spam", "eggs"), ("sha","md5"))
'eggsham md5'

There is no guarantee that a unique answer will be obtained if keys in a mapping overlap (i.e. are the same length and have some identical sequence at the beginning/end):

>>> reps = [
...     ('ab', 'x'),
...     ('bc', 'y')]
>>> replace('abc', *reps) in ('xc', 'ay')
True
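
The longest-first, single-pass behavior can be sketched with a regular-expression alternation (an assumption of this sketch: ties between equal-length keys follow insertion order, which Python's stable sort preserves):

```python
import re

def replace(string, *reps):
    # Sketch: build one alternation, longest keys first, and
    # substitute in a single left-to-right pass so replacements
    # do not cascade into each other.
    if len(reps) == 1 and isinstance(reps[0], dict):
        pairs = list(reps[0].items())
    else:
        pairs = list(reps)
    pairs.sort(key=lambda kv: len(kv[0]), reverse=True)
    mapping = dict(pairs)
    pattern = re.compile('|'.join(re.escape(k) for k, _ in pairs))
    return pattern.sub(lambda m: mapping[m.group(0)], string)

print(replace('foo', {'oo': 'ar', 'f': 'b'}))  # bar
```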

modelparameters.sympy.utilities.misc.translate(s, a, b=None, c=None)[source]

Return s where characters have been replaced or deleted.

SYNTAX

translate(s, None, deletechars):

all characters in deletechars are deleted

translate(s, map [,deletechars]):

all characters in deletechars (if provided) are deleted then the replacements defined by map are made; if the keys of map are strings then the longer ones are handled first. Multicharacter deletions should have a value of ‘’.

translate(s, oldchars, newchars, deletechars)

all characters in deletechars are deleted then each character in oldchars is replaced with the corresponding character in newchars

Examples

>>> from .misc import translate
>>> from ..core.compatibility import unichr
>>> abc = 'abc'
>>> translate(abc, None, 'a')
'bc'
>>> translate(abc, {'a': 'x'}, 'c')
'xb'
>>> translate(abc, {'abc': 'x', 'a': 'y'})
'x'
>>> translate('abcd', 'ac', 'AC', 'd')
'AbC'

There is no guarantee that a unique answer will be obtained if keys in a mapping overlap (i.e. are the same length and have some identical sequences at the beginning/end):

>>> translate(abc, {'ab': 'x', 'bc': 'y'}) in ('xc', 'ay')
True

modelparameters.sympy.utilities.pkgdata module

pkgdata is a simple, extensible way for a package to acquire data file resources.

The getResource function is equivalent to the standard idioms, such as the following minimal implementation:

import sys, os

def getResource(identifier, pkgname=__name__):
    pkgpath = os.path.dirname(sys.modules[pkgname].__file__)
    path = os.path.join(pkgpath, identifier)
    return open(os.path.normpath(path), mode='rb')

When a __loader__ is present on the module given by __name__, it will defer getResource to its get_data implementation and return it as a file-like object (such as StringIO).

modelparameters.sympy.utilities.pkgdata.get_resource(identifier, pkgname='modelparameters.sympy.utilities.pkgdata')[source]

Acquire a readable object for a given package name and identifier. An IOError will be raised if the resource can not be found.

For example:

mydata = get_resource('mypkgdata.jpg').read()

Note that the package name must be fully qualified, if given, such that it would be found in sys.modules.

In some cases, getResource will return a real file object. In that case, it may be useful to use its name attribute to get the path rather than use it as a file-like object. For example, you may be handing data off to a C API.

modelparameters.sympy.utilities.pytest module

py.test hacks to support XFAIL/XPASS

class modelparameters.sympy.utilities.pytest.RaisesContext(expectedException)[source]

Bases: object
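
Its behavior can be sketched as an ordinary context manager; this sketch assumes the semantics demonstrated by the raises() examples below:

```python
class RaisesContext:
    # Sketch: swallow the expected exception; complain if nothing
    # was raised inside the with-block.
    def __init__(self, expectedException):
        self.expectedException = expectedException

    def __enter__(self):
        return None

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            raise AssertionError("DID NOT RAISE")
        # Returning True suppresses the expected exception;
        # any unexpected exception propagates unchanged.
        return issubclass(exc_type, self.expectedException)

with RaisesContext(ZeroDivisionError):
    1 / 0  # raises as expected, so the block passes silently
```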

modelparameters.sympy.utilities.pytest.SKIP(reason)[source]

Similar to skip(), but this is a decorator.

exception modelparameters.sympy.utilities.pytest.Skipped[source]

Bases: Exception

modelparameters.sympy.utilities.pytest.XFAIL(func)[source]
exception modelparameters.sympy.utilities.pytest.XFail[source]

Bases: Exception

exception modelparameters.sympy.utilities.pytest.XPass[source]

Bases: Exception

modelparameters.sympy.utilities.pytest.raises(expectedException, code=None)[source]

Tests that code raises the exception expectedException.

code may be a callable, such as a lambda expression or function name.

If code is not given or None, raises will return a context manager for use in with statements; the code to execute then comes from the scope of the with.

raises() does nothing if the callable raises the expected exception, otherwise it raises an AssertionError.

Examples

>>> from .pytest import raises
>>> raises(ZeroDivisionError, lambda: 1/0)
>>> raises(ZeroDivisionError, lambda: 1/2)
Traceback (most recent call last):
...
AssertionError: DID NOT RAISE
>>> with raises(ZeroDivisionError):
...     n = 1/0
>>> with raises(ZeroDivisionError):
...     n = 1/2
Traceback (most recent call last):
...
AssertionError: DID NOT RAISE

Note that you cannot test multiple statements via with raises:

>>> with raises(ZeroDivisionError):
...     n = 1/0    # will execute and raise, aborting the ``with``
...     n = 9999/0 # never executed

This is just what with is supposed to do: abort the contained statement sequence at the first exception and let the context manager deal with the exception.

To test multiple statements, you’ll need a separate with for each:

>>> with raises(ZeroDivisionError):
...     n = 1/0    # will execute and raise
>>> with raises(ZeroDivisionError):
...     n = 9999/0 # will also execute and raise
modelparameters.sympy.utilities.pytest.skip(str)[source]
modelparameters.sympy.utilities.pytest.slow(func)[source]

modelparameters.sympy.utilities.randtest module

Helpers for randomized testing

modelparameters.sympy.utilities.randtest.random_complex_number(a=2, b=-1, c=3, d=1, rational=False)[source]

Return a random complex number.

To reduce the chance of hitting branch cuts or similar, we guarantee a <= Re z <= c and b <= Im z <= d.
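
A sketch of the rational=False case using only the stdlib (the sympy version returns a sympy number rather than a Python complex):

```python
import random

def random_complex_number(a=2, b=-1, c=3, d=1):
    # Sketch: uniform random z with a <= Re z <= c and b <= Im z <= d.
    return complex(random.uniform(a, c), random.uniform(b, d))

z = random_complex_number()
print(2 <= z.real <= 3, -1 <= z.imag <= 1)  # True True
```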

modelparameters.sympy.utilities.randtest.test_derivative_numerically(f, z, tol=1e-06, a=2, b=-1, c=3, d=1)[source]

Test numerically that the symbolically computed derivative of f with respect to z is correct.

This routine does not test whether there are Floats present with precision higher than 15 digits so if there are, your results may not be what you expect due to round-off errors.

Examples

>>> from .. import sin
>>> from ..abc import x
>>> from .randtest import test_derivative_numerically as td
>>> td(sin(x), x)
True
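
The underlying idea can be sketched without sympy via a central finite difference; check_derivative is a hypothetical helper for illustration, not the sympy API:

```python
import math

def check_derivative(f, df, z0, tol=1e-6, h=1e-6):
    # Sketch (hypothetical helper): compare the claimed derivative
    # df at z0 against a central difference of f, accurate to O(h**2).
    approx = (f(z0 + h) - f(z0 - h)) / (2 * h)
    return abs(approx - df(z0)) < tol

print(check_derivative(math.sin, math.cos, 1.0))  # True
```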
modelparameters.sympy.utilities.randtest.verify_numerically(f, g, z=None, tol=1e-06, a=2, b=-1, c=3, d=1)[source]

Test numerically that f and g agree when evaluated in the argument z.

If z is None, all symbols will be tested. This routine does not test whether there are Floats present with precision higher than 15 digits so if there are, your results may not be what you expect due to round-off errors.

Examples

>>> from .. import sin, cos
>>> from ..abc import x
>>> from .randtest import verify_numerically as tn
>>> tn(sin(x)**2 + cos(x)**2, 1, x)
True
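
The same idea, sketched without sympy: sample random points and compare the two functions there (the sampling interval is an assumption of this sketch):

```python
import math
import random

def agree_numerically(f, g, tol=1e-6, trials=10):
    # Sketch: f and g agree if |f(x) - g(x)| < tol at every sample.
    for _ in range(trials):
        x = random.uniform(-3, 3)
        if abs(f(x) - g(x)) >= tol:
            return False
    return True

print(agree_numerically(lambda x: math.sin(x)**2 + math.cos(x)**2,
                        lambda x: 1.0))  # True
```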

modelparameters.sympy.utilities.runtests module

This is our testing framework.

Goals:

  • it should be compatible with py.test and operate very similarly (or identically)

  • doesn’t require any external dependencies

  • preferably all the functionality should be in this file only

  • no magic, just import the test file and execute the test functions, that’s it

  • portable

class modelparameters.sympy.utilities.runtests.PyTestReporter(verbose=False, tb='short', colors=True, force_colors=False, split=None)[source]

Bases: Reporter

Py.test like reporter. Should produce output identical to py.test.

doctest_fail(name, error_msg)[source]
entering_filename(filename, n)[source]
entering_test(f)[source]
finish()[source]
import_error(filename, exc_info)[source]
leaving_filename()[source]
root_dir(dir)[source]
start(seed=None, msg='test process starts')[source]
property terminal_width
test_exception(exc_info)[source]
test_fail(exc_info)[source]
test_pass(char='.')[source]
test_skip(v=None)[source]
test_xfail()[source]
test_xpass(v)[source]
write(text, color='', align='left', width=None, force_colors=False)[source]

Prints a text on the screen.

It uses sys.stdout.write(), so no readline library is necessary.

Parameters:
  • color (choose from the colors below, "" means default color) –

  • align ("left"/"right", "left" is a normal print, "right" is aligned on) – the right-hand side of the screen, filled with spaces if necessary

  • width (the screen width) –

write_center(text, delim='=')[source]
write_exception(e, val, tb)[source]
class modelparameters.sympy.utilities.runtests.Reporter[source]

Bases: object

Parent class for all reporters.

exception modelparameters.sympy.utilities.runtests.Skipped[source]

Bases: Exception

class modelparameters.sympy.utilities.runtests.SymPyDocTestFinder(verbose=False, parser=<doctest.DocTestParser object>, recurse=True, exclude_empty=True)[source]

Bases: DocTestFinder

A class used to extract the DocTests that are relevant to a given object, from its docstring and the docstrings of its contained objects. Doctests can currently be extracted from the following object types: modules, functions, classes, methods, staticmethods, classmethods, and properties.

Modified from doctest’s version by looking harder for code in the case that it looks like the code comes from a different module. In the case of decorated functions (e.g. @vectorize) they appear to come from a different module (e.g. multidimensional) even though their code is not there.

class modelparameters.sympy.utilities.runtests.SymPyDocTestRunner(checker=None, verbose=None, optionflags=0)[source]

Bases: DocTestRunner

A class used to run DocTest test cases, and accumulate statistics. The run method is used to process a single DocTest case. It returns a tuple (f, t), where t is the number of test cases tried, and f is the number of test cases that failed.

Modified from the doctest version to not reset the sys.displayhook (see issue 5140).

See the docstring of the original DocTestRunner for more information.

run(test, compileflags=None, out=None, clear_globs=True)[source]

Run the examples in test, and display the results using the writer function out.

The examples are run in the namespace test.globs. If clear_globs is true (the default), then this namespace will be cleared after the test runs, to help with garbage collection. If you would like to examine the namespace after the test completes, then use clear_globs=False.

compileflags gives the set of flags that should be used by the Python compiler when running the examples. If not specified, then it will default to the set of future-import flags that apply to globs.

The output of each example is checked using SymPyDocTestRunner.check_output, and the results are formatted by the SymPyDocTestRunner.report_* methods.

class modelparameters.sympy.utilities.runtests.SymPyDocTests(reporter, normal)[source]

Bases: object

get_test_files(dir, pat='*.py', init_only=True)[source]

Returns the list of *.py files (default) from which docstrings will be tested which are at or below directory dir. By default, only those that have an __init__.py in their parent directory and do not start with test_ will be included.

test()[source]

Runs the tests and returns True if all tests pass, otherwise False.

test_file(filename)[source]
class modelparameters.sympy.utilities.runtests.SymPyOutputChecker[source]

Bases: OutputChecker

Compared to the OutputChecker from the stdlib, our OutputChecker class supports numerical comparison of floats occurring in the output of the doctest examples.

check_output(want, got, optionflags)[source]

Return True iff the actual output from an example (got) matches the expected output (want). These strings are always considered to match if they are identical; but depending on what option flags the test runner is using, several non-exact match types are also possible. See the documentation for TestRunner for more information about option flags.

modelparameters.sympy.utilities.runtests.SymPyTestResults

alias of TestResults

class modelparameters.sympy.utilities.runtests.SymPyTests(reporter, kw='', post_mortem=False, seed=None, fast_threshold=None, slow_threshold=None)[source]

Bases: object

get_test_files(dir, pat='test_*.py')[source]

Returns the list of test_*.py (default) files at or below directory dir relative to the sympy home directory.

matches(x)[source]

Does the keyword expression self._kw match “x”? Returns True/False.

Always returns True if self._kw is “”.

test(sort=False, timeout=False, slow=False, enhance_asserts=False)[source]

Runs the tests returning True if all tests pass, otherwise False.

If sort=False run tests in random order.

test_file(filename, sort=True, timeout=False, slow=False, enhance_asserts=False)[source]
modelparameters.sympy.utilities.runtests.convert_to_native_paths(lst)[source]

Converts a list of ‘/’ separated paths into a list of native (os.sep separated) paths and converts to lowercase if the system is case insensitive.
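
A sketch of the separator conversion (the lowercasing step for case-insensitive systems is omitted here):

```python
import os

def convert_to_native_paths(lst):
    # Sketch: swap '/' for the native separator in each path.
    return [p.replace('/', os.sep) for p in lst]

print(convert_to_native_paths(['sympy/core/basic.py']))
```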

modelparameters.sympy.utilities.runtests.doctest(*paths, **kwargs)[source]

Runs doctests in all *.py files in the sympy directory which match any of the given strings in paths or all tests if paths=[].

Notes:

  • Paths can be entered in native system format or in unix, forward-slash format.

  • Files that are on the blacklist can be tested by providing their path; they are only excluded if no paths are given.

Examples

>>> import sympy

Run all tests:

>>> sympy.doctest() 

Run one file:

>>> sympy.doctest("sympy/core/basic.py") 
>>> sympy.doctest("polynomial.rst") 

Run all tests in sympy/functions/ and some particular file:

>>> sympy.doctest("/functions", "basic.py") 

Run any file having polynomial in its name, doc/src/modules/polynomial.rst, sympy/functions/special/polynomials.py, and sympy/polys/polynomial.py:

>>> sympy.doctest("polynomial") 

The split option can be passed to split the test run into parts. The split currently only splits the test files, though this may change in the future. split should be a string of the form ‘a/b’, which will run part a of b. Note that the regular doctests and the Sphinx doctests are split independently. For instance, to run the first half of the test suite:

>>> sympy.doctest(split='1/2')  

The subprocess and verbose options are the same as with the function test(). See the docstring of that function for more information.

modelparameters.sympy.utilities.runtests.get_sympy_dir()[source]

Returns the root sympy directory and set the global value indicating whether the system is case sensitive or not.

modelparameters.sympy.utilities.runtests.run_all_tests(test_args=(), test_kwargs=None, doctest_args=(), doctest_kwargs=None, examples_args=(), examples_kwargs=None)[source]

Run all tests.

Right now, this runs the regular tests (bin/test), the doctests (bin/doctest), the examples (examples/all.py), and the sage tests (see sympy/external/tests/test_sage.py).

This is what setup.py test uses.

You can pass arguments and keyword arguments to the test functions that support them (for now, test, doctest, and the examples). See the docstrings of those functions for a description of the available options.

For example, to run the solvers tests with colors turned off:

>>> from .runtests import run_all_tests
>>> run_all_tests(test_args=("solvers",),
... test_kwargs={"colors": False}) 
modelparameters.sympy.utilities.runtests.run_in_subprocess_with_hash_randomization(function, function_args=(), function_kwargs=None, command='/opt/hostedtoolcache/Python/3.8.16/x64/bin/python3', module='sympy.utilities.runtests', force=False)[source]

Run a function in a Python subprocess with hash randomization enabled.

If hash randomization is not supported by the version of Python given, it returns False. Otherwise, it returns the exit value of the command. The function is passed to sys.exit(), so the return value of the function will be the return value.

The environment variable PYTHONHASHSEED is used to seed Python’s hash randomization. If it is set, this function will return False, because starting a new subprocess is unnecessary in that case. If it is not set, one is set at random, and the tests are run. Note that if this environment variable is set when Python starts, hash randomization is automatically enabled. To force a subprocess to be created even if PYTHONHASHSEED is set, pass force=True. This flag will not force a subprocess in Python versions that do not support hash randomization (see below), because those versions of Python do not support the -R flag.

function should be a string name of a function that is importable from the module module, like “_test”. The default for module is “sympy.utilities.runtests”. function_args and function_kwargs should be a repr-able tuple and dict, respectively. The default Python command is sys.executable, which is the currently running Python command.

This function is necessary because the seed for hash randomization must be set by the environment variable before Python starts. Hence, in order to use a predetermined seed for tests, we must start Python in a separate subprocess.

Hash randomization was added in the minor Python versions 2.6.8, 2.7.3, 3.1.5, and 3.2.3, and is enabled by default in all Python versions after and including 3.3.0.

Examples

>>> from .runtests import (
... run_in_subprocess_with_hash_randomization)
>>> # run the core tests in verbose mode
>>> run_in_subprocess_with_hash_randomization("_test",
... function_args=("core",),
... function_kwargs={'verbose': True}) 
# Will return 0 if sys.executable supports hash randomization and tests
# pass, 1 if they fail, and False if it does not support hash
# randomization.
modelparameters.sympy.utilities.runtests.setup_pprint()[source]
modelparameters.sympy.utilities.runtests.split_list(l, split, density=None)[source]

Splits a list into part a of b

split should be a string of the form ‘a/b’. For instance, ‘1/3’ would give the split one of three.

If the length of the list is not divisible by the number of splits, the last split will have more items.

density may be specified as a list. If specified, tests will be balanced so that each split has as equal-as-possible amount of mass according to density.

>>> from .runtests import split_list
>>> a = list(range(10))
>>> split_list(a, '1/3')
[0, 1, 2]
>>> split_list(a, '2/3')
[3, 4, 5]
>>> split_list(a, '3/3')
[6, 7, 8, 9]
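
A sketch matching the doctest above (the density balancing option is omitted):

```python
def split_list(l, split):
    # Sketch: return part a of b of list l; the last part absorbs
    # any remainder when len(l) is not divisible by b.
    a, b = map(int, split.split('/'))
    n = len(l) // b
    return l[(a - 1) * n:] if a == b else l[(a - 1) * n:a * n]

print(split_list(list(range(10)), '2/3'))  # [3, 4, 5]
```
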
modelparameters.sympy.utilities.runtests.sympytestfile(filename, module_relative=True, name=None, package=None, globs=None, verbose=None, report=True, optionflags=0, extraglobs=None, raise_on_error=False, parser=<doctest.DocTestParser object>, encoding=None)[source]

Test examples in the given file. Return (#failures, #tests).

Optional keyword arg module_relative specifies how filenames should be interpreted:

  • If module_relative is True (the default), then filename specifies a module-relative path. By default, this path is relative to the calling module’s directory; but if the package argument is specified, then it is relative to that package. To ensure os-independence, filename should use “/” characters to separate path segments, and should not be an absolute path (i.e., it may not begin with “/”).

  • If module_relative is False, then filename specifies an os-specific path. The path may be absolute or relative (to the current working directory).

Optional keyword arg name gives the name of the test; by default use the file’s basename.

Optional keyword argument package is a Python package or the name of a Python package whose directory should be used as the base directory for a module relative filename. If no package is specified, then the calling module’s directory is used as the base directory for module relative filenames. It is an error to specify package if module_relative is False.

Optional keyword arg globs gives a dict to be used as the globals when executing examples; by default, use {}. A copy of this dict is actually used for each docstring, so that each docstring’s examples start with a clean slate.

Optional keyword arg extraglobs gives a dictionary that should be merged into the globals that are used to execute examples. By default, no extra globals are used.

Optional keyword arg verbose prints lots of stuff if true, prints only failures if false; by default, it’s true iff “-v” is in sys.argv.

Optional keyword arg report prints a summary at the end when true, else prints nothing at the end. In verbose mode, the summary is detailed, else very brief (in fact, empty if all tests passed).

Optional keyword arg optionflags or’s together module constants, and defaults to 0. Possible values (see the docs for details):

  • DONT_ACCEPT_TRUE_FOR_1

  • DONT_ACCEPT_BLANKLINE

  • NORMALIZE_WHITESPACE

  • ELLIPSIS

  • SKIP

  • IGNORE_EXCEPTION_DETAIL

  • REPORT_UDIFF

  • REPORT_CDIFF

  • REPORT_NDIFF

  • REPORT_ONLY_FIRST_FAILURE

Optional keyword arg raise_on_error raises an exception on the first unexpected exception or failure. This allows failures to be post-mortem debugged.

Optional keyword arg parser specifies a DocTestParser (or subclass) that should be used to extract tests from the files.

Optional keyword arg encoding specifies an encoding that should be used to convert the file to unicode.

Advanced tomfoolery: testmod runs methods of a local instance of class doctest.Tester, then merges the results into (or creates) global Tester instance doctest.master. Methods of doctest.master can be called directly too, if you want to do something unusual. Passing report=0 to testmod is especially useful then, to delay displaying a summary. Invoke doctest.master.summarize(verbose) when you’re done fiddling.

modelparameters.sympy.utilities.runtests.sys_normcase(f)[source]
modelparameters.sympy.utilities.runtests.test(*paths, **kwargs)[source]

Run tests in the specified test_*.py files.

Tests in a particular test_*.py file are run if any of the given strings in paths matches a part of the test file’s path. If paths=[], tests in all test_*.py files are run.

Notes:

  • If sort=False, tests are run in random order (not default).

  • Paths can be entered in native system format or in unix, forward-slash format.

  • Files that are on the blacklist can be tested by providing their path; they are only excluded if no paths are given.

Explanation of test results:

  Output   Meaning
  ------   -------
  .        passed
  F        failed
  X        XPassed (expected to fail but passed)
  f        XFAILed (expected to fail and indeed failed)
  s        skipped
  w        slow
  T        timeout (e.g., when --timeout is used)
  K        KeyboardInterrupt (when running the slow tests with --slow,
           you can interrupt one of them without killing the test runner)

Colors have no additional meaning and are used just to facilitate interpreting the output.

Examples

>>> import sympy

Run all tests:

>>> sympy.test()    

Run one file:

>>> sympy.test("sympy/core/tests/test_basic.py")    
>>> sympy.test("_basic")    

Run all tests in sympy/functions/ and some particular file:

>>> sympy.test("sympy/core/tests/test_basic.py",
...        "sympy/functions")    

Run all tests in sympy/core and sympy/utilities:

>>> sympy.test("/core", "/util")    

Run specific test from a file:

>>> sympy.test("sympy/core/tests/test_basic.py",
...        kw="test_equality")    

Run specific test from any file:

>>> sympy.test(kw="subs")    

Run the tests with verbose mode on:

>>> sympy.test(verbose=True)    

Don’t sort the test output:

>>> sympy.test(sort=False)    

Turn on post-mortem pdb:

>>> sympy.test(pdb=True)    

Turn off colors:

>>> sympy.test(colors=False)    

Force colors, even when the output is not to a terminal (this is useful, e.g., if you are piping to less -r and you still want colors)

>>> sympy.test(force_colors=True)    

The traceback verboseness can be set to “short” or “no” (default is “short”)

>>> sympy.test(tb='no')    

The split option can be passed to split the test run into parts. The split currently only splits the test files, though this may change in the future. split should be a string of the form ‘a/b’, which will run part a of b. For instance, to run the first half of the test suite:

>>> sympy.test(split='1/2')  

The time_balance option can be passed in conjunction with split. If time_balance=True (the default for sympy.test), sympy will attempt to split the tests such that each split takes equal time. This heuristic for balancing is based on pre-recorded test data.

>>> sympy.test(split='1/2', time_balance=True)  

You can disable running the tests in a separate subprocess using subprocess=False. This is done to support seeding hash randomization, which is enabled by default in the Python versions where it is supported. If subprocess=False, hash randomization is enabled/disabled according to whether it has been enabled or not in the calling Python process. However, even if it is enabled, the seed cannot be printed unless it is called from a new Python process.

Hash randomization was added in the minor Python versions 2.6.8, 2.7.3, 3.1.5, and 3.2.3, and is enabled by default in all Python versions after and including 3.3.0.

If hash randomization is not supported, subprocess=False is used automatically.

>>> sympy.test(subprocess=False)     

To set the hash randomization seed, set the environment variable PYTHONHASHSEED before running the tests. This can be done from within Python using

>>> import os
>>> os.environ['PYTHONHASHSEED'] = '42' 

Or from the command line using

$ PYTHONHASHSEED=42 ./bin/test

If the seed is not set, a random seed will be chosen.

Note that to reproduce the same hash values, you must use both the same seed as well as the same architecture (32-bit vs. 64-bit).

modelparameters.sympy.utilities.source module

This module adds several functions for interactive source code inspection.

modelparameters.sympy.utilities.source.get_class(lookup_view)[source]

Convert a string version of a class name to the object.

For example, get_class(‘sympy.core.Basic’) will return class Basic located in module sympy.core
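
The resolution can be sketched with importlib; a minimal sketch, not the sympy implementation:

```python
import importlib

def get_class(lookup_view):
    # Sketch: split off the class name, import the rest as a module,
    # then fetch the class attribute from it.
    mod_name, cls_name = lookup_view.rsplit('.', 1)
    return getattr(importlib.import_module(mod_name), cls_name)

from collections import OrderedDict
print(get_class('collections.OrderedDict') is OrderedDict)  # True
```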

modelparameters.sympy.utilities.source.get_mod_func(callback)[source]

splits the string path to a class into a string path to the module and the name of the class. For example:

>>> from .source import get_mod_func
>>> get_mod_func('sympy.core.basic.Basic')
('sympy.core.basic', 'Basic')
modelparameters.sympy.utilities.source.source(object)[source]

Prints the source code of a given object.

modelparameters.sympy.utilities.timeutils module

Simple tools for timing functions’ execution, when IPython is not available.

modelparameters.sympy.utilities.timeutils.timed(func, setup='pass', limit=None)[source]

Adaptively measure execution time of a function.
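
The adaptive idea can be sketched with the stdlib timeit module (a sketch of the assumed behavior; the return value of the real function may differ):

```python
import timeit

def timed(stmt, setup='pass', limit=None):
    # Sketch of adaptive timing: grow the repeat count tenfold until
    # the total elapsed time is measurable, or `limit` reps is hit.
    timer = timeit.Timer(stmt, setup=setup)
    number = 1
    while True:
        elapsed = timer.timeit(number)
        if elapsed > 0.2 or (limit is not None and number >= limit):
            break
        number *= 10
    return number, elapsed, elapsed / number  # reps, total, per call

reps, total, per_call = timed('sum(range(100))', limit=1000)
```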

modelparameters.sympy.utilities.timeutils.timethis(name)[source]

Module contents

This module contains some general purpose utilities that are used across SymPy.