Module for compiling codegen output, and wrapping the binary for use in
Python.
Note
To use the autowrap module it must first be imported
>>> from sympy.utilities.autowrap import autowrap
This module provides a common interface for different external backends, such
as f2py, fwrap, Cython, SWIG(?) etc. (Currently only f2py and Cython are
implemented) The goal is to provide access to compiled binaries of acceptable
performance with a one-button user interface, i.e.
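For example (this requires a working backend, e.g. f2py with a Fortran
compiler):
>>> from sympy.abc import x, y
>>> from sympy.utilities.autowrap import autowrap
>>> expr = ((x - y)**(25)).expand()
>>> binary_callable = autowrap(expr)
>>> binary_callable(1, 2)
-1.0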
The callable returned from autowrap() is a binary python function, not a
SymPy object. If it is desired to use the compiled function in symbolic
expressions, it is better to use binary_function() which returns a SymPy
Function object. The binary callable is attached as the _imp_ attribute and
invoked when a numerical evaluation is requested with evalf(), or with
lambdify().
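A sketch of that workflow (again assuming a working compiler backend):
>>> from sympy.abc import x, y
>>> from sympy.utilities.autowrap import binary_function
>>> expr = ((x - y)**(25)).expand()
>>> f = binary_function('f', expr)
>>> 2*f(x, y)
2*f(x, y)
>>> f(x, y).evalf(2, subs={x: 1, y: 2})
-1.0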
The idea is that a SymPy user will primarily be interested in working with
mathematical expressions, and should not have to learn details about wrapping
tools in order to evaluate expressions numerically, even if they are
computationally expensive.
When is this useful?
For computations on large arrays, Python iterations may be too slow,
and depending on the mathematical expression, it may be difficult to
exploit the advanced index operations provided by NumPy.
For really long expressions that will be called repeatedly, the
compiled binary should be significantly faster than SymPy’s .evalf()
If you are generating code with the codegen utility in order to use
it in another project, the automatic python wrappers let you test the
binaries immediately from within SymPy.
To create customized ufuncs for use with numpy arrays.
See ufuncify.
When is this module NOT the best approach?
If you are really concerned about speed or memory optimizations,
you will probably get better results by working directly with the
wrapper tools and the low level code. However, the files generated
by this utility may provide a useful starting point and reference
code. Temporary files will be left intact if you supply the keyword
tempdir="path/to/files/".
If the array computation can be handled easily by numpy, and you
don’t need the binaries for another project.
Generates Python callable binaries based on the math expression.
Parameters:
expr – The SymPy expression that should be wrapped as a binary routine.
language (string, optional) – If supplied, (options: ‘C’ or ‘F95’), specifies the language of the
generated code. If None [default], the language is inferred based
upon the specified backend.
backend (string, optional) – Backend used to wrap the generated code. Either ‘f2py’ [default],
or ‘cython’.
tempdir (string, optional) – Path to directory for temporary files. If this argument is supplied,
the generated code and the wrapper input files are left intact in the
specified path.
args (iterable, optional) – An ordered iterable of symbols. Specifies the argument sequence for the
function.
flags (iterable, optional) – Additional option flags that will be passed to the backend.
verbose (bool, optional) – If True, autowrap will not mute the command line backends. This can be
helpful for debugging.
helpers (iterable, optional) – Used to define auxiliary expressions needed for the main expr. If the
main expression needs to call a specialized function it should be put
in the helpers iterable. Autowrap will then make sure that the
compiled main expression can link to the helper routine. Items should
be tuples with (<function_name>, <sympy_expression>, <arguments>). It
is mandatory to supply an argument sequence to helper routines (a
hedged sketch follows this parameter list).
code_gen (CodeGen instance) – An instance of a CodeGen subclass. Overrides language.
include_dirs ([string]) – A list of directories to search for C/C++ header files (in Unix form
for portability).
library_dirs ([string]) – A list of directories to search for C/C++ libraries at link time.
libraries ([string]) – A list of library names (not filenames or paths) to link against.
extra_compile_args ([string]) – Any extra platform- and compiler-specific information to use when
compiling the source files in ‘sources’. For platforms and compilers
where “command line” makes sense, this is typically a list of
command-line arguments, but for other platforms it could be anything.
extra_link_args ([string]) – Any extra platform- and compiler-specific information to use when
linking object files together to create the extension (or to create a
new static Python interpreter). Similar interpretation as for
‘extra_compile_args’.
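A hedged sketch of the helpers mechanism (the helper name g and both
expressions here are hypothetical; a working compiler backend is
required):
>>> from sympy import sin, Function
>>> from sympy.abc import x, y
>>> from sympy.utilities.autowrap import autowrap
>>> g = Function('g')                    # hypothetical helper name
>>> f = autowrap(g(x) + y,               # main expression calls g
...     helpers=[('g', sin(x)/x, [x])])  # doctest: +SKIP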
Returns a sympy function with expr as binary implementation.
This is a convenience function that automates the steps needed to
autowrap the SymPy expression and attach it to a Function object
with implemented_function().
Parameters:
symfunc (sympy Function) – The function to bind the callable to.
expr (sympy Expression) – The expression used to generate the function.
Generates a binary function that supports broadcasting on numpy arrays.
Parameters:
args (iterable) – Either a Symbol or an iterable of symbols. Specifies the argument
sequence for the function.
expr – A SymPy expression that defines the element-wise operation.
language (string, optional) – If supplied, (options: ‘C’ or ‘F95’), specifies the language of the
generated code. If None [default], the language is inferred based
upon the specified backend.
backend (string, optional) – Backend used to wrap the generated code. Either ‘numpy’ [default],
‘cython’, or ‘f2py’.
tempdir (string, optional) – Path to directory for temporary files. If this argument is supplied,
the generated code and the wrapper input files are left intact in
the specified path.
flags (iterable, optional) – Additional option flags that will be passed to the backend.
verbose (bool, optional) – If True, autowrap will not mute the command line backends. This can
be helpful for debugging.
helpers (iterable, optional) – Used to define auxiliary expressions needed for the main expr. If
the main expression needs to call a specialized function it should
be put in the helpers iterable. Autowrap will then make sure
that the compiled main expression can link to the helper routine.
Items should be tuples with (<function_name>, <sympy_expression>,
<arguments>). It is mandatory to supply an argument sequence to
helper routines.
kwargs (dict) – These kwargs will be passed to autowrap if the f2py or cython
backend is used and ignored if the numpy backend is used.
Note
The default backend (‘numpy’) will create actual instances of
numpy.ufunc. These support n-dimensional broadcasting and implicit type
conversion. Use of the other backends will result in a “ufunc-like”
function, which requires equal-length 1-dimensional arrays for all
arguments, and will not perform any type conversions.
For the ‘f2py’ and ‘cython’ backends, inputs are required to be equal length
1-dimensional arrays. The ‘f2py’ backend will perform type conversion, but
the Cython backend will error if the inputs are not of the expected type.
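For instance, a minimal sketch with the default numpy backend
(compilation requires a working toolchain, hence the skip markers):
>>> from sympy.utilities.autowrap import ufuncify
>>> from sympy.abc import x
>>> import numpy as np
>>> f = ufuncify([x], x**2 + 1)        # doctest: +SKIP
>>> f(np.array([1.0, 2.0, 3.0]))       # doctest: +SKIP
array([  2.,   5.,  10.])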
Module for generating C, C++, Fortran77, Fortran90, Julia, Rust
and Octave/Matlab routines that evaluate sympy expressions.
This module is work in progress.
Only the milestones with a ‘+’ character in the list below have been completed.
— How is sympy.utilities.codegen different from sympy.printing.ccode? —
We considered the idea of extending the printing routines for sympy
functions in such a way that they print complete compilable code, but this
leads to a few insurmountable issues that can only be tackled with a
dedicated code generator:
For C, one needs both a code and a header file, while the printing routines
generate just one string. This code generator can be extended to support
.pyf files for f2py.
SymPy functions are not concerned with programming-technical issues, such
as input, output and input-output arguments. Other examples are contiguous
or non-contiguous arrays, including headers of other libraries such as gsl
or others.
It is highly interesting to evaluate several sympy functions in one C
routine, eventually sharing common intermediate results with the help
of the cse routine. This is more than just printing.
From the programming perspective, expressions with constants should be
evaluated in the code generator as much as possible. This is different
for printing.
— Basic assumptions —
A generic Routine data structure describes the routine that must be
translated into C/Fortran/… code. This data structure covers all
features present in one or more of the supported languages.
Descendants from the CodeGen class transform multiple Routine instances
into compilable code. Each derived class translates into a specific
language.
In many cases, one wants a simple workflow. The friendly functions in the
last part are a simple api on top of the Routine/CodeGen stuff. They are
easier to use, but are less powerful.
— Milestones —
First working version with scalar input arguments, generating C code,
tests
Friendly functions that are easier to use than the rigorous
Routine/CodeGen workflow.
Integer and Real numbers as input and output
Output arguments
InputOutput arguments
Sort input/output arguments properly
Contiguous array arguments (numpy matrices)
Also generate .pyf code for f2py (in autowrap module)
Isolate constants and evaluate them beforehand in double precision
Fortran 90
Octave/Matlab
Common Subexpression Elimination
User defined comments in the generated code
Optional extra include lines for libraries/objects that can eval special
functions
Test other C compilers and libraries: gcc, tcc, libtcc, gcc+gsl, …
Contiguous array arguments (sympy matrices)
Non-contiguous array arguments (sympy matrices)
ccode must raise an error when it encounters something that cannot be
translated into C. ccode(integrate(sin(x)/x, x)) does not make sense.
Complex numbers as input and output
A default complex datatype
Include extra information in the header: date, user, hostname, sha1
hash, …
Creates a Routine object that is appropriate for this language.
This implementation is appropriate for at least C/Fortran. Subclasses
can override this if necessary.
Here, we assume at most one return value (the l-value) which must be
scalar. Additional outputs are OutputArguments (e.g., pointers on
right-hand-side or pass-by-reference). Matrices are always returned
via OutputArguments. If argument_sequence is None, arguments will
be ordered alphabetically, but with all InputArguments first, and then
OutputArgument and InOutArguments.
Writes all the source code files for the given routines.
The generated source is returned as a list of (filename, contents)
tuples, or is written to files (see below). Each filename consists
of the given prefix, appended with an appropriate extension.
Parameters:
routines (list) – A list of Routine instances to be written
prefix (string) – The prefix for the output files
to_files (bool, optional) – When True, the output is written to files. Otherwise, a list
of (filename, contents) tuples is returned. [default: False]
header (bool, optional) – When True, a header comment is included on top of each source
file. [default: True]
empty (bool, optional) – When True, empty lines are included to structure the source
files. [default: True]
The .write() method inherited from CodeGen will output a code file
<prefix>.m.
Octave .m files usually contain one function. That function name should
match the filename (prefix). If you pass multiple name_expr pairs,
the latter ones are presumed to be private functions accessed by the
primary function.
You should only pass inputs to argument_sequence: outputs are ordered
according to their order in name_expr.
Generic description of evaluation routine for set of expressions.
A CodeGen class can translate instances of this class into code in a
particular language. The routine specification covers all the features
present in these languages. The CodeGen part must raise an exception
when certain features are not present in the target language. For
example, multiple return values are possible in Python, but not in C or
Fortran. Another example: Fortran and Python support complex numbers,
while C does not.
Generate source code for expressions in a given language.
Parameters:
name_expr (tuple, or list of tuples) – A single (name, expression) tuple or a list of (name, expression)
tuples. Each tuple corresponds to a routine. If the expression is
an equality (an instance of class Equality) the left hand side is
considered an output argument. If expression is an iterable, then
the routine will have multiple outputs.
language (string) – A string that indicates the source code language. This is case
insensitive. Currently, ‘C’, ‘F95’ and ‘Octave’ are supported.
‘Octave’ generates code compatible with both Octave and Matlab.
prefix (string, optional) – A prefix for the names of the files that contain the source code.
Language-dependent suffixes will be appended. If omitted, the name
of the first name_expr tuple is used.
project (string, optional) – A project name, used for making unique preprocessor instructions.
[default: “project”]
to_files (bool, optional) – When True, the code will be written to one or more files with the
given prefix, otherwise strings with the names and contents of
these files are returned. [default: False]
header (bool, optional) – When True, a header is written on top of each source file.
[default: True]
empty (bool, optional) – When True, empty lines are used to structure the code.
[default: True]
argument_sequence (iterable, optional) – Sequence of arguments for the routine in a preferred order. A
CodeGenError is raised if required arguments are missing.
Redundant arguments are used without warning. If omitted,
arguments will be ordered alphabetically, but with all input
arguments first, and then output or in-out arguments.
global_vars (iterable, optional) – Sequence of global variables used by the routine. Variables
listed here will not show up as function arguments.
standard (string) –
code_gen (CodeGen instance) – An instance of a CodeGen subclass. Overrides language.
If the generated function(s) will be part of a larger project where various
global variables have been defined, the ‘global_vars’ option can be used
to remove the specified variables from the function signature
>>> from sympy.utilities.codegen import codegen
>>> from sympy.abc import x, y, z
>>> [(f_name, f_code), header] = codegen(
...     ("f", x + y*z), "F95", header=False, empty=False,
...     argument_sequence=(x, y), global_vars=(z,))
>>> print(f_code)
REAL*8 function f(x, y)
implicit none
REAL*8, intent(in) :: x
REAL*8, intent(in) :: y
f = x + y*z
end function
A factory that makes an appropriate Routine from an expression.
Parameters:
name (string) – The name of this routine in the generated code.
expr (expression or list/tuple of expressions) – A SymPy expression that the Routine instance will represent. If
given a list or tuple of expressions, the routine will be
considered to have multiple return values and/or output arguments.
argument_sequence (list or tuple, optional) – List arguments for the routine in a preferred order. If omitted,
the results are language dependent, for example, alphabetical order
or in the same order as the given expressions.
global_vars (iterable, optional) – Sequence of global variables used by the routine. Variables
listed here will not show up as function arguments.
language (string, optional) – Specify a target language. The Routine itself should be
language-agnostic but the precise way one is created, error
checking, etc depend on the language. [default: “F95”].
A decision about whether to use output arguments or return values is made
depending on both the language and the particular mathematical
expressions; for an expression of type Equality, the left hand side is
typically made into an OutputArgument (or an InOutArgument if
appropriate).
Append obj’s name to global __all__ variable (call site).
By using this decorator on functions or classes you achieve the same goal
as by filling __all__ variables manually; you just don’t have to repeat
yourself (the object’s name). You also know whether an object is public at
its definition site, not at some random location (where __all__ was set).
Note that in multiple decorator setup (in almost all cases) @public
decorator must be applied before any other decorators, because it relies
on the pointer to object’s global namespace. If you apply other decorators
first, @public may end up modifying the wrong namespace.
Examples
>>> from sympy.utilities.decorator import public

>>> __all__
Traceback (most recent call last):
...
NameError: name '__all__' is not defined
Apply func to sub-elements of an object, including Add.
This decorator is intended to make it uniformly possible to apply a
function to all elements of composite objects, e.g. matrices, lists, tuples
and other iterable containers, or just expressions.
This version of threaded() decorator allows threading over
elements of Add class. If this behavior is not desirable
use xthreaded() decorator.
Functions using this decorator must have the following signature:
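@threaded
def function(expr, *args, **kwargs):
    ...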
Apply func to sub-elements of an object, excluding Add.
This decorator is intended to make it uniformly possible to apply a
function to all elements of composite objects, e.g. matrices, lists, tuples
and other iterable containers, or just expressions.
This version of threaded() decorator disallows threading over
elements of Add class. If this behavior is not desirable
use threaded() decorator.
Functions using this decorator must have the following signature:
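@xthreaded
def function(expr, *args, **kwargs):
    ...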
Has methods to enumerate and count the partitions of a multiset.
This implements a refactored and extended version of Knuth’s algorithm
7.1.2.5M [AOCP].
The enumeration methods of this class are generators and return
data structures which can be interpreted by the same visitor
functions used for the output of multiset_partitions_taocp.
[AOCP]
Algorithm 7.1.2.5M in Volume 4A, Combinatorial Algorithms,
Part 1, of The Art of Computer Programming, by Donald Knuth.
[Factorisatio]
On a Problem of Oppenheim concerning
“Factorisatio Numerorum” E. R. Canfield, Paul Erdos, Carl
Pomerance, JOURNAL OF NUMBER THEORY, Vol. 17, No. 1. August
1983. See section 7 for a description of an algorithm
similar to Knuth’s.
[Yorgey]
Generating Multiset Partitions, Brent Yorgey, The
Monad.Reader, Issue 8, September 2007.
Returns the number of partitions of a multiset whose components
have the multiplicities given in multiplicities.
For larger counts, this method is much faster than calling one
of the enumerators and counting the result. Uses dynamic
programming to cut down on the number of nodes actually
explored. The dictionary used in order to accelerate the
counting process is stored in the MultisetPartitionTraverser
object and persists across calls. If the user does not
expect to call count_partitions for any additional
multisets, the object should be cleared to save memory. On
the other hand, the cache built up from one count run can
significantly speed up subsequent calls to count_partitions,
so it may be advantageous not to clear the object.
If one looks at the workings of Knuth’s algorithm M [AOCP], it
can be viewed as a traversal of a binary tree of parts. A
part has (up to) two children, the left child resulting from
the spread operation, and the right child from the decrement
operation. The ordinary enumeration of multiset partitions is
an in-order traversal of this tree, with the partitions
corresponding to paths from the root to the leaves. The
mapping from paths to partitions is a little complicated,
since the partition would contain only those parts which are
leaves or the parents of a spread link, not those which are
parents of a decrement link.
For counting purposes, it is sufficient to count leaves, and
this can be done with a recursive in-order traversal. The
number of leaves of a subtree rooted at a particular part is a
function only of that part itself, so memoizing has the
potential to speed up the counting dramatically.
This method follows a computational approach which is similar
to the hypothetical memoized recursive function, but with two
differences:
This method is iterative, borrowing its structure from the
other enumerations and maintaining an explicit stack of
parts which are in the process of being counted. (There
may be multisets which can be counted reasonably quickly by
this implementation, but which would overflow the default
Python recursion limit with a recursive implementation.)
Instead of using the part data structure directly, a more
compact key is constructed. This saves space, but more
importantly coalesces some parts which would remain
separate with physical keys.
Unlike the enumeration functions, there is currently no _range
version of count_partitions. If someone wants to stretch
their brain, it should be possible to construct one by
memoizing with a histogram of counts rather than a single
count, and combining the histograms.
Decrements part (a subrange of pstack), if possible, returning
True iff the part was successfully decremented.
If you think of the v values in the part as a multi-digit
integer (least significant digit on the right) this is
basically decrementing that integer, but with the extra
constraint that the leftmost digit cannot be decremented to 0.
Parameters:
part – The part, represented as a list of PartComponent objects,
which is to be decremented.
Decrements part, while respecting size constraint.
A part can have no children which are of sufficient size (as
indicated by lb) unless that part has sufficient
unallocated multiplicity. When enforcing the size constraint,
this method will decrement the part (if necessary) by an
amount needed to ensure sufficient unallocated multiplicity.
Returns True iff the part was successfully decremented.
Parameters:
part – part to be decremented (topmost part on the stack)
amt – Can only take values 0 or 1. A value of 1 means that the
part must be decremented, and then the size constraint is
enforced. A value of 0 means just to enforce the lb
size constraint.
lb – The partitions produced by the calling enumeration must
have more parts than this value.
Decrements part (a subrange of pstack), if possible, returning
True iff the part was successfully decremented.
Parameters:
part – part to be decremented (topmost part on the stack)
ub – the maximum number of parts allowed in a partition
returned by the calling traversal.
lb – The partitions produced by the calling enumeration must
have more parts than this value.
Notes
Combines the constraints of _small and _large decrement
methods. If returns success, part has been decremented at
least once, but perhaps by quite a bit more if needed to meet
the lb constraint.
Decrements part (a subrange of pstack), if possible, returning
True iff the part was successfully decremented.
Parameters:
part – part to be decremented (topmost part on the stack)
ub – the maximum number of parts allowed in a partition
returned by the calling traversal.
Notes
The goal of this modification of the ordinary decrement method
is to fail (meaning that the subtree rooted at this part is to
be skipped) when it can be proved that this part can only have
child partitions which are larger than allowed by ub. If a
decision is made to fail, it must be accurate, otherwise the
enumeration will miss some partitions. But, it is OK not to
capture all the possible failures – if a part is passed that
shouldn’t be, the resulting too-large partitions are filtered
by the enumeration one level up. However, as is usual in
constrained enumerations, failing early is advantageous.
The tests used by this method catch the most common cases,
although this implementation is by no means the last word on
this problem. The tests include:
lpart must be less than ub by at least 2. This is because
once a part has been decremented, the partition
will gain at least one child in the spread step.
If the leading component of the part is about to be
decremented, check for how many parts will be added in
order to use up the unallocated multiplicity in that
leading component, and fail if this number is greater than
allowed by ub. (See code for the exact expression.) This
test is given in the answer to Knuth’s problem 7.2.1.5.69.
If there is exactly enough room to expand the leading
component by the above test, check the next component (if
it exists) once decrementing has finished. If this has
v==0, this next component will push the expansion over the
limit by 1, so fail.
See also multiset_partitions_taocp(), which provides the same result as this method, but is about twice as fast. Hence, enum_all is primarily useful for testing. Also see that function for a discussion of states and visitors.
Enumerate the partitions of a multiset with
lb < num(parts) <= ub.
In particular, if partitions with exactly k parts are
desired, call with (multiplicities, k - 1, k). This
method generalizes enum_all, enum_small, and enum_large.
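For example, a hedged sketch using the module's visitor helper (the
expected output assumes this implementation's traversal order):
>>> from sympy.utilities.enumerative import (MultisetPartitionTraverser,
...     list_visitor)
>>> m = MultisetPartitionTraverser()
>>> states = m.enum_range([2, 2], 1, 2)  # 'aabb' into exactly 2 parts
>>> list(list_visitor(state, 'ab') for state in states)
[[['a', 'a', 'b'], ['b']], [['a', 'a'], ['b', 'b']],
[['a', 'b', 'b'], ['a']], [['a', 'b'], ['a', 'b']]]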
Returns True if a new part has been created, and
adjusts pstack, f and lpart as needed.
Notes
Spreads unallocated multiplicity from the current top part
into a new part created above the current on the stack. This
new part is constrained to be less than or equal to the old in
terms of the part ordering.
This call does nothing (and returns False) if the current top
part has no unallocated multiplicity.
Use with multiset_partitions_taocp to enumerate the ways a
number can be expressed as a product of factors. For this usage,
the exponents of the prime factors of a number are arguments to
the partition enumerator, while the corresponding prime factors
are input here.
Examples
To enumerate the factorings of a number we can think of the elements of the
partition as being the prime factors and the multiplicities as being their
exponents.
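A sketch for 24 = 2**3 * 3 (the ordering of the factorings follows the
traversal order of multiset_partitions_taocp):
>>> from sympy.utilities.enumerative import (factoring_visitor,
...     multiset_partitions_taocp)
>>> from sympy import factorint
>>> primes, multiplicities = zip(*factorint(24).items())
>>> states = multiset_partitions_taocp(multiplicities)
>>> list(factoring_visitor(state, primes) for state in states)
[[24], [8, 3], [12, 2], [4, 6], [4, 2, 3], [6, 2, 2], [2, 2, 2, 3]]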
Parameters:
multiplicities – List of integer multiplicities of the components of the multiset.
Yields:
state – Internal data structure which encodes a particular partition.
This output is then usually processed by a visitor function
which combines the information from this data structure with
the components themselves to produce an actual partition.
Unless they wish to create their own visitor function, users will
have little need to look inside this data structure. But, for
reference, it is a 3-element list with components:
f – a frame array, which is used to divide pstack into parts.
lpart – points to the base of the topmost part.
pstack – an array of PartComponent objects.
The state output offers a peek into the internal data
structures of the enumeration function. The client should
treat this as read-only; any modification of the data
structure will cause unpredictable (and almost certainly
incorrect) results. Also, the components of state are
modified in place at each iteration. Hence, the visitor must
be called at each loop iteration. Accumulating the state
instances and processing them later will not work.
See also multiset_partitions(), which takes a multiset as input and directly yields multiset partitions. It dispatches to a number of functions, including this one, for implementation. Most users will find it more convenient to use than multiset_partitions_taocp.
Helper for MultisetPartitionTraverser.count_partitions that
creates a key for part that only includes information which can
affect the count for that part. (Any irrelevant information just
reduces the effectiveness of dynamic programming.)
Notes
This member function is a candidate for future exploration. There
are likely symmetries that can be exploited to coalesce some
part_key values, and thereby save space and improve
performance.
>>> SymPyDeprecationWarning(... feature="Old feature",... useinstead="new feature",... issue=5241,... deprecated_since_version="1.1")Old feature has been deprecated since SymPy 1.1. Use new featureinstead. See https://github.com/sympy/sympy/issues/5241 for more info.
Every formal deprecation should have an associated issue in the GitHub
issue tracker. All such issues should have the DeprecationRemoval
tag.
Additionally, each formal deprecation should mark the first release for
which it was deprecated. Use the deprecated_since_version flag for
this.
>>> SymPyDeprecationWarning(... feature="Old feature",... useinstead="new feature",... deprecated_since_version="0.7.2",... issue=1065)Old feature has been deprecated since SymPy 0.7.2. Use new featureinstead. See https://github.com/sympy/sympy/issues/1065 for more info.
To provide additional information, create an instance of this
class in this way:
>>> SymPyDeprecationWarning(... feature="Such and such",... last_supported_version="1.2.3",... useinstead="this other feature",... issue=1065,... deprecated_since_version="1.1")Such and such has been deprecated since SymPy 1.1. It will be lastsupported in SymPy version 1.2.3. Use this other feature instead. Seehttps://github.com/sympy/sympy/issues/1065 for more info.
Note that the text in feature begins a sentence, so if it begins with
a plain English word, the first letter of that word should be capitalized.
Either (or both) of the arguments last_supported_version and
useinstead can be omitted. In this case the corresponding sentence
will not be shown:
>>> SymPyDeprecationWarning(feature="Such and such",
...     useinstead="this other feature", issue=1065,
...     deprecated_since_version="1.1")
Such and such has been deprecated since SymPy 1.1. Use this other
feature instead. See https://github.com/sympy/sympy/issues/1065 for
more info.
You can still provide the argument value. If it is a string, it
will be appended to the end of the message:
>>> SymPyDeprecationWarning(... feature="Such and such",... useinstead="this other feature",... value="Contact the developers for further information.",... issue=1065,... deprecated_since_version="1.1")Such and such has been deprecated since SymPy 1.1. Use this otherfeature instead. See https://github.com/sympy/sympy/issues/1065 formore info. Contact the developers for further information.
If, however, the argument value does not hold a string, a string
representation of the object will be appended to the message:
>>> SymPyDeprecationWarning(... feature="Such and such",... useinstead="this other feature",... value=[1,2,3],... issue=1065,... deprecated_since_version="1.1")Such and such has been deprecated since SymPy 1.1. Use this otherfeature instead. See https://github.com/sympy/sympy/issues/1065 formore info. ([1, 2, 3])
Note that it may be necessary to go back through all the deprecations
before a release to make sure that the version number is correct. So just
use what you believe will be the next release number (this usually means
bumping the minor number by one).
To mark a function as deprecated, you can use the decorator
@deprecated.
Return permutations of [0, 1, …, n - 1] such that each permutation
differs from the last by the exchange of a single pair of neighbors.
The n! permutations are returned as an iterator. In order to obtain
the next permutation from a random starting permutation, use the
next_trotterjohnson method of the Permutation class (which generates
the same sequence in a different manner).
This is the sort of permutation used in the ringing of physical bells,
and does not produce permutations in lexicographical order. Rather, the
permutations differ from each other by exactly one inversion, and the
position at which the swapping occurs varies periodically in a simple
fashion. Consider the first few permutations of 4 elements generated
by permutations and generate_bell:
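For instance, the first five of each:
>>> from itertools import permutations
>>> from sympy.utilities.iterables import generate_bell
>>> list(permutations(range(4)))[:5]
[(0, 1, 2, 3), (0, 1, 3, 2), (0, 2, 1, 3), (0, 2, 3, 1), (0, 3, 1, 2)]
>>> list(generate_bell(4))[:5]
[(0, 1, 2, 3), (0, 1, 3, 2), (0, 3, 1, 2), (3, 0, 1, 2), (3, 0, 2, 1)]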
Notice how the 2nd and 3rd lexicographical permutations have 3 elements
out of place whereas each “bell” permutation always has only two
elements out of place relative to the previous permutation (and so the
signature (+/-1) of a permutation is opposite of the signature of the
previous permutation).
How the position of inversion varies across the elements can be seen
by tracing out where the largest number appears in the permutations:
>>> from sympy import zeros, Matrix
>>> m = zeros(4, 24)
>>> for i, p in enumerate(generate_bell(4)):
...     m[:, i] = Matrix([j - 3 for j in list(p)])  # make largest zero
>>> m.print_nonzero('X')
[XXX  XXXXXX  XXXXXX  XXX]
[XX XX XXXX XX XXXX XX XX]
[X XXXX XX XXXX XX XXXX X]
[ XXXXXX  XXXXXX  XXXXXX ]
An involution is a permutation that when multiplied
by itself equals the identity permutation. In this
implementation the involutions are generated using
Fixed Points.
Alternatively, an involution can be considered as
a permutation that does not contain any cycles with
a length that is greater than two.
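For example (the involution counts run 1, 2, 4, 10, 26, ...):
>>> from sympy.utilities.iterables import generate_involutions
>>> list(generate_involutions(3))
[(0, 1, 2), (0, 2, 1), (1, 0, 2), (2, 1, 0)]
>>> len(list(generate_involutions(4)))
10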
An oriented graph is a directed graph having no symmetric pair of directed
edges. A forest is an acyclic graph, i.e., it has no cycles. A forest can
also be described as a disjoint union of trees, which are graphs in which
any two vertices are connected by exactly one simple path.
Return a list of length bits corresponding to the binary value
of n with small bits to the right (last). If bits is omitted, the
length will be the number required to represent n. If the bits are
desired in reversed order, use the [::-1] slice of the returned list.
If a sequence of all bits-length lists starting from [0, 0, …, 0]
through [1, 1, …, 1] is desired, pass a non-integer for bits, e.g.
‘all’.
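For instance:
>>> from sympy.utilities.iterables import ibin
>>> ibin(2)
[1, 0]
>>> ibin(2, 4)
[0, 0, 1, 0]
>>> ibin(2, 4)[::-1]
[0, 1, 0, 0]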
The ordered flag is either None (to give the simple partition
of the elements) or a 2-digit integer indicating whether the order of
the bins and the order of the items in the bins matter. Given:
Return a tuple where the smallest element appears first; if
directed is True (default) then the order is preserved, otherwise
the sequence will be reversed if that gives a smaller ordering.
If every element appears only once then is_set can be set to True
for more efficient processing.
If the smallest element is known at the time of calling, it can be
passed and the calculation of the smallest element will be omitted.
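For example:
>>> from sympy.utilities.iterables import minlex
>>> minlex((1, 2, 0))
(0, 1, 2)
>>> minlex((1, 0, 2))
(0, 2, 1)
>>> minlex((1, 0, 2), directed=False)
(0, 1, 2)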
Return unique partitions of the given multiset (in list form).
If m is None, all multisets will be returned, otherwise only
partitions with m parts will be returned.
If multiset is an integer, a range [0, 1, …, multiset - 1]
will be supplied.
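For instance:
>>> from sympy.utilities.iterables import multiset_partitions
>>> list(multiset_partitions([1, 1, 2], 2))
[[[1, 1], [2]], [[1, 2], [1]]]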
These comments on counting apply to sets, not multisets.
Notes
When all the elements are the same in the multiset, the order
of the returned partitions is determined by the partitions
routine. If one is counting partitions then it is better to use
the nT function.
A routine to generate necklaces that may (free=True) or may not
(free=False) be turned over to be viewed. The “necklaces” returned
are comprised of n integers (beads) with k different
values (colors). Only unique necklaces are returned.
The “unrestricted necklace” is sometimes also referred to as a
“bracelet” (an object that can be turned over, a sequence that can
be reversed) and the term “necklace” is used to imply a sequence
that cannot be reversed. So ACB == ABC for a bracelet (rotate and
reverse) while the two are different for a necklace since rotation
alone cannot make the two sequences the same.
(mnemonic: Bracelets can be viewed Backwards, but Not Necklaces.)
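A sketch (binary necklaces of length 3, mapped to letters for
readability):
>>> from sympy.utilities.iterables import necklaces
>>> def show(s, i):
...     return ''.join(s[j] for j in i)
>>> for i in necklaces(3, 2):
...     print(show('ab', i))
aaa
aab
abb
bbb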
m (integer, optional) – The default gives partitions of all sizes; otherwise only those
with size m are returned. In addition, if m is not None then
partitions are generated in place (see examples).
sort (bool, default True) – Controls whether partitions are returned in sorted order when m is
not None; when False, the partitions are returned as fast as
possible with elements sorted, but when m|n the partitions will not
be in ascending lexicographical order.
When m is given, a given list objects will be used more than
once for speed reasons so you will not see the correct partitions
unless you make a copy of each as it is generated:
When n is a multiple of m, the elements are still sorted
but the partitions themselves will be unordered if sort is False;
the default is to return them in ascending lexicographical order.
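A sketch of that caveat:
>>> from sympy.utilities.iterables import ordered_partitions
>>> [p for p in ordered_partitions(4, 2)]     # same list reused: misleading
[[2, 2], [2, 2]]
>>> [p[:] for p in ordered_partitions(4, 2)]  # copy each partition
[[1, 3], [2, 2]]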
m (integer, optional) – limits the number of parts in the partition (mnemonic: m, maximum
parts); the default gives partitions of all sizes.
k (integer, optional) – limits the numbers that are kept in the partition (mnemonic: k,
keys); the default allows numbers from 1 through n.
size (bool, optional) – when True, (M, P) is returned where M is the sum of the
multiplicities and P is the generated partition; by default [False]
only the partition is returned.
Each partition is represented as a dictionary, mapping an integer to the
number of copies of that integer in the partition. For example, the first
partition of 4 returned is {4: 1}, “4: one of them”.
Examples
>>> from sympy.utilities.iterables import partitions
The numbers appearing in the partition (the key of the returned dict)
are limited with k:
The maximum number of parts in the partition (the sum of the values in
the returned dict) are limited with m (default value, None, gives
partitions from 1 through n):
Note that the _same_ dictionary object is returned each time.
This is for speed: generating each partition goes quickly,
taking constant time, independent of n.
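A sketch of the copy caveat (parts limited to k=2 as described above):
>>> [p.copy() for p in partitions(4, k=2)]
[{2: 2}, {1: 2, 2: 1}, {1: 4}]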
This generator recursively yields nodes that it has visited in a postorder
fashion. That is, it descends through the tree depth-first to yield all of
a node’s children’s postorder traversal before yielding the node itself.
Parameters:
node (sympy expression) – The expression to traverse.
keys ((default None) sort key(s)) – The key(s) used to sort args of Basic objects. When None, args of Basic
objects are processed in arbitrary order. If key is defined, it will
be passed along to ordered() as the only key(s) to use to sort the
arguments; if key is simply True then the default keys of
ordered will be used (node count and default_sort_key).
Yields:
subtree (sympy expression) – All of the subtrees in the tree.
The nodes are returned in the order that they are encountered unless key
is given; simply passing key=True will guarantee that the traversal is
unique.
>>> from sympy.utilities.iterables import postorder_traversal
>>> from sympy.abc import w, x, y, z
>>> list(postorder_traversal(w + (x + y)*z))
[z, y, x, x + y, z*(x + y), w, w + z*(x + y)]
>>> list(postorder_traversal(w + (x + y)*z, keys=True))
[w, z, x, y, x + y, z*(x + y), w + z*(x + y)]
Group the sequence into lists in which successive elements
all compare the same with the comparison operator, op:
op(seq[i + 1], seq[i]) is True for all elements in a run.
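A hedged sketch (the default op gives strictly increasing runs; ge is
shown for comparison):
>>> from sympy.utilities.iterables import runs
>>> from operator import ge
>>> runs([0, 1, 2, 2, 1, 4, 3, 2, 2])
[[0, 1, 2], [2], [1, 4], [3], [2], [2]]
>>> runs([0, 1, 2, 2, 1, 4, 3, 2, 2], op=ge)
[[0, 1, 2, 2], [1, 4], [3], [2, 2]]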
Generates all k-subsets (combinations) from an n-element set, seq.
A k-subset of an n-element set is any subset of length exactly k. The
number of k-subsets of an n-element set is given by binomial(n, k),
whereas there are 2**n subsets all together. If k is None then all
2**n subsets will be returned from shortest to longest.
Examples
>>> from sympy.utilities.iterables import subsets
subsets(seq, k) will return the n!/k!/(n - k)! k-subsets (combinations)
without repetition, i.e. once an item has been removed, it can no
longer be “taken”:
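For instance:
>>> list(subsets([1, 2], 2))
[(1, 2)]
>>> list(subsets([1, 2]))
[(), (1,), (2,), (1, 2)]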
graph (tuple[list, list[tuple[T, T]]]) – A tuple consisting of a list of vertices and a list of edges of
a graph to be sorted topologically.
key (callable[T] (optional)) – Ordering key for vertices on the same level. By default the natural
(e.g. lexicographic) ordering is used (in this case the base type
must implement ordering relations).
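For instance, a small sketch (the edges force the unique order [1, 2, 3]):
>>> from sympy.utilities.iterables import topological_sort
>>> graph = ([1, 2, 3], [(1, 2), (1, 3), (2, 3)])
>>> topological_sort(graph)
[1, 2, 3]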
This module provides convenient functions to transform sympy expressions to
lambda functions which can be used to calculate numerical values very fast.
symfunc can be an UndefinedFunction instance, or a name string.
In the latter case we create an UndefinedFunction instance with that
name.
Be aware that this is a quick workaround, not a general method to create
special symbolic functions. If you want to create a symbolic function to be
used by all the machinery of SymPy you should subclass the Function
class.
Parameters:
symfunc (str or UndefinedFunction instance) – If str, then create new UndefinedFunction with this as
name. If symfunc is a sympy function, attach implementation to it.
implementation (callable) – numerical implementation to be called by evalf() or lambdify
Although tuples may not appear as arguments to lambda in Python 3,
lambdastr will create a lambda function that will unpack the original
arguments so that nested arguments can be handled:
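A hedged sketch of the same unpacking, shown via lambdify (the exact
source string produced by lambdastr varies between versions, so it is
omitted here):
>>> from sympy import lambdify
>>> from sympy.abc import x, y, z
>>> f = lambdify((x, (y, z)), x + y + z)
>>> f(1, (2, 3))
6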
Returns a lambda function for fast calculation of numerical values.
If not specified differently by the user, modules defaults to
["numpy"] if NumPy is installed, and ["math", "mpmath", "sympy"]
if it isn’t; that is, SymPy functions are replaced as far as possible
by numpy functions if available, or otherwise by functions from
Python’s standard library math and from mpmath. To change this behavior, the
“modules” argument can be used. It accepts:
the strings “math”, “mpmath”, “numpy”, “numexpr”, “sympy”, “tensorflow”
any modules (e.g. math)
dictionaries that map names of sympy functions to arbitrary functions
lists that contain a mix of the arguments above, with higher priority
given to entries appearing first.
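For instance, forcing the math module:
>>> from sympy import lambdify, sin
>>> from sympy.abc import x
>>> f = lambdify(x, sin(x), "math")
>>> f(0.0)
0.0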
Warning
Note that this function uses eval, and thus shouldn’t be used on
unsanitized input.
The default behavior is to substitute all arguments in the provided
expression with dummy symbols. This allows for applied functions (e.g.
f(t)) to be supplied as arguments. Call the function with dummify=False if
dummy substitution is unwanted (and args is not a string). If you want
to view the lambdified function or provide “sympy” as the module, you
should probably set dummify=False.
For functions involving large array calculations, numexpr can provide a
significant speedup over numpy. Please note that the available functions
for numexpr are more limited than numpy but can be expanded with
implemented_function and user defined subclasses of Function. If specified,
numexpr may be the only option in modules. The official list of numexpr
functions can be found at:
https://github.com/pydata/numexpr#supported-functions
In previous releases lambdify replaced Matrix with numpy.matrix
by default. As of release 1.0 numpy.array is the default.
To get the old default behavior you must pass in
[{'ImmutableDenseMatrix': numpy.matrix}, 'numpy'] to the modules kwarg.
Attention: There are naming differences between numpy and sympy. So if
you simply take the numpy module, e.g. sympy.atan will not be
translated to numpy.arctan. Use the modified module instead
by passing the string “numpy”:
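For example (numpy assumed to be installed):
>>> from sympy import lambdify, atan
>>> from sympy.abc import x
>>> f = lambdify(x, atan(x), "numpy")  # atan is translated to numpy.arctan
>>> f(1.0)
0.7853981633974483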
Functions present in expr can also carry their own numerical
implementations, in a callable attached to the _imp_
attribute. Usually you attach this using the
implemented_function factory:
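For example:
>>> from sympy.abc import x
>>> from sympy.utilities.lambdify import lambdify, implemented_function
>>> f = implemented_function('f', lambda x: x + 1)
>>> func = lambdify(x, f(x))
>>> func(4)
5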
lambdify always prefers _imp_ implementations to implementations
in other namespaces, unless the use_imps input parameter is False.
Usage with Tensorflow module:
>>> import tensorflow as tf
>>> f = Max(x, sin(x))
>>> func = lambdify(x, f, 'tensorflow')
>>> result = func(tf.constant(1.0))
>>> result  # a tf.Tensor representing the result of the calculation
<tf.Tensor 'Maximum:0' shape=() dtype=float32>
>>> sess = tf.Session()
>>> sess.run(result)  # compute result
1.0
>>> var = tf.Variable(1.0)
>>> sess.run(tf.global_variables_initializer())
>>> sess.run(func(var))  # also works for tf.Variable and tf.Placeholder
1.0
>>> tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # works with any shape tensor
>>> sess.run(func(tensor))
array([[ 1.,  2.],
       [ 3.,  4.]], dtype=float32)
Strips leading and trailing empty lines from a copy of s, then dedents,
fills and returns it.
Empty line stripping serves to deal with docstrings like this one that
start with a newline after the initial triple quote, inserting an empty
line at the beginning of the string.
Try to find ‘executable’ in the directories listed in ‘path’ (a
string listing directories separated by ‘os.pathsep’; defaults to
os.environ[‘PATH’]). Returns the complete filename or None if not
found.
Return a cut-and-pastable string that, when printed, is equivalent
to the input. The string returned is formatted so it can be indented
nicely within tests; in some cases it is wrapped in the dedent
function which has to be imported from textwrap.
Examples
Note: because there are characters in the examples below that need
to be escaped because they are themselves within a triple quoted
docstring, expressions below look more complicated than they would
be if they were printed in an interpreter window.
>>> from sympy.utilities.misc import rawlines
>>> from sympy import TableForm
>>> s = str(TableForm([[1, 10]], headings=(None, ['a', 'bee'])))
>>> print(rawlines(s))
(
    'a bee\n'
    '-----\n'
    '1 10 '
)
>>> print(rawlines('''this
... that'''))
dedent('''\
    this
    that''')
>>> print(rawlines('''this
... that
... '''))
dedent('''\
    this
    that
    ''')
>>> s = """this
... is a triple '''
... """
>>> print(rawlines(s))
dedent("""\
    this
    is a triple '''
    """)
Return string with all keys in reps replaced with
their corresponding values, longer strings first, irrespective
of the order they are given. reps may be passed as tuples
or a single mapping.
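A sketch of the tuple form:
>>> from sympy.utilities.misc import replace
>>> replace('abc', ('a', 'A'), ('b', 'B'))
'ABc'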
There is no guarantee that a unique answer will be
obtained if keys in a mapping overlap (i.e. are the same
length and have some identical sequence at the
beginning/end):
translate(s, map [, deletechars]):
all characters in deletechars (if provided) are deleted
then the replacements defined by map are made; if the keys
of map are strings then the longer ones are handled first.
Multicharacter deletions should have a value of ‘’.
translate(s, oldchars, newchars, deletechars)
all characters in deletechars are deleted
then each character in oldchars is replaced with the
corresponding character in newchars
There is no guarantee that a unique answer will be
obtained if keys in a mapping overlap (i.e. are the same
length and have some identical sequences at the
beginning/end):
When a __loader__ is present on the module given by __name__, it will defer
getResource to its get_data implementation and return it as a file-like
object (such as StringIO).
Acquire a readable object for a given package name and identifier.
An IOError will be raised if the resource can not be found.
For example:
mydata = get_resource('mypkgdata.jpg').read()
Note that the package name must be fully qualified, if given, such
that it would be found in sys.modules.
In some cases, getResource will return a real file object. In that
case, it may be useful to use its name attribute to get the path
rather than use it as a file-like object. For example, you may
be handing data off to a C API.
Tests that code raises the exception expectedException.
code may be a callable, such as a lambda expression or function
name.
If code is not given or None, raises will return a context
manager for use in with statements; the code to execute then
comes from the scope of the with.
raises() does nothing if the callable raises the expected exception,
otherwise it raises an AssertionError.
Examples
>>> from sympy.utilities.pytest import raises
>>> raises(ZeroDivisionError, lambda: 1/0)
>>> raises(ZeroDivisionError, lambda: 1/2)
Traceback (most recent call last):
...
AssertionError: DID NOT RAISE
>>> with raises(ZeroDivisionError):
...     n = 1/0
>>> with raises(ZeroDivisionError):
...     n = 1/2
Traceback (most recent call last):
...
AssertionError: DID NOT RAISE
Note that you cannot test multiple statements via
with raises:
>>> with raises(ZeroDivisionError):
...     n = 1/0    # will execute and raise, aborting the ``with``
...     n = 9999/0 # never executed
This is just what with is supposed to do: abort the
contained statement sequence at the first exception and let
the context manager deal with the exception.
To test multiple statements, you’ll need a separate with
for each:
>>> with raises(ZeroDivisionError):
...     n = 1/0    # will execute and raise
>>> with raises(ZeroDivisionError):
...     n = 9999/0 # will also execute and raise
Test numerically that the symbolically computed derivative of f
with respect to z is correct.
This routine does not test whether there are Floats present with
precision higher than 15 digits so if there are, your results may
not be what you expect due to round-off errors.
Test numerically that f and g agree when evaluated in the argument z.
If z is None, all symbols will be tested. This routine does not test
whether there are Floats present with precision higher than 15 digits
so if there are, your results may not be what you expect due to round-
off errors.
A class used to extract the DocTests that are relevant to a given
object, from its docstring and the docstrings of its contained
objects. Doctests can currently be extracted from the following
object types: modules, functions, classes, methods, staticmethods,
classmethods, and properties.
Modified from doctest’s version by looking harder for code in the
case that it looks like the code comes from a different module.
In the case of decorated functions (e.g. @vectorize) they appear
to come from a different module (e.g. multidimensional) even though
their code is not there.
A class used to run DocTest test cases, and accumulate statistics.
The run method is used to process a single DocTest case. It
returns a tuple (f,t), where t is the number of test cases
tried, and f is the number of test cases that failed.
Modified from the doctest version to not reset the sys.displayhook (see
issue 5140).
See the docstring of the original DocTestRunner for more information.
Run the examples in test, and display the results using the
writer function out.
The examples are run in the namespace test.globs. If
clear_globs is true (the default), then this namespace will
be cleared after the test runs, to help with garbage
collection. If you would like to examine the namespace after
the test completes, then use clear_globs=False.
compileflags gives the set of flags that should be used by
the Python compiler when running the examples. If not
specified, then it will default to the set of future-import
flags that apply to globs.
The output of each example is checked using
SymPyDocTestRunner.check_output, and the results are
formatted by the SymPyDocTestRunner.report_* methods.
Returns the list of *.py files (by default) at or below directory dir
from which docstrings will be tested. By default, only those that have
an __init__.py in their parent directory and do not start with test_
will be included.
Compared to the OutputChecker from the stdlib, our OutputChecker class
supports numerical comparison of floats occurring in the output of the
doctest examples.
Return True iff the actual output from an example (got)
matches the expected output (want). These strings are
always considered to match if they are identical; but
depending on what option flags the test runner is using,
several non-exact match types are also possible. See the
documentation for TestRunner for more information about
option flags.
Run all tests in sympy/functions/ and some particular file:
>>> sympy.doctest("/functions","basic.py")
Run any file having polynomial in its name, doc/src/modules/polynomial.rst,
sympy/functions/special/polynomials.py, and sympy/polys/polynomial.py:
>>> sympy.doctest("polynomial")
The split option can be passed to split the test run into parts. The
split currently only splits the test files, though this may change in the
future. split should be a string of the form ‘a/b’, which will run
part a of b. Note that the regular doctests and the Sphinx
doctests are split independently. For instance, to run the first half of
the test suite:
>>> sympy.doctest(split='1/2')
The subprocess and verbose options are the same as with the function
test(). See the docstring of that function for more information.
Right now, this runs the regular tests (bin/test), the doctests
(bin/doctest), the examples (examples/all.py), and the sage tests (see
sympy/external/tests/test_sage.py).
This is what setup.py test uses.
You can pass arguments and keyword arguments to the test functions that
support them (for now, test, doctest, and the examples). See the
docstrings of those functions for a description of the available options.
For example, to run the solvers tests with colors turned off:
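A hedged sketch (the entry point run_all_tests and its keyword names
are assumed from sympy.utilities.runtests; skipped since it launches the
full suite):
>>> from sympy.utilities.runtests import run_all_tests  # name assumed
>>> run_all_tests(test_args=("solvers",),
...     test_kwargs={"colors": False})  # doctest: +SKIP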
Run a function in a Python subprocess with hash randomization enabled.
If hash randomization is not supported by the version of Python given, it
returns False. Otherwise, it returns the exit value of the command. The
function is passed to sys.exit(), so the return value of the function
becomes the exit value of the subprocess.
The environment variable PYTHONHASHSEED is used to seed Python’s hash
randomization. If it is set, this function will return False, because
starting a new subprocess is unnecessary in that case. If it is not set,
one is set at random, and the tests are run. Note that if this
environment variable is set when Python starts, hash randomization is
automatically enabled. To force a subprocess to be created even if
PYTHONHASHSEED is set, pass force=True. This flag will not force a
subprocess in Python versions that do not support hash randomization (see
below), because those versions of Python do not support the -R flag.
function should be a string name of a function that is importable from
the module module, like “_test”. The default for module is
“sympy.utilities.runtests”. function_args and function_kwargs
should be a repr-able tuple and dict, respectively. The default Python
command is sys.executable, which is the currently running Python command.
This function is necessary because the seed for hash randomization must be
set by the environment variable before Python starts. Hence, in order to
use a predetermined seed for tests, we must start Python in a separate
subprocess.
Hash randomization was added in the minor Python versions 2.6.8, 2.7.3,
3.1.5, and 3.2.3, and is enabled by default in all Python versions after
and including 3.3.0.
Examples
>>> from sympy.utilities.runtests import (
...     run_in_subprocess_with_hash_randomization)
>>> # run the core tests in verbose mode
>>> run_in_subprocess_with_hash_randomization("_test",
...     function_args=("core",),
...     function_kwargs={'verbose': True})
# Will return 0 if sys.executable supports hash randomization and tests
# pass, 1 if they fail, and False if it does not support hash
# randomization.
split should be a string of the form ‘a/b’. For instance, ‘1/3’ would give
the first split of three.
If the length of the list is not divisible by the number of splits, the
last split will have more items.
density may be specified as a list. If specified,
tests will be balanced so that each split has as equal-as-possible
amount of mass according to density.
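For instance:
>>> from sympy.utilities.runtests import split_list
>>> a = list(range(10))
>>> split_list(a, '1/3')
[0, 1, 2]
>>> split_list(a, '2/3')
[3, 4, 5]
>>> split_list(a, '3/3')
[6, 7, 8, 9]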
Test examples in the given file. Return (#failures, #tests).
Optional keyword arg module_relative specifies how filenames
should be interpreted:
If module_relative is True (the default), then filename
specifies a module-relative path. By default, this path is
relative to the calling module’s directory; but if the
package argument is specified, then it is relative to that
package. To ensure os-independence, filename should use
“/” characters to separate path segments, and should not
be an absolute path (i.e., it may not begin with “/”).
If module_relative is False, then filename specifies an
os-specific path. The path may be absolute or relative (to
the current working directory).
Optional keyword arg name gives the name of the test; by default
use the file’s basename.
Optional keyword argument package is a Python package or the
name of a Python package whose directory should be used as the
base directory for a module relative filename. If no package is
specified, then the calling module’s directory is used as the base
directory for module relative filenames. It is an error to
specify package if module_relative is False.
Optional keyword arg globs gives a dict to be used as the globals
when executing examples; by default, use {}. A copy of this dict
is actually used for each docstring, so that each docstring’s
examples start with a clean slate.
Optional keyword arg extraglobs gives a dictionary that should be
merged into the globals that are used to execute examples. By
default, no extra globals are used.
Optional keyword arg verbose prints lots of stuff if true, prints
only failures if false; by default, it’s true iff “-v” is in sys.argv.
Optional keyword arg report prints a summary at the end when true,
else prints nothing at the end. In verbose mode, the summary is
detailed, else very brief (in fact, empty if all tests passed).
Optional keyword arg optionflags or’s together module constants,
and defaults to 0. Possible values (see the docs for details):
DONT_ACCEPT_TRUE_FOR_1
DONT_ACCEPT_BLANKLINE
NORMALIZE_WHITESPACE
ELLIPSIS
SKIP
IGNORE_EXCEPTION_DETAIL
REPORT_UDIFF
REPORT_CDIFF
REPORT_NDIFF
REPORT_ONLY_FIRST_FAILURE
Optional keyword arg raise_on_error raises an exception on the
first unexpected exception or failure. This allows failures to be
post-mortem debugged.
Optional keyword arg parser specifies a DocTestParser (or
subclass) that should be used to extract tests from the files.
Optional keyword arg encoding specifies an encoding that should
be used to convert the file to unicode.
Advanced tomfoolery: testmod runs methods of a local instance of
class doctest.Tester, then merges the results into (or creates)
global Tester instance doctest.master. Methods of doctest.master
can be called directly too, if you want to do something unusual.
Passing report=0 to testmod is especially useful then, to delay
displaying a summary. Invoke doctest.master.summarize(verbose)
when you’re done fiddling.
Tests in a particular test_*.py file are run if any of the given strings
in paths matches a part of the test file’s path. If paths=[],
tests in all test_*.py files are run.
Notes:
If sort=False, tests are run in random order (not default).
Paths can be entered in native system format or in unix,
forward-slash format.
Files that are on the blacklist can be tested by providing
their path; they are only excluded if no paths are given.
Explanation of test results:

Output  Meaning
======  =======
.       passed
F       failed
X       XPassed (expected to fail but passed)
f       XFAILed (expected to fail and indeed failed)
s       skipped
w       slow
T       timeout (e.g., when --timeout is used)
K       KeyboardInterrupt (when running the slow tests with --slow,
        you can interrupt one of them without killing the test runner)
Colors have no additional meaning and are used just to facilitate
interpreting the output.
Force colors, even when the output is not to a terminal (this is useful,
e.g., if you are piping to less -r and you still want colors):
>>> sympy.test(force_colors=True)
The traceback verboseness can be set to “short” or “no” (default is
“short”)
>>> sympy.test(tb='no')
The split option can be passed to split the test run into parts. The
split currently only splits the test files, though this may change in the
future. split should be a string of the form ‘a/b’, which will run
part a of b. For instance, to run the first half of the test suite:
>>> sympy.test(split='1/2')
The time_balance option can be passed in conjunction with split.
If time_balance=True (the default for sympy.test), sympy will attempt
to split the tests such that each split takes equal time. This heuristic
for balancing is based on pre-recorded test data.
>>> sympy.test(split='1/2', time_balance=True)
You can disable running the tests in a separate subprocess using
subprocess=False. This is done to support seeding hash randomization,
which is enabled by default in the Python versions where it is supported.
If subprocess=False, hash randomization is enabled/disabled according to
whether it has been enabled or not in the calling Python process.
However, even if it is enabled, the seed cannot be printed unless it is
called from a new Python process.
Hash randomization was added in the minor Python versions 2.6.8, 2.7.3,
3.1.5, and 3.2.3, and is enabled by default in all Python versions after
and including 3.3.0.
If hash randomization is not supported subprocess=False is used
automatically.
>>> sympy.test(subprocess=False)
To set the hash randomization seed, set the environment variable
PYTHONHASHSEED before running the tests. This can be done from within
Python using
>>> import os
>>> os.environ['PYTHONHASHSEED'] = '42'
Or from the command line using
$ PYTHONHASHSEED=42 ./bin/test
If the seed is not set, a random seed will be chosen.
Note that to reproduce the same hash values, you must use both the same seed
as well as the same architecture (32-bit vs. 64-bit).