Using FiPy¶
This document explains how to use FiPy in a practical sense. To see the problems that FiPy is capable of solving, you can run any of the scripts in the examples.
Note
We strongly recommend you proceed through the examples, but at the very least work through examples.diffusion.mesh1D to understand the notation and basic concepts of FiPy.
We exclusively use either the UNIX command line or IPython to interact with FiPy. The commands in the examples are written with the assumption that they will be executed from the command line. For instance, from within the main FiPy directory, you can type:
$ python examples/diffusion/mesh1D.py
A viewer should appear and you should be prompted through a series of examples.
In order to customize the examples, or to develop your own scripts, some knowledge of Python syntax is required. We recommend you familiarize yourself with the excellent Python tutorial [11] or with Dive Into Python [12]. Deeper insight into Python can be obtained from [13].
As you gain experience, you may want to browse through the Commandline Flags and Environment Variables that affect FiPy.
Testing FiPy¶
For a general installation, FiPy can be tested by running:
$ python -c "import fipy; fipy.test()"
This command runs all the test cases in FiPy’s modules, but doesn’t include any of the tests in FiPy’s examples. To run the test cases in both modules and examples, use:
$ python setup.py test
in an unpacked FiPy archive. The test suite can be run with a number of different configurations depending on which solver suite is available and other factors. See Commandline Flags and Environment Variables for more details.
Note
You may need to first run:
$ python setup.py egg_info
for this to work properly.
FiPy will skip tests that depend on Optional Packages that have not been installed. For example, if Mayavi and Gmsh are not installed, FiPy will warn something like:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Skipped 131 doctest examples because `gmsh` cannot be found on the $PATH
Skipped 42 doctest examples because the `tvtk` package cannot be imported
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Although the test suite may show warnings, there should be no other errors. Any errors should be investigated or reported on the issue tracker. Users can see if there are any known problems for the latest FiPy distribution by checking FiPy’s automated test display.
Below are a number of common Commandline Flags for testing various FiPy configurations.
Parallel Tests¶
If FiPy is configured for Solving in Parallel, you can run the tests on multiple processor cores with:
$ mpirun -np {# of processors} python setup.py test --trilinos
or:
$ mpirun -np {# of processors} python -c "import fipy; fipy.test('trilinos')"
Commandline Flags and Environment Variables¶
FiPy chooses a default run time configuration based on the available packages on the system. The Commandline Flags and Environment Variables sections below describe how to override FiPy’s default behavior.
Commandline Flags¶
You can add any of the following case-insensitive flags after the name of a script you call from the command line, e.g.:
$ python myFiPyScript --someflag

--inline¶
Causes many mathematical operations to be performed in C, rather than Python, for improved performance. Requires the weave package.
The following flags take precedence over the FIPY_SOLVERS environment variable:

--skfmm¶
Forces the use of the Scikit-fmm level set solver.
Environment Variables¶
You can set any of the following environment variables in the manner appropriate for your shell. If you are not running in a shell (e.g., you are invoking FiPy scripts from within IPython or IDLE), you can set these variables via the os.environ dictionary, but you must do so before importing anything from the fipy package.
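When setting these variables from within Python rather than a shell, the assignment must come before the first fipy import. A minimal sketch (the particular variable values here are just illustrative choices):

```python
import os

# Set FiPy environment variables *before* importing anything from fipy.
os.environ["FIPY_SOLVERS"] = "scipy"   # e.g., force the SciPy solver suite
os.environ["OMP_NUM_THREADS"] = "1"    # restrict OpenMP threading

# Only now is it safe to import FiPy, e.g.:
# from fipy import CellVariable, Grid1D
```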

FIPY_DISPLAY_MATRIX¶
If present, causes the graphical display of the solution matrix of each equation at each call of solve() or sweep(). Setting the value to “terms” causes the display of the matrix for each Term that composes the equation. Requires the Matplotlib package. Setting the value to “print” causes the matrix to be printed to the console.

FIPY_INLINE¶
If present, causes many mathematical operations to be performed in C, rather than Python. Requires the weave package.

FIPY_INLINE_COMMENT¶
If present, causes the addition of a comment showing the Python context that produced a particular piece of weave C code. Useful for debugging.

FIPY_SOLVERS¶
Forces the use of the specified suite of linear solvers. Valid (case-insensitive) choices are “pysparse”, “trilinos”, “no-pysparse”, “scipy” and “pyamg”.

FIPY_VERBOSE_SOLVER¶
If present, causes the linear solvers to print a variety of diagnostic information.

FIPY_VIEWER¶
Forces the use of the specified viewer. Valid values are any <viewer> from the fipy.viewers.<viewer>Viewer modules. The special value of dummy will allow the script to run without displaying anything.
Solving in Parallel¶
FiPy can use PETSc or Trilinos to solve equations in parallel. Most mesh classes in fipy.meshes can solve in parallel. This includes all “...Grid...” and “...Gmsh...” class meshes. Currently, the only remaining serial-only meshes are Tri2D and SkewedGrid2D.
Tip
You are strongly advised to force the use of only one OpenMP thread with Trilinos:
$ export OMP_NUM_THREADS=1
See OpenMP Threads vs. MPI Ranks for more information.
Note
Trilinos 12.12 has support for Python 3, but PyTrilinos on conda-forge presently only provides 12.10, which is limited to Python 2.x. PETSc is available for both Python 3 and Python 2.7.
It should not generally be necessary to change anything in your script. Simply invoke:
$ mpirun -np {# of processors} python myScript.py --petsc
or:
$ mpirun -np {# of processors} python myScript.py --trilinos
instead of:
$ python myScript.py
The following plot shows the scaling behavior for multiple processors. We compare solution time vs number of Slurm tasks (available cores) for a Method of Manufactured Solutions Allen-Cahn problem.
“Speedup” relative to Pysparse (bigger numbers are better) versus number of tasks (processes) on a log-log plot. The number of threads per MPI rank is indicated by the line style (see legend). OpenMP threads × MPI ranks = Slurm tasks.
A few things can be observed in this plot:
- Both PETSc and Trilinos exhibit power-law scaling, but the power is only about 0.7. At least one source of this poor scaling is that our “...Grid...” meshes parallelize by dividing the mesh into slabs, which leads to more communication overhead than more compact partitions. The “...Gmsh...” meshes partition more efficiently, but carry more overhead in other ways. We’ll be making efforts to improve the partitioning of the “...Grid...” meshes in a future release.
- PETSc and Trilinos have fairly comparable performance, but lag Pysparse by a considerable margin. The SciPy solvers are even worse. Some of this discrepancy may be because the different packages are not all doing the same thing. Different solver packages have different default solvers and preconditioners. Moreover, the meaning of the solution tolerance depends on the normalization the solver uses and it is not always obvious which of several possibilities a particular package employs. We will endeavor to normalize the normalizations in a future release.
- PETSc with one thread is faster than with two threads until the number of tasks reaches about 10 and is faster than with four threads until the number of tasks reaches more than 20. Trilinos with one thread is faster than with two threads until the number of tasks is more than 30. We don’t fully understand the reasons for this, but there may be a modest benefit, when using a large number of CPUs, to allow two to four OpenMP threads per MPI rank. See OpenMP Threads vs. MPI Ranks for caveats and more information.
These results are likely both problem and architecture dependent. You should develop an understanding of the scaling behavior of your own codes before doing “production” runs.
The easiest way to confirm that FiPy is properly configured to solve in parallel is to run one of the examples, e.g.,:
$ mpirun -np 2 python examples/diffusion/mesh1D.py
You should see two viewers open with half the simulation running in one of them and half in the other. If this does not look right (e.g., you get two viewers, both showing the entire simulation), or if you just want to be sure, you can run a diagnostic script:
$ mpirun -np 3 python examples/parallel.py
which should print out:
mpi4py PyTrilinos petsc4py FiPy
processor 0 of 3 :: processor 0 of 3 :: processor 0 of 3 :: 5 cells on processor 0 of 3
processor 1 of 3 :: processor 1 of 3 :: processor 1 of 3 :: 7 cells on processor 1 of 3
processor 2 of 3 :: processor 2 of 3 :: processor 2 of 3 :: 6 cells on processor 2 of 3
If there is a problem with your parallel environment, it should be clear that there is either a problem importing one of the required packages or that there is some problem with the MPI environment. For example:
mpi4py PyTrilinos petsc4py FiPy
processor 0 of 3 :: processor 0 of 3 :: processor 0 of 3 :: 10 cells on processor 0 of 1
[my.machine.com:69815] WARNING: There were 4 Windows created but not freed.
processor 1 of 3 :: processor 1 of 3 :: processor 1 of 3 :: 10 cells on processor 0 of 1
[my.machine.com:69814] WARNING: There were 4 Windows created but not freed.
processor 2 of 3 :: processor 2 of 3 :: processor 2 of 3 :: 10 cells on processor 0 of 1
[my.machine.com:69813] WARNING: There were 4 Windows created but not freed.
indicates mpi4py is properly communicating with MPI and is running in parallel, but that Trilinos is not, and is running three separate serial environments. As a result, FiPy is limited to three separate serial operations, too. In this instance, the problem is that although Trilinos was compiled with MPI enabled, it was compiled against a different MPI library than is currently available (and which mpi4py was compiled against). The solution is to rebuild Trilinos against the active MPI libraries.
When solving in parallel, FiPy essentially breaks the problem up into separate sub-domains and solves them (somewhat) independently. FiPy generally “does the right thing”, but if you find that you need to do something with the entire solution, you can use var.globalValue.
Note
One option for debugging in parallel is:
$ mpirun -np {# of processors} xterm -hold -e "python -m ipdb myScript.py"
OpenMP Threads vs. MPI Ranks¶
By default, PETSc and Trilinos spawn as many OpenMP threads as there are cores available. This may very well be an intentional optimization, where they are designed to have one MPI rank per node of a cluster, so each of the child threads would help with computation but would not compete for I/O resources during ghost cell exchanges and file I/O. However, Python’s Global Interpreter Lock (GIL) binds all of the child threads to the same core as their parent! So instead of improving performance, each core suffers a heavy overhead from managing those idling threads.
The solution to this is to force these solvers to use only one OpenMP thread:
$ export OMP_NUM_THREADS=1
Because this environment variable affects all processes launched in the current session, you may prefer to restrict its use to FiPy runs:
$ OMP_NUM_THREADS=1 mpirun -np {# of processors} python myScript.py --trilinos
The difference can be extreme. We have observed the FiPy test suite to run in just over two minutes when OMP_NUM_THREADS=1, compared to over an hour and 23 minutes when OpenMP threads are unrestricted. We don’t know why, but other platforms do not suffer the same degree of degradation.
Conceivably, allowing these parallel solvers unfettered access to OpenMP threads with no MPI communication at all could perform as well or better than purely MPI parallelization. The plot below demonstrates this is not the case. We compare solution time vs number of OpenMP threads for a fixed number of slots for a Method of Manufactured Solutions Allen-Cahn problem. OpenMP threading always slows down FiPy performance.
“Speedup” relative to one thread (bigger numbers are better) versus number of threads for 32 Slurm tasks on a log-log plot. OpenMP threads × MPI ranks = Slurm tasks.
See https://www.mail-archive.com/fipy@nist.gov/msg03393.html for further analysis.
It may be possible to configure these packages to use only one OpenMP thread, but this is not the configuration of the version available from conda-forge and building Trilinos, at least, is NotFun™.
Meshing with Gmsh¶
FiPy works with arbitrary polygonal meshes generated by Gmsh. FiPy provides two wrapper classes (Gmsh2D and Gmsh3D) enabling Gmsh to be used directly from Python. The classes can be instantiated with a set of Gmsh style commands (see examples.diffusion.circle). The classes can also be instantiated with the path to either a Gmsh geometry file (.geo) or a Gmsh mesh file (.msh) (see examples.diffusion.anisotropy).
As well as meshing arbitrary geometries, Gmsh partitions meshes for parallel simulations. Mesh partitioning automatically occurs whenever a parallel communicator is passed to the mesh on instantiation. This is the default setting for all meshes that work in parallel, including Gmsh2D and Gmsh3D.
Note
FiPy solution accuracy can be compromised with highly non-orthogonal or non-conjunctional meshes.
Coupled and Vector Equations¶
Equations can now be coupled together so that the contributions from all the equations appear in a single system matrix. This results in tighter coupling for equations with spatial and temporal derivatives in more than one variable. In FiPy equations are coupled together using the & operator:
>>> eqn0 = ...
>>> eqn1 = ...
>>> coupledEqn = eqn0 & eqn1
The coupledEqn will use a combined system matrix that includes four quadrants for each of the different variable and equation combinations. In previous versions of FiPy there has been no need to specify which variable a given term acts on when generating equations. The variable is simply specified when calling solve or sweep and this functionality has been maintained in the case of single equations. However, for coupled equations the variable that a given term operates on now needs to be specified when the equation is generated. The syntax for generating coupled equations has the form:
>>> eqn0 = Term00(coeff=..., var=var0) + Term01(coeff=..., var=var1) == source0
>>> eqn1 = Term10(coeff=..., var=var0) + Term11(coeff=..., var=var1) == source1
>>> coupledEqn = eqn0 & eqn1
and there is now no need to pass any variables when solving:
>>> coupledEqn.solve(dt=..., solver=...)
In this case the matrix system will have the form

    | Term00  Term01 | | var0 |   | source0 |
    | Term10  Term11 | | var1 | = | source1 |
FiPy tries to make sensible decisions regarding each term’s location in the matrix and the ordering of the variable column array. For example, if Term01 is a transient term then Term01 would appear in the upper left diagonal and the ordering of the variable column array would be reversed.
The use of coupled equations is described in detail in examples.diffusion.coupled. Other examples that demonstrate the use of coupled equations are examples.phase.binaryCoupled, examples.phase.polyxtalCoupled and examples.cahnHilliard.mesh2DCoupled. As well as coupling equations, true vector equations can now be written in FiPy (see examples.diffusion.coupled for more details).
Boundary Conditions¶
Applying fixed value (Dirichlet) boundary conditions¶
To apply a fixed value boundary condition use the constrain() method. For example, to fix var to have a value of 2 along the upper surface of a domain, use
>>> var.constrain(2., where=mesh.facesTop)
Note
The old equivalent FixedValue boundary condition is now deprecated.
Applying fixed gradient boundary conditions (Neumann)¶
To apply a fixed gradient (Neumann) boundary condition use the faceGrad.constrain() method. For example, to fix var to have a gradient of (0, 2) along the upper surface of a 2D domain, use
>>> var.faceGrad.constrain(((0,),(2,)), where=mesh.facesTop)
If the gradient normal to the boundary (i.e., n̂·∇φ) is to be set to a scalar value of 2, use
>>> var.faceGrad.constrain(2 * mesh.faceNormals, where=mesh.exteriorFaces)
Applying fixed flux boundary conditions¶
Generally these can be implemented with a judicious use of faceGrad.constrain(). Failing that, an exterior flux term can be added to the equation. Firstly, set the terms’ coefficients to be zero on the exterior faces,
>>> diffCoeff.constrain(0., mesh.exteriorFaces)
>>> convCoeff.constrain(0., mesh.exteriorFaces)
then create an equation with an extra term to account for the exterior flux,
>>> eqn = (TransientTerm() + ConvectionTerm(convCoeff)
...        == DiffusionTerm(diffCoeff)
...        + (mesh.exteriorFaces * exteriorFlux).divergence)
where exteriorFlux is a rank 1 FaceVariable.
Note
The old equivalent FixedFlux boundary condition is now deprecated.
Applying outlet or inlet boundary conditions¶
Convection terms default to a no flux boundary condition unless the exterior faces are associated with a constraint, in which case either an inlet or an outlet boundary condition is applied depending on the flow direction.
Applying spatially varying boundary conditions¶
The use of spatially varying boundary conditions is best demonstrated with an example. Consider a 2D problem with a fixed value, φ = x y, on the portion of the exterior boundary where x > 1/2 and y > 1/2, and zero flux, n̂·∇φ = 0, on the remainder of the boundary. These boundary conditions can be written in FiPy with the following code,
>>> X, Y = mesh.faceCenters
>>> mask = ((X < 0.5) | (Y < 0.5))
>>> var.faceGrad.constrain(0, where=mesh.exteriorFaces & mask)
>>> var.constrain(X * Y, where=mesh.exteriorFaces & ~mask)
then
>>> eqn.solve(...)
Further demonstrations of spatially varying boundary conditions can be found in examples.diffusion.mesh20x20 and examples.diffusion.circle.
Applying Robin boundary conditions¶
The Robin condition,

    n̂·(a⃗ φ + b ∇φ) = g on S_R,

applied on a portion S_R of the boundary, can often be substituted for the flux in an equation of the form

    ∂φ/∂t = ∇·(a⃗ φ) + ∇·(b ∇φ).

At faces identified by mask,
>>> a = FaceVariable(mesh=mesh, value=..., rank=1)
>>> a.setValue(0., where=mask)
>>> b = FaceVariable(mesh=mesh, value=..., rank=0)
>>> b.setValue(0., where=mask)
>>> g = FaceVariable(mesh=mesh, value=..., rank=0)
>>> eqn = (TransientTerm() == PowerLawConvectionTerm(coeff=a)
... + DiffusionTerm(coeff=b)
... + (g * mask * mesh.faceNormals).divergence)
When the Robin condition does not exactly map onto the boundary flux, we can attempt to apply it term by term. The Robin condition relates the gradient at a boundary face to the value on that face, however FiPy naturally calculates variable values at cell centers and gradients at intervening faces. Using a first order upwind approximation, the boundary value of the variable at a face f can be written in terms of the value at the neighboring cell P and the normal gradient at the boundary:

    φ_f ≈ φ_P + d⃗_Pf · (∇φ)_f    (1)¶

where d⃗_Pf is the distance vector between the adjoining cell center and the face center. The approximation is most valid when the mesh is orthogonal.
Substituting this expression into the Robin condition:

    (∇φ)_f ≈ n̂ (g − n̂·a⃗ φ_P) / (d⃗_Pf·a⃗ + b)    (2)¶

we obtain an expression for the gradient at the boundary face in terms of its neighboring cell. We can, in turn, substitute this back into (1)

    φ_f ≈ φ_P + d⃗_Pf·n̂ (g − n̂·a⃗ φ_P) / (d⃗_Pf·a⃗ + b)    (3)¶

to obtain the value on the boundary face in terms of the neighboring cell.
Substituting (2) into the discretization of the DiffusionTerm replaces the normal gradient Γ (n̂·∇φ)_f at the masked boundary faces with Γ (g − n̂·a⃗ φ_P) / (d⃗_Pf·a⃗ + b), which splits into an explicit source and an implicit source in φ_P.
An equation of the form
>>> eqn = TransientTerm() == DiffusionTerm(coeff=Gamma0)
can be constrained to have a Robin condition at faces identified by mask by making the following modifications
>>> Gamma = FaceVariable(mesh=mesh, value=Gamma0)
>>> Gamma.setValue(0., where=mask)
>>> dPf = FaceVariable(mesh=mesh,
... value=mesh._faceToCellDistanceRatio * mesh.cellDistanceVectors)
>>> n = mesh.faceNormals
>>> a = FaceVariable(mesh=mesh, value=..., rank=1)
>>> b = FaceVariable(mesh=mesh, value=..., rank=0)
>>> g = FaceVariable(mesh=mesh, value=..., rank=0)
>>> RobinCoeff = mask * Gamma0 * n / (dPf.dot(a) + b)
>>> eqn = (TransientTerm() == DiffusionTerm(coeff=Gamma) + (RobinCoeff * g).divergence
...        - ImplicitSourceTerm(coeff=(RobinCoeff * n.dot(a)).divergence))
Similarly, for a ConvectionTerm, we can substitute (3) for the value on the masked boundary faces.
Note
An expression like the heat flux convection boundary condition, −k n̂·∇T = h (T − T∞), can be put in the form of the Robin condition used above by letting a⃗ ≡ h n̂, b ≡ k, and g ≡ h T∞.
Applying internal “boundary” conditions¶
Applying internal boundary conditions can be achieved through the use of implicit and explicit sources.
Internal fixed value¶
An equation of the form
>>> eqn = TransientTerm() == DiffusionTerm()
can be constrained to have a fixed internal value at a position given by mask with the following alterations
>>> eqn = (TransientTerm() == DiffusionTerm()
...        - ImplicitSourceTerm(mask * largeValue)
...        + mask * largeValue * value)
The parameter largeValue must be chosen to be large enough to completely dominate the matrix diagonal and the RHS vector in cells that are masked. The mask variable would typically be a CellVariable Boolean constructed using the cell center values.
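The effect of this large-value source technique can be seen in a small self-contained sketch. It uses plain NumPy rather than FiPy, with a hypothetical three-cell 1D Laplacian (unit spacing, zero-flux ends); largeValue and value are arbitrary illustrative choices:

```python
import numpy as np

N, largeValue, value = 3, 1e10, 0.25
mask = np.array([0.0, 1.0, 0.0])   # pin the middle cell

# Standard 1D Laplacian with zero-flux boundaries (hypothetical system):
A = np.array([[-1.,  1.,  0.],
              [ 1., -2.,  1.],
              [ 0.,  1., -1.]])
b = np.zeros(N)

# Subtracting ImplicitSourceTerm(mask * largeValue) adds -largeValue to the
# masked diagonal; the explicit source mask * largeValue * value moves to the RHS:
A -= np.diag(mask * largeValue)
b -= mask * largeValue * value

u = np.linalg.solve(A, b)
# The huge diagonal entry dominates the masked row, pinning that cell at `value`.
```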
Internal fixed gradient¶
An equation of the form
>>> eqn = TransientTerm() == DiffusionTerm(coeff=Gamma0)
can be constrained to have a fixed internal gradient magnitude at a position given by mask with the following alterations
>>> Gamma = FaceVariable(mesh=mesh, value=Gamma0)
>>> Gamma[mask.value] = 0.
>>> eqn = (TransientTerm() == DiffusionTerm(coeff=Gamma)
...        + DiffusionTerm(coeff=largeValue * mask)
...        - ImplicitSourceTerm(mask * largeValue * gradient
...                             * mesh.faceNormals).divergence)
The parameter largeValue must be chosen to be large enough to completely dominate the matrix diagonal and the RHS vector in cells that are masked. The mask variable would typically be a FaceVariable Boolean constructed using the face center values.
Internal Robin condition¶
Nothing different needs to be done when applying Robin boundary conditions at internal faces.
Note
While we believe the derivations for applying Robin boundary conditions are “correct”, they often do not seem to produce the intuitive result. At this point, we think this has to do with the pathology of “internal” boundary conditions, but remain open to other explanations. FiPy was designed with diffuse interface treatments (phase field and level set) in mind and, as such, internal “boundaries” do not come up in our own work and have not received much attention.
Warning
The constraints mechanism is not designed to constrain internal values for variables that are being solved by equations. In particular, one must be careful to distinguish between constraining internal cell values during the solve step and simply applying arbitrary constraints to a CellVariable. Applying a constraint,
>>> var.constrain(value, where=mask)
simply fixes the returned value of var at mask to be value. It does not have any effect on the implicit value of var at the mask location during the linear solve so it is not a substitute for the source term machinations described above. Future releases of FiPy may implicitly deal with this discrepancy, but the current release does not.
A simple example can be used to demonstrate this:
>>> m = Grid1D(nx=2, dx=1.)
>>> var = CellVariable(mesh=m)
We wish to solve ∇²φ = 0 subject to φ = 1 on the right boundary and φ = 0.25 in the cells where x < 1. We apply a constraint to the faces for the right side boundary condition (which works).
>>> var.constrain(1., where=m.facesRight)
We create the equation with the source term constraint described above
>>> mask = m.x < 1.
>>> largeValue = 1e+10
>>> value = 0.25
>>> eqn = DiffusionTerm() - ImplicitSourceTerm(largeValue * mask) + largeValue * mask * value
and the expected value is obtained.
>>> eqn.solve(var)
>>> print(var)
[ 0.25  0.75]
However, if a constraint is used without the source term constraint an unexpected solution is obtained
>>> var.constrain(0.25, where=mask)
>>> eqn = DiffusionTerm()
>>> eqn.solve(var)
>>> print(var)
[ 0.25  1.  ]
although the left cell has the expected value as it is constrained.
FiPy has simply solved ∇²φ = 0 with φ = 1 on the right boundary and (by default) zero flux on the left, giving φ = 1 everywhere, and then subsequently replaced the reported value in the constrained cells with 0.25.
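The two-cell example above can also be written out by hand to see why the source-term approach gives [0.25, 0.75]. This plain NumPy sketch assembles the same linear system (assuming unit cell spacing, so the right boundary face sits half a cell from the last cell center):

```python
import numpy as np

largeValue, value = 1e10, 0.25

# Two-cell 1D Laplacian; the right-face Dirichlet value of 1 contributes
# through the boundary face, half a cell from the last cell center
# (hence the face coefficient of 2):
A = np.array([[-1.,  1.],
              [ 1., -3.]])    # -1 (internal face) - 2 (boundary face)
b = np.array([0., -2. * 1.])  # boundary contribution, u_face = 1

# Pin the left cell at `value` with the large-value source trick:
A[0, 0] -= largeValue
b[0] -= largeValue * value

u = np.linalg.solve(A, b)
# u is approximately [0.25, 0.75], matching the FiPy result in the text.
```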
Running under Python 2¶
Thanks to the future package and to the contributions of pya and woodscn, FiPy runs under both Python 3 and Python 2.7, without conversion or modification.
Because Python itself will drop support for Python 2.7 on January 1, 2020 and many of the prerequisites for FiPy have pledged to drop support for Python 2.7 no later than 2020, we have prioritized adding support for better Python 3 solvers, starting with petsc4py.
Because the faster PySparse and Trilinos solvers are not available under Python 3, we will maintain Python 2.x support as long as practical. Be aware that the conda-forge packages that FiPy depends upon are not well-maintained on Python 2.x and our support for that configuration is rapidly becoming impractical, despite the present performance benefits. Hopefully, we will learn how to optimize our use of PETSc and/or Trilinos 12.12 will become available on conda-forge.
Manual¶
You can view the manual online at <http://www.ctcms.nist.gov/fipy> or you can download the latest manual from <http://www.ctcms.nist.gov/fipy/download/>. Alternatively, it may be possible to build a fresh copy by issuing the following command in the base directory:
$ python setup.py build_docs pdf html
Note
This mechanism is intended primarily for the developers. At a minimum, you will need at least version 1.7.0 of Sphinx, plus all of its prerequisites. We install via conda:
$ conda install --channel conda-forge sphinx
Bibliographic citations require the sphinxcontrib-bibtex package:
$ pip install sphinxcontrib-bibtex
Some documentation uses numpydoc styling:
$ pip install numpydoc
Some embedded figures require Matplotlib, Pandas, and ImageMagick:
$ conda install --channel conda-forge matplotlib pandas imagemagick
The PDF file requires SIunits.sty, available, e.g., from texlive-science.
Spelling is checked automatically in the course of Continuous Integration. If you wish to check manually, you will need pyspelling, hunspell, and the libreoffice dictionaries:
$ conda install --channel conda-forge hunspell
$ pip install pyspelling
$ wget -O en_US.aff https://cgit.freedesktop.org/libreoffice/dictionaries/plain/en/en_US.aff?id=a4473e06b56bfe35187e302754f6baaa8d75e54f
$ wget -O en_US.dic https://cgit.freedesktop.org/libreoffice/dictionaries/plain/en/en_US.dic?id=a4473e06b56bfe35187e302754f6baaa8d75e54f