Pyomo Documentation
Release 5.6.9.dev0
Contents

1 Installation
1.1 Using CONDA
1.2 Using PIP
1.3 Conditional Dependencies
2 Citing Pyomo
2.1 Pyomo
2.2 PySP
3 Pyomo Overview
3.1 Mathematical Modeling
3.2 Overview of Modeling Components and Processes
3.3 Abstract Versus Concrete Models
3.4 Simple Models
6.7 Accessing Variable Values
6.8 Accessing Parameter Values
6.9 Accessing Duals
6.10 Accessing Slacks
6.11 Accessing Solver Status
6.12 Display of Solver Output
6.13 Sending Options to the Solver
6.14 Specifying the Path to a Solver
6.15 Warm Starts
6.16 Solving Multiple Instances in Parallel
6.17 Changing the temporary directory
8 Modeling Extensions
8.1 Bilevel Programming
8.2 Dynamic Optimization with pyomo.DAE
8.3 MPEC
8.4 Generalized Disjunctive Programming
8.5 Stochastic Programming
8.6 Pyomo Network
15 Third-Party Contributions
15.1 Pyomo Nonlinear Preprocessing
15.2 GDPopt logic-based solver
15.3 Multistart Solver
15.4 parmest
15.5 Pyomo Interface to MC++
15.6 MindtPy solver
15.7 Pyomo Interface to z3 SMT Sat Solver
16 Bibliography
Bibliography
Index
Pyomo is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities.
CHAPTER 1
Installation
1.1 Using CONDA
We recommend installation with conda, which is included with the Anaconda distribution of Python. You can install
Pyomo in your system Python installation by executing the following in a shell:
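A typical command, assuming the conda-forge channel (the channel commonly used to distribute Pyomo packages):

conda install -c conda-forge pyomo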
Optimization solvers are not installed with Pyomo, but some open source optimization solvers can be installed with
conda as well:
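For example, glpk and ipopt are open source solvers available through the conda-forge channel:

conda install -c conda-forge glpk ipopt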
1.2 Using PIP
The standard utility for installing Python packages is pip. You can install Pyomo in your system Python installation
by executing the following in a shell:
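pip install pyomo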
1.3 Conditional Dependencies
Extensions to Pyomo, and many of the contributions in pyomo.contrib, also have conditional dependencies on a variety
of third-party Python packages including but not limited to: numpy, scipy, sympy, networkx, openpyxl, pyodbc, xlrd,
pandas, matplotlib, pymysql, pyro4, and pint. Pyomo extensions that require any of these packages will generate an
error message for missing dependencies upon use.
Many of the conditional dependencies are already distributed with Anaconda. You can check which Python packages
you already have installed using the command conda list or pip list. Additional Python packages may be
installed as needed.
CHAPTER 2
Citing Pyomo
2.1 Pyomo
Hart, William E., Jean-Paul Watson, and David L. Woodruff. “Pyomo: modeling and solving mathematical programs
in Python.” Mathematical Programming Computation 3, no. 3 (2011): 219-260.
Hart, William E., Carl Laird, Jean-Paul Watson, David L. Woodruff, Gabriel A. Hackebeil, Bethany L. Nicholson, and
John D. Siirola. Pyomo – Optimization Modeling in Python. Springer, 2017.
2.2 PySP
Watson, Jean-Paul, David L. Woodruff, and William E. Hart. “PySP: modeling and solving stochastic programs in
Python.” Mathematical Programming Computation 4, no. 2 (2012): 109-149.
CHAPTER 3
Pyomo Overview
This section provides an introduction to Pyomo: Python Optimization Modeling Objects. A more complete description
is contained in the [PyomoBookII] book. Pyomo supports the formulation and analysis of mathematical models for
complex optimization applications. This capability is commonly associated with commercially available algebraic
modeling languages (AMLs) such as [AMPL], [AIMMS], and [GAMS]. Pyomo’s modeling objects are embedded
within Python, a full-featured, high-level programming language that contains a rich set of supporting libraries.
Modeling is a fundamental process in many aspects of scientific research, engineering and business. Modeling involves
the formulation of a simplified representation of a system or real-world object. Thus, modeling tools like Pyomo can
be used in a variety of ways:
• Explain phenomena that arise in a system,
• Make predictions about future states of a system,
• Assess key factors that influence phenomena in a system,
• Identify extreme states in a system, that might represent worst-case scenarios or minimal cost plans, and
• Analyze trade-offs to support human decision makers.
Mathematical models represent system knowledge with a formalized language. The following mathematical concepts
are central to modern modeling activities:
3.1.1 Variables
Variables represent unknown or changing parts of a model (e.g., whether or not to make a decision, or the
characteristic of a system outcome). The values taken by the variables are often referred to as a solution
and are usually an output of the optimization process.
3.1.2 Parameters
Parameters represent the data that must be supplied to perform the optimization. In fact, in some settings
the word data is used in place of the word parameters.
3.1.3 Relations
These are equations, inequalities or other mathematical relationships that define how different parts of a
model are connected to each other.
3.1.4 Goals
These are functions that reflect goals and objectives for the system being modeled.
The widespread availability of computing resources has made the numerical analysis of mathematical models a
commonplace activity. Without a modeling language, the process of setting up input files, executing a solver and extracting
the final results from the solver output is tedious and error-prone. This difficulty is compounded in complex, large-scale
real-world applications which are difficult to debug when errors occur. Additionally, there are many different formats
used by optimization software packages, and few formats are recognized by many optimizers. Thus the application of
multiple optimization solvers to analyze a model introduces additional complexities.
Pyomo is an AML that extends Python to include objects for mathematical modeling. [PyomoBookI], [PyomoBookII],
and [PyomoJournal] compare Pyomo with other AMLs. Although many good AMLs have been developed for
optimization models, the following are motivating factors for the development of Pyomo:
• Pyomo is developed within Pyomo’s open source project to promote transparency of the modeling framework and encourage community development of Pyomo capabilities.
• Pyomo supports a customizable capability through the extensive use of plug-ins to modularize software components.
• Pyomo models can be optimized with solvers that are written either in Python or in compiled, low-level languages.
• Pyomo leverages a high-level programming language, which has several advantages over custom AMLs: a very robust language, extensive documentation, a rich set of standard libraries, support for modern programming features like classes and functions, and portability to many platforms.
Pyomo supports an object-oriented design for the definition of optimization models. The basic steps of a simple
modeling process are:
• Create model and declare components
• Instantiate the model
• Apply solver
• Interrogate solver results
In practice, these steps may be applied repeatedly with different data or with different constraints applied to the model.
However, we focus on this simple modeling process to illustrate different strategies for modeling with Pyomo.
A Pyomo model consists of a collection of modeling components that define different aspects of the model. Pyomo
includes the modeling components that are commonly supported by modern AMLs: index sets, symbolic parameters,
decision variables, objectives, and constraints. These modeling components are defined in Pyomo through the
following Python classes:
3.2.1 Set
Set data that is used to define a model instance.
3.2.2 Param
Parameter data that is used to define a model instance.
3.2.3 Var
Decision variables in a model.
3.2.4 Objective
Expressions that are minimized or maximized in a model.
3.2.5 Constraint
Constraint expressions that impose restrictions on variable values in a model.
A mathematical model can be defined using symbols that represent data values. For example, the following equations
represent a linear program (LP) to find optimal values for the vector x with parameters n and b, and parameter vectors
a and c:

    min    ∑_{j=1}^{n} c_j x_j
    s.t.   ∑_{j=1}^{n} a_{ij} x_j ≥ b_i   ∀ i = 1 ... m
           x_j ≥ 0                        ∀ j = 1 ... n
Note: As a convenience, we use the symbol ∀ to mean “for all” or “for each.”
We call this an abstract or symbolic mathematical model since it relies on unspecified parameter values. Data values
can be used to specify a model instance. The AbstractModel class provides a context for defining and initializing
abstract optimization models in Pyomo when the data values will be supplied at the time a solution is to be obtained.
In many contexts, a mathematical model can and should be directly defined with the data values supplied at the time
of the model definition. We call these concrete mathematical models. For example, the following LP model is a
concrete instance of the previous abstract model:

    min    2 x_1 + 3 x_2
    s.t.   3 x_1 + 4 x_2 ≥ 1
           x_1, x_2 ≥ 0

The ConcreteModel class is used to define concrete optimization models in Pyomo.
Note: Python programmers will probably prefer to write concrete models, while users of some other algebraic
modeling languages may tend to prefer to write abstract models. The choice is largely a matter of taste; some applications
may be a little more straightforward using one or the other.
model = AbstractModel()

model.m = Param(within=NonNegativeIntegers)
model.n = Param(within=NonNegativeIntegers)

def obj_expression(model):
    return summation(model.c, model.x)

model.OBJ = Objective(rule=obj_expression)

# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
Note: Python is interpreted one line at a time. A line continuation character, backslash, is used for Python statements
that need to span multiple lines. In Python, indentation has meaning and must be consistent. For example, lines
inside a function definition must be indented and the end of the indentation is used by Python to signal the end of the
definition.
We will now examine the lines in this example. The first import line is used to ensure that int or long division
arguments are converted to floating point values before division is performed.
In Python versions before 3.0, division returns the floor of the mathematical result of division if arguments are int or
long. This import line avoids unexpected behavior when developing mathematical models with integer values.
The next import line is required in every Pyomo model. Its purpose is to make the symbols used by Pyomo known
to Python.
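In this example, these two import lines are typically written as:

from __future__ import division
from pyomo.environ import *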
The declaration of a model is also required. The use of the name model is not required. Almost any name could be
used, but we will use the name model most of the time in this book. In this example, we are declaring that it will be an
abstract model.
model = AbstractModel()
We declare the parameters 𝑚 and 𝑛 using the Pyomo Param function. This function can take a variety of arguments;
this example illustrates use of the within option that is used by Pyomo to validate the data value that is assigned to
the parameter. If this option were not given, then Pyomo would not object to any type of data being assigned to these
parameters. As it is, assignment of a value that is not a non-negative integer will result in an error.
model.m = Param(within=NonNegativeIntegers)
model.n = Param(within=NonNegativeIntegers)
Although not required, it is convenient to define index sets. In this example we use the RangeSet function to declare
that the sets will be a sequence of integers starting at 1 and ending at a value specified by the parameters model.m
and model.n.
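A sketch of the corresponding declarations, continuing the model above:

model.I = RangeSet(1, model.m)
model.J = RangeSet(1, model.n)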
The coefficient and right-hand-side data are defined as indexed parameters. When sets are given as arguments to the
Param function, they indicate that the set will index the parameter.
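For this model, the indexed parameter declarations take a form along these lines:

model.a = Param(model.I, model.J)
model.b = Param(model.I)
model.c = Param(model.J)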
Note: In Python, and therefore in Pyomo, any text after a pound sign is considered to be a comment.
The next line that is interpreted by Python as part of the model declares the variable 𝑥. The first argument to the Var
function is a set, so it is defined as an index set for the variable. In this case the variable has only one index set, but
multiple sets could be used as was the case for the declaration of the parameter model.a. The second argument
specifies a domain for the variable. This information is part of the model and will be passed to the solver when data is
provided and the model is solved. Specification of the NonNegativeReals domain implements the requirement
that the variables be greater than or equal to zero.
# the next line declares a variable indexed by the set J
model.x = Var(model.J, domain=NonNegativeReals)
In abstract models, Pyomo expressions are usually provided to objective function and constraint declarations via a
function defined with a Python def statement. The def statement establishes a name for a function along with
its arguments. When Pyomo uses a function to get objective function or constraint expressions, it always passes
in the model (i.e., itself) as the first argument, so the model is always the first formal argument when declaring
such functions in Pyomo. Additional arguments, if needed, follow. Since summation is an extremely common part
of optimization models, Pyomo provides a flexible function to accommodate it. When given two arguments, the
summation function returns an expression for the sum of the product of the two arguments over their indexes.
This only works, of course, if the two arguments have the same indexes. If it is given only one argument it returns
an expression for the sum over all indexes of that argument. So in this example, when summation is passed the
arguments model.c, model.x it returns an internal representation of the expression ∑_{j=1}^{n} c_j x_j.
def obj_expression(model):
    return summation(model.c, model.x)
To declare an objective function, the Pyomo function called Objective is used. The rule argument gives the
name of a function that returns the expression to be used. The default sense is minimization. For maximization, the
sense=maximize argument must be used. The name that is declared, which is OBJ in this case, appears in some
reports and can be almost any name.
model.OBJ = Objective(rule=obj_expression)
Declaration of constraints is similar. A function is declared to deliver the constraint expression. In this case, there
can be multiple constraints of the same form because we index the constraints by i in the expression
∑_{j=1}^{n} a_{ij} x_j ≥ b_i, ∀ i = 1 ... m, which states that we need a constraint for each value of i from one to m.
In order to parametrize the expression by i we include it as a formal parameter to the function that declares the
constraint expression. Technically, we could have used anything for this argument, but that might be confusing.
Using an i for an i seems sensible in this situation.
def ax_constraint_rule(model, i):
    # return the expression for the constraint for i
    return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]
Note: In Python, indexes are in square brackets and function arguments are in parentheses.
In order to declare constraints that use this expression, we use the Pyomo Constraint function that takes a variety
of arguments. In this case, our model specifies that we can have more than one constraint of the same form and we
have created a set, model.I, over which these constraints can be indexed so that is the first argument to the constraint
declaration function. The next argument gives the rule that will be used to generate expressions for the constraints.
Taken as a whole, this constraint declaration says that a list of constraints indexed by the set model.I will be created
and for each member of model.I, the function ax_constraint_rule will be called and it will be passed the
model object as well as the member of model.I.
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
In the object-oriented view of all of this, we would say that the model object is an instance of the AbstractModel
class, and model.J is a Set object that is contained by this model. Many modeling components in Pyomo can be
optionally specified as indexed components: collections of components that are referenced using one or more values.
In this example, the parameter model.c is indexed with set model.J.
In order to use this model, data must be given for the values of the parameters. Here is one file that provides data.
param m := 1 ;
param n := 2 ;
param a :=
1 1 3
1 2 4
;
param c:=
1 2
2 3
;
param b := 1 1 ;
There are multiple formats that can be used to provide data to a Pyomo model, but the AMPL format works well
for our purposes because it contains the names of the data elements together with the data. In AMPL data files, text
after a pound sign is treated as a comment. Line breaks generally do not matter, but statements must be terminated with a
semi-colon.
For this particular data file, there is one constraint, so the value of model.m will be one and there are two variables
(i.e., the vector model.x is two elements long) so the value of model.n will be two. These two assignments are
accomplished with standard assignments. Notice that in AMPL format input, the name of the model is omitted.
param m := 1 ;
param n := 2 ;
There is only one constraint, so only two values are needed for model.a. When assigning values to arrays and vectors
in AMPL format, one way to do it is to give the index(es) and then the value. The line 1 2 4 causes model.a[1,2]
to get the value 4. Since model.c has only one index, only one index value is needed so, for example, the line 1 2
causes model.c[1] to get the value 2. Line breaks generally do not matter in AMPL format data files, so the
assignment of the value for the single index of model.b is given on one line since that is easy to read.
param a :=
1 1 3
1 2 4
;
param c:=
1 2
2 3
;
param b := 1 1 ;
When working with Pyomo (or any other AML), it is convenient to write abstract models in a somewhat more abstract
way by using index sets that contain strings rather than index sets that are implied by 1, ..., m or the summation from
1 to n. When this is done, the size of the set is implied by the input, rather than specified directly. Furthermore, the
index entries may have no real order. Often, a mixture of integers and strings as indexes is needed in the
same model. To start with an illustration of general indexes, consider a slightly different Pyomo implementation of
the model we just presented.
# abstract2.py
from pyomo.environ import *

model = AbstractModel()

model.I = Set()
model.J = Set()

model.a = Param(model.I, model.J)
model.b = Param(model.I)
model.c = Param(model.J)

model.x = Var(model.J, domain=NonNegativeReals)

def obj_expression(model):
    return summation(model.c, model.x)

model.OBJ = Objective(rule=obj_expression)

def ax_constraint_rule(model, i):
    # return the expression for the constraint for i
    return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]

# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
To get the same instantiated model, the following data file can be used.
set I := 1 ;
set J := 1 2 ;
param a :=
1 1 3
1 2 4
;
param c :=
1 2
2 3
;
param b := 1 1 ;
However, this model can also be fed different data for problems of the same general form using meaningful indexes.
set I := TV Film ;
set J := Graham John Carol ;
param a :=
TV Graham 3
TV John 4.4
TV Carol 4.9
Film Graham 1
Film John 2.4
Film Carol 1.1
;
param c := [*]
Graham 2.2
John 3.1416
Carol 3
;
param b := TV 1 Film 1 ;
It is possible to get nearly the same flexible behavior from models declared to be abstract and models declared to be
concrete in Pyomo; however, we will focus on a straightforward concrete example here where the data is hard-wired
into the model file. Python programmers will quickly realize that the data could have come from other sources.
We repeat the concrete model already given:

    min    2 x_1 + 3 x_2
    s.t.   3 x_1 + 4 x_2 ≥ 1
           x_1, x_2 ≥ 0

This is implemented as a concrete model as follows:
model = ConcreteModel()
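A complete concrete implementation along these lines (the variable and constraint names here are illustrative):

from pyomo.environ import *

model = ConcreteModel()

model.x = Var([1, 2], domain=NonNegativeReals)

model.OBJ = Objective(expr=2*model.x[1] + 3*model.x[2])

model.Constraint1 = Constraint(expr=3*model.x[1] + 4*model.x[2] >= 1)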
Although rule functions can also be used to specify constraints and objectives, in this example we use the expr option
that is available only in concrete models. This option gives a direct specification of the expression.
Pyomo supports modeling and scripting but does not install a solver automatically. In order to solve a model, there
must be a solver installed on the computer to be used. If there is a solver, then the pyomo command can be used to
solve a problem instance.
Suppose that the solver named glpk (also known as glpsol) is installed on the computer. Suppose further that an
abstract model is in the file named abstract1.py and a data file for it is in the file named abstract1.dat.
From the command prompt, with both files in the current directory, a solution can be obtained with the command:
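pyomo solve abstract1.py abstract1.dat --solver=glpk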
Since glpk is the default solver, there really is no need to specify it, so the --solver option can be dropped.
Note: There are two dashes before the command line option names such as solver.
To continue the example, if CPLEX is installed then it can be listed as the solver. The command to solve with CPLEX
is
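pyomo solve abstract1.py abstract1.dat --solver=cplex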
The numbers in square brackets indicate how much time was required for each step. Results are written to the file
named results.json, which has a special structure that makes it useful for post-processing. To see a summary of
results written to the screen, use the --summary option:
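pyomo solve abstract1.py abstract1.dat --solver=glpk --summary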
For a concrete model, no data file is specified on the Pyomo command line.
4.1 Sets
4.1.1 Declaration
Sets can be declared using the Set and RangeSet functions or by assigning set expressions. The simplest set declaration
creates a set and postpones creation of its members:
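model.A = Set()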
To create a set of all the numbers in set model.A doubled, one could use
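One possible sketch, using an initialization rule (the function and set names are illustrative):

def DoubleA_init(model):
    # yield each member of model.A multiplied by two
    return (i * 2 for i in model.A)

model.DoubleA = Set(initialize=DoubleA_init)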
As an aside we note that, as always in Python, there are a lot of ways to accomplish the same thing. Also, note that this
will generate an error if model.A contains elements for which multiplication by two is not defined.
The initialize option can accept any Python iterable, including a set, list, or tuple. This data may be
returned from a function or specified directly as in
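For example (the member values are illustrative):

model.D = Set(initialize=[1, 2, 3, 5, 8])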
The initialize option can also specify either a generator or a function to specify the Set members. In the case of
a generator, all data yielded by the generator will become the initial set members:
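A sketch consistent with the pprint output for X shown below:

def X_init(model):
    # yield the first ten odd numbers
    for i in range(10):
        yield 2*i + 1

model.X = Set(initialize=X_init)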
For initialization functions, Pyomo supports two signatures. In the first, the function returns an iterable (set, list,
or tuple) containing the data with which to initialize the Set:
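For example, matching the Y set printed below:

def Y_init(model):
    return [2*i + 1 for i in range(10)]

model.Y = Set(initialize=Y_init)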
In the second signature, the function is called for each element, passing the element number in as an extra argument.
This is repeated until the function returns the special value Set.End:
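A sketch matching the Z set printed below (members 3, 5, ..., 21):

def Z_init(model, i):
    # called with i = 1, 2, 3, ... until Set.End is returned
    if i > 10:
        return Set.End
    return 2*i + 1

model.Z = Set(initialize=Z_init)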
>>> model.X.pprint()
X : Dim=0, Dimen=1, Size=10, Domain=None, Ordered=False, Bounds=(1, 19)
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
>>> model.Y.pprint()
Y : Dim=0, Dimen=1, Size=10, Domain=None, Ordered=False, Bounds=(1, 19)
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
>>> model.Z.pprint()
Z : Dim=0, Dimen=1, Size=10, Domain=None, Ordered=False, Bounds=(3, 21)
[3, 5, 7, 9, 11, 13, 15, 17, 19, 21]
Additional information about iterators for set initialization is in the [PyomoBookII] book.
Note: For Abstract models, data specified in an input file or through the data argument to create_instance()
will override the data specified by the initialize options.
If sets are given as arguments to Set without keywords, they are interpreted as indexes for an array of sets. For
example, to create an array of sets that is indexed by the members of the set model.A, use:
>>> model.E = Set(model.A)
Arguments can be combined. For example, to create an array of sets, indexed by set model.A where each set contains
three dimensional members, use:
>>> model.F = Set(model.A, dimen=3)
The initialize option can be used to create a set that contains a sequence of numbers, but the RangeSet function
provides a concise mechanism for simple sequences. This function takes as its arguments a start value, a final value,
and a step size. If the RangeSet has only a single argument, then that value defines the final value in the sequence;
the first value and step size default to one. If two values are given, they are the first and last value in the sequence and the
step size defaults to one. For example, the following declaration creates a set with the numbers 1.5, 5 and 8.5:
>>> model.G = RangeSet(1.5, 10, 3.5)
4.1.2 Operations
Sets may also be created by storing the result of set operations using other Pyomo sets. Pyomo supports set operations
including union, intersection, difference, and symmetric difference:
>>> model.I = model.A | model.D # union
>>> model.J = model.A & model.D # intersection
>>> model.K = model.A - model.D # difference
>>> model.L = model.A ^ model.D # exclusive-or
The cross-product operator is the asterisk (*). To define a new set M that is the cross product of sets B and
C, one could use
>>> model.M = model.B * model.C
This creates a virtual set that holds references to the original sets, so any updates to the original sets (B and C) will be
reflected in the new set (M). In contrast, you can also create a concrete set, which directly stores the values of the cross
product at the time of creation and will not reflect subsequent changes to the original sets:
>>> model.M_concrete = Set(initialize=model.B * model.C)
Finally, to indicate that the members of a set are restricted to be in the cross product of two other sets, one can
use the within keyword:
>>> model.N = Set(within=model.B * model.C)
For use in specifying domains for sets, parameters and variables, Pyomo provides the following pre-defined virtual
sets:
• Any = all possible values
• Reals = floating point values
• PositiveReals = strictly positive floating point values
• NonPositiveReals = non-positive floating point values
For example, the following declares a set whose members are restricted to be negative integers:

model.M = Set(within=NegativeIntegers)
Sets provide indexes for parameters, variables and other sets. Index set issues are important for modelers in part
because of efficiency considerations, but primarily because the right choice of index sets can result in very natural
formulations that are conducive to understanding and maintenance. Pyomo leverages Python to provide a rich collection
of options for index set creation and use.
The choice of how to represent indexes often depends on the application and the nature of the instance data that are
expected. To illustrate some of the options and issues, we will consider problems involving networks. In many network
applications, it is useful to declare a set of nodes, such as
model.Nodes = Set()
and then a set of arcs can be created with reference to the nodes.
Consider the following simple version of the minimum cost flow problem:

    minimize     ∑_{a ∈ 𝒜} c_a x_a
    subject to:  S_n + ∑_{(i,n) ∈ 𝒜} x_{(i,n)} − D_n − ∑_{(n,j) ∈ 𝒜} x_{(n,j)} = 0,   ∀ n ∈ 𝒩
                 x_a ≥ 0,   ∀ a ∈ 𝒜

where
• Set: Nodes ≡ 𝒩
• Set: Arcs ≡ 𝒜 ⊆ 𝒩 × 𝒩
• Var: Flow on arc (i, j) ≡ x_{i,j}, (i, j) ∈ 𝒜
• Param: Flow Cost on arc (i, j) ≡ c_{i,j}, (i, j) ∈ 𝒜
• Param: Demand at node i ≡ D_i, i ∈ 𝒩
• Param: Supply at node i ≡ S_i, i ∈ 𝒩
In the simplest case, the arcs can just be the cross product of the nodes, which is accomplished by the definition
model.arcs = model.Nodes*model.Nodes
that creates a set with two dimensional members. For applications where all nodes are always connected to all other
nodes this may suffice. However, issues can arise when the network is not fully dense. For example, the burden of
avoiding flow on arcs that do not exist falls on the data file where high-enough costs must be provided for those arcs.
Such a scheme is not very elegant or robust.
For many network flow applications, it might be better to declare the arcs using
model.Arcs = Set(within=model.Nodes*model.Nodes)
or
model.Arcs = Set(dimen=2)
where the difference is that the first version will provide error checking as data is assigned to the set elements.
This would enable specification of a sparse network in a natural way. But this results in a need to change the
FlowBalance constraint because, as it was written in the simple example, it sums over the entire set of nodes
for each node. One way to remedy this is to sum only over the members of the set model.Arcs as in
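A sketch of such a rule (component names follow the declarations above; the exact form in the original example may differ slightly):

def FlowBalance_rule(model, node):
    # sum only over arcs that actually exist in model.Arcs
    return model.Supply[node] \
        + sum(model.Flow[i, j] for (i, j) in model.Arcs if j == node) \
        - model.Demand[node] \
        - sum(model.Flow[i, j] for (i, j) in model.Arcs if i == node) \
        == 0

model.FlowBalance = Constraint(model.Nodes, rule=FlowBalance_rule)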
This will be OK unless the number of nodes becomes very large for a sparse network, in which case the time to generate this
constraint might become an issue (admittedly, only for very large networks, but such networks do exist).
Another method, which comes in handy in many network applications, is to have a set for each node that contains the
nodes at the other end of arcs going to the node at hand, and another set giving the nodes on out-going arcs. If these
sets are called model.NodesIn and model.NodesOut respectively, then the flow balance rule can be re-written
as
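A version along these lines (a sketch; the supporting NodesIn and NodesOut sets are described just below):

def FlowBalance_rule(model, node):
    return model.Supply[node] \
        + sum(model.Flow[i, node] for i in model.NodesIn[node]) \
        - model.Demand[node] \
        - sum(model.Flow[node, j] for j in model.NodesOut[node]) \
        == 0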
The data for NodesIn and NodesOut could be added to the input file, and this may be the most efficient option.
For all but the largest networks, rather than reading Arcs, NodesIn and NodesOut from a data file, it might
be more elegant to read only Arcs from a data file and declare model.NodesIn with an initialize option
specifying the creation as follows:
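One possible sketch of that initialization:

def NodesIn_init(model, node):
    # collect the tail node of every arc that points into this node
    retval = []
    for (i, j) in model.Arcs:
        if j == node:
            retval.append(i)
    return retval

model.NodesIn = Set(model.Nodes, initialize=NodesIn_init)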
with a similar definition for model.NodesOut. This code creates a list of sets for NodesIn, one set of nodes for
each node. The full model is:
# Isinglecomm.py
# NodesIn and NodesOut are initialized using the Arcs
from pyomo.environ import *
model = AbstractModel()
model.Nodes = Set()
model.Arcs = Set(dimen=2)
model.Demand = Param(model.Nodes)
model.Supply = Param(model.Nodes)
def Obj_rule(model):
    return summation(model.FlowCost, model.Flow)
model.Obj = Objective(rule=Obj_rule, sense=minimize)
set Arcs :=
CityA CityB
CityA CityC
CityC CityB
;
param : FlowCost :=
CityA CityB 1.4
param Demand :=
CityA 0
CityB 1
CityC 1
;
param Supply :=
CityA 2
CityB 0
CityC 0
;
This can be done somewhat more efficiently, and perhaps more clearly, using a BuildAction as shown in Isinglebuild.py.
There are many ways to accomplish this, but one good way is to create a set of tuples composed of all (k, v) pairs
with k in model.K and v in model.V[k]. This can be done as follows:
def kv_init(model):
    return ((k,v) for k in model.K for v in model.V[k])

model.KV = Set(dimen=2, initialize=kv_init)

model.MyConstraint = Constraint(model.I, model.KV, rule=c1Rule)
model = AbstractModel()

model.I = Set()
model.K = Set()
model.V = Set(model.K)
model.a = Param(model.I, model.K)  # data referenced by the constraint rule below

def kv_init(model):
    return ((k,v) for k in model.K for v in model.V[k])

model.KV = Set(dimen=2, initialize=kv_init)

model.y = Var(model.I)
model.x = Var(model.I, model.KV)

# include a constraint
# x[i,k,v] <= a[i,k]*y[i], for i in model.I, k in model.K, v in model.V[k]

def c1Rule(model, i, k, v):
    return model.x[i,k,v] <= model.a[i,k]*model.y[i]

model.c1 = Constraint(model.I, model.KV, rule=c1Rule)
4.2 Parameters
The word “parameters” is used in many settings. When discussing a Pyomo model, we use the word to refer to data
that must be provided in order to find an optimal (or good) assignment of values to the decision variables. Parameters
are declared with the Param function, which takes arguments that are somewhat similar to the Set function. For
example, the following code snippet declares sets model.A, model.B and then a parameter array model.P that is
indexed by model.A:
model.A = Set()
model.B = Set()
model.P = Param(model.A, model.B)
In addition to sets that serve as indexes, the Param function takes the following options:
• default = The value absent any other specification.
• doc = String describing the parameter
• initialize = A function (or Python object) that returns the members to initialize the parameter values.
• validate = A boolean function with arguments that are the prospective parameter value, the parameter indices
and the model.
• within = Set used for validation; it specifies the domain of the parameter values.
These options perform in the same way as they do for Set. For example, suppose that model.A = RangeSet(1, 3).
Then there are many ways to create a parameter that is a square matrix with 9, 16, 25 on the main diagonal and zeros
elsewhere; here are two ways to do it. First, using a Python object to initialize:
v={}
v[1,1] = 9
v[2,2] = 16
v[3,3] = 25
model.S = Param(model.A, model.A, initialize=v, default=0)
And now using an initialization function that is automatically called once for each index tuple (remember that we are
assuming that model.A contains 1,2,3)
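One possible sketch of such a function (the name model.S2 is used here only to avoid redeclaring model.S from the snippet above):

def s_init(model, i, j):
    # diagonal entries 9, 16, 25 for i = 1, 2, 3; zeros elsewhere
    if i == j:
        return (i + 2) ** 2
    return 0.0

model.S2 = Param(model.A, model.A, initialize=s_init)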
In this example, the index set contained integers, but index sets need not be numeric. It is very common to use strings.
Note: Data specified in an input file will override the data specified by the initialize options.
Parameter values can be checked by a validation function. In the following example, the parameter S is indexed by
model.A and is checked to be greater than 3.14159. If a value is provided that is less than that, the model instantiation
would be terminated and an error message issued. The function used to validate should be written so as to return True
if the data is valid and False otherwise.
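A sketch of such a check (written as a standalone snippet that re-declares S):

def s_validate(model, v, i):
    # v is the prospective value for index i; reject anything not above pi
    return v > 3.14159

model.S = Param(model.A, validate=s_validate)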
4.3 Variables
Variables are intended to ultimately be given values by an optimization package. They are declared and optionally
bounded, given initial values, and documented using the Pyomo Var function. If index sets are given as arguments to
this function they are used to index the variable. Other optional directives include:
• bounds = A function (or Python object) that gives a (lower,upper) bound pair for the variable
• domain = A set that is a super-set of the values the variable can take on.
• initialize = A function (or Python object) that gives a starting value for the variable; this is particularly important
for non-linear models
• within = (synonym for domain)
The following code snippet illustrates some aspects of these options by declaring a singleton (i.e. unindexed) variable
named model.LumberJack that will take on real values between zero and 6 and is initialized to be 1.5:
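One way to write this declaration:

model.LumberJack = Var(within=NonNegativeReals, bounds=(0, 6), initialize=1.5)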
Instead of the initialize option, initialization is sometimes done with a Python assignment statement as in
model.LumberJack = 1.5
For indexed variables, bounds and initial values are often specified by a rule (a Python function) that itself may make
reference to parameters or other data. The formal arguments to these rules begin with the model followed by the
indexes. This is illustrated in the following code snippet that makes use of Python dictionaries declared as lb and ub
that are used by a function to provide bounds:
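A sketch, where the index set members, the bound values, and the variable name are illustrative assumptions:

model.ProductNames = Set(initialize=['scones', 'tea'])  # hypothetical index set

lb = {'scones': 2, 'tea': 4}  # hypothetical lower bounds
ub = {'scones': 5, 'tea': 7}  # hypothetical upper bounds

def fb(model, i):
    # return a (lower, upper) bound pair for index i
    return (lb[i], ub[i])

model.PriceToCharge = Var(model.ProductNames, domain=PositiveIntegers, bounds=fb)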
Note: Many of the pre-defined virtual sets that are used as domains imply bounds. A strong example is the set
Boolean that implies bounds of zero and one.
4.4 Objectives
An objective is a function of variables that returns a value that an optimization package attempts to maximize or
minimize. The Objective function in Pyomo declares an objective. Although other mechanisms are possible, this
function is typically passed the name of another function that gives the expression. Here is a very simple version of
such a function that assumes model.x has previously been declared as a Var:
def ObjRule(model):
    return 2*model.x[1] + 3*model.x[2]

model.g = Objective(rule=ObjRule)
It is more common for an objective function to refer to parameters as in this example that assumes that model.p has
been declared as a Param and that model.x has been declared with the same index set, while model.y has been
declared as a singleton:
def profrul(model):
    return summation(model.p, model.x) + model.y

model.Obj = Objective(rule=profrul, sense=maximize)
This example uses the sense option to specify maximization. The default sense is minimize.
4.5 Constraints
Most constraints are specified using equality or inequality expressions that are created using a rule, which is a Python
function. For example, if the variable model.x has the indexes ‘butter’ and ‘scones’, then this constraint limits the
sum over these indexes to be exactly three:
def teaOKrule(model):
    return model.x['butter'] + model.x['scones'] == 3

model.TeaConst = Constraint(rule=teaOKrule)
Instead of expressions involving equality (==) or inequalities (<= or >=), constraints can also be expressed using a
3-tuple of the form (lb, expr, ub) where lb and ub can be None, which is interpreted as lb <= expr <= ub. Variables
can appear only in the middle expr. For example, the following two constraint declarations have the same meaning:
model.x = Var()

def aRule(model):
    return model.x >= 2
model.Boundx = Constraint(rule=aRule)

def bRule(model):
    return (2, model.x, None)
model.boundx = Constraint(rule=bRule)
For this simple example, it would also be possible to declare model.x with a bounds option to accomplish the same
thing.
Constraints (and objectives) can be indexed by lists or sets. When the declaration contains lists or sets as arguments,
the elements are iteratively passed to the rule function. If there is more than one, then the cross product is sent. For
example the following constraint could be interpreted as placing a budget of 𝑖 on the 𝑖th item to buy where the cost
per item is given by the parameter model.a:
model.A = RangeSet(1,10)
model.a = Param(model.A, within=PositiveReals)
model.ToBuy = Var(model.A)
def bud_rule(model, i):
    return model.a[i]*model.ToBuy[i] <= i

model.aBudget = Constraint(model.A, rule=bud_rule)
Note: Python and Pyomo are case sensitive so model.a is not the same as model.A.
4.6 Expressions
In this section, we use the word “expression” in two ways: first in the general sense of the word and second to describe
a class of Pyomo objects that have the name Expression as described in the subsection on expression objects.
Both objectives and constraints make use of rules to generate expressions. These are Python functions that return the
appropriate expression. These are first-class functions that can access global data as well as data passed in, including
the model object.
Operations on model elements result in expressions, which seems natural in expressions like the constraints we have
seen so far. It is also possible to build up expressions. The following example illustrates this, along with a reference
to global Python data in the form of a Python variable called switch:
switch = 3
def pi_rule(model):
    accexpr = summation(model.c, model.x)
    if switch >= 2:
        accexpr = accexpr - model.d
    return accexpr >= 0.5

model.PieSlice = Constraint(rule=pi_rule)
In this example, the constraint that is generated depends on the value of the Python variable called switch. If the
value is 2 or greater, then the constraint is summation(model.c, model.x) - model.d >= 0.5; other-
wise, the model.d term is not present.
Warning: Because model elements result in expressions, not values, the following does not work as expected in
an abstract model!
model.A = RangeSet(1, 10)
model.c = Param(model.A)
model.d = Param()
model.x = Var(model.A, domain=Boolean)

def pi_rule(model):
    accexpr = summation(model.c, model.x)
    if model.d >= 2: # NOT in an abstract model!!
        accexpr = accexpr - model.d
    return accexpr >= 0.5

model.PieSlice = Constraint(rule=pi_rule)
The trouble is that model.d >= 2 results in an expression, not its evaluated value. Instead use if
value(model.d) >= 2
Note: Pyomo supports non-linear expressions and can call non-linear solvers such as Ipopt.
Pyomo has facilities to add piecewise constraints of the form y=f(x) for a variety of forms of the function f.
The piecewise types other than SOS2, BIGM_SOS1, and BIGM_BIN are implemented as described in the paper
[Vielma_et_al].
There are two basic forms for the declaration of the constraint:
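In sketch form (pwconst, indexes, yvar, and xvar are placeholders for application-specific names):

model.pwconst = Piecewise(indexes, yvar, xvar, **Keywords)
model.pwconst = Piecewise(yvar, xvar, **Keywords)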
where pwconst can be replaced by a name appropriate for the application. The choice depends on whether the x and
y variables are indexed. If so, they must have the same index sets and these sets are given as the first arguments.
Keywords:
Note: pw_pts is always required. These give the breakpoints for the piecewise function and are expected to
fully span the bounds for the independent variable(s).
• pw_repn=<Option>
Indicates the type of piecewise representation to use. This can have a major impact on solver performance.
Options: (Default “SOS2”)
– “SOS2” - Standard representation using sos2 constraints.
– “BIGM_BIN” - BigM constraints with binary variables. The theoretically tightest M values are automati-
cally determined.
– “BIGM_SOS1” - BigM constraints with sos1 variables. The theoretically tightest M values are automati-
cally determined.
– “DCC” - Disaggregated convex combination model.
– “DLOG” - Logarithmic disaggregated convex combination model.
– “CC” - Convex combination model.
– “LOG” - Logarithmic branching convex combination.
– “MC” - Multiple choice model.
– “INC” - Incremental (delta) method.
Note: Step functions are supported for all but the two BIGM options. Refer to the ‘force_pw’ option.
• pw_constr_type= <Option>
Indicates the bound type of the piecewise function. Options:
– “UB” - y variable is bounded above by piecewise function.
– “LB” - y variable is bounded below by piecewise function.
– “EQ” - y variable is equal to the piecewise function.
• f_rule=f(model,i,j,...,x), { }, [ ], ( )
An object that returns a numeric value that is the range value corresponding to each piecewise domain point.
For functions, the first argument must be a Pyomo model. The last argument is the domain value at which the
function evaluates (not a Pyomo Var). Intermediate arguments are the corresponding indices of the Piecewise
component (if any). Otherwise, the object can be a dictionary of lists/tuples (with keys the same as the indexing
set) or a single list/tuple (when no indexing set is used or when all indices use an identical piecewise function).
Examples:
# A nonlinear function
f = lambda model,x : exp(x) + value(model.p)
# A step function
f = [0,0,1,1,2,2]
• force_pw=True/False
Using the given function rule and pw_pts, a check for convexity/concavity is implemented. If (1) the function
is convex and the piecewise constraints are lower bounds or if (2) the function is concave and the piecewise
constraints are upper bounds then the piecewise constraints will be substituted for linear constraints. Setting
‘force_pw=True’ will force the use of the original piecewise constraints even when one of these two cases
applies.
• warning_tol=<float>
To aid in debugging, a warning is printed when consecutive slopes of piecewise segments are within <warning_tol>
of each other. Default=1e-8
• warn_domain_coverage=True/False
Print a warning when the feasible region of the domain variable is not completely covered by the piecewise
breakpoints. Default=True
• unbounded_domain_var=True/False
Allow an unbounded or partially bounded Pyomo Var to be used as the domain variable. Default=False
Note: This does not imply unbounded piecewise segments will be constructed. The outermost piecewise
breakpoints will bound the domain variable at each index. However, the Var attributes .lb and .ub will not be
modified.
Here is an example of an assignment to a Python dictionary variable that has keywords for a piecewise constraint:
kwds = {'pw_constr_type':'EQ','pw_repn':'SOS2','sense':maximize,'force_pw':True}
Here is a simple example based on the example given earlier in Symbolic Index Sets. In this new example, the objective
function is the sum of c times x to the fourth. In this example, the keywords are passed directly to the Piecewise
function without being assigned to a dictionary variable. The upper bound on the x variables was chosen whimsically
just to make the example. The important thing to note is that variables that are going to appear as the independent
variable in a piecewise constraint must have bounds.
# abstract2piece.py
# Similar to abstract2.py, but the objective is now c times x to the fourth power
model = AbstractModel()
model.I = Set()
model.J = Set()
def obj_expression(model):
return summation(model.c, model.y)
model.OBJ = Objective(rule=obj_expression)
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
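A minimal sketch of the piecewise pieces that go with this model; the breakpoints, bounds, and component names here are illustrative assumptions rather than the values from the original file:

model.Topx = Param(default=6.1)  # assumed upper bound on x, so the breakpoints can span it
model.x = Var(model.J, domain=NonNegativeReals, bounds=(0, model.Topx))
model.y = Var(model.J, domain=NonNegativeReals)

def f4(model, j, xp):
    # range value of the piecewise function at domain point xp
    return xp**4

model.ComputePieces = Piecewise(model.J, model.y, model.x,
                                pw_pts=[0.0, 1.5, 3.0, 4.5, 6.1],
                                pw_constr_type='EQ',
                                f_rule=f4)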
Pyomo Expression objects are very similar to the Param component (with mutable=True) except that the
underlying values can be numeric constants or Pyomo expressions. Here’s an illustration of expression objects in an
AbstractModel. An expression object with an index set that is the numbers 1, 2, 3 is created and initialized to be the
model variable x times the index. Later in the model file, just to illustrate how to do it, the expression is changed but
just for the first index to be x squared.
model = ConcreteModel()
model.x = Var(initialize=1.0)
def _e(m, i):
    return m.x*i
model.e = Expression([1, 2, 3], rule=_e)
instance = model.create_instance()
#... solve
#... load results
# print the value of the expression given the loaded optimal solution
print (value(instance.e[1]))
An alternative is to create Python functions that, potentially, manipulate model objects. E.g., if you define a function
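For example, a hypothetical function f:

def f(x, p):
    # works on plain numbers and, via operator overloading, on Pyomo modeling objects
    return 3*x + 4*p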
You can call this function with or without Pyomo modeling components as the arguments. E.g., f(2,3) will return a
number, whereas f(model.x, 3) will return a Pyomo expression due to operator overloading.
If you take this approach you should note that anywhere a Pyomo expression is used to generate another expression
(e.g., f(model.x, 3) + 5), the initial expression is always cloned so that the new generated expression is independent of
the old. For example:
model = ConcreteModel()
model.x = Var()
If you want to create an expression that is shared between other expressions, you can use the Expression compo-
nent.
4.7 Suffixes
Suffixes provide a mechanism for declaring extraneous model data, which can be used in a number of contexts. Most
commonly, suffixes are used by solver plugins to store extra information about the solution of a model. This and other
suffix functionality is made available to the modeler through the use of the Suffix component class. Uses of Suffix
include:
• Importing extra information from a solver about the solution of a mathematical program (e.g., constraint duals,
variable reduced costs, basis information).
• Exporting information to a solver or algorithm to aid in solving a mathematical program (e.g., warm-starting
information, variable branching priorities).
• Tagging modeling components with local data for later use in advanced scripting algorithms.
The Suffix component used in Pyomo has been adapted from the suffix notation used in the modeling language AMPL
[AMPL]. Therefore, it follows naturally that AMPL style suffix functionality is fully available using Pyomo’s NL file
interface. For information on AMPL style suffixes the reader is referred to the AMPL website:
http://www.ampl.com
A number of scripting examples that highlight the use of AMPL style suffix functionality are available in the
examples/pyomo/suffixes directory distributed with Pyomo.
4.7.2 Declaration
The effects of declaring a Suffix component on a Pyomo model are determined by the following traits:
• direction: This trait defines the direction of information flow for the suffix. A suffix direction can be assigned
one of four possible values:
– LOCAL - suffix data stays local to the modeling framework and will not be imported or exported by a
solver plugin (default)
– IMPORT - suffix data will be imported from the solver by its respective solver plugin
– EXPORT - suffix data will be exported to a solver by its respective solver plugin
– IMPORT_EXPORT - suffix data flows in both directions between the model and the solver or algorithm
• datatype: This trait advertises the type of data held on the suffix for those interfaces where it matters (e.g., the
NL file interface). A suffix datatype can be assigned one of three possible values:
– FLOAT - the suffix stores floating point data (default)
– INT - the suffix stores integer data
– None - the suffix stores any type of data
Note: Exporting suffix data through Pyomo’s NL file interface requires all active export suffixes have a strict datatype
(i.e., datatype=None is not allowed).
The following code snippet shows examples of declaring a Suffix component on a Pyomo model:
model = ConcreteModel()
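A few sketches of typical declarations (the component names are illustrative):

# local suffix with the default direction (LOCAL) and datatype (FLOAT)
model.foo = Suffix()
# suffix whose data will be imported from the solver
model.bar = Suffix(direction=Suffix.IMPORT)
# exported suffix holding integer data
model.baz = Suffix(direction=Suffix.EXPORT, datatype=Suffix.INT)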
Declaring a Suffix with a non-local direction on a model is not guaranteed to be compatible with all solver plugins
in Pyomo. Whether a given Suffix is acceptable or not depends on both the solver and solver interface being used.
In some cases, a solver plugin will raise an exception if it encounters a Suffix type that it does not handle, but this
is not true in every situation. For instance, the NL file interface is generic to all AMPL-compatible solvers, so there
is no way to validate that a Suffix of a given name, direction, and datatype is appropriate for a solver. One should
be careful in verifying that Suffix declarations are being handled as expected when switching to a different solver or
solver interface.
4.7.3 Operations
The Suffix component class provides a dictionary interface for mapping Pyomo modeling components to arbitrary
data. This mapping functionality is captured within the ComponentMap base class, which is also available within
Pyomo’s modeling environment. The ComponentMap can be used as a more lightweight replacement for Suffix in
cases where a simple mapping from Pyomo modeling components to arbitrary data values is required.
Note: ComponentMap and Suffix use the built-in id() function for hashing entry keys. This design decision arises
from the fact that most of the modeling components found in Pyomo are either not hashable or use a hash based on a
mutable numeric value, making them unacceptable for use as keys with the built-in dict class.
Warning: The use of the built-in id() function for hashing entry keys in ComponentMap and Suffix makes them
inappropriate for use in situations where built-in object types must be used as keys. It is strongly recommended
that only Pyomo modeling components be used as keys in these mapping containers (Var, Constraint, etc.).
Warning: Do not attempt to pickle or deepcopy instances of ComponentMap or Suffix unless doing so along with
the components for which they hold mapping entries. As an example, placing one of these objects on a model and
then cloning or pickling that model is an acceptable scenario.
In addition to the dictionary interface provided through the ComponentMap base class, the Suffix component class
also provides a number of methods whose default semantics are more convenient for working with indexed modeling
components. The easiest way to highlight this functionality is through the use of an example.
model = ConcreteModel()
model.x = Var()
model.y = Var([1, 2, 3])
model.foo = Suffix()
In this example we have a concrete Pyomo model with two different types of variable components (indexed and non-
indexed) as well as a Suffix declaration (foo). The next code snippet shows examples of adding entries to the suffix
foo.
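A sketch of such additions using the setValue method discussed below (the values are arbitrary):

model.foo.setValue(model.x, 1.0)   # one entry for the unindexed variable x
model.foo.setValue(model.y, 25.0)  # by default, one entry for each index of y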
# The same operation using the dict interface results in an entry only
# for the parent component model.y
model.foo[model.y] = 50.0
In this example we highlight the fact that the __setitem__ and setValue entry methods can be used interchangeably
except in the case where indexed components are used (model.y). In the indexed case, the __setitem__
approach creates a single entry for the parent indexed component itself, whereas the setValue approach by default
creates an entry for each index of the component. This behavior can be controlled using the optional keyword ‘expand’,
where assigning it a value of False results in the same behavior as __setitem__.
Other operations like accessing or removing entries in our mapping can be performed as if the built-in dict class is in
use.
del model.foo[model.y]
try:
    print(model.foo[model.y]) # -> raise KeyError
except KeyError:
    pass
The non-dict method clear_value can be used in place of __delitem__ to remove entries, where it inherits the
same default behavior as setValue for indexed components and does not raise a KeyError when the argument does
not exist as a key in the mapping.
model.foo.clear_value(model.y)

try:
    print(model.foo[model.y[1]]) # -> raise KeyError
except KeyError:
    pass

try:
    del model.foo[model.y[1]] # -> raise KeyError
except KeyError:
    pass
clear_value(component, expand=True)
Clears suffix information for a component.
setAllValues(value)
Sets the value of this suffix on all components.
updateValues(data_buffer, expand=True)
Updates the suffix data given a list of component,value tuples. Provides
an improvement in efficiency over calling setValue on every component.
getDatatype()
Return the suffix datatype.
setDatatype(datatype)
Set the suffix datatype.
getDirection()
Return the suffix direction.
setDirection(direction)
Set the suffix direction.
importEnabled()
Returns True when this suffix is enabled for import from solutions.
exportEnabled()
Returns True when this suffix is enabled for export to solvers.
Importing suffix information from a solver solution is achieved by declaring a Suffix component with the appropriate
name and direction. Suffix names available for import may be specific to third-party solvers as well as individual
solver interfaces within Pyomo. The most common of these, available with most solvers and solver interfaces, is
constraint dual multipliers. Requesting that duals be imported into suffix data can be accomplished by declaring a
Suffix component on the model.
from pyomo.environ import *
model = ConcreteModel()
model.dual = Suffix(direction=Suffix.IMPORT)
model.x = Var()
model.obj = Objective(expr=model.x)
model.con = Constraint(expr=model.x>=1.0)
The existence of an active suffix with the name dual that has an import style suffix direction will cause constraint
dual information to be collected into the solver results (assuming the solver supplies dual information). In addition
to this, after loading solver results into a problem instance (using a python script or Pyomo callback functions in
conjunction with the pyomo command), one can access the dual values associated with constraints using the dual
Suffix component.
SolverFactory('glpk').solve(model)
print(model.dual[model.con]) # -> 1.0
Alternatively, the pyomo option --solver-suffixes can be used to request suffix information from a solver. In
the event that suffix names are provided via this command-line option, the pyomo script will automatically declare
these Suffix components on the constructed instance making these suffixes available for import.
Exporting suffix data is accomplished in a similar manner as to that of importing suffix data. One simply needs to
declare a Suffix component on the model with an export style suffix direction and associate modeling component
values with it. The following example shows how one can declare a special ordered set of type 1 using AMPL-style
suffix notation in conjunction with Pyomo’s NL file interface.
from pyomo.environ import *
model = ConcreteModel()
model.y = Var([1,2,3],within=NonNegativeReals)
model.sosno = Suffix(direction=Suffix.EXPORT)
model.ref = Suffix(direction=Suffix.EXPORT)
Most AMPL-compatible solvers will recognize the suffix names sosno and ref as declaring a special ordered set,
where a positive value for sosno indicates a special ordered set of type 1 and a negative value indicates a special
ordered set of type 2.
Note: Pyomo provides the SOSConstraint component for declaring special ordered sets, which is recognized by all
solver interfaces, including the NL file interface.
Pyomo’s NL file interface will recognize an EXPORT style Suffix component with the name ‘dual’ as supplying
initializations for constraint multipliers. As such, it will be treated separately from all other EXPORT style suffixes
encountered in the NL writer, which are treated as AMPL-style suffixes. The following example script shows how one
can warmstart the interior-point solver Ipopt by supplying both primal (variable values) and dual (suffixes) solution
information. This dual suffix information can be both imported and exported using a single Suffix component with an
IMPORT_EXPORT direction.
### Create the ipopt solver plugin using the ASL interface
solver = 'ipopt'
solver_io = 'nl'
stream_solver = True # True prints solver output to screen
keepfiles = False # True prints intermediate file names (.nl,.sol,...)
opt = SolverFactory(solver,solver_io=solver_io)
if opt is None:
    print("")
    print("ERROR: Unable to create solver plugin for %s "
          "using the %s interface" % (solver, solver_io))
    print("")
    exit(1)
###
###
### Send the model and suffix information to ipopt and collect the solution
print("")
print("WARM-STARTED SOLVE")
# The solver plugin will scan the model for all active suffixes
# valid for importing, which it will load into the model
opt.solve(model,keepfiles=keepfiles,tee=stream_solver)
###
The difference in performance can be seen by examining Ipopt’s iteration log with and without warm starting:
• Without Warmstart:
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.6109693e+01 1.12e+01 5.28e-01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.6982239e+01 7.30e-01 1.02e+01 -1.0 6.11e-01 - 7.19e-02 1.00e+00f 1
2 1.7318411e+01 3.60e-02 5.05e-01 -1.0 1.61e-01 - 1.00e+00 1.00e+00h 1
3 1.6849424e+01 2.78e-01 6.68e-02 -1.7 2.85e-01 - 7.94e-01 1.00e+00h 1
4 1.7051199e+01 4.71e-03 2.78e-03 -1.7 6.06e-02 - 1.00e+00 1.00e+00h 1
5 1.7011979e+01 7.19e-03 8.50e-03 -3.8 3.66e-02 - 9.45e-01 9.98e-01h 1
6 1.7014271e+01 1.74e-05 9.78e-06 -3.8 3.33e-03 - 1.00e+00 1.00e+00h 1
7 1.7014021e+01 1.23e-07 1.82e-07 -5.7 2.69e-04 - 1.00e+00 1.00e+00h 1
8 1.7014017e+01 1.77e-11 2.52e-11 -8.6 3.32e-06 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 8
• With Warmstart:
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.7014032e+01 2.00e-06 4.07e-06 -6.0 0.00e+00 - 0.00e+00 0.00e+00 0
Number of Iterations....: 2
In order to allow the declaration of suffix data within the framework of an AbstractModel, the Suffix component can
be initialized with an optional construction rule. As with constraint rules, this function will be executed at the time
of model construction. The following simple example highlights the use of the rule keyword in suffix initialization.
Suffix rules are expected to return an iterable of (component, value) tuples, where the expand=True semantics are
applied for indexed components.
model = AbstractModel()
model.x = Var()
model.c = Constraint(expr= model.x >= 1)
def foo_rule(m):
    return ((m.x, 2.0), (m.c, 3.0))
model.foo = Suffix(rule=foo_rule)
The next example shows an abstract model where suffixes are attached only to the variables:
model = AbstractModel()
model.I = RangeSet(1,4)
model.x = Var(model.I)
def c_rule(m, i):
    return m.x[i] >= i
model.c = Constraint(model.I, rule=c_rule)
def foo_rule(m):
    return ((m.x[i], 3.0*i) for i in m.I)
model.foo = Suffix(rule=foo_rule)
If you have a ConcreteModel, add these lines at the bottom of your Python script to solve it
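A minimal sketch of those lines, assuming pyomo.environ has been imported as pyo and the GLPK solver is installed:
opt = pyo.SolverFactory('glpk')
opt.solve(model)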
If you have an AbstractModel, you must create a concrete instance of your model before solving it using the same
lines as above:
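For example, under the same assumptions:
instance = model.create_instance()
opt = pyo.SolverFactory('glpk')
opt.solve(instance)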
To solve a ConcreteModel contained in the file my_model.py using the pyomo command and the solver GLPK, use
the following line in a terminal window:
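Presumably something along the lines of:
pyomo solve my_model.py --solver='glpk'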
To solve an AbstractModel contained in the file my_model.py with data in the file my_data.dat using the pyomo
command and the solver GLPK, use the following line in a terminal window:
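and, presumably, with the data file added to the command line:
pyomo solve my_model.py my_data.dat --solver='glpk'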
Pyomo supports a wide variety of solvers. Pyomo has specialized interfaces to some solvers (for example, BARON,
CBC, CPLEX, and Gurobi). It also has generic interfaces that support calling any solver that can read AMPL “.nl”
and write “.sol” files and the ability to generate GAMS-format models and retrieve the results. You can get the
current list of supported solvers using the pyomo command:
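(presumably; the exact option name may vary by release)
pyomo help --solvers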
This section gives an overview of commonly used scripting commands when working with Pyomo models. These
commands must be applied to a concrete model instance, in other words, an instantiated model.
To illustrate Python scripts for Pyomo we consider an example that is in the file iterative1.py and is executed
using the command
python iterative1.py
Note: This is a Python script that contains elements of Pyomo, so it is executed using the python command. The
pyomo command can be used, but then there will be some strange messages at the end when Pyomo finishes the script
and attempts to send the results to a solver, which is what the pyomo command does.
This script creates a model, solves it, and then adds a constraint to preclude the solution just found. This process
is repeated, so the script finds and prints multiple solutions. The particular model it creates is just the sum of four
binary variables. One does not need a computer to solve the problem or even to iterate over solutions. This example is
provided just to illustrate some elementary aspects of scripting.
# iterative1.py
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = pyo.SolverFactory('glpk')
#
# A simple model with binary variables and
# an empty constraint list.
#
model = pyo.AbstractModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
model.c = pyo.ConstraintList()
Let us now analyze this script. The first line is a comment that happens to give the name of the file. This is followed by
two lines that import symbols for Pyomo. The pyomo namespace is imported as pyo. Therefore, pyo. must precede
each use of a Pyomo name.
# iterative1.py
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
An object to perform optimization is created by calling SolverFactory with an argument giving the name of the
solver. The argument would be 'gurobi' if, e.g., Gurobi was desired instead of glpk:
# Create a solver
opt = pyo.SolverFactory('glpk')
The next lines after a comment create a model. For our discussion here, we will refer to this as the base model because
it will be extended by adding constraints later. (The words “base model” are not reserved words, they are just being
introduced for the discussion of this example). There are no constraints in the base model, but that is just to keep it
simple. Constraints could be present in the base model. Even though it is an abstract model, the base model is fully
specified by these commands because it requires no external data:
model = pyo.AbstractModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
The next line is not part of the base model specification. It creates an empty constraint list that the script will use to
add constraints.
model.c = pyo.ConstraintList()
The next non-comment line creates the instantiated model and refers to the instance object with a Python variable
instance. Models run using the pyomo script do not typically contain this line because model instantiation is done
by the pyomo script. In this example, the create_instance function is called without arguments because none are needed;
however, the name of a file with data commands is given as an argument in many scripts.
instance = model.create_instance()
The next line invokes the solver and refers to the object containing the results with the Python variable results.
results = opt.solve(instance)
The solve function loads the results into the instance, so the next line writes out the updated values.
instance.display()
The next non-comment line is a Python iteration command that will successively assign the integers from 0 to 4 to the
Python variable i, although that variable is not used in the script. This loop is what causes the script to generate five more
solutions:
for i in range(5):
An expression is built up in the Python variable named expr. The Python variable j will be iteratively assigned all
of the indexes of the variable x. For each index, the value of the variable (which was loaded by the load method
just described) is tested to see if it is zero and the expression in expr is augmented accordingly. Although expr is
initialized to 0 (an integer), its type will change to be a Pyomo expression when it is assigned expressions involving
Pyomo variable objects:
expr = 0
for j in instance.x:
    if pyo.value(instance.x[j]) == 0:
        expr += instance.x[j]
    else:
        expr += (1-instance.x[j])
During the first iteration (when i is 0), we know that all values of x will be 0, so we can anticipate what the expression
will look like. We know that x is indexed by the integers from 1 to 4, so we know that j will take on the values from
1 to 4, and we also know that every value of x will be zero for every index, so we know that the value of expr will be
something like 0 + instance.x[1] + instance.x[2] + instance.x[3] + instance.x[4].
The value of j will be evaluated because it is a Python variable; however, because instance.x[j] is a Pyomo variable, its
value will not be used; instead, the variable object itself will appear in the expression. That is exactly what we
want in this case. When we wanted to use the current value in the if statement, we used the value function to get it.
The next line adds to the constraint list called c the requirement that the expression be greater than or equal to one:
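(a sketch of that line)
instance.c.add( expr >= 1 )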
The proof that this precludes the last solution is left as an exercise for the reader.
The final lines in the outer for loop find a solution and display it:
results = opt.solve(instance)
print ("\n===== iteration",i)
instance.display()
Note: The assignment of the solve output to a results object is somewhat anachronistic. Many scripts simply call
opt.solve(instance), since the results are loaded into the instance by default, leaving the results object with little of
interest. If, for some reason, you want the results to stay in the results object and not be loaded into the instance, you
can ask the solve call not to load the solution automatically. This approach can be useful if there is a concern that the
solver did not terminate with an optimal solution.
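For example, a sketch along these lines (assuming the load_solutions keyword to solve() and the TerminationCondition enumeration from pyomo.opt):
from pyomo.opt import TerminationCondition

results = opt.solve(instance, load_solutions=False)
if results.solver.termination_condition == TerminationCondition.optimal:
    # manually load the solution into the instance
    instance.solutions.load_from(results)
else:
    print("Solution is not optimal")
    # something else, such as re-solving with different options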
The iterative1.py example above illustrates how a model can be changed and then re-solved. In that example,
the model is changed by adding a constraint, but the model could also be changed by altering the values of parameters.
Note, however, that in these examples, we make the changes to the concrete model instances. This is particularly
important for AbstractModel users, as this implies working with the instance object rather than the model
object, which allows us to avoid creating a new model object for each solve. Here is the basic idea for users of an
AbstractModel:
1. Create an AbstractModel (suppose it is called model)
2. Call model.create_instance() to create an instance (suppose it is called instance)
3. Solve instance
Note: Users of ConcreteModel typically name their models model, which can cause confusion to novice
readers of documentation. Examples based on an AbstractModel will refer to instance where users of a
ConcreteModel would typically use the name model.
If instance has a parameter whose name is Theta that was declared to be mutable (i.e., mutable=True) with
an index that contains idx, then the value in NewVal can be assigned to it as shown in the sketch below.
For a singleton parameter named sigma (i.e., one that is not indexed), the assignment can be made similarly.
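Hedged sketches of both assignments (Theta, idx, NewVal, and sigma are the placeholder names from the text above):
instance.Theta[idx] = NewVal   # indexed, mutable Param
instance.sigma = NewVal        # singleton, mutable Param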
Note: If the Param is not declared to be mutable, an error will occur if an assignment to it is attempted.
For more information about access to Pyomo parameters, see the section in this document on Param access Accessing
Parameter Values. Note that for concrete models, the model is the instance.
Instead of changing model data, scripts are often used to fix variable values. The following example illustrates this.
# iterative2.py
# Create a solver
opt = pyo.SolverFactory('cplex')
#
# A simple model with binary variables and
# an empty constraint list.
#
model = pyo.AbstractModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
model.c = pyo.ConstraintList()
if pyo.value(instance.x[2]) == 0:
    instance.x[2].fix(1)
else:
    instance.x[2].fix(0)
results = opt.solve(instance)
instance.display()
In this example, the variables are binary. The model is solved and then the value of instance.x[2] is fixed to the
opposite of its current value before solving the model again. The main lines of interest are:
if pyo.value(instance.x[2]) == 0:
    instance.x[2].fix(1)
else:
    instance.x[2].fix(0)
results = opt.solve(instance)
This could also have been accomplished by setting the upper and lower bounds:
>>> if pyo.value(instance.x[2]) == 0:
...     instance.x[2].setlb(1)
...     instance.x[2].setub(1)
... else:
...     instance.x[2].setlb(0)
...     instance.x[2].setub(0)
Notice that when using the bounds, we do not set fixed to True because that would fix the variable at whatever
value it presently has and then the bounds would be ignored by the solver.
For more information about access to Pyomo variables, see the section in this document on Var access Accessing
Variable Values.
Note that
>>> instance.x.fix(2)
is equivalent to
>>> instance.x.value = 2
>>> instance.x.fixed = True
and
>>> instance.x.fix()
is equivalent to
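(a sketch of the equivalent assignment, fixing the variable at whatever value it currently holds)
>>> instance.x.fixed = True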
One can add terms to an objective function of a ConcreteModel (or an instantiated AbstractModel) using the
expr attribute of the objective function object. Here is a simple example:
>>> import pyomo.environ as pyo
>>> from pyomo.opt import SolverFactory
>>> model.pprint()
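A hedged sketch of the idea (the model, variable, and added term here are purely illustrative):
>>> model = pyo.ConcreteModel()
>>> model.x = pyo.Var()
>>> model.obj = pyo.Objective(expr=model.x)
>>> model.obj.expr += 10.0 * model.x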
Multiple objectives can be declared, but only one can be active at a time (at present, Pyomo does not support any
solvers that can be given more than one objective). If both model.obj1 and model.obj2 have been declared
using Objective, then one can ensure that model.obj2 is passed to the solver as shown in this simple example:
>>> model = pyo.ConcreteModel()
>>> model.obj1 = pyo.Objective(expr = 0)
>>> model.obj2 = pyo.Objective(expr = 0)
>>> model.obj1.deactivate()
>>> model.obj2.activate()
For abstract models this would be done prior to instantiation or else the activate and deactivate calls would
be on the instance rather than the model.
Constraints can be temporarily disabled using the deactivate() method. When the model is sent to a solver
inactive constraints are not included. Disabled constraints can be re-enabled using the activate() method.
>>> model = pyo.ConcreteModel()
>>> model.v = pyo.Var()
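A sketch of how such a constraint might be toggled (the constraint itself is illustrative):
>>> model.con = pyo.Constraint(expr=model.v >= 1)
>>> model.con.deactivate()
>>> model.con.activate()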
Often, the point of optimization is to get optimal values of variables. Some users may want to process the values in a
script. We will describe how to access a particular variable from a Python script as well as how to access all variables
from a Python script and from a callback. This should enable the reader to understand how to get the access that they
desire. The Iterative example given above also illustrates access to variable values.
Assuming the model has been instantiated and solved and the results have been loaded back into the instance object,
then we can make use of the fact that the variable is a member of the instance object and its value can be accessed
using its value member. For example, suppose the model contains a variable named quant that is a singleton (has
no indexes) and suppose further that the name of the instance object is instance. Then the value of this variable
can be accessed using pyo.value(instance.quant). Variables with indexes can be referenced by supplying
the index.
Consider the following very simple example, which is similar to the iterative example. This is a concrete model. In
this example, the value of x[2] is accessed.
# noiteration1.py
# Create a solver
opt = SolverFactory('glpk')
#
# A simple model with binary variables and
# an empty constraint list.
#
model = pyo.ConcreteModel()
model.n = pyo.Param(default=4)
model.x = pyo.Var(pyo.RangeSet(model.n), within=pyo.Binary)
def o_rule(model):
    return pyo.summation(model.x)
model.o = pyo.Objective(rule=o_rule)
model.c = pyo.ConstraintList()
results = opt.solve(model)
if pyo.value(model.x[2]) == 0:
print("The second index has a zero")
else:
print("x[2]=",pyo.value(model.x[2]))
Note: If this script is run without modification, Pyomo is likely to issue a warning because there are no constraints.
The warning is because some solvers may fail if given a problem instance that does not have any constraints.
As with one variable, we assume that the model has been instantiated and solved. Assuming the instance object has
the name instance, the following code snippet displays all variables and their values:
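(a hedged sketch, assuming pyomo.environ is imported as pyo)
for v in instance.component_objects(pyo.Var, active=True):
    print("Variable", v)
    for index in v:
        print("   ", index, pyo.value(v[index]))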
This code could be improved by checking whether the variable is not indexed (i.e., the only index value is None); in
that case, the code could print the value without the word None next to it.
Assuming again that the model has been instantiated and solved and that the results have been loaded back into the
instance object, here is a code snippet for fixing all integer variables at their current value:
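(a sketch; the is_integer() and is_binary() methods on the variable data objects are assumed)
for var in instance.component_data_objects(pyo.Var, active=True):
    if var.is_integer() or var.is_binary():
        var.fix(pyo.value(var))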
Another way to access all of the variables (particularly if there are blocks) is as follows (this particular snippet assumes
that from pyomo.environ import * was used instead of import pyomo.environ as pyo):
Accessing parameter values is completely analogous to accessing variable values. For example, here is a code snippet
to print the name and value of every Parameter in a model:
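(a hedged sketch)
for p in instance.component_objects(pyo.Param, active=True):
    print("Parameter", p.name)
    for index in p:
        print("   ", index, pyo.value(p[index]))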
Access to dual values in scripts is similar to accessing primal variable values, except that dual values are not captured
by default so additional directives are needed before optimization to signal that duals are desired.
To get duals without a script, use the pyomo option --solver-suffixes='dual', which will cause dual values
to be included in the output. Note: In addition to duals (dual), reduced costs (rc) and slack values (slack) can be
requested. All suffixes can be requested using the pyomo option --solver-suffixes='.*'.
Warning: Some of the duals may have the value None, rather than 0.
To signal that duals are desired, declare a Suffix component with the name “dual” on the model or instance with an
IMPORT or IMPORT_EXPORT direction.
See the section on Suffixes Suffixes for more information on Pyomo’s Suffix component. After the results are obtained
and loaded into an instance, duals can be accessed in the following fashion.
The following snippet will only work, of course, if there is a constraint with the name AxbConstraint that has an
index whose value is the string Film.
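(a sketch of that access; AxbConstraint and Film come from the example described below)
print("Dual for Film:", instance.dual[instance.AxbConstraint['Film']])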
Here is a complete example that relies on the file abstract2.py to provide the model and the file abstract2.dat
to provide the data. Note that the model in abstract2.py does contain a constraint named AxbConstraint
and abstract2.dat does specify an index for it named Film.
# driveabs2.py
from __future__ import division
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = SolverFactory('cplex')
results = opt.solve(instance)
# also puts the results back into the instance for easy access
Concrete models are slightly different because the model is the instance. Here is a complete example that relies on the
file concrete1.py to provide the model and instantiate it.
# driveconc1.py
from __future__ import division
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
# Create a solver
opt = SolverFactory('cplex')
The functions lslack() and uslack() return the upper and lower slacks, respectively, for a constraint.
After a solve, the results object has a member Solution.Status that contains the solver status. The following
snippet shows an example of access via a print statement:
results = opt.solve(instance)
#print ("The solver returned a status of:"+str(results.solver.status))
The use of the Python str function to cast the value to a string makes it easy to test it. In particular, the value
'optimal' indicates that the solver succeeded. It is also possible to access Pyomo data that can be compared with the
solver status, as in the following code snippet:
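(a sketch of such a test, assuming SolverStatus and TerminationCondition are imported from pyomo.opt)
from pyomo.opt import SolverStatus, TerminationCondition

results = opt.solve(instance)
if (results.solver.status == SolverStatus.ok) and \
   (results.solver.termination_condition == TerminationCondition.optimal):
    print("The solution is feasible and optimal")
elif results.solver.termination_condition == TerminationCondition.infeasible:
    print("The model is infeasible")
else:
    print("Solver status:", results.solver.status)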
Most solvers accept options and Pyomo can pass options through to a solver. In scripts or callbacks, the options can
be attached to the solver object by adding to its options dictionary as illustrated by this snippet:
optimizer = pyo.SolverFactory('cbc')
optimizer.options["threads"] = 4
If multiple options are needed, then multiple dictionary entries should be added.
Sometimes it is desirable to pass options as part of the call to the solve function, as in this snippet:
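(a sketch, assuming the solver recognizes a "threads" option)
results = optimizer.solve(instance, options="threads=4", tee=True)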
The quoted string is passed directly to the solver. If multiple options need to be passed to the solver in this way, they
should be separated by a space within the quoted string. Notice that tee is a Pyomo option and is solver-independent,
while the string argument to options is passed to the solver with very little processing by Pyomo. If the solver
does not have a "threads" option, it will probably complain, but Pyomo will not.
There are no default values for options on a SolverFactory object. If you directly modify its options dictionary, as
was done above, those options will persist across every call to optimizer.solve(...) unless you delete them
from the options dictionary. You can also pass a dictionary of options into the opt.solve(...) method using the
options keyword. Those options will only persist within that solve and temporarily override any matching options
in the options dictionary on the solver object.
Often, the executables for solvers are in the path; however, for situations where they are not, the SolverFactory function
accepts the keyword executable, which you can use to set an absolute or relative path to a solver executable. E.g.,
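(a sketch; the path shown is purely hypothetical)
opt = pyo.SolverFactory("ipopt", executable="/path/to/ipopt")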
Some solvers support a warm start based on current values of variables. To use this feature, set the values of variables
in the instance and pass warmstart=True to the solve() method. E.g.,
instance = model.create_instance()
instance.y[0] = 1
instance.y[1] = 0
opt = pyo.SolverFactory("cplex")
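The warm-started solve itself would then look something like:
results = opt.solve(instance, warmstart=True, tee=True)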
Note: The Cplex and Gurobi LP file (and Python) interfaces will generate an MST file with the variable data and
hand this off to the solver in addition to the LP file.
Warning: Solvers using the NL file interface (e.g., “gurobi_ampl”, “cplexamp”) do not accept warmstart as a
keyword to the solve() method as the NL file format, by default, includes variable initialization data (drawn from
the current value of all variables).
Use of parallel solvers for PySP is discussed in the section on parallel PySP Solving Sub-problems in Parallel and/or
Remotely.
Solvers are controlled by solver servers. The pyro mip solver server is launched with the command
pyro_mip_server. This command may be repeated to launch as many solvers as are desired. A name server
and a dispatch server must be running and accessible to the process that runs the script that will use the mip servers as
well as to the mip servers. The name server is launched using the command pyomo_ns and then the dispatch server
is launched with dispatch_srvr. Note that both commands contain an underscore. Both programs keep running
until terminated by an external signal, so it is common to pipe their output to a file. The commands are:
• Once: pyomo_ns
• Once: dispatch_srvr
• Multiple times: pyro_mip_server
This example demonstrates how to use these services to solve two instances in parallel.
# parallel.py
from __future__ import division
from pyomo.environ import *
from pyomo.opt import SolverFactory
from pyomo.opt.parallel import SolverManagerFactory
import sys
# Create a solver
optsolver = SolverFactory('cplex')
#
# A simple model with binary variables and
# an empty constraint list.
#
model = AbstractModel()
model.n = Param(default=4)
model.x = Var(RangeSet(model.n), within=Binary)
instance2 = model.create_instance()
instance2.x[1] = 1
instance2.x[1].fixed = True
action_handle_map[action_handle] = "Original"
action_handle = solver_manager.queue(instance2, opt=optsolver, warmstart=False,
                                     tee=True, verbose=False)
This example creates two instances that are very similar and then sends them to be dispatched to solvers. If there
are two solvers, then these problems could be solved in parallel (we say “could” because for such trivial problems
to actually be solved in parallel, the solvers would have to be very, very slow). This example is nonsensical; the
goal is simply to show how solver_manager.queue is used to submit jobs to a name server for dispatch to solver servers
and how solver_manager.wait_any recovers the results. The wait_all function is similar, but it takes a list of
action handles (returned by queue) as an argument and returns all of the results at once.
A “temporary” directory is used for many intermediate files. Normally, the name of the directory for temporary files
is provided by the operating system, but the user can specify their own directory name. The pyomo command-line
--tempdir option propagates through to the TempFileManager service. One can accomplish the same through the
following few lines of code in a script:
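A hedged sketch of those lines, assuming the TempfileManager service lives in pyutilib.services for this release (the directory name is a placeholder):
from pyutilib.services import TempfileManager
TempfileManager.tempdir = '/path/to/tempdir'  # placeholder directory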
If you start with a ConcreteModel, each component you add to the model will be fully constructed and initialized
at the time it is attached to the model. However, if you are starting with an AbstractModel, construction occurs in
two phases. When you first declare and attach components to the model, those components are empty containers and
not fully constructed, even if you explicitly provide data.
>>> import pyomo.environ as pyo
>>> model = pyo.AbstractModel()
>>> model.is_constructed()
False
If you look at the model at this point, you will see that everything is “empty”:
>>> model.pprint()
1 Set Declarations
I : Dim=0, Dimen=1, Size=0, Domain=None, Ordered=False, Bounds=None
Not constructed
1 Param Declarations
p : Size=0, Index=None, Domain=Any, Default=None, Mutable=False
Not constructed
1 Var Declarations
x : Size=0, Index=I
Not constructed
3 Declarations: p I x
Before you can manipulate modeling components or solve the model, you must first create a concrete instance by
applying data to your abstract model. This can be done using the create_instance() method, which takes the
abstract model and optional data and returns a new concrete instance by constructing each of the model components
in the order in which they were declared (attached to the model). Note that the instance creation is performed “out of
place”; that is, the original abstract model is left untouched.
1 Param Declarations
p : Size=1, Index=None, Domain=Any, Default=None, Mutable=False
Key : Value
None : 5
1 Var Declarations
x : Size=3, Index=I
Key : Lower : Value : Upper : Fixed : Stale : Domain
1 : None : None : None : False : True : Reals
2 : None : None : None : False : True : Reals
3 : None : None : None : False : True : Reals
3 Declarations: p I x
Note: AbstractModel users should note that in some examples, your concrete model instance is called “instance” and
not “model”. This is the case here, where we are explicitly calling instance = model.create_instance().
The create_instance() method can also take a reference to external data, which overrides any data specified in
the original component declarations. The data can be provided from several sources, including a dict, a DataPortal
object, or a DAT file. For example:
1 Param Declarations
p : Size=1, Index=None, Domain=Any, Default=None, Mutable=False
Key : Value
None : 5
1 Var Declarations
x : Size=2, Index=I
Key : Lower : Value : Upper : Fixed : Stale : Domain
4 : None : None : None : False : True : Reals
5 : None : None : None : False : True : Reals
3 Declarations: p I x
There are roughly three ways of using data to construct a Pyomo model:
1. use standard Python objects,
2. initialize a model with data loaded with a DataPortal object, and
3. load model data from a Pyomo data command file.
Standard Python data objects include native Python data types (e.g. lists, sets, and dictionaries) as well as standard
data formats like numpy arrays and Pandas data frames. Standard Python data objects can be used to define constant
values in a Pyomo model, and they can be used to initialize Set and Param components. However, initializing Set
and Param components in this manner provides few advantages over direct use of standard Python data objects. (An
important exception is that components indexed by Set objects use less memory than components indexed by native
Python data.)
The DataPortal class provides a generic facility for loading data from disparate sources. A DataPortal object
can load data in a consistent manner, and this data can be used to simply initialize all Set and Param components in a
model. DataPortal objects can be used to initialize both concrete and abstract models in a uniform manner, which
is important in some scripting applications. But in practice, this capability is only necessary for abstract models, whose
data components are initialized after being constructed. (In fact, all abstract data components in an abstract model are
loaded from DataPortal objects.)
Finally, Pyomo data command files provide a convenient mechanism for initializing Set and Param components
with a high-level data specification. Data command files can be used with both concrete and abstract models, though
in a different manner. Data command files are parsed using a DataPortal object, which must be done explicitly
for a concrete model. However, abstract models can load data from a data command file directly, after the model is
constructed. Again, this capability is only necessary for abstract models, whose data components are initialized after
being constructed.
The following sections provide more detail about how data can be used to initialize Pyomo models.
In many cases, Pyomo models can be constructed without Set and Param data components. Native Python data
types can simply be used to define constant values in Pyomo expressions. Consequently, Python sets, lists and
dictionaries can be used to construct Pyomo models, as well as a wide range of other Python classes.
TODO
More examples here: set, list, dict, numpy, pandas.
The Set and Param components used in a Pyomo model can also be initialized with standard Python data types.
This enables some modeling efficiencies when manipulating sets (e.g. when re-using sets for indices), and it supports
validation of set and parameter data values. The Set and Param components are initialized with Python data using
the initialize option.
Set Components
In general, Set components can be initialized with iterable data. For example, simple sets can be initialized with:
• list, set and tuple data:
model.A = Set(initialize=[2,3,5])
model.B = Set(initialize=set([2,3,5]))
model.C = Set(initialize=(2,3,5))
• generators:
model.D = Set(initialize=range(9))
model.E = Set(initialize=(i for i in model.B if i%2 == 0))
• numpy arrays:
f = numpy.array([2, 3, 5])
model.F = Set(initialize=f)
Sets can also be indirectly initialized with functions that return native Python data:
def g(model):
    return [2,3,5]
model.G = Set(initialize=g)
Indexed sets can be initialized with dictionary data where the dictionary values are iterable data:
H_init = {}
H_init[2] = [1,3,5]
H_init[3] = [2,4,6]
H_init[4] = [3,5,7]
model.H = Set([2,3,4],initialize=H_init)
Parameter Components
When a parameter is a single value, then a Param component can be simply initialized with a value:
model.a = Param(initialize=1.1)
More generally, Param components can be initialized with dictionary data where the dictionary values are single
values:
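For instance (a sketch mirroring the function-based example that follows):
model.b = Param([1,2,3], initialize={1:1, 2:2, 3:3})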
Parameters can also be indirectly initialized with functions that return native Python data:
def c(model):
    return {1:1, 2:2, 3:3}
model.c = Param([1,2,3], initialize=c)
Data can be passed to the model create_instance() method through a series of nested native Python dictionar-
ies. The structure begins with a dictionary of namespaces, with the only required entry being the None namespace.
Each namespace contains a dictionary that maps component names to dictionaries of component values. For scalar
components, the required data dictionary maps the implicit index None to the desired value:
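A hedged sketch, assuming an abstract model that declares a set I, a scalar parameter p, a parameter q indexed by I, and a two-dimensional parameter r, matching the output shown below:
data = {None: {
    'I': {None: [1, 2, 3]},
    'p': {None: 100},
    'q': {1: 10, 2: 20, 3: 30},
    'r': {(1, 1): 110, (1, 2): 120, (2, 3): 230},
}}
instance = model.create_instance(data)
instance.pprint()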
3 Param Declarations
p : Size=1, Index=None, Domain=Any, Default=None, Mutable=False
Key : Value
None : 100
q : Size=3, Index=I, Domain=Any, Default=None, Mutable=False
Key : Value
1 : 10
2 : 20
3 : 30
r : Size=9, Index=r_index, Domain=Any, Default=0, Mutable=False
Key : Value
(1, 1) : 110
(1, 2) : 120
(2, 3) : 230
5 Declarations: I p q r_index r
Note: The discussion and presentation below are adapted from Chapter 6 of the “Pyomo Book” [PyomoBookII].
The discussion of the DataPortal class uses these same examples to illustrate how data can be loaded into Pyomo models.
Model Data
Pyomo’s data command files employ a domain-specific language whose syntax closely resembles the syntax of
AMPL’s data commands [AMPL]. A data command file consists of a sequence of commands that either (a) spec-
ify set and parameter data for a model, or (b) specify where such data is to be obtained from external sources (e.g.
table files, CSV files, spreadsheets and databases).
The following commands are used to declare data:
• The set command declares set data.
• The param command declares a table of parameter data, which can also include the declaration of the set data
used to index the parameter data.
• The table command declares a two-dimensional table of parameter data.
• The load command defines how set and parameter data is loaded from external data sources, including ASCII
table files, CSV files, XML files, YAML files, JSON files, ranges in spreadsheets, and database tables.
The following commands are also used in data command files:
• The include command specifies a data command file that is processed immediately.
• The data and end commands do not perform any actions, but they provide compatibility with AMPL scripts
that define data commands.
• The namespace keyword allows data commands to be organized into named groups that can be enabled or
disabled during model construction.
The following data types can be represented in a data command file:
• Numeric value: Any Python numeric value (e.g. integer, float, scientific notation, or boolean).
• Simple string: A sequence of alpha-numeric characters.
• Quoted string: A simple string that is included in a pair of single or double quotes. A quoted string can include
quotes within the quoted string.
Numeric values are automatically converted to Python integer or floating point values when a data command file is
parsed. Additionally, if a quoted string can be interpreted as a numeric value, then it will be converted to Python
numeric types when the data is parsed. For example, the string “100” is converted to a numeric value automatically.
Warning: Pyomo data commands do not exactly correspond to AMPL data commands. The set and param
commands are designed to closely match AMPL’s syntax and semantics, though these commands only support
a subset of the corresponding declarations in AMPL. However, other Pyomo data commands are not generally
designed to match the semantics of AMPL.
Note: Pyomo data commands are terminated with a semicolon, and the syntax of data commands does not depend
on whitespace. Thus, data commands can be broken across multiple lines – newlines and tab characters are ignored –
and data commands can be formatted with whitespace with few restrictions.
Simple Sets
The set data command explicitly specifies the members of either a single set or an array of sets, i.e., an indexed
set. A single set is specified with a list of data values that are included in this set. The formal syntax for the set data
command is:
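Roughly (a paraphrase of the formal grammar rather than an exact reproduction):
set <setname> := <value-1> <value-2> ... <value-n> ;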
A set may be empty, and it may contain any combination of numeric and non-numeric string values. For example, the
following are valid set commands:
# An empty set
set A := ;
# A set of numbers
set A := 1 2 3;
# A set of strings
set B := north south east west;
The set data command can also specify tuple data with the standard notation for tuples. For example, suppose that
set A contains 3-tuples:
model.A = Set(dimen=3)
The following set data command then specifies that A is the set containing the tuples (1,2,3) and (4,5,6):
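A sketch of that command, using the standard tuple notation:
set A := (1,2,3) (4,5,6) ;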
Alternatively, set data can simply be listed in the order that the tuple is represented:
set A := 1 2 3 4 5 6 ;
Obviously, the number of data elements specified using this syntax should be a multiple of the set dimension.
Sets with 2-tuple data can also be specified in a matrix denoting set membership. For example, the following set data
command declares 2-tuples in A using plus (+) to denote valid tuples and minus (-) to denote invalid tuples:
set A : A1 A2 A3 A4 :=
1 + - - +
2 + - + -
3 - + - - ;
This data command declares the following five 2-tuples: ('A1',1), ('A1',2), ('A2',3), ('A3',2), and
('A4',1).
Finally, a set of tuple data can be concisely represented with tuple templates that represent a slice of tuple data. For
example, suppose that the set A contains 4-tuples:
model.A = Set(dimen=4)
The following set data command declares groups of tuples that are defined by a template and data to complete this
template:
set A :=
(1,2,*,4) A B
(*,2,*,4) A B C D ;
A tuple template consists of a tuple that contains one or more asterisk (*) symbols instead of a value. These represent
indices where the tuple value is replaced by the values from the list of values that follows the tuple template. In this
example, the following tuples are in set A:
(1, 2, 'A', 4)
(1, 2, 'B', 4)
('A', 2, 'B', 4)
('C', 2, 'D', 4)
Set Arrays
The set data command can also be used to declare data for a set array. Each set in a set array must be declared with
a separate set data command with the following syntax:
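Roughly (a paraphrase, consistent with the bracketed-index examples shown below):
set <set-name>[<index>] := <value-1> <value-2> ... <value-n> ;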
Because set arrays can be indexed by an arbitrary set, the index value may be a numeric value, a non-numeric string
value, or a comma-separated list of string values.
Suppose that a set A is used to index a set B as follows:
model.A = Set()
model.B = Set(model.A)
set B[1] := 0 1 2;
set B[aaa] := aa bb cc;
set B['a b'] := 'aa bb cc';
Simple or non-indexed parameters are declared in an obvious way, as shown by these examples:
param A := 1.4;
param B := 1;
param C := abc;
Parameters can be defined with numeric data, simple strings and quoted strings. Note that parameters cannot be defined
without data, so there is no analog to the specification of an empty set.
Most parameter data is indexed over one or more sets, and there are a number of ways the param data command can
be used to specify indexed parameter data. One-dimensional parameter data is indexed over a single set. Suppose that
the parameter B is a parameter indexed by the set A:
model.A = Set()
model.B = Param(model.A)
A param data command can specify values for B with a list of index-value pairs:
set A := a c e;
param B := a 10 c 30 e 50;
Because whitespace is ignored, this example data command file can be reorganized to specify the same data in a tabular
format:
set A := a c e;
param B :=
a 10
c 30
e 50
;
Multiple parameters can be defined using a single param data command. For example, suppose that parameters B, C,
and D are one-dimensional parameters all indexed by the set A:
model.A = Set()
model.B = Param(model.A)
model.C = Param(model.A)
model.D = Param(model.A)
Values for these parameters can be specified using a single param data command that declares these parameter names
followed by a list of index and parameter values:
set A := a c e;
param : B C D :=
a 10 -1 1.1
c 30 -3 3.3
e 50 -5 5.5
;
The values in the param data command are interpreted as a list of sublists, where each sublist consists of an index
followed by the corresponding numeric value.
Note that parameter values do not need to be defined for all indices. For example, the following data command file is
valid:
set A := a c e g;
param : B C D :=
a 10 -1 1.1
c 30 -3 3.3
e 50 -5 5.5
;
The index g is omitted from the param command, and consequently this index is not valid for the model instance
that uses this data. More complex patterns of missing data can be specified using the period (.) symbol to indicate a
missing value. This syntax is useful when specifying multiple parameters that do not necessarily have the same index
values:
set A := a c e;
param : B C D :=
a . -1 1.1
c 30 . 3.3
e 50 -5 .
;
This example provides a concise representation of parameters that share a common index set while using different
index values.
Note that this data file specifies the data for set A twice: (1) when A is defined and (2) implicitly when the parameters
are defined. An alternate syntax for param allows the user to concisely specify the definition of an index set along
with associated parameters:
param : A : B C D :=
a 10 -1 1.1
c 30 -3 3.3
e 50 -5 5.5
;
Finally, we note that default values for missing data can also be specified using the default keyword:
set A := a c e;
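followed by a param command along these lines (a sketch in which the index e is omitted and therefore receives the default value):
param B default 0.0 :=
a 10
c 30
;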
Note that default values can only be specified in param commands that define values for a single parameter.
Multi-dimensional parameter data is indexed over either multiple sets or a single multi-dimensional set. Suppose that
parameter B is a parameter indexed by set A that has dimension 2:
model.A = Set(dimen=2)
model.B = Param(model.A)
The syntax of the param data command remains essentially the same when specifying values for B with a list of index
and parameter values:
set A := a 1 c 2 e 3;
param B :=
a 1 10
c 2 30
e 3 50;
Missing and default values are also handled in the same way with multi-dimensional index sets:
set A := a 1 c 2 e 3;
param B default 0 :=
a 1 10
c 2 .
e 3 50;
Similarly, multiple parameters can be defined with a single param data command. Suppose that parameters B, C, and D
are parameters indexed over set A that has dimension 2:
model.A = Set(dimen=2)
model.B = Param(model.A)
model.C = Param(model.A)
model.D = Param(model.A)
These parameters can be defined with a single param command that declares the parameter names followed by a list
of index and parameter values:
set A := a 1 c 2 e 3;
param : B C D :=
a 1 10 -1 1.1
c 2 30 -3 3.3
e 3 50 -5 5.5
;
Similarly, the following param data command defines the index set along with the parameters:
param : A : B C D :=
a 1 10 -1 1.1
c 2 30 -3 3.3
e 3 50 -5 5.5
;
The param command also supports a matrix syntax for specifying the values in a parameter that has a 2-dimensional
index. Suppose parameter B is indexed over set A that has dimension 2:
model.A = Set(dimen=2)
model.B = Param(model.A)
set A := 1 a 1 c 1 e 2 a 2 c 2 e 3 a 3 c 3 e;
param B : a c e :=
1 1 2 3
2 4 5 6
3 7 8 9
;
Additionally, the following syntax can be used to specify a transposed matrix of parameter values:
set A := 1 a 1 c 1 e 2 a 2 c 2 e 3 a 3 c 3 e;
param B (tr) : 1 2 3 :=
a 1 4 7
c 2 5 8
e 3 6 9
;
This functionality facilitates the presentation of parameter data in a natural format. In particular, the transpose syntax
may allow the specification of tables for which the rows comfortably fit within a single line. However, a matrix may
be divided column-wise into shorter rows since the line breaks are not significant in Pyomo data commands.
For parameters with three or more indices, the parameter data values may be specified as a series of slices. Each slice
is defined by a template followed by a list of index and parameter values. Suppose that parameter B is indexed over
set A that has dimension 4:
model.A = Set(dimen=4)
model.B = Param(model.A)
The following param command defines a matrix of parameter values with multiple templates:
param B :=
[*,1,*,1] a a 10 b b 20
[*,2,*,2] a a 30 b b 40
;
The B parameter consists of four values: B[a,1,a,1]=10, B[b,1,b,1]=20, B[a,2,a,2]=30, and B[b,2,
b,2]=40.
The table data command explicitly specifies a two-dimensional array of parameter data. This command provides
a more flexible and complete data declaration than is possible with a param declaration. The following example
illustrates a simple table command that declares data for a single parameter:
table M(A) :
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
The parameter M is indexed by column A, which must be pre-defined unless declared separately (see below). The
column labels are provided after the colon and before the colon-equal (:=). Subsequently, the table data is provided.
The syntax is not sensitive to whitespace, so the following is an equivalent table command:
table M(A) :
A B M N :=
A1 B1 4.3 5.3 A2 B2 4.4 5.4 A3 B3 4.5 5.5 ;
Multiple parameters can be declared by simply including additional parameter names. For example:
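For instance, a sketch in which M is indexed by column A and N by column B:
table M(A) N(B) :
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;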
This example declares data for the M and N parameters, which have different indexing columns. The indexing columns
represent set data, which is specified separately. For example:
This example declares data for the M and N parameters, along with the A and Z indexing sets. The correspondence
between the index set Z and the indices of parameter N can be made more explicit by indexing N by Z:
Warning: If a table command does not explicitly indicate the indexing sets, then these are assumed to be
initialized separately. A table command can separately initialize sets and parameters in a Pyomo model, and
there is no presumed association between the data that is initialized. For example, the table command initializes
a set Z and a parameter M that are not related:
table Z={A,B} M(A):
A B M N :=
A1 B1 4.3 5.3
A2 B2 4.4 5.4
A3 B3 4.5 5.5
;
Finally, simple parameter values can also be specified with a table command:
table pi := 3.1416 ;
The previous examples considered examples of the table command where column labels are provided. The table
command can also be used without column labels. For example, the first example can be revised to omit column labels
as follows:
The columns=4 is a keyword-value pair that defines the number of columns in this table; this must be explicitly
specified in tables without column labels. The default column labels are integers starting from 1; the labels are
columns 1, 2, 3, and 4 in this example. The M parameter is indexed by column 1. The braces syntax declares the
column where the M data is provided.
Similarly, set data can be declared referencing the integer column labels:
Finally, we compare and contrast the table and param commands. Both commands can be used to declare parameter
and set data, and both commands can be used to declare a simple parameter. However, there are some important
differences between these data commands:
• The param command can declare a single set that is used to index one or more parameters. The table
command can declare data for any number of sets, independent of whether they are used to index parameter
data.
• The param command can declare data for multiple parameters only if they share the same index set. The
table command can declare data for any number of parameters that may be indexed separately.
• The table syntax unambiguously describes the dimensionality of indexing sets. The param command must
be interpreted with a model that provides the dimension of the indexing set.
This last point provides a key motivation for the table command. Specifically, the table command can be used to
reliably initialize concrete models using Pyomo’s DataPortal class. By contrast, the param command can only
be used to initialize concrete models with parameters that are indexed by a single column (i.e., a simple set).
The load command provides a mechanism for loading data from a variety of external tabular data sources. This
command loads a table of data that represents set and parameter data in a Pyomo model. The table consists of rows
and columns for which all rows have the same length, all columns have the same length, and the first row represents
labels for the column data.
The load command can load data from a variety of different external data sources:
• TAB File: A text file format that uses whitespace to separate columns of values in each row of a table.
• CSV File: A text file format that uses comma or other delimiters to separate columns of values in each row of a
table.
• XML File: An extensible markup language for documents and data structures. XML files can represent tabular
data.
• Excel File: A spreadsheet data format that is primarily used by the Microsoft Excel application.
• Database: A relational database.
This command uses a data manager that coordinates how data is extracted from a specified data source. In this
way, the load command provides a generic mechanism that enables Pyomo models to interact with standard data
repositories that are maintained in an application-specific manner.
The simplest illustration of the load command is specifying data for an indexed parameter. Consider the file Y.tab:
A Y
A1 3.3
A2 3.4
A3 3.5
This file specifies the values of parameter Y which is indexed by set A. The following load command loads the
parameter data:
load Y.tab : [A] Y;
The first argument is the filename. The options after the colon indicate how the table data is mapped to model data.
Option [A] indicates that set A is used as the index, and option Y indicates the parameter that is initialized.
Similarly, the following load command loads both the parameter data as well as the index set A:
load Y.tab : A=[A] Y;
The difference is the specification of the index set, A=[A], which indicates that set A is initialized with the index
loaded from the ASCII table file.
Set data can also be loaded from an ASCII table file that contains a single column of data:
A
A1
A2
A3
The format option must be specified to denote the fact that the relational data is being interpreted as a set:
load A.tab format=set : A;
Note that this allows for specifying set data that contains tuples. Consider file C.tab:
A B
A1 1
A1 2
A1 3
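The corresponding load command would presumably be:
load C.tab format=set : C ;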
Note that this example requires that C be declared with dimension two.
The syntax of the load command is broken into two parts. The first part ends with the colon, and it begins with a
filename, database URL, or DSN (data source name). Additionally, this first part can contain option-value pairs, such
as the format and using options and the spreadsheet and database options described below.
The format option is the only option that is required for all data managers. This option specifies how a relational
table is interpreted to represent set and parameter data. If the using option is omitted, then the filename suffix is used
to select the data manager. The remaining options are specific to spreadsheets and relational databases (see below).
The second part of the load command consists of the specification of column names for indices and data. The
remainder of this section describes different specifications and how they define how data is loaded into a model.
Suppose file ABCD.tab defines the following relational table:
A B C D
A1 B1 1 10
A2 B2 2 20
A3 B3 3 30
There are many ways to interpret this relational table. It could specify a set of 4-tuples, a parameter indexed by 3-
tuples, two parameters indexed by 2-tuples, and so on. Additionally, we may wish to select a subset of this table to
initialize data in a model. Consequently, the load command provides a variety of syntax options for specifying how
a table is interpreted.
A simple specification is to interpret the relational table as a set:
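For example (a sketch consistent with the load syntax shown above):
load ABCD.tab format=set : Z ;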
Note that Z is a set in the model that the data is being loaded into. If this set does not exist, an error will occur while
loading data from this table.
Another simple specification is to interpret the relational table as a parameter indexed by 3-tuples:
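(a sketch of such a command)
load ABCD.tab : [A,B,C] D ;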
Again, this requires that D be a parameter in the model that the data is being loaded into. Additionally, the index set
for D must contain the indices that are specified in the table. The load command also allows for the specification of
the index set:
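For instance:
load ABCD.tab : Z=[A,B,C] D ;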
This specifies that the index set is loaded into the Z set in the model. Similarly, data can be loaded into another
parameter than what is specified in the relational table:
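Something along the lines of:
load ABCD.tab : Z=[A,B,C] Y=D ;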
This specifies that the index set is loaded into the Z set and that the data in the D column in the table is loaded into the
Y parameter.
This syntax allows the load command to provide an arbitrary specification of data mappings from columns in a
relational table into index sets and parameters. For example, suppose that a model is defined with set Z and parameters
Y and W:
model.Z = Set()
model.Y = Param(model.Z)
model.W = Param(model.Z)
Then the following command defines how these data items are loaded using columns B, C and D:
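A sketch of one such mapping (here column B supplies the index set Z, and columns D and C supply the parameters Y and W, respectively):
load ABCD.tab : Z=[B] Y=D W=C ;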
When the using option is omitted the data manager is inferred from the filename suffix. However, the filename
suffix does not always reflect the format of the data it contains. For example, consider the relational table in the file
ABCD.txt:
A,B,C,D
A1,B1,1,10
A2,B2,2,20
A3,B3,3,30
We can specify the using option to load from this file into parameter D and set Z:
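(a sketch of that command)
load ABCD.txt using=csv : Z=[A,B,C] D ;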
Note: The data managers supported by Pyomo can be listed with the pyomo help subcommand
By default, a table is interpreted as columns of one or more parameters with associated index columns. The format
option can be used to specify other interpretations of a table, including set, set_array, param, array, and transpose_array.
We have previously illustrated the use of the set format value to interpret a relational table as a set of values or tuples.
The following examples illustrate the other format values.
A table with a single value can be interpreted as a simple parameter using the param format value. Suppose that
Z.tab contains the following table:
1.1
The following load command then loads this value into parameter p:
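Presumably:
load Z.tab format=param : p ;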
Sets with 2-tuple data can be represented with a matrix format that denotes set membership. The set_array format
value interprets a relational table as a matrix that defines a set of 2-tuples where + denotes a valid tuple and - denotes
an invalid tuple. Suppose that D.tab contains the following relational table:
B A1 A2 A3
1 + - -
2 - + -
3 - - +
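The corresponding load command would presumably be:
load D.tab format=set_array : A ;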
This command declares the following 2-tuples: ('A1',1), ('A2',2), and ('A3',3).
Parameters with 2-tuple indices can be interpreted with a matrix format where rows and columns are different
indices. Suppose that U.tab contains the following table:
I A1 A2 A3
I1 1.3 2.3 3.3
I2 1.4 2.4 3.4
I3 1.5 2.5 3.5
I4 1.6 2.6 3.6
Then the following load command loads this table into parameter U with a 2-dimensional index, using the array
format value:
The transpose_array format value also interprets the table as a matrix, but it loads the data in a transposed
format:
Note that these format values do not support the initialization of the index data.
Many of the options for the load command are specific to spreadsheets and relational databases. The range option
is used to specify the range of cells that are loaded from a spreadsheet. The range of cells represents a table in which
the first row of cells defines the column names for the table.
Suppose that the file ABCD.xls contains a spreadsheet range named ABCD holding a relational table like the ABCD.tab
table shown above. The following command loads this data to initialize parameter D and index Z:
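A sketch of that command, using the range option:
load ABCD.xls range=ABCD : Z=[A,B,C] D ;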
Thus, the syntax for loading data from spreadsheets only differs from CSV and ASCII text files by the use of the
range option.
When loading from a relational database, the data source specification is a filename or data connection string. Access
to a database may be restricted, and thus the specification of username and password options may be required.
Alternatively, these options can be specified within a data connection string.
A variety of database interface packages are available within Python. The using option is used to specify the database
interface package that will be used to access a database. For example, the pyodbc interface can be used to connect
to Excel spreadsheets. The following command loads data from the Excel spreadsheet ABCD.xls using the pyodbc
interface. The command loads this data to initialize parameter D and index Z:
load ABCD.xls using=pyodbc table=ABCD : Z=[A,B,C] Y=D ;
The using option specifies that the pyodbc package will be used to connect with the Excel spreadsheet. The table
option specifies that the table ABCD is loaded from this spreadsheet. Similarly, the following command specifies a data
connection string to specify the ODBC driver explicitly:
load "Driver={Microsoft Excel Driver (*.xls)}; Dbq=ABCD.xls;"
using=pyodbc
table=ABCD : Z=[A,B,C] Y=D ;
ODBC drivers are generally tailored to the type of data source that they work with; this syntax illustrates how the
load command can be tailored to the details of the database that a user is working with.
The previous examples specified the table option, which declares the name of a relational table in a database. Many
databases support the Structured Query Language (SQL), which can be used to dynamically compose a relational table
from other tables in a database. The classic diet problem will be used to illustrate the use of SQL queries to initialize
a Pyomo model. In this problem, a customer is faced with the task of minimizing the cost for a meal at a fast food
restaurant – they must purchase a sandwich, side, and a drink for the lowest cost. The following is a Pyomo model for
this problem:
# diet1.py
from pyomo.environ import *
infinity = float('inf')
MAX_FOOD_SUPPLY = 20.0 # There is a finite food supply
model = AbstractModel()
# --------------------------------------------------------
model.FOOD = Set()
model.cost = Param(model.FOOD, within=PositiveReals)
model.f_min = Param(model.FOOD, within=NonNegativeReals, default=0.0)
def f_max_validate(model, value, j):
    return model.f_max[j] > model.f_min[j]
model.f_max = Param(model.FOOD, validate=f_max_validate, default=MAX_FOOD_SUPPLY)
model.NUTR = Set()
model.n_min = Param(model.NUTR, within=NonNegativeReals, default=0.0)
model.n_max = Param(model.NUTR, default=infinity)
model.amt = Param(model.NUTR, model.FOOD, within=NonNegativeReals)
# --------------------------------------------------------
# Variables (Buy is the amount of each food purchased; the domain here is an assumption)
model.Buy = Var(model.FOOD, within=NonNegativeIntegers)
# --------------------------------------------------------
def Total_Cost_rule(model):
    return sum(model.cost[j] * model.Buy[j] for j in model.FOOD)
model.Total_Cost = Objective(rule=Total_Cost_rule, sense=minimize)
# --------------------------------------------------------
def Entree_rule(model):
    entrees = ['Cheeseburger', 'Ham Sandwich', 'Hamburger', 'Fish Sandwich',
               'Chicken Sandwich']
    return sum(model.Buy[e] for e in entrees) >= 1
model.Entree = Constraint(rule=Entree_rule)

def Side_rule(model):
    sides = ['Fries', 'Sausage Biscuit']
    return sum(model.Buy[s] for s in sides) >= 1
model.Side = Constraint(rule=Side_rule)

def Drink_rule(model):
    drinks = ['Lowfat Milk', 'Orange Juice']
    return sum(model.Buy[d] for d in drinks) >= 1
model.Drink = Constraint(rule=Drink_rule)
Suppose that the file diet1.sqlite is a SQLite database file that contains the following data in the Food table:
FOOD cost
Cheeseburger 1.84
Ham Sandwich 2.19
Hamburger 1.84
Fish Sandwich 1.44
Chicken Sandwich 2.29
Fries 0.77
Sausage Biscuit 1.29
Lowfat Milk 0.60
Orange Juice 0.72
In addition, the Food table has two additional columns, f_min and f_max, with no data for any row. These columns
exist to match the structure for the parameters used in the model.
We can solve the diet1 model using the Python definition in diet1.py and the data from this database. The file
diet.sqlite.dat specifies a load command that uses the sqlite3 data manager and embeds a SQL query to
retrieve the data:
# File diet.sqlite.dat
load "diet.sqlite"
using=sqlite3
query="SELECT FOOD,cost,f_min,f_max FROM Food"
: FOOD=[FOOD] cost f_min f_max ;
The PyODBC driver module will pass the SQL query through an Access ODBC connector, extract the data from the
diet1.mdb file, and return it to Pyomo. The Pyomo ODBC handler can then convert the data received into the
proper format for solving the model internally. More complex SQL queries are possible, depending on the underlying
database and ODBC driver in use. However, the name and ordering of the columns queried are specified in the Pyomo
data file; using SQL wildcards (e.g., SELECT *) or column aliasing (e.g., SELECT f AS FOOD) may cause errors
in Pyomo’s mapping of relational data to parameters.
The include command allows a data command file to execute data commands from another file. For example, the
following command file executes data commands from ex1.dat and then ex2.dat:
include ex1.dat;
include ex2.dat;
Pyomo is sensitive to the order of execution of data commands, since data commands can redefine set and parameter
values. The include command respects this data ordering; all data commands in the included file are executed
before the remaining data commands in the current file are executed.
The namespace keyword is not a data command, but instead it is used to structure the specification of Pyomo’s
data commands. Specifically, a namespace declaration is used to group data commands and to provide a group label.
Consider the following data command file:
set C := 1 2 3 ;
namespace ns1
{
set C := 4 5 6 ;
}
namespace ns2
{
set C := 7 8 9 ;
}
This data file defines two namespaces, ns1 and ns2, that initialize the set C. By default, data commands contained
within a namespace are ignored during model construction; when no namespaces are specified, the set C has values
1,2,3. When namespace ns1 is specified, the set C values are overridden with the set 4,5,6.
Pyomo’s DataPortal class standardizes the process of constructing model instances by managing the process of
loading data from different data sources in a uniform manner. A DataPortal object can load data from the following
data sources:
• TAB File: A text file format that uses whitespace to separate columns of values in each row of a table.
• CSV File: A text file format that uses comma or other delimiters to separate columns of values in each row of a
table.
• JSON File: A popular lightweight data-interchange format that is easily parsed.
• YAML File: A human friendly data serialization standard.
• XML File: An extensible markup language for documents and data structures. XML files can represent tabular
data.
• Excel File: A spreadsheet data format that is primarily used by the Microsoft Excel application.
• Database: A relational database.
• DAT File: A Pyomo data command file.
Note that most of these data formats can express tabular data.
Warning: The DataPortal class requires the installation of Python packages to support some of these data
formats:
• YAML File: pyyaml
• Excel File: win32com, openpyxl or xlrd
These packages support different Excel data formats: the win32com package supports
.xls, .xlsm and .xlsx, the openpyxl package supports .xlsx and the xlrd package
supports .xls.
• Database: pyodbc, pypyodbc, sqlite3 or pymysql
These packages support different database interface APIs: the pyodbc and pypyodbc packages
support the ODBC database API, the sqlite3 package uses the SQLite C library to directly
interface with databases using the DB-API 2.0 specification, and pymysql is a pure-Python
MySQL client.
DataPortal objects can be used to initialize both concrete and abstract Pyomo models. Consider the file A.tab,
which defines a simple set with a tabular format:
A
A1
A2
A3
The load method is used to load data into a DataPortal object. Components in a concrete model can be explicitly
initialized with data loaded by a DataPortal object:
data = DataPortal()
data.load(filename='A.tab', set="A", format="set")
model = ConcreteModel()
model.A = Set(initialize=data['A'])
All data needed to initialize an abstract model must be provided by a DataPortal object, and the use of the
DataPortal object to initialize components is automated for the user:
model = AbstractModel()
model.A = Set()
data = DataPortal()
data.load(filename='A.tab', set=model.A)
instance = model.create_instance(data)
Note the difference in the execution of the load method in these two examples: for concrete models data is loaded
by name and the format must be specified, and for abstract models the data is loaded by component, from which the
data format can often be inferred.
The load method opens the data file, processes it, and loads the data in a format that can be used to construct a model
instance. The load method can be called multiple times to load data for different sets or parameters, or to override
data processed earlier. The load method takes a variety of arguments that define how data is loaded:
• filename: This option specifies the source data file.
• format: This option specifies how to interpret data within a table. Valid formats are: set, set_array,
param, table, array, and transposed_array.
• set: This option is either a string or model component that defines a set that will be initialized with this data.
• param: This option is either a string or model component that defines a parameter that will be initialized with
this data. A list or tuple of strings or model components can be used to define multiple parameters that are
initialized.
• index: This option is either a string or model component that defines an index set that will be initialized with this
data.
• using: This option specifies the Python package used to load this data source. This option is used when
loading data from databases.
• select: This option defines the columns that are selected from the data source. The column order may differ
from the data source, which allows the DataPortal object to define the mapping from table columns to index
sets and parameters.
• namespace: This option defines the data namespace that will contain this data.
The use of these options is illustrated below.
The DataPortal class also provides a simple API for accessing set and parameter data that are loaded from different
data sources. The [] operator is used to access set and parameter values. Consider the following example, which loads
data and prints the value of the [] operator:
data = DataPortal()
data.load(filename='A.tab', set="A", format="set")
print(data['A']) #['A1', 'A2', 'A3']
The DataPortal class also has several methods for iterating over the data that has been loaded:
• keys(): Returns an iterator of the data keys.
• values(): Returns an iterator of the data values.
• items(): Returns an iterator of (name, value) tuples from the data.
Finally, the data() method provides a generic mechanism for accessing the underlying data representation used by
DataPortal objects.
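For example, a short sketch that loads a set and then iterates over the loaded data with these methods:
data = DataPortal()
data.load(filename='A.tab', set="A", format="set")
for key in data.keys():
    print(key, data[key])    # e.g. A ['A1', 'A2', 'A3']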
JSON and YAML files are structured data formats that are well-suited for data serialization. These data formats do not
represent data in tabular format, but instead they directly represent set and parameter values with lists and dictionaries:
• Simple Set: a list of string or numeric values
• Indexed Set: a dictionary that maps an index to a list of string or numeric values
• Simple Parameter: a single string or numeric value
• Indexed Parameter: a dictionary (or a list of index/value entries) that maps an index to a numeric value
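For example, a JSON file T.json with contents along the following lines (a sketch that mirrors the YAML example shown later in this section) represents these sets and parameters:
{
    "A": ["A1", "A2", "A3"],
    "B": [[1, "B1"], [2, "B2"], [3, "B3"]],
    "C": {"A1": [1, 2, 3], "A3": [10, 20, 30]},
    "p": 0.1,
    "q": {"A1": 3.3, "A2": 3.4, "A3": 3.5},
    "r": [{"index": [1, "B1"], "value": 3.3},
          {"index": [2, "B2"], "value": 3.4},
          {"index": [3, "B3"], "value": 3.5}]
}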
The data in this file can be used to load the following model:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.B = Set(dimen=2)
model.C = Set(model.A)
model.p = Param()
model.q = Param(model.A)
model.r = Param(model.B)
data.load(filename='T.json')
Note that no set or param option needs to be specified when loading a JSON or YAML file. All of the set and
parameter data in the file are loaded by the DataPortal object, and only the data needed for model construction
is used.
The following YAML file has a similar structure:
A: [A1, A2, A3]
B:
- [1, B1]
- [2, B2]
- [3, B3]
C:
'A1': [1, 2, 3]
'A3': [10, 20, 30]
p: 0.1
q: {A1: 3.3, A2: 3.4, A3: 3.5}
r:
- index: [1, B1]
value: 3.3
- index: [2, B2]
value: 3.4
- index: [3, B3]
value: 3.5
The data in this file can be used to load a Pyomo model with the same syntax as a JSON file:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.B = Set(dimen=2)
model.C = Set(model.A)
model.p = Param()
model.q = Param(model.A)
model.r = Param(model.B)
data.load(filename='T.yaml')   # assumed YAML counterpart of the T.json file above
Many data sources supported by Pyomo are tabular data formats. Tabular data is numerical or textual data that is
organized into one or more simple tables, where data is arranged in a matrix. Each table consists of a matrix of
numeric string values, simple strings, and quoted strings. All rows have the same length, all columns have the same
length, and the first row typically represents labels for the column data.
The following section describes the tabular data sources supported by Pyomo, and the subsequent sections illustrate
ways that data can be loaded from tabular data using TAB files. Subsequent sections describe options for loading data
from Excel spreadsheets and relational databases.
Tabular Data
TAB files represent tabular data in an ASCII file using whitespace as a delimiter. A TAB file consists of rows of values,
where each row has the same length. For example, the file PP.tab has the format:
A B PP
A1 B1 4.3
A2 B2 4.4
A3 B3 4.5
CSV files represent tabular data in a format that is very similar to TAB files. Pyomo assumes that a CSV file consists
of rows of values, where each row has the same length. For example, the file PP.csv has the format:
A,B,PP
A1,B1,4.3
A2,B2,4.4
A3,B3,4.5
Excel spreadsheets can express complex data relationships. A range is a contiguous, rectangular block of cells in an
Excel spreadsheet. Thus, a range in a spreadsheet has the same tabular structure as a TAB file or a CSV file. For
example, consider the file excel.xls that has the range PPtable:
A relational database is an application that organizes data into one or more tables (or relations) with a unique key in
each row. Tables both reflect the data in a database as well as the result of queries within a database.
XML files represent tabular data using table and row elements. Each sub-element of a row element represents a different
column, where each row has the same length. For example, the file PP.xml has the format:
<table>
<row>
<A value="A1"/><B value="B1"/><PP value="4.3"/>
</row>
<row>
<A value="A2"/><B value="B2"/><PP value="4.4"/>
</row>
<row>
<A value="A3"/><B value="B3"/><PP value="4.5"/>
</row>
</table>
The set option is used to specify a Set component that is loaded with data. Consider the file A.tab, which defines a simple set:
A
A1
A2
A3
In the following example, a DataPortal object loads data for a simple set A:
model = AbstractModel()
model.A = Set()
data = DataPortal()
data.load(filename='A.tab', set=model.A)
instance = model.create_instance(data)
Consider the file C.tab, which defines a set of 2-tuples:
A B
A1 1
A1 2
A1 3
A2 1
A2 2
A2 3
A3 1
A3 2
A3 3
In the following example, a DataPortal object loads data for a two-dimensional set C:
model = AbstractModel()
model.C = Set(dimen=2)
data = DataPortal()
data.load(filename='C.tab', set=model.C)
instance = model.create_instance(data)
In this example, the column titles do not directly impact the process of loading data. Column titles can be used to
select a subset of columns from a table that is loaded (see below).
Consider the file D.tab, which defines an array representation of a two-dimensional set:
B A1 A2 A3
1 + - -
2 - + -
3 - - +
In the following example, a DataPortal object loads data for a two-dimensional set D:
model = AbstractModel()
model.D = Set(dimen=2)
data = DataPortal()
data.load(filename='D.tab', set=model.D, format='set_array')
instance = model.create_instance(data)
The format option indicates that the set data is declared in an array format.
The param option is used to specify a Param component that is loaded with data.
The simplest parameter is simply a singleton value. Consider the file Z.tab:
1.1
In the following example, a DataPortal object loads data for a simple parameter z:
model = AbstractModel()
data = DataPortal()
model.z = Param()
data.load(filename='Z.tab', param=model.z)
instance = model.create_instance(data)
An indexed parameter can be defined by a single column in a table. For example, consider the file Y.tab:
A Y
A1 3.3
A2 3.4
A3 3.5
In the following example, a DataPortal object loads data for an indexed parameter y:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.y = Param(model.A)
data.load(filename='Y.tab', param=model.y)
instance = model.create_instance(data)
When column names are not used to specify the index and parameter data, then the DataPortal object assumes that
the rightmost column defines parameter values. In this file, the A column contains the index values, and the Y column
contains the parameter values.
Note that the data for set A is predefined in the previous example. The index set can be loaded with the parameter
data using the index option. In the following example, a DataPortal object loads data for set A and the indexed
parameter y:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.y = Param(model.A)
data.load(filename='Y.tab', param=model.y, index=model.A)
instance = model.create_instance(data)
An index set with multiple dimensions can also be loaded with an indexed parameter. Consider the file PP.tab:
A B PP
A1 B1 4.3
A2 B2 4.4
A3 B3 4.5
In the following example, a DataPortal object loads data for a tuple set and an indexed parameter:
model = AbstractModel()
data = DataPortal()
model.A = Set(dimen=2)
model.p = Param(model.A)
data.load(filename='PP.tab', param=model.p, index=model.A)
instance = model.create_instance(data)
Missing parameter data can be expressed in two ways. First, parameter data can be defined with indices that are a
subset of valid indices in the model. The following example loads the indexed parameter y:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3','A4'])
model.y = Param(model.A)
data.load(filename='Y.tab', param=model.y)
instance = model.create_instance(data)
The model defines an index set with four values, but only three parameter values are declared in the data file Y.tab.
Parameter data can also be declared with missing values using the period (.) symbol. For example, consider the file
S.tab:
A S
A1 3.3
A2 .
A3 3.5
In the following example, a DataPortal object loads data for the index set A and the indexed parameter s:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.s = Param(model.A)
data.load(filename='S.tab', param=model.s, index=model.A)
instance = model.create_instance(data)
The period (.) symbol indicates a missing parameter value, but the index set A contains the index value for the missing
parameter.
Multiple parameters can be initialized at once by specifying a list (or tuple) of component parameters. Consider the
file XW.tab:
A X W
A1 3.3 4.3
A2 3.4 4.4
A3 3.5 4.5
In the following example, a DataPortal object loads data for parameters x and w:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.x = Param(model.A)
model.w = Param(model.A)
data.load(filename='XW.tab', param=(model.x,model.w))
instance = model.create_instance(data)
We have previously noted that the column names do not need to be specified to load set and parameter data. However,
the select option can be used to identify the columns in the table that are used to load parameter data. This option
specifies a list (or tuple) of column names that are used, in that order, to form the table that defines the component
data.
For example, consider the following load declaration:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.w = Param(model.A)
data.load(filename='XW.tab', select=('A','W'),
          param=model.w)
instance = model.create_instance(data)
The columns A and W are selected from the file XW.tab, and a single parameter is defined.
Consider the file U.tab, which defines an array representation of a multiply-indexed parameter:
I A1 A2 A3
I1 1.3 2.3 3.3
I2 1.4 2.4 3.4
I3 1.5 2.5 3.5
I4 1.6 2.6 3.6
In the following example, a DataPortal object loads data for a two-dimensional parameter u:
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.I = Set(initialize=['I1','I2','I3','I4'])
model.u = Param(model.I, model.A)
data.load(filename='U.tab', param=model.u,
format='array')
instance = model.create_instance(data)
The format option indicates that the parameter data is declared in an array format. The format option can also
indicate that the parameter data should be transposed.
model = AbstractModel()
data = DataPortal()
model.A = Set(initialize=['A1','A2','A3'])
model.I = Set(initialize=['I1','I2','I3','I4'])
model.t = Param(model.A, model.I)
data.load(filename='U.tab', param=model.t,
format='transposed_array')
instance = model.create_instance(data)
Note that the transposed parameter data changes the index set for the parameter.
Tabular data can be loaded from spreadsheets and databases using auxiliary Python packages that provide an interface
to these data formats. Data can be loaded from Excel spreadsheets using the win32com, xlrd and openpyxl
packages. For example, consider the following range of cells, which is named PPtable:
In the following example, a DataPortal object loads the named range PPtable from the file excel.xls:
model = AbstractModel()
data = DataPortal()
model.A = Set(dimen=2)
model.p = Param(model.A)
data.load(filename='excel.xls', range='PPtable',
param=model.p, index=model.A)
instance = model.create_instance(data)
Note that the range option is required to specify the table of cell data that is loaded from the spreadsheet.
There are a variety of ways that data can be loaded from a relational database. In the simplest case, a table can be
specified within a database:
model = AbstractModel()
data = DataPortal()
model.A = Set(dimen=2)
model.p = Param(model.A)
data.load(filename='PP.sqlite', using='sqlite3',
table='PPtable',
param=model.p, index=model.A)
instance = model.create_instance(data)
In this example, the interface sqlite3 is used to load data from an SQLite database in the file PP.sqlite. More
generally, an SQL query can be specified to dynamically generate a table. For example:
model = AbstractModel()
data = DataPortal()
model.A = Set()
model.p = Param(model.A)
data.load(filename='PP.sqlite', using='sqlite3',
query="SELECT A,PP FROM PPtable",
param=model.p, index=model.A)
instance = model.create_instance(data)
Data Namespaces
The DataPortal class supports the concept of a namespace to organize data into named groups that can be enabled
or disabled during model construction. Various DataPortal methods have an optional namespace argument that
defaults to None:
• data(name=None, namespace=None): Returns the data associated with the specified name in the specified namespace
• []: For a DataPortal object data, the function data['A'] returns data corresponding to A in the default
namespace, and data['ns1','A'] returns data corresponding to A in namespace ns1.
• namespaces(): Returns an iterator over the data namespaces.
• keys(namespace=None): Returns an iterator of the data keys in the specified namespace.
• values(namespace=None): Returns an iterator of the data values in the specified namespace.
• items(namespace=None): Returns an iterator of (name, value) tuples in the specified namespace.
By default, data within a namespace are ignored during model construction. However, concrete models can be initial-
ized with data from a specific namespace. Further, abstract models can be initialized with a list of namespaces that
define the data used to initialize model components. For example, the following script generates two model instances
from an abstract model using data loaded into different namespaces:
model = AbstractModel()
model.C = Set(dimen=2)
data = DataPortal()
data.load(filename='C.tab', set=model.C, namespace='ns1')
data.load(filename='D.tab', set=model.C, namespace='ns2',
format='set_array')
instance1 = model.create_instance(data, namespaces=['ns1'])
instance2 = model.create_instance(data, namespaces=['ns2'])
Currently, Pyomo has rather limited capabilities for storing model data into standard Python data types and serialized
data formats. However, this capability is under active development.
TODO
More here.
The pyomo command is issued to the DOS prompt or a Unix shell. To see a list of Pyomo command line options, use:
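For example, the options for the solve subcommand can be listed with:
pyomo solve --help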
To pass arguments to a solver when using the pyomo solve command, append the Pyomo command line with the
argument --solver-options= followed by an argument that is a string to be sent to the solver (perhaps with
dashes added by Pyomo). So for most MIP solvers, the mip gap can be set using
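a command along these lines (the model and data file names are placeholders):
pyomo solve --solver=glpk --solver-options="mipgap=0.02" mymodel.py mymodel.dat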
Multiple options are separated by a space. Options that do not take an argument should be specified with the equals
sign followed by either a space or the end of the string.
For example, to specify that the solver is GLPK, then to specify a mipgap of two percent and the GLPK cuts option,
use
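a command along these lines (a sketch; note the trailing equals sign on the cuts option, which takes no argument):
pyomo solve --solver=glpk --solver-options="mipgap=0.02 cuts=" mymodel.py mymodel.dat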
If there are multiple “levels” to the keyword, as is the case for some Gurobi and CPLEX options, the tokens are sep-
arated by underscore. For example, mip cuts all would be specified as mip_cuts_all. For another example,
to set the solver to be CPLEX, then to set a mip gap of one percent and to specify ‘y’ for the sub-option numerical
to the option emphasis use
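a command along these lines (a sketch; the file names are placeholders and the exact option names depend on the CPLEX interface in use):
pyomo solve --solver=cplex --solver-options="mipgap=0.01 emphasis_numerical=y" mymodel.py mymodel.dat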
See Sending Options to the Solver for a discussion of passing options in a script.
7.3.2 Troubleshooting
Many of the things that can go wrong are covered by error messages, but sometimes they can be confusing or do not
provide enough information. Depending on what the troubles are, there might be ways to get a little additional
information.
If there are syntax errors in the model file, for example, it can occasionally be helpful to get error messages directly
from the Python interpreter rather than through Pyomo. Suppose the name of the model file is scuc.py; then the syntax errors can be seen directly by running
python scuc.py
If the model is constructed without syntax errors but does not behave as expected, it can sometimes help to print
model components as they are constructed, for example by adding a line such as model.I.pprint() immediately
before the lines

# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)

The effect will be to output every member of the set model.I at the time the constraint named
model.AxbConstraint is constructed.
In many applications, the default solver interface works well. However, in some cases it is useful to specify the
interface using the solver-io option. For example, if the solver supports a direct Python interface, then the option
would be specified on the command line as
--solver-io=python
The valid values for the solver-io option include:
• lp: generate a standard linear programming format file with filename extension lp
• nl: generate a file with a standard format that supports linear and nonlinear optimization with filename extension nl
• os: generate an OSiL format XML file.
• python: use the direct Python interface.
This is a somewhat advanced topic. In some cases, it is desirable to trigger actions to be done as part of the model
building process. The BuildAction function provides this capability in a Pyomo model. It takes as arguments
optional index sets and a function to perform the action. For example,
model.BuildBpts = BuildAction(model.J, rule=bpts_build)
calls the function bpts_build for each member of model.J. The function bpts_build should have the model
and a variable for the members of model.J as formal arguments. In this example, the following would be a valid
declaration for the function:
def bpts_build(model, j):
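For illustration only, a hypothetical body (the breakpoint values are made up and are not those used in the full example below) might place a list of breakpoints into a dictionary attached to the model:

def bpts_build(model, j):
    # create the dictionary on the first call, then store breakpoints for index j
    if not hasattr(model, 'bpts'):
        model.bpts = {}
    model.bpts[j] = [0.0, 1.0, 2.0]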
A full example, which extends the Symbolic Index Sets and Piecewise Linear Expressions examples, is
# abstract2piecebuild.py
# Similar to abstract2piece.py, but the breakpoints are created using a build action
from pyomo.environ import *

model = AbstractModel()
model.I = Set()
model.J = Set()
def obj_expression(model):
    return summation(model.c, model.y)
model.OBJ = Objective(rule=obj_expression)
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
This example uses the build action to create a model component with breakpoints for a Piecewise Linear Expressions
function. The BuildAction is triggered by the assignment to model.BuildBpts. This object is not referenced
again; its only purpose is to cause the execution of bpts_build, which places data in the model.bpts dictionary.
Note that if model.bpts had been a Set, then it could have been created with an initialize argument to the
Set declaration. Since it is a special-purpose dictionary to support the Piecewise Linear Expressions functionality in
Pyomo, we use a BuildAction.
Another application of BuildAction is the initialization of Pyomo model data from Python data structures, or the
efficient initialization of Pyomo model data from other Pyomo model data. Consider the Sparse Index Sets example.
Rather than initializing the sets NodesIn and NodesOut separately with initialize rules, it is a little more
efficient, and probably a little clearer, to use a build action.
The full model is:
# Isinglebuild.py
# NodesIn and NodesOut are created by a build action using the Arcs
from pyomo.environ import *
model = AbstractModel()
model.Nodes = Set()
model.Arcs = Set(dimen=2)
model.NodesOut = Set(model.Nodes, within=model.Nodes, initialize=[])
model.NodesIn = Set(model.Nodes, within=model.Nodes, initialize=[])

def Populate_In_and_Out(model):
    # loop over the arcs and put the end points in the appropriate places
    for (i, j) in model.Arcs:
        model.NodesIn[j].add(i)
        model.NodesOut[i].add(j)
model.In_n_Out = BuildAction(rule=Populate_In_and_Out)
model.FlowCost = Param(model.Arcs)
model.Demand = Param(model.Nodes)
model.Supply = Param(model.Nodes)

# Flow on each arc is the decision variable (the domain here is an assumption)
model.Flow = Var(model.Arcs, domain=NonNegativeReals)

def Obj_rule(model):
    return summation(model.FlowCost, model.Flow)
model.Obj = Objective(rule=Obj_rule, sense=minimize)
For this model, the same data file can be used as for Isinglecomm.py in Sparse Index Sets, such as the following toy data file:
set Arcs :=
CityA CityB
CityA CityC
CityC CityB
;
param : FlowCost :=
CityA CityB 1.4
CityA CityC 2.7
CityC CityB 1.6
;
param Demand :=
CityA 0
CityB 1
CityC 1
;
param Supply :=
CityA 2
CityB 0
CityC 0
;
Build actions can also be a way to implement data validation, particularly when multiple Sets or Parameters must
be analyzed. However, the BuildCheck component is preferred for this purpose. It executes its rule just like a
BuildAction but will terminate the construction of the model instance if the rule returns False.
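For example, continuing the Isinglebuild.py model above, a BuildCheck along these lines (the check itself is hypothetical) would abort instance construction if total supply cannot cover total demand:

def supply_covers_demand_check(model):
    # returning False terminates construction of the model instance
    return sum(value(model.Supply[n]) for n in model.Nodes) >= \
           sum(value(model.Demand[n]) for n in model.Nodes)
model.CheckSupply = BuildCheck(rule=supply_covers_demand_check)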
Modeling Extensions
The pyomo.DAE modeling extension [PyomoDAE] allows users to incorporate systems of differential algebraic equations
(DAEs) in a Pyomo model. The modeling components in this extension are able to represent ordinary or partial
differential equations. The differential equations do not have to be written in a particular format and the components
are flexible enough to represent higher-order derivatives or mixed partial derivatives. Pyomo.DAE also includes model
transformations which use simultaneous discretization approaches to transform a DAE model into an algebraic model.
Finally, pyomo.DAE includes utilities for simulating DAE models and initializing dynamic optimization problems.
As will be shown later, differential equations can be declared using these new modeling components along with
the standard Pyomo Var and Constraint components.
ContinuousSet
This component is used to define continuous bounded domains (for example ‘spatial’ or ‘time’ domains). It is
similar to a Pyomo Set component and can be used to index things like variables and constraints. Any number
of ContinuousSets can be used to index a component and components can be indexed by both Sets and
ContinuousSets in arbitrary order.
In the current implementation, models with ContinuousSet components may not be solved until every
ContinuousSet has been discretized. Minimally, a ContinuousSet must be initialized with two numeric val-
ues representing the upper and lower bounds of the continuous domain. A user may also specify additional points in
the domain to be used as finite element points in the discretization.
class pyomo.dae.ContinuousSet(*args, **kwds)
Represents a bounded continuous domain
Minimally, this set must contain two numeric values defining the bounds of a continuous range. Discrete points
of interest may be added to the continuous set. A continuous set is one dimensional and may only contain
numerical values.
Parameters
• initialize (list) – Default discretization points to be included
• bounds (tuple) – The bounding points for the continuous domain. The bounds will be
included as discrete points in the ContinuousSet but will not be used to restrict points
added to the ContinuousSet through the ‘initialize’ argument, a data file, or the add()
method
_changed
This keeps track of whether or not the ContinuousSet was changed during discretization. If the user
specifies all of the needed discretization points before the discretization then there is no need to go back
through the model and reconstruct things indexed by the ContinuousSet
Type boolean
_fe
This is a sorted list of the finite element points in the ContinuousSet. i.e. this list contains all the
discrete points in the ContinuousSet that are not collocation points. Points that are both finite element
points and collocation points will be included in this list.
Type list
_discretization_info
This is a dictionary which contains information on the discretization transformation which has been applied
to the ContinuousSet.
Type dict
construct(values=None)
Constructs a ContinuousSet component
get_changed()
Returns flag indicating if the ContinuousSet was changed during discretization
Returns “True” if additional points were added to the ContinuousSet while applying a discretization
scheme
Returns
Return type boolean
get_discretization_info()
Returns a dict with information on the discretization scheme that has been applied to the
ContinuousSet.
Returns
Return type dict
get_finite_elements()
Returns the finite element points
If the ContinuousSet has been discretized using a collocation scheme, this method will return a list
of the finite element discretization points but not the collocation points within each finite element. If the
ContinuousSet has not been discretized or a finite difference discretization was used, this method
returns a list of all the discretization points in the ContinuousSet.
Returns
Return type list of floats
get_lower_element_boundary(point)
Returns the first finite element point that is less than or equal to ‘point’
Parameters point (float) –
Returns
Return type float
get_upper_element_boundary(point)
Returns the first finite element point that is greater than or equal to 'point'
Parameters point (float) –
Returns
Return type float
set_changed(newvalue)
Sets the _changed flag to ‘newvalue’
Parameters newvalue (boolean) –
The following code snippet shows examples of declaring a ContinuousSet component on a concrete Pyomo model:
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
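A sketch of declaring ContinuousSet components on a concrete model (the bounds and initialization points are illustrative):

>>> m = ConcreteModel()
>>> m.t = ContinuousSet(bounds=(0, 5))
>>> m.x = ContinuousSet(initialize=[0, 1, 2, 3, 4, 5])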
Note: A ContinuousSet may not be constructed unless at least two numeric points are provided to bound the
continuous domain.
The following code snippet shows an example of declaring a ContinuousSet component on an abstract Pyomo
model using the example data file.
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
Note: If a separate data file is used to initialize a ContinuousSet, it is done using the ‘set’ command and not
‘continuousset’
Note: Most valid ways to declare and initialize a Set can be used to declare and initialize a ContinuousSet. See
the documentation for Set for additional options.
Warning: Be careful using a ContinuousSet as an implicit index in an expression, i.e. sum(m.v[i] for
i in m.myContinuousSet). The expression will be generated using the discretization points contained in
the ContinuousSet at the time the expression was constructed and will not be updated if additional points are
added to the set during discretization.
Note: ContinuousSet components are always ordered (sorted); therefore, the first() and last() Set methods
can be used to access the lower and upper boundaries of the ContinuousSet, respectively.
DerivativeVar
get_continuousset_list()
Return the list of ContinuousSet components that the derivative is being taken with respect to.
Returns
Return type list
get_derivative_expression()
Returns the current discretization expression for this derivative or creates an access function to its Var
the first time this method is called. The expression gets built up as the discretization transformations are
sequentially applied to each ContinuousSet in the model.
get_state_var()
Return the Var that is being differentiated.
Returns
Return type Var
is_fully_discretized()
Check to see if all the ContinuousSets this derivative is taken with respect to have been discretized.
Returns
Return type boolean
set_derivative_expression(expr)
Sets ``_expr``, an expression representing the discretization equations linking the DerivativeVar to its
state Var
The code snippet below shows examples of declaring DerivativeVar components on a Pyomo model. In each
case, the variable being differentiated is supplied as the only positional argument and the type of derivative is specified
using the ‘wrt’ (or the more verbose ‘withrespectto’) keyword argument. Any keyword argument that is valid for a
Pyomo Var component may also be specified.
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
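A sketch of the setup assumed by the declaration below (the bounds are illustrative; the names model.z, model.t and model.l match the snippet that follows):

>>> model = ConcreteModel()
>>> model.t = ContinuousSet(bounds=(0, 10))
>>> model.l = ContinuousSet(bounds=(0, 1))
>>> model.z = Var(model.t, model.l)

Declare the first order derivative of model.z with respect to model.t

>>> model.dzdt = DerivativeVar(model.z, wrt=model.t)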
Declare the mixed second order partial derivative of model.z with respect
to model.t and model.l and set bounds
>>> model.dz2 = DerivativeVar(model.z, wrt=(model.t, model.l), bounds=(-10, 10))
Note: The ‘initialize’ keyword argument will initialize the value of a derivative and is not the same as specifying an
initial condition. Initial or boundary conditions should be specified using a Constraint or ConstraintList or
by fixing the value of a Var at a boundary point.
A differential equation is declared as a standard Pyomo Constraint and is not required to have any particular
form. The following code snippet shows how one might declare an ordinary or partial differential equation.
Required imports
>>> from pyomo.environ import *
>>> from pyomo.dae import *
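A sketch of declaring an ODE as a standard Constraint (the names and the right-hand side are illustrative), skipping enforcement at the lower boundary with Constraint.Skip:

>>> model = ConcreteModel()
>>> model.s = Set(initialize=['a', 'b'])
>>> model.t = ContinuousSet(bounds=(0, 5))
>>> model.x = Var(model.s, model.t)
>>> model.dxdt = DerivativeVar(model.x, wrt=model.t)

>>> def _ode_rule(m, s, t):
...     if t == 0:
...         return Constraint.Skip
...     return m.dxdt[s, t] == m.x[s, t]**2
>>> model.ode = Constraint(model.s, model.t, rule=_ode_rule)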
By default, a Constraint declared over a ContinuousSet will be applied at every discretization point contained
in the set. Often a modeler does not want to enforce a differential equation at one or both boundaries of a continuous
domain. This may be addressed explicitly in the Constraint declaration using Constraint.Skip as shown
above. Alternatively, the desired constraints can be deactivated just before the model is sent to a solver as shown
below.
>>> def _ode_rule(m, s, t):
... return m.dxdt[s, t] == m.x[s, t]**2
>>> model.ode = Constraint(model.s, model.t, rule=_ode_rule)
Note: If you intend to use the pyomo.DAE Simulator on your model then you must use constraint deactivation
instead of constraint skipping in the differential equation rule.
Warning: The Integral component is still under development and considered a prototype. It currently includes
only basic functionality for simple integrals. We welcome feedback on the interface and functionality but we do
not recommend using it on general models. Instead, integrals should be reformulated as differential equations.
Notice that the positional arguments supplied to the Integral declaration must include all indices needed to evaluate
the integral expression. The integral expression is defined in a function and supplied to the ‘rule’ keyword argument.
Finally, a user must specify a ContinuousSet that the integral is being evaluated over. This is done using the ‘wrt’
keyword argument.
Note: The ContinuousSet specified using the ‘wrt’ keyword argument must be explicitly specified as one of the
indexing sets (meaning it must be supplied as a positional argument). This is to ensure consistency in the ordering and
dimension of the indexing sets
After an Integral has been declared, it can be used just like a Pyomo Expression component and can be
included in constraints or the objective function as shown above.
If an Integral is specified with multiple positional arguments, i.e. multiple indexing sets, the final component will
be indexed by all of those sets except for the ContinuousSet that the integral was taken over. In other words, the
ContinuousSet specified with the ‘wrt’ keyword argument is removed from the indexing sets of the Integral
even though it must be specified as a positional argument. This should become more clear with the following example
showing a double integral over the ContinuousSet components model.t1 and model.t2. In addition, the
expression is also indexed by the Set model.s. The mathematical representation and implementation in Pyomo are
shown below:
\sum_{s} \int_{t_2} \int_{t_1} X(t_1, t_2, s) \, dt_1 \, dt_2
Before a Pyomo model with DerivativeVar or Integral components can be sent to a solver it must first
be sent through a discretization transformation. These transformations approximate any derivatives or integrals in
the model by using a numerical method. The numerical methods currently included in pyomo.DAE discretize the
continuous domains in the problem and introduce equality constraints which approximate the derivatives and integrals
at the discretization points. Two families of discretization schemes have been implemented in pyomo.DAE, Finite
Difference and Collocation. These schemes are described in more detail below.
Note: The schemes described here are for derivatives only. All integrals will be transformed using the trapezoid rule.
The user must write a Python script in order to use these discretizations; they have not been tested on the pyomo
command line. Example scripts are shown below for each of the discretization schemes. The transformations are
applied to Pyomo model objects which can be further manipulated before being sent to a solver. Examples of this are
also shown below.
This transformation includes implementations of several finite difference methods. For example, the Backward Dif-
ference method (also called Implicit or Backward Euler) has been implemented. The discretization equations for this
method are shown below:
Given:

\frac{dx}{dt} = f(t, x), \quad x(t_0) = x_0

discretize t and x such that

x(t_0 + kh) = x_k

x_{k+1} = x_k + h \, f(t_{k+1}, x_{k+1})

t_{k+1} = t_k + h
where ℎ is the step size between discretization points or the size of each finite element. These equations are generated
automatically as Constraints when the backward difference method is applied to a Pyomo model.
There are several discretization options available to a dae.finite_difference transformation which can be
specified as keyword arguments to the .apply_to() function of the transformation object. These keywords are
summarized below:
Keyword arguments for applying a finite difference transformation:
‘nfe’ The desired number of finite element points to be included in the discretization. The default value is 10.
‘wrt’ Indicates which ContinuousSet the transformation should be applied to. If this keyword argument is not
specified then the same scheme will be applied to every ContinuousSet .
‘scheme’ Indicates which finite difference method to apply. Options are ‘BACKWARD’, ‘CENTRAL’, or ‘FOR-
WARD’. The default scheme is the backward difference method.
If the existing number of finite element points in a ContinuousSet is less than the desired number, new discretiza-
tion points will be added to the set. If a user specifies a number of finite element points which is less than the number
of points already included in the ContinuousSet then the transformation will ignore the specified number and
proceed with the larger set of points. Discretization points will never be removed from a ContinuousSet during the
discretization.
The following code is a Python script applying the backward difference method. The code also shows how to add a
constraint to a discretized model.
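A minimal sketch of such a script (the model itself is illustrative, not the original example):

from pyomo.environ import *
from pyomo.dae import *

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 10))
m.x = Var(m.t)
m.dxdt = DerivativeVar(m.x, wrt=m.t)
m.x[0].fix(1.0)

def _ode(m, t):
    return m.dxdt[t] == -m.x[t]
m.ode = Constraint(m.t, rule=_ode)

# apply the backward difference method with 20 finite elements
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=20, wrt=m.t, scheme='BACKWARD')

# a constraint added after discretization, e.g. bounding the final state
m.final_con = Constraint(expr=m.x[10] >= 0.1)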
Collocation Transformation
This transformation uses orthogonal collocation to discretize the differential equations in the model. Currently, two
types of collocation have been implemented. They both use Lagrange polynomials with either Gauss-Radau roots or
Gauss-Legendre roots. For more information on orthogonal collocation and the discretization equations associated
with this method please see chapter 10 of the book “Nonlinear Programming: Concepts, Algorithms, and Applications
to Chemical Processes” by L.T. Biegler.
The discretization options available to a dae.collocation transformation are the same as those described above
for the finite difference transformation with different available schemes and the addition of the ‘ncp’ option.
Additional keyword arguments for collocation discretizations:
‘scheme’ The desired collocation scheme, either ‘LAGRANGE-RADAU’ or ‘LAGRANGE-LEGENDRE’. The de-
fault is ‘LAGRANGE-RADAU’.
‘ncp’ The number of collocation points within each finite element. The default value is 3.
Note: If the user’s version of Python has access to the package Numpy then any number of collocation points may be
specified, otherwise the maximum number is 10.
Note: Any points that exist in a ContinuousSet before discretization will be used as finite element boundaries
and not as collocation points. The locations of the collocation points cannot be specified by the user, they must be
generated by the transformation.
The following code is a Python script applying collocation with Lagrange polynomials and Radau roots. The code
also shows how to add an objective function to a discretized model.
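A minimal sketch of such a script (the model is illustrative):

from pyomo.environ import *
from pyomo.dae import *

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 10))
m.x = Var(m.t)
m.u = Var(m.t)
m.dxdt = DerivativeVar(m.x, wrt=m.t)
m.x[0].fix(1.0)

def _ode(m, t):
    return m.dxdt[t] == -m.x[t] + m.u[t]
m.ode = Constraint(m.t, rule=_ode)

# apply collocation with Lagrange polynomials and Radau roots
discretizer = TransformationFactory('dae.collocation')
discretizer.apply_to(m, nfe=10, ncp=3, scheme='LAGRANGE-RADAU')

# an objective function added to the discretized model
m.obj = Objective(expr=sum(m.x[t]**2 + m.u[t]**2 for t in m.t))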
When solving an optimal control problem a user may want to restrict the number of degrees of free-
dom for the control input by forcing, for example, a piecewise constant profile. Pyomo.DAE provides the
reduce_collocation_points function to address this use-case. This function is used in conjunction with
the dae.collocation discretization transformation to reduce the number of free collocation points within a finite
element for a particular variable.
class pyomo.dae.plugins.colloc.Collocation_Discretization_Transformation
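A sketch of the intended usage (the model, variable and ContinuousSet names are illustrative, not the original example):

discretizer = TransformationFactory('dae.collocation')
discretizer.apply_to(model, wrt=model.t, nfe=10, ncp=6)
discretizer.reduce_collocation_points(model, var=model.u, ncp=1, contset=model.t)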
In the above example, the reduce_collocation_points function restricts the variable model.u to have only
1 free collocation point per finite element, thereby enforcing a piecewise constant profile. Fig. 8.1 shows the solution
profile before and after applying the reduce_collocation_points function.
Discretizations can be applied independently to each ContinuousSet in a model. This allows the user great flexi-
bility in discretizing their model. For example the same numerical method can be applied with different resolutions:
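For instance (a sketch; the ContinuousSet names t1 and t2 are illustrative):

discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(model, wrt=model.t1, nfe=10)
discretizer.apply_to(model, wrt=model.t2, nfe=100)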
This also allows the user to combine different methods. For example, applying the forward difference method to one
ContinuousSet and the central finite difference method to another ContinuousSet:
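A sketch of mixing schemes (names are illustrative):

TransformationFactory('dae.finite_difference').apply_to(model, wrt=model.t1, scheme='FORWARD')
TransformationFactory('dae.finite_difference').apply_to(model, wrt=model.t2, scheme='CENTRAL')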
In addition, the user may combine finite difference and collocation discretizations. For example:
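A sketch (names are illustrative):

TransformationFactory('dae.finite_difference').apply_to(model, wrt=model.t1, nfe=10)
TransformationFactory('dae.collocation').apply_to(model, wrt=model.t2, nfe=5, ncp=3)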
Fig. 8.1: (left) Profile before applying the reduce_collocation_points function (right) Profile after applying
the function, restricting model.u to have a piecewise constant profile.
If the user would like to apply the same discretization to all ContinuousSet components in a model, just specify
the discretization once without the ‘wrt’ keyword argument. This will apply that scheme to all ContinuousSet
components in the model that haven’t already been discretized.
A transformation framework along with certain utility functions has been created so that advanced users may easily
implement custom discretization schemes other than those listed above. The transformation framework consists of the
following steps:
1. Specify Discretization Options
2. Discretize the ContinuousSet(s)
3. Update Model Components
4. Add Discretization Equations
5. Return Discretized Model
If a user would like to create a custom finite difference scheme then they only have to worry about step (4) in the
framework. The discretization equations for a particular scheme have been isolated from of the rest of the code for
implementing the transformation. The function containing these discretization equations can be found at the top of the
source code file for the transformation. For example, below is the function for the forward difference method:
def _forward_transform(v, s):
    """
    Applies the Forward Difference formula of order O(h) for first derivatives
    """
    def _fwd_fun(i):
        tmp = sorted(s)
        idx = tmp.index(i)
        return 1 / (tmp[idx + 1] - tmp[idx]) * (v(tmp[idx + 1]) - v(tmp[idx]))
    return _fwd_fun
In this function, ‘v’ represents the continuous variable or function that the method is being applied to. ‘s’ represents
the set of discrete points in the continuous domain. In order to implement a custom finite difference method, a user
would have to copy the above function and just replace the equation next to the first return statement with their method.
After implementing a custom finite difference method using the above function template, the only other change that
must be made is to add the custom method to the ‘all_schemes’ dictionary in the dae.finite_difference class.
In the case of a custom collocation method, changes will have to be made in steps (2) and (4) of the transformation
framework. In addition to implementing the discretization equations, the user would also have to ensure that the
desired collocation points are added to the ContinuousSet being discretized.
The pyomo.dae Simulator class can be used to simulate systems of ODEs and DAEs. It provides an interface to
integrators available in other Python packages.
Note: The pyomo.dae Simulator does not include integrators directly. The user must have at least one of the supported
Python packages installed in order to use this class.
simulate(**kwds)
Returns The first return value is a 1D array of time points corresponding to the second return
value which is a 2D array of the profiles for the simulated differential and algebraic variables.
Return type numpy array, numpy array
Note: Any keyword options supported by the integrator may be specified as keyword options to the simulate function
and will be passed to the integrator.
The Simulator currently includes interfaces to SciPy and CasADi. ODE simulation is supported in both packages
however, DAE simulation is only supported by CasADi. A list of available integrators for each package is given
below. Please refer to the SciPy and CasADi documentation directly for the most up-to-date information about these
packages and for more information about the various integrators and options.
SciPy Integrators:
• ‘vode’ : Real-valued Variable-coefficient ODE solver, options for non-stiff and stiff systems
• ‘zvode’ : Complex-valued Variable-coefficient ODE solver, options for non-stiff and stiff systems
• ‘lsoda’ : Real-valued Variable-coefficient ODE solver, automatic switching of algorithms for non-stiff or
stiff systems
• ‘dopri5’ : Explicit runge-kutta method of order (4)5 ODE solver
• ‘dop853’ : Explicit runge-kutta method of order 8(5,3) ODE solver
CasADi Integrators:
• ‘cvodes’ : CVodes from the Sundials suite, solver for stiff or non-stiff ODE systems
• ‘idas’ : IDAS from the Sundials suite, DAE solver
• ‘collocation’ : Fixed-step implicit runge-kutta method, ODE/DAE solver
• ‘rk’ : Fixed-step explicit runge-kutta method, ODE solver
We now show how to use the Simulator to simulate the following system of ODEs:
\frac{d\theta}{dt} = \omega

\frac{d\omega}{dt} = -b\,\omega - c\,\sin(\theta)
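A sketch of such a model (the parameter values and initial conditions are illustrative; b and c are the constants appearing in the equations above):

from pyomo.environ import *
from pyomo.dae import *

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0.0, 10.0))
m.b = Param(initialize=0.25)
m.c = Param(initialize=5.0)
m.theta = Var(m.t)
m.omega = Var(m.t)
m.dthetadt = DerivativeVar(m.theta, wrt=m.t)
m.domegadt = DerivativeVar(m.omega, wrt=m.t)

# initial conditions are set by fixing the variable values at t=0
m.theta[0.0].fix(3.04)
m.omega[0.0].fix(0.0)

def _diffeq1(m, t):
    return m.dthetadt[t] == m.omega[t]
m.diffeq1 = Constraint(m.t, rule=_diffeq1)

def _diffeq2(m, t):
    return m.domegadt[t] == -m.b * m.omega[t] - m.c * sin(m.theta[t])
m.diffeq2 = Constraint(m.t, rule=_diffeq2)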
Notice that the initial conditions are set by fixing the values of m.omega and m.theta at t=0 instead of be-
ing specified as extra equality constraints. Also notice that the differential equations are specified without using
Constraint.Skip to skip enforcement at t=0. The Simulator cannot simulate any constraints that contain if-
statements in their construction rules.
To simulate the model you must first create a Simulator object. Building this object prepares the Pyomo model for
simulation with a particular Python package and performs several checks on the model to ensure compatibility with
the Simulator. Be sure to read through the list of limitations at the end of this section to understand the types of models
supported by the Simulator.
After creating a Simulator object, the model can be simulated by calling the simulate function. Please see the API
documentation for the Simulator for more information about the valid keyword arguments for this function.
The simulate function returns numpy arrays containing time points and the corresponding values for the dynamic
variable profiles.
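A sketch, continuing the model above (the integrator choice and number of time points are illustrative):

sim = Simulator(m, package='scipy')
tsim, profiles = sim.simulate(numpoints=100, integrator='vode')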
Simulator Limitations:
• Differential equations must be first-order and separable
• Model can only contain a single ContinuousSet
• Can’t simulate constraints with if-statements in the construction rules
• Need to provide initial conditions for dynamic states by setting the value or using fix()
The Simulator supports simulation of a system of ODEs or DAEs with time-varying parameters or control inputs.
Time-varying inputs can be specified using a Pyomo Suffix. We currently only support piecewise constant profiles.
For more complex inputs defined by a continuous function of time we recommend adding an algebraic variable and
constraint to your model.
The profile for a time-varying input should be specified using a Python dictionary where the keys correspond to the
switching times and the values correspond to the value of the input at a time point. A Suffix is then used to associate
this dictionary with the appropriate Var or Param and pass the information to the Simulator. The code snippet
below shows an example.
>>> m = ConcreteModel()
Time-varying inputs
>>> m.b = Var(m.t)
>>> m.c = Param(m.t, default=5.0)
Note: The Simulator does not support multi-indexed inputs (i.e. if m.b in the above example was indexed by another
set besides m.t)
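A sketch of associating a piecewise constant profile with m.b from the snippet above and passing it to the Simulator (the suffix name, switching times and values are illustrative):

m.var_input = Suffix(direction=Suffix.LOCAL)
# value 1.0 on [0, 2), 2.0 on [2, 4), 3.0 from t=4 onward
m.var_input[m.b] = {0: 1.0, 2: 2.0, 4: 3.0}

sim = Simulator(m, package='casadi')
tsim, profiles = sim.simulate(numpoints=100, varying_inputs=m.var_input)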
Providing a good initial guess is an important factor in solving dynamic optimization problems. There are several
model initialization tools under development in pyomo.DAE to help users initialize their models. These tools will be
documented here as they become available.
From Simulation
The Simulator includes a function for initializing discretized dynamic optimization models using the profiles re-
turned from the simulator. An example using this function is shown below
Note: A model must be simulated before it can be initialized using this function
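A sketch of the intended workflow (names follow the sketches above):

# simulate the model
sim = Simulator(m, package='scipy')
tsim, profiles = sim.simulate(numpoints=100)

# discretize the model using collocation
TransformationFactory('dae.collocation').apply_to(m, nfe=10, ncp=3)

# initialize the discretized model using the simulation profiles
sim.initialize_model()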
8.3 MPEC
pyomo.mpec supports modeling complementarity conditions and optimization problems with equilibrium con-
straints.
The Pyomo.GDP modeling extension allows users to include logical disjunctions in their models. These disjunctions
are often used to model discrete decisions that have implications on the system behavior. For example, in process
design, a disjunction may model the choice between processes A and B. If A is selected, then its associated equations
and inequalities will apply; otherwise, if B is selected, then its respective constraints should be enforced. In the general
case, if these models contain nonlinear relations, then they are Generalized Disjunctive Programming (GDP) models.
8.4.1 Disjunctions
A disjunction is a set of constraint groupings that are linked by a logical OR relationship. The simplest case is a 2-term
disjunction:
𝐷1 ∨ 𝐷2
That is, either the constraints in the collection D1 are enforced, OR the constraints in the collection D2 are enforced.
In Pyomo, we model each collection using a special type of block called a Disjunct. Each Disjunct is a block
that contains an implicitly declared binary variable, "indicator_var", that is 1 when the constraints in that Disjunct
are enforced and 0 otherwise.
Declaration
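A sketch of the kind of declaration being described (the variable, bounds and constraints are illustrative, not the original example):

from pyomo.environ import *
from pyomo.gdp import *

model = ConcreteModel()
model.x = Var(bounds=(0, 10))

# an indexed Disjunct over the implicit index set {0, 1}
def _d(disjunct, flag):
    model = disjunct.model()
    if flag:
        disjunct.c = Constraint(expr=model.x >= 6)
    else:
        disjunct.c = Constraint(expr=model.x <= 4)
model.d = Disjunct([0, 1], rule=_d)

# the _c rule returns the list of disjuncts that form the disjunction
def _c(model):
    return [model.d[0], model.d[1]]
model.c = Disjunction(rule=_c)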
Model.d is an indexed Disjunct that is indexed over an implicit set with members 0 and 1. Since it is an indexed
component, each member is initialized using a call to a rule, passing in the index value (just like any other Pyomo
component). However, just defining disjuncts is not sufficient to define disjunctions, as Pyomo has no way of knowing
which disjuncts should be bundled into which disjunctions. To define a disjunction, you use a Disjunction component.
The disjunction takes either a rule or an expression that returns a list of disjuncts over which it should form
the disjunction. This is what the _c function in the example returns.
Note: There is no requirement that disjuncts be indexed and also no requirement that they be defined using a shared
rule. It was done in this case to create a condensed example.
Transformation
To use standard commercial solvers, you must convert the disjunctive model to a standard MIP/MINLP model. The
two classical strategies for doing so are the (included) Big-M and Hull reformulations. From the Pyomo command
line, include the option --transform pyomo.gdp.bigm or --transform pyomo.gdp.chull. If you are
using a Python script, TransformationFactory accomplishes the same functionality:
• TransformationFactory('gdp.bigm').apply_to(model)
• TransformationFactory('gdp.chull').apply_to(model)
Note:
• all variables that appear in disjuncts need upper and lower bounds for chull
• for linear models, the BigM transform can estimate reasonably tight M values for you if variables are bounded.
• for nonlinear models where finite expression bounds may be inferred from variable bounds, the BigM transfor-
mation may also be able to automatically compute M values for you.
• for all other models, you will need to provide the M values through a "BigM" Suffix (see the sketch after this
list). A GDP_Error will be raised for missing M values.
• When you declare a Disjunct, it (at declaration time) will automatically have a variable “indicator_var” defined
and attached to it. After that, it is just a Var like any other Var.
• The hull reformulation is an exact reformulation at the solution points even for nonconvex models, but the
resulting MINLP will also be nonconvex.
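For example, a script applying the Big-M reformulation to a model such as the one sketched above might look
like the following (the Suffix-based default M value and the choice of solver are illustrative assumptions):

from pyomo.environ import TransformationFactory, SolverFactory, Suffix

# provide a default M value through a "BigM" Suffix; it is only needed for
# constraints whose M values cannot be computed from variable bounds
model.BigM = Suffix(direction=Suffix.LOCAL)
model.BigM[None] = 100.0

TransformationFactory('gdp.bigm').apply_to(model)
results = SolverFactory('glpk').solve(model)  # any MIP solver available locally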
Pyomo includes the contributed GDPopt solver, which can directly solve GDP models. Its documentation and usage
are described at GDPopt logic-based solver.
Examples

8.5 Stochastic Programming
To express a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree
model with associated uncertain parameters. Both concrete and abstract model representations are supported.
Given the deterministic and scenario tree models, PySP provides multiple paths for the solution of the corresponding
stochastic program. One alternative involves forming the extensive form and invoking an appropriate deterministic
solver for the entire problem once. For more complex stochastic programs, we provide a generic implementation
of Rockafellar and Wets’ Progressive Hedging algorithm, with additional specializations for approximating mixed-
integer stochastic programs as well as other decomposition methods. By leveraging the combination of a high-level
programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are
able to provide completely generic and highly configurable solver implementations.
This section describes PySP (Pyomo Stochastic Programming), which supports optimization models in which
parameters are allowed to be uncertain.
Birge and Louveaux [BirgeLouveauxBook] make use of the example of a farmer who has 500 acres that can be planted
in wheat, corn or sugar beets, at a per acre cost of 150, 230 and 260 (Euros, presumably), respectively. The farmer
needs to have at least 200 tons of wheat and 240 tons of corn to use as feed, but if not enough is grown, those crops
can be purchased for 238 and 210, respectively. Corn and wheat grown in excess of the feed requirements can be sold
for 170 and 150, respectively. A price of 36 per ton is guaranteed for the first 6000 tons grown by any farmer, but
beets in excess of that are sold for 10 per ton. The yield is 2.5, 3, and 20 tons per acre for wheat, corn and sugar beets,
respectively.
ReferenceModel.py
So far, this is a deterministic problem because we are assuming that we know all the data. An abridged version of
the Pyomo model for this problem is shown here; the full model is in the file ReferenceModel.py in the sub-directory
examples/pysp/farmer/models that is distributed with Pyomo.
# ___________________________________________________________________________
#
# Pyomo: Python Optimization Modeling Objects
# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
# Under the terms of Contract DE-NA0003525 with National Technology and
# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
# rights in this software.
# This software is distributed under the 3-clause BSD License.
# ___________________________________________________________________________

from pyomo.core import *

#
# Model
#
model = AbstractModel()
#
# Parameters
#
model.CROPS = Set()
model.TOTAL_ACREAGE = Param(within=PositiveReals)
model.CattleFeedRequirement = Param(model.CROPS, within=NonNegativeReals)
model.PurchasePrice = Param(model.CROPS, within=PositiveReals)
model.SubQuotaSellingPrice = Param(model.CROPS, within=PositiveReals)
model.SuperQuotaSellingPrice = Param(model.CROPS, within=NonNegativeReals)
model.PlantingCostPerAcre = Param(model.CROPS, within=PositiveReals)
model.Yield = Param(model.CROPS, within=NonNegativeReals)
#
# Variables
#
model.DevotedAcreage = Var(model.CROPS, bounds=(0.0, model.TOTAL_ACREAGE))
model.QuantitySubQuotaSold = Var(model.CROPS, bounds=(0.0, None))
model.QuantitySuperQuotaSold = Var(model.CROPS, bounds=(0.0, None))
model.QuantityPurchased = Var(model.CROPS, bounds=(0.0, None))
#
# Constraints
#
def ConstrainTotalAcreage_rule(model):
    return summation(model.DevotedAcreage) <= model.TOTAL_ACREAGE
model.ConstrainTotalAcreage = Constraint(rule=ConstrainTotalAcreage_rule)

def EnforceCattleFeedRequirement_rule(model, i):
    return model.CattleFeedRequirement[i] <= \
        (model.Yield[i] * model.DevotedAcreage[i]) + \
        model.QuantityPurchased[i] - \
        model.QuantitySubQuotaSold[i] - \
        model.QuantitySuperQuotaSold[i]
model.EnforceCattleFeedRequirement = Constraint(model.CROPS,
                                                rule=EnforceCattleFeedRequirement_rule)
#
# Stage-specific cost computations
#
def ComputeFirstStageCost_rule(model):
    return summation(model.PlantingCostPerAcre, model.DevotedAcreage)
model.FirstStageCost = Expression(rule=ComputeFirstStageCost_rule)

def ComputeSecondStageCost_rule(model):
    expr = summation(model.PurchasePrice, model.QuantityPurchased)
    expr -= summation(model.SubQuotaSellingPrice, model.QuantitySubQuotaSold)
    expr -= summation(model.SuperQuotaSellingPrice, model.QuantitySuperQuotaSold)
    return expr
model.SecondStageCost = Expression(rule=ComputeSecondStageCost_rule)
#
# PySP Auto-generated Objective
#
# minimize: sum of StageCosts
#
# An active scenario objective equivalent to that generated by PySP is
# included here for informational purposes.
def total_cost_rule(model):
    return model.FirstStageCost + model.SecondStageCost
model.Total_Cost_Objective = Objective(rule=total_cost_rule, sense=minimize)
Example Data
The data introduced here are in the file AverageScenario.dat in the sub-directory examples/pysp/farmer/scenariodata
that is distributed with Pyomo. These data are given for illustration. The file ReferenceModel.dat is not required by
PySP.
# "mean" scenario
# can't purchase beets (no real need, as cattle don't eat them)
param PurchasePrice := WHEAT 238 CORN 210 SUGAR_BEETS 100000 ;
Any of these data could be modeled as uncertain, but we will consider only the possibility that the yield per acre could
be higher or lower than expected. Assume that there is a probability of 1/3 that the yields will be the average values
that were given (i.e., wheat 2.5; corn 3; and beets 20). Assume that there is a 1/3 probability that they will be lower
(2, 2.4, 16) and 1/3 probability they will be higher (3, 3.6, 24). We refer to each full set of data as a scenario and
collectively we call them a scenario tree. In this case the scenario tree is very simple: there is a root node and three leaf
nodes: one corresponding to each scenario. The acreage-to-plant decisions are root node decisions because they must
be made without knowing what the yield will be. The other variables are so-called second stage decisions, because
they will depend on which scenario is realized.
ScenarioStructure.dat
PySP requires that users describe the scenario tree using specific constructs in a file named ScenarioStructure.
dat; for the farmer problem, this file can be found in the pyomo sub-directory examples/pysp/farmer/
scenariodata that is distributed with Pyomo.
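A sketch of the assignments this file contains for the farmer problem is shown below; the exact spacing and
probability values in the distributed file may differ slightly.

set Stages := FirstStage SecondStage ;

set Nodes := RootNode
             BelowAverageNode
             AverageNode
             AboveAverageNode ;

param NodeStage := RootNode FirstStage
                   BelowAverageNode SecondStage
                   AverageNode SecondStage
                   AboveAverageNode SecondStage ;

set Children[RootNode] := BelowAverageNode
                          AverageNode
                          AboveAverageNode ;

param ConditionalProbability := RootNode 1.0
                                BelowAverageNode 0.33333333
                                AverageNode 0.33333334
                                AboveAverageNode 0.33333333 ;

set Scenarios := BelowAverageScenario
                 AverageScenario
                 AboveAverageScenario ;

param ScenarioLeafNode :=
    BelowAverageScenario BelowAverageNode
    AverageScenario AverageNode
    AboveAverageScenario AboveAverageNode ;

set StageVariables[FirstStage] := DevotedAcreage[*] ;
set StageVariables[SecondStage] := QuantitySubQuotaSold[*]
                                   QuantitySuperQuotaSold[*]
                                   QuantityPurchased[*] ;

param StageCost := FirstStage FirstStageCost
                   SecondStage SecondStageCost ;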
This data file is verbose and somewhat redundant, but in most applications it is generated by software rather than by
a person, so this is not an issue. Generally, the left-most part of each expression (e.g., "set Stages :=") is required
and uses reserved words (e.g., Stages), and the other names are supplied by the user (e.g., "FirstStage" could be any
name). Every assignment is terminated with a semi-colon. We will now consider the assignments in this file one at a
time.
The first assignment provides names for the stages; the words "set Stages" are required, as are the := symbols. Any
names can be used. In this example, we used “FirstStage” and “SecondStage” but we could have used “EtapPrimero”
and “ZweiteEtage” if we had wanted to. Whatever names are given here will continue to be used to refer to the stages
in the rest of the file. The order of the names is important. A simple way to think of it is that generally, the names
must be in time order (technically, they need to be in order of information discovery, but that is usually time-order).
Stages refers to decision stages, which may, or may not, correspond directly with time stages. In the farmer example,
decisions about how much to plant are made in the first stage and “decisions” (which are pretty obvious, but which
are decision variables nonetheless) about how much to sell at each price and how much needs to be bought are second
stage decisions because they are made after the yield is known.
Node names are constructed next. The words “set Nodes” are required, but any names may be assigned to the nodes.
In two stage stochastic problems there is a root node, which we chose to name “RootNode” and then there is a node
for each scenario.
Nodes are associated with time stages with an assignment beginning with the required words "param NodeStage." The
assignments must make use of previously defined node and stage names. Every node must be assigned a stage.
The structure of the scenario tree is defined using assignment of children to each node that has them. Since this is a
two stage problem, only the root node has children. The words “param Children” are required for every node that has
children and the name of the node is in square brackets before the colon-equals assignment symbols. A list of children
is assigned.
The probability for each node, conditional on observing the parent node, is given in an assignment that begins with the
required words “param ConditionalProbability.” The root node always has a conditional probability of 1, but it must
always be given anyway. In this example, the second stage nodes are equally likely.
Scenario names are given in an assignment that begins with the required words “set Scenarios” and provides a list
of the names of the scenarios. Any names may be given. In many applications they are given unimaginative names
generated by software such as “Scen1” and the like. In this example, there are three scenarios and the names reflect
the relative values of the yields.
Leaf nodes, which are nodes with no children, are associated with scenarios. This assignment must be one-to-one and
it is initiated with the words “param ScenarioLeafNode” followed by the colon-equals assignment characters.
param ScenarioLeafNode :=
BelowAverageScenario BelowAverageNode
AverageScenario AverageNode
AboveAverageScenario AboveAverageNode ;
Variables are associated with stages using an assignment that begins with the required words “set StageVariables” and
the name of a stage in square brackets followed by the colon-equals assignment characters. Variable names that have
been defined in the file ReferenceModel.py can be assigned to stages. Any variables that are not assigned are assumed
to be in the last stage. Variable indexes can be given explicitly and/or wildcards can be used. Note that the variable
names appear without the prefix “model.” In the farmer example, DevotedAcreage is the only first stage variable.
Note: Wildcards can be used, but fully general Python slicing is not supported.
For reporting purposes, it is useful to define auxiliary variables in ReferenceModel.py that will be assigned the
cost associated with each stage. These variables do not impact algorithms, but the values are output by some software
during execution as well as upon completion. The names of the variables are assigned to stages using the “param
StageCost” assignment. The stages are previously defined in ScenarioStructure.dat and the variables are
previously defined in ReferenceModel.py.
So far, we have given a model in the file named ReferenceModel.py, a set of deterministic data in the file named
ReferenceModel.dat, and a description of the stochastics in the file named ScenarioStructure.dat. All
that remains is to give the data for each scenario. There are two ways to do that in PySP: scenario-based and node-
based. The default is scenario-based so we will describe that first.
For scenario-based data, the full data for each scenario is given in a .dat file with the root name that is the name of
the scenario. So, for example, the file named AverageScenario.dat must contain all the data for the model
for the scenario named "AverageScenario." It turns out that this file can be created by simply copying the file
ReferenceModel.dat as shown above because it contains a full set of data for the “AverageScenario” scenario.
The files BelowAverageScenario.dat and AboveAverageScenario.dat will differ from this file and
from each other only in their last line, where the yield is specified. These three files are distributed with Pyomo and are
in the pyomo sub-directory examples/pysp/farmer/scenariodata along with ScenarioStructure.
dat and ReferenceModel.dat.
Scenario-based data wastes resources by specifying the same thing over and over again. In many cases, that does
not matter and it is convenient to have full scenario data files available (for one thing, the scenarios can easily be
run independently using the pyomo command). However, in many other settings, it is better to use a node-based
specification where the data that is unique to each node is specified in a .dat file with a root name that matches the
node name. In the farmer example, the file RootNode.dat will be the same as ReferenceModel.dat except
that it will lack the last line that specifies the yield. The files BelowAverageNode.dat, AverageNode.dat,
and AboveAverageNode.dat will contain only one line each to specify the yield. If node-based data is to be used,
then the ScenarioStructure.dat file must contain the following line:

param ScenarioBasedData := False ;

An entire set of files for node-based data for the farmer problem is distributed with Pyomo in the sub-directory
examples/pysp/farmer/nodedata.
runef
The runef command puts together the so-called extensive form version of the model. It creates a large model that
has constraints to ensure that variables at a node have the same value. For example, in the farmer problem, all of
the DevotedAcreage variables must have the same value regardless of which scenario is ultimately realized. The
objective can be the expected value of the objective function, or the CVaR, or a weighted combination of the two.
Expected value is the default. A full set of options for runef can be obtained using the command:
runef --help
The pyomo distribution contains the files needed to run the farmer example in sub-directories of the directory
examples/pysp/farmer, so if this is the current directory and if CPLEX is installed, the following command will
cause formation of the EF and its solution using CPLEX.
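For example, a command along these lines would do so (a sketch; the exact invocation in the distributed
documentation may differ):

$ runef -m models -i scenariodata --solver=cplex --solve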
The option -m models has one dash and is short-hand for the option --model-directory=models and note
that the full option uses two dashes. The -i is equivalent to --instance-directory= in the same fashion. The
default solver is CPLEX, so the solver option is not really needed. Without the --solve option, runef would simply
write an .lp data file that could be passed to a solver.
runph
The runph command executes an implementation of Progressive Hedging (PH) that is intended to support scripting
and extension.
The pyomo distribution contains the files needed to run the farmer example in sub-directories of the directory
examples/pysp/farmer, so if this is the current directory and if CPLEX is installed, the following command will
cause PH to execute using the default sub-problem solver, which is CPLEX.
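A sketch of such a command (the exact invocation may differ):

$ runph -m models -i scenariodata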
The option -m models has one dash and is short-hand for the option --model-directory=models and note
that the full option uses two dashes. The -i is equivalent to --instance-directory= in the same fashion.
After about 33 iterations, the algorithm will achieve the default level of convergence and terminate. A lot of output is
generated and among the output is the following solution information:
Variable=DevotedAcreage
    Index: [CORN]         (Scenarios: BelowAverageScenario  AverageScenario  AboveAverageScenario)
        Values:   79.9844    80.0000    79.9768    Max-Min= 0.0232    Avg=  79.9871
    Index: [SUGAR_BEETS]  (Scenarios: BelowAverageScenario  AverageScenario  AboveAverageScenario)
        Values:  249.9848   249.9770   250.0000    Max-Min= 0.0230    Avg= 249.9873
    Index: [WHEAT]        (Scenarios: BelowAverageScenario  AverageScenario  AboveAverageScenario)
        Values:  170.0308   170.0230   170.0232    Max-Min= 0.0078    Avg= 170.0256
Cost Variable=FirstStageCost
    Tree Node=RootNode    (Scenarios: BelowAverageScenario  AverageScenario  AboveAverageScenario)
        Values: 108897.0836  108897.4725  108898.1476    Max-Min= 1.0640    Avg= 108897.5679
For problems with no, or few, integer variables, the default level of convergence leaves root-node variables almost
converged. Since the acreage to be planted cannot depend on the scenario that will be realized in the future, the
average, which is labeled “Avg” in this output, would be used. A farmer would probably interpret acreages of 79.9871,
249.9873, and 170.0256 to be 80, 250, and 170. In real-world applications, PH is embedded in scripts that produce
output in a format desired by a decision maker.
But in real-world applications, the default settings for PH seldom work well enough. In addition to post-processing
the output, a number of parameters need to be adjusted and sometimes scripting to extend or augment the algorithm is
needed to improve convergence rates. A full set of options can be obtained with the command:
runph --help
Use of the linearize-nonbinary-penalty-terms option requires that all variables not in the final stage
have bounds.
Final Solution
At each iteration, PH computes an average for each variable over the nodes of the scenario tree. We refer to this as
X-bar. For many problems, particularly those with integer restrictions, X-bar might not be feasible for every scenario
unless PH happens to be fully converged (in the primal variables). Consequently, the software computes a solution,
which we call X-hat, that is more likely to be feasible for every scenario and will be equivalent to X-bar under full
convergence. This solution is reported upon completion of PH, and its expected value is reported if it is feasible for
all scenarios.
Methods for computing X-hat are controlled by the --xhat-method command-line option. For example
--xhat-method=closest-scenario
causes X-hat to be set to the scenario that is closest to X-bar (in a z-score sense). Other options, such as voting and
rounding, assign values of X-bar to X-hat except for binary and general integer variables, where the values are set
by probability weighted voting by scenarios and rounding from X-bar, respectively.
To get the full solution, including leaf node solution values, use the runph
--output-scenario-tree-solution option.
In both runph and runef the solution can be written in csv format using the --solution-writer=pyomo.
pysp.plugins.csvsolutionwriter option.
PySP scripts such as runef and runph require files that specify the model and data using files with specific names.
All files can be in the current directory, but typically, the file ReferenceModel.py is in a directory that is specified
using the --model-directory= option (the short version of this option is -m) and the data files are in a directory
specified in the --instance-directory= option (the short version of this option is -i).
Note: A file name other than ReferenceModel.py can be used if the file name is given in addition to the
directory name as an argument to the --model-directory option. For example, on a Windows machine
--model-directory=models\MyModel.py would specify the file MyModel.py in the local directory
models.
• ReferenceModel.py: A full Pyomo model for a single scenario. There should be no scenario indexes in this
model because they are implicit.
• ScenarioStructure.dat: Specifies the nature of the stochastics. It also specifies whether the rest of the
data is node-based or scenario-based. It is scenario-based unless ScenarioStructure.dat contains the
line

param ScenarioBasedData := False ;
If scenario-based, then there is a data file for each scenario that specifies a full set of data for the scenario. The
name of the file is the name of the scenario with .dat appended. The names of the scenarios are given in the
ScenarioStructure.dat file.
If node-based, then there is a file with data for each node that specifies only that data that is unique for the node.
The name of the file is the name of the node with .dat appended. The names of the nodes are given in the
ScenarioStructure.dat file.
The Python package called Pyro provides capabilities that are used to enable PH to make use of multiple solver
processes for sub-problems and to allow both runef and runph to make use of remote solvers. We will focus on PH in
our discussion here.
There are two solver management systems available for runph, one is based on a pyro_mip_server and the other
is based on a phsolverserver. Regardless of which is used, a name server and a dispatch server must be running
and accessible to the runph process. The name server is launched using the command pyomo_ns and then the
dispatch server is launched with dispatch_srvr. Note that both commands contain an underscore. Both programs
keep running until terminated by an external signal, so it is common to pipe their output to a file.
Solvers are controlled by solver servers. The pyro mip solver server is launched with the command
pyro_mip_server. This command may be repeated to launch as many solvers as are desired. The runph command then
needs a --solver-manager=pyro option to signal that runph should not launch its own solver, but should send
subproblems to be dispatched to parallel solvers. To summarize the commands:
• Once: pyomo_ns
• Once: dispatch_srvr
• Multiple times: pyro_mip_server
• Once: runph ... --solver-manager=pyro ...
Note: The runph option --shutdown-pyro will cause a shutdown signal to be sent to pyomo_ns,
dispatch_srvr and all pyro_mip_server programs upon termination of runph.
Instead of using pyro_mip_server, one can use phsolverserver in its place. You can get a list of argu-
ments using phsolverserver --help, which does not launch a solver server (it just displays help and termi-
nates). If you use phsolverserver, then use --solver-manager=phpyro as an argument to runph rather than
--solver-manager=pyro.
Warning: Unlike the normal pyro_mip_server, there must be one phsolverserver for each sub-
problem. One can use fewer phsolverservers than there are scenarios by adding the command-line option
--phpyro-required-workers=X. This will partition the jobs among the available workers.
This document explains how to convert a PySP model into a set of files representing the SMPS format for stochastic
linear programs. Conversion can be performed through the command line by invoking the SMPS converter using the
command python -m pyomo.pysp.convert.smps. This command is available starting with Pyomo version
5.1. Prior to version 5.1, the same functionality was available via the command pysp2smps (starting at Pyomo
version 4.2).
SMPS is a standard for expressing stochastic mathematical programs that is based on the ancient MPS format for linear
programs, which is matrix-based. Modern algebraic modeling languages such as Pyomo offer a lot of flexibility so it
is a challenge to take models expressed in Pyomo/PySP and force them into SMPS format. The conversions can be
inefficient and error prone because Pyomo allows flexible expressions and model construction so the resulting matrix
may not be the same for each set of input data. We provide tools for conversion to SMPS because some researchers
have tools that read SMPS and exploit its limitations on problem structure; however, the user should be aware that the
conversion is not always possible.
Currently, these routines only support two-stage stochastic programs. Support for models with more than two time
stages will be considered in the future as this tool matures.
To enable proper conversion of a PySP model to a set of SMPS files, the following additional requirements must be
met:
1. The reference Pyomo model must include annotations that identify stochastic data locations in the second-stage
problem.
2. All model variables must be assigned to a time stage in the scenario tree, either as primary variables (via a
StageVariables assignment) or as auxiliary variables (via a StageDerivedVariables assignment).
3. The row and column dimensions, as well as the sparsity structure of the objective and constraints, must not
change across scenarios.
Note: Variables can be declared as primary by assigning them to a stage using the StageVariables assignment,
or declared as auxiliary variables, which are assigned to a stage using StageDerivedVariables assignment. For
algorithms such as PH, the distinction is meaningful and those variables that are fully determined by primary variables
and the data should generally be assigned to StageDerivedVariables for their stage.
The third point may also come as a surprise, but the ability to handle a non-uniform problem structure in most PySP
tools follows directly from the fact that the non-anticipativity conditions are all that is required in many cases. However,
the conversion to SMPS format is based on a matrix representation of the problem where the stochastic coefficients
are provided as a set of sparse matrix coordinates. This subsequently requires that the row and column dimensions, as
well as the sparsity structure of the problem, do not change across scenarios.
Annotations are necessary for alerting the SMPS conversion routines of the locations of data that needs to be updated
when changing from one scenario to another. Knowing these sparse locations allows decomposition algorithms to
employ efficient methods for solving a stochastic program. In order to use the SMPS conversion tool, at least one of
the following annotations must be declared on the reference Pyomo model:
• PySP_StochasticRHSAnnotation: indicates the existence of stochastic constraint right-hand-sides (or bounds)
in second-stage constraints
• PySP_StochasticMatrixAnnotation: indicates the existence of stochastic variable coefficients in second-stage
constraints
• PySP_StochasticObjectiveAnnotation: indicates the existence of stochastic cost coefficients in the second-stage
cost function
These will be discussed in further detail in the remaining sections. The following code snippet demonstrates how to
import these annotations and declare them on a model.
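A sketch of such a snippet, assuming the class names exported by pyomo.pysp.annotations correspond to the
annotation types listed above and that model has already been constructed:

from pyomo.pysp.annotations import (StochasticConstraintBoundsAnnotation,
                                    StochasticConstraintBodyAnnotation,
                                    StochasticObjectiveAnnotation)

# PySP_StochasticRHSAnnotation
model.stoch_rhs = StochasticConstraintBoundsAnnotation()
# PySP_StochasticMatrixAnnotation
model.stoch_matrix = StochasticConstraintBodyAnnotation()
# PySP_StochasticObjectiveAnnotation
model.stoch_objective = StochasticObjectiveAnnotation()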
Populating these annotations with entries is optional, and simply declaring them on the reference Pyomo model will
alert the SMPS conversion routines that all coefficients appearing on the second-stage model should be assumed
stochastic. That is, adding the lines in the previous code snippet alone implies that: (i) all second-stage constraints
have stochastic bounds, (ii) all first- and second-stage variables appearing in second-stage constraints have stochas-
tic coefficients, and (iii) all first- and second-stage variables appearing in the objective have stochastic coefficients.
PySP can attempt to determine the stage of a constraint by examining the set of variables that appear in the
constraint expression. E.g., a first-stage constraint is characterized as having only first-stage variables appearing in
its expression. A second-stage constraint has at least one second-stage variable appearing in its expression. The
stage of a variable is declared in the scenario tree provided to PySP. This method of constraint stage classification
is not perfect. That is, one can very easily define a model with a constraint that uses only first-stage variables in an
expression involving stochastic data. This constraint would be incorrectly identified as first-stage by the method above,
even though the existence of stochastic data necessarily implies it is second-stage. To deal with cases such as this,
an additional annotation is made available that is named PySP_ConstraintStageAnnotation. This annotation will be
discussed further in a later section.
It is often the case that relatively few coefficients on a stochastic program change across scenarios. In these situations,
adding explicit declarations within these annotations will allow for a more sparse representation of the problem and,
consequently, more efficient solution by particular decomposition methods. Adding declarations to these annotations
is performed by calling the declare method, passing some component as the initial argument. Any remaining
argument requirements for this method are specific to each annotation. Valid types for the component argument
typically include:
• Constraint: includes single constraint objects as well as constraint containers
• Objective: includes single objective objects as well as objective containers
• Block: includes Pyomo models as well as single block objects and block containers
Any remaining details for adding declarations to the annotations mentioned thus far will be discussed in later sections.
The remainder of this section discusses the semantics of these declarations based on the type for the component
argument.
When the declare method is called with a component such as an indexed Constraint or a Block (model), the
SMPS conversion routines will interpret this as meaning all constraints found within that indexed Constraint or on
that Block (that have not been deactivated) should be considered. As an example, we consider the following partially
declared concrete Pyomo model:
model = ConcreteModel()
# second-stage variables (their stage is declared on the PySP scenario tree)
model.y = Var()
model.z = Var()
# indexed constraint
model.r_index = Set(initialize=[3, 6, 9])
def r_rule(model, i):
    return p + i <= 1 * model.z + model.y * 5 <= 10 + q + i
model.r = Constraint(model.r_index, rule=r_rule)
# singleton constraint
model.c = Constraint(expr=p * model.z >= 1)
# sub-block with its own singleton constraint
model.b = Block()
model.b.c = Constraint(expr=model.y >= q)
Here the local Python variables p and q serve as placeholders for data that changes with each scenario.
The following are equivalent annotations of the model, each declaring all of the constraints shown above as having
stochastic right-hand-side data:
• Implicit form (an empty annotation)
model.stoch_rhs = StochasticConstraintBoundsAnnotation()
• Implicit form, with the top-level model explicitly declared
model.stoch_rhs = StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model)
• Explicit form for singleton constraint with implicit form for indexed constraint and sub-block
model.stoch_rhs = StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model.r)
model.stoch_rhs.declare(model.c)
model.stoch_rhs.declare(model.b)
• Explicit form for singleton constraints at the model and sub-block level with implicit form for indexed constraint
model.stoch_rhs = StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model.r)
model.stoch_rhs.declare(model.c)
model.stoch_rhs.declare(model.b.c)
• Fully explicit form for singleton constraints as well as all indices of indexed constraint
model.stoch_rhs = StochasticConstraintBoundsAnnotation()
model.stoch_rhs.declare(model.r[3])
model.stoch_rhs.declare(model.r[6])
model.stoch_rhs.declare(model.r[9])
model.stoch_rhs.declare(model.c)
model.stoch_rhs.declare(model.b.c)
Note that the equivalence of the first three bullet forms to the last two bullet forms relies on the following conditions
being met: (1) model.z and model.y are declared on the second stage of the PySP scenario tree and (2) at least one
of these second-stage variables appears in each of the constraint expressions above. Together, these two conditions
cause each of the constraints above to be categorized as second-stage; thus, causing them to be considered by the
SMPS conversion routines in the implicit declarations used by the first three bullet forms.
Warning: Pyomo simplifies product expressions such that terms with 0 coefficients are removed from the final
expression. This can sometimes create issues with determining the correct stage classification of a constraint as
well as result in different sparsity patterns across scenarios. This issue is discussed further in the later section
entitled Edge Cases.
When it comes to catching errors in model annotations, there is a minor difference between the first bullet form from
above (empty annotation) and the others. In the empty case, PySP will use exactly the set of second-stage constraints
it is aware of. This set will either be determined through inspection of the constraint expressions or through the user-
provided constraint-stage classifications declared using the PySP_ConstraintStageAnnotation annotation type. In
the case where the stochastic annotation is not empty, PySP will verify that all constraints declared within it belong to
the set of second-stage constraints it is aware of. If this verification fails, an error will be reported. This behavior is
meant to aid users in debugging problems associated with non-uniform sparsity structure across scenarios that are, for
example, caused by 0 coefficients in product expressions.
Annotations on AbstractModel Objects
Pyomo models defined using the AbstractModel object require the modeler to take further steps when making
these annotations. In the AbstractModel setting, these assignments must take place within a BuildAction,
which is executed only after the model has been constructed with data. As an example, the last bullet form from
the previous section could be written in the following way to allow execution with either an AbstractModel or a
ConcreteModel:
def annotate_rule(m):
    m.stoch_rhs = StochasticConstraintBoundsAnnotation()
    m.stoch_rhs.declare(m.r[3])
    m.stoch_rhs.declare(m.r[6])
    m.stoch_rhs.declare(m.r[9])
    m.stoch_rhs.declare(m.c)
    m.stoch_rhs.declare(m.b.c)
model.annotate = BuildAction(rule=annotate_rule)
Note that the use of m rather than model in the annotate_rule function is meant to draw attention to the fact that
the model object being passed into the function as the first argument may not be the same object as the model outside
of the function. This is in fact the case in the AbstractModel setting, whereas for the ConcreteModel setting
they are the same object. We often use model in both places to avoid errors caused by forgetting to use the correct
object inside the function (Python scoping rules handle the rest). Also note that a BuildAction must be declared
on the model after the declaration of any components being accessed inside its rule function.
Stochastic Constraint Bounds (RHS)
If stochastic elements appear on the right-hand-side of constraints (or as constants in the body of constraint expres-
sions), these locations should be declared using the PySP_StochasticRHSAnnotation annotation type. When com-
ponents are declared with this annotation, there are no additional required arguments for the declare method. How-
ever, to allow for more flexibility when dealing with double-sided inequality constraints, the declare method can
be called with at most one of the keywords lb or ub set to False to signify that one of the bounds is not stochastic.
The following code snippet shows example declarations with this annotation for various constraint types.
from pyomo.pysp.annotations import StochasticConstraintBoundsAnnotation
model = ConcreteModel()
# a second-stage variable
model.y = Var()
# declare the annotation
model.stoch_rhs = StochasticConstraintBoundsAnnotation()
# equality constraint (q is a placeholder for the stochastic right-hand-side)
model.c = Constraint(expr=model.y == q)
model.stoch_rhs.declare(model.c)
# double-sided inequality constraint where only the upper bound is stochastic
model.r = Constraint(expr=0.0 <= model.y <= q)
model.stoch_rhs.declare(model.r, lb=False)
Note that simply declaring the PySP_StochasticRHSAnnotation annotation type and leaving it empty will alert the
SMPS conversion routines that all constraints identified as second-stage should be treated as having stochastic right-
hand-side data. Calling the declare method on at least one component implies that the set of constraints considered
should be limited to what is declared within the annotation.
Stochastic Constraint Matrix
If coefficients of variables change in the second-stage constraint matrix, these locations should be declared using the
PySP_StochasticMatrixAnnotation annotation type. When components are declared with this annotation, there are
no additional required arguments for the declare method. Calling the declare method with the single compo-
nent argument signifies that all variables encountered in the constraint expression (including first- and second-stage
variables) should be treated as having stochastic coefficients. This can be limited to a specific subset of variables by
calling the declare method with the variables keyword set to an explicit list of variable objects. The following
code snippet shows example declarations with this annotation for various constraint types.
model = ConcreteModel()
# a first-stage variable
model.x = Var()
# a second-stage variable
model.y = Var()
# StochasticConstraintBodyAnnotation (pyomo.pysp.annotations) corresponds to this annotation type
model.stoch_matrix = StochasticConstraintBodyAnnotation()
# p and q are placeholders for scenario-dependent coefficients of x and y
model.c = Constraint(expr=p * model.x + q * model.y >= 1)
model.stoch_matrix.declare(model.c, variables=[model.x, model.y])
As is the case with the PySP_StochasticRHSAnnotation annotation type, simply declaring the
PySP_StochasticMatrixAnnotation annotation type and leaving it empty will alert the SMPS conversion rou-
tines that all constraints identified as second-stage should be considered, and, additionally, that all variables
encountered in these constraints should be considered to have stochastic coefficients. Calling the declare method
on at least one component implies that the set of constraints considered should be limited to what is declared within
the annotation.
Stochastic Objective Elements
If the cost coefficients of any variables are stochastic in the second-stage cost expression, this should be noted using the
PySP_StochasticObjectiveAnnotation annotation type. This annotation uses the same semantics for the declare
method as the PySP_StochasticMatrixAnnotation annotation type, but with one additional consideration regarding
any constants in the objective expression. Constants in the objective are treated as stochastic and automatically handled
by the SMPS code. If the objective expression does not contain any constant terms or these constant terms do not
change across scenarios, this behavior can be disabled by setting the keyword include_constant to False in a
call to the declare method.
model = ConcreteModel()
# a first-stage variable
model.x = Var()
# a second-stage variable
model.y = Var()
# StochasticObjectiveAnnotation (pyomo.pysp.annotations) corresponds to this annotation type
model.stoch_objective = StochasticObjectiveAnnotation()
# q1 and q2 are placeholders for scenario-dependent cost coefficients
model.o = Objective(expr=1.0 + q1 * model.x + q2 * model.y)
model.stoch_objective.declare(model.o)
Similar to the previous annotation type, simply declaring the PySP_StochasticObjectiveAnnotation annotation type
and leaving it empty will alert the SMPS conversion routines that all variables appearing in the single active model
objective expression should be considered to have stochastic coefficients.
Annotating Constraint Stages
Annotating the model with constraint stages is sometimes necessary to identify to the SMPS routines that certain
constraints belong in the second time-stage even though they lack references to any second-stage variables. Annotation
of constraint stages is achieved using the PySP_ConstraintStageAnnotation annotation type. If this annotation is
added to the model, it is assumed that it will be fully populated with explicit stage assignments for every constraint
in the model. The declare method should be called giving a Constraint or Block as the first argument and a
positive integer as the second argument (1 signifies the first time stage). Example:
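A sketch of such declarations, assuming hypothetical constraints model.c1 and model.c2 and that the
ConstraintStageAnnotation class in pyomo.pysp.annotations corresponds to this annotation type:

from pyomo.pysp.annotations import ConstraintStageAnnotation

model.constraint_stage = ConstraintStageAnnotation()
# model.c1 and model.c2 are hypothetical constraints declared elsewhere
model.constraint_stage.declare(model.c1, 1)  # first time stage
model.constraint_stage.declare(model.c2, 2)  # second time stage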
Edge Cases
This section discusses various points that may give users some trouble, and it attempts to provide more details about
the common pitfalls associated with translating a PySP model to SMPS format.
• Moving a Stochastic Objective to the Constraint Matrix
It is often the case that decomposition algorithms theoretically support stochastic cost coefficients but the software
implementation has not yet added support for them. This situation is easy to work around in PySP. One can simply
augment the model with an additional constraint and variable that computes the objective, and then use this variable
in the objective rather than directly using the second-stage cost expression. Consider the following reference Pyomo
model that has stochastic cost coefficients for both a first-stage and a second-stage variable in the second-stage cost
expression:
model = ConcreteModel()
# first-stage variable
model.x = Var()
# second-stage variable
model.y = Var()
# q1 and q2 are placeholders for stochastic cost coefficients
model.FirstStageCost = Expression(expr=5.0 * model.x)
model.SecondStageCost = Expression(expr=q1 * model.x + q2 * model.y)
model.TotalCostObjective = Objective(expr=model.FirstStageCost + model.SecondStageCost)
model.stoch_objective = StochasticObjectiveAnnotation()
The code snippet below re-expresses this model using an objective consisting of the original first-stage cost expres-
sion plus a second-stage variable SecondStageCostVar that represents the second-stage cost. This is enforced
by restricting the variable to be equal to the second-stage cost expression using an additional equality constraint
model = ConcreteModel()
# first-stage variable
model.x = Var()
# second-stage variables
model.y = Var()
model.SecondStageCostVar = Var()
model.FirstStageCost = Expression(expr=5.0 * model.x)
# equate the new variable to the original second-stage cost expression
model.ComputeSecondStageCost = Constraint(
    expr=model.SecondStageCostVar == q1 * model.x + q2 * model.y)
model.TotalCostObjective = Objective(
    expr=model.FirstStageCost + model.SecondStageCostVar)
model = AbstractModel()
# a first-stage variable
model.x = Var()
# a second-stage variable
model.y = Var()
# a fixed parameter that appears among the variable terms on the left-hand-side
model.p = Param()
def c_rule(m):
    return m.p * m.x + m.y >= q  # q is a stochastic placeholder
model.c = Constraint(rule=c_rule)
Note that in the expression for constraint c, there is a fixed parameter p involved in the variable expression on the
left-hand-side of the inequality. When an expression is written this way, it can be easy to forget that the value of this
parameter will be pushed to the bound of the constraint when it is converted into linear canonical form. Remember to
declare these constraints within the PySP_StochasticRHSAnnotation annotation type.
A constant term appearing in the objective expression presents a similar issue. Whether or not this term is stochastic,
it must be dealt with when certain outputs expect the problem to be expressed as a linear program. The SMPS code in
PySP will deal with this situation for you by implicitly adding a new second-stage variable to the problem in the final
output file that uses the constant term as its coefficient in the objective and that is fixed to a value of 1.0 using a trivial
equality constraint. The default behavior when declaring the PySP_StochasticObjectiveAnnotation annotation type
will be to assume this constant term in the objective is stochastic. This helps ensure that the relative scenario costs
reported by algorithms using the SMPS files will match that of the PySP model for a given solution. When moving
a stochastic objective into the constraint matrix using the method discussed in the previous subsection, it is important
to be aware of this behavior. A stochastic constant term in the objective would necessarily translate into a stochastic
constraint right-hand-side when moved to the constraint matrix.
• Stochastic Variable Bounds
Although not directly supported, stochastic variable bounds can be expressed using explicit constraints along with the
PySP_StochasticRHSAnnotation annotation type to achieve the same effect.
• Problems Caused by Zero Coefficients
Expressions that involve products with some terms having 0 coefficients can be problematic when the zeros can become
nonzero in certain scenarios. This can cause the sparsity structure of the LP to change across scenarios because Pyomo
simplifies these expressions when they are created such that terms with a 0 coefficient are dropped. This can result in
an invalid SMPS conversion. Of course, this issue is not limited to explicit product expressions, but can arise when
the user implicitly assigns a variable a zero coefficient by outright excluding it from an expression. For example, both
constraints in the following code snippet suffer from this same underlying issue, which is that the variable model.y
will be excluded from the constraint expressions in a subset of scenarios (depending on the value of q) either directly
due to a 0 coefficient in a product expression or indirectly due to user-defined logic that is based on the values of
stochastic data.
model = ConcreteModel()
model.x = Var()
model.y = Var()
# p and q are placeholders for stochastic data; q may be 0 in some scenarios
# c1: model.y is dropped from the expression whenever q is 0
def c1_rule(model):
    return p * model.x + q * model.y <= 1
model.c1 = Constraint(rule=c1_rule)
# c2: model.y is excluded by user-defined logic based on the stochastic data
def c2_rule(model):
    expr = p * model.x
    if q != 0:
        expr += model.y
    return expr <= 1
model.c2 = Constraint(rule=c2_rule)
The SMPS conversion routines will attempt some limited checking to help prevent this kind of situation from silently
turning the SMPS representation to garbage, but it must ultimately be up to the user to ensure this is not an issue.
This is in fact the most challenging aspect of converting PySP’s AML-based problem representation to the structure-
preserving LP representation used in the SMPS format.
One way to deal with the 0 coefficient issue, which works for both cases discussed in the example above, is to create
a zero Expression object. E.g.,
model.zero = Expression(expr=0)
This component can be used to add variables to a linear expression so that the resulting expression retains a reference
to them. This behavior can be verified by examining the output from the following example:
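The listing below is a minimal sketch of such a check (the placeholder data values and the exact printed
expressions are illustrative):

from pyomo.environ import ConcreteModel, Var, Expression, Constraint

model = ConcreteModel()
model.x = Var()
model.y = Var()
model.zero = Expression(expr=0)

p = 1.0
q = 0.0  # stochastic data that happens to be 0 in this scenario

# without the zero expression, the 0-coefficient term is simplified away
model.c1 = Constraint(expr=p * model.x + q * model.y >= 1)
# adding model.zero * model.y keeps a reference to model.y in the expression
model.c2 = Constraint(expr=p * model.x + q * model.y + model.zero * model.y >= 1)

model.c1.pprint()  # model.y should not appear in the printed constraint body
model.c2.pprint()  # model.y is retained through the zero expression term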
This section explains how the SMPS conversion utilities available in PySP can be invoked from the command line.
Starting with Pyomo version 5.1, the SMPS writer can be invoked using the command python -m pyomo.pysp.
convert.smps. Prior to version 5.1, this functionality was available via the pysp2smps command (starting at
Pyomo version 4.2). Use the --help option with the main command to see a detailed description of the command-
line options available:
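For example, the following sketch first displays the help text and then converts a model named baa99 (the
options shown are reconstructed from the output paths listed below and may differ from the distributed example):

$ python -m pyomo.pysp.convert.smps --help
$ python -m pyomo.pysp.convert.smps -m ReferenceModel.py --basename baa99 \
      --output-directory sdinput/baa99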
Assuming successful execution, this would result in the following files being created:
• sdinput/baa99/baa99.cor
• sdinput/baa99/baa99.tim
• sdinput/baa99/baa99.sto
• sdinput/baa99/baa99.cor.symbols
The first file is the core problem file written in MPS format. The second file indicates at which row and column the
first and second time stages begin. The third file contains the location and values of stochastic data in the problem
for each scenario. This file is generated by merging the individual output for each scenario in the scenario tree into
separate BLOCK sections. The last file contains a mapping for non-anticipative variables from the symbols used in
the above files to a unique string that can be used to recover the variable on any Pyomo model. It is mainly used by
PySP’s solver interfaces to load a solver solution.
To ensure that the problem structure is the same and that all locations of stochastic data have been annotated prop-
erly, the script creates additional auxiliary files that are compared across scenarios. The command-line option
--keep-auxiliary-files can be used to retain the auxiliary files that were generated for the template sce-
nario used to write the core file. When this option is used with the above example, the following additional files will
appear in the output directory:
• sdinput/baa99/baa99.mps.det
• sdinput/baa99/baa99.sto.struct
• sdinput/baa99/baa99.row
• sdinput/baa99/baa99.col
The .mps.det file is simply the core file for the reference scenario with the values for all stochastic coefficients
set to zero. If this does not match for every scenario, then there are places in the model that still need to be declared
on one or more of the stochastic data annotations. The .row and the .col files indicate the ordering of constraints
and variables, respectively, that was used to write the core file. The .sto.struct file lists the nonzero locations
of the stochastic data in terms of their row and column location in the core file. These files are created for each
scenario instance in the scenario tree and placed inside of a subdirectory named scenario_files within the output
directory. These files will be removed unless validation fails or the --keep-scenario-files option is
used.
The SMPS writer also supports parallel execution. This can significantly reduce the overall time required to produce
the SMPS files when there are many scenarios. Parallel execution using PySP’s Pyro-based tools can be performed
using the steps below. Note that each of these commands can be launched in the background inside the same shell or
in their own separate shells.
1. Start the Pyro name server:
$ pyomo_ns -n localhost
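A sketch of the remaining steps (the scenariotreeserver command and the --scenario-tree-manager and
--pyro-host options shown here are assumptions about PySP's Pyro tools and may differ from the distributed
documentation):

2. Start the dispatch server:

$ dispatch_srvr -n localhost

3. Start one or more scenario tree server workers:

$ scenariotreeserver --pyro-host=localhost

4. Invoke the SMPS writer with the Pyro-based scenario tree manager:

$ python -m pyomo.pysp.convert.smps [model and output options as above] \
      --scenario-tree-manager=pyro --pyro-host=localhost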
An annotated version of the farmer example is also provided. The model file can be found in the pysp/farmer/
smps_model examples subdirectory. Note that the scenario tree for this model is defined in a separate file. When
invoking the SMPS writer, a scenario tree structure file can be provided via the --scenario-tree-location
(-s) command-line option. For example, assuming one is in the pysp/farmer subdirectory, the farmer model can
be converted to SMPS files using the command:
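A sketch of such a command (the basename and output directory shown are illustrative assumptions):

$ python -m pyomo.pysp.convert.smps -m smps_model/ReferenceModel.py \
      -s scenariodata/ScenarioStructure.dat --basename farmer \
      --output-directory farmer_smps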
Note that, by default, the files created by the SMPS writer use shortened symbols that do not match the names of
the variables and constraints declared on the Pyomo model. This is for efficiency reasons, as using fully quali-
fied component names can result in significantly larger files. However, it can be useful in debugging situations
to generate the SMPS files using the original component names. To do this, simply add the command-line option
--symbolic-solver-labels to the command string.
The SMPS writer supports other formats for the core problem file (e.g., the LP format). The command-line option
--core-format can be used to control this setting. Refer to the command-line help string for more information
about the list of available formats.
PySP provides support for creating DDSIP inputs, and some support for reading DDSIP solutions back into PySP is
under development. Use of these utilities requires additional model annotations that declare the location of stochastic
data coefficients. See the section on converting PySP models to SMPS for more information.
To access the DDSIP writer via the command line, use python -m pyomo.pysp.convert.ddsip. To access
the full solver interface to DDSIP, which writes the input files, invokes the DDSIP solver, and reads the solution, use
python -m pyomo.pysp.solvers.ddsip. For example, to get a list of command arguments, use:

$ python -m pyomo.pysp.solvers.ddsip --help
Note: Not all of the command arguments are relevant for DDSIP.
For researchers that simply want to write out the files needed by DDSIP, the --output-directory option can
be used with the DDSIP writer to specify the directory where files should be created. The DDSIP solver interface
creates these files in a temporary directory. To have the DDSIP solver interface retain these files after it exits, use the
--keep-solver-files command-line option. The following example invokes the DDSIP solver on the network-
flow example that ships with PySP. In order to test it, one must first cd into the networkflow example directory and
then execute the command:
$ python -m pyomo.pysp.solvers.ddsip \
-s 1ef10 -m smps_model --solver-options="NODELIM=1"
The --solver-options command line argument can be used set the values of any DDSIP options that are written
to the DDSIP configuration file; multiple options should be space-separated. See DDSIP documentation for a list of
options.
Here is the same example modified to simply create the DDSIP input files in an output directory named
ddsip_networkflow:
$ python -m pyomo.pysp.convert.ddsip \
-s 1ef10 -m smps_model --output-directory ddsip_networkflow \
--symbolic-solver-labels
The option --symbolic-solver-labels tells the DDSIP writer to produce the file names using symbols that
match names on the original Pyomo model. This can significantly increase file size, so it is not done by default. When
the DDSIP writer is invoked, a minimal DDSIP configuration file is created in the output directory that specifies the
required problem structure information. Any additional DDSIP options must be manually added to this file by the
user.
As with the SMPS writer, the DDSIP writer and solver interface support PySP’s Pyro-based parallel scenario tree
management system. See the section on the SMPS writer for a description of how to use this functionality.
See rapper: a PySP wrapper for information about putting Python scripts around PySP functionality.
The concrete interface to PySP requires a function that can return a concrete model for a given scenario. Optionally, a
function that returns a scenario tree can be provided; however, a ScenarioStructure.dat file is also an option.
This very terse introduction might help you get started using concrete models with PySP.
There is a lot of flexibility in how this function is implemented, but the path of least resistance is
>>> def pysp_instance_creation_callback(scenario_tree_model,
... scenario_name,
... node_names):
... pass
In many applications, only the scenario_name argument is used. Its purpose is almost always to determine what
data to use when populating the scenario instance. Note that in older examples, the scenario_tree_model
argument is not present.
An older example of this function can be seen in examples/pysp/farmer/concrete/ReferenceModel.
py
Note that this example does not have a function to return a scenario tree, so it can be solved from the examples/
pysp/farmer directory with a command like:
runef -m concrete/ReferenceModel.py -s scenariodata/ScenarioStructure.dat --solve
Note: If, for some reason, you want to use the concrete interface for PySP for an AbstractModel, the body of the
function might be something like:
>>> instance = model.create_instance(scenario_name+".dat") # doctest: +SKIP
>>> return instance # doctest: +SKIP
There are many options for a function to return a scenario tree. The path of least resistance is to name the function
pysp_scenario_tree_model_callback with no arguments. One example is shown in examples/pysp/
farmer/concreteNetX/ReferenceModel.py
It can be solved from the examples/pysp/farmer directory with a command like:
runef -m concreteNetX/ReferenceModel.py --solve
Pyomo Network is a package that allows users to easily represent their model as a connected network of units. Units
are blocks that contain ports, which contain variables, and ports are connected to other ports via arcs. The connection of
two ports to each other via an arc typically represents a set of constraints equating each member of one port to the
corresponding member of the other; however, other connection rules exist as well, in addition to support for custom rules. Pyomo Network
also includes a model transformation that will automatically expand the arcs and generate the appropriate constraints
to produce an algebraic model that a solver can handle. Furthermore, the package also introduces a generic sequential
decomposition tool that can leverage the modeling components to decompose a model and compute each unit in the
model in a logically ordered sequence.
Port
Note: If split fractions are skipped, the write_var_sum=False option is not allowed.
class pyomo.network.port._PortData(component=None)
This class defines the data for a single Port
vars
A dictionary mapping added names to variables
Type dict
__getattr__(name)
Returns self.vars[name] if it exists
add(var, name=None, rule=None, **kwds)
Add var to this Port, casting it to a Pyomo numeric if necessary
Parameters
• var – A variable or some NumericValue like an expression
• name (str) – Name to associate with this member of the Port
• rule (function) – Function implementing the desired expansion procedure for this mem-
ber. Port.Equality by default, other options include Port.Extensive. Customs are allowed.
• kwds – Keyword arguments that will be passed to rule
arcs(active=None)
A list of Arcs in which this Port is a member
dests(active=None)
A list of Arcs in which this Port is a source
fix()
Fix all variables in the port at their current values. For expressions, fix every variable in the expression.
free()
Unfix all variables in the port. For expressions, unfix every variable in the expression.
get_split_fraction(arc)
Returns a tuple (val, fix) for the split fraction of this arc that was set via set_split_fraction if it exists, and
otherwise None.
is_binary()
Return True if all variables in the Port are binary
is_continuous()
Return True if all variables in the Port are continuous
is_equality(name)
Return True if the rule for this port member is Port.Equality
is_extensive(name)
Return True if the rule for this port member is Port.Extensive
is_fixed()
Return True if all vars/expressions in the Port are fixed
is_integer()
Return True if all variables in the Port are integer
is_potentially_variable()
Return True as ports may (should!) contain variables
iter_vars(expr_vars=False, fixed=None, names=False)
Iterate through every member of the port, going through the indices of indexed members.
Parameters
• expr_vars (bool) – If True, call identify_variables on expression type members
• fixed (bool) – Only include variables/expressions with this type of fixed
• names (bool) – If True, yield (name, index, var/expr) tuples
polynomial_degree()
Returns the maximum polynomial degree of all port members
remove(name)
Remove this member from the port
rule_for(name)
Return the rule associated with the given port member
set_split_fraction(arc, val, fix=True)
Set the split fraction value to be used for an arc during arc expansion when using Port.Extensive.
sources(active=None)
A list of Arcs in which this Port is a destination
unfix()
Unfix all variables in the port. For expressions, unfix every variable in the expression.
The following code snippet shows examples of declaring and using a Port component on a concrete Pyomo model:
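The following is a minimal sketch of this usage (the variable names are illustrative):

from pyomo.environ import ConcreteModel, Var, NonNegativeReals
from pyomo.network import Port

m = ConcreteModel()
m.flow = Var(domain=NonNegativeReals)
m.temperature = Var()

m.outlet = Port()
# members added without a rule use the default Port.Equality expansion rule
m.outlet.add(m.temperature, name='temperature')
# extensive members are implicitly split/mixed across arcs during expansion
m.outlet.add(m.flow, name='flow', rule=Port.Extensive)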
Arc
__getattr__(name)
Returns self.expanded_block.name if it exists
set_value(vals)
Set the port attributes on this arc
The following code snippet shows examples of declaring and using an Arc component on a concrete Pyomo model:
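A minimal sketch of this usage, building on two single-member ports:

from pyomo.environ import ConcreteModel, Var
from pyomo.network import Port, Arc

m = ConcreteModel()
m.x = Var()
m.y = Var()

m.p1 = Port()
m.p1.add(m.x, name='v')
m.p2 = Port()
m.p2.add(m.y, name='v')

# a directed arc from p1 to p2; expansion will equate the matching 'v' members
m.a = Arc(source=m.p1, destination=m.p2)
# an undirected arc could instead be declared as Arc(ports=(m.p1, m.p2))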
The examples above show how to declare and instantiate a Port and an Arc. These two components form the basis of
the higher level representation of a connected network with sets of related variable quantities. Once a network model
has been constructed, Pyomo Network implements a transformation that will expand all (active) arcs on the model and
automatically generate the appropriate constraints. The constraints created for each port member will be indexed by
the same indexing set as the port member itself.
During transformation, a new block is created on the model for each arc (located on the arc’s parent block), which
serves to contain all of the auto generated constraints for that arc. At the end of the transformation, a reference is
created on the arc that points to this new block, available via the arc property arc.expanded_block.
The constraints produced by this transformation depend on the rule assigned for each port member and can be different
between members on the same port. For example, you can have two different members on a port where one member’s
rule is Port.Equality and the other member’s rule is Port.Extensive.
Port.Equality is the default rule for port members. This rule simply generates equality constraints on the ex-
panded block between the source port’s member and the destination port’s member. Another implemented expansion
method is Port.Extensive, which essentially represents implied splitting and mixing of certain variable quanti-
ties. Users can refer to the documentation of the static method itself for more details on how this implicit splitting
and mixing is implemented. Additionally, should users desire, the expansion API supports custom rules that can be
implemented to generate whatever is needed for special cases.
The following code demonstrates how to call the transformation to expand the arcs on a model:
>>> TransformationFactory("network.expand_arcs").apply_to(m)
Pyomo Network implements a generic SequentialDecomposition tool that can be used to compute each unit
in a network model in a logically ordered sequence.
The sequential decomposition procedure is commenced via the run method.
Creating a Graph
To begin this procedure, the Pyomo Network model is first utilized to create a networkx MultiDiGraph by adding
edges to the graph for every arc on the model, where the nodes of the graph are the parent blocks of the source and
destination ports. This is done via the create_graph method, which requires all arcs on the model to be both
directed and already expanded. The MultiDiGraph class of networkx supports both directed edges as well as having
multiple edges between the same two nodes, so users can feel free to connect as many ports as desired between the
same two units.
Computation Order
The order of computation is then determined by treating the resulting graph as a tree, starting at the roots of the
tree, and making sure by the time each node is reached, all of its predecessors have already been computed. This is
implemented through the calculation_order and tree_order methods. Before this, however, the procedure
will first select a set of tear edges, if necessary, such that every loop in the graph is torn, while minimizing both the
number of times any single loop is torn as well as the total number of tears.
Tear Selection
A set of tear edges can be selected in one of two ways. By default, a Pyomo MIP model is created and optimized
resulting in an optimal set of tear edges. The implementation of this MIP model is based on a set of binary “torn”
variables for every edge in the graph, and constraints on every loop in the graph that dictate that there must be at
least one tear on the loop. Then there are two objectives (represented by a doubly weighted objective). The primary
objective is to minimize the number of times any single loop is torn, and then secondary to that is to minimize the total
number of tears. This process is implemented in the select_tear_mip method, which uses the model returned
from the select_tear_mip_model method.
Alternatively, there is the select_tear_heuristic method. This uses a heuristic procedure that walks back and
forth on the graph to find every optimal tear set, and returns each equally optimal tear set it finds. This method is
much slower than the MIP method on larger models, but it remains useful because it returns every possible
optimal tear set.
A custom tear set can be assigned before calling the run method. This is useful so users can know what their tear set
will be and thus what arcs will require guesses for uninitialized values. See the set_tear_set method for details.
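For example, assuming m.stream3 is an Arc on the model (the name is illustrative):
>>> seq = SequentialDecomposition()
>>> seq.set_tear_set([m.stream3])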
After all of this computational order preparation, the sequential decomposition procedure will then run through the
graph in the order it has determined. Thus, the function that was passed to the run method will be called on ev-
ery unit in sequence. This function can perform any arbitrary operations the user desires. The only thing that
SequentialDecomposition expects from the function is that after returning from it, every variable on every
outgoing port of the unit will be specified (i.e. it will have a set current value). Furthermore, the procedure guarantees
to the user that for every unit, before the function is called, every variable on every incoming port of the unit will be
fixed.
In between computing each of these units, port member values are passed across existing arcs involving the unit
currently being computed. This means that after computing a unit, the expanded constraints from each arc coming out
of this unit will be satisfied, and the values on the respective destination ports will be fixed at these new values. While
running the computational order, values are not passed across tear edges, as tear edges represent locations in loops to
stop computations (during iterations). This process continues until all units in the network have been computed. This
concludes the “first pass run” of the network.
When passing values across arcs while running the computational order, values at the destinations of each of these
arcs will be fixed at the appropriate values. This is how the procedure guarantees that every inlet variable will be
fixed before the function is called. However, since values are not passed across torn arcs, there is a need
for user-supplied guesses for those values. See the set_guesses_for method for details on how to supply these
values.
In addition to passing dictionaries of guesses for certain ports, users can also assign current values to the variables
themselves and the procedure will pick these up and fix the variables in place. Alternatively, users can utilize the
default_guess option to specify a value to use as a default guess for all free variables if they have no guess or current
value. If a free variable has no guess or current value and there is no default guess option, then an error will be raised.
Similarly, if the procedure attempts to pass a value to a destination port member but that port member is already fixed
and its fixed value is different from what is trying to be passed to it (by a tolerance specified by the almost_equal_tol
option), then an error will be raised. Lastly, if there is more than one free variable in a constraint while trying to pass
values across an arc, an error will be raised asking the user to fix more variables by the time values are passed across
said arc.
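A small sketch of these two mechanisms, assuming a port m.unit2.inlet with an indexed member "flow" and a scalar member "temperature" (the names are illustrative):
>>> seq = SequentialDecomposition()
>>> seq.set_guesses_for(m.unit2.inlet, {"flow": {"a": 1.0, "b": 2.0},
...                                     "temperature": 300.0})
>>> seq.options["default_guess"] = 1.0  # fallback for any remaining free variables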
Tear Convergence
After completing the first pass run of the network, the sequential decomposition procedure will proceed to converge
all tear edges in the network (unless the user specifies not to, or if there are no tears). This process occurs separately
for every strongly connected component (SCC) in the graph, and the SCCs are computed in a logical order such that
each SCC is computed before other SCCs downstream of it (much like tree_order).
There are two implemented methods for converging tear edges: direct substitution and Wegstein acceleration. Both of
these will iteratively run the computation order until every value in every tear arc has converged to within the specified
tolerance. See the SequentialDecomposition parameter documentation for details on what can be controlled
about this procedure.
The following code demonstrates basic usage of the SequentialDecomposition class:
>>> from pyomo.environ import *
>>> from pyomo.network import *
>>> m = ConcreteModel()
>>> m.unit1 = Block()
>>> m.unit1.x = Var()
>>> m.unit1.y = Var(['a', 'b'])
>>> m.unit1.x.fix(10)
>>> m.unit1.y['a'].fix(15)
>>> m.unit1.y['b'].fix(20)
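One way to continue the example (a sketch with illustrative port, arc, and function names) is to connect a second unit, expand the arcs, and run the decomposition:
>>> m.unit2 = Block()
>>> m.unit2.x = Var()
>>> m.unit2.y = Var(['a', 'b'])
>>> m.unit1.outlet = Port(initialize=[m.unit1.x, m.unit1.y])
>>> m.unit2.inlet = Port(initialize=[m.unit2.x, m.unit2.y])
>>> m.stream = Arc(source=m.unit1.outlet, destination=m.unit2.inlet)
>>> TransformationFactory("network.expand_arcs").apply_to(m)
>>> def initialize(unit):
...     pass  # unit-specific calculations go here
>>> seq = SequentialDecomposition()
>>> seq.run(m, initialize)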
class pyomo.network.SequentialDecomposition(**kwds)
A sequential decomposition tool for Pyomo Network models
The following parameters can be set upon construction of this class or via the options attribute.
Parameters
• graph (MultiDiGraph) – A networkx graph representing the model to be solved.
default=None (will compute it)
• tear_set (list) – A list of indexes representing edges to be torn. Can be set with a list of
edge tuples via set_tear_set.
default=None (will compute it)
• select_tear_method (str) – Which method to use to select a tear set, either “mip” or
“heuristic”.
default=”mip”
• run_first_pass (bool) – Boolean indicating whether or not to run through the network
before running the tear stream convergence procedure.
default=True
• solve_tears (bool) – Boolean indicating whether or not to run iterations to converge
tear streams.
default=True
• guesses (ComponentMap) – ComponentMap of guesses to use for first pass (see
set_guesses_for method).
default=ComponentMap()
• default_guess (float) – Value to use if a free variable has no guess.
default=None
create_graph(model)
Returns a networkx MultiDiGraph of the Pyomo network model, where the nodes are the parent blocks of the
source and destination ports and the edges follow the arcs. All arcs in the model will be used whether or not
they are active (since this needs to be done after expansion), and they all need to be directed.
indexes_to_arcs(G, lst)
Converts a list of edge indexes to the corresponding Arcs
Parameters
• G – A networkx graph corresponding to lst
• lst – A list of edge indexes to convert to tuples
Returns A list of arcs
run(model, function)
Compute a Pyomo Network model using sequential decomposition
Parameters
• model – A Pyomo model
• function – A function to be called on each block/node in the network
select_tear_heuristic(G)
This finds optimal sets of tear edges based on two criteria. The primary objective is to minimize the
maximum number of times any cycle is broken. The secondary criterion is to minimize the number of tears.
This function uses a branch and bound type approach.
Returns
• tsets – List of lists of tear sets. All the tear sets returned are equally good. There are often
a very large number of equally good tear sets.
• upperbound_loop – The max number of times any single loop is torn
• upperbound_total – The total number of tears
Improvements for the future
I think I can improve the efficiency of this, but it is good enough for now. Here are some ideas for improvement:
1. Reduce the number of redundant solutions. It is possible to find tear sets [1,2] and [2,1]. I
eliminate redundant solutions from the results, but they can occur and it reduces efficiency.
2. Look at strongly connected components instead of the whole graph. This would cut back on the
size of the graph we are looking at. The flowsheets are rarely one strongly connected component.
3. When you add an edge to a tear set you could reduce the size of the problem in the branch by
only looking at strongly connected components with that edge removed.
4. This returns all equally good optimal tear sets. That may not really be necessary. For very
large flowsheets, there could be an extremely large number of optimal tear edge sets.
select_tear_mip(G, solver, solver_io=None, solver_options={})
This finds optimal sets of tear edges based on two criteria. The primary objective is to minimize the
maximum number of times any cycle is broken. The secondary criterion is to minimize the number of tears.
This function creates a MIP problem in Pyomo with a doubly weighted objective and solves it with the
solver arguments.
select_tear_mip_model(G)
Generate a model for selecting tears from the given graph
Returns
• model
• bin_list – A list of the binary variables representing each edge, indexed by the edge index
of the graph
set_guesses_for(port, guesses)
Set the guesses for the given port
These guesses will be checked for all free variables that are encountered during the first pass run. If a free
variable has no guess, its current value will be used. If its current value is None, the default_guess option
will be used. If that is None, an error will be raised.
All port variables that are downstream of a non-tear edge will already be fixed. If there is a guess for a
fixed variable, it will be silently ignored.
The guesses should be a dict that maps the following:
Port Member Name -> Value
Or, for indexed members, multiple dicts that map:
Port Member Name -> Index -> Value
For extensive members, “Value” must be a list of tuples of the form (arc, value) to guess a value for the
expanded variable of the specified arc. However, if the arc connecting this port is a 1-to-1 arc with its peer,
then there will be no expanded variable for the single arc, so a regular “Value” should be provided.
This dict cannot be used to pass guesses for variables within expression type members. Guesses for those
variables must be assigned to the variable’s current value before calling run.
While this method makes things more convenient, all it does is:
self.options[“guesses”][port] = guesses
set_tear_set(tset)
Set a custom tear set to be used when running the decomposition
The procedure will use this custom tear set instead of finding its own, thus it can save some time. Addi-
tionally, this will be useful for knowing which edges will need guesses.
Parameters tset – A list of Arcs representing edges to tear
While this method makes things more convenient, all it does is:
self.options[“tear_set”] = tset
tear_set_arcs(G, method=’mip’, **kwds)
Call the specified tear selection method and return a list of arcs representing the selected tear edges.
The kwds will be passed to the method.
tree_order(adj, adjR, roots=None)
This function determines the ordering of nodes in a directed tree. This is a generic function that can operate
on any given tree represented by the adjacency and reverse adjacency lists. If the adjacency list does not
represent a tree, the results are not valid.
In the returned order, it is sometimes possible for more than one node to be calculated at once. So a list
of lists is returned by this function. These represent a breadth-first search order of the tree. Following the
order, all nodes that lead to a particular node will be visited before it.
Parameters
• adj – An adjacency list for a directed tree. This uses generic integer node indexes, not
node names from the graph itself. This allows this to be used on sub-graphs and graphs of
components more easily.
Additional Pyomo tutorials and examples can be found at the following links:
Prof. Jeffrey Kantor’s Pyomo Cookbook
Pyomo Gallery
Show solver output by adding the tee=True option when calling the solve function
>>> SolverFactory('glpk').solve(model, tee=True)
You can use the pprint function to display the model or individual model components
>>> model.pprint()
>>> model.x.pprint()
10.2 FAQ
Options:
-- {end of options}
-= {show name= possibilities}
-? {show usage}
-bf {read boundsfile f}
-e {suppress echoing of assignments}
See the Pyomo Forum for online discussions of Pyomo or to ask a question:
• http://groups.google.com/group/pyomo-forum/
Ask a question on StackExchange
• https://stackoverflow.com/questions/ask?tags=pyomo
Open an issue on GitHub:
• https://github.com/Pyomo/pyomo
Advanced Topics
The purpose of the persistent solver interfaces is to efficiently notify the solver of incremental changes to a Pyomo
model. The persistent solver interfaces create and store model instances from the Python API for the corresponding
solver. For example, the GurobiPersistent class maintains a pointer to a gurobipy Model object. Thus, we can
make small changes to the model and notify the solver rather than recreating the entire model using the solver Python
API (or rewriting an entire model file - e.g., an lp file) every time the model is solved.
Warning: Users are responsible for notifying persistent solver interfaces when changes to a model are made!
The first step in using a persistent solver is to create a Pyomo model as usual.
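For example (a sketch with an illustrative model; a Gurobi installation is assumed):
>>> import pyomo.environ as pe
>>> m = pe.ConcreteModel()
>>> m.x = pe.Var()
>>> m.y = pe.Var()
>>> m.obj = pe.Objective(expr=m.x**2 + m.y**2)
>>> m.c = pe.Constraint(expr=m.y >= -2*m.x + 5)
>>> opt = pe.SolverFactory('gurobi_persistent')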
This returns an instance of GurobiPersistent. Now we need to tell the solver about our model.
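This is done with the set_instance method:
>>> opt.set_instance(m)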
This will create a gurobipy Model object and include the appropriate variables and constraints. We can now solve the
model.
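For example:
>>> results = opt.solve()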
We can also add or remove variables, constraints, blocks, and objectives. For example,
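Continuing the sketch above, a new constraint can be added to the Pyomo model and registered with the solver:
>>> m.c2 = pe.Constraint(expr=m.y >= m.x)
>>> opt.add_constraint(m.c2)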
This tells the solver to add one new constraint but otherwise leave the model unchanged. We can now resolve the
model.
If a pyomo component is replaced with another component with the same name, the first component must be removed
from the solver. Otherwise, the solver will have multiple components. For example, the following code will run
without error, but the solver will have an extra constraint. The solver will have both y >= -2*x + 5 and y <= x, which
is not what was intended!
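A sketch of the pattern just described (reusing the model from above):
>>> # Replace the Pyomo constraint, but forget to remove the old one from the solver first
>>> m.del_component(m.c)
>>> m.c = pe.Constraint(expr=m.y <= m.x)
>>> opt.add_constraint(m.c)   # the solver now holds both the old and the new constraint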
Warning: Components removed from a pyomo model must be removed from the solver instance by the user.
Additionally, unexpected behavior may result if a component is modified before being removed.
In most cases, the only way to modify a component is to remove it from the solver instance, modify it with Pyomo,
and then add it back to the solver instance. The only exception is with variables. Variables may be modified and then
updated with the solver:
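For example, a variable bound can be changed and the solver notified with update_var:
>>> m.x.setlb(1.0)
>>> opt.update_var(m.x)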
In order to get the best performance out of the persistent solvers, use the “save_results” flag:
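A sketch of such a call:
>>> results = opt.solve(save_results=False)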
Note that if the “save_results” flag is set to False, then the following is not supported.
Additionally, a subset of variable values may be loaded back into the model:
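For example:
>>> opt.load_vars([m.x])    # load the value of m.x only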
In this section we provide a series of examples intended to show different things that can be done with rapper.
Imports:
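A plausible set of imports for the examples that follow (module paths as assumed for this Pyomo release; csvw is PySP's CSV solution writer):
>>> import pyomo.environ as pyo
>>> import pyomo.pysp.util.rapper as rapper
>>> import pyomo.pysp.plugins.csvsolutionwriter as csvw
>>> from pyomo.pysp.scenariotree.tree_structure_model import \
...     CreateConcreteTwoStageScenarioTreeModel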
The next two lines show one way to create a concrete scenario tree. There are
others that can be found in `pyomo.pysp.scenariotree.tree_structure_model`.
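For example, a two-stage tree with three scenarios (the scenario count is arbitrary):
>>> num_scenarios = 3
>>> tree_model = CreateConcreteTwoStageScenarioTreeModel(num_scenarios)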
Create a rapper solver object assuming there is a file named ReferenceModel.py that has an appropriate
pysp_instance_creation_callback function.
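A sketch of the construction and an extensive form solve (the subsolver name is illustrative):
>>> stsolver = rapper.StochSolver("ReferenceModel.py",
...                               fsfct="pysp_instance_creation_callback",
...                               tree_model=tree_model)
>>> ef_sol = stsolver.solve_ef('cplex')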
Also, stsolver.scenario_tree has the solution. The package csvw is imported from PySP as shown above.
It is also possible to add arguments for chance constraints and CVaR; see rapperAPI for details.
Now we will solve the same problem again, but we cannot re-use the same rapper.StochSolver object in
the same program, so we must construct a new one; however, we can re-use the scenario tree. This time
we add the option to solve_ef that returns the gap, as well as the tee option to see the solver output.
PH
We will now do the same problem, but with PH, and we will re-use the scenario tree in tree_model from
the code above. We put sub-solver options in sopts and PH options (i.e., those that would be provided to
runph) in phopts. Note that when options are passed to the constructor (and the solver), they are passed as
a dictionary where options that do not take an argument have the data value None. The constructor really
only needs some options, such as those related to bundling.
>>> sopts = {}
>>> sopts['threads'] = 2
>>> phopts = {}
>>> phopts['--output-solver-log'] = None
>>> phopts['--max-iterations'] = '3'
The solve_ph method is similar to solve_ef, but requires a default_rho and accepts PH options:
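A sketch using the options assembled above (a new StochSolver is constructed, and the subsolver name is again illustrative):
>>> stsolver = rapper.StochSolver("ReferenceModel.py",
...                               fsfct="pysp_instance_creation_callback",
...                               tree_model=tree_model,
...                               phopts=phopts)
>>> ph = stsolver.solve_ph(subsolver='cplex', default_rho=1,
...                        phopts=phopts, sopts=sopts)
>>> obj, xhat = rapper.xhat_from_ph(ph)
>>> for nodename, varname, varvalue in rapper.xhat_walker(xhat):
...     print(nodename, varname, varvalue)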
A class and some utilities to wrap PySP, in particular to enable programmatic access to some of the functionality in
runef and runph for ConcreteModels. Author: David L. Woodruff, started February 2017
class pyomo.pysp.util.rapper.StochSolver(fsfile, fsfct=None, tree_model=None,
phopts=None)
A class for solving stochastic versions of concrete models and abstract models. Inspired by the IDAES use case
and by daps ability to create tree models. Author: David L. Woodruff, February 2017
Parameters
• fsfile (str) – is a path to the file that contains the scenario callback for concrete or the
reference model for abstract.
• fsfct (str, or fct, or None) –
str: callback function name in the file
fct: callback function (fsfile is ignored)
None: it is an AbstractModel
• tree_model (concrete model, or networkx tree, or path) – gives the
tree as a concrete model (which could be a fct) or a valid networkx scenario tree or path to
AMPL data file.
• phopts – dictionary of ph options; needed during construction if there is bundling.
scenario_tree
scenario tree object (that includes data)
make_ef(verbose=False, generate_weighted_cvar=False, cvar_weight=None, risk_alpha=None,
cc_indicator_var_name=None, cc_alpha=0.0)
Make an ef object (used by solve_ef); all Args are optional.
Parameters
Note: Also update the scenario tree, populated with the solution. Also attach the full ef instance to the
object. So you might want obj = pyo.value(stsolver.ef_instance.MASTER) This needs more work to deal
with solver failure (dlw, March, 2018)
Parameters
• subsolver (str) – the solver to call (e.g., ‘ipopt’)
• default_rho (float) – the rho value to use by default
• phopts – dictionary of ph options (optional)
• sopts – dictionary of subsolver options (optional)
Returns: the ph object
Note: Updates the scenario tree, populated with the xbar values; however, you probably want to do obj,
xhat = ph.compute_and_report_inner_bound_using_xhat() where ph is the return value.
pyomo.pysp.util.rapper.xhat_from_ph(ph)
a service function to wrap a call to get xhat
Parameters ph – a post-solve ph object
Returns: (float, object)
float: the expected cost of the xhat solution for the scenarios
xhat: an object with the solution tree
pyomo.pysp.util.rapper.xhat_walker(xhat)
A service generator to walk over a given xhat
Parameters xhat (dict) – an xhat solution (probably from xhat_from_ph)
Yields (nodename, varname, varvalue)
In Demonstration of rapper Capabilities we provide a series of examples intended to show different things that can
be done with rapper and the constructor is shown for a ConcreteModel. The same capabilities are available for an
AbstractModel but the construction of the rapper object is different as shown here.
The next line constructs the rapper object that can be used to emulate runph or runef.
As homage to the tired cliché based on the rap-wrap homonym, we provide the lyrics to our rap anthem:
A PySP State of Mind (The Pyomo Hip Hop)
By Woody and https://www.song-lyrics-generator.org.uk
Yeah, yeah
Ayo, modeller, it's time.
It's time, modeller (aight, modeller, begin).
Straight out the scriptable dungeons of rap.
This module provides support for including units within Pyomo expressions, and provides methods for checking the
consistency of units within those expressions.
To use this package within your Pyomo model, you first need an instance of a PyomoUnitsContainer. You can use the
module level instance called units and use the pre-defined units in expressions or components.
Examples
To use a unit within an expression, simply reference the desired unit as an attribute on the module singleton units.
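For example, a sketch using a simple acceleration-error objective (the model itself is illustrative):
>>> from pyomo.environ import ConcreteModel, Var, Objective
>>> from pyomo.core.base.units_container import units
>>> model = ConcreteModel()
>>> model.acc = Var()
>>> model.obj = Objective(expr=(model.acc*units.m/units.s**2 - 9.81*units.m/units.s**2)**2)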
>>> print(units.get_units(model.obj.expr))
m ** 2 / s ** 4
Note: This module has a module level instance of a PyomoUnitsContainer called units that you should use for
creating, retrieving, and checking units
Note: This is a work in progress. Once the components units implementations are complete, the units will eventually
work similar to the following.
Notes
• The implementation is currently based on the pint package and supports all the units that are supported by pint.
• The list of units that are supported by pint can be found at the following url: https://github.com/hgrecco/pint/
blob/master/pint/default_en.txt
• Currently, we do NOT test units of unary functions that include native data types e.g. explicit float (3.0) since
these are removed by the expression system before getting to the code that checks the units.
Note: In this implementation of units, “offset” units for temperature are not supported within expressions (i.e.
the non-absolute temperature units including degrees C and degrees F). This is because there are many non-obvious
combinations that are not allowable. This concern becomes clear if you first convert the non-absolute temperature units
to absolute and then perform the operation. For example, 30 degC + 30 degC == 60 degC may look reasonable, but if
each entry is converted to Kelvin the expression is clearly false (303.15 K + 303.15 K is not equal to 333.15 K). Therefore, there
are several operations that are not allowable with non-absolute units, including addition, multiplication, and division.
Please see the pint documentation here for more discussion. While pint implements “delta” units (e.g., delta_degC)
to support correct unit conversions, it can be difficult to identify and guarantee valid operations in a general algebraic
modeling environment. While future work may support units with relative scale, the current implementation requires
use of absolute temperature units (i.e. K and R) within expressions and a direct conversion of numeric values using
specific functions for converting input data and reporting.
class pyomo.core.base.units_container.PyomoUnitsContainer
Bases: object
Class that is used to create and contain units in Pyomo.
This is the class that is used to create, contain, and interact with units in Pyomo. The module (pyomo.
core.base.units_container) also contains a module attribute called units that is a singleton in-
stance of a PyomoUnitsContainer. This singleton should be used instead of creating your own instance of
a PyomoUnitsContainer. For an overview of the usage of this class, see the module documentation
(pyomo.core.base.units_container)
This class is based on the “pint” module. Documentation for available units can be found at the following url:
https://github.com/hgrecco/pint/blob/master/pint/default_en.txt
Note: Pre-defined units can be accessed through attributes on the PyomoUnitsContainer class; however, these
attributes are created dynamically through the __getattr__ method, and are not present on the class until they are
requested.
check_units_consistency(expr, allow_exceptions=True)
Check the consistency of the units within an expression. If allow_exceptions is False, then this function
swallows any exception and returns only True or False. Otherwise, it will raise an exception if the units
are inconsistent.
Parameters
• expr (Pyomo expression) – The source expression to check.
• allow_exceptions (bool) – True if you want any exceptions to be thrown, False if
you only want a boolean (and the exception is ignored).
Returns True if units are consistent, and False if not
Return type bool
Raises pyomo.core.base.units_container.UnitsError, pyomo.core.base.
units_container.InconsistentUnitsError
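For example (a small sketch):
>>> from pyomo.core.base.units_container import units
>>> units.check_units_consistency(3.0*units.kg + 2.0*units.kg)
True
>>> units.check_units_consistency(3.0*units.kg + 2.0*units.m, allow_exceptions=False)
False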
check_units_equivalent(expr1, expr2)
Check if the units associated with each of the expressions are equivalent.
Parameters
• expr1 (Pyomo expression) – The first expression.
• expr2 (Pyomo expression) – The second expression.
Returns True if the expressions have equivalent units, False otherwise.
Return type bool
Raises pyomo.core.base.units_container.UnitsError, pyomo.core.base.
units_container.InconsistentUnitsError
get_units(expr)
Return the Pyomo units corresponding to this expression (also performs validation and will raise an ex-
ception if units are not consistent).
Parameters expr (Pyomo expression) – The expression containing the desired units
Returns Returns the units corresponding to the expression
Return type Pyomo unit (expression)
Raises pyomo.core.base.units_container.UnitsError, pyomo.core.base.
units_container.InconsistentUnitsError
class pyomo.core.base.units_container.UnitsError(msg)
An exception class for all general errors/warnings associated with units
class pyomo.core.base.units_container.InconsistentUnitsError(exp1, exp2, msg)
An exception indicating that inconsistent units are present on an expression.
E.g., x == y, where x is in units of units.kg and y is in units of units.meter
11.4 LinearExpression
Significant speed improvements can be obtained using the LinearExpression object when there are long, dense,
linear expressions. The arguments are a constant value and two lists, one of coefficients and one of variables,
where the two lists must be of the same length. Here is a simple example that illustrates
the syntax. This example creates two constraints that are the same:
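A sketch along those lines (the import location of LinearExpression is assumed here to be pyomo.core.expr.current):
>>> from pyomo.environ import *
>>> from pyomo.core.expr.current import LinearExpression
>>> model = ConcreteModel()
>>> model.N = RangeSet(4)
>>> model.x = Var(model.N)
>>> coefs = [1, 2, 3, 4]
>>> lin = LinearExpression(constant=0,
...                        linear_coefs=coefs,
...                        linear_vars=[model.x[i] for i in model.N])
>>> model.c1 = Constraint(expr=lin <= 10)
>>> model.c2 = Constraint(expr=sum(coefs[i-1]*model.x[i] for i in model.N) <= 10)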
Warning: The lists that are passed to LinearExpression are not copied, so caution must be exercised if they are
modified after the component is constructed.
Developer Reference
This section provides documentation about fundamental capabilities in Pyomo. This documentation serves as a refer-
ence for both (1) Pyomo developers and (2) advanced users who are developing Python scripts using Pyomo.
Warning: This documentation does not explicitly reference objects in pyomo.core.kernel. While the Pyomo5
expression system works with pyomo.core.kernel objects, the documentation here is not sufficient to
appropriately describe the use of kernel objects in expressions.
Pyomo supports the declaration of symbolic expressions that represent objectives, constraints and other optimization
modeling components. Pyomo expressions are represented in an expression tree, where the leaves are operands, such
as constants or variables, and the internal nodes contain operators. Pyomo relies on so-called magic methods to
automate the construction of symbolic expressions. For example, consider an expression e declared as follows:
M = ConcreteModel()
M.v = Var()
e = M.v*2
Python determines that the magic method __mul__ is called on the M.v object, with the argument 2. This method
returns a Pyomo expression object ProductExpression that has arguments M.v and 2. This represents the
following symbolic expression tree:
[Figure: expression tree with a product node whose two children are the leaves v and 2]
Note: End-users will not likely need to know details related to how symbolic expressions are generated and managed
in Pyomo. Thus, most of the following documentation of expressions in Pyomo is most useful for Pyomo developers.
However, the discussion of runtime performance in the first section will help end-users write large-scale models.
Expression Generation
Pyomo expressions can be constructed using native binary operators in Python. For example, a sum can be created in
a simple loop:
M = ConcreteModel()
M.x = Var(range(5))
s = 0
for i in range(5):
    s = s + M.x[i]
Additionally, Pyomo expressions can be constructed using functions that iteratively apply Python binary operators.
For example, the Python sum() function can be used to replace the previous loop:
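For the model above, this is simply:
s = sum(M.x[i] for i in range(5))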
The sum() function is both more compact and more efficient. Using sum() avoids the creation of temporary vari-
ables, and the summation logic is executed in the Python interpreter while the loop is interpreted.
Pyomo can express a very wide range of algebraic expressions, and there are three general classes of expressions that
are recognized by Pyomo:
• linear polynomials
• quadratic polynomials
• nonlinear expressions, including higher-order polynomials and expressions with intrinsic functions
These classes of expressions are leveraged to efficiently generate compact representations of expressions, and to trans-
form expression trees into standard forms used to interface with solvers. Note that not all quadratic polynomials
are recognized by Pyomo; in other words, some quadratic expressions are treated as nonlinear expressions.
For example, consider the following quadratic polynomial:
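One such polynomial (an illustrative sketch, not necessarily the original example) is a sum of bilinear terms:
M = ConcreteModel()
M.x = Var(range(5))

e = sum(M.x[i]*M.x[i+1] for i in range(4))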
This quadratic polynomial is treated as a nonlinear expression unless the expression is explicitly processed to identify
quadratic terms. This lazy identification of quadratic terms allows Pyomo to tailor the search for quadratic terms
only when they are explicitly needed.
Pyomo includes several similar functions that can be used to create expressions:
prod A function to compute a product of Pyomo expressions.
quicksum A function to efficiently compute a sum of Pyomo expressions.
sum_product A function that computes a generalized dot product.
prod
The prod function is analogous to the builtin sum() function. Its main argument is a variable length argument list,
args, which represents expressions that are multiplied together. For example:
M = ConcreteModel()
M.x = Var(range(5))
M.z = Var()
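For instance (a sketch, passing an iterable of terms):
# The product x[0]*x[1]*x[2]*x[3]*x[4]
e1 = prod(M.x[i] for i in M.x)

# The product z*x[0]*x[1]
e2 = prod([M.z, M.x[0], M.x[1]])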
quicksum
The behavior of the quicksum function is similar to the builtin sum() function, but this function often generates
a more compact Pyomo expression. Its main argument is a variable length argument list, args, which represents
expressions that are summed together. For example:
M = ConcreteModel()
M.x = Var(range(5))
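For instance:
# The sum x[0] + x[1] + x[2] + x[3] + x[4]
e = quicksum(M.x[i] for i in M.x)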
The summation is customized based on the start and linear arguments. The start defines the initial value for
summation, which defaults to zero. If start is a numeric value, then the linear argument determines how the sum
is processed:
• If linear is False, then the terms in args are assumed to be nonlinear.
• If linear is True, then the terms in args are assumed to be linear.
• If linear is None, the first term in args is analyzed to determine whether the terms are linear or nonlinear.
This argument allows the quicksum function to customize the expression representation used, and specifically a
more compact representation is used for linear polynomials. The quicksum function can be slower than the builtin
sum() function, but this compact representation can generate problem representations more quickly.
Consider the following example:
import time
from pyomo.repn import generate_standard_repn

M = ConcreteModel()
M.A = RangeSet(100000)
M.p = Param(M.A, mutable=True, initialize=1)
M.x = Var(M.A)
start = time.time()
e = sum( (M.x[i] - 1)**M.p[i] for i in M.A)
print("sum: %f" % (time.time() - start))
start = time.time()
generate_standard_repn(e)
print("repn: %f" % (time.time() - start))
start = time.time()
e = quicksum( (M.x[i] - 1)**M.p[i] for i in M.A)
print("quicksum: %f" % (time.time() - start))
start = time.time()
generate_standard_repn(e)
print("repn: %f" % (time.time() - start))
The sum consists of linear terms because the exponents are one. The following output illustrates that quicksum can
identify this linear structure to generate expressions more quickly:
sum: 1.447861
repn: 0.870225
quicksum: 1.388344
repn: 0.864316
If start is not a numeric value, then the quicksum sets the initial value to start and executes a simple loop to
sum the terms. This allows the sum to be stored in an object that is passed into the function (e.g. the linear context
manager linear_expression).
Warning: By default, linear is None. While this allows for efficient expression generation in normal cases,
there are circumstances where the inspection of the first term in args is misleading. Consider the following
example:
M = ConcreteModel()
M.x = Var(range(5))
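One illustrative case (the expression here is a sketch, not necessarily the original):
e = quicksum(M.x[i]**(i+1) for i in M.x)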
The first term created by the generator is linear, but the subsequent terms are nonlinear. Pyomo gracefully transi-
tions to a nonlinear sum, but in this case quicksum is doing additional work that is not useful.
sum_product
The sum_product function supports a generalized dot product. The args argument contains one or more compo-
nents that are used to create terms in the summation. If the args argument contains a single components, then its
sequence of terms are summed together; the sum is equivalent to calling quicksum. If two or more components are
provided, then the result is the summation of their terms multiplied together. For example:
M = ConcreteModel()
M.z = RangeSet(5)
M.x = Var(range(10))
M.y = Var(range(10))
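A sketch of typical calls (the index keyword restricts the summation to the given index set):
# Sum the elements of x
e1 = sum_product(M.x)

# Sum the product of corresponding elements of x and y
e2 = sum_product(M.x, M.y)

# Sum the product of elements of x and y over the index set z
e3 = sum_product(M.x, M.y, index=M.z)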
The denom argument specifies components whose terms are in the denominator. For example:
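A sketch:
# Sum the ratios x[i]/y[i]
e = sum_product(M.x, denom=M.y)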
The terms summed by this function are explicitly specified, so sum_product can identify whether the resulting
expression is linear, quadratic or nonlinear. Consequently, this function is typically faster than simple loops, and it
generates compact representations of expressions.
Finally, note that the dot_product function is an alias for sum_product.
Historical Comparison
This document describes the “Pyomo5” expressions, which were introduced in Pyomo 5.6. The main differences
between “Pyomo5” expressions and the previous expression system, called “Coopr3”, are:
• Pyomo5 supports both CPython and PyPy implementations of Python, while Coopr3 only supports CPython.
The key difference in these implementations is that Coopr3 relies on CPython reference counting, which is not
part of the Python language standard. Hence, this implementation is not guaranteed to run on other implemen-
tations of Python.
Pyomo5 does not rely on reference counting, and it has been tested with PyPy. In the future, this should allow
Pyomo to support other Python implementations (e.g. Jython).
• Pyomo5 expression objects are immutable, while Coopr3 expression objects are mutable.
This difference relates to how expression objects are managed in Pyomo. Once created, Pyomo5 expression
objects cannot be changed. Further, the user is guaranteed that no “side effects” occur when expressions change
at a later point in time. By contrast, Coopr3 allows expressions to change in-place, and thus “side effects” may
occur when expressions are changed at a later point in time. (See discussion of entanglement below.)
• Pyomo5 provides more consistent runtime performance than Coopr3.
While this documentation does not provide a detailed comparison of runtime performance between Coopr3 and
Pyomo5, the following performance considerations also motivated the creation of Pyomo5:
– There were surprising performance inconsistencies in Coopr3. For example, the following two loops had
dramatically different runtime:
M = ConcreteModel()
M.x = Var(range(100))
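The classic illustration (reconstructed here as a sketch; the exact loops may have differed) contrasts where the new term is placed in the running sum:
# This loop was fast in Coopr3
e = 0
for i in range(100):
    e = e + M.x[i]

# This loop was surprisingly slow in Coopr3
e = 0
for i in range(100):
    e = M.x[i] + e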
– Coopr3 eliminates side effects by automatically cloning sub-expressions. Unfortunately, this can easily
lead to unexpected cloning in models, which can dramatically slow down Pyomo model generation. For
example:
M = ConcreteModel()
M.p = Param(initialize=3)
M.q = 1/M.p
M.x = Var(range(100))
– Coopr3 leverages recursion in many operations, including expression cloning. Even simple non-linear
expressions can result in deep expression trees where these recursive operations fail because Python runs
out of stack space.
– The immutable representation used in Pyomo5 requires more memory allocations than Coopr3 in simple
loops. Hence, a pure-Python execution of Pyomo5 can be 10% slower than Coopr3 for model construction.
But when Cython is used to optimize the execution of Pyomo5 expression generation, the runtimes for
Pyomo5 and Coopr3 are about the same. (In principle, Cython would improve the runtime of Coopr3 as
well, but the limitations noted above motivated a new expression system in any case.)
Pyomo fundamentally relies on the use of magic methods in Python to generate expression trees, which means that
Pyomo has very limited control for how expressions are managed in Python. For example:
• Python variables can point to the same expression tree
M = ConcreteModel()
M.v = Var()
e = f = 2*M.v
[Figure: e and f both point to the same expression tree, a product node whose children are the leaves 2 and v]
M = ConcreteModel()
M.v = Var()
e = 2*M.v
f = e + 3
[Figure: e points to the product node 2*v, and f points to a sum node whose children are that same product and the constant 3]
M = ConcreteModel()
M.v = Var()
e = 2*M.v
f = e + 3
g = e + 4
[Figure: e points to the product 2*v, while f and g point to sum nodes (2*v + 3 and 2*v + 4, respectively) that share the same product sub-tree]
In each of these examples, it is almost impossible for a Pyomo user or developer to detect whether expressions are being
shared. In CPython, the reference counting logic can support this to a limited degree. But no equivalent mechanisms
are available in PyPy and other Python implementations.
Entangled Sub-Expressions
We say that expressions are entangled if they share one or more sub-expressions. The first example above does not
represent entanglement, but rather the fact that multiple Python variables can point to the same expression tree. In the
second and third examples, the expressions are entangled because the subtree represented by e is shared. However, if
a leaf node like M.v is shared between expressions, we do not consider those expressions entangled.
Expression entanglement is problematic because shared expressions complicate the expected behavior when sub-
expressions are changed. Consider the following example:
M = ConcreteModel()
M.v = Var()
M.w = Var()
e = 2*M.v
f = e + 3
e += M.w
What is the value of e after M.w is added to it? What is the value of f? The answers to these questions are not
immediately obvious, and the fact that Coopr3 uses mutable expression objects makes them even less clear. In Pyomo5,
the result is the following pair of expression trees:
[Figure: e is a sum node with children w and the product 2*v; f is a sum node with children the product 2*v and 3; the product sub-tree 2*v is shared by both]
With Pyomo5 expressions, each sub-expression is immutable. Thus, the summation operation generates a new ex-
pression e without changing existing expression objects referenced in the expression tree for f. By contrast, Coopr3
imposes the same property by cloning the expression e before adding M.w, resulting in the following:
[Figure: in Coopr3, e becomes the sum of a cloned product 2*v and w, while f remains the sum of the original product 2*v and 3; only the leaves 2 and v are shared between the two trees]
This example also illustrates that leaves may be shared between expressions.
There is one important exception to the entanglement property described above. The Expression component is
treated as a mutable expression when shared between expressions. For example:
M = ConcreteModel()
M.v = Var()
M.w = Var()
M.e = Expression(expr=2*M.v)
f = M.e + 3
M.e += M.w
Here, the expression M.e is a so-called named expression that the user has declared. Named expressions are explicitly
intended for re-use within models, and they provide a convenient mechanism for changing sub-expressions in complex
applications. In this example, the expression tree is as follows before M.w is added:
[Figure: f is a sum of the named expression M.e and 3, where M.e points to the product 2*v]
After M.w is added, M.e points to the sum of the product 2*v and w, while f is unchanged and still sums M.e and 3:
[Figure: f is a sum of M.e and 3, where M.e now points to the sum (2*v) + w]
When considering named expressions, Pyomo5 and Coopr3 enforce the following semantics:
A change to a named expression e that is a sub-expression of f changes the expression tree for f, because
f continues to point to e after it is changed.
Warning: Pyomo expression trees are not composed of Python objects from a single class hierarchy. Conse-
quently, Pyomo relies on duck typing to ensure that valid expression trees are created.
Expression Classes
Expression classes typically represent unary and binary operations. The following table describes the standard opera-
tors in Python and their associated Pyomo expression class:
Additionally, there are a variety of other Pyomo expression classes that capture more general logical relationships,
which are summarized in the following table:
Expression objects are immutable. Specifically, the list of arguments to an expression object (a.k.a. the list of child
nodes in the tree) cannot be changed after an expression class is constructed. To enforce this property, expression
objects have a standard API for accessing expression arguments:
• args - a class property that returns a generator that yields the expression arguments
• arg(i) - a function that returns the i-th argument
• nargs() - a function that returns the number of expression arguments
Warning: Developers should never use the _args_ property directly! The semantics for the use of this data has
changed since earlier versions of Pyomo. For example, in some expression classes the value nargs() may
not equal len(_args_)!
m = ConcreteModel()
m.p = Param(default=10, mutable=False)
m.q = Param(default=10, mutable=True)
m.x = Var()
m.y = Var(initialize=1)
m.y.fixed = True
The following table describes four different simple expressions that consist of a single model component, and it shows
how they are categorized:
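These categories can also be queried directly on the components; the results shown in the comments are a sketch of the expected behavior:
m.p.is_constant(), m.p.is_fixed(), m.p.is_potentially_variable()   # (True, True, False)
m.q.is_constant(), m.q.is_fixed(), m.q.is_potentially_variable()   # (False, True, False)
m.x.is_constant(), m.x.is_fixed(), m.x.is_potentially_variable()   # (False, False, True)
m.y.is_constant(), m.y.is_fixed(), m.y.is_potentially_variable()   # (False, True, True)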
Expression classes contain methods to test whether an expression tree is in each of these categories. Additionally,
Pyomo includes custom expression classes for expression trees that are not potentially variable. These custom classes
will not normally be used by developers, but they provide an optimization of the checks for potential variability.
The following classes are exceptions to the design principles described above.
Named Expressions
Named expressions allow for changes to an expression after it has been constructed. For example, consider the
expression f defined with the Expression component:
M = ConcreteModel()
M.v = Var()
M.w = Var()
M.e = Expression(expr=2*M.v)
f = M.e + 3 # f == 2*v + 3
M.e += M.w # f == 2*v + 3 + w
Although f is an immutable expression whose definition is fixed, one of its sub-expressions is the named expression M.e.
Named expressions have a mutable value. In other words, the expression that they point to can change. Thus, a change
to the value of M.e changes the expression tree for any expression that includes the named expression.
Note: The named expression classes are not implemented as sub-classes of ExpressionBase. This reflects
design constraints related to the fact that these are modeling components that belong to class hierarchies other than the
expression class hierarchy, and Pyomo’s design prohibits the use of multiple inheritance for these classes.
Linear Expressions
Pyomo includes a special expression class for linear expressions. The class LinearExpression provides a com-
pact description of linear polynomials. Specifically, it includes a constant value (constant) and two lists, one of coefficients (linear_coefs) and one of variables (linear_vars).
Sum Expressions
Pyomo does not have a binary sum expression class. Instead, it has an n-ary summation class, SumExpression.
This expression class treats sums as n-ary sums for efficiency reasons; many large optimization models contain large
sums. But note that this class maintains the immutability property described above. This class shares an underlying list
of arguments with other SumExpression objects. A particular object owns the first n arguments in the shared list,
but different objects may have different values of n.
This class acts like a normal immutable expression class, and the API described above works normally. But direct
access to the shared list could have unexpected results.
Mutable Expressions
Finally, Pyomo includes several mutable expression classes that are private. These are not intended to be used by
users, but they might be useful for developers in contexts where the developer can appropriately control how the
classes are used. Specifically, immutability eliminates side-effects where changes to a sub-expression unexpectedly
create changes to the expression tree. But within the context of model transformations, developers may be able to limit
the use of expressions to avoid these side-effects. The following mutable private classes are available in Pyomo:
_MutableSumExpression This class is used in the nonlinear_expression context manager to efficiently
combine sums of nonlinear terms.
_MutableLinearExpression This class is used in the linear_expression context manager to efficiently
combine sums of linear terms.
Expression Semantics
Pyomo has clear semantics regarding what is considered a valid leaf and interior node.
The following classes are valid interior nodes:
• Subclasses of ExpressionBase
• Classes that are duck typed to match the API of the ExpressionBase class. For example, the named
expression class Expression.
The following classes are valid leaf nodes:
• Members of nonpyomo_leaf_types, which includes standard numeric data types like int, float
and long, as well as numeric data types defined by numpy and other commonly used packages.
This set also includes NonNumericValue, which is used to wrap non-numeric arguments to the
ExternalFunctionExpression class.
• Parameter component classes like SimpleParam and _ParamData, which arise in expression trees when
the parameters are declared as mutable. (Immutable parameters are identified when generating expressions, and
they are replaced with their associated numeric value.)
• Variable component classes like SimpleVar and _GeneralVarData, which often arise in expression trees.
Note: In some contexts the LinearExpression class can be treated as an interior node, and sometimes it can be
treated as a leaf. This expression object does not have any child arguments, so nargs() is zero. But this expression
references variables and parameters in a linear expression, so in that sense it does not represent a leaf node in the tree.
Context Managers
Pyomo defines several context managers that can be used to declare the form of expressions, and to define a mutable
expression object that efficiently manages sums.
The linear_expression object is a context manager that can be used to declare a linear sum. For example,
consider the following two loops:
M = ConcreteModel()
M.x = Var(range(5))
s = 0
for i in range(5):
    s += M.x[i]

with linear_expression() as e:
    for i in range(5):
        e += M.x[i]
The first apparent difference in these loops is that the value of s is explicitly initialized while e is initialized when
the context manager is entered. However, a more fundamental difference is that the expression representation for s
differs from e. Each term added to s results in a new, immutable expression. By contrast, the context manager creates
a mutable expression representation for e. This difference allows for both (a) a more efficient processing of each sum,
and (b) a more compact representation for the expression.
The difference between linear_expression and nonlinear_expression is the underlying representation
that each supports. Note that both of these are instances of context manager classes. In single-threaded applications,
these objects can be safely used to construct different expressions with different context declarations.
Finally, note that these context managers can be passed into the start method for the quicksum function. For
example:
M = ConcreteModel()
M.x = Var(range(5))
M.y = Var(range(5))
with linear_expression() as e:
    quicksum((M.x[i] for i in M.x), start=e)
    quicksum((M.y[i] for i in M.y), start=e)
This sum contains terms for M.x[i] and M.y[i]. The syntax in this example is not intuitive because the sum is
being stored in e.
Note: We do not generally expect users or developers to use these context managers. They are used by the quicksum
and sum_product functions to accelerate expression generation, and there are few cases where the direct use of these
context managers would provide additional utility to users and developers.
There are several ways that string representations can be created from an expression, but the
expression_to_string function provides the most flexible mechanism for generating a string represen-
tation. The options to this function control distinct aspects of the string representation.
The default string representation is an algebraic form, which closely mimics the Python operations used to construct
an expression. The verbose flag can be set to True to generate a string representation that is a nested functional
form. For example:
M = ConcreteModel()
M.x = Var()
e = sin(M.x) + 2*M.x
# sin(x) + 2*x
print(EXPR.expression_to_string(e))
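The nested functional form can then be requested with the verbose flag (the rendering shown in the comment is approximate):
# sum(sin(x), prod(2, x))
print(EXPR.expression_to_string(e, verbose=True))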
The string representation used for variables in expression can be customized to define different label formats. If
the labeler option is specified, then this function (or class functor) is used to generate a string label used to rep-
resent the variable. Pyomo defines a variety of labelers in the pyomo.core.base.label module. For example, the
NumericLabeler defines a functor that can be used to sequentially generate simple labels with a prefix followed
by the variable count:
M = ConcreteModel()
M.x = Var()
M.y = Var()
e = sin(M.x) + 2*M.y
# sin(x1) + 2*x2
print(EXPR.expression_to_string(e, labeler=NumericLabeler('x')))
The smap option is used to specify a symbol map object (SymbolMap), which caches the variable label data. This
option is normally specified in contexts where the string representations for many expressions are being generated. In
that context, a symbol map ensures that variables in different expressions have a consistent label in their associated
string representations.
The standardize option can be used to re-order the string representation to print polynomial terms before nonlinear
terms. By default, standardize is False, and the string representation reflects the order in which terms were
combined to form the expression. Pyomo does not guarantee that the string representation exactly matches the Python
expression order, since some simplification and re-ordering of terms is done automatically to improve the efficiency
of expression generation. But in most cases the string representation will closely correspond to the Python expression
order.
If standardize is True, then the pyomo expression is processed to identify polynomial terms, and the string
representation consists of the constant and linear terms followed by an expression that contains other nonlinear terms.
For example:
M = ConcreteModel()
M.x = Var()
M.y = Var()
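A sketch of such a call (the printed ordering shown in the comment is approximate):
e = sin(M.x) + 2*M.y + M.x*M.y - 3
# -3 + 2*y + sin(x) + x*y
print(EXPR.expression_to_string(e, standardize=True))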
Cloning Expressions
Expressions are automatically cloned only during certain expression transformations. Since this can be an ex-
pensive operation, the clone_counter context manager object is provided to track the number of times the
clone_expression function is executed.
For example:
M = ConcreteModel()
M.x = Var()
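A sketch of its use, assuming clone_counter() is entered as a context manager that exposes a count attribute:
from pyomo.core.expr import current as EXPR

with EXPR.clone_counter() as counter:
    start = counter.count
    e1 = sin(M.x)
    e2 = e1.clone()
    # exactly one clone operation occurred in this block
    assert(counter.count - start == 1)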
Evaluating Expressions
Expressions can be evaluated when all variables and parameters in the expression have a value. The value function
can be used to walk the expression tree and compute the value of an expression. For example:
import math
from math import isclose

M = ConcreteModel()
M.x = Var()
M.x.value = math.pi/2.0
val = value(M.x)
assert(isclose(val, math.pi/2.0))
Additionally, expressions define the __call__() method, so the following is another way to compute the value of
an expression:
val = M.x()
assert(isclose(val, math.pi/2.0))
If a parameter or variable is undefined, then the value function and __call__() method will raise an exception.
This exception can be suppressed using the exception option. For example:
M = ConcreteModel()
M.x = Var()
val = value(M.x, exception=False)
assert(val is None)
This option is useful in contexts where adding a try block is inconvenient in your modeling script.
Note: Both the value function and __call__() method call the evaluate_expression function. In prac-
tice, this function will be slightly faster, but the difference is only meaningful when expressions are evaluated many
times.
Expression transformations sometimes need to find all nodes in an expression tree that are of a given type. Pyomo
contains two utility functions that support this functionality. First, the identify_components function is a gen-
erator function that walks the expression tree and yields all nodes whose type is in a specified set of node types. For
example:
from pyomo.core.expr import current as EXPR
M = ConcreteModel()
M.x = Var()
M.p = Param(mutable=True)
e = M.p+M.x
s = set([type(M.p)])
assert(list(EXPR.identify_components(e, s)) == [M.p])
The identify_variables function is a generator function that yields all nodes that are variables. Pyomo uses
several different classes to represent variables, but this set of variable types does not need to be specified by the user.
However, the include_fixed flag can be specified to omit fixed variables. For example:
from pyomo.core.expr import current as EXPR
M.y = Var()
e = M.x + M.y
M.y.value = 1
M.y.fixed = True
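For example, the fixed variable M.y is omitted when include_fixed is False:
vars_all = list(EXPR.identify_variables(e))                        # contains M.x and M.y
vars_free = list(EXPR.identify_variables(e, include_fixed=False))  # contains only M.x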
Many of the utility functions defined above are implemented by walking an expression tree and performing an oper-
ation at nodes in the tree. For example, evaluating an expression is performed using a post-order depth-first search
process where the value of a node is computed using the values of its children.
Walking an expression tree can be tricky, and the code requires intimate knowledge of the design of the expression
system. Pyomo includes several classes that define so-called visitor patterns for walking expression trees:
SimpleExpressionVisitor A visitor() method is called for each node in the tree, and the visitor class
collects information about the tree.
ExpressionValueVisitor When the visitor() method is called on each node in the tree, the values of its
children have been computed. The value of the node is returned from visitor().
ExpressionReplacementVisitor When the visitor() method is called on each node in the tree, it may
clone or otherwise replace the node using objects for its children (which themselves may be clones or replace-
ments from the original child objects). The new node object is returned from visitor().
These classes define a variety of suitable tree search methods:
• SimpleExpressionVisitor
– xbfs: breadth-first search where leaf nodes are immediately visited
– xbfs_yield_leaves: breadth-first search where leaf nodes are immediately visited, and the visit method
yields a value
• ExpressionValueVisitor
– dfs_postorder_stack: postorder depth-first search using a stack
• ExpressionReplacementVisitor
– dfs_postorder_stack: postorder depth-first search using a stack
Note: The PyUtilib visitor classes define several other search methods that could be used with Pyomo expressions.
But these are the only search methods currently used within Pyomo.
To implement a visitor object, a user creates a subclass of one of these classes. Only one of a few methods will need
to be defined to implement the visitor:
visitor() Defines the operation that is performed when a node is visited. In the ExpressionValueVisitor
and ExpressionReplacementVisitor visitor classes, this method returns a value that is used by its
parent node.
visiting_potential_leaf() Checks if the search should terminate with this node. If no, then this method
returns the tuple (False, None). If yes, then this method returns (True, value), where value is
computed by this method. This method is not used in the SimpleExpressionVisitor visitor class.
finalize() This method defines the final value that is returned from the visitor. This is not normally redefined.
Detailed documentation of the APIs for these methods is provided with the class documentation for these visitors.
SimpleExpressionVisitor Example
In this example, we describe a visitor class that counts the number of nodes in an expression (including leaf nodes).
Consider the following class:
from pyomo.core.expr import current as EXPR
class SizeofVisitor(EXPR.SimpleExpressionVisitor):
    def __init__(self):
        self.counter = 0
    def visit(self, node):
        self.counter += 1
    def finalize(self):
        return self.counter
The class constructor creates a counter, and the visit() method increments this counter for every node that is
visited. The finalize() method returns the value of this counter after the tree has been walked. The following
function illustrates the use of this visitor class:
def sizeof_expression(expr):
    #
    # Create the visitor object
    #
    visitor = SizeofVisitor()
    #
    # Compute the value using the :func:`xbfs` search method.
    #
    return visitor.xbfs(expr)
ExpressionValueVisitor Example
In this example, we describe a visitor class that clones the expression tree (including leaf nodes). Consider the following class:
from pyomo.core.expr import current as EXPR

class CloneVisitor(EXPR.ExpressionValueVisitor):
    def __init__(self):
        self.memo = {'__block_scope__': {id(None): False}}
The visit() method creates a new expression node with children specified by values. The
visiting_potential_leaf() method performs a deepcopy() on leaf nodes, which are native Python types
or non-expression objects.
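The visit() and visiting_potential_leaf() methods are not shown in this excerpt. A minimal sketch consistent with the description above (the method bodies are an assumption, not the library's exact code):

class CloneVisitor(EXPR.ExpressionValueVisitor):

    # ... __init__ as above ...

    def visit(self, node, values):
        # Build a new interior node of the same type, using the
        # already-cloned children in values.
        return node.create_node_with_local_data(tuple(values))

    def visiting_potential_leaf(self, node):
        # Deep-copy leaf nodes: native Python types and other non-expression
        # objects (assumes ``import copy`` and ``nonpyomo_leaf_types`` from
        # pyomo.core.expr.numvalue).
        if node.__class__ in nonpyomo_leaf_types or not node.is_expression_type():
            return True, copy.deepcopy(node, self.memo)
        return False, None

The clone_expression() function below then applies this visitor to an expression: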
def clone_expression(expr):
    #
    # Create the visitor object
    #
    visitor = CloneVisitor()
    #
    # Clone the expression using the dfs_postorder_stack
    # search method.
    #
    return visitor.dfs_postorder_stack(expr)
ExpressionReplacementVisitor Example
In this example, we describe a visitor class that replaces variables with scaled variables, using a mutable parameter that can be modified later. Consider the following class:
# NOTE: only a fragment of this class appears in the source; the body below is
# a reconstruction (it assumes ``import copy`` and ``native_numeric_types``
# imported from pyomo.core.expr.numvalue).
class ScalingVisitor(EXPR.ExpressionReplacementVisitor):

    def __init__(self, scale):
        super(ScalingVisitor, self).__init__()
        self.scale = scale

    def visiting_potential_leaf(self, node):
        # Native constants are returned unchanged; variables are rescaled.
        if node.__class__ in native_numeric_types:
            return True, node
        if node.is_variable_type():
            return True, self.scale[id(node)]*node
        if isinstance(node, EXPR.LinearExpression):
            node_ = copy.deepcopy(node)
            node_.constant = node.constant
            node_.linear_vars = copy.copy(node.linear_vars)
            node_.linear_coefs = [c*self.scale[id(v)]
                                  for c, v in zip(node.linear_coefs, node.linear_vars)]
            return True, node_
        return False, None
The scale_expression() function is called with an expression and a dictionary, scale, that maps variable IDs to model parameters. For example:
M = ConcreteModel()
M.x = Var(range(5))
M.p = Param(range(5), mutable=True)

scale = {}
for i in M.x:
    scale[id(M.x[i])] = M.p[i]
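The scale_expression() function itself is not shown in this excerpt; a sketch consistent with the description above, mirroring the structure of the earlier helper functions:

def scale_expression(expr, scale):
    #
    # Create the visitor object
    #
    visitor = ScalingVisitor(scale)
    #
    # Rewrite the expression using the dfs_postorder_stack
    # search method.
    #
    return visitor.dfs_postorder_stack(expr)

# Every occurrence of M.x[i] in the expression is replaced by M.p[i]*M.x[i]
e = scale_expression(M.x[0] + 3*M.x[1], scale)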
Library Reference
Pyomo is being increasingly used as a library to support Python scripts. This section describes library APIs for key
elements of Pyomo’s core library. This documentation serves as a reference for both (1) Pyomo developers and (2)
advanced users who are developing Python scripts using Pyomo.
The following modeling components make up the core of the Pyomo Algebraic Modeling Language (AML). These
classes are all available through the pyomo.environ namespace.
ConcreteModel(*args, **kwds)  A concrete optimization model that does not defer construction of components.
AbstractModel(*args, **kwds)  An abstract optimization model that defers construction of components.
Block(*args, **kwargs)  Blocks are indexed components that contain other components (including blocks).
Set(*args, **kwds)  A set object that is used to index other Pyomo objects.
RangeSet(*args, **kwds)  A set that represents a list of numeric values.
Param(*args, **kwd)  A parameter value, which may be defined over an index.
Var(*args, **kwd)  A numeric variable, which may be defined over an index.
Objective(*args, **kwargs)  This modeling component defines an objective expression.
Constraint(*args, **kwargs)  This modeling component defines a constraint expression using a rule function.
Reference(reference[, ctype])  Creates a component that references other components.
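For orientation, a minimal concrete model that exercises several of these components (the names and data here are illustrative, not part of the original text):

from pyomo.environ import (ConcreteModel, Set, Param, Var, Objective,
                           Constraint, NonNegativeReals, maximize)

m = ConcreteModel()
m.I = Set(initialize=[1, 2, 3])
m.c = Param(m.I, initialize={1: 2.0, 2: 3.0, 3: 1.0})
m.x = Var(m.I, domain=NonNegativeReals)

# Maximize total value subject to a simple budget limit
m.obj = Objective(expr=sum(m.c[i]*m.x[i] for i in m.I), sense=maximize)
m.budget = Constraint(expr=sum(m.x[i] for i in m.I) <= 10)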
get_suffix_value(suffix_or_name, default=None)
Get the suffix value for this component data
getname(fully_qualified=False, name_buffer=None, relative_to=None)
Return a string with the component name and index
id_index_map()
Return a dictionary id->index for all ComponentData instances.
index()
Returns the index of this ComponentData instance relative to the parent component index set. None is
returned if this instance does not have a parent component, or if - for some unknown reason - this instance
does not belong to the parent component’s index set. This method is not intended to be a fast method; it
should be used rarely, primarily in cases of label formulation.
index_set()
Return the index set
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
A boolean indicating whether or not all active components of the input model have been properly con-
structed.
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
load(arg, namespaces=[None], profile_memory=0, report_timing=None)
Load the model with data from a file, dictionary or DataPortal object.
local_name
Get the component name only within the context of the immediate parent container.
model()
Return the model of the component that owns this data.
name
Get the fully qualified component name.
parent_block()
Return the parent of the component that owns this data.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
preprocess(preprocessor=None)
Apply the preprocess plugins defined by the user
preprocessor_ep = <ExtensionPoint IPyomoPresolver env=pyomo>
reclassify_component_type(name_or_object, new_ctype, preserve_declaration_order=True)
TODO
reconstruct(data=None)
Re-construct model expressions
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(val)
Set the value of a scalar component.
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return a string representation of this component, applying the labeler if passed one.
transfer_attributes_from(src)
Transfer user-defined attributes from src to this block
This transfers all components and user-defined attributes from the block or dictionary src and places them
on this Block. Components are transferred in declaration order.
If a Component on src is also declared on this block as either a Component or attribute, the local Compo-
nent or attribute is replaced by the incoming component. If an attribute name on src matches a Component
declared on this block, then the incoming attribute is passed to the local Component’s set_value() method.
Attribute names appearing in this block’s _Block_reserved_words set will not be transferred (although
Components will be).
Parameters src (_BlockData or dict) – The Block or mapping that contains the new
attributes to assign to this block.
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
valid_problem_types()
This method allows the pyomo.opt convert function to work with a Model object.
values()
Return a list of the component data objects in the dictionary
write(filename=None, format=None, solver_capability=None, io_options={})
Write the model to a file, with a given format.
class pyomo.environ.AbstractModel(*args, **kwds)
Bases: pyomo.core.base.PyomoModel.Model
An abstract optimization model that defers construction of components.
activate()
Set the active attribute to True
active
Return the active attribute
add_component(name, val)
Add a component ‘name’ to the block.
This method assumes that the attribute is not in the model.
block_data_objects(active=None, sort=False, descend_into=True, descent_order=None)
This method returns a generator that iterates through the current block and recursively all sub-blocks. This
is semantically equivalent to
component_data_objects(Block, . . . )
clear()
Clear the data in this component
clear_suffix_value(suffix_or_name, expand=True)
Clear the suffix value for this component data
clone()
TODO
collect_ctypes(active=None, descend_into=True)
Count all component types stored on or under this block.
Parameters
• active (True/None) – Set to True to indicate that only active components should be
counted. The default value of None indicates that all components (including those that
have been deactivated) should be counted.
• descend_into (bool) – Indicates whether or not component types should be counted
on sub-blocks. Default is True.
Returns: A set of component types.
component(name_or_object)
Return a child component of this block.
If passed a string, this will return the child component registered by that name. If passed a component, this
will return that component IFF the component is a child of this block. Returns None on lookup failure.
component_data_iterindex(ctype=None, active=None, sort=False, descend_into=True, descent_order=None)
Return a generator that returns a tuple for each component data object in a block. By default, this generator
recursively descends into sub-blocks. The tuple is
((component name, index value), _ComponentData)
component_data_objects(ctype=None, active=None, sort=False, descend_into=True, descent_order=None)
Return a generator that iterates through the component data objects for all components in a block. By
default, this generator recursively descends into sub-blocks.
component_map(ctype=None, active=None, sort=False)
Returns a PseudoMap of the components in this block.
ctype – None (all components), a single ComponentType, or an iterable of ComponentTypes.
active – None (all components), True (active only), or False (inactive only).
sort – True (map in Block.alphabetizeComponentAndIndex order) or False (Block.declarationOrder).
index()
Returns the index of this ComponentData instance relative to the parent component index set. None is returned if this instance does not have a parent component, or if - for some unknown reason - this instance does not belong to the parent component’s index set. This method is not intended to be a fast method; it should be used rarely, primarily in cases of label formulation.
index_set()
Return the index set
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
A boolean indicating whether or not all active components of the input model have been properly con-
structed.
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
load(arg, namespaces=[None], profile_memory=0, report_timing=None)
Load the model with data from a file, dictionary or DataPortal object.
local_name
Get the component name only within the context of the immediate parent container.
model()
Return the model of the component that owns this data.
name
Get the fully qualified component name.
parent_block()
Return the parent of the component that owns this data.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
preprocess(preprocessor=None)
Apply the preprocess plugins defined by the user
preprocessor_ep = <ExtensionPoint IPyomoPresolver env=pyomo>
reclassify_component_type(name_or_object, new_ctype, preserve_declaration_order=True)
TODO
reconstruct(data=None)
Re-construct model expressions
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(val)
Set the value of a scalar component.
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return a string representation of this component, applying the labeler if passed one.
transfer_attributes_from(src)
Transfer user-defined attributes from src to this block
This transfers all components and user-defined attributes from the block or dictionary src and places them
on this Block. Components are transferred in declaration order.
If a Component on src is also declared on this block as either a Component or attribute, the local Compo-
nent or attribute is replaced by the incoming component. If an attribute name on src matches a Component
declared on this block, then the incoming attribute is passed to the local Component’s set_value() method.
Attribute names appearing in this block’s _Block_reserved_words set will not be transferred (although
Components will be).
Parameters src (_BlockData or dict) – The Block or mapping that contains the new
attributes to assign to this block.
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
valid_problem_types()
This method allows the pyomo.opt convert function to work with a Model object.
values()
Return a list of the component data objects in the dictionary
write(filename=None, format=None, solver_capability=None, io_options={})
Write the model to a file, with a given format.
class pyomo.environ.Block(*args, **kwargs)
Bases: pyomo.core.base.indexed_component.ActiveIndexedComponent
Blocks are indexed components that contain other components (including blocks). Blocks have a global attribute
that defines whether construction is deferred. This applies to all components that they contain except blocks.
Blocks contained by other blocks use their local attribute to determine whether construction is deferred.
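For example, an indexed block can be populated with a rule so that each block owns its own local components (a small illustrative sketch, not from the original text):

from pyomo.environ import ConcreteModel, Block, Var

m = ConcreteModel()

def b_rule(b, i):
    # Each block i gets its own variable with an index-dependent bound
    b.x = Var(bounds=(0, i))

m.b = Block([1, 2, 3], rule=b_rule)
m.b[2].x.pprint()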
activate()
Set the active attribute to True
active
Return the active attribute
clear()
Clear the data in this component
clear_suffix_value(suffix_or_name, expand=True)
Clear the suffix value for this component data
construct(data=None)
Initialize the block
deactivate()
Set the active attribute to False
dim()
Return the dimension of the index
display(filename=None, ostream=None, prefix="")
Display values in the block
find_component(label_or_component)
Return a block component given a name.
get_suffix_value(suffix_or_name, default=None)
Get the suffix value for this component data
getname(fully_qualified=False, name_buffer=None, relative_to=None)
Returns the component name associated with this object.
Parameters
• fully_qualified – Generate the full name from nested block names
• name_buffer – Can be used to optimize iterative name generation (using a dictionary)
• relative_to – When generating a fully qualified name, stop at this block.
id_index_map()
Return a dictionary id->index for all ComponentData instances.
index_set()
Return the index set
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
Return True if this class has been constructed
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
local_name
Get the component name only within the context of the immediate parent container.
model()
Returns the model associated with this object.
name
Get the fully qualified component name.
parent_block()
Returns the parent of this object.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
reconstruct(data=None)
Re-construct model expressions
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(value)
Set the value of a scalar component.
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return the component name
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
values()
Return a list of the component data objects in the dictionary
class pyomo.environ.Constraint(*args, **kwargs)
Bases: pyomo.core.base.indexed_component.ActiveIndexedComponent
This modeling component defines a constraint expression using a rule function.
Constructor arguments:
expr A Pyomo expression for this constraint
rule A function that is used to construct constraint expressions
doc A text string describing this component
name A name for this component
Public class attributes:
doc A text string describing this component
name A name for this component
active A boolean that is true if this component will be used to construct a model instance
rule The rule used to initialize the constraint(s)
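A brief sketch of both construction styles (the names and data here are illustrative, not part of the original text):

from pyomo.environ import ConcreteModel, Var, Constraint

m = ConcreteModel()
m.x = Var([1, 2, 3])

# A single constraint built from an expression
m.total = Constraint(expr=m.x[1] + m.x[2] + m.x[3] <= 10)

# An indexed constraint built from a rule
def lower_rule(m, i):
    return m.x[i] >= i
m.lower = Constraint([1, 2, 3], rule=lower_rule)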
is_constructed()
Return True if this class has been constructed
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
local_name
Get the component name only within the context of the immediate parent container.
model()
Returns the model associated with this object.
name
Get the fully qualified component name.
parent_block()
Returns the parent of this object.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
reconstruct(data=None)
Re-construct model expressions
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(value)
Set the value of a scalar component.
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return the component name
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
values()
Return a list of the component data objects in the dictionary
class pyomo.environ.Objective(*args, **kwargs)
Bases: pyomo.core.base.indexed_component.ActiveIndexedComponent
This modeling component defines an objective expression.
Note that this is a subclass of NumericValue to allow objectives to be used as part of expressions.
Constructor arguments:
expr A Pyomo expression for this objective
rule A function that is used to construct objective expressions
sense Indicate whether minimizing (the default) or maximizing
doc A text string describing this component
name A name for this component
Public class attributes:
doc A text string describing this component
name A name for this component
active A boolean that is true if this component will be used to construct a model instance
rule The rule used to initialize the objective(s)
sense The objective sense
Private class attributes:
_constructed A boolean that is true if this component has been constructed
_data A dictionary from the index set to component data objects
_index The set of valid indices
_implicit_subsets A tuple of set objects that represents the index set
_model A weakref to the model that owns this component
_parent A weakref to the parent block that owns this component
_type The class type for the derived subclass
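A brief sketch of declaring an objective (the names and data here are illustrative, not part of the original text):

from pyomo.environ import ConcreteModel, Var, Objective, maximize

m = ConcreteModel()
m.x = Var([1, 2])

# Maximize a linear expression; the default sense is minimize
m.profit = Objective(expr=3*m.x[1] + 2*m.x[2], sense=maximize)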
activate()
Set the active attribute to True
active
Return the active attribute
clear()
Clear the data in this component
clear_suffix_value(suffix_or_name, expand=True)
Clear the suffix value for this component data
construct(data=None)
Construct the expression(s) for this objective.
deactivate()
Set the active attribute to False
dim()
Return the dimension of the index
display(prefix="", ostream=None)
Provide a verbose display of this object
get_suffix_value(suffix_or_name, default=None)
Get the suffix value for this component data
getname(fully_qualified=False, name_buffer=None, relative_to=None)
Returns the component name associated with this object.
Parameters
• fully_qualified – Generate the full name from nested block names
• name_buffer – Can be used to optimize iterative name generation (using a dictionary)
• relative_to – When generating a fully qualified name, stop at this block.
id_index_map()
Return a dictionary id->index for all ComponentData instances.
index_set()
Return the index set
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
Return True if this class has been constructed
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
local_name
Get the component name only within the context of the immediate parent container.
model()
Returns the model associated with this object.
name
Get the fully qualified component name.
parent_block()
Returns the parent of this object.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
reconstruct(data=None)
Re-construct model expressions
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(value)
Set the value of a scalar component.
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return the component name
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
values()
Return a list of the component data objects in the dictionary
class pyomo.environ.Param(*args, **kwd)
Bases: pyomo.core.base.indexed_component.IndexedComponent
A parameter value, which may be defined over an index.
Constructor Arguments:
name The name of this parameter
index The index set that defines the distinct parameters. By default, this is None, indicating that there is a
single parameter.
domain A set that defines the type of values that each parameter must be.
within A set that defines the type of values that each parameter must be.
validate A rule for validating this parameter w.r.t. data that exists in the model
default A scalar, rule, or dictionary that defines default values for this parameter
initialize A dictionary or rule for setting up this parameter with existing model data
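A brief sketch of declaring a parameter with explicit values and a default (the names and data here are illustrative, not part of the original text):

from pyomo.environ import ConcreteModel, Param, PositiveReals, value

m = ConcreteModel()
m.d = Param(['a', 'b', 'c'],
            within=PositiveReals,             # each value must lie in this set
            initialize={'a': 1.5, 'b': 2.5},  # explicit values
            default=3.0)                      # used for indices not initialized
print(value(m.d['c']))                        # 3.0, from the default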
active
Return the active attribute
clear()
Clear the data in this component
clear_suffix_value(suffix_or_name, expand=True)
Clear the suffix value for this component data
construct(data=None)
Initialize this component.
A parameter is constructed using the initial data or the data loaded from an external source. We first set all
the values based on self._rule, and then allow the data dictionary to overwrite anything.
Note that we allow an undefined Param value to be constructed. We throw an exception if a user tries to
use an uninitialized Param.
default()
Return the value of the parameter default.
Possible values:
None No default value is provided.
Numeric A constant value that is the default value for all undefined parameters.
Function f(model, i) returns the default value for parameter i
dim()
Return the dimension of the index
extract_values()
A utility to extract all index-value pairs defined for this parameter, returned as a dictionary.
This method is useful in contexts where key iteration and repeated __getitem__ calls are too expensive to
extract the contents of a parameter.
extract_values_sparse()
A utility to extract all index-value pairs defined with non-default values, returned as a dictionary.
This method is useful in contexts where key iteration and repeated __getitem__ calls are too expensive to
extract the contents of a parameter.
get_suffix_value(suffix_or_name, default=None)
Get the suffix value for this component data
getname(fully_qualified=False, name_buffer=None, relative_to=None)
Returns the component name associated with this object.
Parameters
• fully_qualified – Generate the full name from nested block names
• name_buffer – Can be used to optimize iterative name generation (using a dictionary)
• relative_to – When generating a fully qualified name, stop at this block.
id_index_map()
Return a dictionary id->index for all ComponentData instances.
index_set()
Return the index set
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
Return True if this class has been constructed
is_expression_type()
Returns False because this is not an expression
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
local_name
Get the component name only within the context of the immediate parent container.
model()
Returns the model associated with this object.
name
Get the fully qualified component name.
parent_block()
Returns the parent of this object.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
reconstruct(data=None)
Reconstruct this parameter object. This is particularly useful for cases where an initialize rule is provided.
An initialize rule can return an expression that is a function of other parameters, so reconstruction can
account for changes in dependent parameters.
Only mutable parameters can be reconstructed. Otherwise, the changes would not be propagated into
expressions in objectives or constraints.
root_block()
Return self.model()
set_default(val)
Perform error checks and then set the default value for this parameter.
NOTE: this test will not validate the value of function return values.
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(value)
Set the value of a scalar component.
sparse_items()
Return a list of (index,data) tuples for defined parameters
sparse_iteritems()
Return an iterator of (index,data) tuples for defined parameters
sparse_iterkeys()
Return an iterator for the keys in the defined parameters
sparse_itervalues()
Return an iterator for the defined param data objects
sparse_keys()
Return a list of keys in the defined parameters
sparse_values()
Return a list of the defined param data objects
store_values(new_values, check=True)
A utility to update a Param with a dictionary or scalar.
If check=True, then both the index and value are checked through the __getitem__ method. Using
check=False should only be used by developers!
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return the component name
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
values()
Return a list of the component data objects in the dictionary
class pyomo.environ.RangeSet(*args, **kwds)
Bases: pyomo.core.base.sets.OrderedSimpleSet
A set that represents a list of numeric values.
active
Return the active attribute
add(*args)
Add one or more elements to a set.
bounds()
Return bounds information. The default value is ‘None’, which indicates that this set does not contain
bounds. Otherwise, this is assumed to be a tuple: (lower, upper).
check_values()
Verify that the values in this set are valid.
clear()
Clear the data in this component.
clear_suffix_value(suffix_or_name, expand=True)
Clear the suffix value for this component data
construct(values=None)
Initialize set data
cross(*args)
Return the cross-product between this set and one or more sets
data()
The underlying set data.
difference(*args)
Return the difference between this set with one or more sets
dim()
Return the dimension of the index
discard(element)
Remove an element from the set.
If the element is not a member, do nothing.
first()
The first element is the lower bound
get_suffix_value(suffix_or_name, default=None)
Get the suffix value for this component data
getname(fully_qualified=False, name_buffer=None, relative_to=None)
Returns the component name associated with this object.
Parameters
• fully_qualified – Generate the full name from nested block names
• name_buffer – Can be used to optimize iterative name generation (using a dictionary)
• relative_to – When generating a fully qualified name, stop at this block.
id_index_map()
Return a dictionary id->index for all ComponentData instances.
index()
Returns the index of this ComponentData instance relative to the parent component index set. None is
returned if this instance does not have a parent component, or if - for some unknown reason - this instance
does not belong to the parent component’s index set. This method is not intended to be a fast method; it
should be used rarely, primarily in cases of label formulation.
index_set()
Return the index set
intersection(*args)
Return the intersection of this set with one or more sets
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
Return True if this class has been constructed
is_indexed()
Return true if this component is indexed
isdisjoint(other)
Return True if the set has no elements in common with ‘other’. Sets are disjoint if and only if their
intersection is the empty set.
issubset(other)
Return True if the set is a subset of ‘other’.
issuperset(other)
Return True if the set is a superset of ‘other’.
Note that we do not simply call other.issubset(self) because ‘other’ may not be a Set instance.
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
last()
The last element is the upper bound
local_name
Get the component name only within the context of the immediate parent container.
member(key)
Return the value associated with this key.
model()
Returns the model associated with this object.
name
Get the fully qualified component name.
next(match_element, k=1)
Return the next element in the set. The default behavior is to return the very next element. The k option
can specify how many steps are taken to get the next element.
If the next element is beyond the end of the set, then an exception is raised.
nextw(match_element, k=1)
Return the next element in the set. The default behavior is to return the very next element. The k option
can specify how many steps are taken to get the next element.
If the next element goes beyond the end of the list of elements in the set, then this wraps around to the
beginning of the list.
ord(match_element)
Return the position index of the input value. The position indices start at 1.
parent_block()
Returns the parent of this object.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
prev(match_element, k=1)
Return the previous element in the set. The default behavior is to return the element immediately prior to
the specified element. The k option can specify how many steps are taken to get the previous element.
If the previous element is before the start of the set, then an exception is raised.
prevw(match_element, k=1)
Return the previous element in the set. The default behavior is to return the element immediately prior to
the specified element. The k option can specify how many steps are taken to get the previous element.
If the previous element is before the start of the set, then this wraps around to the end of the list.
reconstruct(data=None)
Re-construct model expressions
remove(element)
Remove an element from the set.
If the element is not a member, raise an error.
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(value)
Set the value of a scalar component.
symmetric_difference(*args)
Return the symmetric difference of this set with one or more sets
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return the component name
type()
Return the class type for this component
union(*args)
Return the union of this set with one or more sets.
valid_model_component()
Return True if this can be used as a model component.
values()
Return a list of the component data objects in the dictionary
pyomo.environ.Reference(reference, ctype=<object object>)
Creates a component that references other components
Reference generates a reference component; that is, an indexed component that does not contain data, but
instead references data stored in other components as defined by a component slice. The ctype parameter sets
the Component.type() of the resulting indexed component. If the ctype parameter is not set and all data
identified by the slice (at construction time) share a common Component.type(), then that type is assumed.
If either the ctype parameter is None or the data has more than one ctype, the resulting indexed component will
have a ctype of IndexedComponent.
If the indices associated with wildcards in the component slice all refer to the same Set objects for all data identified by the slice, then the resulting indexed component will be indexed by the product of those sets. However, if all data do not share common set objects, or only a subset of indices in a multidimensional set appear as wildcards, then the resulting indexed component will be indexed by a SetOf containing a _ReferenceSet for the slice.
Parameters
• reference – A component slice that defines the data to reference.
• ctype – The type used for the resulting indexed component (see the description above).
Examples
Reference components may have wildcards at multiple levels of the model hierarchy:
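The model construction for this example is not reproduced in this excerpt; a sketch consistent with the snippet below (the block and variable index sets are assumptions):

>>> from pyomo.environ import ConcreteModel, Block, Var, Reference
>>> m = ConcreteModel()
>>> def b_rule(b, i):
...     b.x = Var([3, 4])
...
>>> m.b = Block([1, 2], rule=b_rule)
>>> m.r3 = Reference(m.b[:].x[:])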
The resulting reference component may be used just like any other component. Changes to the stored data will
be reflected in the original objects:
>>> m.r3[1,4] = 10
>>> m.b[1].x.pprint()
x : Size=2, Index=b[1].x_index
active
Return the active attribute
clear()
Clear the data in this component
clear_suffix_value(suffix_or_name, expand=True)
Clear the suffix value for this component data
construct(data=None)
API definition for constructing components
dim()
Return the dimension of the index
get_suffix_value(suffix_or_name, default=None)
Get the suffix value for this component data
getname(fully_qualified=False, name_buffer=None, relative_to=None)
Returns the component name associated with this object.
Parameters
• fully_qualified – Generate the full name from nested block names
• name_buffer – Can be used to optimize iterative name generation (using a dictionary)
• relative_to – When generating a fully qualified name, stop at this block.
id_index_map()
Return a dictionary id->index for all ComponentData instances.
index_set()
Return the index set
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
Return True if this class has been constructed
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
local_name
Get the component name only within the context of the immediate parent container.
model()
Returns the model associated with this object.
name
Get the fully qualified component name.
parent_block()
Returns the parent of this object.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
reconstruct(data=None)
Re-construct model expressions
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(value)
Set the value of a scalar component.
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return the component name
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
values()
Return a list of the component data objects in the dictionary
class pyomo.environ.Var(*args, **kwd)
Bases: pyomo.core.base.indexed_component.IndexedComponent
A numeric variable, which may be defined over an index.
Parameters
• domain (Set or function, optional) – A Set that defines valid values for the
variable (e.g., Reals, NonNegativeReals, Binary), or a rule that returns Sets. Defaults to
Reals.
• within (Set or function, optional) – An alias for domain.
• bounds (tuple or function, optional) – A tuple of (lower, upper) bounds for
the variable, or a rule that returns tuples. Defaults to (None, None).
• initialize (float or function, optional) – The initial value for the vari-
able, or a rule that returns initial values.
• rule (float or function, optional) – An alias for initialize.
• dense (bool, optional) – Instantiate all elements from index_set() when constructing
the Var (True) or just the variables returned by initialize/rule (False). Defaults to True.
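A brief sketch of declaring an indexed variable (the names and data here are illustrative, not part of the original text):

from pyomo.environ import ConcreteModel, Var, NonNegativeReals

m = ConcreteModel()
m.x = Var([1, 2, 3],
          domain=NonNegativeReals,  # valid values for every element
          bounds=(0, 10),           # applied to every index
          initialize=1.0)           # starting value for every index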
active
Return the active attribute
add(index)
Add a variable with a particular index.
clear()
Clear the data in this component
clear_suffix_value(suffix_or_name, expand=True)
Clear the suffix value for this component data
construct(data=None)
Construct this component.
dim()
Return the dimension of the index
extract_values(include_fixed_values=True)
Return a dictionary of index-value pairs.
flag_as_stale()
Set the ‘stale’ attribute of every variable data object to True.
get_suffix_value(suffix_or_name, default=None)
Get the suffix value for this component data
get_values(include_fixed_values=True)
Return a dictionary of index-value pairs.
getname(fully_qualified=False, name_buffer=None, relative_to=None)
Returns the component name associated with this object.
Parameters
• fully_qualified – Generate the full name from nested block names
• name_buffer – Can be used to optimize iterative name generation (using a dictionary)
• relative_to – When generating a fully qualified name, stop at this block.
id_index_map()
Return a dictionary id->index for all ComponentData instances.
index_set()
Return the index set
is_component_type()
Return True if this class is a Pyomo component
is_constructed()
Return True if this class has been constructed
is_expression_type()
Returns False because this is not an expression
is_indexed()
Return true if this component is indexed
items()
Return a list of (index,data) tuples from the dictionary
iteritems()
Return an iterator of (index,data) tuples from the dictionary
iterkeys()
Return an iterator of the keys in the dictionary
itervalues()
Return an iterator of the component data objects in the dictionary
keys()
Return a list of keys in the dictionary
local_name
Get the component name only within the context of the immediate parent container.
model()
Returns the model associated with this object.
name
Get the fully qualified component name.
parent_block()
Returns the parent of this object.
parent_component()
Returns the component associated with this object.
pprint(ostream=None, verbose=False, prefix="")
Print component information
reconstruct(data=None)
Re-construct model expressions
root_block()
Return self.model()
set_suffix_value(suffix_or_name, value, expand=True)
Set the suffix value for this component data
set_value(value)
Set the value of a scalar component.
set_values(new_values, valid=False)
Set the values of a dictionary.
The default behavior is to validate the values in the dictionary.
to_dense_data()
TODO
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return the component name
type()
Return the class type for this component
valid_model_component()
Return True if this can be used as a model component.
values()
Return a list of the component data objects in the dictionary
pyomo.core.util.prod(terms)
A utility function to compute the product of a list of terms.
Parameters terms (list) – A list of terms that are multiplied together.
Returns The value of the product, which may be a Pyomo expression object.
pyomo.core.util.quicksum(args, start=0, linear=None)
A utility function to compute a sum of Pyomo expressions.
The behavior of quicksum() is similar to the builtin sum() function, but this function generates a more
compact Pyomo expression.
Parameters
• args – A generator for terms in the sum.
• start – A value that initializes the sum. If this value is not a numeric constant, then the += operator is used to add terms to this object. Defaults to zero.
• linear – If start is not a numeric constant, then this option is ignored. Otherwise, this
value indicates whether the terms in the sum are linear. If the value is False, then the terms
are treated as nonlinear, and if True, then the terms are treated as linear. Default is None,
which indicates that the first term in the args is used to determine this value.
Returns The value of the sum, which may be a Pyomo expression object.
pyomo.core.util.sum_product(*args, **kwds)
A utility function to compute a generalized dot product.
This function accepts one or more components that provide terms that are multiplied together. These products
are added together to form a sum.
Parameters
• *args – Variable length argument list of generators that create terms in the summation.
• **kwds – Arbitrary keyword arguments.
Keyword Arguments
• index – A set that is used to index the components used to create the terms
• denom – A component or tuple of components that are used to create the denominator of
the terms
• start – The initial value used in the sum
Returns The value of the sum.
pyomo.core.util.summation = <function sum_product>
An alias for sum_product
pyomo.core.util.dot_product = <function sum_product>
An alias for sum_product
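A brief sketch of these utilities on a small model (the names and data here are illustrative, not part of the original text):

from pyomo.core.util import prod, quicksum, sum_product
from pyomo.environ import ConcreteModel, Param, Var

m = ConcreteModel()
m.x = Var([1, 2, 3], initialize=2.0)
m.c = Param([1, 2, 3], initialize=1.5, mutable=True)

e1 = prod([m.x[i] for i in m.x])    # x[1]*x[2]*x[3]
e2 = quicksum(m.x[i] for i in m.x)  # compact sum of the variables
e3 = sum_product(m.c, m.x)          # sum over i of c[i]*x[i]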
Functions
• constant (bool) – If True, constant expressions are evaluated and returned but non-
constant expressions raise either FixedExpressionError or NonconstantExpressionError (de-
fault=False).
Returns A floating point value if the expression evaluates normally, or None if an exception occurs
and is caught.
pyomo.core.expr.current.identify_components(expr, component_types)
A generator that yields a sequence of nodes in an expression tree that belong to a specified set.
Parameters
• expr – The root node of an expression tree.
• component_types (set or list) – A set of class types that will be matched during
the search.
Yields Each node that is found.
pyomo.core.expr.current.identify_variables(expr, include_fixed=True)
A generator that yields a sequence of variables in an expression tree.
Parameters
• expr – The root node of an expression tree.
• include_fixed (bool) – If True, then this generator will yield variables whose value
is fixed. Defaults to True.
Yields Each variable that is found.
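A brief sketch of these generators (the names and data here are illustrative, not part of the original text):

from pyomo.core.expr import current as EXPR
from pyomo.environ import ConcreteModel, Var

m = ConcreteModel()
m.x = Var()
m.y = Var()
m.y.fix(1.0)

e = m.x**2 + 3*m.y
print([v.name for v in EXPR.identify_variables(e)])                       # x and y
print([v.name for v in EXPR.identify_variables(e, include_fixed=False)])  # x only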
pyomo.core.expr.differentiate(expr, wrt=None, wrt_list=None, mode=Modes.reverse_numeric)
Return derivative of expression.
This function returns the derivative of expr with respect to one or more variables. The type of the return value
depends on the arguments wrt, wrt_list, and mode. See below for details.
Parameters
• expr (pyomo.core.expr.numeric_expr.ExpressionBase) – The expression
to differentiate
• wrt (pyomo.core.base.var._GeneralVarData) – If specified, this function will
return the derivative with respect to wrt. wrt is normally a _GeneralVarData, but could also
be a _ParamData. wrt and wrt_list cannot both be specified.
• wrt_list (list of pyomo.core.base.var._GeneralVarData) – If speci-
fied, this function will return the derivative with respect to each element in wrt_list. A
list will be returned where the values are the derivatives with respect to the corresponding
entry in wrt_list.
• mode (pyomo.core.expr.calculus.derivatives.Modes) – Specifies the
method to use for differentiation. Should be one of the members of the Modes enum:
Modes.sympy: The pyomo expression will be converted to a sympy expression. Dif-
ferentiation will then be done with sympy, and the result will be converted back to a
pyomo expression. The sympy mode only does symbolic differentiation. The sympy
mode requires exactly one of wrt and wrt_list to be specified.
Modes.reverse_symbolic: Symbolic differentiation will be performed directly with
the pyomo expression in reverse mode. If neither wrt nor wrt_list are specified,
then a ComponentMap is returned where there will be a key for each node in the
expression tree, and the values will be the symbolic derivatives.
Modes.reverse_numeric: Numeric differentiation will be performed directly with the
pyomo expression in reverse mode. If neither wrt nor wrt_list are specified, then a
ComponentMap is returned where there will be a key for each node in the expression
tree, and the values will be the floating point values of the derivatives at the current
values of the variables.
Returns res – The value or expression of the derivative(s)
Return type float, ExpressionBase, ComponentMap, or list
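A brief sketch of both modes (the import paths follow the dotted names given in this reference and are assumptions; the model is illustrative):

from pyomo.core.expr import differentiate
from pyomo.core.expr.calculus.derivatives import Modes
from pyomo.environ import ConcreteModel, Var

m = ConcreteModel()
m.x = Var(initialize=2.0)
m.y = Var(initialize=3.0)
e = m.x**2 + m.x*m.y

# Symbolic derivative with respect to one variable: 2*x + y
d_sym = differentiate(e, wrt=m.x, mode=Modes.reverse_symbolic)

# Numeric derivatives with respect to a list of variables, at the current values
d_num = differentiate(e, wrt_list=[m.x, m.y], mode=Modes.reverse_numeric)  # [7.0, 2.0]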
Classes
class pyomo.core.expr.symbol_map.SymbolMap(labeler=None)
A class for tracking assigned labels for modeling components.
Symbol maps are used, for example, when writing problem files for input to an optimizer.
Warning: A symbol map should never be pickled. This class is typically constructed by solvers and writers,
and it may be owned by models.
byObject
maps (object id) to (string label)
Type dict
bySymbol
maps (string label) to (object weakref)
Type dict
alias
maps (string label) to (object weakref)
Type dict
default_labeler
used to compute a string label from an object
class pyomo.core.expr.current.nonlinear_expression
Context manager for mutable sums.
This context manager is used to compute a sum while treating the summation as a mutable object.
class pyomo.core.expr.current.linear_expression
Context manager for mutable linear sums.
This context manager is used to compute a linear sum while treating the summation as a mutable object.
class pyomo.core.expr.current.clone_counter
Context manager for counting cloning events.
This context manager counts the number of times that the clone_expression function is executed.
count
A property that returns the clone count value.
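A brief sketch of these context managers (the usage shown is an assumption based on the descriptions above; the model is illustrative):

from pyomo.core.expr import current as EXPR
from pyomo.environ import ConcreteModel, Var

m = ConcreteModel()
m.x = Var(range(5))

# Accumulate a linear sum without building a new expression per term
with EXPR.linear_expression() as e:
    for i in m.x:
        e += 2*m.x[i]

# Count the cloning events triggered while this block executes
with EXPR.clone_counter() as counter:
    e2 = EXPR.clone_expression(e)
    print(counter.count)   # number of clone operations performed so far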
The following sets can be used to develop visitor patterns for Pyomo expressions.
pyomo.core.expr.numvalue.native_numeric_types
A Python set used to identify numeric constants. This set includes native Python types (such as float and int) as well as numeric types from Python packages like numpy, which may be registered by users.
pyomo.core.expr.numvalue.native_types
A Python set used to identify numeric constants and related native types. This set includes native Python types as well as numeric types from Python packages like numpy.
native_types = native_numeric_types + { str }
pyomo.core.expr.numvalue.nonpyomo_leaf_types
A Python set used to identify numeric constants, boolean values, strings and instances of NonNumericValue, which is commonly used in code that walks Pyomo expression trees.
nonpyomo_leaf_types = native_types + { NonNumericValue }
class pyomo.core.expr.numvalue.NumericValue
This is the base class for numeric values used in Pyomo.
__abs__()
Absolute value
This method is called when Python processes the statement:
abs(self)
__add__(other)
Binary addition
This method is called when Python processes the statement:
self + other
__div__(other)
Binary division
This method is called when Python processes the statement:
self / other
__eq__(other)
Equal to operator
This method is called when Python processes the statement:
self == other
__float__()
Coerce the value to a floating point
Raises TypeError
__ge__(other)
Greater than or equal operator
This method is called when Python processes statements of the form:
self >= other
other <= self
__getstate__()
Prepare a picklable state of this instance for pickling.
Nominally, __getstate__() should execute the following:
However, in this case, the (nominal) parent class is ‘object’, and object does not implement __getstate__.
So, we will check to make sure that there is a base __getstate__() to call. You might think that there is
nothing to check, but multiple inheritance could mean that another class got stuck between this class and
“object” in the MRO.
Further, since there are actually no slots defined here, the real question is to either return an empty dict or
the parent’s dict.
__gt__(other)
Greater than operator
This method is called when Python processes statements of the form:
self > other
other < self
__iadd__(other)
Binary addition
This method is called when Python processes the statement:
self += other
__idiv__(other)
Binary division
This method is called when Python processes the statement:
self /= other
__imul__(other)
Binary multiplication
This method is called when Python processes the statement:
self *= other
__int__()
Coerce the value to an integer
Raises TypeError
__ipow__(other)
Binary power
This method is called when Python processes the statement:
self **= other
__isub__(other)
Binary subtraction
This method is called when Python processes the statement:
self -= other
__itruediv__(other)
Binary division (when __future__.division is in effect)
This method is called when Python processes the statement:
self /= other
__le__(other)
Less than or equal operator
This method is called when Python processes statements of the form:
self <= other
other >= self
__lt__(other)
Less than operator
This method is called when Python processes statements of the form:
self < other
other > self
__mul__(other)
Binary multiplication
This method is called when Python processes the statement:
self * other
__neg__()
Negation
This method is called when Python processes the statement:
- self
__pos__()
Positive expression
This method is called when Python processes the statement:
+ self
__pow__(other)
Binary power
This method is called when Python processes the statement:
self ** other
__radd__(other)
Binary addition
This method is called when Python processes the statement:
other + self
__rdiv__(other)
Binary division
This method is called when Python processes the statement:
other / self
__rmul__(other)
Binary multiplication
This method is called when Python processes the statement:
other * self
__rpow__(other)
Binary power
This method is called when Python processes the statement:
other ** self
__rsub__(other)
Binary subtraction
This method is called when Python processes the statement:
other - self
__rtruediv__(other)
Binary division (when __future__.division is in effect)
This method is called when Python processes the statement:
other / self
__setstate__(state)
Restore a pickled state into this instance
Our model for setstate is for derived classes to modify the state dictionary as control passes up the inheri-
tance hierarchy (using super() calls). All assignment of state -> object attributes is handled at the last class
before ‘object’, which may – or may not (thanks to MRO) – be here.
__sub__(other)
Binary subtraction
This method is called when Python processes the statement:
self - other
__truediv__(other)
Binary division (when __future__.division is in effect)
This method is called when Python processes the statement:
self / other
_compute_polynomial_degree(values)
Compute the polynomial degree of this expression given the degree values of its children.
Parameters values (list) – A list of values that indicate the degree of the children expres-
sion.
Returns None
getname(fully_qualified=False, name_buffer=None)
If this is a component, return the component’s name on the owning block; otherwise return the value
converted to a string
is_component_type()
Return True if this class is a Pyomo component
is_constant()
Return True if this numeric value is a constant value
is_expression_type()
Return True if this numeric value is an expression
is_fixed()
Return True if this is a non-constant value that has been fixed
is_indexed()
Return True if this numeric value is an indexed object
is_named_expression_type()
Return True if this numeric value is a named expression
is_parameter_type()
Return False unless this class is a parameter object
is_potentially_variable()
Return True if variables can appear in this expression
is_relational()
Return True if this numeric value represents a relational expression.
is_variable_type()
Return False unless this class is a variable object
polynomial_degree()
Return the polynomial degree of the expression.
Returns None
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return a string representation of the expression tree.
Parameters
• verbose (bool) – If True, then the string representation consists of nested functions. Otherwise, the string representation is an algebraic equation. Defaults to False.
• labeler – An object that generates string labels for variables in the expression tree.
Defaults to None.
Returns A string representation for the expression tree.
class pyomo.core.expr.current.ExpressionBase(args)
Bases: pyomo.core.expr.numvalue.NumericValue
The base class for Pyomo expressions.
This class is used to define nodes in an expression tree.
Parameters args (list or tuple) – Children of this node.
__bool__()
Compute the value of the expression and convert it to a boolean.
Returns A boolean value.
__call__(exception=True)
Evaluate the value of the expression tree.
Parameters exception (bool) – If False, then an exception raised while evaluating is
captured, and the value returned is None. Default is True.
Returns The value of the expression or None.
__getstate__()
Pickle the expression object
Returns The pickled state.
__init__(args)
x.__init__(...) initializes x; see help(type(x)) for signature
__nonzero__()
Compute the value of the expression and convert it to a boolean.
Returns A boolean value.
__str__()
Returns a string description of the expression. By default this is an algebraic description of the expression; a verbose form gives a nested-function description instead.
Returns A string.
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those values must be evaluated as part of this function call. An uninitialized parameter value encountered during the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the value of the children expres-
sions.
Returns A floating point value for this expression.
_associativity()
Return the associativity of this operator.
Returns 1 if this operator is left-to-right associative or -1 if it is right-to-left associative. Any other return
value will be interpreted as “not associative” (implying any arguments that are at this operator’s _prece-
dence() will be enclosed in parens).
_compute_polynomial_degree(values)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expres-
sion classes to customize this logic.
Parameters values (list) – A list of values that indicate the degree of the children expres-
sion.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_is_fixed(values)
Compute whether this expression is fixed given the fixed values of its children.
This method is called by the _IsFixedVisitor class. It can be over-written by expression classes to
customize this logic.
Parameters values (list) – A list of boolean values that indicate whether the children of
this expression are fixed
Returns A boolean that is True if the fixed values of the children are all True.
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions. Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
arg(i)
Return the i-th child node.
Parameters i (int) – Nonnegative index of the child that is returned.
Returns The i-th child node.
args
Return the child nodes
Returns: Either a list or tuple (depending on the node storage model) containing only the child nodes
of this node
clone(substitute=None)
Return a clone of the expression tree.
Note: This method does not clone the leaves of the tree, which are numeric constants and variables. It only
clones the interior nodes, and expression leaf nodes like _MutableLinearExpression. However,
named expressions are treated like leaves, and they are not cloned.
Parameters substitute (dict) – a dictionary that maps object ids to clone objects gener-
ated earlier during the cloning process.
Returns A new expression tree.
create_node_with_local_data(args)
Construct a node using given arguments.
This method provides a consistent interface for constructing a node, which is used in tree visitor scripts.
In the simplest case, this simply returns:
self.__class__(args)
But in general this creates an expression object using local data as well as arguments that represent the
child nodes.
Parameters
• args (list) – A list of child nodes for the new expression object
• memo (dict) – A dictionary that maps object ids to clone objects generated earlier during
a cloning process. This argument is needed to clone objects that are owned by a model,
and it can be safely ignored for most expression classes.
Returns A new expression object with the same type as the current class.
create_potentially_variable_object()
Create a potentially variable version of this object.
This method returns an object that is a potentially variable version of the current object. In the simplest
case, this simply sets the value of __class__:
self.__class__ = self.__class__.__mro__[1]
Note that this method is allowed to modify the current object and return it. But in some cases it may create
a new potentially variable object.
Returns An object that is potentially variable.
getname(*args, **kwds)
Return the text name of a function associated with this expression object.
In general, no arguments are passed to this function.
Parameters
• *arg – a variable length list of arguments
• **kwds – keyword arguments
Returns A string name for the function.
is_constant()
Return True if this expression is an atomic constant
This method contrasts with the is_fixed() method. This method returns True if the expression is an atomic
constant, that is it is composed exclusively of constants and immutable parameters. NumericValue objects
returning is_constant() == True may be simplified to their numeric value at any point without warning.
Note: This defaults to False, but gets redefined in sub-classes.
is_expression_type()
Return True if this object is an expression.
This method obviously returns True for this class, but it is included in other classes within Pyomo that
are not expressions, which allows for a check for expressions without evaluating the class type.
Returns A boolean.
is_fixed()
Return True if this expression contains no free variables.
Returns A boolean.
is_named_expression_type()
Return True if this object is a named expression.
This method returns False for this class, and it is included in other classes within Pyomo that are not
named expressions, which allows for a check for named expressions without evaluating the class type.
Returns A boolean.
is_potentially_variable()
Return True if this expression might represent a variable expression.
This method returns True when (a) the expression tree contains one or more variables, or (b) the expres-
sion tree contains a named expression. In both cases, the expression cannot be treated as constant since
(a) the variables may not be fixed, or (b) the named expressions may be changed at a later time to include
non-fixed variables.
Returns A boolean. Defaults to True for expressions.
nargs()
Returns the number of child nodes.
By default, Pyomo expressions represent binary operations with two arguments.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
polynomial_degree()
Return the polynomial degree of the expression.
Returns A non-negative integer that is the polynomial degree if the expression is polynomial, or
None otherwise.
size()
Return the number of nodes in the expression tree.
Returns A nonnegative integer that is the number of interior and leaf nodes in the expression
tree.
to_string(verbose=None, labeler=None, smap=None, compute_values=False)
Return a string representation of the expression tree.
Parameters
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation. Defaults to False.
• labeler – An object that generates string labels for variables in the expression tree.
Defaults to None.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated. Default is False.
Returns A string representation for the expression tree.
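As a brief illustration of these methods, here is a minimal sketch assuming a ConcreteModel with two variables; the exact strings produced by to_string() may differ slightly between Pyomo versions:
import pyomo.environ as pe

m = pe.ConcreteModel()
m.x = pe.Var()
m.y = pe.Var()
e = 2*m.x + m.y**2

print(e.polynomial_degree())       # 2
print(e.size())                    # number of interior and leaf nodes in the tree
print(e.to_string())               # algebraic form, e.g. 2*x + y**2
print(e.to_string(verbose=True))   # nested functional form of the same expression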
class pyomo.core.expr.current.NegationExpression(args)
Bases: pyomo.core.expr.numeric_expr.ExpressionBase
Negation expressions:
- x
PRECEDENCE = 4
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_compute_polynomial_degree(result)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expression
classes to customize this logic.
Parameters values (list) – A list of values that indicate the degrees of the child expressions.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_precedence()
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
getname(*args, **kwds)
Return the text name of a function associated with this expression object.
In general, no arguments are passed to this function.
Parameters
• *arg – a variable length list of arguments
• **kwds – keyword arguments
Returns A string name for the function.
nargs()
Returns the number of child nodes.
By default, Pyomo expressions represent binary operations with two arguments.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
model = ConcreteModel()
model.a = Var()
model.f = ExternalFunction(library='foo.so', function='bar')
expr = model.f(model.a)
Parameters
• args (tuple) – children of this node
• fcn – a class that defines this external function
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_compute_polynomial_degree(result)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expression
classes to customize this logic.
Parameters values (list) – A list of values that indicate the degrees of the child expressions.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_fcn
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
self.__class__(args)
But in general this creates an expression object using local data as well as arguments that represent the
child nodes.
Parameters
• args (list) – A list of child nodes for the new expression object
• memo (dict) – A dictionary that maps object ids to clone objects generated earlier during
a cloning process. This argument is needed to clone objects that are owned by a model,
and it can be safely ignored for most expression classes.
Returns A new expression object with the same type as the current class.
getname(*args, **kwds)
Return the text name of a function associated with this expression object.
In general, no arguments are passed to this function.
Parameters
• *arg – a variable length list of arguments
• **kwds – keyword arguments
Returns A string name for the function.
nargs()
Returns the number of child nodes.
By default, Pyomo expressions represent binary operations with two arguments.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
class pyomo.core.expr.current.ProductExpression(args)
Bases: pyomo.core.expr.numeric_expr.ExpressionBase
Product expressions:
x*y
PRECEDENCE = 4
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_compute_polynomial_degree(result)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expression
classes to customize this logic.
Parameters values (list) – A list of values that indicate the degrees of the child expressions.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_is_fixed(args)
Compute whether this expression is fixed given the fixed values of its children.
This method is called by the _IsFixedVisitor class. It can be over-written by expression classes to
customize this logic.
Parameters values (list) – A list of boolean values that indicate whether the children of
this expression are fixed
Returns A boolean that is True if the fixed values of the children are all True.
_precedence()
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
getname(*args, **kwds)
Return the text name of a function associated with this expression object.
In general, no arguments are passed to this function.
Parameters
• *arg – a variable length list of arguments
• **kwds – keyword arguments
Returns A string name for the function.
class pyomo.core.expr.current.ReciprocalExpression(**kwargs)
Bases: pyomo.core.expr.numeric_expr.ExpressionBase
Reciprocal expressions:
1/x
PRECEDENCE = 4
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_associativity()
Return the associativity of this operator.
Returns 1 if this operator is left-to-right associative or -1 if it is right-to-left associative. Any other return
value will be interpreted as “not associative” (implying any arguments that are at this operator’s
_precedence() will be enclosed in parens).
_compute_polynomial_degree(result)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expression
classes to customize this logic.
Parameters values (list) – A list of values that indicate the degrees of the child expressions.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_precedence()
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
x < y
x <= y
Parameters
• args (tuple) – child nodes
• strict (bool) – a flag that indicates whether the inequality is strict
PRECEDENCE = 9
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_precedence()
_strict
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
create_node_with_local_data(args)
Construct a node using given arguments.
This method provides a consistent interface for constructing a node, which is used in tree visitor scripts.
In the simplest case, this simply returns:
self.__class__(args)
But in general this creates an expression object using local data as well as arguments that represent the
child nodes.
Parameters
• args (list) – A list of child nodes for the new expression object
• memo (dict) – A dictionary that maps object ids to clone objects generated earlier during
a cloning process. This argument is needed to clone objects that are owned by a model,
and it can be safely ignored for most expression classes.
Returns A new expression object with the same type as the current class.
is_constant()
Return True if this expression is an atomic constant
This method contrasts with the is_fixed() method. This method returns True if the expression is an atomic
constant, that is it is composed exclusively of constants and immutable parameters. NumericValue objects
returning is_constant() == True may be simplified to their numeric value at any point without warning.
Note: This defaults to False, but gets redefined in sub-classes.
is_potentially_variable()
Return True if this expression might represent a variable expression.
This method returns True when (a) the expression tree contains one or more variables, or (b) the expres-
sion tree contains a named expression. In both cases, the expression cannot be treated as constant since
(a) the variables may not be fixed, or (b) the named expressions may be changed at a later time to include
non-fixed variables.
Returns A boolean. Defaults to True for expressions.
is_relational()
Return True if this numeric value represents a relational expression.
nargs()
Returns the number of child nodes.
By default, Pyomo expressions represent binary operations with two arguments.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
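As a brief illustration of the relational expressions described above, here is a minimal sketch; the printed class name reflects the behavior of this Pyomo release and may vary slightly in other versions:
import pyomo.environ as pe

m = pe.ConcreteModel()
m.x = pe.Var()
e = m.x <= 3                 # builds a relational expression; nothing is evaluated here
print(type(e).__name__)      # InequalityExpression
print(e.nargs())             # 2
print(e.is_relational())     # True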
class pyomo.core.expr.current.EqualityExpression(args)
Bases: pyomo.core.expr.numeric_expr._LinearOperatorExpression
Equality expression:
x == y
PRECEDENCE = 9
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_precedence()
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
class pyomo.core.expr.current.SumExpression(args)
Bases: pyomo.core.expr.numeric_expr.SumExpressionBase
Sum expression:
x + y
PRECEDENCE = 6
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_nargs
_precedence()
_shared_args
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
add(new_arg)
create_node_with_local_data(args)
Construct a node using given arguments.
This method provides a consistent interface for constructing a node, which is used in tree visitor scripts.
In the simplest case, this simply returns:
self.__class__(args)
But in general this creates an expression object using local data as well as arguments that represent the
child nodes.
Parameters
• args (list) – A list of child nodes for the new expression object
• memo (dict) – A dictionary that maps object ids to clone objects generated earlier during
a cloning process. This argument is needed to clone objects that are owned by a model,
and it can be safely ignored for most expression classes.
Returns A new expression object with the same type as the current class.
is_constant()
Return True if this expression is an atomic constant
This method contrasts with the is_fixed() method. This method returns True if the expression is an atomic
constant, that is it is composed exclusively of constants and immutable parameters. NumericValue objects
returning is_constant() == True may be simplified to their numeric value at any point without warning.
Note: This defaults to False, but gets redefined in sub-classes.
is_potentially_variable()
Return True if this expression might represent a variable expression.
This method returns True when (a) the expression tree contains one or more variables, or (b) the expres-
sion tree contains a named expression. In both cases, the expression cannot be treated as constant since
(a) the variables may not be fixed, or (b) the named expressions may be changed at a later time to include
non-fixed variables.
Returns A boolean. Defaults to True for expressions.
nargs()
Returns the number of child nodes.
By default, Pyomo expressions represent binary operations with two arguments.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_base
_compute_polynomial_degree(result)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expression
classes to customize this logic.
Parameters values (list) – A list of values that indicate the degrees of the child expressions.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_is_fixed(values)
Compute whether this expression is fixed given the fixed values of its children.
This method is called by the _IsFixedVisitor class. It can be over-written by expression classes to
customize this logic.
Parameters values (list) – A list of boolean values that indicate whether the children of
this expression are fixed
Returns A boolean that is True if the fixed values of the children are all True.
_precedence()
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
create_node_with_local_data(args)
Construct a node using given arguments.
This method provides a consistent interface for constructing a node, which is used in tree visitor scripts.
In the simplest case, this simply returns:
self.__class__(args)
But in general this creates an expression object using local data as well as arguments that represent the
child nodes.
Parameters
• args (list) – A list of child nodes for the new expression object
• memo (dict) – A dictionary that maps object ids to clone objects generated earlier during
a cloning process. This argument is needed to clone objects that are owned by a model,
and it can be safely ignored for most expression classes.
Returns A new expression object with the same type as the current class.
getname(*args, **kwds)
Return the text name of a function associated with this expression object.
In general, no arguments are passed to this function.
Parameters
• *arg – a variable length list of arguments
• **kwds – keyword arguments
Returns A string name for the function.
is_fixed()
Return True if this expression contains no free variables.
Returns A boolean.
is_potentially_variable()
Return True if this expression might represent a variable expression.
This method returns True when (a) the expression tree contains one or more variables, or (b) the expres-
sion tree contains a named expression. In both cases, the expression cannot be treated as constant since
(a) the variables may not be fixed, or (b) the named expressions may be changed at a later time to include
non-fixed variables.
Returns A boolean. Defaults to True for expressions.
nargs()
Returns the number of child nodes.
By default, Pyomo expressions represent binary operations with two arguments.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
resolve_template()
class pyomo.core.expr.current.Expr_ifExpression(IF_=None, THEN_=None,
ELSE_=None)
Bases: pyomo.core.expr.numeric_expr.ExpressionBase
A logical if-then-else expression:
Parameters
• IF (expression) – A relational expression
• THEN (expression) – An expression that is used if IF_ is true.
• ELSE (expression) – An expression that is used if IF_ is false.
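A minimal usage sketch follows; it assumes the Expr_if factory is importable from pyomo.environ, which is how this expression class is normally constructed:
from pyomo.environ import ConcreteModel, Var, Expr_if, value

m = ConcreteModel()
m.x = Var(initialize=3.0)
# x**2 when x >= 1, otherwise x
e = Expr_if(IF=m.x >= 1, THEN=m.x**2, ELSE=m.x)
print(value(e))   # 9.0, since x = 3 satisfies the IF condition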
_apply_operation(result)
Compute the values of this node given the values of its children.
This method is called by the _EvaluationVisitor class. It must be over-written by expression classes
to customize this logic.
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_compute_polynomial_degree(result)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expression
classes to customize this logic.
Parameters values (list) – A list of values that indicate the degrees of the child expressions.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_else
_if
_is_fixed(args)
Compute whether this expression is fixed given the fixed values of its children.
This method is called by the _IsFixedVisitor class. It can be over-written by expression classes to
customize this logic.
Parameters values (list) – A list of boolean values that indicate whether the children of
this expression are fixed
Returns A boolean that is True if the fixed values of the children are all True.
_then
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
getname(*args, **kwds)
Return the text name of a function associated with this expression object.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
Note: This method applies the logical operation of the operator to the arguments. It does not evaluate the
arguments in the process, but assumes that they have been previously evaluated. Note, however, that if this class
contains auxiliary data (e.g., the numeric coefficients in the LinearExpression class), then those
values must be evaluated as part of this function call. An uninitialized parameter value encountered during
the execution of this method is considered an error.
Parameters values (list) – A list of values that indicate the values of the child expressions.
Returns A floating point value for this expression.
_compute_polynomial_degree(result)
Compute the polynomial degree of this expression given the degree values of its children.
This method is called by the _PolynomialDegreeVisitor class. It can be over-written by expression
classes to customize this logic.
Parameters values (list) – A list of values that indicate the degrees of the child expressions.
Returns A nonnegative integer that is the polynomial degree of the expression, or None. Default
is None.
_fcn
_name
_to_string(values, verbose, smap, compute_values)
Construct a string representation for this node, using the string representations of its children.
This method is called by the _ToStringVisitor class. It must be defined in subclasses.
Parameters
• values (list) – The string representations of the children of this node.
• verbose (bool) – If True, then the string representation consists of nested functions.
Otherwise, the string representation is an algebraic equation.
• smap – If specified, this SymbolMap is used to cache labels for variables.
• compute_values (bool) – If True, then parameters and fixed variables are evaluated
before the expression string is generated.
Returns A string representation for this node.
create_node_with_local_data(args)
Construct a node using given arguments.
This method provides a consistent interface for constructing a node, which is used in tree visitor scripts.
In the simplest case, this simply returns:
self.__class__(args)
But in general this creates an expression object using local data as well as arguments that represent the
child nodes.
Parameters
• args (list) – A list of child nodes for the new expression object
• memo (dict) – A dictionary that maps object ids to clone objects generated earlier during
a cloning process. This argument is needed to clone objects that are owned by a model,
and it can be safely ignored for most expression classes.
Returns A new expression object with the same type as the current class.
getname(*args, **kwds)
Return the text name of a function associated with this expression object.
In general, no arguments are passed to this function.
Parameters
• *arg – a variable length list of arguments
• **kwds – keyword arguments
Returns A string name for the function.
nargs()
Returns the number of child nodes.
By default, Pyomo expressions represent binary operations with two arguments.
Note: This function does not simply compute the length of _args_ because some expression classes use
a subset of the _args_ array. Thus, it is imperative that developers use this method!
class pyomo.core.expr.current.AbsExpression(arg)
Bases: pyomo.core.expr.numeric_expr.UnaryFunctionExpression
An expression object for the abs() function.
Parameters args (tuple) – Children nodes
create_node_with_local_data(args)
Construct a node using given arguments.
This method provides a consistent interface for constructing a node, which is used in tree visitor scripts.
In the simplest case, this simply returns:
self.__class__(args)
But in general this creates an expression object using local data as well as arguments that represent the
child nodes.
Parameters
• args (list) – A list of child nodes for the new expression object
• memo (dict) – A dictionary that maps object ids to clone objects generated earlier during
a cloning process. This argument is needed to clone objects that are owned by a model,
and it can be safely ignored for most expression classes.
Returns A new expression object with the same type as the current class.
class pyomo.core.expr.current.SimpleExpressionVisitor
Note: This class is a customization of the PyUtilib SimpleVisitor class that is tailored to efficiently walk
Pyomo expression trees. However, this class is not a subclass of the PyUtilib SimpleVisitor class because
all key methods are reimplemented.
finalize()
Return the “final value” of the search.
The default implementation returns None, because the traditional visitor pattern does not return a value.
Returns The final value after the search. Default is None.
visit(node)
Visit a node in an expression tree and perform some operation on it.
This method should be over-written by a user that is creating a sub-class.
Parameters node – a node in an expression tree
Returns nothing
xbfs(node)
Breadth-first search of an expression tree, except that leaf nodes are immediately visited.
Note: This method has the same functionality as the PyUtilib SimpleVisitor.xbfs method. The
difference is that this method is tailored to efficiently walk Pyomo expression trees.
Parameters node – The root node of the expression tree that is searched.
Returns The return value is determined by the finalize() function, which may be defined
by the user. Defaults to None.
xbfs_yield_leaves(node)
Breadth-first search of an expression tree, except that leaf nodes are immediately visited.
Note: This method has the same functionality as the PyUtilib SimpleVisitor.
xbfs_yield_leaves method. The difference is that this method is tailored to efficiently walk
Pyomo expression trees.
Parameters node – The root node of the expression tree that is searched.
Returns The return value is determined by the finalize() function, which may be defined
by the user. Defaults to None.
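A minimal sketch of a SimpleExpressionVisitor subclass that counts the nodes of an expression tree using xbfs(); the class and variable names here are illustrative, not part of the Pyomo API:
import pyomo.environ as pe
from pyomo.core.expr import current as EXPR

class NodeCounter(EXPR.SimpleExpressionVisitor):
    def __init__(self):
        self.count = 0

    def visit(self, node):
        # called once for every node encountered during the walk
        self.count += 1

    def finalize(self):
        # the value returned by xbfs()
        return self.count

m = pe.ConcreteModel()
m.x = pe.Var()
m.y = pe.Var()
counter = NodeCounter()
print(counter.xbfs(2*m.x + m.y))   # total number of nodes visited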
class pyomo.core.expr.current.ExpressionValueVisitor
Note: This class is a customization of the PyUtilib ValueVisitor class that is tailored to efficiently walk
Pyomo expression trees. However, this class is not a subclass of the PyUtilib ValueVisitor class because
all key methods are reimplemented.
dfs_postorder_stack(node)
Perform a depth-first search in postorder using a stack implementation.
Note: This method has the same functionality as the PyUtilib ValueVisitor.
dfs_postorder_stack method. The difference is that this method is tailored to efficiently
walk Pyomo expression trees.
Parameters node – The root node of the expression tree that is searched.
Returns The return value is determined by the finalize() function, which may be defined
by the user.
finalize(ans)
This method defines the return value for the search methods in this class.
The default implementation returns the value of the initial node (aka the root node), because this visitor
pattern computes and returns a value for each node to enable the computation of this value.
Parameters ans – The final value computed by the search method.
Returns The final value after the search. Defaults to simply returning ans.
visit(node, values)
Visit a node in a tree and compute its value using the values of its children.
This method should be over-written by a user that is creating a sub-class.
Parameters
• node – a node in a tree
• values – a list of values of this node’s children
Returns The value for this node, which is computed using values
visiting_potential_leaf(node)
Visit a node and return its value if it is a leaf.
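A minimal sketch of an ExpressionValueVisitor subclass that evaluates an expression via dfs_postorder_stack(); it assumes visiting_potential_leaf() follows the (flag, value) convention, with flag True when the node is a leaf whose value is returned directly:
import pyomo.environ as pe
from pyomo.core.expr import current as EXPR

class Evaluator(EXPR.ExpressionValueVisitor):
    def visit(self, node, values):
        # combine the already-computed child values at an interior node
        return node._apply_operation(values)

    def visiting_potential_leaf(self, node):
        if node.__class__ in (int, float):
            return True, node
        if not node.is_expression_type():
            # variables and parameters: return their current value
            return True, pe.value(node)
        return False, None

m = pe.ConcreteModel()
m.x = pe.Var(initialize=2.0)
print(Evaluator().dfs_postorder_stack(2*m.x + 1))   # 5.0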
Note: This class is a customization of the PyUtilib ValueVisitor class that is tailored to support
replacement of sub-trees in a Pyomo expression tree. However, this class is not a subclass of the PyUtilib
ValueVisitor class because all key methods are reimplemented.
construct_node(node, values)
Call the expression create_node_with_local_data() method.
dfs_postorder_stack(node)
Perform a depth-first search in postorder using a stack implementation.
This method replaces subtrees. This method detects if the visit() method returns a different object. If
so, then the node has been replaced and the search process is adapted to replace all subsequent parent nodes
in the tree.
Note: This method has the same functionality as the PyUtilib ValueVisitor.
dfs_postorder_stack method that is tailored to support the replacement of sub-trees in a
Pyomo expression tree.
Parameters node – The root node of the expression tree that is searched.
Returns The return value is determined by the finalize() function, which may be defined
by the user.
finalize(ans)
This method defines the return value for the search methods in this class.
The default implementation returns the value of the initial node (aka the root node), because this visitor
pattern computes and returns a value for each node to enable the computation of this value.
Parameters ans – The final value computed by the search method.
Returns The final value after the search. Defaults to simply returning ans.
visit(node, values)
Visit and clone nodes that have been expanded.
Parameters
• node – The node that will be cloned.
• values (list) – The list of child nodes that have been cloned. These values are used to
define the cloned node.
Returns The cloned node. Default is to simply return the node.
visiting_potential_leaf(node)
Visit a node and return a cloned node if it is a leaf.
13.3.1 GAMS
GAMSShell Solver
class pyomo.solvers.plugins.solvers.GAMS.GAMSShell(**kwds)
A generic shell interface to GAMS solvers.
available(exception_flag=True)
True if the solver is available.
executable()
Returns the executable used by this solver.
solve(*args, **kwds)
Solve a model via the GAMS executable.
Keyword Arguments
• tee=False (bool) – Output GAMS log to stdout.
• logfile=None (str) – Filename to output GAMS log to a file.
• load_solutions=True (bool) – Load solution into model. If False, the results object
will contain the solution data.
• keepfiles=False (bool) – Keep temporary files.
• tmpdir=None (str) – Specify directory path for storing temporary files. A directory
will be created if one with this name doesn’t exist. By default, the system default temporary
path is used.
• report_timing=False (bool) – Print timing reports for presolve, solver, postsolve,
etc.
• io_options (dict) – Options that get passed to the writer. See the writer in
pyomo.repn.plugins.gams_writer for details. Updated with any other keywords passed to the
solve method. Note: put_results is not available for modification on the GAMSShell solver.
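A minimal usage sketch follows; it assumes the gams executable is on the system PATH, and the keyword values shown are only illustrative:
import pyomo.environ as pe

m = pe.ConcreteModel()
m.x = pe.Var(bounds=(0, 10))
m.obj = pe.Objective(expr=(m.x - 3)**2)

opt = pe.SolverFactory('gams')          # shell interface to GAMS
results = opt.solve(m, tee=True,        # echo the GAMS log to stdout
                    keepfiles=False,
                    io_options={'add_options': ['option reslim=60;']})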
GAMSDirect Solver
class pyomo.solvers.plugins.solvers.GAMS.GAMSDirect(**kwds)
A generic python interface to GAMS solvers.
Visit Python API page on gams.com for installation help.
available(exception_flag=True)
True if the solver is available.
solve(*args, **kwds)
Solve a model via the GAMS Python API.
Keyword Arguments
• tee=False (bool) – Output GAMS log to stdout.
• logfile=None (str) – Filename to output GAMS log to a file.
• load_solutions=True (bool) – Load solution into model. If False, the results object
will contain the solution data.
• keepfiles=False (bool) – Keep temporary files. Equivalent of DebugLevel.KeepFiles.
A summary of the temp files can be found in _gams_py_gjo0.pf.
• tmpdir=None (str) – Specify directory path for storing temporary files. A directory
will be created if one with this name doesn’t exist. By default, the system default temporary
path is used.
• report_timing=False (bool) – Print timing reports for presolve, solver, postsolve,
etc.
• io_options (dict) – Options that get passed to the writer. See the writer in
pyomo.repn.plugins.gams_writer for details. Updated with any other keywords passed to the
solve method.
GAMS Writer
This class is most commonly accessed and called upon via model.write("filename.gms", ...), but is also utilized by
the GAMS solver interfaces.
class pyomo.repn.plugins.gams_writer.ProblemWriter_gams
– add_options=None List of additional lines to write directly into model file before the
solve statement. For model attributes, <model name> is GAMS_MODEL.
– skip_trivial_constraints=False Skip writing constraints whose body section is fixed.
– file_determinism=1
How much effort do we want to put into ensuring the
GAMS file is written deterministically for a Pyomo model:
0 : None
1 : sort keys of indexed components (default)
2 : sort keys AND sort names (over declaration order)
– put_results=None Filename for optionally writing solution values and marginals to
(put_results).dat, and solver statuses to (put_results + ‘stat’).dat.
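For example, here is a short sketch of writing a GAMS file directly with some of the writer options listed above; the model and option values are illustrative:
import pyomo.environ as pe

m = pe.ConcreteModel()
m.x = pe.Var(bounds=(0, 1))
m.obj = pe.Objective(expr=m.x)

# write a GAMS file, sorting keys and names so the output is deterministic
m.write("model.gms",
        io_options={'file_determinism': 2,
                    'skip_trivial_constraints': True})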
13.3.2 CPLEXPersistent
class pyomo.solvers.plugins.solvers.cplex_persistent.CPLEXPersistent(**kwds)
Bases: pyomo.solvers.plugins.solvers.persistent_solver.PersistentSolver,
pyomo.solvers.plugins.solvers.cplex_direct.CPLEXDirect
A class that provides a persistent interface to Cplex. Direct solver interfaces do not use any file io. Rather, they
interface directly with the python bindings for the specific solver. Persistent solver interfaces are similar except
that they “remember” their model. Thus, persistent solver interfaces allow incremental changes to the solver
model (e.g., the gurobi python model or the cplex python model). Note that users are responsible for notifying
the persistent solver interfaces when changes are made to the corresponding pyomo model.
Keyword Arguments
• model (ConcreteModel) – Passing a model to the constructor is equivalent to calling
the set_instance method.
• type (str) – String indicating the class type of the solver instance.
• name (str) – String representing either the class type of the solver instance or an
assigned name.
• doc (str) – Documentation for the solver
• options (dict) – Dictionary of solver options
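A minimal workflow sketch follows; it assumes the CPLEX Python bindings are installed, and the model and constraint names are illustrative:
import pyomo.environ as pe

m = pe.ConcreteModel()
m.x = pe.Var(within=pe.NonNegativeReals)
m.y = pe.Var(within=pe.NonNegativeReals)
m.obj = pe.Objective(expr=m.x + 2*m.y)
m.c1 = pe.Constraint(expr=m.x + m.y >= 1)

opt = pe.SolverFactory('cplex_persistent')
opt.set_instance(m)          # build the CPLEX model from the Pyomo model
opt.solve()

# make an incremental change and notify the persistent interface
m.c2 = pe.Constraint(expr=m.x >= 0.5)
opt.add_constraint(m.c2)
opt.solve()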
add_block(block)
Add a single Pyomo Block to the solver’s model.
This will keep any existing model components intact.
Parameters block (Block (scalar Block or single _BlockData)) –
add_constraint(con)
Add a single constraint to the solver’s model.
This will keep any existing model components intact.
Parameters con (Constraint (scalar Constraint or single
_ConstraintData)) –
add_sos_constraint(con)
Add a single SOS constraint to the solver’s model (if supported).
This will keep any existing model components intact.
Parameters con (SOSConstraint) –
add_var(var)
Add a single variable to the solver’s model.
This will keep any existing model components intact.
Parameters var (Var) –
available(exception_flag=True)
True if the solver is available.
has_capability(cap)
Returns a boolean value representing whether a solver supports a specific feature. Defaults to ‘False’ if
the solver is unaware of an option. Expects a string.
Example:
# prints True if solver supports sos1 constraints, and False otherwise
print(solver.has_capability('sos1'))
# prints True if solver supports 'feature', and False otherwise
print(solver.has_capability('feature'))
Parameters cap (str) – The feature
Returns val – Whether or not the solver has the specified capability.
Return type bool
has_instance()
True if set_instance has been called and this solver interface has a pyomo model and a solver model.
Returns True if set_instance has been called.
Return type bool
load_duals(cons_to_load=None)
Load the duals into the ‘dual’ suffix. The ‘dual’ suffix must live on the parent model.
Parameters cons_to_load (list of Constraint) –
load_rc(vars_to_load)
Load the reduced costs into the ‘rc’ suffix. The ‘rc’ suffix must live on the parent model.
Parameters vars_to_load (list of Var) –
load_slacks(cons_to_load=None)
Load the values of the slack variables into the ‘slack’ suffix. The ‘slack’ suffix must live on the parent
model.
Parameters cons_to_load (list of Constraint) –
load_vars(vars_to_load=None)
Load the values from the solver’s variables into the corresponding pyomo variables.
Parameters vars_to_load (list of Var) –
problem_format()
Returns the current problem format.
remove_block(block)
Remove a single block from the solver’s model.
This will keep any other model components intact.
WARNING: Users must call remove_block BEFORE modifying the block.
Parameters block (Block (scalar Block or a single _BlockData)) –
remove_constraint(con)
Remove a single constraint from the solver’s model.
This will keep any other model components intact.
Parameters con (Constraint (scalar Constraint or single
_ConstraintData)) –
remove_sos_constraint(con)
Remove a single SOS constraint from the solver’s model.
This will keep any other model components intact.
Parameters con (SOSConstraint) –
remove_var(var)
Remove a single variable from the solver’s model.
This will keep any other model components intact.
Parameters var (Var (scalar Var or single _VarData)) –
reset()
Reset the state of the solver
results_format()
Returns the current results format.
set_callback(name, callback_fn=None)
Set the callback function for a named callback.
A call-back function has the form:
def fn(solver, model): pass
where ‘solver’ is the native solver interface object and ‘model’ is a Pyomo model instance object.
set_instance(model, **kwds)
This method is used to translate the Pyomo model provided to an instance of the solver’s Python model.
This discards any existing model and starts from scratch.
Parameters model (ConcreteModel) – The pyomo model to be used with the solver.
Keyword Arguments
• symbolic_solver_labels (bool) – If True, the solver’s components (e.g.,
variables, constraints) will be given names that correspond to the Pyomo component names.
• skip_trivial_constraints (bool) – If True, then any constraints with
a constant body will not be added to the solver model. Be careful with this.
If a trivial constraint is skipped then that constraint cannot be removed from a
persistent solver (an error will be raised if a user tries to remove a non-existent
constraint).
• output_fixed_variable_bounds (bool) – If False then an error will be
raised if a fixed variable is used in one of the solver constraints. This is useful
for catching bugs. Ordinarily a fixed variable should appear as a constant value
in the solver constraints. If True, then the error will not be raised.
set_objective(obj)
Set the solver’s objective. Note that, at least for now, any existing objective will be discarded. Other than
that, any existing model components will remain intact.
Parameters obj (Objective) –
set_problem_format(format)
Set the current problem format (if it’s valid) and update the results format to something valid for this
problem format.
set_results_format(format)
Set the current results format (if it’s valid for the current problem format).
solve(*args, **kwds)
Solve the model.
Keyword Arguments
13.3.3 GurobiPersistent
Methods
class pyomo.solvers.plugins.solvers.gurobi_persistent.GurobiPersistent(**kwds)
Bases: pyomo.solvers.plugins.solvers.persistent_solver.PersistentSolver,
pyomo.solvers.plugins.solvers.gurobi_direct.GurobiDirect
A class that provides a persistent interface to Gurobi. Direct solver interfaces do not use any file io. Rather, they
interface directly with the python bindings for the specific solver. Persistent solver interfaces are similar except
that they “remember” their model. Thus, persistent solver interfaces allow incremental changes to the solver
model (e.g., the gurobi python model or the cplex python model). Note that users are responsible for notifying
the persistent solver interfaces when changes are made to the corresponding pyomo model.
Keyword Arguments
• model (ConcreteModel) – Passing a model to the constructor is equivalent to calling
the set_instance method.
• type (str) – String indicating the class type of the solver instance.
• name (str) – String representing either the class type of the solver instance or an
assigned name.
• doc (str) – Documentation for the solver
• options (dict) – Dictionary of solver options
add_block(block)
Add a single Pyomo Block to the solver’s model.
This will keep any existing model components intact.
Parameters block (Block (scalar Block or single _BlockData)) –
add_constraint(con)
Add a single constraint to the solver’s model.
This will keep any existing model components intact.
Parameters con (Constraint (scalar Constraint or single
_ConstraintData)) –
add_sos_constraint(con)
Add a single SOS constraint to the solver’s model (if supported).
This will keep any existing model components intact.
Parameters con (SOSConstraint) –
add_var(var)
Add a single variable to the solver’s model.
This will keep any existing model components intact.
Parameters var (Var) –
available(exception_flag=True)
True if the solver is available.
cbCut(con)
Add a cut within a callback.
Parameters con (pyomo.core.base.constraint.
_GeneralConstraintData) – The cut to add
cbGetNodeRel(vars)
Parameters vars (Var or iterable of Var) –
cbGetSolution(vars)
Parameters vars (iterable of vars) –
cbLazy(con)
Parameters con (pyomo.core.base.constraint.
_GeneralConstraintData) – The lazy constraint to add
get_gurobi_param_info(param)
Get information about a gurobi parameter.
Parameters param (str) – The gurobi parameter to get info for. See Gurobi documentation
for possible options.
Returns
Return type six-tuple containing the parameter name, type, value, minimum value, maximum
value, and default value.
get_linear_constraint_attr(con, attr)
Get the value of an attribute on a gurobi linear constraint.
Parameters
• con (pyomo.core.base.constraint._GeneralConstraintData)
– The pyomo constraint for which the corresponding gurobi constraint attribute
should be retrieved.
• attr (str) –
The attribute to get. Options are: Sense RHS ConstrName Pi Slack CBasis
DStart Lazy IISConstr SARHSLow SARHSUp FarkasDual
get_model_attr(attr)
Get the value of an attribute on the Gurobi model.
Parameters attr (str) – The attribute to get. See Gurobi documentation for descriptions
of the attributes. Options are:
NumVars NumConstrs NumSOS NumQConstrs NumGenConstrs
NumNZs DNumNZs NumQNZs NumQCNZs NumIntVars NumBinVars
NumPWLObjVars ModelName ModelSense ObjCon ObjVal ObjBound
results_format()
Returns the current results format.
set_callback(func=None)
Specify a callback for gurobi to use.
Parameters func (function) – The function to call. The function should have three arguments.
The first will be the pyomo model being solved. The second will be the GurobiPersistent
instance. The third will be an enum member of gurobipy.GRB.Callback, which indicates
where in the branch and bound algorithm gurobi is. For example, suppose we want to solve
min 2*x + y
s.t. y >= (x - 2)**2
0 <= x <= 4
y >= 0, y integer
as an MILP using extended cutting planes in callbacks.
from gurobipy import GRB
import pyomo.environ as pe
from pyomo.core.expr.taylor_series import taylor_series_expansion

m = pe.ConcreteModel()
m.x = pe.Var(bounds=(0, 4))
m.y = pe.Var(within=pe.Integers, bounds=(0, None))
m.obj = pe.Objective(expr=2*m.x + m.y)
m.cons = pe.ConstraintList()  # for the cutting planes

def _add_cut(xval):
    # a function to generate the cut
    m.x.value = xval
    return m.cons.add(m.y >= taylor_series_expansion((m.x - 2)**2))

_add_cut(0)  # start with 2 cuts at the bounds of x
_add_cut(4)  # this is an arbitrary choice

opt = pe.SolverFactory('gurobi_persistent')
opt.set_instance(m)
opt.set_gurobi_param('PreCrush', 1)
opt.set_gurobi_param('LazyConstraints', 1)

def my_callback(cb_m, cb_opt, cb_where):
    if cb_where == GRB.Callback.MIPSOL:
        cb_opt.cbGetSolution(vars=[m.x, m.y])
        if m.y.value < (m.x.value - 2)**2 - 1e-6:
            cb_opt.cbLazy(_add_cut(m.x.value))

opt.set_callback(my_callback)
opt.solve()
assert abs(m.x.value - 1) <= 1e-6
assert abs(m.y.value - 1) <= 1e-6
set_gurobi_param(param, val)
Set a gurobi parameter.
Parameters
• param (str) – The gurobi parameter to set. Options include any gurobi param-
eter. Please see the Gurobi documentation for options.
• val (any) – The value to set the parameter to. See Gurobi documentation for
possible values.
set_instance(model, **kwds)
This method is used to translate the Pyomo model provided to an instance of the solver’s Python model.
This discards any existing model and starts from scratch.
Parameters model (ConcreteModel) – The pyomo model to be used with the solver.
Keyword Arguments
• symbolic_solver_labels (bool) – If True, the solver’s components (e.g.,
variables, constraints) will be given names that correspond to the Pyomo component names.
• skip_trivial_constraints (bool) – If True, then any constraints with
a constant body will not be added to the solver model. Be careful with this.
If a trivial constraint is skipped then that constraint cannot be removed from a
persistent solver (an error will be raised if a user tries to remove a non-existent
constraint).
• output_fixed_variable_bounds (bool) – If False then an error will be
raised if a fixed variable is used in one of the solver constraints. This is useful
for catching bugs. Ordinarily a fixed variable should appear as a constant value
in the solver constraints. If True, then the error will not be raised.
set_linear_constraint_attr(con, attr, val)
Set the value of an attribute on a gurobi linear constraint.
Parameters
• con (pyomo.core.base.constraint._GeneralConstraintData)
– The pyomo constraint for which the corresponding gurobi constraint attribute
should be modified.
• attr (str) –
The attribute to be modified. Options are: CBasis DStart Lazy
• val (any) – See gurobi documentation for acceptable values.
set_objective(obj)
Set the solver’s objective. Note that, at least for now, any existing objective will be discarded. Other than
that, any existing model components will remain intact.
Parameters obj (Objective) –
set_problem_format(format)
Set the current problem format (if it’s valid) and update the results format to something valid for this
problem format.
set_results_format(format)
Set the current results format (if it’s valid for the current problem format).
set_var_attr(var, attr, val)
Set the value of an attribute on a gurobi variable.
Parameters
• con (pyomo.core.base.var._GeneralVarData) – The pyomo var for
which the corresponding gurobi var attribute should be modified.
• attr (str) –
The attribute to be modified. Options are: Start VarHintVal VarHintPri
BranchPriority VBasis PStart
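For example, a short sketch that continues the model m and persistent solver opt from the set_instance sketch above, setting a variable attribute and a linear constraint attribute from the lists documented here:
# opt is a GurobiPersistent instance with m already loaded
opt.set_var_attr(m.x, 'Start', 1.0)             # warm-start value for m.x
opt.set_var_attr(m.x, 'BranchPriority', 10)     # branching priority for m.x
opt.set_linear_constraint_attr(m.c, 'Lazy', 1)  # mark m.c as a lazy constraint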
All data is associated with a symbol name, which may be indexed, and which may belong to a namespace. The
default namespace is None.
Parameters
• model – The model for which this data is associated. This is used for error checking
(e.g. object names must exist in the model, set dimensions must match, etc.). Default
is None.
• filename (str) – A file from which data is loaded. Default is None.
• data_dict (dict) – A dictionary used to initialize the data in this object. Default
is None.
__getitem__(*args)
Return the specified data value.
If a single argument is given, then this is the symbol name:
dp = DataPortal()
dp[name]
If two arguments are given, then the first is the namespace and the second is the symbol name:
dp = DataPortal()
dp[namespace, name]
disconnect()
Close the data manager object that is associated with the input source.
items(namespace=None)
Return an iterator of (name, value) tuples from the data in the specified namespace.
Yields The next (name, value) tuple in the namespace. If the symbol has a simple data value,
then that is included in the tuple. Otherwise, the tuple includes a dictionary mapping
symbol indices to values.
keys(namespace=None)
Return an iterator of the data keys in the specified namespace.
Yields A string name for the next symbol in the specified namespace.
load(**kwds)
Import data from an external data source.
Parameters model – The model object for which this data is associated. Default is None.
Other keyword arguments are passed to the connect() method.
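As a brief sketch (the file name and model components here are illustrative), data can be loaded for an abstract model and then used to create a model instance:
import pyomo.environ as pe

model = pe.AbstractModel()
model.A = pe.Set()
model.p = pe.Param(model.A)

data = pe.DataPortal()
# 'p_data.csv' is a hypothetical file with an index column A and a value column p
data.load(filename='p_data.csv', param=model.p, index=model.A)
instance = model.create_instance(data)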
namespaces()
Return an iterator for the namespaces in the data portal.
Yields A string name for the next namespace.
store(**kwds)
Export data to an external data source.
Parameters model – The model object for which this data is associated. Default is None.
Other keyword arguments are passed to the connect() method.
values(namespace=None)
Return an iterator of the data values in the specified namespace.
Yields The data value for the next symbol in the specified namespace. This may be a simple
value, or a dictionary of values.
class pyomo.dataportal.TableData.TableData
A class used to read/write data from/to a table in an external data source.
__init__()
Constructor
__weakref__
list of weak references to the object (if defined)
add_options(**kwds)
Add the keyword options to the Options object in this object.
available()
Returns True if the data manager is available.
clear()
Clear the data that was extracted from this table
close()
Close the data manager.
initialize(**kwds)
Initialize the data manager with keyword arguments.
The filename argument is recognized here, and other arguments are passed to the add_options()
method.
open()
Open the data manager.
Warning: The pyomo.kernel API is still in the beta phase of development. It is fully tested and functional;
however, the interface may change as it becomes further integrated with the rest of Pyomo.
Warning: Models built with pyomo.kernel components are not yet compatible with pyomo extension modules
(e.g., pyomo.pysp, pyomo.dae, pyomo.gdp).
The pyomo.kernel library is an experimental modeling interface designed to provide a better experience for users
doing concrete modeling and advanced application development with Pyomo. It includes the basic set of modeling
components necessary to build algebraic models, which have been redesigned from the ground up to make it easier for
users to customize and extend. For a side-by-side comparison of pyomo.kernel and pyomo.environ syntax,
see the comparison below.
Import

    pyomo.kernel:
        import pyomo.kernel as pmo

    pyomo.environ:
        import pyomo.environ as aml

Model [1]

    pyomo.kernel:
        def create(data):
            instance = pmo.block()
            # ... define instance ...
            return instance
        instance = create(data)

        m = pmo.block()
        m.b = pmo.block()

    pyomo.environ:
        m = aml.AbstractModel()
        # ... define model ...
        instance = m.create_instance(datafile)

        m = aml.ConcreteModel()
        m.b = aml.Block()

Set

    pyomo.kernel:
        m.s = [1,2]
        # [0,1,2]
        m.q = range(3)

    pyomo.environ:
        m.s = aml.Set(initialize=[1,2],
                      ordered=True)
        # [1,2,3]
        m.q = aml.RangeSet(1,3)

Parameter

    pyomo.kernel:
        m.p = pmo.parameter(0)

        # uses 0-based indexing
        # pl[0] = 0, pl[1] = 1, ...
        m.pl = pmo.parameter_list()
        for j in m.q:
            m.pl.append(
                pmo.parameter(j))

    pyomo.environ:
        m.p = aml.Param(mutable=True,
                        initialize=0)

        # No ParamList exists

Variable

    pyomo.kernel:
        m.v = pmo.variable(value=1,
                           lb=1,
                           ub=4)

        m.vd = pmo.variable_dict()
        for i in m.s:
            m.vd[i] = pmo.variable(ub=9)

    pyomo.environ:
        m.v = aml.Var(initialize=1.0,
                      bounds=(1,4))

        m.vd = aml.Var(m.s,
                       bounds=(None,9))

[1] pyomo.kernel does not include an alternative to the AbstractModel component from pyomo.environ.
Models built from pyomo.kernel components are fully compatible with the standard solver interfaces included
with Pyomo. A minimal example script that defines and solves a model is shown below.
import pyomo.kernel as pmo
model = pmo.block()
model.x = pmo.variable()
model.c = pmo.constraint(model.x >= 1)
model.o = pmo.objective(model.x)
opt = pmo.SolverFactory("ipopt")
result = opt.solve(model)
assert str(result.solver.termination_condition) == "optimal"
As the next section will show, the standard modeling component containers are also compatible with user-defined
classes that derive from the existing modeling components.
Sub-Classing
The existing components and containers in pyomo.kernel are designed to make sub-classing easy. User-defined
classes that derive from the standard modeling components and containers in pyomo.kernel are compatible
with existing containers of the same component category. As an example, in the following code we see that the
pyomo.kernel.block_list container can store both pyomo.kernel.block objects as well as a user-
defined Widget object that derives from pyomo.kernel.block. The Widget object can also be placed on
another block object as an attribute and treated itself as a block.
class Widget(pyomo.kernel.block):
...
model = pyomo.kernel.block()
model.blist = pyomo.kernel.block_list()
model.blist.append(Widget())
model.blist.append(pyomo.kernel.block())
model.w = Widget()
model.w.x = pyomo.kernel.variable()
Thus, these objects can be stored in containers such as pyomo.kernel.block_dict and pyomo.kernel.block_list.
The next series of examples goes into more detail on how to implement derived components or containers.
The following code block shows a class definition for a non-negative variable, starting from pyomo.kernel.
variable as a base class.
class NonNegativeVariable(pyomo.kernel.variable):
"""A non-negative variable."""
__slots__ = ()
#
# restrict assignments to x.lb to non-negative numbers
#
@property
def lb(self):
# calls the base class property getter
return pyomo.kernel.variable.lb.fget(self)
@lb.setter
def lb(self, lb):
if lb < 0:
raise ValueError("lower bound must be non-negative")
# calls the base class property setter
pyomo.kernel.variable.lb.fset(self, lb)
The NonNegativeVariable class prevents negative values from being stored in its lower bound during initialization or later on through assignment statements (e.g., x.lb = -1 fails). Note that the __slots__ = () line at the beginning of the class definition is optional, but it is recommended if no additional data members are necessary, as it reduces the memory requirements of the new variable type.
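For example, a short usage sketch (the instance name is illustrative) of the guarded setter:
v = NonNegativeVariable(lb=0)   # hypothetical instance
v.lb = 2                        # allowed: non-negative bound
try:
    v.lb = -1                   # rejected by the property setter
except ValueError as e:
    print(e)                    # "lower bound must be non-negative"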
The next code block defines a custom variable container called Point that represents a 3-dimensional point in
Cartesian space. The new type derives from the pyomo.kernel.variable_tuple container and uses the
NonNegativeVariable type we defined previously in the z coordinate.
class Point(pyomo.kernel.variable_tuple):
"""A 3-dimensional point in Cartesian space with the
z coordinate restricted to non-negative values."""
__slots__ = ()
def __init__(self):
super(Point, self).__init__(
(pyomo.kernel.variable(),
pyomo.kernel.variable(),
NonNegativeVariable()))
@property
def x(self):
return self[0]
@property
def y(self):
return self[1]
@property
def z(self):
return self[2]
The Point class can be treated like a tuple storing three variables, and it can be placed inside of other variable
containers or added as attributes to blocks. The property methods included in the class definition provide an additional
syntax for accessing the three variables it stores, as the next code example will show.
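As a brief sketch of that syntax (the names here are only illustrative), the coordinates can be accessed either by index or through the properties:
p = Point()
p.x.value = 1.0     # same variable as p[0]
p.y.value = 2.0     # same variable as p[1]
p.z.lb = 0.5        # p.z is the NonNegativeVariable stored at p[2]
assert p[0] is p.x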
The following code defines a class for building a convex second-order cone constraint from a Point object. It derives
from the pyomo.kernel.constraint class, overriding the constructor to build the constraint expression and
utilizing the property methods on the point class to increase readability.
class SOC(pyomo.kernel.constraint):
"""A convex second-order cone constraint"""
__slots__ = ()
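    # The constructor below is a minimal sketch consistent with the
    # description above (the constructor is overridden to build the
    # constraint expression from a Point's x/y/z properties); it is not
    # reproduced verbatim from the source.
    def __init__(self, point):
        super(SOC, self).__init__(
            point.x**2 + point.y**2 <= point.z**2)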
The pyomo.kernel library offers significant opportunities to reduce memory requirements for highly structured
models. The situation where this is most apparent is when expressing a model in terms of many small blocks con-
sisting of singleton components. As an example, consider expressing a model consisting of a large number of voltage
transformers. One option for doing so might be to define a Transformer component as a subclass of pyomo.kernel.
block. The example below defines such a component, including some helper methods for connecting input and output
voltage variables and updating the transformer ratio.
class Transformer(pyomo.kernel.block):
def __init__(self):
super(Transformer,self).__init__()
self._a = pyomo.kernel.parameter()
self._v_in = pyomo.kernel.expression()
self._v_out = pyomo.kernel.expression()
self._c = pyomo.kernel.constraint(
self._a * self._v_out == self._v_in)
def set_ratio(self, a):
assert a > 0
self._a.value = a
def connect_v_in(self, v_in):
self._v_in.expr = v_in
def connect_v_out(self, v_out):
self._v_out.expr = v_out
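A short usage sketch of this component (the surrounding block and voltage variables are hypothetical):
model = pyomo.kernel.block()
model.v1 = pyomo.kernel.variable()
model.v2 = pyomo.kernel.variable()
model.t = Transformer()
model.t.set_ratio(2.0)            # enforces 2.0 * v_out == v_in
model.t.connect_v_in(model.v1)
model.t.connect_v_out(model.v2)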
A simplified version of this using pyomo.environ components might look like the following.
def Transformer():
b = pyomo.environ.Block(concrete=True)
b._a = pyomo.environ.Param(mutable=True)
b._v_in = pyomo.environ.Expression()
b._v_out = pyomo.environ.Expression()
b._c = pyomo.environ.Constraint(expr=\
b._a * b._v_out == b._v_in)
return b
The transformer expressed using pyomo.kernel components requires roughly 2 KB of memory, whereas the
pyomo.environ version requires roughly 8.4 KB of memory (an increase of more than 4x). Additionally, the
pyomo.kernel transformer is fully compatible with all existing pyomo.kernel block containers.
Pyomo 5.6.3 introduced support into pyomo.kernel for six conic constraint forms that are directly recognized by
the new Mosek solver interface. These are
• conic.quadratic:
      x[0]^2 + ... + x[n-1]^2 <= r^2,  r >= 0
• conic.rotated_quadratic:
      x[0]^2 + ... + x[n-1]^2 <= 2*r1*r2,  r1, r2 >= 0
• conic.primal_exponential:
      x1*exp(x2/x1) <= r,  x1, r >= 0
• conic.primal_power (alpha is a constant):
      sqrt(x[0]^2 + ... + x[n-1]^2) <= (r1^alpha)*(r2^(1-alpha)),  r1, r2 >= 0,  0 < alpha < 1
• conic.dual_exponential:
      -x2*exp((x1/x2) - 1) <= r,  x2 <= 0,  r >= 0
• conic.dual_power (alpha is a constant):
      sqrt(x[0]^2 + ... + x[n-1]^2) <= ((r1/alpha)^alpha)*((r2/(1-alpha))^(1-alpha)),  r1, r2 >= 0,  0 < alpha < 1
Other solver interfaces will treat these objects as general nonlinear or quadratic constraints, and may or may not
have the ability to identify their convexity. For instance, Gurobi will recognize the expressions produced by the
quadratic and rotated_quadratic objects as representing convex domains as long as the variables involved
satisfy the convexity conditions. However, other solvers may not include this functionality.
Each of these conic constraint classes are of the same category type as standard pyomo.kernel.constraint
object, and, thus, are directly supported by the standard constraint containers (constraint_tuple,
constraint_list, constraint_dict).
Each conic constraint class supports two methods of instantiation. The first method is to directly instantiate a conic
constraint object, providing all necessary input variables:
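For instance, a minimal sketch (model and variable names are illustrative) that directly instantiates a quadratic cone using the quadratic(r, x) signature documented below:
import pyomo.kernel as pmo
from pyomo.core.kernel.conic import quadratic

m = pmo.block()
m.x = pmo.variable_list()
for i in range(3):
    m.x.append(pmo.variable())
m.r = pmo.variable(lb=0)
# x[0]^2 + x[1]^2 + x[2]^2 <= r^2
m.q = quadratic(r=m.r, x=m.x)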
This method can be limiting when using the Mosek solver, as the user must ensure that additional conic constraints do not use variables that are directly involved in any existing conic constraints (this is a limitation of the Mosek solver itself).
To overcome this limitation, and to provide a more general way of defining conic domains, each conic constraint class
provides the as_domain class method. This alternate constructor has the same argument signature as the class, but
in place of each variable, one can optionally provide a constant, a linear expression, or None. The as_domain class
method returns a block object that includes the core conic constraint, auxiliary variables used to express the conic
constraint, as well as auxiliary constraints that link the inputs (that are not None) to the auxiliary variables. Example:
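Continuing the sketch above, the same cone can be declared through as_domain, passing constants, linear expressions, or None in place of variables; the names below are illustrative, and the attributes of the returned block are those documented for as_domain (q, r, x, c):
# hypothetical continuation of the previous sketch
m.e = pmo.variable()
m.qd = quadratic.as_domain(r=1.5, x=[m.e + 1, 2.0*m.e, None])
# m.qd.q : the core conic constraint over the auxiliary variables
# m.qd.r, m.qd.x : auxiliary variables standing in for the inputs
# m.qd.c : constraints linking the non-None inputs to the auxiliary variables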
13.5.3 Reference
Modeling Components:
Blocks
Summary
Member Documentation
class pyomo.core.kernel.block.block
Bases: pyomo.core.kernel.block.IBlock
A generalized container for defining hierarchical models by adding modeling components as attributes.
Examples
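A small usage sketch (names are illustrative) exercising the methods documented below:
import pyomo.kernel as pmo

b = pmo.block()
b.v = pmo.variable()
b.c = pmo.constraint(b.v >= 1)
b.sub = pmo.block()                 # nested block
# category types stored directly on this block
print(b.child_ctypes())
# iterate over the immediate children
for child in b.children():
    print(child.local_name)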
child_ctypes()
Returns the set of child object category types stored in this container.
children(ctype=<class ’pyomo.core.kernel.base._no_ctype’>)
Iterate over the children of this block.
Parameters ctype – Indicates the category of children to include. The default value indi-
cates that all categories should be included.
Returns iterator of child objects
load_solution(solution, allow_consistent_values_for_fixed_vars=False,
comparison_tolerance_for_fixed_vars=1e-05)
Load a solution.
Parameters
• solution – A pyomo.opt.Solution object with a symbol map. Option-
ally, the solution can be tagged with a default variable value (e.g., 0) that will
be applied to those variables in the symbol map that do not have a value in the
solution.
• allow_consistent_values_for_fixed_vars – Indicates whether a
solution can specify consistent values for variables that are fixed.
• comparison_tolerance_for_fixed_vars – The tolerance used to de-
fine whether or not a value in the solution is consistent with the value of a fixed
variable.
write(filename, format=None, _solver_capability=None, _called_by_solver=False, **kwds)
Write the model to a file, with a given format.
Parameters
• filename (str) – The name of the file to write.
• format – The file format to use. If this is not specified, the file format will be
inferred from the filename suffix.
• **kwds – Additional keyword options passed to the model writer.
Returns a SymbolMap
class pyomo.core.kernel.block.block_tuple(*args, **kwds)
Bases: pyomo.core.kernel.tuple_container.TupleContainer
A tuple-style container for objects with category type IBlock
class pyomo.core.kernel.block.block_list(*args, **kwds)
Bases: pyomo.core.kernel.list_container.ListContainer
A list-style container for objects with category type IBlock
class pyomo.core.kernel.block.block_dict(*args, **kwds)
Bases: pyomo.core.kernel.dict_container.DictContainer
A dict-style container for objects with category type IBlock
Variables
Summary
Member Documentation
Examples
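A small usage sketch (names are illustrative) touching the properties documented below:
import pyomo.kernel as pmo

v = pmo.variable(lb=0, ub=10, value=2)
v.value = 3.5      # update the current value
v.fixed = True     # fix the variable at its current value
assert v.lb == 0 and v.ub == 10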
domain
Set the domain of the variable. This method updates the domain_type property and overwrites the lb
and ub properties with the domain bounds.
domain_type
The domain type of the variable (RealSet or IntegerSet)
fixed
The fixed status of the variable
lb
The lower bound of the variable
stale
The stale status of the variable
ub
The upper bound of the variable
value
The value of the variable
class pyomo.core.kernel.variable.variable_tuple(*args, **kwds)
Bases: pyomo.core.kernel.tuple_container.TupleContainer
A tuple-style container for objects with category type IVariable
class pyomo.core.kernel.variable.variable_list(*args, **kwds)
Bases: pyomo.core.kernel.list_container.ListContainer
A list-style container for objects with category type IVariable
class pyomo.core.kernel.variable.variable_dict(*args, **kwds)
Bases: pyomo.core.kernel.dict_container.DictContainer
A dict-style container for objects with category type IVariable
Constraints
Summary
Member Documentation
property on the constraint. Default is None. This keyword should not be used in
combination with the expr keyword.
• lb – Sets the lower bound of the constraint. Can be updated later by assigning to the
lb property on the constraint. Default is None, which is equivalent to -inf. This
keyword should not be used in combination with the expr keyword.
• ub – Sets the upper bound of the constraint. Can be updated later by assigning to the
ub property on the constraint. Default is None, which is equivalent to +inf. This
keyword should not be used in combination with the expr keyword.
• rhs – Sets the right-hand side of the constraint. Can be updated later by assigning
to the rhs property on the constraint. The default value of None implies that this
keyword is ignored. Otherwise, use of this keyword implies that the equality prop-
erty is set to True. This keyword should not be used in combination with the expr
keyword.
Examples
body
The body of the constraint
expr
Get or set the expression on this constraint.
class pyomo.core.kernel.constraint.linear_constraint(variables=None, coefficients=None, terms=None, lb=None, ub=None, rhs=None)
Bases: pyomo.core.kernel.constraint._MutableBoundsConstraintMixin, pyomo.
core.kernel.constraint.IConstraint
A linear constraint
A linear constraint stores a linear relational expression defined by a list of variables and coefficients. This class
can be used to reduce build time and memory for an optimization model. It also increases the speed at which
the model can be output to a solver.
Parameters
• variables (list) – Sets the list of variables in the linear expression defining the
body of the constraint. Can be updated later by assigning to the variables property
on the constraint.
• coefficients (list) – Sets the list of coefficients for the variables in the linear
expression defining the body of the constraint. Can be updated later by assigning to the
coefficients property on the constraint.
• terms (list) – An alternative way of initializing the variables and
coefficients lists using an iterable of (variable, coefficient) tuples. Can be up-
dated later by assigning to the terms property on the constraint. This keyword should
not be used in combination with the variables or coefficients keywords.
• lb – Sets the lower bound of the constraint. Can be updated later by assigning to the
lb property on the constraint. Default is None, which is equivalent to -inf.
• ub – Sets the upper bound of the constraint. Can be updated later by assigning to the
ub property on the constraint. Default is None, which is equivalent to +inf.
• rhs – Sets the right-hand side of the constraint. Can be updated later by assigning to
the rhs property on the constraint. The default value of None implies that this key-
word is ignored. Otherwise, use of this keyword implies that the equality property
is set to True.
Examples
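A minimal sketch (names are illustrative) of building a linear constraint directly from variables and coefficients:
import pyomo.kernel as pmo
from pyomo.core.kernel.constraint import linear_constraint

m = pmo.block()
m.x = pmo.variable()
m.y = pmo.variable()
# 2*x + 3*y >= 1, stated in terms of variables and coefficients
m.c = linear_constraint(variables=[m.x, m.y],
                        coefficients=[2, 3],
                        lb=1)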
body
The body of the constraint
canonical_form(compute_values=True)
Build a canonical representation of the body of this constraint
terms
An iterator over the terms in the body of this constraint as (variable, coefficient) tuples
class pyomo.core.kernel.constraint.constraint_tuple(*args, **kwds)
Bases: pyomo.core.kernel.tuple_container.TupleContainer
A tuple-style container for objects with category type IConstraint
class pyomo.core.kernel.constraint.constraint_list(*args, **kwds)
Bases: pyomo.core.kernel.list_container.ListContainer
A list-style container for objects with category type IConstraint
class pyomo.core.kernel.constraint.constraint_dict(*args, **kwds)
Bases: pyomo.core.kernel.dict_container.DictContainer
A dict-style container for objects with category type IConstraint
Parameters
Summary
Member Documentation
class pyomo.core.kernel.parameter.parameter(value=None)
Bases: pyomo.core.kernel.parameter.IParameter
An object for storing a mutable, numeric value that can be used to build a symbolic expression.
value
The value of the parameter
class pyomo.core.kernel.parameter.functional_value(fn=None)
Bases: pyomo.core.kernel.parameter.IParameter
An object for storing a numeric function that can be used in a symbolic expression.
Note that models making use of this object may require the dill module for serialization.
fn
The function stored with this object
class pyomo.core.kernel.parameter.parameter_tuple(*args, **kwds)
Bases: pyomo.core.kernel.tuple_container.TupleContainer
A tuple-style container for objects with category type IParameter
class pyomo.core.kernel.parameter.parameter_list(*args, **kwds)
Bases: pyomo.core.kernel.list_container.ListContainer
A list-style container for objects with category type IParameter
class pyomo.core.kernel.parameter.parameter_dict(*args, **kwds)
Bases: pyomo.core.kernel.dict_container.DictContainer
A dict-style container for objects with category type IParameter
Objectives
Summary
Member Documentation
Expressions
Summary
Member Documentation
class pyomo.core.kernel.expression.expression(expr=None)
Bases: pyomo.core.kernel.expression.IExpression
A named, mutable expression.
class pyomo.core.kernel.expression.expression_tuple(*args, **kwds)
Bases: pyomo.core.kernel.tuple_container.TupleContainer
A tuple-style container for objects with category type IExpression
class pyomo.core.kernel.expression.expression_list(*args, **kwds)
Bases: pyomo.core.kernel.list_container.ListContainer
A list-style container for objects with category type IExpression
class pyomo.core.kernel.expression.expression_dict(*args, **kwds)
Bases: pyomo.core.kernel.dict_container.DictContainer
A dict-style container for objects with category type IExpression
Summary
Member Documentation
Suffixes
• active (True/None) – Controls whether or not to filter the iteration to include only
the active part of the storage tree. The default is True. Setting this keyword to None
causes the active status of objects to be ignored.
• descend_into (bool, function) – Indicates whether or not to descend into
a heterogeneous container. Default is True, which is equivalent to lambda x: True,
meaning all heterogeneous containers will be descended into.
Returns iterator of suffixes
class pyomo.core.kernel.suffix.suffix(*args, **kwds)
Bases: pyomo.core.kernel.suffix.ISuffix
A container for storing extraneous model data that can be imported to or exported from a solver.
datatype
Return the suffix datatype.
direction
Return the suffix direction.
export_enabled
Returns True when this suffix is enabled for export to solvers.
import_enabled
Returns True when this suffix is enabled for import from solutions.
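A short sketch of declaring an import suffix for collecting solver duals (the model components are illustrative, and the direction constants are assumed to mirror those exposed on the suffix class):
import pyomo.kernel as pmo

m = pmo.block()
m.x = pmo.variable()
m.c = pmo.constraint(m.x >= 1)
m.o = pmo.objective(m.x)
# collect dual values from the solver after a solve
m.dual = pmo.suffix(direction=pmo.suffix.IMPORT)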
class pyomo.core.kernel.suffix.suffix_dict(*args, **kwds)
Bases: pyomo.core.kernel.dict_container.DictContainer
A dict-style container for objects with category type ISuffix
pyomo.core.kernel.suffix.suffix_generator(blk, datatype=<object object>, active=True,
descend_into=True)
Generates an efficient traversal of all suffixes that have been declared.
Parameters
• blk – A block object.
• datatype – Restricts the suffixes included in the returned generator to those match-
ing the provided suffix datatype.
• active (True/None) – Controls whether or not to filter the iteration to include only
the active part of the storage tree. The default is True. Setting this keyword to None
causes the active status of objects to be ignored.
• descend_into (bool, function) – Indicates whether or not to descend into
a heterogeneous container. Default is True, which is equivalent to lambda x: True,
meaning all heterogeneous containers will be descended into.
Returns iterator of suffixes
Modules
Summary
Member Documentation
pyomo.core.kernel.piecewise_library.transforms.piecewise(breakpoints, values, input=None, output=None, bound='eq', repn='sos2', validate=True, simplify=True, equal_slopes_tolerance=1e-06, require_bounded_input_variable=True, require_variable_domain_coverage=True)
Models a single-variate piecewise linear function.
This function takes a list breakpoints and function values describing a piecewise linear function and transforms
this input data into a block of variables and constraints that enforce a piecewise linear relationship between
an input variable and an output variable. In the general case, this transformation requires the use of discrete
decision variables.
Parameters
• breakpoints (list) – The list of breakpoints of the piecewise linear function.
This can be a list of numbers or a list of objects that store mutable data (e.g., mutable
parameters). If mutable data is used validation might need to be disabled by setting
the validate keyword to False. The list of breakpoints must be in non-decreasing
order.
• values (list) – The values of the piecewise linear function corresponding to the
breakpoints.
• input – The variable constrained to be the input of the piecewise linear function.
• output – The variable constrained to be the output of the piecewise linear function.
• bound (str) – The type of bound to impose on the output expression. Can be one of:
– ’lb’: y <= f(x)
– ’eq’: y = f(x)
– ’ub’: y >= f(x)
• repn (str) – The type of piecewise representation to use. Choices are shown below
(+ means step functions are supported)
– ’sos2’: standard representation using sos2 constraints (+)
– ’dcc’: disaggregated convex combination (+)
– ’dlog’: logarithmic disaggregated convex combination (+)
– ’cc’: convex combination (+)
– ’log’: logarithmic branching convex combination (+)
– ’mc’: multiple choice
– ’inc’: incremental method (+)
• validate (bool) – Indicates whether or not to perform validation of the input data.
The default is True. Validation can be performed manually after the piecewise object
is created by calling the validate() method. Validation should be performed any
time the inputs are changed (e.g., when using mutable parameters in the breakpoints
list or when the input variable changes).
• simplify (bool) – Indicates whether or not to attempt to simplify the piecewise
representation to avoid using discrete variables. This can be done when the feasible
region for the output variable, with respect to the piecewise function and the bound
type, is a convex set. Default is True. Validation is required to perform simplification,
so this keyword is ignored when the validate keyword is False.
• equal_slopes_tolerance (float) – Tolerance used to check whether consecutive slopes are nearly equal. If any are found, validation will fail. Default is 1e-6. This keyword is ignored when the validate keyword is False.
• require_bounded_input_variable (bool) – Indicates if the input variable is
required to have finite upper and lower bounds. Default is True. Setting this keyword
to False can be used to allow general expressions to be used as the input in place of
a variable. This keyword is ignored when the validate keyword is False.
• require_variable_domain_coverage (bool) – Indicates if the function do-
main (defined by the endpoints of the breakpoints list) needs to cover the entire domain
of the input variable. Default is True. Ignored for any bounds of variables that are not
finite, or when the input is not assigned a variable. This keyword is ignored when the
validate keyword is False.
Returns a block that stores any new variables, constraints, and other modeling objects used by the
piecewise representation
Return type TransformedPiecewiseLinearFunction
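For example, a minimal sketch (breakpoints and values chosen arbitrarily) that ties an output variable to a piecewise function of a bounded input variable:
import pyomo.kernel as pmo
from pyomo.core.kernel.piecewise_library.transforms import piecewise

m = pmo.block()
m.x = pmo.variable(lb=0, ub=3)
m.y = pmo.variable()
# y = f(x) where f interpolates the points (0,1), (1,0), (3,2)
m.pw = piecewise(breakpoints=[0, 1, 3],
                 values=[1, 0, 2],
                 input=m.x,
                 output=m.y,
                 bound='eq',
                 repn='sos2')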
class pyomo.core.kernel.piecewise_library.transforms.PiecewiseLinearFunction(breakpoints, values, validate=True, **kwds)
Bases: object
A piecewise linear function
Piecewise linear functions are defined by a list of breakpoints and a list of function values corresponding to each breakpoint. The function value between breakpoints is implied through linear interpolation.
Parameters
• breakpoints (list) – The list of function breakpoints.
• values (list) – The list of function values (one for each breakpoint).
• validate (bool) – Indicates whether or not to perform validation of the input data.
The default is True. Validation can be performed manually after the piecewise object
is created by calling the validate() method. Validation should be performed any
time the inputs are changed (e.g., when using mutable parameters in the breakpoints
list).
• **kwds – Additional keywords are passed to the validate() method when the
validate keyword is True; otherwise, they are ignored.
__call__(x)
Evaluates the piecewise linear function at the given point using interpolation. Note that step functions are
assumed lower-semicontinuous.
breakpoints
The set of breakpoints used to define this function
validate(equal_slopes_tolerance=1e-06)
Validate this piecewise linear function by verifying various properties of the breakpoints and values lists
(e.g., that the list of breakpoints is nondecreasing).
Parameters equal_slopes_tolerance (float) – Tolerance used to check whether consecutive slopes are nearly equal. If any are found, validation will fail. Default is 1e-6.
Returns a function characterization code (see util.characterize_function())
Return type int
Raises PiecewiseValidationError – if validation fails
values
The set of values used to define this function
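For example, a quick sketch of constructing and evaluating such a function (the data are arbitrary):
from pyomo.core.kernel.piecewise_library.transforms import PiecewiseLinearFunction

f = PiecewiseLinearFunction([0, 1, 3], [1, 0, 2])
assert f(0) == 1
assert f(1) == 0
assert f(2) == 1   # linear interpolation between the breakpoints 1 and 3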
class pyomo.core.kernel.piecewise_library.transforms.TransformedPiecewiseLinearFunction(f, input=None, output=None, bound='eq', validate=True, **kwds)
Bases: pyomo.core.kernel.block.block
validate(**kwds)
Validate this piecewise linear function by verifying various properties of the breakpoints, values, and input
variable (e.g., that the list of breakpoints is nondecreasing).
See base class documentation for keyword descriptions.
class pyomo.core.kernel.piecewise_library.transforms.piecewise_mc(*args,
**kwds)
Bases: pyomo.core.kernel.piecewise_library.transforms.
TransformedPiecewiseLinearFunction
Discrete MC piecewise representation
Expresses a piecewise linear function using the MC formulation.
validate(**kwds)
Validate this piecewise linear function by verifying various properties of the breakpoints, values, and input
variable (e.g., that the list of breakpoints is nondecreasing).
See base class documentation for keyword descriptions.
class pyomo.core.kernel.piecewise_library.transforms.piecewise_inc(*args,
**kwds)
Bases: pyomo.core.kernel.piecewise_library.transforms.
TransformedPiecewiseLinearFunction
Discrete INC piecewise representation
Expresses a piecewise linear function using the INC formulation.
validate(**kwds)
Validate this piecewise linear function by verifying various properties of the breakpoints, values, and input
variable (e.g., that the list of breakpoints is nondecreasing).
See base class documentation for keyword descriptions.
class pyomo.core.kernel.piecewise_library.transforms.piecewise_dlog(*args,
**kwds)
Bases: pyomo.core.kernel.piecewise_library.transforms.
TransformedPiecewiseLinearFunction
Discrete DLOG piecewise representation
Expresses a piecewise linear function using the DLOG formulation. This formulation uses a number of discrete variables that is logarithmic in the number of breakpoints.
validate(**kwds)
Validate this piecewise linear function by verifying various properties of the breakpoints, values, and input
variable (e.g., that the list of breakpoints is nondecreasing).
See base class documentation for keyword descriptions.
class pyomo.core.kernel.piecewise_library.transforms.piecewise_log(*args,
**kwds)
Bases: pyomo.core.kernel.piecewise_library.transforms.
TransformedPiecewiseLinearFunction
Discrete LOG piecewise representation
Expresses a piecewise linear function using the LOG formulation. This formulation uses a number of discrete variables that is logarithmic in the number of breakpoints.
validate(**kwds)
Validate this piecewise linear function by verifying various properties of the breakpoints, values, and input
variable (e.g., that the list of breakpoints is nondecreasing).
Summary
Member Documentation
pyomo.core.kernel.piecewise_library.transforms_nd.piecewise_nd(tri, values, input=None, output=None, bound='eq', repn='cc')
Models a multi-variate piecewise linear function.
This function takes a D-dimensional triangulation and a list of function values associated with the points of the
triangulation and transforms this input data into a block of variables and constraints that enforce a piecewise
linear relationship between a D-dimensional vector of input variables and a single output variable. In the general
case, this transformation requires the use of discrete decision variables.
Parameters
• tri (scipy.spatial.Delaunay) – A triangulation over the discretized variable
domain. Can be generated using a list of variables using the utility function util.
generate_delaunay(). Required attributes:
– points: An (npoints, D) shaped array listing the D-dimensional coordinates of the
discretization points.
– simplices: An (nsimplices, D+1) shaped array of integers specifying the D+1
indices of the points vector that define each simplex of the triangulation.
• values (numpy.array) – An (npoints,) shaped array of the values of the piecewise
function at each of the coordinates in the triangulation points array.
• input – A D-length list of variables or expressions bound as the inputs of the piece-
wise function.
• output – The variable constrained to be the output of the piecewise linear function.
• bound (str) – The type of bound to impose on the output expression. Can be one of:
– ’lb’: y <= f(x)
– ’eq’: y = f(x)
– ’ub’: y >= f(x)
• repn (str) – The type of piecewise representation to use. Can be one of:
– ’cc’: convex combination
Returns a block containing any new variables, constraints, and other components used by the
piecewise representation
Return type TransformedPiecewiseLinearFunctionND
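A sketch (requires numpy and scipy; variable names are illustrative) combining generate_delaunay with piecewise_nd as documented:
import numpy as np
import pyomo.kernel as pmo
from pyomo.core.kernel.piecewise_library.transforms_nd import piecewise_nd
from pyomo.core.kernel.piecewise_library.util import generate_delaunay

m = pmo.block()
m.x = pmo.variable_list()
for _ in range(2):
    m.x.append(pmo.variable(lb=-1, ub=1))
m.y = pmo.variable()

tri = generate_delaunay(m.x, num=5)    # 5 grid points per variable
# evaluate f(x0, x1) = x0**2 + x1**2 at the triangulation points
vals = np.array([x0**2 + x1**2 for x0, x1 in tri.points])
m.pw = piecewise_nd(tri, vals, input=m.x, output=m.y,
                    bound='eq', repn='cc')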
class pyomo.core.kernel.piecewise_library.transforms_nd.PiecewiseLinearFunctionND(tri, values, validate=True, **kwds)
Bases: object
A multi-variate piecewise linear function
Multi-variate piecewise linear functions are defined by a triangulation over a finite domain and a list of function
values associated with the points of the triangulation. The function value between points in the triangulation is
implied through linear interpolation.
Parameters
• tri (scipy.spatial.Delaunay) – A triangulation over the discretized variable
domain. Can be generated using a list of variables using the utility function util.
generate_delaunay(). Required attributes:
– points: An (npoints, D) shaped array listing the D-dimensional coordinates of the
discretization points.
– simplices: An (nsimplices, D+1) shaped array of integers specifying the D+1
indices of the points vector that define each simplex of the triangulation.
• values (numpy.array) – An (npoints,) shaped array of the values of the piecewise
function at each of the coordinates in the triangulation points array.
__call__(x)
Evaluates the piecewise linear function using interpolation. This method supports vectorized function
calls as the interpolation process can be expensive for high dimensional data.
For the case when a single point is provided, the argument x should be a (D,) shaped numpy array or list,
where D is the dimension of points in the triangulation.
For the vectorized case, the argument x should be a (n,D)-shaped numpy array.
triangulation
The triangulation over the domain of this function
values
The set of values used to define this function
class pyomo.core.kernel.piecewise_library.transforms_nd.TransformedPiecewiseLinearFunctionND
Bases: pyomo.core.kernel.block.block
Base class for transformed multi-variate piecewise linear functions
A transformed multi-variate piecewise linear function is a block of variables and constraints that enforce a piecewise linear relationship between a vector of input variables and a single output variable.
Parameters
• f (PiecewiseLinearFunctionND) – The multi-variate piecewise linear function
to transform.
• input – The variable constrained to be the input of the piecewise linear function.
• output – The variable constrained to be the output of the piecewise linear function.
• bound (str) – The type of bound to impose on the output expression. Can be one of:
– ’lb’: y <= f(x)
– ’eq’: y = f(x)
– ’ub’: y >= f(x)
__call__(x)
Evaluates the piecewise linear function using interpolation. This method supports vectorized function
calls as the interpolation process can be expensive for high dimensional data.
For the case when a single point is provided, the argument x should be a (D,) shaped numpy array or list,
where D is the dimension of points in the triangulation.
For the vectorized case, the argument x should be a (n,D)-shaped numpy array.
bound
The bound type assigned to the piecewise relationship (‘lb’,’ub’,’eq’).
input
The tuple of expressions that store the inputs to the piecewise function. The returned objects can be
updated by assigning to their expr attribute.
output
The expression that stores the output of the piecewise function. The returned object can be updated by
assigning to its expr attribute.
triangulation
The triangulation over the domain of this function
values
The set of values used to define this function
class pyomo.core.kernel.piecewise_library.transforms_nd.piecewise_nd_cc(*args,
**kwds)
Bases: pyomo.core.kernel.piecewise_library.transforms_nd.
TransformedPiecewiseLinearFunctionND
Discrete CC multi-variate piecewise representation
Expresses a multi-variate piecewise linear function using the CC formulation.
exception pyomo.core.kernel.piecewise_library.util.PiecewiseValidationError
Bases: exceptions.Exception
An exception raised when validation of piecewise linear functions fails.
pyomo.core.kernel.piecewise_library.util.characterize_function(breakpoints,
values)
Characterizes a piecewise linear function described by a list of breakpoints and function values.
Parameters
pyomo.core.kernel.piecewise_library.util.generate_delaunay(variables, num=10,
**kwds)
Generate a Delaunay triangulation of the D-dimensional bounded variable domain given a list of D variables.
Requires numpy and scipy.
Parameters
• variables – A list of variables, each having a finite upper and lower bound.
• num (int) – The number of grid points to generate for each variable (default=10).
• **kwds – All additional keywords are passed to the scipy.spatial.Delaunay construc-
tor.
Returns A scipy.spatial.Delaunay object.
pyomo.core.kernel.piecewise_library.util.generate_gray_code(nbits)
Generates a Gray code of nbits as a list of lists
pyomo.core.kernel.piecewise_library.util.is_constant(vals)
Checks if a list of points is constant
pyomo.core.kernel.piecewise_library.util.is_nondecreasing(vals)
Checks if a list of points is nondecreasing
pyomo.core.kernel.piecewise_library.util.is_nonincreasing(vals)
Checks if a list of points is nonincreasing
pyomo.core.kernel.piecewise_library.util.is_positive_power_of_two(x)
Checks if a number is a nonzero and positive power of 2
pyomo.core.kernel.piecewise_library.util.log2floor(n)
Computes the exact value of floor(log2(n)) without using floating point calculations. Input argument must be a
positive integer.
Conic Constraints
A collection of classes that provide an easy and performant way to declare conic constraints. The Mosek solver
interface includes special handling of these objects that recognizes them as convex constraints. Other solver interfaces
will treat these objects as general nonlinear or quadratic expressions, and may or may not have the ability to identify
their convexity.
Summary
Member Documentation
class pyomo.core.kernel.conic.quadratic(r, x)
Bases: pyomo.core.kernel.conic._ConicBase
A quadratic conic constraint of the form:
x[0]^2 + ... + x[n-1]^2 <= r^2,
which is recognized as convex for r >= 0.
Parameters
• r (variable) – A variable.
• x (list[variable]) – An iterable of variables.
classmethod as_domain(r, x)
Builds a conic domain. Input arguments take the same form as those of the conic constraint, but in place
of each variable, one can optionally supply a constant, linear expression, or None.
Returns A block object with the core conic constraint (block.q) expressed using auxiliary
variables (block.r, block.x) linked to the input arguments through auxiliary constraints
(block.c).
Return type block
check_convexity_conditions(relax=False)
Returns True if all convexity conditions for the conic constraint are satisfied. If relax is True, then variable
domains are ignored and it is assumed that all variables are continuous.
class pyomo.core.kernel.conic.rotated_quadratic(r1, r2, x)
Bases: pyomo.core.kernel.conic._ConicBase
A rotated quadratic conic constraint of the form:
x[0]^2 + ... + x[n-1]^2 <= 2*r1*r2,
which is recognized as convex for r1,r2 >= 0.
Parameters
• r1 (variable) – A variable.
• r2 (variable) – A variable.
• x (list[variable]) – An iterable of variables.
classmethod as_domain(r1, r2, x)
Builds a conic domain. Input arguments take the same form as those of the conic constraint, but in place
of each variable, one can optionally supply a constant, linear expression, or None.
Returns A block object with the core conic constraint (block.q) expressed using auxiliary
variables (block.r1, block.r2, block.x) linked to the input arguments through auxiliary
constraints (block.c).
Return type block
check_convexity_conditions(relax=False)
Returns True if all convexity conditions for the conic constraint are satisfied. If relax is True, then variable
domains are ignored and it is assumed that all variables are continuous.
class pyomo.core.kernel.conic.primal_exponential(r, x1, x2)
Bases: pyomo.core.kernel.conic._ConicBase
A primal exponential conic constraint of the form:
x1*exp(x2/x1) <= r,
which is recognized as convex for x1,r >= 0.
Parameters
• r (variable) – A variable.
• x1 (variable) – A variable.
• x2 (variable) – A variable.
classmethod as_domain(r, x1, x2)
Builds a conic domain. Input arguments take the same form as those of the conic constraint, but in place
of each variable, one can optionally supply a constant, linear expression, or None.
Returns A block object with the core conic constraint (block.q) expressed using auxiliary
variables (block.r, block.x1, block.x2) linked to the input arguments through auxiliary
constraints (block.c).
Return type block
check_convexity_conditions(relax=False)
Returns True if all convexity conditions for the conic constraint are satisfied. If relax is True, then variable
domains are ignored and it is assumed that all variables are continuous.
class pyomo.core.kernel.conic.primal_power(r1, r2, x, alpha)
Bases: pyomo.core.kernel.conic._ConicBase
A primal power conic constraint of the form:
sqrt(x[0]^2 + ... + x[n-1]^2) <= (r1^alpha)*(r2^(1-alpha)),
which is recognized as convex for r1,r2 >= 0 and 0 < alpha < 1.
Parameters
• r1 (variable) – A variable.
• r2 (variable) – A variable.
• x (list[variable]) – An iterable of variables.
• alpha (float, parameter, etc.) – A constant term.
classmethod as_domain(r1, r2, x, alpha)
Builds a conic domain. Input arguments take the same form as those of the conic constraint, but in place
of each variable, one can optionally supply a constant, linear expression, or None.
Returns A block object with the core conic constraint (block.q) expressed using auxiliary
variables (block.r1, block.r2, block.x) linked to the input arguments through auxiliary
constraints (block.c).
Return type block
check_convexity_conditions(relax=False)
Returns True if all convexity conditions for the conic constraint are satisfied. If relax is True, then variable
domains are ignored and it is assumed that all variables are continuous.
Base API:
class pyomo.core.kernel.base.ICategorizedObject
Bases: object
Interface for objects that maintain a weak reference to a parent storage object and have a category type.
This class is abstract. It assumes any derived class declares the attributes below with or without slots:
_ctype
Stores the object’s category type, which should be some class derived from ICategorizedObject. This
attribute may be declared at the class level.
_parent
Stores a weak reference to the object’s parent container or None.
_storage_key
Stores the key with which this object can be accessed through its parent container.
_active
Stores the active status of this object.
Type bool
activate()
Activate this object.
active
The active status of this object.
clone()
Returns a copy of this object with the parent pointer set to None.
A clone is almost equivalent to deepcopy except that any categorized objects encountered that are not
descendants of this object will reference the same object on the clone.
ctype
The object’s category type.
deactivate()
Deactivate this object.
getname(fully_qualified=False, name_buffer={}, convert=<type ’str’>, relative_to=None)
Dynamically generates a name for this object.
Parameters
• fully_qualified (bool) – Generate a full name by iterating through all
ancestor containers. Default is False.
• convert (function) – A function that converts a storage key into a string
representation. Default is the built-in function str.
• relative_to (object) – When generating a fully qualified name, generate
the name relative to this block.
Returns If a parent exists, this method returns a string representing the name of the object
in the context of its parent; otherwise (if no parent exists), this method returns None.
local_name
The object’s local name within the context of its parent. Alias for obj.getname(fully_qualified=False).
name
The object’s fully qualified name. Alias for obj.getname(fully_qualified=True).
parent
The object’s parent (possibly None).
storage_key
The object’s storage key within its parent
class pyomo.core.kernel.base.ICategorizedObjectContainer
Bases: pyomo.core.kernel.base.ICategorizedObject
Interface for categorized containers of categorized objects.
activate(shallow=True)
Activate this container.
child(*args, **kwds)
Returns a child of this container given a storage key.
children(*args, **kwds)
A generator over the children of this container.
components(*args, **kwds)
A generator over the set of components stored under this container.
deactivate(shallow=True)
Deactivate this container.
class pyomo.core.kernel.homogeneous_container.IHomogeneousContainer
Bases: pyomo.core.kernel.base.ICategorizedObjectContainer
A partial implementation of the ICategorizedObjectContainer interface for implementations that store a single
category of objects and that use the same category as the objects they store.
Complete implementations need to set the _ctype attribute and declare the remaining required abstract properties
of the ICategorizedObjectContainer base class.
Note that this implementation allows nested storage of other ICategorizedObjectContainer imple-
mentations that are defined with the same ctype.
components(active=True)
Generates an efficient traversal of all components stored under this container. Components are categorized
objects that are either (1) not containers, or (2) are heterogeneous containers.
Parameters active (True/None) – Controls whether or not to filter the iteration to in-
clude only the active part of the storage tree. The default is True. Setting this keyword
to None causes the active status of objects to be ignored.
Returns iterator of components in the storage tree
class pyomo.core.kernel.heterogeneous_container.IHeterogeneousContainer
Bases: pyomo.core.kernel.base.ICategorizedObjectContainer
A partial implementation of the ICategorizedObjectContainer interface for implementations that store multiple
categories of objects.
Complete implementations need to set the _ctype attribute and declare the remaining required abstract properties
of the ICategorizedObjectContainer base class.
child_ctypes(*args, **kwds)
Returns the set of child object category types stored in this container.
collect_ctypes(active=True, descend_into=True)
Returns the set of object category types that can be found under this container.
Parameters
• active (True/None) – Controls whether or not to filter the iteration to include
only the active part of the storage tree. The default is True. Setting this keyword
to None causes the active status of objects to be ignored.
• descend_into (bool, function) – Indicates whether or not to descend
into a heterogeneous container. Default is True, which is equivalent to lambda x:
True, meaning all heterogeneous containers will be descended into.
Returns A set of object category types
components(ctype=<class ’pyomo.core.kernel.base._no_ctype’>, active=True, descend_into=True)
Generates an efficient traversal of all components stored under this container. Components are categorized
objects that are either (1) not containers, or (2) are heterogeneous containers.
Parameters
• ctype – Indicates the category of components to include. The default value
indicates that all categories should be included.
• active (True/None) – Controls whether or not to filter the iteration to include
only the active part of the storage tree. The default is True. Setting this keyword
to None causes the active status of objects to be ignored.
• descend_into (bool, function) – Indicates whether or not to descend
into a heterogeneous container. Default is True, which is equivalent to lambda x:
True, meaning all heterogeneous containers will be descended into.
Returns iterator of components in the storage tree
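For example, a small sketch (component names are illustrative) of traversing a storage tree with components():
import pyomo.kernel as pmo

m = pmo.block()
m.v = pmo.variable()
m.b = pmo.block()
m.b.c = pmo.constraint(m.v >= 0)

# walk every component (non-container object or heterogeneous
# container) stored under the block, at any depth
for obj in m.components():
    print(obj.name)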
pyomo.core.kernel.heterogeneous_container.heterogeneous_containers(node, ctype=<class 'pyomo.core.kernel.base._no_ctype'>, active=True, descend_into=True)
A generator that yields all heterogeneous containers included in an object storage tree, including the root object.
Heterogeneous containers are categorized objects with a category type different from their children.
Parameters
• node – The root object.
• ctype – Indicates the category of objects to include. The default value indicates that
all categories should be included.
• active (True/None) – Controls whether or not to filter the iteration to include only
the active part of the storage tree. The default is True. Setting this keyword to None
causes the active status of objects to be ignored.
• descend_into (bool, function) – Indicates whether or not to descend into
a heterogeneous container. Default is True, which is equivalent to lambda x: True,
meaning all heterogeneous containers will be descended into.
Returns iterator of heterogeneous containers in the storage tree, including the root object.
Containers:
class pyomo.core.kernel.tuple_container.TupleContainer(*args)
Bases: pyomo.core.kernel.homogeneous_container.IHomogeneousContainer,
_abcoll.Sequence
A partial implementation of the IHomogeneousContainer interface that provides tuple-like storage functionality.
Complete implementations need to set the _ctype property at the class level and initialize the remaining ICate-
gorizedObject attributes during object creation. If using __slots__, a slot named “_data” must be included.
Note that this implementation allows nested storage of other ICategorizedObjectContainer implementations that
are defined with the same ctype.
__class__
alias of abc.ABCMeta
__delattr__
x.__delattr__(‘name’) <==> del x.name
__eq__(other)
x.__eq__(y) <==> x==y
__format__()
default object formatter
__getattribute__
x.__getattribute__(‘name’) <==> x.name
__hash__
__init__(*args)
x.__init__(. . . ) initializes x; see help(type(x)) for signature
__metaclass__
alias of abc.ABCMeta
__ne__(other)
x.__ne__(y) <==> x!=y
__new__(S, ...) → a new object with type S, a subtype of T
__reduce__()
helper for pickle
__reduce_ex__()
helper for pickle
__repr__
__setattr__
x.__setattr__(‘name’, value) <==> x.name = value
__sizeof__() → int
size of object in memory, in bytes
__str__()
Convert this object to a string by first attempting to generate its fully qualified name. If the object does
not have a name (because it does not have a parent), then a string containing the class name is returned.
classmethod __subclasshook__(C)
Abstract classes can override this to customize issubclass().
local_name
The object’s local name within the context of its parent. Alias for obj.getname(fully_qualified=False).
name
The object’s fully qualified name. Alias for obj.getname(fully_qualified=True).
parent
The object’s parent (possibly None).
storage_key
The object’s storage key within its parent
class pyomo.core.kernel.list_container.ListContainer(*args)
Bases: pyomo.core.kernel.tuple_container.TupleContainer, _abcoll.
MutableSequence
A partial implementation of the IHomogeneousContainer interface that provides list-like storage functionality.
Complete implementations need to set the _ctype property at the class level and initialize the remaining ICate-
gorizedObject attributes during object creation. If using __slots__, a slot named “_data” must be included.
Note that this implementation allows nested storage of other ICategorizedObjectContainer implementations that
are defined with the same ctype.
__class__
alias of abc.ABCMeta
__delattr__
x.__delattr__(‘name’) <==> del x.name
__eq__(other)
x.__eq__(y) <==> x==y
__format__()
default object formatter
__getattribute__
x.__getattribute__(‘name’) <==> x.name
__hash__
__init__(*args)
x.__init__(. . . ) initializes x; see help(type(x)) for signature
__metaclass__
alias of abc.ABCMeta
__ne__(other)
x.__ne__(y) <==> x!=y
__new__(S, ...) → a new object with type S, a subtype of T
__reduce__()
helper for pickle
__reduce_ex__()
helper for pickle
__repr__
__setattr__
x.__setattr__(‘name’, value) <==> x.name = value
__sizeof__() → int
size of object in memory, in bytes
__str__()
Convert this object to a string by first attempting to generate its fully qualified name. If the object does
not have a name (because it does not have a parent), then a string containing the class name is returned.
classmethod __subclasshook__(C)
Abstract classes can override this to customize issubclass().
This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImple-
mented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal
algorithm (and the outcome is cached).
__weakref__
list of weak references to the object (if defined)
activate(shallow=True)
Activate this container.
active
The active status of this object.
append(value)
S.append(object) – append object to the end of the sequence
child(key)
Get the child object associated with a given storage key for this container.
Raises KeyError – if the argument is not a storage key for any children of this container
children()
A generator over the children of this container.
clone()
Returns a copy of this object with the parent pointer set to None.
A clone is almost equivalent to deepcopy except that any categorized objects encountered that are not
descendants of this object will reference the same object on the clone.
components(active=True)
Generates an efficient traversal of all components stored under this container. Components are categorized
objects that are either (1) not containers, or (2) are heterogeneous containers.
Parameters active (True/None) – Controls whether or not to filter the iteration to in-
clude only the active part of the storage tree. The default is True. Setting this keyword
to None causes the active status of objects to be ignored.
Returns iterator of components in the storage tree
count(value) → integer – return number of occurrences of value
ctype
The object’s category type.
deactivate(shallow=True)
Deactivate this container.
extend(values)
S.extend(iterable) – extend sequence by appending elements from the iterable
__eq__(other)
x.__eq__(y) <==> x==y
__format__()
default object formatter
__getattribute__
x.__getattribute__(‘name’) <==> x.name
__init__(*args, **kwds)
x.__init__(. . . ) initializes x; see help(type(x)) for signature
__metaclass__
alias of abc.ABCMeta
__ne__(other)
x.__ne__(y) <==> x!=y
__new__(S, ...) → a new object with type S, a subtype of T
__reduce__()
helper for pickle
__reduce_ex__()
helper for pickle
__repr__
__setattr__
x.__setattr__(‘name’, value) <==> x.name = value
__sizeof__() → int
size of object in memory, in bytes
__str__()
Convert this object to a string by first attempting to generate its fully qualified name. If the object does
not have a name (because it does not have a parent), then a string containing the class name is returned.
classmethod __subclasshook__(C)
Abstract classes can override this to customize issubclass().
This is invoked early on by abc.ABCMeta.__subclasscheck__(). It should return True, False or NotImple-
mented. If it returns NotImplemented, the normal algorithm is used. Otherwise, it overrides the normal
algorithm (and the outcome is cached).
__weakref__
list of weak references to the object (if defined)
activate(shallow=True)
Activate this container.
active
The active status of this object.
child(key)
Get the child object associated with a given storage key for this container.
Raises KeyError – if the argument is not a storage key for any children of this container
children()
A generator over the children of this container.
clear() → None. Remove all items from D.
clone()
Returns a copy of this object with the parent pointer set to None.
A clone is almost equivalent to deepcopy except that any categorized objects encountered that are not
descendents of this object will reference the same object on the clone.
components(active=True)
Generates an efficient traversal of all components stored under this container. Components are categorized
objects that are either (1) not containers, or (2) are heterogeneous containers.
Parameters active (True/None) – Controls whether or not to filter the iteration to in-
clude only the active part of the storage tree. The default is True. Setting this keyword
to None causes the active status of objects to be ignored.
Returns iterator of components in the storage tree
ctype
The object’s category type.
deactivate(shallow=True)
Deactivate this container.
get(k[, d ]) → D[k] if k in D, else d. d defaults to None.
getname(fully_qualified=False, name_buffer={}, convert=<type ’str’>, relative_to=None)
Dynamically generates a name for this object.
Parameters
• fully_qualified (bool) – Generate a full name by iterating through all
ancestor containers. Default is False.
• convert (function) – A function that converts a storage key into a string
representation. Default is the built-in function str.
• relative_to (object) – When generating a fully qualified name, generate
the name relative to this block.
Returns If a parent exists, this method returns a string representing the name of the object
in the context of its parent; otherwise (if no parent exists), this method returns None.
items() → list of D’s (key, value) pairs, as 2-tuples
iteritems() → an iterator over the (key, value) items of D
iterkeys() → an iterator over the keys of D
itervalues() → an iterator over the values of D
keys() → list of D’s keys
local_name
The object’s local name within the context of its parent. Alias for obj.getname(fully_qualified=False).
name
The object’s fully qualified name. Alias for obj.getname(fully_qualified=True).
parent
The object’s parent (possibly None).
pop(k[, d ]) → v, remove specified key and return the corresponding value.
If key is not found, d is returned if given, otherwise KeyError is raised.
popitem() → (k, v), remove and return some (key, value) pair
as a 2-tuple; but raise KeyError if D is empty.
setdefault(k[, d ]) → D.get(k,d), also set D[k]=d if k not in D
storage_key
The object’s storage key within its parent
update([E ], **F) → None. Update D from mapping/iterable E and F.
If E present and has a .keys() method, does: for k in E: D[k] = E[k] If E present and lacks .keys() method,
does: for (k, v) in E: D[k] = v In either case, this is followed by: for k, v in F.items(): D[k] = v
values() → list of D’s values
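As a concrete illustration of the container interface documented above, the following minimal sketch (using the pyomo.kernel modeling layer; the variable names are arbitrary) exercises dict-style access, child lookup, and component traversal:
import pyomo.kernel as pmo

b = pmo.block()
b.vd = pmo.variable_dict()           # a container with the dict interface shown above
b.vd['x'] = pmo.variable(lb=0)
b.vd['y'] = pmo.variable(lb=0)

print(list(b.vd.keys()))             # storage keys: ['x', 'y']
print(b.vd.child('x') is b.vd['x'])  # child() returns the object stored under a key
for comp in b.components():          # traverse all components stored under the block
    print(comp.name)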
Contributing to Pyomo
We welcome all contributions including bug fixes, feature enhancements, and documentation improvements. Pyomo
manages source code contributions via GitHub pull requests (PRs).
A PR should consist of a single set of related changes. PRs for large-scale, non-functional changes (e.g., PEP8 cleanup, comments) should
be separated from functional changes. This simplifies the review process and ensures that functional changes are not
obscured by large amounts of non-functional changes.
14.1.2 Testing
Pyomo uses unittest, TravisCI, and Appveyor for testing and continuous integration. Submitted code should
include tests to establish the validity of its results and/or effects. Unit tests are preferred but we also accept integration
tests. When tests are run on a PR, we require at least 70% coverage of the lines modified in the PR and prefer coverage
closer to 90%. We also require that all tests pass before a PR will be merged.
At any point in the development cycle, a “work in progress” pull request may be opened by including ‘[WIP]’ at the
beginning of the PR title. This allows your code changes to be tested by Pyomo’s automatic testing infrastructure. Any
pull requests marked ‘[WIP]’ will not be reviewed or merged by the core development team. In addition, any ‘[WIP]’
pull request left open for an extended period of time without active development may be marked ‘stale’ and closed.
After a PR is opened it will be reviewed by at least two members of the core development team. The core development
team consists of anyone with write-access to the Pyomo repository. Pull requests opened by a core developer only
require one review. The reviewers will decide if they think a PR should be merged or if more changes are necessary.
Reviewers look for:
• Outside of pyomo.contrib: Code rigor and standards, edge cases, side effects, etc.
• Inside of pyomo.contrib: No “glaringly obvious” problems with the code
• Documentation and tests
The core development team tries to review pull requests in a timely manner but we make no guarantees on review
timeframes. In addition, PRs might not be reviewed in the order in which they are opened.
In order to contribute to Pyomo, you must first make a fork of the Pyomo git repository. Next, you should create a
branch on your fork dedicated to the development of the new feature or bug fix you’re interested in. Once you have this
branch checked out, you can start coding. Bug fixes and minor enhancements to existing Pyomo functionality should
be made in the appropriate files in the Pyomo code base. New examples, features, and packages built on Pyomo should
be placed in pyomo.contrib. Follow the link below to find out if pyomo.contrib is right for your code.
14.4 pyomo.contrib
Pyomo uses the pyomo.contrib package to facilitate the inclusion of third-party contributions that enhance Py-
omo's core functionality. There are two ways that pyomo.contrib can be used to integrate third-party packages:
• pyomo.contrib can provide wrappers for separate Python packages, thereby allowing these packages to be
imported as subpackages of pyomo.
• pyomo.contrib can include contributed packages that are developed and maintained outside of the Pyomo
developer team.
Including contrib packages in the Pyomo source tree provides a convenient mechanism for defining new functionality
that can be optionally deployed by users. We expect this mechanism to include Pyomo extensions and experimental
modeling capabilities. However, contrib packages are treated as optional packages, which are not maintained by the
Pyomo developer team. Thus, it is the responsibility of the code contributor to keep these packages up-to-date.
Contrib package contributions will be considered as pull-requests, which will be reviewed by the Pyomo developer
team. Specifically, this review will consider the suitability of the proposed capability, whether tests are available to
check the execution of the code, and whether documentation is available to describe the capability. Contrib packages
will be tested along with Pyomo. If test failures arise, then these packages will be disabled and an issue will be created
to resolve these test failures.
The following two examples illustrate the two ways that pyomo.contrib can be used to integrate third-party con-
tributions.
The pyomocontrib_simplemodel package is derived from Pyomo, and it defines the class SimpleModel that illustrates
how Pyomo can be used in a simple, less object-oriented manner. Specifically, this class mimics the modeling style
supported by PuLP.
While pyomocontrib_simplemodel can be installed and used separately from Pyomo, this package is included in
pyomo/contrib/simplemodel. This allows the package to be referenced as if it were defined as a subpackage of
pyomo.contrib. For example:
from pyomo.contrib.simplemodel import *
from math import pi

m = SimpleModel()
r = m.var('r', bounds=(0,None))
h = m.var('h', bounds=(0,None))
m += 2*pi*r*(r + h)
m += pi*h*r**2 == 355
status = m.solve("ipopt")
This example illustrates that a package can be distributed separately from Pyomo while appearing to be included in the
pyomo.contrib subpackage. Pyomo requires that a separate directory be defined under pyomo/contrib for each
such package, and the Pyomo developer team will approve the inclusion of third-party packages in this manner.
Third-party contributions can also be included directly within the pyomo.contrib package. The pyomo/
contrib/example package provides an example of how this can be done, including a directory for plugins and
package tests. For example, this package can be imported as a subpackage of pyomo.contrib:
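import pyomo.contrib.example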
Although pyomo.contrib.example is included in the Pyomo source tree, it is treated as an optional package.
Pyomo will attempt to import this package, but if an import failure occurs, Pyomo will silently ignore it. Otherwise,
this pyomo package will be treated like any other. Specifically:
• Plugin classes defined in this package are loaded when pyomo.environ is loaded.
• Tests in this package are run with other Pyomo tests.
Third-Party Contributions
Pyomo includes a variety of additional features and functionality provided by third parties through the pyomo.
contrib package. This package includes both contributions included with the main Pyomo distribution and wrappers
for third-party packages that must be installed separately.
These packages are maintained by the original contributors and are managed as optional Pyomo packages.
Contributed packages distributed with Pyomo:
The following code snippet demonstrates usage of the variable aggregation transformation on a concrete Pyomo model:
>>> from pyomo.environ import *
>>> m = ConcreteModel()
>>> m.v1 = Var(initialize=1, bounds=(1, 8))
>>> m.v2 = Var(initialize=2, bounds=(0, 3))
>>> m.v3 = Var(initialize=3, bounds=(-7, 4))
>>> m.v4 = Var(initialize=4, bounds=(2, 6))
>>> m.c1 = Constraint(expr=m.v1 == m.v2)
>>> m.c2 = Constraint(expr=m.v2 == m.v3)
>>> m.c3 = Constraint(expr=m.v3 == m.v4)
>>> TransformationFactory('contrib.aggregate_vars').apply_to(m)
To see the results of the transformation, you could then use the command
>>> m.pprint()
class pyomo.contrib.preprocessing.plugins.var_aggregator.VariableAggregator(**kwds)
Aggregate model variables that are linked by equality constraints.
Before:
x = y
a = 2x + 6y + 7
b = 5y + 6
After:
z = x = y
a = 8z + 7
b = 5z + 6
Warning: TODO: unclear what happens to “capital-E” Expressions at this point in time.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
update_variables(model)
Update the values of the variables that were replaced by aggregates.
TODO: reduced costs
class pyomo.contrib.preprocessing.plugins.bounds_to_vars.ConstraintToVarBoundTransform(**kwds)
Change constraints to be a bound on the variable.
Looks for constraints of the form k * v + c1 ≤ c2. Changes the bound on v to (c2 - c1)/k if it results in a tighter
bound; both upper and lower bounds are handled in this way. A usage sketch follows this entry.
Keyword arguments below are specified for the apply_to and create_using functions.
Keyword Arguments
• tolerance – tolerance on bound equality (LB = UB)
• detect_fixed – If True, fix the variable when |LB - UB| ≤ tolerance.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
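A minimal usage sketch (the model and constants are illustrative, not taken from the Pyomo documentation):
from pyomo.environ import ConcreteModel, Constraint, Var
from pyomo.contrib.preprocessing.plugins.bounds_to_vars import ConstraintToVarBoundTransform

m = ConcreteModel()
m.v = Var()
m.c = Constraint(expr=2 * m.v + 1 <= 7)  # single-variable constraint; implies v <= 3

ConstraintToVarBoundTransform().apply_to(m)
print(m.v.ub)  # expected to be tightened to 3.0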
class pyomo.contrib.preprocessing.plugins.induced_linearity.InducedLinearity(**kwds)
Reformulate nonlinear constraints with induced linearity.
Finds continuous variables 𝑣 where 𝑣 = 𝑑1 + 𝑑2 + 𝑑3 , where 𝑑’s are discrete variables. These continuous
variables may participate nonlinearly in other expressions, which may then be induced to be linear.
The overall algorithm flow can be summarized as:
1. Detect effectively discrete variables and the constraints that imply discreteness.
2. Determine the set of valid values for each effectively discrete variable
3. Find nonlinear expressions in which effectively discrete variables participate.
4. Reformulate nonlinear expressions appropriately.
Keyword arguments below are specified for the apply_to and create_using functions.
Keyword Arguments
• equality_tolerance – Tolerance on equality constraints.
• pruning_solver – Solver to use when pruning possible values.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
class pyomo.contrib.preprocessing.plugins.deactivate_trivial_constraints.TrivialConstraintDeactivator(**kwds)
Deactivates trivial constraints.
Trivial constraints take the form k1 = k2 or k1 ≤ k2, where k1 and k2 are constants. These constraints typically
arise when variables are fixed.
Keyword arguments below are specified for the apply_to and create_using functions.
Keyword Arguments
• tmp – True to store a set of transformed constraints for future reversion of the trans-
formation.
• ignore_infeasible – True to skip over trivial constraints that are infeasible in-
stead of raising a ValueError.
• return_trivial – a list to which the deactivated trivial constraints are appended
(side effect)
• tolerance – tolerance on constraint violations
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
revert(instance)
Revert constraints deactivated by the transformation.
Parameters instance – the model instance on which trivial constraints were earlier de-
activated.
class pyomo.contrib.preprocessing.plugins.detect_fixed_vars.FixedVarDetector(**kwds)
Detects variables that are de facto fixed but not considered fixed.
For each variable v found on the model, check whether its lower bound is within some tolerance of its
upper bound. If so, fix the variable to the value of its lower bound.
Keyword arguments below are specified for the apply_to and create_using functions.
Keyword Arguments
• tmp – True to store the set of transformed variables and their old values so that they
can be restored.
• tolerance – tolerance on bound equality (LB == UB)
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
revert(instance)
Revert variables fixed by the transformation.
class pyomo.contrib.preprocessing.plugins.equality_propagate.FixedVarPropagator(**kwds)
Propagate variable fixing for equalities of type 𝑥 = 𝑦.
If 𝑥 is fixed and 𝑦 is not fixed, then this transformation will fix 𝑦 to the value of 𝑥.
This transformation can also be performed as a temporary transformation, whereby the transformed variables
are saved and can be later unfixed.
Keyword arguments below are specified for the apply_to and create_using functions.
Keyword Arguments tmp – True to store the set of transformed variables and their old states so
that they can be later restored.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
revert(instance)
Revert variables fixed by the transformation.
class pyomo.contrib.preprocessing.plugins.equality_propagate.VarBoundPropagator(**kwds)
Propagate variable bounds for equalities of type 𝑥 = 𝑦.
If x has a tighter bound than y, then this transformation will adjust the bounds on y to match those of x.
Keyword arguments below are specified for the apply_to and create_using functions.
Keyword Arguments tmp – True to store the set of transformed variables and their old states so
that they can be later restored.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
revert(instance)
Revert variable bounds.
class pyomo.contrib.preprocessing.plugins.init_vars.InitMidpoint(**kwds)
Initialize non-fixed variables to the midpoint of their bounds.
• If the variable does not have bounds, set the value to zero.
• If the variable is missing one bound, set the value to that of the existing bound.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
class pyomo.contrib.preprocessing.plugins.init_vars.InitZero(**kwds)
Initialize non-fixed variables to zero.
• If setting the variable value to zero will violate a bound, set the variable value to the relevant bound value.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
class pyomo.contrib.preprocessing.plugins.remove_zero_terms.RemoveZeroTerms(**kwds)
Looks for 0 * v terms in a constraint and removes them.
Currently limited to processing linear constraints of the form x1 = 0 * x3, occurring as a result of fixing x2 = 0.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
class pyomo.contrib.preprocessing.plugins.strip_bounds.VariableBoundStripper(**kwds)
Strip bounds from variables.
Keyword arguments below are specified for the apply_to and create_using functions.
Keyword Arguments
• strip_domains – strip the domain for discrete variables as well
• reversible – Whether the bound stripping will be temporary. If so, store informa-
tion for reversion.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
revert(instance)
Revert variable bounds and domains changed by the transformation.
class pyomo.contrib.preprocessing.plugins.zero_sum_propagator.ZeroSumPropagator(**kwds)
Propagates fixed-to-zero for sums of only positive (or negative) vars.
If 𝑧 is fixed to zero and 𝑧 = 𝑥1 + 𝑥2 + 𝑥3 and 𝑥1 , 𝑥2 , 𝑥3 are all non-negative or all non-positive, then 𝑥1 , 𝑥2 ,
and 𝑥3 will be fixed to zero.
apply_to(model, **kwds)
Apply the transformation to the given model.
create_using(model, **kwds)
Create a new model with this transformation
The GDPopt solver in Pyomo allows users to solve nonlinear Generalized Disjunctive Programming (GDP) models
using logic-based decomposition approaches, as opposed to the conventional approach via reformulation to a Mixed
Integer Nonlinear Programming (MINLP) model.
The main advantage of these techniques is their ability to solve subproblems in a reduced space, including nonlinear
constraints only for the disjuncts that are logically active (True). As a result, GDPopt is most effective for nonlinear GDP models.
Three algorithms are available in GDPopt:
1. Logic-based outer approximation (LOA) [Turkay & Grossmann, 1996]
2. Global logic-based outer approximation (GLOA) [Lee & Grossmann, 2001]
3. Logic-based branch-and-bound (LBB) [Lee & Grossmann, 2001]
Usage and implementation details for GDPopt can be found in the PSE 2018 paper (Chen et al., 2018), or via its
preprint.
Credit for prototyping and development can be found in the GDPopt class documentation, below.
Usage of GDPopt to solve a Pyomo.GDP concrete model involves:
>>> SolverFactory('gdpopt').solve(model)
Note: By default, GDPopt uses the GDPopt-LOA strategy. Other strategies may be used by specifying the strategy
argument during solve(). All GDPopt options are listed below.
Chen et al., 2018 contains a flowchart of the GDPopt algorithm, taken from the preprint version.
Required imports
>>> from pyomo.environ import *
>>> from pyomo.gdp import *
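A small GDP model so that the display calls below have something to operate on (a minimal sketch; the constraints, bounds, and sub-solver choices are illustrative assumptions, not taken from the Pyomo documentation):
>>> model = ConcreteModel()
>>> model.x = Var(bounds=(-1.2, 2))
>>> model.y = Var(bounds=(-10, 10))
>>> model.d1 = Disjunct()
>>> model.d1.c = Constraint(expr=model.y >= model.x + 2)
>>> model.d2 = Disjunct()
>>> model.d2.c = Constraint(expr=model.y >= -model.x + 3)
>>> model.djn = Disjunction(expr=[model.d1, model.d2])
>>> model.objective = Objective(expr=model.y, sense=minimize)
>>> # mip_solver and nlp_solver name sub-solvers assumed to be installed
>>> results = SolverFactory('gdpopt').solve(model, strategy='LOA', mip_solver='glpk', nlp_solver='ipopt')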
>>> model.objective.display()
>>> model.display()
>>> model.pprint()
Note: When troubleshooting, it can often be helpful to turn on verbose output using the tee flag.
The GDPopt-LBB solver branches through relaxed subproblems with inactive disjunctions. It explores the possibilities
based on best lower bound, eventually activating all disjunctions and presenting the globally optimal solution.
To use the GDPopt-LBB solver, define your Pyomo GDP model as usual:
Required imports
>>> from pyomo.environ import *
>>> from pyomo.gdp import Disjunct, Disjunction
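A minimal model and solve call so that the result queries below have something to report (an illustrative sketch, not taken from the Pyomo documentation):
>>> model = ConcreteModel()
>>> model.x = Var(bounds=(0, 8))
>>> model.d1 = Disjunct()
>>> model.d1.c = Constraint(expr=model.x >= 5)
>>> model.d2 = Disjunct()
>>> model.d2.c = Constraint(expr=model.x <= 2)
>>> model.djn = Disjunction(expr=[model.d1, model.d2])
>>> model.obj = Objective(expr=model.x)
>>> # LBB solves MINLP subproblems; a suitable subsolver is assumed to be available
>>> results = SolverFactory('gdpopt').solve(model, strategy='LBB')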
>>> print(results)
>>> print(results.solver.status)
ok
>>> print(results.solver.termination_condition)
optimal
Warning: GDPopt optional arguments should be considered beta code and are subject to change.
class pyomo.contrib.gdpopt.GDPopt.GDPoptSolver
Keyword arguments below are specified for the solve function.
Decomposition solver for Generalized Disjunctive Programming (GDP) problems.
The GDPopt (Generalized Disjunctive Programming optimizer) solver applies a variety of decomposition-based
approaches to solve Generalized Disjunctive Programming (GDP) problems. GDP models can include nonlin-
ear, continuous variables and constraints, as well as logical conditions.
These approaches include:
• Logic-based outer approximation (LOA)
• Logic-based branch-and-bound (LBB)
• Partial surrogate cuts [pending]
• Generalized Benders decomposition [pending]
This solver implementation was developed by Carnegie Mellon University in the research group of Ignacio
Grossmann.
For nonconvex problems, LOA may not report rigorous lower/upper bounds.
Questions: Please make a post at StackOverflow and/or contact Qi Chen <https://github.com/qtothec>.
The multistart solver is used in cases where the objective function is known to be non-convex but the global optimum
is still desired. It works by running a nonlinear solver of your choice multiple times from different starting points and
returning the best of the resulting solutions.
Required import
>>> from pyomo.environ import *
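A minimal illustration of invoking the solver (the model is an arbitrary nonconvex example, and ipopt, the default sub-solver, is assumed to be available):
>>> m = ConcreteModel()
>>> m.x = Var(bounds=(-4, 4), initialize=1.0)
>>> m.obj = Objective(expr=m.x**4 - 4*m.x**2 + m.x)  # nonconvex, with two local minima
>>> result = SolverFactory('multistart').solve(m, iterations=5)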
class pyomo.contrib.multistart.multi.MultiStart
Solver wrapper that initializes at multiple starting points.
# TODO: also return appropriate duals
For theoretical underpinning, see https://www.semanticscholar.org/paper/How-many-random-restarts-are-enough-Dick-Wong/55b248b398a03dc1ac9a65437f88b835554329e0
Keyword arguments below are specified for the solve function.
Keyword Arguments
• strategy – Specify the restart strategy.
– "rand": random choice between variable bounds
– "midpoint_guess_and_bound": midpoint between current value and farthest bound
– "rand_guess_and_bound": random choice between current value and farthest bound
– "rand_distributed": random choice among evenly distributed values
– "midpoint": exact midpoint between the bounds. If using this option, multiple iterations are useless.
• solver – solver to use, defaults to ipopt
• solver_args – Dictionary of keyword arguments to pass to the solver.
• iterations – Specify the number of iterations, defaults to 10. If -1 is specified, the
high confidence stopping rule will be used
15.4 parmest
parmest is a Python package built on the Pyomo optimization modeling language ([PyomoJournal], [PyomoBookII])
to support parameter estimation using experimental data along with confidence regions and subsequent creation of
scenarios for PySP.
15.4.1 Overview
The Python package called parmest facilitates model-based parameter estimation along with characterization of un-
certainty associated with the estimates. For example, parmest can provide confidence regions around the parameter
estimates. Additionally, parameter vectors, each with an attached probability estimate, can be used to build scenarios
for design optimization.
Functionality in parmest includes:
• Model based parameter estimation using experimental data
• Bootstrap resampling for parameter estimation
• Confidence regions based on single or multi-variate distributions
• Likelihood ratio
• Parallel processing
Background
The goal of parameter estimation is to estimate values for a vector, θ, to use in the functional form

y = g(x; θ)

where x is a vector containing measured data, typically in high dimension, θ is a vector of values to estimate, in much
lower dimension, and the response vectors are given as y_i, i = 1, . . . , m with m also much smaller than the dimension
of x. This is done by collecting S data points, which are (x̃, ỹ) pairs, and then finding θ values that minimize some
function of the deviation between the measured values ỹ and the values g(x̃; θ) for each corresponding x̃, which is a
subvector of the vector x. Note that for most experiments, only small parts of x will change from one experiment to the next.
The following least squares objective can be used to estimate parameter values, where data points are indexed by
s = 1, . . . , S:

\min_{\theta} Q(\theta; \tilde{x}, \tilde{y}) \equiv \sum_{s=1}^{S} q_{s}(\theta; \tilde{x}_{s}, \tilde{y}_{s})

where

q_{s}(\theta; \tilde{x}_{s}, \tilde{y}_{s}) = \sum_{i=1}^{m} w_{i} \left[ \tilde{y}_{si} - g_{i}(\tilde{x}_{s}; \theta) \right]^{2},

i.e., the contribution of sample s to Q, where w ∈ ℝ^m is a vector of weights for the responses. For multi-dimensional
y this is the squared weighted L2 norm, and for univariate y it is the weighted squared deviation. Custom objectives can
also be defined for parameter estimation.
In the applications of interest to us, the function g(·) is usually defined as an optimization problem with a large
number of (perhaps constrained) optimization variables, a subset of which are fixed at values x̃ when the optimization
is performed. In other applications, the values of θ are fixed parameter values, but for the problem formulation above,
the values of θ are the primary optimization variables. Note that in general, the function g(·) will have a large set of
parameters that are not included in θ. Often, the y_{is} will be vectors themselves, perhaps indexed by time with index
sets that vary with s.
parmest is included in Pyomo (pyomo/contrib/parmest). To run parmest, you will need Python version 3.x along with
various Python package dependencies and the IPOPT software library for non-linear optimization.
1. numpy
2. pandas
3. pyomo
4. pyutilib
5. matplotlib (optional, used for graphics)
6. scipy.stats (optional, used for graphics)
7. seaborn (optional, used for graphics)
8. mpi4py.MPI (optional, used for parallel computing)
IPOPT
Testing
cd pyomo/contrib/parmest/tests
python test_parmest.py
Parameter Estimation using parmest requires a Pyomo model, experimental data which defines multiple scenarios, and
a list of thetas to estimate. parmest uses PySP [PyomoBookII] to solve a two-stage stochastic programming problem,
where the experimental data is used to create a scenario tree. The objective function needs to be written in PySP form
with the Pyomo Expression for first stage cost (named "FirstStageCost") set to zero and the Pyomo Expression for
second stage cost (named "SecondStageCost") defined as the deviation between the model and the observations (typically
the sum of squared deviations between model values and observed values).
If the Pyomo model is not formatted as a two-stage stochastic programming problem in this format, the user can supply
a custom function to use as the second stage cost and the Pyomo model will be modified within parmest to match the
specifications required by PySP. The PySP callback function is also defined within parmest. The callback function
returns a populated and initialized model for each scenario.
To use parmest, the user creates an Estimator object and uses its methods for:
• Parameter estimation, theta_est
• Bootstrap resampling for parameter estimation, theta_est_bootstrap
• Compute the objective at theta values, objective_at_theta
• Compute likelihood ratio, likelihood_ratio_test
An Estimator object can be created using the following code. A description of each argument is listed below.
Examples are provided in the Examples Section.
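A minimal sketch of the construction (model_function, data, theta_names, and objective_function are placeholders for the arguments described below; the last argument is optional):
import pyomo.contrib.parmest.parmest as parmest

pest = parmest.Estimator(model_function, data, theta_names, objective_function)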
Model function
The first argument is a function which uses data for a single scenario to return a populated and initialized Pyomo
model for that scenario. Parameters that the user would like to estimate must be defined as variables (Pyomo Var). The
variables can be fixed (parmest unfixes variables that will be estimated). The model does not have to be specifically
written for parmest. That is, parmest can modify the objective for PySP; see Objective function below.
Data
The second argument is the data which will be used to populate the Pyomo model. Supported data formats include:
• Pandas Dataframe where each row is a separate scenario and column names refer to observed quantities.
Pandas DataFrames are easily stored and read in from csv, excel, or databases, or created directly in Python.
• List of dictionaries where each entry in the list is a separate scenario and the keys (or nested keys) refer to
observed quantities. Dictionaries are often preferred over DataFrames when using static and time series data.
Dictionaries are easily stored and read in from json or yaml files, or created directly in Python.
• List of json file names where each entry in the list contains a json file name for a separate scenario. This format
is recommended when using large datasets in parallel computing.
The data must be compatible with the model function that returns a populated and initialized Pyomo model for a single
scenario. Data can include multiple entries per variable (time series and/or duplicate sensors). This information can
be included in custom objective functions, see Objective function below.
Theta names
The third argument is a list of variable names that the user wants to estimate. The list contains strings with Var names
from the Pyomo model.
Objective function
The fourth argument is an optional argument which defines the optimization objective function to use in parameter
estimation. If no objective function is specified, the Pyomo model is used "as is" and should be defined with
"FirstStageCost" and "SecondStageCost" expressions that are used to build an objective for PySP. If the Pyomo model
is not written as a two-stage stochastic programming problem in this format, and/or if the user wants to use an objective
that is different from the original model, a custom objective function can be defined for parameter estimation. The
objective function arguments include model and data, and the objective function returns a Pyomo expression which is
used to define "SecondStageCost". The objective function can be used to customize data points and weights that are
used in parameter estimation. A minimal sketch of such a function is shown below.
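The sketch below assumes a model with outlet concentration variables ca and cb and a data record with matching entries (the names are illustrative):
def SSE(model, data):
    # sum of squared errors between observed and modeled concentrations
    expr = (float(data['ca']) - model.ca) ** 2 \
         + (float(data['cb']) - model.cb) ** 2
    return expr

# The function is then passed as the fourth argument to the Estimator, e.g.:
# pest = parmest.Estimator(reactor_design_model, data, theta_names, SSE)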
15.4.4 Graphics
parmest includes a function, pairwise_plot, to visualize results from bootstrap and likelihood ratio analysis.
Confidence intervals using rectangular, multivariate normal, and kernel density estimate distributions can be included
in the plot and used for scenario creation. Examples are provided in the Examples Section.
The pairwise plot includes a histogram of each parameter along the diagonal and a scatter plot for each pair of param-
eters in the upper and lower sections. The pairwise plot can also include the following optional information:
• A single value for each theta (generally theta* from parameter estimation).
• Confidence intervals for rectangular, multivariate normal, and/or kernel density estimate distributions at a specified
level (e.g., 0.8). For plots with more than 2 parameters, theta* is used to extract a slice of the confidence
region for each pairwise plot.
• Filled contour lines for objective values at a specified level (e.g., 0.8). For plots with more than 2 parameters,
theta* is used to extract a slice of the contour lines for each pairwise plot.
• In addition to creating a figure, the user can optionally return the confidence region distributions which can be
used to generate scenarios.
The following examples were generated using the reactor design example. Fig. 15.1 uses output from the
bootstrap analysis, and Fig. 15.2 uses output from the likelihood ratio test.
15.4.5 Examples
Fig. 15.1: Pairwise bootstrap plot with rectangular, multivariate normal and kernel density estimation confidence
region.
Fig. 15.2: Pairwise likelihood ratio plot with contours of the objective and points that lie within an alpha confidence
region.
Each example contains a Python file that contains the Pyomo model and a Python file to run parameter estimation.
The description below uses the reactor design example. The file reactor_design.py includes a function which returns
a populated instance of the Pyomo model. Note that the model is defined to maximize cb and that k1, k2, and k3 are
fixed. The __main__ program is included for easy testing of the model declaration.
"""
Continuously stirred tank reactor model, based on
pyomo\examples\doc\pyomobook\nonlinear-ch\react_design\ReactorDesign.py
"""
import pandas as pd
from pyomo.environ import *
from pyomo.core import *
def reactor_design_model(data):
    # Create the concrete model
    model = ConcreteModel()

    # Rate constants
    model.k1 = Var(initialize=5.0/6.0, within=PositiveReals)    # min^-1
    model.k2 = Var(initialize=5.0/3.0, within=PositiveReals)    # min^-1
    model.k3 = Var(initialize=1.0/6000.0, within=PositiveReals) # m^3/(gmol min)
    model.k1.fixed = True
    model.k2.fixed = True
    model.k3.fixed = True

    # Inlet concentration and space velocity, taken from the scenario data
    model.caf = Param(initialize=float(data['caf']), within=PositiveReals)
    model.sv = Param(initialize=float(data['sv']), within=PositiveReals)

    # Outlet concentrations (cc, cd and their balances are omitted in this excerpt)
    model.ca = Var(initialize=5000.0, within=PositiveReals)
    model.cb = Var(initialize=2000.0, within=PositiveReals)

    # Objective
    model.obj = Objective(expr=model.cb, sense=maximize)

    # Constraints (only the balance on component A is shown in this excerpt)
    model.ca_bal = Constraint(expr=(0 == model.sv * model.caf
                                    - model.sv * model.ca - model.k1 * model.ca
                                    - 2.0 * model.k3 * model.ca ** 2.0))

    return model
The file reactor_design_parmest.py uses parmest to estimate values of k1, k2, and k3 by minimizing the sum of
squared error between model and observed values of ca, cb, cc, and cd. The file also uses parmest to run parameter
estimation with bootstrap resampling and perform a likelihood ratio test over a range of theta values.
import numpy as np
import pandas as pd
from itertools import product
import pyomo.contrib.parmest.parmest as parmest
from reactor_design import reactor_design_model
# Vars to estimate
theta_names = ['k1', 'k2', 'k3']

# Data
data = pd.read_excel('reactor_data.xlsx')

# Create the parmest estimator and run parameter estimation
pest = parmest.Estimator(reactor_design_model, data, theta_names)
obj, theta = pest.theta_est()

# Parameter estimation with bootstrap resampling
bootstrap_theta = pest.theta_est_bootstrap(50)
print(bootstrap_theta.head())
parmest.pairwise_plot(bootstrap_theta)
parmest.pairwise_plot(bootstrap_theta, theta, 0.8, ['MVN', 'KDE', 'Rect'])

# Likelihood ratio test over a grid of theta values (grid ranges are illustrative)
k1_vals = np.arange(0.78, 0.92, 0.02)
k2_vals = np.arange(1.48, 1.79, 0.05)
k3_vals = np.arange(1/6000, 1/5500, 1e-5)
theta_vals = pd.DataFrame(list(product(k1_vals, k2_vals, k3_vals)), columns=theta_names)
obj_at_theta = pest.objective_at_theta(theta_vals)
print(obj_at_theta.head())
The semibatch and Rooney Biegler examples are defined in a similar manner.
Additional use cases include:
• Parameter estimation using data with duplicate sensors and time-series data (reactor design example)
• Parameter estimation using mpi4py, the example saves results to a file for later analysis/graphics (semibatch
example)
Parallel implementation in parmest is preliminary. To run parmest in parallel, you need the mpi4py Python package
and a compatible MPI installation. If you do NOT have mpi4py or an MPI installation, parmest still works (you should
not get MPI import errors).
For example, the following command can be used to run the semibatch model in parallel:
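mpiexec -n 4 python semibatch_parmest_parallel.py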
The file semibatch_parmest_parallel.py is shown below. Results are saved to file for later analysis.
"""
The following script can be used to run semibatch parameter estimation in
parallel and save results to files for later analysis and graphics.
Example command: mpiexec -n 4 python semibatch_parmest_parallel.py
"""
import numpy as np
import pandas as pd
from itertools import product
import pyomo.contrib.parmest.parmest as parmest
from semibatch import generate_model
# Vars to estimate
theta_names = ['k1', 'k2', 'E1', 'E2']

# Data: list of json file names, one per experiment (file names are illustrative)
data = ['exp{}.out'.format(i) for i in range(1, 11)]

# Create the parmest estimator
pest = parmest.Estimator(generate_model, data, theta_names)

# Bootstrap resampling, saving results to file
bootstrap_theta = pest.theta_est_bootstrap(100)
bootstrap_theta.to_csv('bootstrap_theta.csv')
k1 = np.arange(4, 24, 3)
k2 = np.arange(40, 160, 40)
E1 = np.arange(29000, 32000, 500)
E2 = np.arange(38000, 42000, 500)
theta_vals = pd.DataFrame(list(product(k1, k2, E1, E2)), columns=theta_names)
obj_at_theta = pest.objective_at_theta(theta_vals)
obj_at_theta.to_csv('obj_at_theta.csv')
Installation
The mpi4py Python package should be installed using conda. The following installation instructions were tested on a
Mac with Python 3.5.
Create a conda environment and install mpi4py using the following commands:
The first one should be faster and should start 4 instances of Python.
15.4.7 API
theta_est(solver=’ef_ipopt’, bootlist=None)
Run parameter estimation using all data
Parameters solver (string, optional) – “ef_ipopt” or “k_aug”. Default is
“ef_ipopt”.
Returns
• objectiveval (float) – The objective function value
• thetavals (dict) – A dictionary of all values for theta
• Hessian (dict) – A dictionary of dictionaries for the Hessian. The Hessian is not
returned if the solver is ef.
theta_est_bootstrap(N, samplesize=None, replacement=True, seed=None, return_samples=False)
Run parameter estimation using N bootstrap samples
Parameters
• N (int) – Number of bootstrap samples to draw from the data
• samplesize (int or None, optional) – Sample size; if None, samplesize will be set to the number of experiments
• replacement (bool, optional) – Sample with or without replacement
• seed (int or None, optional) – Set the random seed
• return_samples (bool, optional) – Return a list of experiment numbers used in each bootstrap estimation
Returns bootstrap_theta – Theta values for each bootstrap sample and (if return_samples = True) the sample numbers used in each estimation
Return type DataFrame
pyomo.contrib.parmest.parmest.group_data(data, groupby_column_name, use_mean=None)
Group data by experiment/scenario
Parameters
• data (DataFrame) – Data
• groupby_column_name (strings) – Name of data column which contains ex-
periment/scenario numbers
• use_mean (list of column names or None, optional) – Name of
data columns which should be reduced to a single value per experiment/scenario by
taking the mean
Returns grouped_data – Grouped data
Return type list of dictionaries
pyomo.contrib.parmest.graphics.pairwise_plot(theta_values, theta_star=None, alpha=None, distributions=[], axis_limits=None, add_obj_contour=True, add_legend=True, filename=None, return_scipy_distributions=False)
Plot pairwise relationships for theta values, and optionally confidence intervals and results from likelihood ratio tests
Parameters
• theta_values (DataFrame, columns = variable names and (optionally) 'obj' and alpha values) – Theta values and (optionally) an objective value and results from the likelihood ratio test
The Pyomo-MC++ interface allows for bounding of factorable functions using the MC++ library developed by the
OMEGA research group at Imperial College London. Documentation for MC++ may be found on the MC++ website.
Pyomo now supports automated downloading and compilation of MC++. To install MC++ and other third party
compiled extensions, run:
pyomo download-extensions
pyomo build-extensions
To get and install just MC++, run the following commands in the pyomo/contrib/mcpp directory:
python getMCPP.py
python build.py
This should install MC++ to the pyomo plugins directory, by default located at $HOME/.pyomo/.
Support for MC++ has only been validated by Pyomo developers using Linux and OSX. Installation instructions for
the MC++ library may be found on the MC++ website.
We assume that you have installed MC++ into a directory of your choice. We will denote this directory by
$MCPP_PATH. For example, you should see that the file $MCPP_PATH/INSTALL exists.
Navigate to the pyomo/contrib/mcpp directory in your pyomo installation. This directory should contain a file
named mcppInterface.cpp. You will need to compile this file using the following command:
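A representative invocation (the include paths and Python version are assumptions that depend on your MC++ checkout and Python installation):
g++ -I /usr/include/python3.6m -I $MCPP_PATH/src/mc -I $MCPP_PATH/src/3rdparty/fadbad++ -fPIC -O2 -c mcppInterface.cpp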
This links the MC++ required library FADBAD++, MC++ itself, and Python to compile the Pyomo-MC++ interface.
If successful, you will now have a file named mcppInterface.o in your working directory. If you are not using
Python 3.6, you will need to link to the appropriate Python version. You now need to create a shared object file with
the following command:
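Again, a representative form of the command:
g++ -shared mcppInterface.o -o mcppInterface.so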
You may then test your installation by running the test file:
python test_mcpp.py
The Mixed-Integer Nonlinear Decomposition Toolbox in Pyomo (MindtPy) solver allows users to solve Mixed-Integer
Nonlinear Programs (MINLP) using decomposition algorithms. These decomposition algorithms usually rely on the
solution of Mixed-Integer Linear Programs (MILP) and Nonlinear Programs (NLP).
MindtPy currently implements the Outer Approximation (OA) algorithm originally described in Duran & Grossmann.
Usage and implementation details for MindtPy can be found in the PSE 2018 paper Bernal et al., (ref, preprint).
Usage of MindtPy to solve a Pyomo concrete model involves:
>>> SolverFactory('mindtpy').solve(model)
Required imports
>>> from pyomo.environ import *
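A small MINLP so that the display calls below have something to show (a minimal sketch loosely following the style of the MindtPy examples; the constraint constants and sub-solver names are illustrative assumptions):
>>> model = ConcreteModel()
>>> model.x = Var(bounds=(1.0, 10.0), initialize=5.0)
>>> model.y = Var(within=Binary)
>>> model.c1 = Constraint(expr=(model.x - 4.0)**2 - model.x <= 50.0 * (1 - model.y))
>>> model.c2 = Constraint(expr=model.x * log(model.x) + 5.0 <= 50.0 * model.y)
>>> model.objective = Objective(expr=model.x, sense=minimize)
>>> SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt')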
>>> model.objective.display()
>>> model.display()
>>> model.pprint()
Note: When troubleshooting, it can often be helpful to turn on verbose output using the tee flag.
Warning: MindtPy optional arguments should be considered beta code and are subject to change.
class pyomo.contrib.mindtpy.MindtPy.MindtPySolver
A decomposition-based MINLP solver.
available(exception_flag=True)
Check if solver is available. TODO: For now, it is always available. However, sub-solvers may not always
be available, and so this should reflect that possibility.
solve(model, **kwds)
Solve the model.
Warning: this solver is still in beta. Keyword arguments subject to change. Undocumented keyword arguments definitely subject to change.
Warning: at this point in time, if you try to use PSC or GBD with anything other than IPOPT as the NLP solver, bad things will happen. This is because the suffixes are not in place to extract dual values from the variable bounds for any other solver. TODO: fix needed with the GBD implementation.
Parameters model (Block) – a Pyomo model or block to be solved
version()
Return a 3-tuple describing the solver version.
The z3 Satisfiability Solver interface can convert Pyomo variables and expressions for use with the z3 Satisfiability
Solver.
15.7.1 Installation
z3 is required for use of the Sat Solver and can be installed via the command below.
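The z3 Python bindings are distributed on PyPI as z3-solver, so the usual installation is:
pip install z3-solver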
Required import
>>> from pyomo.environ import *
>>> from pyomo.contrib.satsolver.satsolver import SMTSatSolver
Invoke the SAT solver using the optional argument model to automatically process the Pyomo model:
>>> is_feasible = SMTSatSolver(model = m).check()
Bibliography
[RooneyBiegler] W.C. Rooney, L.T. Biegler, “Design for model parameter uncertainty using nonlinear confidence
regions”, AIChE Journal, 47(8), 2001
[SemiBatch] O. Abel, W. Marquardt, “Scenario-integrated modeling and optimization of dynamic systems”, AIChE
Journal, 46(4), 2000
[Vielma_et_al] J. P. Vielma, S. Ahmed, G. Nemhauser. “Mixed-Integer Models for Non-separable Piecewise Linear
Optimization: Unifying framework and Extensions”, Operations Research 58, 2010. pp. 303-315.