2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -3,4 +3,4 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"

[compat]
Documenter = "~0.21"
Documenter = "~0.22"
153 changes: 112 additions & 41 deletions docs/src/apimanual.md
@@ -1,5 +1,9 @@
```@meta
CurrentModule = MathOptInterface
DocTestSetup = quote
using MathOptInterface
const MOI = MathOptInterface
end
```

# Manual
@@ -138,15 +142,25 @@ from the [`ModelLike`](@ref) abstract type.
Notably missing from the model API is the method to solve an optimization problem.
`ModelLike` objects may store an instance (e.g., in memory or backed by a file format)
without being linked to a particular solver. In addition to the model API, MOI
defines [`AbstractOptimizer`](@ref). *Optimizers* (or solvers) implement the
model API (inheriting from `ModelLike`) and additionally provide methods to
solve the model.

Through the rest of the manual, `model` is used as a generic `ModelLike`, and
`optimizer` is used as a generic `AbstractOptimizer`.

Models are constructed by:
* adding variables using [`add_variable`](@ref) (or [`add_variables`](@ref)),
  see [Adding variables](@ref);
* setting an objective sense and function using [`set`](@ref),
  see [Setting an objective](@ref);
* and adding constraints using [`add_constraint`](@ref) (or
  [`add_constraints`](@ref)), see [Sets and Constraints](@ref).

The way the problem is solved by the optimizer is controlled by
[`AbstractOptimizerAttribute`](@ref)s, see [Solver-specific attributes](@ref).
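
For instance, solver verbosity is typically controlled through such an
attribute. A minimal sketch, assuming a solver package (GLPK here, as
elsewhere in this manual) and that your MOI version provides the `MOI.Silent`
attribute:

```julia
using MathOptInterface
const MOI = MathOptInterface
using GLPK

optimizer = GLPK.Optimizer()

# Optimizer attributes are set with `MOI.set`, just like model attributes.
# `MOI.Silent` asks the solver not to print its log.
MOI.set(optimizer, MOI.Silent(), true)
```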

## Adding variables

All variables in MOI are scalar variables.
New scalar variables are created with [`add_variable`](@ref) or
@@ -210,6 +224,8 @@ the function ``5x_1 - 2.3x_2 + 1``.
`[ScalarAffineTerm(5.0, x[1]), ScalarAffineTerm(-2.3, x[2])]`. This is
Julia's broadcast syntax and is used quite often.
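
The broadcast shorthand above can be sketched as a standalone fragment
(`model` and the variables are created here purely for illustration):

```julia
using MathOptInterface
const MOI = MathOptInterface

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variables(model, 2)

# `ScalarAffineTerm.(coefficients, variables)` broadcasts over both vectors,
# producing [ScalarAffineTerm(5.0, x[1]), ScalarAffineTerm(-2.3, x[2])].
terms = MOI.ScalarAffineTerm.([5.0, -2.3], x)

# The affine function 5x₁ - 2.3x₂ + 1: the terms plus a constant.
f = MOI.ScalarAffineFunction(terms, 1.0)
```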

### Setting an objective

Objective functions are assigned to a model by setting the
[`ObjectiveFunction`](@ref) attribute. The [`ObjectiveSense`](@ref) attribute is
used for setting the optimization sense.
@@ -290,9 +306,8 @@ add_constraint(model, VectorOfVariables([x,y,z]), SecondOrderCone(3))

Below is a list of common constraint types and how they are represented
as function-set pairs in MOI. In the notation below, ``x`` is a vector of decision variables,
``x_i`` is a scalar decision variable, ``\alpha, \beta`` are scalar constants,
``a, b`` are constant vectors, and ``A`` is a constant matrix.

#### Linear constraints

@@ -301,11 +316,11 @@ as function-set pairs in MOI. In the notation below, ``x`` is a vector of decisi
| ``a^Tx \le \beta`` | `ScalarAffineFunction` | `LessThan` |
| ``a^Tx \ge \alpha`` | `ScalarAffineFunction` | `GreaterThan` |
| ``a^Tx = \beta`` | `ScalarAffineFunction` | `EqualTo` |
| ``\alpha \le a^Tx \le \beta`` | `ScalarAffineFunction` | `Interval` |
| ``x_i \le \beta`` | `SingleVariable` | `LessThan` |
| ``x_i \ge \alpha`` | `SingleVariable` | `GreaterThan` |
| ``x_i = \beta`` | `SingleVariable` | `EqualTo` |
| ``\alpha \le x_i \le \beta`` | `SingleVariable` | `Interval` |
| ``Ax + b \in \mathbb{R}_+^n`` | `VectorAffineFunction` | `Nonnegatives` |
| ``Ax + b \in \mathbb{R}_-^n`` | `VectorAffineFunction` | `Nonpositives` |
| ``Ax + b = 0`` | `VectorAffineFunction` | `Zeros` |
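
To make the table concrete, here is a sketch adding two of these function-set
pairs (the coefficients and the generic `MOI.Utilities.Model` are chosen purely
for illustration):

```julia
using MathOptInterface
const MOI = MathOptInterface

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variables(model, 2)

# a^T x ≤ β as a ScalarAffineFunction-in-LessThan constraint,
# here x₁ + 2x₂ ≤ 3.
a = [1.0, 2.0]
f = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(a, x), 0.0)
MOI.add_constraint(model, f, MOI.LessThan(3.0))

# α ≤ x₁ ≤ β as a SingleVariable-in-Interval constraint (a variable bound).
MOI.add_constraint(model, MOI.SingleVariable(x[1]), MOI.Interval(0.0, 1.0))
```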
@@ -469,59 +484,115 @@ non-global tree search solvers like

## A complete example: solving a knapsack problem


We first need to select a solver supporting the given problem (see
[`supports`](@ref) and [`supports_constraint`](@ref)). In this example, we
want to solve a binary-constrained knapsack problem:
`max c'x: w'x <= C, x binary`. Suppose we choose GLPK:
```julia
using MathOptInterface
const MOI = MathOptInterface
using GLPK

optimizer = GLPK.Optimizer()
```
We first define the constants of the problem:
```jldoctest knapsack; setup = :(optimizer = MOI.Utilities.MockOptimizer(MOI.Utilities.Model{Float64}()); MOI.Utilities.set_mock_optimize!(optimizer, mock -> MOI.Utilities.mock_optimize!(mock, ones(3))))
c = [1.0, 2.0, 3.0]
w = [0.3, 0.5, 1.0]
C = 3.2

num_variables_to_create = length(c)

# output

3
```
We create the variables of the problem and set the objective function:
```jldoctest knapsack
x = MOI.add_variables(optimizer, num_variables_to_create)
objective_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0)
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
objective_function)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)

# output

MAX_SENSE::OptimizationSense = 1
```
We add the knapsack constraint and integrality constraints:
```jldoctest knapsack
knapsack_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(w, x), 0.0)
MOI.add_constraint(optimizer, knapsack_function, MOI.LessThan(C))

# Add integrality constraints.
for i in 1:num_variables_to_create
MOI.add_constraint(optimizer, MOI.SingleVariable(x[i]), MOI.ZeroOne())
end

# output

```
We are all set! We can now call [`optimize!`](@ref) and wait for the solver to
find the solution:
```jldoctest knapsack
MOI.optimize!(optimizer)

# output

```
The first thing to check after optimization is why the solver stopped, e.g.,
did it stop because of a time limit, or because it found the optimal solution?
```jldoctest knapsack
MOI.get(optimizer, MOI.TerminationStatus())

# output


OPTIMAL::TerminationStatusCode = 1
```
It found the optimal solution! Now let's see what that solution is.
But first, let's check whether the solver has more than one solution to share:
```jldoctest knapsack
MOI.get(optimizer, MOI.ResultCount())

# output

1
```
Only one.

!!! note
    While the value of `MOI.get(optimizer, MOI.ResultCount())` is often one, it
    is important to check it in order to write robust code. For instance, when
    the problem is unbounded, the solver might return two results: one feasible
    primal solution `x` showing that the primal is feasible, and one
    infeasibility ray `r` showing that the dual is infeasible. An unbounded ray
    is then given by `x + λ * r` with `λ ≥ 0`; note that neither result alone
    is sufficient to certify unboundedness.
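
The checks above can be combined into one defensive pattern. A sketch, assuming
`optimizer` has already been solved and `x` holds the variable indices:

```julia
# Only query a primal solution when the solver reports that it stopped at an
# optimum AND that at least one feasible primal result is available.
if MOI.get(optimizer, MOI.TerminationStatus()) == MOI.OPTIMAL &&
        MOI.get(optimizer, MOI.ResultCount()) >= 1 &&
        MOI.get(optimizer, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
    x_val = MOI.get(optimizer, MOI.VariablePrimal(), x)
else
    error("The solver did not find a provably optimal solution.")
end
```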

As the termination status is `MOI.OPTIMAL` and there is only one result, this
result should be a feasible solution. Let's check to confirm:
```jldoctest knapsack
MOI.get(optimizer, MOI.PrimalStatus())

# output

FEASIBLE_POINT::ResultStatusCode = 1
```
Good, so this is indeed the optimal solution! What is its objective value?
```jldoctest knapsack
MOI.get(optimizer, MOI.ObjectiveValue())

# output

6.0
```
And what is the value of the variables `x`?
```jldoctest knapsack
MOI.get(optimizer, MOI.VariablePrimal(), x)

# output

3-element Array{Float64,1}:
1.0
1.0
1.0
```

## Problem modification