# Querying Solutions

So far we have seen all the elements and constructs related to writing a JuMP optimization model. In this section we reach the point of what to do with a solved problem. Suppose your model is named `model`. Right after the call to `optimize!(model)`, it's natural to ask JuMP questions about the finished optimization step. Typical questions include:

- Why has the optimization process stopped? Did it hit the time limit or run into numerical issues?
- Do I have a solution to my problem?
- Is it optimal?
- Do I have a dual solution?
- How sensitive is the solution to data perturbations?

JuMP closely follows the concepts defined in MathOptInterface (MOI) to answer user questions about a finished call to `optimize!(model)`. There are three main steps in querying a solution:

First, we can query `termination_status`, which will tell us why the optimization stopped. This could be due to a number of reasons: for example, the solver found an optimal solution, the problem was proven to be infeasible, or a user-provided limit such as a time limit was encountered. For more information, see the Termination statuses section below.

Second, we can query `primal_status` and `dual_status`, which will tell us what kind of results we have for our primal and dual solutions. This might be an optimal primal-dual pair, a primal solution without a corresponding dual solution, or a certificate of primal or dual infeasibility. For more information, see the Solution statuses section below.

Third, we can query `value` and `dual` to obtain the primal and dual values of the optimization variables and constraints (if there are values to be queried).

## Termination statuses

The reason why the optimization of `model` finished is given by `termination_status(model)`. This function will return a `MOI.TerminationStatusCode` enum.

`TerminationStatusCode`

An Enum of possible values for the `TerminationStatus` attribute. This attribute is meant to explain the reason why the optimizer stopped executing in the most recent call to `optimize!`.

If no call has been made to `optimize!`, then the `TerminationStatus` is:

- `OPTIMIZE_NOT_CALLED`: The algorithm has not started.
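For example, `termination_status` on a freshly constructed model returns this status; a minimal, solver-free check:

```
julia> using JuMP
julia> model = Model();
julia> termination_status(model) == MOI.OPTIMIZE_NOT_CALLED
true
```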

**OK**

These are generally OK statuses, i.e., the algorithm ran to completion normally.

- `OPTIMAL`: The algorithm found a globally optimal solution.
- `INFEASIBLE`: The algorithm concluded that no feasible solution exists.
- `DUAL_INFEASIBLE`: The algorithm concluded that no dual bound exists for the problem. If, additionally, a feasible (primal) solution is known to exist, this status typically implies that the problem is unbounded, with some technical exceptions.
- `LOCALLY_SOLVED`: The algorithm converged to a stationary point, local optimal solution, could not find directions for improvement, or otherwise completed its search without global guarantees.
- `LOCALLY_INFEASIBLE`: The algorithm converged to an infeasible point or otherwise completed its search without finding a feasible solution, without guarantees that no feasible solution exists.
- `INFEASIBLE_OR_UNBOUNDED`: The algorithm stopped because it decided that the problem is infeasible or unbounded; this occasionally happens during MIP presolve.

**Solved to relaxed tolerances**

- `ALMOST_OPTIMAL`: The algorithm found a globally optimal solution to relaxed tolerances.
- `ALMOST_INFEASIBLE`: The algorithm concluded that no feasible solution exists within relaxed tolerances.
- `ALMOST_DUAL_INFEASIBLE`: The algorithm concluded that no dual bound exists for the problem within relaxed tolerances.
- `ALMOST_LOCALLY_SOLVED`: The algorithm converged to a stationary point, local optimal solution, or could not find directions for improvement within relaxed tolerances.

**Limits**

The optimizer stopped because of some user-defined limit.

- `ITERATION_LIMIT`: An iterative algorithm stopped after conducting the maximum number of iterations.
- `TIME_LIMIT`: The algorithm stopped after a user-specified computation time.
- `NODE_LIMIT`: A branch-and-bound algorithm stopped because it explored the maximum number of nodes in the branch-and-bound tree.
- `SOLUTION_LIMIT`: The algorithm stopped because it found the required number of solutions. This is often used in MIPs to get the solver to return the first feasible solution it encounters.
- `MEMORY_LIMIT`: The algorithm stopped because it ran out of memory.
- `OBJECTIVE_LIMIT`: The algorithm stopped because it found a solution better than a minimum limit set by the user.
- `NORM_LIMIT`: The algorithm stopped because the norm of an iterate became too large.
- `OTHER_LIMIT`: The algorithm stopped due to a limit not covered by one of the above.

**Problematic**

This group of statuses means that something unexpected or problematic happened.

- `SLOW_PROGRESS`: The algorithm stopped because it was unable to continue making progress towards the solution.
- `NUMERICAL_ERROR`: The algorithm stopped because it encountered unrecoverable numerical error.
- `INVALID_MODEL`: The algorithm stopped because the model is invalid.
- `INVALID_OPTION`: The algorithm stopped because it was provided an invalid option.
- `INTERRUPTED`: The algorithm stopped because of an interrupt signal.
- `OTHER_ERROR`: The algorithm stopped because of an error not covered by one of the statuses defined above.

Additionally, we can receive a solver-specific string explaining why the optimization stopped with `raw_status`.
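Putting the termination queries together, here is a sketch of a small solve. The GLPK solver and the `with_optimizer` attachment style are assumptions for this example; substitute any installed solver:

```
using JuMP, GLPK  # GLPK is an assumed example solver

model = Model(with_optimizer(GLPK.Optimizer))
@variable(model, 0 <= x <= 1)
@objective(model, Max, x)
optimize!(model)

termination_status(model)  # MOI.OPTIMAL for this trivial LP
raw_status(model)          # free-form, solver-specific explanation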

## Solution statuses

These statuses indicate what kind of result is available to be queried with `value` and `dual`. It's possible that no result is available to be queried.

We can obtain these statuses by calling `primal_status` for the primal status, and `dual_status` for the dual status. Both will return a `MOI.ResultStatusCode` enum.

`ResultStatusCode`

An Enum of possible values for the `PrimalStatus` and `DualStatus` attributes. The values indicate how to interpret the result vector.

- `NO_SOLUTION`: the result vector is empty.
- `FEASIBLE_POINT`: the result vector is a feasible point.
- `NEARLY_FEASIBLE_POINT`: the result vector is feasible if some constraint tolerances are relaxed.
- `INFEASIBLE_POINT`: the result vector is an infeasible point.
- `INFEASIBILITY_CERTIFICATE`: the result vector is an infeasibility certificate. If the `PrimalStatus` is `INFEASIBILITY_CERTIFICATE`, then the primal result vector is a certificate of dual infeasibility. If the `DualStatus` is `INFEASIBILITY_CERTIFICATE`, then the dual result vector is a proof of primal infeasibility.
- `NEARLY_INFEASIBILITY_CERTIFICATE`: the result satisfies a relaxed criterion for a certificate of infeasibility.
- `REDUCTION_CERTIFICATE`: the result vector is an ill-posed certificate; see this article for details. If the `PrimalStatus` is `REDUCTION_CERTIFICATE`, then the primal result vector is a proof that the dual problem is ill-posed. If the `DualStatus` is `REDUCTION_CERTIFICATE`, then the dual result vector is a proof that the primal is ill-posed.
- `NEARLY_REDUCTION_CERTIFICATE`: the result satisfies a relaxed criterion for an ill-posed certificate.
- `UNKNOWN_RESULT_STATUS`: the result vector contains a solution with an unknown interpretation.
- `OTHER_RESULT_STATUS`: the result vector contains a solution with an interpretation not covered by one of the statuses defined above.
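As an illustration of how the statuses combine, an unbounded LP is typically reported as `DUAL_INFEASIBLE` termination together with a primal `INFEASIBILITY_CERTIFICATE`, in which case the primal result is an unbounded ray rather than a feasible point. A sketch, assuming `model` and `x` come from a prior solve:

```
if termination_status(model) == MOI.DUAL_INFEASIBLE &&
   primal_status(model) == MOI.INFEASIBILITY_CERTIFICATE
    # value.(x) is a certificate (an unbounded ray), not a feasible solution
    ray = value.(x)
end
```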

Common status situations are described in the MOI docs.

## Obtaining solutions

Provided the primal status is not `MOI.NO_SOLUTION`, the primal solution can be obtained by calling `value`. For the dual solution, the function is `dual`. Calling `has_values` for the primal solution and `has_duals` for the dual solution is an equivalent way to check whether the status is `MOI.NO_SOLUTION`.

It is important to note that if `has_values` or `has_duals` return `false`, calls to `value` and `dual` might throw an error or return arbitrary values.
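A defensive pattern that checks availability before querying (the variable `x` and constraint reference `c` are illustrative names):

```
if has_values(model)
    x_val = value.(x)    # safe: a primal result exists
end
if has_duals(model)
    c_dual = dual(c)     # safe: a dual result exists
end
```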

The container type (e.g., scalar, vector, or matrix) of the returned solution (primal or dual) depends on the type of the variable or constraint. See `AbstractShape` and `dual_shape` for details.

To call `value` or `dual` on containers of `VariableRef` or `ConstraintRef`, use the broadcast syntax, e.g., `value.(x)`.

The objective value of a solved problem can be obtained via `objective_value`. The best known bound on the optimal objective value can be obtained via `objective_bound`. If the solver supports it, the value of the dual objective can be obtained via `dual_objective_value`.
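For example, when a MIP solve stops at a limit, the objective value and bound can be combined into a relative gap. The formula below is one common convention, not a JuMP function:

```
if has_values(model)
    primal = objective_value(model)  # objective of the best feasible point
    bound  = objective_bound(model)  # best proven bound on the optimum
    gap    = abs(primal - bound) / max(abs(primal), 1e-10)
end
```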

The following is a recommended workflow for solving a model and querying the solution:

```
using JuMP
model = Model()
@variable(model, x[1:10] >= 0)
# ... other constraints ...
optimize!(model)
if termination_status(model) == MOI.OPTIMAL
    optimal_solution = value.(x)
    optimal_objective = objective_value(model)
elseif termination_status(model) == MOI.TIME_LIMIT && has_values(model)
    suboptimal_solution = value.(x)
    suboptimal_objective = objective_value(model)
else
    error("The model was not solved correctly.")
end
```

## Sensitivity analysis for LP

Given an LP problem and an optimal solution corresponding to a basis, we can question how much an objective coefficient or standard form rhs coefficient (c.f., `normalized_rhs`) can change without violating primal or dual feasibility of the basic solution. Note that not all solvers compute the basis, and the sensitivity analysis requires that the solver interface implements `MOI.ConstraintBasisStatus`.

Given an LP optimal solution (and both `has_values` and `has_duals` return `true`), `lp_objective_perturbation_range` returns the range of allowed perturbations of the cost coefficient corresponding to the input variable. Note that the current primal solution remains optimal within this range; however, the corresponding dual solution might change since a cost coefficient is perturbed. Similarly, `lp_rhs_perturbation_range` returns the range of allowed perturbations of the rhs coefficient corresponding to the input constraint. Within this range the current dual solution remains optimal, but the primal solution might change since an rhs coefficient is perturbed.

However, if the problem is degenerate, there are multiple optimal bases, and hence these ranges might not be as intuitive and may seem too narrow: e.g., a larger cost coefficient perturbation might not invalidate the optimality of the current primal solution. Moreover, if a problem is degenerate, then due to finite precision a perturbation may appear to invalidate a basis even though it doesn't (again producing too narrow ranges). To prevent this, the keyword arguments `feasibility_tolerance` and `optimality_tolerance` are introduced, which in turn might make the ranges too wide for numerically challenging instances. Thus, do not blindly trust these ranges, especially not for highly degenerate or numerically unstable instances.

To give a simple example, we could analyze the sensitivity of the optimal solution to the following (non-degenerate) LP problem:

```
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> @constraint(model, c1, x[1] + x[2] <= 1);
julia> @constraint(model, c2, x[1] - x[2] <= 1);
julia> @constraint(model, c3, -0.5 <= x[2] <= 0.5);
julia> @objective(model, Max, x[1]);
```

To analyze the sensitivity of the problem we could check the allowed perturbation ranges of, e.g., the cost coefficients and the rhs coefficient of constraint `c1` as follows:

```
julia> optimize!(model);
julia> value.(x)
2-element Array{Float64,1}:
 1.0
 0.0
julia> lp_objective_perturbation_range(x[1])
(-1.0, Inf)
julia> lp_objective_perturbation_range(x[2])
(-1.0, 1.0)
julia> lp_rhs_perturbation_range(c1)
(-1.0, 1.0)
```

`JuMP.lp_objective_perturbation_range` — Function.

```
lp_objective_perturbation_range(var::VariableRef;
                                optimality_tolerance::Float64)
    ::Tuple{Float64, Float64}
```

Gives the range by which the cost coefficient can change and the current LP basis remains optimal, i.e., the reduced costs remain valid.

**Notes**

- The range denotes valid changes, Δ ∈ [l, u], for which `cost[var] += Δ` does not violate the current optimality conditions.
- `optimality_tolerance` is the dual feasibility tolerance; this should preferably match the tolerance used by the solver. The default tolerance should, however, apply in most situations (c.f. "Computational Techniques of the Simplex Method" by István Maros, section 9.3.4).

`JuMP.lp_rhs_perturbation_range` — Function.

```
lp_rhs_perturbation_range(constraint::ConstraintRef;
                          feasibility_tolerance::Float64)
    ::Tuple{Float64, Float64}
```

Gives the range by which the rhs coefficient can change and the current LP basis remains feasible, i.e., where the shadow prices apply.

**Notes**

- The rhs coefficient is the value right of the relation, i.e., b for the constraint when of the form a*x □ b, where □ is ≤, =, or ≥.
- The range denotes valid changes, e.g., for a*x <= b + Δ, the LP basis remains feasible for all Δ ∈ [l, u].
`feasibility_tolerance`

is the primal feasibility tolerance, this should preferably match the tolerance used by the solver. The defualt tolerance should however apply in most situations (c.f. "Computational Techniques of the Simplex Method" by István Maros, section 9.3.4).

## Reference

`JuMP.termination_status` — Function.

`termination_status(model::Model)`

Return the reason why the solver stopped (i.e., the MathOptInterface model attribute `TerminationStatus`).

`JuMP.raw_status` — Function.

`raw_status(model::Model)`

Return the reason why the solver stopped in its own words (i.e., the MathOptInterface model attribute `RawStatusString`).

`JuMP.primal_status` — Function.

`primal_status(model::Model)`

Return the status of the most recent primal solution of the solver (i.e., the MathOptInterface model attribute `PrimalStatus`).

`JuMP.has_values` — Function.

`has_values(model::Model)`

Return `true` if the solver has a primal solution available to query, otherwise return `false`. See also `value`.

`JuMP.value` — Function.

`value(con_ref::ConstraintRef)`

Get the primal value of this constraint in the result returned by a solver. That is, if `con_ref` is the reference of a constraint `func`-in-`set`, it returns the value of `func` evaluated at the value of the variables (given by `value(::VariableRef)`). Use `has_values` to check if a result exists before asking for values.

**Note**

For scalar constraints, the constant is moved to the `set` so it is not taken into account in the primal value of the constraint. For instance, the constraint `@constraint(model, 2x + 3y + 1 == 5)` is transformed into `2x + 3y`-in-`MOI.EqualTo(4)`, so the value returned by this function is the evaluation of `2x + 3y`.

`value(v::VariableRef)`

Get the value of this variable in the result returned by a solver. Use `has_values` to check if a result exists before asking for values.

`value(ex::GenericAffExpr, var_value::Function)`

Evaluate `ex` using `var_value(v)` as the value for each variable `v`.
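This method requires no solver; the second argument supplies the value of each variable. For example:

```
julia> model = Model();
julia> @variable(model, x);
julia> @variable(model, y);
julia> value(2x + 3y + 1, v -> 2.0)
11.0
```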

`value(v::GenericAffExpr)`

Evaluate a `GenericAffExpr` given the result returned by a solver. Replaces `getvalue` for most use cases.

`value(p::NonlinearParameter)`

Return the current value stored in the nonlinear parameter `p`.

**Example**

```
model = Model()
@NLparameter(model, p == 10)
value(p)
# output
10.0
```

`value(ex::NonlinearExpression, var_value::Function)`

Evaluate `ex` using `var_value(v)` as the value for each variable `v`.

`value(ex::NonlinearExpression)`

Evaluate `ex` using `value` as the value for each variable `v`.

`JuMP.dual_status` — Function.

`dual_status(model::Model)`

Return the status of the most recent dual solution of the solver (i.e., the MathOptInterface model attribute `DualStatus`).

`JuMP.has_duals` — Function.

`has_duals(model::Model)`

Return `true` if the solver has a dual solution available to query, otherwise return `false`. See also `dual` and `shadow_price`.

`JuMP.dual` — Function.

`dual(con_ref::ConstraintRef)`

Get the dual value of this constraint in the result returned by a solver. Use `has_duals` to check if a result exists before asking for values. See also `shadow_price`.

`JuMP.solve_time` — Function.

`solve_time(model::Model)`

If available, returns the solve time reported by the solver. Throws "ArgumentError: ModelLike of type `Solver.Optimizer` does not support accessing the attribute MathOptInterface.SolveTime()" if the attribute is not implemented.

`JuMP.OptimizeNotCalled` — Type.

`struct OptimizeNotCalled <: Exception end`

A result attribute cannot be queried before `optimize!` is called.

`MathOptInterface.optimize!` — Function.

`optimize!(optimizer::AbstractOptimizer)`

Start the solution procedure.