# Manual

## Purpose

Each mathematical optimization solver API has its own concepts and data structures for representing optimization models and obtaining results. However, it is often desirable to represent an instance of an optimization problem at a higher level so that it is easy to try using different solvers. MathOptInterface (MOI) is an abstraction layer designed to provide a unified interface to mathematical optimization solvers so that users do not need to understand multiple solver-specific APIs. MOI can be used directly, or through a higher-level modeling interface like JuMP.

MOI has been designed to replace MathProgBase, which has been used by modeling packages such as JuMP and Convex.jl. This second-generation abstraction layer addresses a number of limitations of MathProgBase. MOI is designed to:

• Be simple and extensible, unifying linear, quadratic, and conic optimization, and seamlessly facilitate extensions to essentially arbitrary constraints and functions (e.g., indicator constraints, complementarity constraints, and piecewise linear functions)
• Be fast by allowing access to a solver's in-memory representation of a problem without writing intermediate files (when possible) and by using multiple dispatch and avoiding requiring containers of nonconcrete types
• Allow a solver to return multiple results (e.g., a pool of solutions)
• Allow a solver to return extra arbitrary information via attributes (e.g., variable- and constraint-wise membership in an irreducible inconsistent subset for infeasibility analysis)
• Provide a greatly expanded set of status codes explaining what happened during the optimization procedure
• Enable a solver to more precisely specify which problem classes it supports
• Enable both primal and dual warm starts
• Enable adding and removing both variables and constraints by indices that are not required to be consecutive
• Enable any modification that the solver supports to an existing model
• Avoid requiring the solver wrapper to store an additional copy of the problem data

This manual introduces the concepts needed to understand MOI and gives a high-level picture of how all of the pieces fit together. The primary focus is on MOI from the perspective of a user of the interface. At the end of the manual we have a section on Implementing a solver interface. The API Reference page lists the complete API.

MOI does not export functions, but for brevity we often omit qualifying names with the MOI module. Best practice is to have

using MathOptInterface
const MOI = MathOptInterface

and prefix all MOI methods with MOI. in user code. If a name is also available in base Julia, we always explicitly use the module prefix, for example, with MOI.get.

## Standard form problem

The standard form problem is:

\begin{align} & \min_{x \in \mathbb{R}^n} & f_0(x) \\ & \;\;\text{s.t.} & f_i(x) & \in \mathcal{S}_i & i = 1 \ldots m \end{align}

where:

• the functions $f_0, f_1, \ldots, f_m$ are specified by AbstractFunction objects, and
• the sets $\mathcal{S}_1, \ldots, \mathcal{S}_m$ are specified by AbstractSet objects.

The current function types are:

• SingleVariable: $x_j$, i.e., projection onto a single coordinate defined by a variable index $j$;
• VectorOfVariables: projection onto multiple coordinates (i.e., extracting a subvector);
• ScalarAffineFunction: $a^T x + b$, where $a$ is a vector and $b$ a scalar;
• VectorAffineFunction: $A x + b$, where $A$ is a matrix and $b$ a vector;
• ScalarQuadraticFunction: $\frac{1}{2} x^T Q x + a^T x + b$, where $Q$ is a symmetric matrix, $a$ a vector, and $b$ a scalar;
• VectorQuadraticFunction: a vector of scalar-valued quadratic functions.

Extensions for nonlinear programming are present but not yet well documented.

MOI defines some commonly used sets, but the interface is extensible to other sets recognized by the solver.

## The ModelLike and AbstractOptimizer APIs

The most significant part of MOI is the definition of the model API that is used to specify an instance of an optimization problem (e.g., by adding variables and constraints). Objects that implement the model API should inherit from the ModelLike abstract type.

Notably missing from the model API is the method to solve an optimization problem. ModelLike objects may store an instance (e.g., in memory or backed by a file format) without being linked to a particular solver. In addition to the model API, MOI defines AbstractOptimizer. Optimizers (or solvers) implement the model API (inheriting from ModelLike) and additionally provide methods to solve the model.

Through the rest of the manual, model is used as a generic ModelLike, and optimizer is used as a generic AbstractOptimizer.

[Discuss how models are constructed, optimizer attributes.]

## Variables

All variables in MOI are scalar variables. New scalar variables are created with add_variable or add_variables, which return a VariableIndex or Vector{VariableIndex} respectively. VariableIndex objects are type-safe wrappers around integers that refer to a variable in a particular model.

One uses VariableIndex objects to set and get variable attributes. For example, the VariablePrimalStart attribute is used to provide an initial starting point for a variable or collection of variables:

v = add_variable(model)
set(model, VariablePrimalStart(), v, 10.5)
v2 = add_variables(model, 3)
set(model, VariablePrimalStart(), v2, [1.3, 6.8, -4.6])

A variable can be deleted from a model with delete(::ModelLike, ::VariableIndex). Not all models support deleting variables; a DeleteNotAllowed error is thrown if this is not supported.

## Functions

MOI defines six functions as listed in the definition of the Standard form problem. The simplest function is SingleVariable defined as:

struct SingleVariable <: AbstractFunction
    variable::VariableIndex
end

If v is a VariableIndex object, then SingleVariable(v) is simply the scalar-valued function from the complete set of variables in a model that returns the value of variable v. One may also call this function a coordinate projection, which is more useful for defining constraints than as an objective function.

A more interesting function is ScalarAffineFunction, defined as

struct ScalarAffineFunction{T} <: AbstractScalarFunction
    terms::Vector{ScalarAffineTerm{T}}
    constant::T
end

The ScalarAffineTerm struct defines a variable-coefficient pair:

struct ScalarAffineTerm{T}
    coefficient::T
    variable_index::VariableIndex
end

If x is a vector of VariableIndex objects, then ScalarAffineFunction(ScalarAffineTerm.([5.0,-2.3],[x[1],x[2]]),1.0) represents the function $5x_1 - 2.3x_2 + 1$.

Note

ScalarAffineTerm.([5.0,-2.3],[x[1],x[2]]) is a shortcut for [ScalarAffineTerm(5.0, x[1]), ScalarAffineTerm(-2.3, x[2])]. This is Julia's broadcast syntax and is used quite often.

Objective functions are assigned to a model by setting the ObjectiveFunction attribute. The ObjectiveSense attribute is used for setting the optimization sense. For example,

x = add_variables(model, 2)
set(model, ObjectiveFunction{ScalarAffineFunction{Float64}}(),
    ScalarAffineFunction(ScalarAffineTerm.([5.0, -2.3], [x[1], x[2]]), 1.0))
set(model, ObjectiveSense(), MIN_SENSE)

sets the objective to the function just discussed in the minimization sense.

See Functions and function modifications for the complete list of functions.

## Sets and Constraints

All constraints are specified with add_constraint by restricting the output of some function to a set. The interface allows an arbitrary combination of functions and sets, but of course solvers may decide to support only a small number of combinations.

For example, linear programming solvers should support, at least, combinations of affine functions with the LessThan and GreaterThan sets. These are simply linear constraints. SingleVariable functions combined with these same sets are used to specify upper and lower bounds on variables.

The code example below encodes the linear optimization problem:

\begin{align} & \max_{x \in \mathbb{R}^2} & 3x_1 + 2x_2 & \\ & \;\;\text{s.t.} & x_1 + x_2 &\le 5 \\ && x_1 & \ge 0 \\ &&x_2 & \ge -1 \end{align}
x = add_variables(model, 2)
set(model, ObjectiveFunction{ScalarAffineFunction{Float64}}(),
    ScalarAffineFunction(ScalarAffineTerm.([3.0, 2.0], x), 0.0))
set(model, ObjectiveSense(), MAX_SENSE)
add_constraint(model, ScalarAffineFunction(ScalarAffineTerm.(1.0, x), 0.0),
               LessThan(5.0))
add_constraint(model, SingleVariable(x[1]), GreaterThan(0.0))
add_constraint(model, SingleVariable(x[2]), GreaterThan(-1.0))

Besides scalar-valued functions in scalar-valued sets, it is also possible to use vector-valued functions and sets.

The code example below encodes the convex optimization problem:

\begin{align} & \max_{x,y,z \in \mathbb{R}} & y + z & \\ & \;\;\text{s.t.} & 3x &= 2 \\ && x & \ge ||(y,z)||_2 \end{align}
x, y, z = add_variables(model, 3)
set(model, ObjectiveFunction{ScalarAffineFunction{Float64}}(),
    ScalarAffineFunction(ScalarAffineTerm.(1.0, [y, z]), 0.0))
set(model, ObjectiveSense(), MAX_SENSE)
vector_terms = [VectorAffineTerm(1, ScalarAffineTerm(3.0, x))]
add_constraint(model, VectorAffineFunction(vector_terms, [-2.0]), Zeros(1))
add_constraint(model, VectorOfVariables([x, y, z]), SecondOrderCone(3))

[Describe ConstraintIndex objects.]
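A sketch of the basic pattern (reusing the model above; ConstraintSet and delete are standard MOI operations, though not every model supports querying or deleting constraints):

```julia
# add_constraint returns a typed ConstraintIndex identifying the
# constraint within this particular model.
ci = add_constraint(model, SingleVariable(x[2]), GreaterThan(-1.0))
# ci isa ConstraintIndex{SingleVariable, GreaterThan{Float64}}

# The index can be used to query the constraint's set, or to delete it.
lower_bound_set = MOI.get(model, ConstraintSet(), ci)  # GreaterThan(-1.0)
delete(model, ci)
```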

### Constraints by function-set pairs

Below is a list of common constraint types and how they are represented as function-set pairs in MOI. In the notation below, $x$ is a vector of decision variables, $x_i$ is a scalar decision variable, and all other terms are fixed constants.

[Define notation more precisely. $a$ vector; $A$ matrix; don't reuse $u,l,b$ as scalar and vector]

#### Linear constraints

| Mathematical Constraint | MOI Function | MOI Set |
| ----------------------- | ------------ | ------- |
| $a^Tx \le u$ | ScalarAffineFunction | LessThan |
| $a^Tx \ge l$ | ScalarAffineFunction | GreaterThan |
| $a^Tx = b$ | ScalarAffineFunction | EqualTo |
| $l \le a^Tx \le u$ | ScalarAffineFunction | Interval |
| $x_i \le u$ | SingleVariable | LessThan |
| $x_i \ge l$ | SingleVariable | GreaterThan |
| $x_i = b$ | SingleVariable | EqualTo |
| $l \le x_i \le u$ | SingleVariable | Interval |
| $Ax + b \in \mathbb{R}_+^n$ | VectorAffineFunction | Nonnegatives |
| $Ax + b \in \mathbb{R}_-^n$ | VectorAffineFunction | Nonpositives |
| $Ax + b = 0$ | VectorAffineFunction | Zeros |

By convention, solvers are not expected to support nonzero constant terms in the ScalarAffineFunctions in the first four rows above, because they are redundant with the parameters of the sets. For example, $2x + 1 \le 2$ should be encoded as $2x \le 1$.

Constraints with SingleVariable in LessThan, GreaterThan, EqualTo, or Interval sets have a natural interpretation as variable bounds. As such, it is typically not natural to impose multiple lower or upper bounds on the same variable, and by convention we do not ask solver interfaces to support this. It is natural, however, to impose upper and lower bounds separately as two different constraints on a single variable. The difference between imposing bounds by using a single Interval constraint and by using separate LessThan and GreaterThan constraints is that the latter will allow the solver to return separate dual multipliers for the two bounds, while the former will allow the solver to return only a single dual for the interval constraint.
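To make the two encodings concrete (a sketch; the bound values are illustrative):

```julia
# Alternative 1: a single Interval constraint; the solver can return
# only one dual multiplier for the pair of bounds.
add_constraint(model, SingleVariable(x[1]), Interval(0.0, 1.0))

# Alternative 2 (for a different variable, or instead of the above):
# separate bound constraints, each with its own dual multiplier.
add_constraint(model, SingleVariable(x[2]), GreaterThan(0.0))
add_constraint(model, SingleVariable(x[2]), LessThan(1.0))
```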

[Define $\mathbb{R}_+, \mathbb{R}_-$]

#### Conic constraints

| Mathematical Constraint | MOI Function | MOI Set |
| ----------------------- | ------------ | ------- |
| $\lVert Ax + b\rVert_2 \le c^Tx + d$ | VectorAffineFunction | SecondOrderCone |
| $y \ge \lVert x \rVert_2$ | VectorOfVariables | SecondOrderCone |
| $2yz \ge \lVert x \rVert_2^2, y, z \ge 0$ | VectorOfVariables | RotatedSecondOrderCone |
| $(a_1^Tx + b_1, a_2^Tx + b_2, a_3^Tx + b_3) \in \mathcal{E}$ | VectorAffineFunction | ExponentialCone |
| $A(x) \in \mathcal{S}_+$ | VectorAffineFunction | PositiveSemidefiniteConeTriangle |
| $A(x) \in \mathcal{S}'_+$ | VectorAffineFunction | PositiveSemidefiniteConeSquare |
| $x \in \mathcal{S}_+$ | VectorOfVariables | PositiveSemidefiniteConeTriangle |
| $x \in \mathcal{S}'_+$ | VectorOfVariables | PositiveSemidefiniteConeSquare |

[Define $\mathcal{E}$ (exponential cone), $\mathcal{S}_+$ (smat), $\mathcal{S}'_+$ (svec). $A(x)$ is an affine function of $x$ that outputs a matrix.]

#### Quadratic constraints

| Mathematical Constraint | MOI Function | MOI Set |
| ----------------------- | ------------ | ------- |
| $x^TQx + a^Tx + b \ge 0$ | ScalarQuadraticFunction | GreaterThan |
| $x^TQx + a^Tx + b \le 0$ | ScalarQuadraticFunction | LessThan |
| $x^TQx + a^Tx + b = 0$ | ScalarQuadraticFunction | EqualTo |
| Bilinear matrix inequality | VectorQuadraticFunction | PositiveSemidefiniteCone... |

#### Discrete and logical constraints

| Mathematical Constraint | MOI Function | MOI Set |
| ----------------------- | ------------ | ------- |
| $x_i \in \mathbb{Z}$ | SingleVariable | Integer |
| $x_i \in \{0,1\}$ | SingleVariable | ZeroOne |
| $x_i \in \{0\} \cup [l,u]$ | SingleVariable | Semicontinuous |
| $x_i \in \{0\} \cup \{l,l+1,\ldots,u-1,u\}$ | SingleVariable | Semiinteger |
| At most one component of $x$ can be nonzero | VectorOfVariables | SOS1 |
| At most two components of $x$ can be nonzero, and if so they must be adjacent components | VectorOfVariables | SOS2 |
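For instance, integrality and an SOS1 restriction can be imposed as follows (a sketch assuming x is a vector of three variables; the weights passed to SOS1 only order the components and are otherwise arbitrary):

```julia
# Require the first variable to take integer values.
add_constraint(model, SingleVariable(x[1]), Integer())

# At most one of x[1], x[2], x[3] may be nonzero.
add_constraint(model, VectorOfVariables([x[1], x[2], x[3]]),
               SOS1([1.0, 2.0, 3.0]))
```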

## Solving and retrieving the results

Once an optimizer is loaded with the objective function and all of the constraints, we can ask the solver to solve the model by calling optimize!.

optimize!(optimizer)

The optimization procedure may terminate for a number of reasons. The TerminationStatus attribute of the optimizer returns a TerminationStatusCode object which explains why the solver stopped. The termination statuses distinguish between proofs of optimality, infeasibility, local convergence, limits, and termination because of something unexpected like invalid problem data or failure to converge. A typical usage of the TerminationStatus attribute is as follows:

status = MOI.get(optimizer, TerminationStatus())
if status == MOI.OPTIMAL
# Ok, we solved the problem!
else
# Handle other cases.
end

After checking the TerminationStatus, one should typically check ResultCount. This attribute returns the number of results that the solver has available to return. A result is defined as a primal-dual pair, but either the primal or the dual may be missing from the result. While the OPTIMAL termination status normally implies that at least one result is available, other statuses do not. For example, in the case of infeasibility, a solver may return no result or a proof of infeasibility. The ResultCount distinguishes between these two cases.

The PrimalStatus and DualStatus attributes return a ResultStatusCode that indicates if that component of the result is present (i.e., not NO_SOLUTION) and explains how to interpret the result.

If PrimalStatus is not NO_SOLUTION, then the primal may be retrieved with the VariablePrimal attribute:

MOI.get(optimizer, VariablePrimal(), x)

If x is a VariableIndex then the function call returns a scalar, and if x is a Vector{VariableIndex} then the call returns a vector of scalars. VariablePrimal() is equivalent to VariablePrimal(1), i.e., the variable primal vector of the first result. Use VariablePrimal(N) to access the Nth result.

See also the attributes ConstraintPrimal, and ConstraintDual. See Duals for a discussion of the MOI conventions for primal-dual pairs and certificates.
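For example, assuming ci is a ConstraintIndex returned by an earlier add_constraint call, the first result's values for that constraint can be queried as follows (a sketch; attribute support varies by solver):

```julia
# Query the constraint's primal value if a primal solution is present.
if MOI.get(optimizer, MOI.PrimalStatus()) != MOI.NO_SOLUTION
    constraint_primal = MOI.get(optimizer, MOI.ConstraintPrimal(), ci)
end

# Query the constraint's dual value if a dual solution is present.
if MOI.get(optimizer, MOI.DualStatus()) != MOI.NO_SOLUTION
    constraint_dual = MOI.get(optimizer, MOI.ConstraintDual(), ci)
end
```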

Note

We omit discussion of how to handle multiple results, i.e., when ResultCount is greater than 1. This is supported in the API but not yet implemented in any solver.

### Common status situations

The sections below describe how to interpret typical or interesting status cases for three common classes of solvers. The example cases are illustrative, not comprehensive. Solver wrappers may provide additional information on how the solver's statuses map to MOI statuses.

A ? in the tables indicates that multiple different values are possible.

#### Primal-dual convex solver

Linear programming and conic optimization solvers fall into this category.

| What happened? | TerminationStatus() | ResultCount() | PrimalStatus() | DualStatus() |
| -------------- | ------------------- | ------------- | -------------- | ------------ |
| Proved optimality | OPTIMAL | 1 | FEASIBLE_POINT | FEASIBLE_POINT |
| Proved infeasible | INFEASIBLE | 1 | NO_SOLUTION | INFEASIBILITY_CERTIFICATE |
| Optimal within relaxed tolerances | ALMOST_OPTIMAL | 1 | FEASIBLE_POINT or ALMOST_FEASIBLE_POINT | FEASIBLE_POINT or ALMOST_FEASIBLE_POINT |
| Detected an unbounded ray of the primal | DUAL_INFEASIBLE | 1 | INFEASIBILITY_CERTIFICATE | NO_SOLUTION |
| Stall | SLOW_PROGRESS | 1 | ? | ? |

#### Global branch-and-bound solvers

Mixed-integer programming solvers fall into this category.

| What happened? | TerminationStatus() | ResultCount() | PrimalStatus() | DualStatus() |
| -------------- | ------------------- | ------------- | -------------- | ------------ |
| Proved optimality | OPTIMAL | 1 | FEASIBLE_POINT | NO_SOLUTION |
| Presolve detected infeasibility or unboundedness | INFEASIBLE_OR_UNBOUNDED | 0 | NO_SOLUTION | NO_SOLUTION |
| Proved infeasibility | INFEASIBLE | 0 | NO_SOLUTION | NO_SOLUTION |
| Timed out (no solution) | TIME_LIMIT | 0 | NO_SOLUTION | NO_SOLUTION |
| Timed out (with a solution) | TIME_LIMIT | 1 | FEASIBLE_POINT | NO_SOLUTION |
| CPXMIP_OPTIMAL_INFEAS | ALMOST_OPTIMAL | 1 | INFEASIBLE_POINT | NO_SOLUTION |

CPXMIP_OPTIMAL_INFEAS is a CPLEX status that indicates that a preprocessed problem was solved to optimality, but the solver was unable to recover a feasible solution to the original problem.

#### Local search solvers

Nonlinear programming solvers fall into this category. It also includes non-global tree search solvers like Juniper.

| What happened? | TerminationStatus() | ResultCount() | PrimalStatus() | DualStatus() |
| -------------- | ------------------- | ------------- | -------------- | ------------ |
| Converged to a stationary point | LOCALLY_SOLVED | 1 | FEASIBLE_POINT | FEASIBLE_POINT |
| Completed a non-global tree search (with a solution) | LOCALLY_SOLVED | 1 | FEASIBLE_POINT | FEASIBLE_POINT |
| Converged to an infeasible point | LOCALLY_INFEASIBLE | 1 | INFEASIBLE_POINT | ? |
| Completed a non-global tree search (no solution found) | LOCALLY_INFEASIBLE | 0 | NO_SOLUTION | NO_SOLUTION |
| Iteration limit | ITERATION_LIMIT | 1 | ? | ? |
| Diverging iterates | NORM_LIMIT or OBJECTIVE_LIMIT | 1 | ? | ? |

## A complete example: solving a knapsack problem

[ needs formatting help, doc tests ]

using MathOptInterface
const MOI = MathOptInterface
using GLPK

# Solves the binary-constrained knapsack problem:
# max c'x: w'x <= C, x binary, using GLPK.

c = [1.0, 2.0, 3.0]
w = [0.3, 0.5, 1.0]
C = 3.2

num_variables = length(c)

optimizer = GLPK.Optimizer()

# Create the variables in the problem.
x = MOI.add_variables(optimizer, num_variables)

# Set the objective function.
objective_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, x), 0.0)
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
        objective_function)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)

# Add the knapsack constraint: w'x <= C.
knapsack_function = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(w, x), 0.0)
MOI.add_constraint(optimizer, knapsack_function, MOI.LessThan(C))

# Require that each variable is binary.
for i in 1:num_variables
    MOI.add_constraint(optimizer, MOI.SingleVariable(x[i]), MOI.ZeroOne())
end

# All set!
MOI.optimize!(optimizer)

termination_status = MOI.get(optimizer, MOI.TerminationStatus())
obj_value = MOI.get(optimizer, MOI.ObjectiveValue())
if termination_status != MOI.OPTIMAL
    error("Solver terminated with status $termination_status")
end

primal_variable_result = MOI.get(optimizer, MOI.VariablePrimal(), x)

### Constraint bridges

A constraint often possesses several equivalent formulations, but a solver may support only one of them. It would be duplicated work to implement rewriting rules in every solver wrapper for every alternative formulation of a constraint in order to express it in the form supported by the solver. Constraint bridges provide a way to define a rewriting rule on top of the MOI interface which can be used by any optimizer. Some rules also implement constraint modifications and translations of constraint primal and dual values.

For example, the SplitIntervalBridge defines the reformulation of a ScalarAffineFunction-in-Interval constraint into a ScalarAffineFunction-in-GreaterThan and a ScalarAffineFunction-in-LessThan constraint. SplitInterval is the bridge optimizer that applies the SplitIntervalBridge rewriting rule. Given an optimizer optimizer implementing ScalarAffineFunction-in-GreaterThan and ScalarAffineFunction-in-LessThan, the optimizer

bridgedoptimizer = SplitInterval(optimizer)

will additionally support ScalarAffineFunction-in-Interval.
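To illustrate (a sketch; the variable and bound values are arbitrary):

```julia
# The bridged optimizer accepts Interval constraints even though the
# underlying optimizer only supports GreaterThan and LessThan; the bridge
# transparently splits the constraint into the two supported forms.
x = add_variable(bridgedoptimizer)
add_constraint(bridgedoptimizer,
               ScalarAffineFunction([ScalarAffineTerm(1.0, x)], 0.0),
               Interval(0.0, 1.0))
```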

## Implementing a solver interface

[The interface is designed for multiple dispatch, e.g., attributes, combinations of sets and functions.]

### Solver-specific attributes

Solver-specific attributes should either be passed to the optimizer on creation, e.g., MyPackage.Optimizer(PrintLevel = 0), or through a sub-type of AbstractOptimizerAttribute. For example, inside MyPackage, we could add the following:

struct PrintLevel <: MOI.AbstractOptimizerAttribute end
function MOI.set(model::Optimizer, ::PrintLevel, level::Int)
# ... set the print level ...
end

Then, the user can write:

model = MyPackage.Optimizer()
MOI.set(model, MyPackage.PrintLevel(), 0)

### Implementing copy

Avoid storing extra copies of the problem when possible. This means that solver wrappers should not use CachingOptimizer as part of the wrapper. Instead, do one of the following to load the problem (assuming the solver wrapper type is called Optimizer):

• If the solver supports loading the problem incrementally, implement add_variable, add_constraint for supported constraints and set for supported attributes and add:

function MOI.copy_to(dest::Optimizer, src::MOI.ModelLike; kws...)
return MOI.Utilities.automatic_copy_to(dest, src; kws...)
end

with

MOI.Utilities.supports_default_copy_to(model::Optimizer, copy_names::Bool) = true

or

MOI.Utilities.supports_default_copy_to(model::Optimizer, copy_names::Bool) = !copy_names

depending on whether the solver supports names; see Utilities.supports_default_copy_to for more details.

• If the solver does not support loading the problem incrementally, do not implement add_variable and add_constraint as implementing them would require caching the problem. Let users or JuMP decide whether to use a CachingOptimizer instead. Write either a custom implementation of copy_to or implement the Allocate-Load API. If you choose to implement the Allocate-Load API, do

function MOI.copy_to(dest::Optimizer, src::MOI.ModelLike; kws...)
return MOI.Utilities.automatic_copy_to(dest, src; kws...)
end

with

MOI.Utilities.supports_allocate_load(model::Optimizer, copy_names::Bool) = true

or

MOI.Utilities.supports_allocate_load(model::Optimizer, copy_names::Bool) = !copy_names

depending on whether the solver supports names; see Utilities.supports_allocate_load for more details.

Note that even though both a custom implementation of copy_to and an implementation of the Allocate-Load API require the user to copy the model from a cache, the Allocate-Load API allows MOI layers to be added between the cache and the solver, so transformations can be applied without additional caching. For instance, with the proposed Light bridges, no cache will be needed to store the bridged model when bridges are used by JuMP, so implementing the Allocate-Load API will allow JuMP to use only one cache instead of two.

### JuMP mapping

MOI defines a very general interface, with multiple possible ways to describe the same constraint. This is considered a feature, not a bug. MOI is designed to make it possible to experiment with alternative representations of an optimization problem at both the solving and modeling level. When implementing an interface, it is important to keep in mind that the constraints which a solver supports via MOI will have a near 1-to-1 correspondence with how users can express problems in JuMP, because JuMP does not perform automatic transformations. (Alternative systems like Convex.jl do.) The following bullet points show examples of how JuMP constraints are translated into MOI function-set pairs:

• @constraint(m, 2x + y <= 10) becomes ScalarAffineFunction-in-LessThan;
• @constraint(m, 2x + y >= 10) becomes ScalarAffineFunction-in-GreaterThan;
• @constraint(m, 2x + y == 10) becomes ScalarAffineFunction-in-EqualTo;
• @constraint(m, 0 <= 2x + y <= 10) becomes ScalarAffineFunction-in-Interval;
• @constraint(m, 2x + y in ArbitrarySet()) becomes ScalarAffineFunction-in-ArbitrarySet.

Variable bounds are handled in a similar fashion:

• @variable(m, x <= 1) becomes SingleVariable-in-LessThan;
• @variable(m, x >= 1) becomes SingleVariable-in-GreaterThan.

One notable difference is that a variable with an upper and lower bound is translated into two constraints, rather than an interval. i.e.:

• @variable(m, 0 <= x <= 1) becomes SingleVariable-in-LessThan and SingleVariable-in-GreaterThan.

Therefore, if a solver wrapper does not support ScalarAffineFunction-in-LessThan constraints, users will not be able to write: @constraint(m, 2x + y <= 10) in JuMP. With this in mind, developers should support all the constraint types that they want to be usable from JuMP. That said, from the perspective of JuMP, solvers can safely choose to not support the following constraints:

• AbstractScalarFunction in GreaterThan, LessThan, EqualTo, or Interval with a nonzero constant in the function. Constants in the affine function should instead be moved into the parameters of the corresponding sets. The ScalarFunctionConstantNotZero exception may be thrown in this case.

• ScalarAffineFunction in Nonnegative, Nonpositive or Zeros. Alternative constraints are available by using a VectorAffineFunction with one output row or ScalarAffineFunction with GreaterThan, LessThan, or EqualTo.

• Two SingleVariable-in-LessThan constraints applied to the same variable (similarly with GreaterThan). These should be interpreted as variable bounds, and each variable naturally has at most one upper or lower bound.

### Column Generation

There is no special interface for column generation. If the solver has a special API for setting coefficients in existing constraints when adding a new variable, it is possible to queue modifications and new variables and then call the solver's API once all of the new coefficients are known.
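As a sketch of this pattern (MOI.modify and MOI.ScalarCoefficientChange are standard MOI modification operations; ci denotes the index of an existing affine constraint, and the coefficient value is illustrative):

```julia
# Add a new column (variable) and set its coefficient in the existing
# constraint `ci`. A wrapper targeting a solver with a batch API may
# queue such changes internally and flush them to the solver in one call.
x_new = MOI.add_variable(optimizer)
MOI.modify(optimizer, ci, MOI.ScalarCoefficientChange(x_new, 2.0))
```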

### Problem data

All data passed to the solver should be copied immediately to internal data structures. Solvers may not modify any input vectors and should assume that input vectors may be modified by users in the future. This applies, for example, to the terms vector in ScalarAffineFunction. Vectors returned to the user, e.g., via ObjectiveFunction or ConstraintFunction attributes, should not be modified by the solver afterwards. The in-place version of get! can be used by users to avoid extra copies in this case.
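For example, a wrapper's add_constraint method might copy the terms up front (a hypothetical sketch; the Optimizer type and its constraints field are illustrative names, not part of MOI):

```julia
function MOI.add_constraint(model::Optimizer,
                            f::MOI.ScalarAffineFunction{Float64},
                            s::MOI.LessThan{Float64})
    # Copy the terms immediately: the user may mutate f.terms after this call.
    push!(model.constraints, (copy(f.terms), f.constant, s))
    return MOI.ConstraintIndex{typeof(f), typeof(s)}(length(model.constraints))
end
```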

### Statuses

Solver wrappers should document how the low-level statuses map to the MOI statuses. Statuses like NEARLY_FEASIBLE_POINT and INFEASIBLE_POINT are designed to be used when the solver explicitly indicates that relaxed tolerances are satisfied or that the returned point is infeasible, respectively.

### Naming

MOI solver interfaces may be in the same package as the solver itself (either the C wrapper if the solver is accessible through C, or the Julia code if the solver is written in Julia). The guideline is to name the file containing the MOI wrapper src/MOI_wrapper.jl and the tests test/MOI_wrapper.jl. If the MOI wrapper implementation is spread across several files, they should be stored in a src/MOI_wrapper folder and included by a src/MOI_wrapper/MOI_wrapper.jl file. In some cases it may be more appropriate to host the MOI wrapper in its own package; in this case it is recommended that the MOI wrapper package be named MathOptInterfaceXXX, where XXX is the solver name.

By convention, optimizers should not be exported and should be named PackageName.Optimizer. For example, CPLEX.Optimizer, Gurobi.Optimizer, and Xpress.Optimizer.

### Testing guideline

The skeleton below can be used for the wrapper test file of a solver named FooBar. A few bridges are used as examples; you can find more bridges in the Bridges section.

using MathOptInterface
const MOI = MathOptInterface
const MOIT = MOI.Test
const MOIB = MOI.Bridges

const optimizer = FooBarOptimizer()
const config = MOIT.TestConfig(atol=1e-6, rtol=1e-6)

@testset "MOI Continuous Linear" begin
    # If the optimizer does not support the Interval set, the SplitInterval
    # bridge can be used to split each f-in-Interval(lb, ub) constraint into
    # a constraint f-in-GreaterThan(lb) and a constraint f-in-LessThan(ub).
    MOIT.contlineartest(MOIB.SplitInterval{Float64}(optimizer), config)
end

@testset "MOI Continuous Conic" begin
    # If the solver supports the rotated second-order cone, the GeoMean bridge
    # can be used to make it support geometric mean cone constraints.
    # If it additionally supports positive semidefinite cone constraints, the
    # RootDet bridge can be used to make it support root-det cone constraints.
    MOIT.contconictest(MOIB.RootDet{Float64}(MOIB.GeoMean{Float64}(optimizer)), config)
end

@testset "MOI Integer Conic" begin
    MOIT.intconictest(optimizer, config)
end

If the wrapper does not support building the model incrementally (i.e. with add_variable and add_constraint), the line const optimizer = FooBarOptimizer() can be replaced with

const MOIU = MOI.Utilities
# Include here the functions/sets supported by the solver wrapper (not those that are supported through bridges)
MOIU.@model FooBarModelData () (EqualTo, GreaterThan, LessThan) (Zeros, Nonnegatives, Nonpositives) () (SingleVariable,) (ScalarAffineFunction,) (VectorOfVariables,) (VectorAffineFunction,)
const optimizer = MOIU.CachingOptimizer(FooBarModelData{Float64}(), FooBarOptimizer())