[Do not merge] a script to benchmark impact of lazy PTDF#114
Performance Results
The iterative approach converges very fast, even faster than the lazy constraint approach:

```julia
julia> function solve_with_loop(model)
           jmp = POM.IOM.get_optimization_container(model).JuMPmodel
           # Collect the flow rate bound constraints so they can be deleted
           # from the model and re-added lazily.
           constraints_lb, constraints_ub = Dict{Any,Any}(), Dict{Any,Any}()
           for (k, v) in model.internal.container.constraints
               if k == POM.ConstraintKey{POM.FlowRateConstraint,PSY.Line}("lb")
                   for vi in v
                       constraints_lb[vi] = JuMP.constraint_object(vi)
                   end
               elseif k == POM.ConstraintKey{POM.FlowRateConstraint,PSY.Line}("ub")
                   for vi in v
                       constraints_ub[vi] = JuMP.constraint_object(vi)
                   end
               end
           end
           JuMP.delete(jmp, collect(keys(constraints_lb)))
           JuMP.delete(jmp, collect(keys(constraints_ub)))
           JuMP.set_silent(jmp)
           total_solve_time = 0.0
           while true
               start_time = time()
               JuMP.optimize!(jmp)
               total_solve_time += time() - start_time
               # Re-add every bound the current solution violates. Collect the
               # violated keys first: mutating a Dict while iterating it is
               # undefined behavior in Julia.
               violated_lb = [k for (k, c) in constraints_lb if JuMP.value(c.func) < c.set.lower]
               violated_ub = [k for (k, c) in constraints_ub if JuMP.value(c.func) > c.set.upper]
               for k in violated_lb
                   c = constraints_lb[k]
                   JuMP.@constraint(jmp, c.func in c.set)
                   delete!(constraints_lb, k)
               end
               for k in violated_ub
                   c = constraints_ub[k]
                   JuMP.@constraint(jmp, c.func in c.set)
                   delete!(constraints_ub, k)
               end
               n_constraints_added = length(violated_lb) + length(violated_ub)
               n_constraints_added == 0 && break
               @show n_constraints_added
           end
           @show total_solve_time
           return jmp
       end
solve_with_loop (generic function with 1 method)
```
```julia
julia> solve_with_loop(model)
n_constraints_added = 129
n_constraints_added = 14
total_solve_time = 33.15950894355774
A JuMP Model
├ mode: DIRECT
├ solver: Gurobi
├ objective_sense: MIN_SENSE
│ └ objective_function_type: JuMP.AffExpr
├ num_variables: 302472
├ num_constraints: 618311
│ ├ JuMP.VariableRef in MOI.GreaterThan{Float64}: 238176
│ ├ JuMP.AffExpr in MOI.GreaterThan{Float64}: 36235
│ ├ JuMP.VariableRef in MOI.ZeroOne: 64296
│ ├ JuMP.AffExpr in MOI.LessThan{Float64}: 86212
│ ├ JuMP.VariableRef in MOI.LessThan{Float64}: 131280
│ └ JuMP.AffExpr in MOI.EqualTo{Float64}: 62112
└ Names registered in the model: none
```
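The loop above is plain row generation: solve a relaxation without the flow bounds, re-add only the bounds the solution violates, and repeat until nothing is violated. To make the pattern concrete, here is a solver-free toy sketch of the same loop in a PTDF-like setting. Everything here is hypothetical (the function name, the PTDF values, and the limits are made up), and an analytic minimum stands in for `JuMP.optimize!`:

```julia
# Toy row generation: maximize an injection p subject to p <= cap and a flow
# limit ptdf[l] * p <= limit[l] on each line. Start with only p <= cap in the
# "model" and re-add line limits lazily, as solve_with_loop does.
function toy_lazy_flows(cap, ptdf::Vector{Float64}, limit::Vector{Float64})
    pending = Set(eachindex(ptdf))     # line limits not yet in the model
    active = Int[]                     # line limits added so far
    p = 0.0
    while true
        # Analytic "optimize!": p is capped by cap and every active line limit.
        p = minimum([cap; [limit[l] / ptdf[l] for l in active]])
        # Feasibility check against the limits still outside the model.
        violated = [l for l in pending if ptdf[l] * p > limit[l]]
        isempty(violated) && break
        append!(active, violated)      # re-add only the violated rows
        setdiff!(pending, violated)
    end
    return p, length(active)
end

# Only two of the five limits ever bind; three are never added to the model.
p, n_added = toy_lazy_flows(100.0, [0.2, 0.5, 0.1, 0.8, 0.4],
                            [30.0, 20.0, 90.0, 16.0, 50.0])
```

As in the real benchmark, the loop terminates with far fewer rows in the model than the full formulation carries, which is where the win over solving with all 131k flow bounds comes from.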
Here is the runtime in seconds for various cases:
The last line is the 201 s case. Its log shows the lazy constraint approach working on a smaller problem, whereas the loop approach has an intermediate solve. So the lazy constraint approach is solving a smaller problem; it just takes longer. Is this just a special case? An outlier? Or something more fundamental in HiGHS? I don't know at the moment.
The answer is that the solve times are highly variable across random seeds. I get anywhere between 180 and 650 seconds. It depends on whether we manage to solve the second problem almost at the root node, or whether we start branching. I'll get some instances for @Opt-Mucca since the ones where we branch are ideal candidates for his parallel MIP solver.
So this is actually super interesting.
The win depends on the MIPGap. Previously I had been running with MIPGap=0.01 and couldn't get the lazy constraints to make a difference in solve time; they only start to matter at a much smaller MIPGap.
Nevertheless, I think this is strong enough evidence to pursue. It will probably make an even bigger difference as we move to larger systems.
Next I think we need to show that an iterative HiGHS solve is a performance win, which is less obvious.
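For the HiGHS experiment, one option is to reuse `solve_with_loop` unchanged and only swap how the model is attached to the solver. A minimal configuration sketch, assuming HiGHS.jl is installed; the `mip_rel_gap` value is illustrative, not a recommendation:

```julia
using JuMP, HiGHS

# Sketch: build the JuMP model in direct mode against HiGHS, mirroring the
# DIRECT-mode Gurobi model above, so constraints added inside the loop go
# straight to the solver and HiGHS can reuse information between re-solves.
jmp = JuMP.direct_model(HiGHS.Optimizer())
# Use a tight gap: with MIPGap = 0.01 the lazy constraints made no difference.
JuMP.set_attribute(jmp, "mip_rel_gap", 1e-4)
```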
Results for the "Without" and "With Lazy=1" configurations (detailed output not preserved).