Parallel ParticleSwarm #1125

Open
wants to merge 6 commits into base: master
50 changes: 43 additions & 7 deletions src/multivariate/solvers/zeroth_order/particle_swarm.jl
@@ -2,21 +2,27 @@ struct ParticleSwarm{Tl, Tu} <: ZerothOrderOptimizer
lower::Tl
upper::Tu
n_particles::Int
batched::Bool
end

ParticleSwarm(a, b, c) = ParticleSwarm(a, b, c, false)

"""
# Particle Swarm
## Constructor
```julia
ParticleSwarm(; lower = [],
upper = [],
n_particles = 0)
n_particles = 0,
batched = false)
```

The constructor takes 3 keywords:
The constructor takes 4 keywords:
* `lower = []`, a vector of lower bounds, unbounded below if empty or `-Inf`'s
* `upper = []`, a vector of upper bounds, unbounded above if empty or `Inf`'s
* `n_particles = 0`, the number of particles in the swarm, defaults to at least three
* `batched = false`, if true, the objective function is evaluated on a matrix
of column vectors.

## Description
The Particle Swarm implementation in Optim.jl is the so-called Adaptive Particle
@@ -27,13 +33,26 @@ particle and move it away from its (potentially, and probably) local optimum, to
improve the ability to find a global optimum. Of course, this comes at the cost
of slower convergence, but hopefully the algorithm converges to the global optimum as a result.

If `batched = true` is specified, the objective function must also have a 2-argument
method `f(val, X)`, where the candidate points are the columns of `X` and the
corresponding objective values are written into `val`. This makes it possible to
parallelize the function evaluations, e.g. with:

```julia
objfun(x) = sum(x)
function objfun(val, X)
Threads.@threads for i in axes(X, 2)
val[i] = objfun(@view(X[:, i]))
end
end
```

Note that convergence is never assessed for `ParticleSwarm`. It will run until it
reaches the maximum number of iterations set in `Optim.Options(iterations=x)`.

## References
- [1] Zhan, Zhang, and Chung. Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Volume 39, Issue 6 (2009): 1362–1381
"""
ParticleSwarm(; lower = [], upper = [], n_particles = 0) = ParticleSwarm(lower, upper, n_particles)
ParticleSwarm(; lower = [], upper = [], n_particles = 0, batched=false) = ParticleSwarm(lower, upper, n_particles, batched)

Base.summary(::ParticleSwarm) = "Particle Swarm"
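As a usage sketch of the new keyword (the objective `objfun` and all parameter values here are illustrative, not part of the PR), a batched run could look like:

```julia
using Optim

# Serial method: objective for a single candidate point.
objfun(x) = sum(abs2, x .- 1)

# Batched method: evaluate every column of X, writing the results into val.
# Threads.@threads distributes the columns across the available Julia threads.
function objfun(val, X)
    Threads.@threads for i in axes(X, 2)
        val[i] = objfun(@view(X[:, i]))
    end
end

res = Optim.optimize(objfun, zeros(2),
                     ParticleSwarm(n_particles = 10, batched = true),
                     Optim.Options(iterations = 100))
```

Because convergence is never assessed for `ParticleSwarm`, the `iterations` option is the effective stopping criterion here.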

@@ -93,7 +112,7 @@ function initial_state(method::ParticleSwarm, options, d, initial_x::AbstractArr
end

@assert length(lower) == length(initial_x) "limits must be of same length as x_initial."
@assert all(upper .> lower) "upper must be greater than lower"
@assert all(upper .>= lower) "upper must be greater than or equal to lower"

if method.n_particles > 0
if method.n_particles < 3
@@ -157,8 +176,14 @@ function initial_state(method::ParticleSwarm, options, d, initial_x::AbstractArr
X_best[j, 1] = initial_x[j]
end

for i in 2:n_particles
score[i] = value(d, X[:, i])
if method.batched
# We could pass a view of X here, but then the objective would also
# be compiled for the view type; we avoid that.
value(d, @view(score[2:n_particles]), X[:, 2:n_particles])
else
for i in 2:n_particles
score[i] = value(d, X[:, i])
end
end

ParticleSwarmState(
@@ -246,7 +271,11 @@ function update_state!(f, state::ParticleSwarmState{T}, method::ParticleSwarm) w
if state.limit_search_space
limit_X!(state.X, state.lower, state.upper, state.n_particles, n)
end
compute_cost!(f, state.n_particles, state.X, state.score)
if method.batched
compute_cost_batched!(f, state.X, state.score)
else
compute_cost!(f, state.n_particles, state.X, state.score)
end

state.iteration += 1
false
@@ -482,3 +511,10 @@ function compute_cost!(f,
end
nothing
end

function compute_cost_batched!(f,
X::Matrix,
score::Vector)
value(f, score, X)
nothing
end
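Since `compute_cost_batched!` simply forwards the whole matrix to the objective, the batched hook is not tied to threads. A process-based variant could look like this (a sketch, assuming worker processes have already been added with `addprocs`; `objfun` is the same illustrative objective as in the docstring, not part of the PR):

```julia
using Distributed

# The single-point method must be defined on every worker.
@everywhere objfun(x) = sum(abs2, x .- 1)

# Batched method backed by pmap: each column of X is shipped to a worker
# and the returned vector of objective values is copied into val.
function objfun(val, X)
    val .= pmap(i -> objfun(X[:, i]), axes(X, 2))
end
```

Whether threads or processes pay off depends on how expensive a single objective evaluation is relative to the dispatch overhead.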
13 changes: 11 additions & 2 deletions test/multivariate/solvers/zeroth_order/particle_swarm.jl
@@ -2,14 +2,21 @@
# TODO: Run on MultivariateProblems.UnconstrainedProblems?
Random.seed!(100)

function f_s(x::Vector)
function f_s(x::AbstractVector)
(x[1] - 5.0)^4
end

function rosenbrock_s(x::Vector)
function rosenbrock_s(x::AbstractVector)
(1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end

function rosenbrock_s(val::AbstractVector, X::AbstractMatrix)
for i in axes(X, 2)
val[i] = rosenbrock_s(X[:, i])
end
end


initial_x = [0.0]
upper = [100.0]
lower = [-100.0]
@@ -32,4 +39,6 @@
@test summary(res) == "Particle Swarm"
res = Optim.optimize(rosenbrock_s, initial_x, ParticleSwarm(n_particles = n_particles), options)
@test summary(res) == "Particle Swarm"
res = Optim.optimize(rosenbrock_s, initial_x, ParticleSwarm(n_particles = n_particles, batched=true), options)
@test summary(res) == "Particle Swarm"
end
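Beyond checking the summary string, the batched path can be sanity-checked against the serial path with a direct comparison, e.g. (a sketch using the test file's `rosenbrock_s` methods; this assertion is not part of the PR's test suite):

```julia
X = randn(2, 5)
val = zeros(5)
rosenbrock_s(val, X)   # batched 2-argument method fills val in place
@test val == [rosenbrock_s(X[:, i]) for i in axes(X, 2)]
```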