7 changes: 6 additions & 1 deletion .gitignore
@@ -2,6 +2,9 @@
## Python
*.pyc

## Virtual Environments
.venv

## Packaging
.pypirc
.cache
@@ -11,12 +14,14 @@ dist
*.log
*.patch
*.diff
pyproject.toml

**Review comment (Owner):** Also add `_version.py`

## Sublime
*.sublime-project
*.sublime-workspace

## VSCode
.vscode

## Data
logs/
exdata/
14 changes: 14 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,14 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: check-case-conflict
      - id: end-of-file-fixer
      - id: trailing-whitespace

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.14.0
    hooks:
      - id: ruff
        args: ['--fix']
      - id: ruff-format
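With a config like the one above in place, pre-commit's standard workflow applies (these are the stock pre-commit CLI commands, not anything specific to this repository):

```shell
# Install the tool, register the git hook, then lint/format the whole tree once
pip install pre-commit
pre-commit install
pre-commit run --all-files
```

After `pre-commit install`, the hooks also run automatically on every `git commit`.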
1 change: 1 addition & 0 deletions .python-version
@@ -0,0 +1 @@
3.11
9 changes: 0 additions & 9 deletions MANIFEST.in

This file was deleted.

19 changes: 0 additions & 19 deletions Pipfile

This file was deleted.

666 changes: 0 additions & 666 deletions Pipfile.lock

This file was deleted.

49 changes: 34 additions & 15 deletions README.md
@@ -28,7 +28,7 @@ Paper + bibtex are at the bottom of this file, or check out [Giuse's page](https

## Installation

You can install DiBB from PyPI using `pip install dibb[cma]` or `dibb[lmmaes]`, which installs both DiBB and a solid underlying optimizer ([CMA-ES](https://github.com/CMA-ES/pycma) or [LM-MA-ES](https://github.com/giuse/lmmaes), respectively). Running `pip install dibb[all]` installs all currently available optimizers.

If you just `pip install dibb`, it will install only DiBB (and Parameter Guessing); you can then separately install an optimizer of your choice. Just make sure that
[there is already a wrapper available for it](dibb/opt_wrappers/__init__.py) -- if not, [writing your own](dibb/opt_wrappers/README_interface.md) is very easy.
@@ -44,7 +44,8 @@ DiBB is compatible with several traditional workflows from the larger family of
```python
# Let's define a simple function to optimize:
import numpy as np
def sphere(coords):
    return np.dot(coords, coords)

# You can use the classic `fmin`, if you need
from dibb import fmin
@@ -65,7 +66,7 @@ The algorithm used is more correctly the partially-separable version of CMA crea
To unlock the full potential of DiBB you should get your hands on a few machines (one per block, plus one for the head process), then:

- `pip install ray` on each of them
- Set up basic SSH key-pair authentication (here's a [quick script](https://github.com/giuse/devops/blob/master/pair_ssh_keys.sh))
- Mark down the IP addresses
- Customize a copy of [`ray_cluster_config.yaml`](ray_cluster_config.yaml)
(instructions inside)
@@ -104,44 +105,60 @@ You can find the complete list of accepted parameters, their descriptions and de
First you will need to install the requirements (besides DiBB itself, of course): 1. an optimizer, 2. a neural network, and 3. an RL control environment.

```bash
pip install "dibb[cma] tinynet gymnasium[classic_control]"
```

Alternatively, you can just run:

```bash
pip install "dibb[neuroevolution-example]"
```

_(the quotes escape the square brackets for `zsh`, currently the default shell on macOS)_

Then copy+paste the example below to a `.py` file and run it.
It should not take long, even on your local machine -- or you can launch a cluster of 3 machines first (using `ray_cluster_config.yaml`): the example below will run on the cluster with no further changes.

```python
# INSTALL REQUIREMENTS FIRST: dibb (with optimizer), neural network, RL environment:
# $ pip install "dibb[cma] tinynet gymnasium[classic_control]"

import numpy as np
import ray
import tinynet # https://github.com/giuse/tinynet/
import gymnasium as gym
import warnings # silence some gym warnings
warnings.filterwarnings(action='ignore', category=UserWarning, module='gym')
from dibb import DiBB

# Set up the environment and network
env = gym.make("CartPole-v1")
nactions = env.action_space.n
obs, info = env.reset()
ninputs = obs.size
# Just 2 neurons (CartPole has only 2 actions: left and right)
# with linear activation `f(x)=x` are already enough
net = tinynet.FFNN([ninputs, nactions], act_fn=lambda x: x)

# The fitness function accumulates the episodic reward (basic gym gameplay)
def play_gym(ind, render=False):
    reset_result = env.reset(seed=1)  # fix random seed: faster training but less generalization
    # Handle both the old and the new gymnasium reset API
    if isinstance(reset_result, tuple):
        obs, info = reset_result
    else:
        obs = reset_result

    net.set_weights(ind)  # set the weights into the network
    score = 0
    done = False
    while not done:
        if render:
            env.render()  # you can watch it play!
        action = net.activate(obs).argmax()  # pick action of neuron with highest act
        obs, rew, terminated, truncated, info = env.step(action)
        done = terminated or truncated  # episode ended (task completed, failed, or truncated)?

        # With NE we ignore the availability of per-action reward (it's rarely
        # available in nature anyway), and just accumulate it over the episode
        score += rew
    return score
@@ -153,7 +170,7 @@ dibb_config = {
    'fit_fn' : play_gym,
    'minim_task' : False,  # IMPORTANT: in this task we want to _maximize_ the score!
    'ndims' : net.nweights,
    'nblocks' : int(net.noutputs),  # Let's use a block for each output neuron (2 here)
    'optimizer' : 'default',  # CMA or LMMAES if installed, else Parameter Guessing
    # 'optimizer_options' : {'popsize' : 50}  # Ray manages parallel fitness evals
}
@@ -162,11 +179,13 @@ dibb = DiBB(**dibb_config).run(ngens=15) # Cartpole is not a challenge for DPS/N
###################################################

# Watch the best individual play!
env = gym.make("CartPole-v1", render_mode="human") # use render mode (not supported during training due to serialization issues)
best_fit = play_gym(dibb.best_ind, render=True)
print("Best fitness:", best_fit)

# You can even resume the run for a few more generations if needed:
# print("Resume training for 15 more generations")
# env = gym.make("CartPole-v1") # back to normal mode
# dibb.run(ngens=15)
# print("Best fitness:", play_gym(dibb.best_ind, render=True))

@@ -227,7 +246,7 @@ import os; os.environ['PYTHONINSPECT'] = 'TRUE'
3. Define a `__call__(self, ind)` method to take an individual and return its fitness (on one run; use the `ntrials` option for nondeterministic fitnesses)
4. Pass to DiBB a string with a call to the constructor: `DiBB(fit_fn="Fitness()")`
5. DiBB will send the string to the remote machines, which will then run `fitness = eval("Fitness()")`, thus locally creating the exact environment required for individual evaluation
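The steps above can be sketched as follows (the class body and the target values here are illustrative assumptions; only the `fit_fn` string convention comes from DiBB):

```python
import numpy as np

class Fitness:
    def __init__(self):
        # Stand-in for expensive one-time setup (e.g. building an RL environment);
        # this runs once per remote worker, not once per evaluation
        self.target = np.array([1.0, 2.0, 3.0])

    def __call__(self, ind):
        # Fitness of one individual on one run (use `ntrials` for noisy fitnesses)
        return float(np.sum((np.asarray(ind) - self.target) ** 2))

# DiBB ships the string to each worker, which reconstructs the object locally:
fitness = eval("Fitness()")
print(fitness([1.0, 2.0, 3.0]))  # → 0.0
```

Passing the constructor call as a string, rather than an instance, avoids serializing heavyweight state across the cluster.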
- Debugging is best done following [Ray's guidelines and tools](https://docs.ray.io/en/latest/ray-observability/ray-debugging.html). In particular, be aware that Ray silently collects uncaught exceptions on the remote machines, and upon errors it does not flush the local stdout buffers to the main node.

This means that if your code breaks a remote machine in the cluster, it will look like DiBB is failing silently and without support. There is nothing we can do about it (yet). Please be patient: either trust DiBB and simplify your fitness, or carefully debug and submit a fix as a pull request if it's our fault. Thank you!

@@ -275,5 +294,5 @@ This work has been published at GECCO 2022, The 24th Genetic and Evolutionary Co

The experiment code to reproduce our COCO results is [available here](https://github.com/eXascaleInfolab/dibb_coco), created and maintained by Luca [(@rolshoven)](https://github.com/rolshoven).

Since 2021 -- initially as part of his Master's thesis, then out of his personal quest for excellence -- Luca Sven Rolshoven [(@rolshoven)](https://github.com/rolshoven) has contributed to this project with engaging discussions, reliable debugging, the original "hooks" feature, [fundamental testing](https://github.com/eXascaleInfolab/dibb_coco), and managing our cluster of 25 decommissioned office computers :)
The git history of this repository has been wiped at publication for privacy concerns, but his contribution should not go unacknowledged nor underestimated. Thanks Luca!
1 change: 0 additions & 1 deletion VERSION

This file was deleted.

10 changes: 0 additions & 10 deletions dibb/__init__.py

This file was deleted.

45 changes: 0 additions & 45 deletions dibb/opt_wrappers/__init__.py

This file was deleted.

91 changes: 91 additions & 0 deletions pyproject.toml
@@ -0,0 +1,91 @@
[build-system]
requires = ["hatchling>=1.25", "hatch-vcs"]
build-backend = "hatchling.build"

[project]
name = "dibb"
dynamic = ["version"]
description = "Distributed Black Box Optimization Framework"
readme = "README.md"
requires-python = ">=3.11"
license = { text = "MIT" }
authors = [
{ name = "Giuseppe Cuccu", email = "[email protected]" },
{ name = "Luca Rolshoven", email = "[email protected]" },
]
keywords = ["black-box optimization", "distributed computation"]
urls = { "Homepage" = "https://github.com/giuse/dibb" }

dependencies = [
"numpy",
"ray[default]",
]

[project.optional-dependencies]
lmmaes = ["lmmaes"]
cma = ["cma"]
all = ["lmmaes", "cma"]
quality = ["ruff>=0.14.0", "pre-commit>=4.3.0"]
tests = ["pytest>=7.4.0", "deepdiff", "pip>=25.2"]
dev = ["dibb[quality,tests]"]
neuroevolution-example = [
"dibb[cma]",
"tinynet",
"gymnasium[classic_control]",
]

[tool.hatch.version]
source = "vcs"

[tool.hatch.build.hooks.vcs]
version-file = "src/dibb/_version.py"

[tool.hatch.build.targets.wheel]
packages = ["src/dibb"]
include = [
"README.md",
"LICENSE.txt",
"ray_cluster_config.yaml",
]

[tool.hatch.build.targets.sdist]
include = [
"README.md",
"LICENSE.txt",
"ray_cluster_config.yaml",
]

[tool.uv]
package = true
dev-dependencies = [
"ptpython",
"brutelogger",
]

[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "UP", "W"]
ignore = ["E501"]
preview = true

[tool.ruff.lint.isort]
known-first-party = ["dibb"]
lines-after-imports = 2

[tool.ruff.lint.per-file-ignores]
"src/dibb/_version.py" = ["UP007"]

[tool.ruff.format]
quote-style = "double"
indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"

[tool.ruff.lint.pydocstyle]
convention = "google"

[tool.pytest.ini_options]
pythonpath = ["src"]
testpaths = ["tests"]
30 changes: 0 additions & 30 deletions setup.cfg

This file was deleted.
