- Click the big green button "Use this template", or click here.
- Enter a repository name and click "Create repository from template".
- In the new repository, complete the project setup by editing the `cookiecutter.json` file (an example follows this list).
- Hit Cmd + S and then Enter to make a commit (the commit message doesn't really matter).
- Wait for the Setup Repository Action to complete.
- That's it, easy isn't it?
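For reference, `cookiecutter.json` is a flat JSON object mapping each template variable to its default value (a list of values produces a choice prompt). The field names below are only illustrative; your template's file will define its own variables:

```json
{
    "project_name": "project_name",
    "repo_name": "{{ cookiecutter.project_name.lower().replace(' ', '_') }}",
    "author_name": "Your name (or your organization/company/team)",
    "description": "A short description of the project.",
    "open_source_license": ["MIT", "BSD-3-Clause", "No license file"]
}
```

Note that `repo_name` here corresponds to the `{{cookiecutter.repo_name}}` placeholder that appears in the directory tree further down.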
On June 6, 2019, GitHub introduced Repository Templates, giving users an easy way to share boilerplate for their projects. The feature is fantastic but, in my opinion, has seen little adoption for one reason. Read the full blog post to learn how to template new projects using repository templates and GitHub Actions.

# Cookiecutter Data Science
A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.
- Python 2.7 or 3.5+
- Cookiecutter Python package >= 1.4.0: this can be installed with pip or conda, depending on how you manage your Python packages:
$ pip install cookiecutter
or
$ conda config --add channels conda-forge
$ conda install cookiecutter
To start a new project, run:

cookiecutter -c v1 https://github.com/drivendata/cookiecutter-data-science
Cookiecutter data science is moving to v2 soon, which will entail using the command `ccds ...` rather than `cookiecutter ...`. The `cookiecutter` command will continue to work, and this version of the template will still be available. To use the legacy template, you will need to explicitly use `-c v1` to select it. Please update any scripts/automation you have to append the `-c v1` option (as above), which is available now.
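For example, a script that currently generates projects from this template would change like this (a minimal sketch; the URL and flag are the ones shown above):

```bash
# Before: follows the default branch, which will move to v2.
cookiecutter https://github.com/drivendata/cookiecutter-data-science

# After: pins the legacy v1 template explicitly.
cookiecutter -c v1 https://github.com/drivendata/cookiecutter-data-science
```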
The directory structure of your new project looks like this:
├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- Makes project pip installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│   └── {{cookiecutter.repo_name}} <- Python package with the project's source code
│       ├── __init__.py            <- Makes {{cookiecutter.repo_name}} a Python package
│       │
│       ├── data                   <- Scripts to download or generate data
│       │   └── make_dataset.py
│       │
│       ├── features               <- Scripts to turn raw data into features for modeling
│       │   └── build_features.py
│       │
│       ├── models                 <- Scripts to train models and then use trained models to make
│       │   │                         predictions
│       │   ├── predict_model.py
│       │   └── train_model.py
│       │
│       └── visualization          <- Scripts to create exploratory and results oriented visualizations
│           └── visualize.py
│
└── noxfile.py         <- nox file with settings for running nox; see nox.readthedocs.io
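Since `setup.py` is what makes the project pip-installable, here is a minimal sketch of that kind of file; the metadata values (name, version, description) are illustrative and may differ in the generated project:

```python
# setup.py -- illustrative sketch; the generated file's metadata may differ.
from setuptools import find_packages, setup

setup(
    name="src",
    packages=find_packages(),
    version="0.1.0",
    description="A flexible project structure for doing and sharing data science work.",
)
```

After running `pip install -e .` once, modules under `src` can be imported from notebooks and scripts without manipulating `sys.path`.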
We welcome contributions! See the docs for guidelines.
To install the development requirements:

pip install -r requirements.txt

To run the tests:

py.test tests
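The template does not prescribe what the tests contain; as a hypothetical example of the kind of test that `py.test tests` would pick up (the filename and assertion below are illustrative, not part of the template):

```python
# tests/test_project_structure.py -- hypothetical example, not shipped with the template.
import os


def test_data_directories_exist():
    # The template's layout includes data/raw, data/interim, data/processed and data/external.
    for subdir in ("raw", "interim", "processed", "external"):
        assert os.path.isdir(os.path.join("data", subdir))
```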