diff --git a/docs/assets/hub/models-linked-datasets.png b/docs/assets/hub/models-linked-datasets.png new file mode 100644 index 000000000..733f73dc2 Binary files /dev/null and b/docs/assets/hub/models-linked-datasets.png differ diff --git a/docs/assets/hub/models-linked-spaces.png b/docs/assets/hub/models-linked-spaces.png new file mode 100644 index 000000000..894dd9c8c Binary files /dev/null and b/docs/assets/hub/models-linked-spaces.png differ diff --git a/docs/assets/hub/models-usage-modal.png b/docs/assets/hub/models-usage-modal.png new file mode 100644 index 000000000..96116e71f Binary files /dev/null and b/docs/assets/hub/models-usage-modal.png differ diff --git a/docs/assets/hub/models-usage.png b/docs/assets/hub/models-usage.png new file mode 100644 index 000000000..ae5eccf0c Binary files /dev/null and b/docs/assets/hub/models-usage.png differ diff --git a/docs/hub/_sections.yml b/docs/hub/_sections.yml index d07c87609..3d1f97095 100644 --- a/docs/hub/_sections.yml +++ b/docs/hub/_sections.yml @@ -4,47 +4,5 @@ - local: repositories-main title: Repositories -- local: main - title: Hub documentation - -- local: model-repos - title: Everything you wanted to know about repos - -- local: adding-a-model - title: Adding a model to the Hub - -- local: libraries - title: Libraries - -- local: inference - title: Inference - -- local: endpoints - title: Hub API Endpoints - -- local: spaces - title: Spaces documentation - -- local: org-cards - title: Organization cards - -- local: adding-a-library - title: Integrate a library with the Hub - -- local: searching-the-hub - title: How to search the Hub efficiently - -- local: how-to-downstream - title: How to download files from the Hub - -- local: how-to-upstream - title: How to create repositories and upload files to the Hub - -- local: how-to-inference - title: How to use the Inference API from the Hub library - -- local: adding-a-task - title: Adding a new task to the Hub - -- local: security - title: Security tips +- local: models-main + title: Models diff --git a/docs/hub/adding-a-library.md b/docs/hub/deprecated/adding-a-library.md similarity index 100% rename from docs/hub/adding-a-library.md rename to docs/hub/deprecated/adding-a-library.md diff --git a/docs/hub/adding-a-model.md b/docs/hub/deprecated/adding-a-model.md similarity index 100% rename from docs/hub/adding-a-model.md rename to docs/hub/deprecated/adding-a-model.md diff --git a/docs/hub/adding-a-task.md b/docs/hub/deprecated/adding-a-task.md similarity index 100% rename from docs/hub/adding-a-task.md rename to docs/hub/deprecated/adding-a-task.md diff --git a/docs/hub/datasets-tags.md b/docs/hub/deprecated/datasets-tags.md similarity index 100% rename from docs/hub/datasets-tags.md rename to docs/hub/deprecated/datasets-tags.md diff --git a/docs/hub/endpoints.md b/docs/hub/deprecated/endpoints.md similarity index 100% rename from docs/hub/endpoints.md rename to docs/hub/deprecated/endpoints.md diff --git a/docs/hub/how-to-downstream.md b/docs/hub/deprecated/how-to-downstream.md similarity index 100% rename from docs/hub/how-to-downstream.md rename to docs/hub/deprecated/how-to-downstream.md diff --git a/docs/hub/how-to-inference.md b/docs/hub/deprecated/how-to-inference.md similarity index 100% rename from docs/hub/how-to-inference.md rename to docs/hub/deprecated/how-to-inference.md diff --git a/docs/hub/how-to-upstream.md b/docs/hub/deprecated/how-to-upstream.md similarity index 100% rename from docs/hub/how-to-upstream.md rename to docs/hub/deprecated/how-to-upstream.md diff 
--git a/docs/hub/inference.md b/docs/hub/deprecated/inference.md similarity index 100% rename from docs/hub/inference.md rename to docs/hub/deprecated/inference.md diff --git a/docs/hub/input-examples.md b/docs/hub/deprecated/input-examples.md similarity index 100% rename from docs/hub/input-examples.md rename to docs/hub/deprecated/input-examples.md diff --git a/docs/hub/libraries.md b/docs/hub/deprecated/libraries.md similarity index 100% rename from docs/hub/libraries.md rename to docs/hub/deprecated/libraries.md diff --git a/docs/hub/main.md b/docs/hub/deprecated/main.md similarity index 100% rename from docs/hub/main.md rename to docs/hub/deprecated/main.md diff --git a/docs/hub/model-repos.md b/docs/hub/deprecated/model-repos.md similarity index 100% rename from docs/hub/model-repos.md rename to docs/hub/deprecated/model-repos.md diff --git a/docs/hub/org-cards.md b/docs/hub/deprecated/org-cards.md similarity index 100% rename from docs/hub/org-cards.md rename to docs/hub/deprecated/org-cards.md diff --git a/docs/hub/searching-the-hub.md b/docs/hub/deprecated/searching-the-hub.md similarity index 100% rename from docs/hub/searching-the-hub.md rename to docs/hub/deprecated/searching-the-hub.md diff --git a/docs/hub/security.md b/docs/hub/deprecated/security.md similarity index 100% rename from docs/hub/security.md rename to docs/hub/deprecated/security.md diff --git a/docs/hub/spaces.md b/docs/hub/deprecated/spaces.md similarity index 100% rename from docs/hub/spaces.md rename to docs/hub/deprecated/spaces.md diff --git a/docs/hub/tutorial-add-library.md b/docs/hub/deprecated/tutorial-add-library.md similarity index 100% rename from docs/hub/tutorial-add-library.md rename to docs/hub/deprecated/tutorial-add-library.md diff --git a/docs/hub/hugging-face-hub.md b/docs/hub/hugging-face-hub.md index 8629c6bce..558778bd9 100644 --- a/docs/hub/hugging-face-hub.md +++ b/docs/hub/hugging-face-hub.md @@ -24,7 +24,7 @@ On it, you'll be able to upload and discover... Unlike other hosting solutions, the Hub offers **versioning, commit history, diffs, branches, and over a dozen library integrations**! You can learn more about the features that all repositories share in the **Repositories documentation**. ## Models -Models on the Hugging Face Hub allow for simple discovery and usage to maximize model impact. To promote responsible model usage and development, model repos are equipped with [Model Cards](TODO) to inform users of each model's limitations and biases. Additional [metadata](/docs/hub/model-repos#model-card-metadata) about info such as their tasks, languages, and metrics can be included, with training metrics charts even added if the repository contains [TensorBoard traces](https://huggingface.co/models?filter=tensorboard). It's also easy to add an **inference widget** to your model, allowing anyone to play with the model directly in the browser! For production settings, an API is provided to **instantly serve your model**. +Models on the Hugging Face Hub allow for simple discovery and usage to maximize model impact. To promote responsible model usage and development, model repos are equipped with [Model Cards](./models-cards) to inform users of each model's limitations and biases. Additional [metadata](/docs/hub/model-repos#model-card-metadata) about info such as their tasks, languages, and metrics can be included, with training metrics charts even added if the repository contains [TensorBoard traces](https://huggingface.co/models?filter=tensorboard). 
It's also easy to add an **inference widget** to your model, allowing anyone to play with the model directly in the browser! For production settings, an API is provided to **instantly serve your model**. To upload models to the Hub, or download models and integrate them into your work, explore the **Models documentation**. You can also choose from [**over a dozen frameworks**](/docs/hub/libraries) such as πŸ€— Transformers, Asteroid, and ESPnet that support the Hugging Face Hub. diff --git a/docs/hub/models-adding-libraries.md b/docs/hub/models-adding-libraries.md new file mode 100644 index 000000000..189fd712f --- /dev/null +++ b/docs/hub/models-adding-libraries.md @@ -0,0 +1,181 @@ +--- +title: Integrate a library with the Hub +--- + +# Integrate your library with the Hub + +The Hugging Face Hub aims to facilitate sharing machine learning models, checkpoints, and artifacts. This endeavor includes integrating the Hub into many of the amazing third-party libraries in the community. Some of the ones already integrated include [spaCy](https://spacy.io/usage/projects#huggingface_hub), [AllenNLP](https://allennlp.org/), and [timm](https://rwightman.github.io/pytorch-image-models/), among many others. Integration means users can download and upload files to the Hub directly from your library. We hope you will integrate your library and join us in democratizing artificial intelligence for everyone! + +Integrating the Hub with your library provides many benefits, including: + +- Free model hosting for you and your users. +- Built-in file versioning - even for huge files - made possible by [Git-LFS](https://git-lfs.github.com/). +- All public models are powered by the [Inference API](https://api-inference.huggingface.co/docs/python/html/index.html). +- In-browser widgets allow users to interact with your hosted models directly. + +This tutorial will help you integrate the Hub into your library so your users can benefit from all the features offered by the Hub. + +Before you begin, we recommend you create a [Hugging Face account](https://huggingface.co/join) from which you can manage your repositories and files. + +If you need help with the integration, feel free to open an [issue](https://github.com/huggingface/huggingface_hub/issues/new/choose), and we would be more than happy to help you! + +## Installation + +1. Install the `huggingface_hub` library with pip in your environment: + + ```bash + python -m pip install huggingface_hub + ``` + +2. Once you have successfully installed the `huggingface_hub` library, log in to your Hugging Face account: + + ```bash + huggingface-cli login + ``` + + ```bash + _| _| _| _| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _|_|_|_| _|_| _|_|_| _|_|_|_| + _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _| + _|_|_|_| _| _| _| _|_| _| _|_| _| _| _| _| _| _|_| _|_|_| _|_|_|_| _| _|_|_| + _| _| _| _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _| + _| _| _|_| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _| _| _| _|_|_| _|_|_|_| + + + Username: + Password: + ``` + +3. Alternatively, if you prefer working from a Jupyter or Colaboratory notebook, login with `notebook_login`: + + ```python + >>> from huggingface_hub import notebook_login + >>> notebook_login() + ``` + + `notebook_login` will launch a widget in your notebook from which you can enter your Hugging Face credentials. + +## Download files from the Hub + +Integration allows users to download your hosted files directly from the Hub using your library. 
+ +Use the `hf_hub_download` function to download files from your repository. Downloaded files are stored in your cache: `~/.cache/huggingface/hub`. You don't have to re-download the file the next time you use it, and for larger files, this can save a lot of time. Furthermore, if the repository is updated with a new version of the file, `huggingface_hub` will automatically download the latest version and store it in the cache for you. Users don't have to worry about updating their files. + +For example, download the `config.json` file from the [lysandre/arxiv-nlp](https://huggingface.co/lysandre/arxiv-nlp) repository: + +```python +>>> from huggingface_hub import hf_hub_download +>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json") +``` + +Download a specific version of the file by specifying the `revision` parameter. The `revision` parameter can be a branch name, tag, or commit hash. + +The commit hash must be a full-length hash instead of the shorter 7-character commit hash: + +```python +>>> from huggingface_hub import hf_hub_download +>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", revision="877b84a8f93f2d619faa2a6e514a32beef88ab0a") +``` + +Use the `cache_dir` parameter to change where a file is stored: + +```python +>>> from huggingface_hub import hf_hub_download +>>> hf_hub_download(repo_id="lysandre/arxiv-nlp", filename="config.json", cache_dir="/home/lysandre/test") +``` + +### Code sample + +We recommend adding a code snippet to explain how to use a model in your downstream library. + +![/docs/assets/hub/code_snippet.png](/docs/assets/hub/code_snippet.png) + +Add a code snippet by updating the [Libraries Typescript file](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts) with instructions for your model. For example, the [Asteroid](https://huggingface.co/asteroid-team) integration includes a brief code snippet for how to load and use an Asteroid model: + +```typescript +const asteroid = (model: ModelData) => +`from asteroid.models import BaseModel + +model = BaseModel.from_pretrained("${model.id}")`; +``` + +Doing so will also add a tag to your model so users can quickly identify models from your library. + +![/docs/assets/hub/libraries-tags.png](/docs/assets/hub/libraries-tags.png) + +## Upload files to the Hub + +You might also want to provide a method for creating model repositories and uploading files to the Hub directly from your library. The `huggingface_hub` library offers two ways to assist you with creating repositories and uploading files: + +- `create_repo` creates a repository on the Hub. +- `upload_file` directly uploads files to a repository on the Hub. + +### `create_repo` + +The `create_repo` method creates a repository on the Hub. Use the `repo_id` parameter to provide a name for your repository: + +```python +>>> from huggingface_hub import create_repo +>>> create_repo(repo_id="test-model") +'https://huggingface.co/lysandre/test-model' +``` + +When you check your Hugging Face account, you should now see a `test-model` repository under your namespace. + +### `upload_file` + +The `upload_file` method uploads files to the Hub. This method requires the following: + +- A path to the file to upload. +- The final path in the repository. +- The repository you wish to push the files to. + +For example: + +```python +>>> from huggingface_hub import upload_file +>>> upload_file( +... path_or_fileobj="/home/lysandre/dummy-test/README.md", +... path_in_repo="README.md", +... 
repo_id="lysandre/test-model" +... ) +'https://huggingface.co/lysandre/test-model/blob/main/README.md' +``` + +If you need to upload more than one file, look at the utilities offered by the `Repository` class [here](TODO). + +Once again, if you check your Hugging Face account, you should see the file inside your repository. + +Lastly, it is important to add a model card so users understand how to use your model. See [here](/docs/hub/model-repos#what-are-model-cards-and-why-are-they-useful) for more details about how to create a model card. + +## Set up the Inference API + +Our Inference API powers models uploaded to the Hub through your library. + +All third-party libraries are Dockerized, so you can install the dependencies you'll need for your library to work correctly. Add your library to the existing Docker images by navigating to the [Docker images folder](https://github.com/huggingface/api-inference-community/tree/main/docker_images). + +1. Copy the `common` folder and rename it with the name of your library (e.g. `docker/common` to `docker/your-awesome-library`). +2. There are four files you need to edit: + * List the packages required for your library to work in `requirements.txt`. + * Update `app/main.py` with the tasks supported by your model (see [here](https://github.com/huggingface/api-inference-community) for a complete list of available tasks). Look out for the `IMPLEMENT_THIS` flag to add your supported task. + + ```python + ALLOWED_TASKS: Dict[str, Type[Pipeline]] = { + "token-classification": TokenClassificationPipeline + } + ``` + + * For each task your library supports, modify the `app/pipelines/task_name.py` files accordingly. We have also added an `IMPLEMENT_THIS` flag in the pipeline files to guide you. If there isn't a pipeline that supports your task, feel free to add one. Open an [issue](https://github.com/huggingface/hub-docs/issues/new) here, and we will be happy to help you. + * Add your model and task to the `tests/test_api.py` file. For example, if you have a text generation model: + + ```python + TESTABLE_MODELS: Dict[str,str] = { + "text-generation": "my-gpt2-model" + } + ``` +3. Finally, run the following test to ensure everything works as expected: + + ```bash + pytest -sv --rootdir docker_images/your-awesome-library/docker_images/your-awesome-library/ + ``` + +With these simple but powerful methods, you brought the full functionality of the Hub into your library. Users can download files stored on the Hub from your library with `hf_hub_download`, create repositories with `create_repo`, and upload files with `upload_file`. You also set up Inference API with your library, allowing users to interact with your models on the Hub from inside a browser. \ No newline at end of file diff --git a/docs/hub/models-cards-co2.md b/docs/hub/models-cards-co2.md new file mode 100644 index 000000000..5e01cee8d --- /dev/null +++ b/docs/hub/models-cards-co2.md @@ -0,0 +1,54 @@ +--- +title: Carbon Emissions +--- + +

# Displaying carbon emissions for your model

+ +## Why is it beneficial to calculate the carbon emissions of my model? + +Training ML models is often energy-intensive and can produce a substantial carbon footprint, as described by [Strubell et al.](https://arxiv.org/abs/1906.02243). It's therefore important to *track* and *report* the emissions of models to get a better idea of the environmental impacts of our field. + + +## What information should I include about the carbon footprint of my model? + +If you can, you should include information about: +- where the model was trained (in terms of location) +- the hardware used -- e.g. GPU, TPU, or CPU, and how many +- training type: pre-training or fine-tuning +- the estimated carbon footprint of the model, calculated in real-time with the [Code Carbon](https://github.com/mlco2/codecarbon) package or after training using the [ML CO2 Calculator](https://mlco2.github.io/impact/). + +## Carbon footprint metadata + +You can add the carbon footprint data to the model card metadata (in the README.md file). The structure of the metadata should be: + +```yaml +--- +co2_eq_emissions: + emissions: "in grams of CO2" + source: "source of the information, either directly from AutoTrain, code carbon or from a scientific article documenting the model" + training_type: "pre-training or fine-tuning" + geographical_location: "as granular as possible, for instance Quebec, Canada or Brooklyn, NY, USA" + hardware_used: "how much compute and what kind, e.g. 8 v100 GPUs" +--- +``` + +## How is the carbon footprint of my model calculated? 🌎 + +Considering the computing hardware, location, usage, and training time, you can estimate how much CO2 the model produced. + +The math is pretty simple! βž• + +First, you take the *carbon intensity* of the electric grid used for the training -- this is how much CO2 is produced per kWh of electricity used. The carbon intensity depends on the location of the hardware and the [energy mix](https://electricitymap.org/) used at that location -- whether it's renewable energy like solar 🌞, wind 🌬️ and hydro πŸ’§, or non-renewable energy like coal ⚫ and natural gas πŸ’¨. The more renewable energy gets used for training, the less carbon-intensive it is! + +Then, you take the power consumption of the GPU during training using the `pynvml` library. + +Finally, you multiply the power consumption and carbon intensity by the training time of the model, and you have an estimate of the CO2 emission. + +Keep in mind that this isn't an exact number because other factors come into play -- like the energy used for data center heating and cooling -- which will increase carbon emissions. But this will give you a good idea of the scale of CO2 emissions that your model is producing! + +To add **Carbon Emissions** metadata to your models: + +1. If you are using **AutoTrain**, this is tracked for you πŸ”₯ +2. Otherwise, use a tracker like Code Carbon in your training code, then specify `co2_eq_emissions.emissions: 1.2345` in your model card metadata, where `1.2345` is the emissions value in **grams**. + +To learn more about the carbon footprint of Transformers, check out the [video](https://www.youtube.com/watch?v=ftWlj4FBHTg), part of the Hugging Face Course! diff --git a/docs/hub/models-cards.md b/docs/hub/models-cards.md new file mode 100644 index 000000000..10188aaa1 --- /dev/null +++ b/docs/hub/models-cards.md @@ -0,0 +1,108 @@ +--- +title: Model Cards +--- + +

# Model Cards

+ +## What are Model Cards? + +Model cards are Markdown files that accompany the models and provide handy information. They are essential for discoverability, reproducibility, and sharing! You can find a model card as the `README.md` file in any model repo. + +The model card should describe: +- the model +- its intended uses & potential limitations, including biases and ethical considerations as detailed in [Mitchell, 2018](https://arxiv.org/abs/1810.03993) +- the training params and experimental info (you can embed or link to an experiment tracking platform for reference) +- which datasets were used to train your model +- your evaluation results + +## Model card metadata + +A model repo will render its `README.md` as a model card. To control how the Hub displays the card, you can create a YAML section in the README file to define some metadata. Start by adding three `---` at the top, then include all of the relevant metadata, and close the section with another group of `---` like the example below: + +```yaml +--- +language: + - "List of ISO 639-1 code for your language" + - lang1 + - lang2 +thumbnail: "url to a thumbnail used in social sharing" +tags: +- tag1 +- tag2 +license: "any valid license identifier" +datasets: +- dataset1 +- dataset2 +metrics: +- metric1 +- metric2 +--- +``` + +You can also specify the supported frameworks in the model card metadata section: + +```yaml +tags: +- flair +``` + +Find more about our supported libraries [here](./models-the-hub#libraries), and see the detailed model card specification [here](https://github.com/huggingface/hub-docs/blame/main/modelcard.md). + +The metadata that you add to the model card enables certain interactions on the Hub. For example: +* The tags that you add to the metadata allow users to filter and discover models at https://huggingface.co/models. +* If you choose a license using the keywords listed in the right column of [this table](#list-of-license-identifiers), the license will be displayed on the model page. +* Adding datasets to the metadata will add a message reading `Datasets used to train:` to your model card and link the relevant datasets, if they're available on the Hub. + +Dataset, metric, and language identifiers are those listed on the [Datasets](https://huggingface.co/datasets), [Metrics](https://huggingface.co/metrics) and [Languages](https://huggingface.co/languages) pages and in the [`datasets`](https://github.com/huggingface/datasets) repository. + +You can even specify your **model's eval results** in a structured way, which will allow the Hub to parse, display, and even link them to Papers With Code leaderboards. See how to format this data [in the metadata spec](https://github.com/huggingface/hub-docs/blame/main/modelcard.md). + +Here is a partial example (omitting the eval results part): +```yaml +--- +language: +- ru +- en +tags: +- translation +license: apache-2.0 +datasets: +- wmt19 +metrics: +- bleu +- sacrebleu +--- +``` + +If a model includes valid eval results, they will be displayed like this: + +![/docs/assets/hub/eval-results.jpg](/docs/assets/hub/eval-results.jpg) + +### CO2 Emissions +The model card is also a great place to show information about the CO2 impact of your model. Visit our [guide on tracking and reporting CO2 emissions](./models-cards-co2) to learn more. + +## FAQ + +### How are model tags determined? + +Each model page lists all the model's tags in the page header, below the model name. 
These are primarily computed from the model card metadata, although some are added automatically, as described in [How is a model's type of inference API and widget determined?](./models-widgets). + +### Can I write LaTeX in my model card? + +Yes! We use the [KaTeX](https://katex.org/) math typesetting library to render math formulas server-side before parsing the Markdown. + +You have to use the following delimiters: +- `$$ ... $$` for display mode +- `\\( ... \\)` for inline mode (no space between the slashes and the parenthesis). + +Then you'll be able to write: + +$$ +\LaTeX +$$ + +$$ +\mathrm{MSE} = \left(\frac{1}{n}\right)\sum_{i=1}^{n}(y_{i} - x_{i})^{2} +$$ + +$$ E=mc^2 $$ \ No newline at end of file diff --git a/docs/hub/models-inference.md b/docs/hub/models-inference.md new file mode 100644 index 000000000..d350b876d --- /dev/null +++ b/docs/hub/models-inference.md @@ -0,0 +1,37 @@ +--- +title: Inference API docs +--- + +

# Inference API

+ +Please refer to [Accelerated Inference API Documentation](https://api-inference.huggingface.co/docs/python/html/index.html) for detailed information. + + +## What technology do you use to power the inference API? + +For πŸ€— Transformers models, we power the API through our [Pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) feature. + +On top of `Pipelines` and depending on the model type, we make several production optimizations like: +- compiling models to optimized intermediary representations (e.g. [ONNX](https://medium.com/microsoftazure/accelerate-your-nlp-pipelines-using-hugging-face-transformers-and-onnx-runtime-2443578f4333)), +- maintaining a Least Recently Used cache, ensuring that the most popular models are always loaded, +- scaling the underlying compute infrastructure on the fly depending on the load constraints. + +For models from [other libraries](/docs/hub/libraries), the API uses [Starlette](https://www.starlette.io) and runs in [Docker containers](https://github.com/huggingface/api-inference-community/tree/main/docker_images). Each library defines the implementation of [different pipelines](https://github.com/huggingface/api-inference-community/tree/main/docker_images/sentence_transformers/app/pipelines). + + +## How can I turn off the inference API for my model? + +Specify `inference: false` in your model card's metadata. + + +## Can I send large volumes of requests? Can I get accelerated APIs? + +If you are interested in accelerated inference, higher volumes of requests, or an SLA, please contact us at `api-enterprise at huggingface.co`. + +## How can I see my usage? + +You can head to the [Inference API dashboard](https://api-inference.huggingface.co/dashboard/). Learn more about it in the [Inference API documentation](https://api-inference.huggingface.co/docs/python/html/usage.html#api-usage-dashboard). + +## Is there programmatic access to the Inference API? + +Yes, the `huggingface_hub` library has a client wrapper documented [here](/docs/hub/how-to-inference). diff --git a/docs/hub/models-interacting.md b/docs/hub/models-interacting.md new file mode 100644 index 000000000..292c5fed0 --- /dev/null +++ b/docs/hub/models-interacting.md @@ -0,0 +1,128 @@ +--- +title: Interacting with Models on the Hub +--- + +

# Interacting with models on the Hub

+ +## Accessing models for local use + +Since all models on the Model Hub are Git repositories, you can clone the models locally by running: + +```bash +git lfs install +git clone https://huggingface.co/<model-id> +``` + +For detailed information on accessing the model, you can click on the "Use in Transformers" button on any model page. + +![Models can be used locally through the "Use in Transformers" button](../assets/hub/models-usage.png) + +If the model is compatible with πŸ€— Transformers, you'll even receive snippets to help you get started. + +![Snippets for using a model with the πŸ€— transformers library](../assets/hub/models-usage-modal.png) + +### Can I access models programmatically? + +You can use the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library to create, delete, update, and retrieve information from repos. You can also download files from repos or integrate them into your library! For example, you can quickly load a scikit-learn model with a few lines: + +```py +from huggingface_hub import hf_hub_url, cached_download +import joblib + +REPO_ID = "YOUR_REPO_ID" +FILENAME = "sklearn_model.joblib" + +model = joblib.load(cached_download( + hf_hub_url(REPO_ID, FILENAME) +)) +``` + +## Uploading models + +The first step is to create an account at [Hugging Face](https://huggingface.co/login). Models on the Hub are Git-based repositories, which give you versioning, branches, discoverability, and sharing features, integration with over a dozen libraries, and more! You have control over what you want to upload to your repository, which could include checkpoints, configs, and any other files. + +You can link repositories with an individual, such as [osanseviero/fashion_brands_patterns](https://huggingface.co/osanseviero/fashion_brands_patterns), or with an organization, such as [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum). Organizations can collect models related to a company, community, or library! If you choose an organization, the model will be featured on the organization’s page, and every member of the organization will have the ability to contribute to the repository. You can create a new organization [here](https://huggingface.co/organizations/new). + +There are several ways to upload models to the Hub, described below. + +### Using the web interface + +To create a brand new model repository, visit [huggingface.co/new](http://huggingface.co/new). Then follow these steps: + +1. In the "Files and versions" tab, select "Add File" and specify "Upload File": + +![/docs/assets/hub/add-file.png](/docs/assets/hub/add-file.png) + +2. From there, select a file from your computer to upload and leave a helpful commit message to know what you are uploading: + +![docs/assets/hub/commit-file.png](/docs/assets/hub/commit-file.png) + +3. Afterwards, click **Commit changes** to upload your model to the Hub! + +4. Inspect files and history + +You can check your repository with all the recently added files! + +![/docs/assets/hub/repo_with_files.png](/docs/assets/hub/repo_with_files.png) + +The UI allows you to explore the model files and commits and to see the diff introduced by each commit: + +![/docs/assets/hub/explore_history.gif](/docs/assets/hub/explore_history.gif) + +5. Add metadata + +You can add metadata to your model card. You can specify: +* the type of task this model is for, enabling widgets and the Inference API. +* the used library (`transformers`, `spaCy`, etc.) +* the language +* the dataset +* metrics +* license +* a lot more!
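For instance, a minimal metadata section for a hypothetical text-classification model could look like the following (the values here are illustrative, not prescriptive):

```yaml
---
language: en
license: mit
tags:
- text-classification
datasets:
- imdb
metrics:
- accuracy
---
```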
+ +Read more about model tags [here](/docs/hub/model-repos#model-card-metadata). + +6. Add TensorBoard traces + +Any repository that contains TensorBoard traces (filenames that contain `tfevents`) is categorized with the [`TensorBoard` tag](https://huggingface.co/models?filter=tensorboard). As a convention, we suggest that you save traces under the `runs/` subfolder. The "Training metrics" tab then makes it easy to review charts of the logged variables, like the loss or the accuracy. + +![Training metrics tab on a model's page, with TensorBoard](/docs/assets/hub/tensorboard.png) + +Models trained with πŸ€— Transformers will generate [TensorBoard traces](https://huggingface.co/transformers/main_classes/callback.html?highlight=tensorboard#transformers.integrations.TensorBoardCallback) by default if [`tensorboard`](https://pypi.org/project/tensorboard/) is installed. + + +### Using Git + +Since model repos are just Git repositories, you can use Git to push your model files to the Hub. Follow the guide on [Getting Started with Repositories](repositories-getting-started.md) to learn about using the `git` CLI to commit and push your models. + + +### Using the `huggingface_hub` client library + +The rich feature set in the `huggingface_hub` library allows you to manage repositories, including creating repos and uploading models to the Model Hub. Visit [the client library's documentation](https://huggingface.co/docs/huggingface_hub/index) to learn more. + + +## FAQ + +### How can I see what dataset was used to train the model? + +It's up to the person who uploaded the model to include the training information! You may find the information about the datasets that the model was trained on in the model card. If the datasets used for the model are on the Hub, the uploader may have included them in the model card's metadata. In that case, the datasets would be linked with a handy card on the right side of the model page: + +![Linked datasets for a model](../assets/hub/models-linked-datasets.png) + +### How can I see an example of the model in action? + +Models can have inference widgets that let you try out the model in the browser! Inference widgets are easy to configure, and there are many different options at your disposal. Visit the [Widgets documentation](models-widgets.md) to learn more. + +The Hugging Face Hub is also home to Spaces, which are interactive demos used to showcase models. If a model has any Spaces associated with it, you'll find them linked on the model page like so: + +![Linked spaces for a model](../assets/hub/models-linked-spaces.png) + +Spaces are a great way to show off a model you've made or explore new ways to use existing models! Visit the [Spaces documentation](TODO) to learn how to make your own. + +### How do I upload an update / new version of the model? + +Releasing an update to a model that you've already published can be done by pushing a new commit to your model's repo. To do this, go through the same process that you followed to upload your initial model. Your previous model versions will remain in the repository's commit history. + +### What if I have a different checkpoint of the model trained on a different dataset? + +By convention, each model repo should contain a single checkpoint trained on a particular dataset. You should upload any new checkpoints trained on different datasets to the Hub in a new model repo. You can link the models together by using a [tag in your model card's metadata](./modelcard) or by linking to them in the model cards. 
The [akiyamasho/AnimeBackgroundGAN-Shinkai](https://huggingface.co/akiyamasho/AnimeBackgroundGAN-Shinkai#other-pre-trained-model-versions) model, for example, references other checkpoints in the model card under *"Other pre-trained model versions"*. diff --git a/docs/hub/models-main.md b/docs/hub/models-main.md new file mode 100644 index 000000000..424140e43 --- /dev/null +++ b/docs/hub/models-main.md @@ -0,0 +1,18 @@ +--- +title: Models +--- + +

# Models

+ +The Hugging Face Hub hosts many models for a [variety of machine learning tasks](https://huggingface.co/tasks). Models are stored in repositories, so they benefit from [all the features](./repositories-main) possessed by every repo on the Hugging Face Hub. Additionally, model repos have attributes that make exploring and using models as easy as possible. These docs will take you through everything you'll need to know to find models on the Hub, upload your models, and make the most of everything the Model Hub offers! + +## Contents + +- [The Model Hub](./models-the-hub) +- [Model Cards](./models-cards) + - [CO2 emissions](./models-cards-co2) +- [Tasks](./models-tasks) +- [Interacting with models on the Hub](./models-interacting) + - [Integrating libraries with the Hub](./models-adding-libraries) +- [Widgets](./models-widgets) +- [Inference API](./models-inference) diff --git a/docs/hub/models-tasks.md b/docs/hub/models-tasks.md new file mode 100644 index 000000000..22282a964 --- /dev/null +++ b/docs/hub/models-tasks.md @@ -0,0 +1,88 @@ +--- +title: Tasks +--- + +# Tasks + +## What's a task? + +Tasks, or pipeline types, describe the "shape" of each model's API (inputs and outputs) and are used to determine which Inference API and widget we want to display for any given model. + +![/docs/assets/hub/tasks.png](/docs/assets/hub/tasks.png) + +This classification is relatively coarse-grained (you can always add more fine-grained task names in your model tags), so **you should rarely have to create a new task**. If you want to add support for a new task, this document explains the required steps. + +## Overview + +Having a new task integrated into the Hub means that: +* Users can search for all models of a given task. +* The Inference API supports the task. +* Users can try out models directly with the widget. πŸ† + +Note that you don't need to implement all the steps by yourself. Adding a new task is a community effort, and multiple people can contribute. πŸ§‘β€πŸ€β€πŸ§‘ + +To begin the process, open a new issue in the [huggingface_hub](https://github.com/huggingface/huggingface_hub/issues) repository. Please use the "Adding a new task" template. ⚠️Before doing any coding, it's suggested to go over this document. ⚠️ + +The first step is to upload a model for your proposed task. Once you have a model in the Hub for the new task, the next step is to enable it in the Inference API. There are three types of support that you can choose from: + +* πŸ€— using a `transformers` model +* 🐳 using a model from an [officially supported library](/docs/hub/libraries) +* πŸ–¨οΈ using a model with custom inference code. This experimental option has downsides, so we recommend using one of the other approaches. + +Finally, you can add a couple of UI elements, such as the task icon and the widget, that complete the integration in the Hub. πŸ“· + +Some steps are orthogonal; you don't need to do them in order. **You don't need the Inference API to add the icon.** This means that, even if there isn't full integration yet, users can still search for models of a given task. + +## Adding new tasks to the Hub + +### Using Hugging Face transformers library + +If your model is a `transformers`-based model, there is a 1:1 mapping between the Inference API task and a `pipeline` class. 
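As a rough sketch of what this mapping means in practice, the task name shown on the Hub is the same string you pass to `pipeline()` (the model id below is just an example):

```python
from transformers import pipeline

# "image-classification" is both the Hub task (pipeline_tag) and the
# identifier that resolves to ImageClassificationPipeline.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

predictions = classifier("path/to/image.png")  # assumes a local image file
print(predictions)
```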
Here are some example PRs from the `transformers` library: +* [Adding ImageClassificationPipeline](https://github.com/huggingface/transformers/pull/11598) +* [Adding AudioClassificationPipeline](https://github.com/huggingface/transformers/pull/13342) + +Once the pipeline is submitted and deployed, you should be able to use the Inference API for your model. + +### Using Community Inference API with a supported library + +The Hub also supports over 10 open-source libraries in the [Community Inference API](https://github.com/huggingface/api-inference-community). + +**Adding a new task is relatively straightforward and requires 2 PRs:** +* PR 1: Add the new task to the API [validation](https://github.com/huggingface/api-inference-community/blob/main/api_inference_community/validation.py). This code ensures that the inference input is valid for a given task. Some PR examples: + * [Add text-to-image](https://github.com/huggingface/huggingface_hub/commit/5f040a117cf2a44d704621012eb41c01b103cfca#diff-db8bbac95c077540d79900384cfd524d451e629275cbb5de7a31fc1cd5d6c189) + * [Add audio-classification](https://github.com/huggingface/huggingface_hub/commit/141e30588a2031d4d5798eaa2c1250d1d1b75905#diff-db8bbac95c077540d79900384cfd524d451e629275cbb5de7a31fc1cd5d6c189) + * [Add structured-data-classification](https://github.com/huggingface/huggingface_hub/commit/dbea604a45df163d3f0b4b1d897e4b0fb951c650#diff-db8bbac95c077540d79900384cfd524d451e629275cbb5de7a31fc1cd5d6c189) +* PR 2: Add the new task to a library docker image. You should also add a template to [`docker_images/common/app/pipelines`](https://github.com/huggingface/api-inference-community/tree/main/docker_images/common/app/pipelines) to facilitate integrating the task in other libraries. Here is an example PR: + * [Add text-classification to spaCy](https://github.com/huggingface/huggingface_hub/commit/6926fd9bec23cb963ce3f58ec53496083997f0fa#diff-3f1083a92ca0047b50f9ad2d04f0fe8dfaeee0e26ab71eb8835e365359a1d0dc) + +### Adding Community Inference API for a quick prototype + +**My model is not supported by any library. Am I doomed? 😱** + +No, you're not! The [generic Inference API](https://github.com/huggingface/api-inference-community/tree/main/docker_images/generic) is an experimental Docker image for quickly prototyping new tasks and introducing new libraries, which should allow you to have a new task in production with very little development from your side. + +How does it work from the user's point of view? Users create a copy of a [template](https://huggingface.co/templates) repo for their given task. Users then need to define their `requirements.txt` and fill `pipeline.py`. Note that this is intended for quick experimentation and prototyping instead of fast production use-cases. + + +### UI elements + +The Hub allows users to filter models by a given task. To do this, you need to add the task to several places. You'll also get to pick an icon for the task! + +1. Add the task type to `Types.ts` + +In [interfaces/Types.ts](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts), you need to do a couple of things + +* Add the type to `PipelineType`. Note that pipeline types are sorted into different categories (NLP, Audio, Computer Vision, and others). +* Specify the task color in `PIPELINE_COLOR`. +* Specify the display order in `PIPELINE_TAGS_DISPLAY_ORDER`. + +2. Choose an icon + +You can add an icon in the [lib/Icons](https://github.com/huggingface/hub-docs/tree/main/js/src/lib/Icons) directory. 
We usually choose carbon icons from https://icones.js.org/collection/carbon. + + +### Widget + +Once the task is in production, what could be more exciting than implementing some way for users to play directly with the models in their browser? 🀩 You can find all the widgets [here](https://huggingface-widgets.netlify.app/). + +If you would be interested in contributing with a widget, you can look at the [implementation](https://github.com/huggingface/hub-docs/tree/main/js/src/lib/components/InferenceWidget/widgets) of all the widgets. You can also find WIP documentation on implementing a widget in https://github.com/huggingface/hub-docs/tree/main/js. \ No newline at end of file diff --git a/docs/hub/models-the-hub.md b/docs/hub/models-the-hub.md new file mode 100644 index 000000000..74442fece --- /dev/null +++ b/docs/hub/models-the-hub.md @@ -0,0 +1,42 @@ +--- +title: The Model Hub +--- + +

# The Model Hub

+ +## What is the Model Hub? + +The Model Hub is where the members of the Hugging Face community can host all of their model checkpoints for simple storage, discovery, and sharing. Download pre-trained models with the [`huggingface_hub` client library](https://huggingface.co/docs/huggingface_hub/index) or with πŸ€— [`Transformers`](https://huggingface.co/docs/transformers/index) for fine-tuning and other usages. You can even leverage the [Inference API](./models-inference) to use models in production. + +You can refer to the following video for a guide on navigating the Model Hub: + + + +## Libraries + +Integrating the `huggingface_hub` library into your projects allows users to interact with the Hub directly from your library. The Hub supports many libraries, and we're working on expanding this support! We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward. + +The table below summarizes the supported libraries and their level of integration. Find all our supported libraries [here](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts)! + +| Library | Description | Inference API | Widgets | Download from Hub | Push to Hub | +|-----------------------|-------------------------------------------------------------------------------|---------------|-------:|-------------------|-------------| +| [πŸ€— Transformers](https://github.com/huggingface/transformers) | State-of-the-art Natural Language Processing for Pytorch, TensorFlow, and JAX | βœ… | βœ… | βœ… | βœ… | +| [Adapter Transformers](https://github.com/Adapter-Hub/adapter-transformers) | Extends πŸ€—Transformers with Adapters. | ❌ | ❌ | βœ… | βœ… | +| [AllenNLP](https://github.com/allenai/allennlp) | An open-source NLP research library, built on PyTorch. | βœ… | βœ… | βœ… | ❌ | +| [Asteroid](https://github.com/asteroid-team/asteroid) | Pytorch-based audio source separation toolkit | βœ… | βœ… | βœ… | ❌ | +| [docTR](https://github.com/mindee/doctr) | Models and datasets for OCR-related tasks in PyTorch & TensorFlow | βœ… | βœ… | βœ… | ❌ | +| [ESPnet](https://github.com/espnet/espnet) | End-to-end speech processing toolkit (e.g. TTS) | βœ… | βœ… | βœ… | ❌ | +| [Flair](https://github.com/flairNLP/flair) | Very simple framework for state-of-the-art NLP. | βœ… | βœ… | βœ… | ❌ | +| [Pyannote](https://github.com/pyannote/pyannote-audio) | Neural building blocks for speaker diarization. | ❌ | ❌ | βœ… | ❌ | +| [PyCTCDecode](https://github.com/kensho-technologies/pyctcdecode) | Language model supported CTC decoding for speech recognition | ❌ | ❌ | βœ… | ❌ | +| [Sentence Transformers](https://github.com/UKPLab/sentence-transformers) | Compute dense vector representations for sentences, paragraphs, and images. | βœ… | βœ… | βœ… | βœ… | +| [spaCy](https://github.com/explosion/spaCy) | Advanced Natural Language Processing in Python and Cython. | βœ… | βœ… | βœ… | βœ… | +| [Speechbrain](https://speechbrain.github.io/) | A PyTorch Powered Speech Toolkit. | βœ… | βœ… | βœ… | ❌ | +| [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS) | Real-time state-of-the-art speech synthesis architectures. | ❌ | ❌ | βœ… | ❌ | +| [Timm](https://github.com/rwightman/pytorch-image-models) | Collection of image models, scripts, pretrained weights, etc. 
| ❌ | ❌ | βœ… | ❌ | +| [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) | Set of reliable implementations of deep reinforcement learning algorithms in PyTorch | ❌ | ❌ | βœ… | βœ… | + + +### How can I add a new library to the Inference API? + +Read about it in [Adding a Library Guide](/docs/hub/adding-a-library). diff --git a/docs/hub/models-widgets.md b/docs/hub/models-widgets.md new file mode 100644 index 000000000..7e41ef961 --- /dev/null +++ b/docs/hub/models-widgets.md @@ -0,0 +1,116 @@ +--- +title: Model Widgets +--- + +

# Widgets

+ +## What's a widget? + +Many model repos have a widget that allows anyone to run inferences directly in the browser! + +Here are some examples: +* [Named Entity Recognition](https://huggingface.co/spacy/en_core_web_sm?text=My+name+is+Sarah+and+I+live+in+London) using [spaCy](https://spacy.io/). +* [Image Classification](https://huggingface.co/google/vit-base-patch16-224) using [πŸ€— Transformers](https://github.com/huggingface/transformers) +* [Text to Speech](https://huggingface.co/julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train) using [ESPnet](https://github.com/espnet/espnet). +* [Sentence Similarity](https://huggingface.co/osanseviero/full-sentence-distillroberta3) using [Sentence Transformers](https://github.com/UKPLab/sentence-transformers). + +You can try out all the widgets [here](https://huggingface-widgets.netlify.app/). + +## Creating a Widget + +A widget is automatically created for your model when you upload it to the Hub. To determine which pipeline and widget to display (`text-classification`, `token-classification`, `translation`, etc.), we analyze information in the repo, such as the metadata provided in the model card and configuration files. This information is mapped to a single `pipeline_tag`. We choose to expose **only one** widget per model for simplicity. + +For most use cases, we determine the model type from the tags. For example, if there is `tag: text-classification` in the [model card metadata](./models-cards), the inferred `pipeline_tag` will be `text-classification`. + +However, for πŸ€— Transformers, the model type is determined automatically from `config.json`. The architecture can determine the type: for example, `AutoModelForTokenClassification` corresponds to `token-classification`. If you're interested in this, you can see pseudo-code in [this gist](https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046). + +You can always manually override your pipeline type with `pipeline_tag: xxx` in your [model card metadata](./models-cards). + +### How can I control my model's widget example input? + +You can specify the widget input in the model card metadata section: + +```yaml +widget: +- text: "Jens Peter Hansen kommer fra Danmark" +``` + +You can provide more than one example input. In the examples dropdown menu of the widget, they will appear as `Example 1`, `Example 2`, etc. Optionally, you can supply `example_title` as well. + +![/docs/assets/hub/widget_input_examples.gif](/docs/assets/hub/widget_input_examples.gif) + +```yaml +widget: +- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy." + example_title: "Sentiment analysis" +- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..." + example_title: "Coreference resolution" +- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..." + example_title: "Logic puzzles" +- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..." + example_title: "Reading comprehension" +``` + +Moreover, you can specify non-text example inputs in the model card metadata. Refer [here](https://github.com/huggingface/hub-docs/blob/main/docs/hub/input-examples.md) for a complete list of sample input formats for all widget types. For vision & audio widget types, provide example inputs with `src` rather than `text`. 
+ +For example, allow users to choose from two sample audio files for automatic speech recognition tasks: + +```yaml +widget: +- src: https://example.org/somewhere/speech_samples/sample1.flac + example_title: Speech sample 1 +- src: https://example.org/somewhere/speech_samples/sample2.flac + example_title: Speech sample 2 +``` + +Note that you can also include example files in your model repository and reference them like this: + +```yaml +widget: +- src: https://huggingface.co/username/model_repo/resolve/main/sample1.flac + example_title: Custom Speech Sample 1 +``` + +We provide example inputs for some languages and most widget types in [the DefaultWidget.ts file](https://github.com/huggingface/hub-docs/blob/master/js/src/lib/interfaces/DefaultWidget.ts). If some examples are missing, we welcome PRs from the community to add them! + + +## What are all the possible task/widget types? + +You can find all the supported tasks [here](https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts). + +Here are some links to examples: + +- `text-classification`, for instance [`roberta-large-mnli`](https://huggingface.co/roberta-large-mnli) +- `token-classification`, for instance [`dbmdz/bert-large-cased-finetuned-conll03-english`](https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english) +- `question-answering`, for instance [`distilbert-base-uncased-distilled-squad`](https://huggingface.co/distilbert-base-uncased-distilled-squad) +- `translation`, for instance [`t5-base`](https://huggingface.co/t5-base) +- `summarization`, for instance [`facebook/bart-large-cnn`](https://huggingface.co/facebook/bart-large-cnn) +- `conversational`, for instance [`facebook/blenderbot-400M-distill`](https://huggingface.co/facebook/blenderbot-400M-distill) +- `text-generation`, for instance [`gpt2`](https://huggingface.co/gpt2) +- `fill-mask`, for instance [`distilroberta-base`](https://huggingface.co/distilroberta-base) +- `zero-shot-classification` (implemented on top of an NLI `text-classification` model), for instance [`facebook/bart-large-mnli`](https://huggingface.co/facebook/bart-large-mnli) +- `table-question-answering`, for instance [`google/tapas-base-finetuned-wtq`](https://huggingface.co/google/tapas-base-finetuned-wtq) +- `sentence-similarity`, for instance [`osanseviero/full-sentence-distillroberta2`](/osanseviero/full-sentence-distillroberta2) + +## How can I control my model's widget Inference API parameters? + +Generally, the Inference API for a model uses the default pipeline settings associated with each task. But if you'd like to change the pipeline's default settings and specify additional inference parameters, you can configure the parameters directly through the model card metadata. Refer [here](https://api-inference.huggingface.co/docs/python/html/detailed_parameters.html#) for some of the most commonly used parameters associated with each task. + +For example, if you want to specify an aggregation strategy for a NER task in the widget: + +```yaml +inference: + parameters: + aggregation_strategy: "none" +``` + +Or if you'd like to change the temperature for a summarization task in the widget: + +```yaml +inference: + parameters: + temperature: 0.7 +``` + +The Inference API allows you to send HTTP requests to models in the Hugging Face Hub, and it's 2x to 10x faster than the widgets! ⚑⚑ Learn more about it by reading the [Inference API documentation](./models-inference). 
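To give a feel for what such a request looks like, here is a minimal sketch using Python's `requests` library (substitute your own model id and API token):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-distilled-squad"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # your Hugging Face access token

# Payload format for a question-answering model
payload = {
    "inputs": {
        "question": "Where does Sarah live?",
        "context": "My name is Sarah and I live in London.",
    }
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # e.g. {"answer": "London", "score": 0.99, ...}
```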
\ No newline at end of file diff --git a/docs/hub/repositories-best-practices.md b/docs/hub/repositories-best-practices.md index d712c5e6b..aa53db1b3 100644 --- a/docs/hub/repositories-best-practices.md +++ b/docs/hub/repositories-best-practices.md @@ -22,4 +22,52 @@ Can use content from https://github.com/huggingface/huggingface_hub/issues/769 a You are able to add a license to any repo that you create on the Hugging Face Hub to let other users know about the permissions that you want to attribute to your code. The license can also be added to your repository's `README.md` file, known as a *card* on the Hub, in the card's metadata section. Remember to seek out and respect a project's license if you're considering using their code. -A [**full list of the available licenses**](TODO) is available in these docs. +A full list of the available licenses is available here: + +Fullname | License identifier (to use in model card) +--- | --- +Academic Free License v3.0 | `afl-3.0` +Apache license 2.0 | `apache-2.0` +Artistic license 2.0 | `artistic-2.0` +Boost Software License 1.0 | `bsl-1.0` +BSD 2-clause "Simplified" license | `bsd-2-clause` +BSD 3-clause "New" or "Revised" license | `bsd-3-clause` +BSD 3-clause Clear license | `bsd-3-clause-clear` +Creative Commons license family | `cc` +Creative Commons Zero v1.0 Universal | `cc0-1.0` +Creative Commons Attribution 3.0 | `cc-by-3.0` +Creative Commons Attribution 4.0 | `cc-by-4.0` +Creative Commons Attribution Share Alike 3.0 | `cc-by-sa-3.0` +Creative Commons Attribution Share Alike 4.0 | `cc-by-sa-4.0` +Creative Commons Attribution Non Commercial 3.0 |`cc-by-nc-3.0` +Creative Commons Attribution Non Commercial 4.0 |`cc-by-nc-4.0` +Creative Commons Attribution Non Commercial Share Alike 3.0| `cc-by-nc-sa-3.0` +Creative Commons Attribution Non Commercial Share Alike 4.0| `cc-by-nc-sa-4.0` +Do What The F*ck You Want To Public License | `wtfpl` +Educational Community License v2.0 | `ecl-2.0` +Eclipse Public License 1.0 | `epl-1.0` +Eclipse Public License 2.0 | `epl-2.0` +European Union Public License 1.1 | `eupl-1.1` +GNU Affero General Public License v3.0 | `agpl-3.0` +GNU General Public License family | `gpl` +GNU General Public License v2.0 | `gpl-2.0` +GNU General Public License v3.0 | `gpl-3.0` +GNU Lesser General Public License family | `lgpl` +GNU Lesser General Public License v2.1 | `lgpl-2.1` +GNU Lesser General Public License v3.0 | `lgpl-3.0` +ISC | `isc` +LaTeX Project Public License v1.3c | `lppl-1.3c` +Microsoft Public License | `ms-pl` +MIT | `mit` +Mozilla Public License 2.0 | `mpl-2.0` +Open Software License 3.0 | `osl-3.0` +PostgreSQL License | `postgresql` +SIL Open Font License 1.1 | `ofl-1.1` +University of Illinois/NCSA Open Source License | `ncsa` +The Unlicense | `unlicense` +zLib License | `zlib` +Open Data Commons Public Domain Dedication and License | `pddl` +Lesser General Public License For Linguistic Resources | `lgpl-lr` +Other | `other` + +In case of `license: other` please add the license's text to a `LICENSE` file inside your repo (or contact us to add the license you use to this list). diff --git a/docs/hub/repositories-getting-started.md b/docs/hub/repositories-getting-started.md index abc4afffe..60a43a389 100644 --- a/docs/hub/repositories-getting-started.md +++ b/docs/hub/repositories-getting-started.md @@ -33,13 +33,13 @@ Using the Hub's web interface you can easily create repositories, add files (eve 3. Next, enter your model’s name. This will also be the name of the repository. 
Finally, you can specify whether you want your model to be public or private. -You can leave the *License* field blank for now. To learn about licenses, visit the **Licenses** (TODO: LINK TO LICENSES) section of this document. +You can leave the *License* field blank for now. To learn about licenses, visit the [**Licenses**](repositories-best-practices#licenses) section of this documentation. After creating your model repository, you should see a page like this: ![/docs/assets/hub/empty_repo.png](/docs/assets/hub/empty_repo.png) -Note that the Hub prompts you to create a *Model Card*, which you can learn about in the **Model Cards documentation** (TODO: LINK). Including a Model Card in your model repo is best practice, but since we're only making a test repo at the moment we can skip this. +Note that the Hub prompts you to create a *Model Card*, which you can learn about in the [**Model Cards documentation**](./models-cards). Including a Model Card in your model repo is best practice, but since we're only making a test repo at the moment we can skip this. ## Cloning repositories