Commit b5d1619

Refresh README.md (mlflow#13878)
Signed-off-by: B-Step62 <[email protected]>
1 parent b527225 commit b5d1619

14 files changed: +206 −249 lines

.github/workflows/cross-version-testing.md

+1 −1

```diff
@@ -161,7 +161,7 @@ time
 ## When do we run cross version tests?
 
 1. Daily at 7:00 UTC using a cron scheduler.
-   [README on the repository root](../../README.rst) has a badge ([![badge-img][]][badge-target]) that indicates the status of the most recent cron run.
+   [README on the repository root](../../README.md) has a badge ([![badge-img][]][badge-target]) that indicates the status of the most recent cron run.
 2. When a PR that affects the ML integrations is created. Note we only run tests relevant to
    the affected ML integrations. For example, a PR that affects files in `mlflow/sklearn` triggers
    cross version tests for `sklearn`.
```

.pre-commit-config.yaml

-8
```diff
@@ -55,14 +55,6 @@ repos:
         stages: [pre-commit]
         require_serial: true
 
-      - id: rstcheck
-        name: rstcheck
-        entry: rstcheck
-        language: system
-        files: README.rst
-        stages: [pre-commit]
-        require_serial: true
-
       - id: must-have-signoff
         name: must-have-signoff
         entry: 'grep "Signed-off-by:"'
```

README.md

+171
# MLflow: A Machine Learning Lifecycle Platform

[![Latest Docs](https://img.shields.io/badge/docs-latest-success.svg?style=for-the-badge)](https://mlflow.org/docs/latest/index.html)
[![Apache 2 License](https://img.shields.io/badge/license-Apache%202-brightgreen.svg?style=for-the-badge&logo=apache)](https://github.com/mlflow/mlflow/blob/master/LICENSE.txt)
[![Total Downloads](https://img.shields.io/pypi/dw/mlflow?style=for-the-badge&logo=pypi&logoColor=white)](https://pepy.tech/project/mlflow)
[![Slack](https://img.shields.io/badge/[email protected]?logo=slack&logoColor=white&labelColor=3F0E40&style=for-the-badge)](https://mlflow.org/community/#slack)
[![Twitter](https://img.shields.io/twitter/follow/MLflow?style=for-the-badge&labelColor=00ACEE&logo=twitter&logoColor=white)](https://twitter.com/MLflow)

MLflow is an open-source platform, purpose-built to assist machine learning practitioners and teams in handling the complexities of the machine learning process. MLflow focuses on the full lifecycle of machine learning projects, ensuring that each phase is manageable, traceable, and reproducible.
---

The core components of MLflow are:

- [Experiment Tracking](https://mlflow.org/docs/latest/tracking.html) 📝: A set of APIs to log models, params, and results in ML experiments and compare them using an interactive UI.
- [Model Packaging](https://mlflow.org/docs/latest/models.html) 📦: A standard format for packaging a model and its metadata, such as dependency versions, ensuring reliable deployment and strong reproducibility.
- [Model Registry](https://mlflow.org/docs/latest/model-registry.html) 💾: A centralized model store, set of APIs, and UI, to collaboratively manage the full lifecycle of MLflow Models.
- [Serving](https://mlflow.org/docs/latest/deployment/index.html) 🚀: Tools for seamless model deployment to batch and real-time scoring on platforms like Docker, Kubernetes, Azure ML, and AWS SageMaker.
- [Evaluation](https://mlflow.org/docs/latest/model-evaluation/index.html) 📊: A suite of automated model evaluation tools, seamlessly integrated with experiment tracking to record model performance and visually compare results across multiple models.
- [Observability](https://mlflow.org/docs/latest/llms/tracing/index.html) 🔍: Tracing integrations with various GenAI libraries and a Python SDK for manual instrumentation, offering a smoother debugging experience and supporting online monitoring.

<img src="https://mlflow.org/img/hero.png" alt="MLflow Hero" width=100%>
## Installation

To install the MLflow Python package, run the following command:

```bash
pip install mlflow
```

Alternatively, you can install MLflow from different package hosting platforms:

|               |                                                                                                                                                                                                                                                                                                                                               |
| ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| PyPI          | [![PyPI - mlflow](https://img.shields.io/pypi/v/mlflow.svg?style=for-the-badge&logo=pypi&logoColor=white&label=mlflow)](https://pypi.org/project/mlflow/) [![PyPI - mlflow-skinny](https://img.shields.io/pypi/v/mlflow-skinny.svg?style=for-the-badge&logo=pypi&logoColor=white&label=mlflow-skinny)](https://pypi.org/project/mlflow-skinny/) |
| conda-forge   | [![Conda - mlflow](https://img.shields.io/conda/vn/conda-forge/mlflow.svg?style=for-the-badge&logo=anaconda&label=mlflow)](https://anaconda.org/conda-forge/mlflow) [![Conda - mlflow-skinny](https://img.shields.io/conda/vn/conda-forge/mlflow-skinny.svg?style=for-the-badge&logo=anaconda&label=mlflow-skinny)](https://anaconda.org/conda-forge/mlflow-skinny) |
| CRAN          | [![CRAN - mlflow](https://img.shields.io/cran/v/mlflow.svg?style=for-the-badge&logo=r&label=mlflow)](https://cran.r-project.org/package=mlflow) |
| Maven Central | [![Maven Central - mlflow-client](https://img.shields.io/maven-central/v/org.mlflow/mlflow-client.svg?style=for-the-badge&logo=apache-maven&label=mlflow-client)](https://mvnrepository.com/artifact/org.mlflow/mlflow-client) [![Maven Central - mlflow-parent](https://img.shields.io/maven-central/v/org.mlflow/mlflow-parent.svg?style=for-the-badge&logo=apache-maven&label=mlflow-parent)](https://mvnrepository.com/artifact/org.mlflow/mlflow-parent) [![Maven Central - mlflow-scoring](https://img.shields.io/maven-central/v/org.mlflow/mlflow-scoring.svg?style=for-the-badge&logo=apache-maven&label=mlflow-scoring)](https://mvnrepository.com/artifact/org.mlflow/mlflow-scoring) [![Maven Central - mlflow-spark](https://img.shields.io/maven-central/v/org.mlflow/mlflow-spark.svg?style=for-the-badge&logo=apache-maven&label=mlflow-spark)](https://mvnrepository.com/artifact/org.mlflow/mlflow-spark) |
## Documentation 📘

Official documentation for MLflow can be found [here](https://mlflow.org/docs/latest/index.html).

## Usage

### Experiment Tracking ([Doc](https://mlflow.org/docs/latest/tracking.html))

The following example trains a simple regression model with scikit-learn, while enabling MLflow's [autologging](https://mlflow.org/docs/latest/tracking/autolog.html) feature for experiment tracking.
```python
import mlflow

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Enable MLflow's automatic experiment tracking for scikit-learn
mlflow.sklearn.autolog()

# Load the training dataset
db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
# MLflow triggers logging automatically upon model fitting
rf.fit(X_train, y_train)
```

Once the above code finishes, run the following command in a separate terminal and access the MLflow UI via the printed URL. An MLflow **Run** should be automatically created, which tracks the training dataset, hyperparameters, performance metrics, the trained model, dependencies, and more.

```bash
mlflow ui
```
75+
76+
### Serving Models ([Doc](https://mlflow.org/docs/latest/deployment/index.html))
77+
78+
You can deploy the logged model to a local inference server by a one-line command using the MLflow CLI. Visit the documentation for how to deploy models to other hosting platforms.
79+
80+
```bash
81+
mlflow models serve --model-uri runs:/<run-id>/model
82+
```
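Once the server is up, you can send a scoring request to its `/invocations` endpoint. The sketch below uses only the Python standard library and assumes the server's default local address (`http://127.0.0.1:5000`) and a model that takes ten numeric features, like the regression model from the tracking example above; the `f0`..`f9` column names are hypothetical placeholders.

```python
import json
from urllib import error, request

# Hypothetical payload: one row of ten numeric features, in the
# "dataframe_split" JSON format accepted by the inference server.
payload = json.dumps(
    {
        "dataframe_split": {
            "columns": [f"f{i}" for i in range(10)],
            "data": [[0.01] * 10],
        }
    }
).encode()

req = request.Request(
    "http://127.0.0.1:5000/invocations",  # default `mlflow models serve` address
    data=payload,
    headers={"Content-Type": "application/json"},
)
try:
    with request.urlopen(req, timeout=5) as resp:
        # The response contains the model's predictions for each input row
        print(json.loads(resp.read()))
except error.URLError:
    print("Inference server is not running; start it with `mlflow models serve`.")
```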
### Evaluating Models ([Doc](https://mlflow.org/docs/latest/model-evaluation/index.html))

The following example runs automatic evaluation for question-answering tasks with several built-in metrics.

```python
import mlflow
import pandas as pd

# Evaluation set contains (1) input question (2) model outputs (3) ground truth
df = pd.DataFrame(
    {
        "inputs": ["What is MLflow?", "What is Spark?"],
        "outputs": [
            "MLflow is an innovative fully self-driving airship powered by AI.",
            "Sparks is an American pop and rock duo formed in Los Angeles.",
        ],
        "ground_truth": [
            "MLflow is an open-source platform for managing the end-to-end machine learning (ML) "
            "lifecycle.",
            "Apache Spark is an open-source, distributed computing system designed for big data "
            "processing and analytics.",
        ],
    }
)
eval_dataset = mlflow.data.from_pandas(
    df, predictions="outputs", targets="ground_truth"
)

# Start an MLflow Run to record the evaluation results to
with mlflow.start_run(run_name="evaluate_qa"):
    # Run automatic evaluation with a set of built-in metrics for question-answering models
    results = mlflow.evaluate(
        data=eval_dataset,
        model_type="question-answering",
    )

print(results.tables["eval_results_table"])
```
### Observability ([Doc](https://mlflow.org/docs/latest/llms/tracing/index.html))

MLflow Tracing provides LLM observability for various GenAI libraries such as OpenAI, LangChain, LlamaIndex, DSPy, AutoGen, and more. To enable auto-tracing, call `mlflow.xyz.autolog()` before running your models. Refer to the documentation for customization and manual instrumentation.

```python
import mlflow
from openai import OpenAI

# Enable tracing for OpenAI
mlflow.openai.autolog()

# Query OpenAI LLM normally
response = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi!"}],
    temperature=0.1,
)
```

Then navigate to the "Traces" tab in the MLflow UI to find the trace recorded for the OpenAI query.
143+
144+
## Community
145+
146+
- For help or questions about MLflow usage (e.g. "how do I do X?") visit the [docs](https://mlflow.org/docs/latest/index.html)
147+
or [Stack Overflow](https://stackoverflow.com/questions/tagged/mlflow).
148+
- Alternatively, you can ask the question to our AI-powered chat bot. Visit the doc website and click on the **"Ask AI"** button at the right bottom to start chatting with the bot.
149+
- To report a bug, file a documentation issue, or submit a feature request, please [open a GitHub issue](https://github.com/mlflow/mlflow/issues/new/choose).
150+
- For release announcements and other discussions, please subscribe to our mailing list ([email protected])
151+
or join us on [Slack](https://mlflow.org/slack).
152+
153+
## Contributing
154+
155+
We happily welcome contributions to MLflow! We are also seeking contributions to items on the
156+
[MLflow Roadmap](https://github.com/mlflow/mlflow/milestone/3). Please see our
157+
[contribution guide](CONTRIBUTING.md) to learn more about contributing to MLflow.
158+
159+
## Core Members
160+
161+
MLflow is currently maintained by the following core members with significant contributions from hundreds of exceptionally talented community members.
- [Ben Wilson](https://github.com/BenWilson2)
- [Corey Zumar](https://github.com/dbczumar)
- [Daniel Lok](https://github.com/daniellok-db)
- [Gabriel Fu](https://github.com/gabrielfu)
- [Harutaka Kawamura](https://github.com/harupy)
- [Serena Ruan](https://github.com/serena-ruan)
- [Weichen Xu](https://github.com/WeichenXu123)
- [Yuki Watanabe](https://github.com/B-Step62)
- [Tomu Hirata](https://github.com/TomeHirata)
