Add Model Catalog API to Model Registry #702
@andreyvelich @thesuperzapper we're working on a design for a simple model catalog API wrapper for local and remote catalogs like Hugging Face. I'll add more details with a diagram, etc. to this feature request tomorrow. We're hoping it's a valuable addition to Model Registry and makes the entry point into Kubeflow Model Registry easier and the user experience better, especially for foundation models and LLMs.
A proposed high-level architecture would look like the one below. A ConfigMap can be used to configure a small amount of high-level metadata about a collection of catalog sources, such as each catalog's name, type, description, etc. Every catalog source may also include a secretName if credentials are required to connect to the catalog; e.g., a Hugging Face source could store its API secret/token in a Kubernetes Secret. Model Registry would implement the logic to handle different catalog types and process the catalog source information to fetch the catalog or create a remote connection to it. When Catalog API operations to list CatalogModels and CatalogModelRevisions are received, Model Registry will query the catalog source implementation logic to fetch the requested information.
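As a rough sketch of how such ConfigMap entries could be loaded, assuming Go and hypothetical type/field names (`CatalogSourceConfig`, `LoadSources`, etc. are illustrative, not the actual implementation):

```go
// Hypothetical sketch: model the ConfigMap entries for catalog sources and
// load them from the mounted file. Names and fields are assumptions.
package catalog

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// CatalogSourceConfig mirrors one catalog source entry: high-level metadata
// plus an optional reference to a Kubernetes Secret holding credentials.
type CatalogSourceConfig struct {
	Name        string `yaml:"name"`
	Type        string `yaml:"type"` // e.g. "yaml", "huggingface"
	Description string `yaml:"description,omitempty"`
	SecretName  string `yaml:"secretName,omitempty"` // e.g. Secret with a Hugging Face token
}

// SourcesFile is the top-level document stored in the ConfigMap.
type SourcesFile struct {
	Catalogs []CatalogSourceConfig `yaml:"catalogs"`
}

// LoadSources reads the mounted ConfigMap file and returns the configured sources.
func LoadSources(path string) ([]CatalogSourceConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading catalog sources: %w", err)
	}
	var f SourcesFile
	if err := yaml.Unmarshal(data, &f); err != nil {
		return nil, fmt.Errorf("parsing catalog sources: %w", err)
	}
	return f.Catalogs, nil
}
```

Model Registry would then switch on `Type` to pick the right catalog implementation and, when `SecretName` is set, read the credentials from that Secret before connecting.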
Based on some feedback, we need some additional information in the catalog. The updated schema and example are in this gist: https://gist.github.com/pboyd/278c7b1e9ce0292b82cb871fa7d2103b
Is there an OpenAPI definition for this Catalog model, or is it going to use the existing Model Registry API?
Perhaps it makes sense to have the Catalog API exist as a separate process/deployable/container. To my understanding, the governance aspect of the model-registry is critical in terms of stability/reliability. A consumer of the model-registry depends on the API being available; otherwise an automated training run might not be registered if the registry is down (representing a significant loss in training time and compute cost). Therefore, I think it makes sense to isolate the load/traffic of catalog operations from the core model-registry.

I threw a quick diagram together to help illustrate this architecture: a separate deployment would keep catalog traffic away from the core registry. The additional benefit is to be able to allow the model-registry itself to become a catalog source.

Let me know your thoughts. I don't have strong feelings on this point, so if it's a big project to separate the component out, just say so! I figured it's easier to do now rather than later.
This proposal doesn't take that option away. If the model registry container image supports specifying which service(s) need to be started, either as a command, a command-line option, etc., we can configure it to run as a separate deployment and service. If we want to produce a distinct binary and container image, that's a slightly orthogonal conversation. I don't have a strong preference one way or another on how MR and MC services are deployed; it depends on the end user's scaling needs. That's why the initial PR started by keeping options open with a single binary and command-line options.
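For illustration, a single binary could select which servers to start with a command-line flag roughly like this (the flag name and function names are made up for the sketch, not the actual model-registry CLI):

```go
// Illustrative only: the actual model-registry command line may differ.
package main

import (
	"flag"
	"log"
	"strings"
)

func startRegistryServer() { log.Println("model registry API would listen here") }
func startCatalogServer()  { log.Println("model catalog API would listen here") }

func main() {
	services := flag.String("services", "registry,catalog",
		"comma-separated services to start: registry, catalog")
	flag.Parse()

	for _, svc := range strings.Split(*services, ",") {
		switch strings.TrimSpace(svc) {
		case "registry":
			go startRegistryServer()
		case "catalog":
			go startCatalogServer()
		default:
			log.Fatalf("unknown service %q", svc)
		}
	}
	select {} // real code would wait on shutdown signals instead
}
```

With something like that, one Deployment could run `-services=registry` and another `-services=catalog`, giving the isolation discussed above without requiring a second image.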
If we follow the high-level architecture in the proposal, we could simply create a separate model registry catalog source. That way we avoid polluting the MC data model with MR details and vice versa. I can also imagine that there could be filtering requirements to limit which models such a source exposes. wdyt?
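One way to keep the data models separate would be a small source interface that every catalog backend (YAML file, Hugging Face, or a model-registry-backed source) implements; again, these names are only a sketch, not the real API:

```go
// Hypothetical catalog source abstraction; method and type names are assumptions.
package catalog

import "context"

// CatalogModel is the catalog-side view of a model. It intentionally does not
// embed Model Registry types, so the MC and MR data models stay independent.
type CatalogModel struct {
	Name        string
	Description string
	Tags        []string
}

// ModelCatalogSource is implemented by each backing catalog. A model-registry
// backed source would apply its own RBAC and filtering inside these methods.
type ModelCatalogSource interface {
	ListModels(ctx context.Context, query string) ([]CatalogModel, error)
	GetModel(ctx context.Context, name string) (*CatalogModel, error)
}
```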
We should be careful in displaying the
Yes, having a separate model registry catalog source would allow us to keep RBAC concerns separate there, making sure credentials are forwarded and any filtering is done appropriately.
Is your feature request related to a problem? Please describe.
Several public ML and LLM model catalogs such as Hugging Face are now available with easily accessible open-source models.
At present, Kubeflow Model Registry has an API for registering and managing locally trained and published Registered Models. However, users have to use a variety of different websites, UIs, or APIs to browse and discover foundation models in various catalogs, and then manually register them for deployment.
Describe the solution you'd like
There is a need for a uniform and simple way to access various Model Catalogs hosting foundation models to allow users to easily discover and register models in a Model Registry for local training, enhancement, and serving.
The implementation could start simple by allowing users to create a curated model catalog source backed by a YAML file. The YAML file could contain a list of high-level foundation model metadata, some README text, and information about catalog model versions (see the sketch below).
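To make that concrete, such a curated YAML file could deserialize into Go types along these lines (field names are assumptions, not a published schema):

```go
// Sketch of Go types a curated YAML catalog file could unmarshal into.
package catalog

// CatalogModelEntry holds high-level metadata for one foundation model.
type CatalogModelEntry struct {
	Name        string                `yaml:"name"`
	Provider    string                `yaml:"provider,omitempty"`
	Description string                `yaml:"description,omitempty"`
	Readme      string                `yaml:"readme,omitempty"` // model-card style text
	Tags        []string              `yaml:"tags,omitempty"`
	Versions    []CatalogModelVersion `yaml:"versions,omitempty"`
}

// CatalogModelVersion describes one published version of the model.
type CatalogModelVersion struct {
	Version string `yaml:"version"`
	URI     string `yaml:"uri"` // where the model artifacts can be pulled from
}
```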
In the future, other catalog source implementations can be created to allow browsing Hugging Face, OpenAPI, etc.
Describe alternatives you've considered
As an alternative, a common UI could be built with ad hoc client code to access different catalogs, browse them, and register models into a Kubeflow Model Registry. However, having a consolidated, simple common backend would make developing such a catalog-browsing UI simpler in the future.
Additional context
As an example, a simple API that exposes model information such as that provided in a Hugging Face model card, and also supports simple catalog model search by name, tags, etc., would be incredibly useful for Kubeflow users.
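For example, a search call from a client might look roughly like this (the endpoint path and query parameters are purely assumed here, not an existing Model Registry endpoint):

```go
// Hypothetical client call against an assumed catalog search endpoint.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("name", "llama")           // assumed: substring match on model name
	q.Set("tags", "text-generation") // assumed: filter by tag

	// Assumed service address and path for the catalog API.
	endpoint := "http://model-catalog:8080/api/model_catalog/v1alpha1/models?" + q.Encode()
	resp, err := http.Get(endpoint)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```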