# Multi-Cluster Inference Gateways

Author(s): @robscott, @bexxmodd

## Proposal Status

***Draft***

## Summary

**@nirrozenbaum (Contributor), Aug 17, 2025:**

I've read the proposal. Overall it looks very nice, and at a very high level it could work (not getting into the details). I think the main part that is still missing here is the motivation. The only motivation mentioned in this doc is that the cluster may run out of resources.

I can share that this idea was proposed multiple times internally at IBM (well before GIE), but the answer was always the same: the cluster can be scaled with more resources, and the complexity of spreading across multiple clusters isn't worth it when looking at the tradeoff.

I would try to focus on this point: I think you need to find at least one use case or problem that cannot be solved by scaling the single cluster with more resources.

I think there is no doubt that this proposal adds complexity to GIE, and there should be a real requirement or real use case for us to do that.

**Contributor Author:**

I agree with your perspective that adding any complexity should be justified; usually I'm the one who argues against it. However, in this case there's a hard limit to how many resources can be scaled up vertically, so in a cloud environment scaling becomes possible only by adding new regions and scaling horizontally. The expectation for the GWs is that this means adding new clusters. That's why there's strong support for MC inference GWs from other vendors like Microsoft, Red Hat, Solo, etc.

**Contributor:**

> there's a hard limit to how much resources can be scaled up vertically

Have you hit that limit in GCP? It would be great to add that information as background to the proposal.

**Comment:**

Customers exhausting allocated GPU availability within a given region is a very real challenge across multiple cloud vendors currently.

**Contributor:**

> Customers exhausting allocated GPU availability within a given region is a very real challenge across multiple cloud vendors currently.

I wasn't arguing otherwise :).

I was trying to stress a point: when GPU availability within a given region is exhausted, can a cloud vendor add more resources to that region? Theoretically that's possible and solves the problem without the multi-cluster complexity. But scaling also has its limits; we can scale up to a certain point. The question was whether we know what that limit is, and whether we can document it, in order to understand the conditions under which multi-cluster becomes a better solution than scaling the single cluster.

Adding this kind of information can strengthen the motivation section and help in understanding whether we should invest in this use case or not.

**Contributor Author:**

Yeah, I agree we should add information about scalability to strengthen the motivation. I'll do it in a general sense, as the specific numbers will vary from provider to provider.

Currently, GKE Gateway is limited to 1,500 pods per regional cluster and 500 pods per zonal cluster.

**Contributor:**

I think this proposal would benefit from something that we've done a decent bit in Gateway API: an initial scoping doc that contains definitions, responsibilities, and use cases that are explicitly out of scope. I want to understand why the inference extension should solve this vs. the gateway. What alternatives exist? Can we afford to assume pod-to-pod connectivity across all clusters? What are the exact boundaries? I think the what and why need to be decided before we go too deep on the how.


Inference Gateways aim to provide efficient routing to LLM workloads running in Kubernetes. In practice, an Inference Gateway is a Gateway that conforms to the [Gateway API Inference Extension](https://gateway-api-inference-extension.sigs.k8s.io/concepts/conformance/). This Gateway supports a new type of backend - InferencePool. When routing to an [InferencePool](https://gateway-api-inference-extension.sigs.k8s.io/api-types/inferencepool/), the Gateway calls out to an “Endpoint Picker” referenced by the InferencePool to get instructions on which specific endpoint within the pool it should route the request to.

![Inference Architecture](images/gw-epp-ip.png)
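
A very rough single-cluster example, for reference (names are illustrative, and the InferencePool fields follow the v1alpha2 shape, which may differ in later API versions):

```yaml
# An HTTPRoute whose backend is an InferencePool rather than a Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
  - name: inference-gateway
  rules:
  - backendRefs:
    - group: inference.networking.x-k8s.io
      kind: InferencePool
      name: vllm-llama3-8b
---
# The pool selects the model server Pods and references the Endpoint Picker
# (EPP) that the Gateway consults for per-request endpoint selection.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b
spec:
  targetPortNumber: 8000
  selector:
    app: vllm-llama3-8b
  extensionRef:
    name: vllm-llama3-8b-epp
```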

### Why Multi-Cluster?

Until now, Inference Gateways have been focused exclusively on routing to a single cluster. Unfortunately, the resources needed to run LLM workloads continue to be scarce, and the desired capacity is rarely available within a single cluster. To address this, we propose expanding InferencePool to support multi-cluster routing.

**Contributor:**

Is a multi-cluster inference pool a gateway "thing" or a gateway inference extension "thing" (e.g., the EPP is responsible for delegating to a remote)?
I think the expectation is that gateways would support this directly?

**@elevran (Contributor), Aug 18, 2025:**

What are the benefits of using a multi-cluster inference pool vs. a front-end routing layer that directs to the relevant gateways? Is it expected to provide a better inference experience, a better UX, ...?

The introduction of multi-cluster (IP) routing is non-trivial. Adding Submariner (or some other overlay) could make for a complex system with non-trivial cost and failure modes. If we're leaning towards NOT routing directly to remote endpoints, the use of gateway proxies creates a more robust and scalable system.

**Contributor Author:**

> Is a multi-cluster inference pool a gateway "thing" or a gateway inference extension "thing" (e.g., the EPP is responsible for delegating to a remote)? I think the expectation is that gateways would support this directly?

MC inference is part of the inference extension rather than the gateway, just as InferencePool is. Where is the expectation for it coming from? Is there a public thread or discussion?

To address your second question in short: the idea of extending the Inference Extension to support MC is motivated by the desire to make it as simple as possible for users. We don't want users manually configuring local Gateways.

**Contributor:**

@bexxmodd Sorry for not being clear in phrasing the questions. Hope the below makes it clearer.

1. I understand that multi-cluster inferencing is part of the IGW APIs and not the GW API. What I meant to ask was: in your view, which component should implement the handling of imported remote inference pools: (1) an EPP, (2) the gateway implementation (e.g., by configuring Envoy), or (3) is it left implementation-dependent? Regarding the use of "expectation": it was meant to confirm my reading of the design (i.e., that the gateway implementations would take on the role of handling the actual remote routing / traffic paths, and that it is not a feature of the EPP). There is no public thread or discussion; apologies for the confusion.
2. There are multiple ways to solve this without having users manually configure gateways. For example: using GitOps to replicate the InferencePool along with an L7 LB to select between clusters; a controller that configures Istio egress and ingress proxies; etc. The main point I'm trying to convey is that multi-cluster flat IP networks (i.e., all pods across all clusters are directly routable from all clusters) can be a complex and fragile solution in many cloud and on-premise deployments. Relying on the installation of a multi-cluster overlay (such as Submariner) makes the proposed multi-cluster inferencing solution depend on a third-party tool to be viable, and I don't think that would make users' lives easier.

**Comment:**

> I think the expectation is that gateways would support this directly?

@bexxmodd I think I have the same confusion as @elevran. I understand that this proposal is about the high-level spec for how we add resources similar to InferencePool (or extend it) to support MC inference. But the implementation will still be in the gateway controllers, right? Or is there a plan to allow someone to provide a standalone plug-in that can magically enable a non-MC-inference-aware GW to route to multi-cluster inference endpoints?


### Goals

* Enable Inference Gateways to route to backends in multiple clusters.
* Follow a pattern that is familiar to users of [Multi-Cluster Services (MCS)](https://multicluster.sigs.k8s.io/concepts/multicluster-services-api/) and/or Gateways.

**Comment:**

I wonder why this is a goal? What's the benefit of following that pattern? IIRC, we are not going to directly use the ServiceExport/Import API, and one of the non-goals is that we don't want to "be overly prescriptive about implementation details". So that leaves me scratching my head on which part of MCS we are actually following, as we neither reuse the MCS API nor dictate implementation. The UX described below seems generic enough that someone designing it without ever knowing MCS would probably arrive at something similar at a high level.

Just to be clear, I am not against it, but I am not sure it has to be a goal. Also, I would agree more if the goal were that we have to use MCS, since it's a somewhat established multi-cluster networking API with implementations.


### Non-Goals

* Be overly prescriptive about implementation details - this should focus on the resulting UX and leave significant flexibility in how it is achieved.
* L4 ClusterIP routing and/or automatic DNS naming - all traffic needs to flow through the Inference Gateway for this pattern to be useful (otherwise the Endpoint Picker itself would be bypassed).

## Proposal

The multi-cluster Inference Gateway model will largely follow the multi-cluster services model, with a few key differences. We will omit DNS and ClusterIP resolution, and avoid a separate resource, e.g. ServiceExport, by inlining the concept within InferencePool. Additionally, we will add support for having separate Endpoint Pickers in each cluster.

**Contributor:**

> we will add support for having separate Endpoint Pickers in each cluster.

Can you provide additional context here? Separate as in separate from the EPP ref'd by an InferencePool with an export annotation?

**Contributor:**

How are remote clusters expected to learn of Exported/Imported InferencePool objects?
How will remote access to API masters be secured and coordinated over time?

**@bexxmodd (Contributor Author), Aug 20, 2025:**

> How are remote clusters expected to learn of Exported/Imported InferencePool objects? How will remote access to API masters be secured and coordinated over time?

We'll be replicating how Multi-Cluster Services does this: essentially extending Kubernetes' native service discovery across multiple clusters, creating a logical ClusterSet.
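
For reference, the MCS pattern being mirrored pairs an opt-in export with a derived import (`multicluster.x-k8s.io/v1alpha1`, per KEP-1645):

```yaml
# Created by a user in the exporting cluster to opt a Service into the ClusterSet.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-svc
  namespace: my-ns
---
# Created automatically by the MCS controller in each importing cluster.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: my-svc
  namespace: my-ns
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
```

This proposal inlines the export side as an annotation on InferencePool and replaces ServiceImport with InferencePoolImport.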

**Comment:**

FWIW, the how part seems to be more of an implementation detail?

**@mikemorris, Aug 22, 2025:**

> How are remote clusters expected to learn of Exported/Imported InferencePool objects?

The InferencePoolImport resource should be created in each cluster in the ClusterSet, in the same namespace and with the same name as any exported InferencePool (likely automatically by some global controller, but details can be impl-specific), so that each cluster has a local reference resource for any exported remote InferencePool.

> How will remote access to API masters be secured and coordinated over time?

This is a big "details left to implementation" question regarding credentials, read/write access to clusters in a peer-to-peer or hub model, and push vs. pull semantics.


![InferencePoolImport](images/inference-pool-import.png)

### API Changes

#### InferencePool

A new `inference.networking.k8s.io/export` annotation is added to InferencePool (a replacement for the ServiceExport resource in MCS). In the future this may become a field, but we'll start with an annotation to allow for faster iteration. [We'll avoid using a bool here, to align with k8s API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#primitive-types). The only supported value to start will be `ClusterSet`, until another use case is accepted. In the future, we may allow for intermediate values such as `Regional`, or domain-prefixed values.
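
As a rough sketch, exporting an existing pool would then be a one-annotation change (pool fields illustrative, following the v1alpha2 shape):

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b
  namespace: default
  annotations:
    # Opt this pool into export to every cluster in the ClusterSet.
    inference.networking.k8s.io/export: ClusterSet
spec:
  targetPortNumber: 8000
  selector:
    app: vllm-llama3-8b
  extensionRef:
    name: vllm-llama3-8b-epp
```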

**Contributor:**

We should only start with ClusterSet until a use case for Local or any other supported value is accepted.

**Contributor:**

For ClusterSet, the assumption is that the InferencePool is replicated to all (or none) of the member clusters. There is no selective import/export (e.g., if resources in a cluster are maxed out, what should it do with an imported pool?).
I think coordination of the export/import process (including, for example, status reporting) should be specified.

**Contributor Author:**

> We should only start with ClusterSet until a use case for Local or any other supported value is accepted.

Removed Local.

> For ClusterSet, the assumption is that the InferencePool is replicated to all (or none) of the member clusters. There is no selective import/export (e.g., if resources in a cluster are maxed out, what should it do with an imported pool?). I think coordination of the export/import process (including, for example, status reporting) should be specified.

I'm not sure I understand the question. Are you referring to how the controller coordinates export/import? Also, we want to keep API changes as minimal as possible, so we should leave some things open to the implementation.

**Contributor:**

1. Correct. I'm wondering if you want to recommend/enforce that import/export is done to all clusters in the ClusterSet, or whether a controller is free to support selective sharing (e.g., whose policy specification is out of scope).
2. How, if at all, is Import status reported on an Export? Is there an indication in the exported InferencePool that it was successfully imported, and where? I don't consider those implementation details but part of the API. The controller should be able to coordinate across clusters so the user is aware of status (e.g., where is the import coming from, was the export successful, etc.).

**Comment:**

Just to clarify, a ClusterSet does not assume that a service exists in all its member clusters.

**@mikemorris, Aug 22, 2025:**

> I'm wondering if you want to recommend/enforce that import/export is done to all clusters in the ClusterSet, or whether a controller is free to support selective sharing (e.g., whose policy specification is out of scope).

I would discourage "selective sharing", as it can be a leaky abstraction for "non-sameness" in a ClusterSet (i.e., if default/api is a different workload on clusters A and B, and selective sharing is used as a workaround to export the workload from cluster B to A and C, and separately from A to only B, it would be easy to quickly lose track of service identity).

> How, if at all, is Import status reported on an Export? Is there an indication in the exported InferencePool that it was successfully imported, and where?

This is the primary utility of status.conditions on the ServiceExport resource (and status.clusters on ServiceImport for "where is the import coming from") in MCS, although messaging conflicts or failed exports can be difficult for some implementations not using a centralized hub model. I think it's a valid question whether this is a required part of the API for an initial implementation, or whether something like logs/metrics from a multi-cluster Gateway API Inference Extension controller is sufficient initially.
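
For context, the MCS API surfaces that coordination roughly as follows (shapes per the MCS KEP; condition types and cluster tracking vary by implementation):

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-svc
  namespace: my-ns
status:
  conditions:
  - type: Valid      # the export was accepted by the controller
    status: "True"
  - type: Conflict   # set when exports from different clusters disagree
    status: "False"
---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: my-svc
  namespace: my-ns
status:
  clusters:          # "where is the import coming from"
  - cluster: cluster-a
  - cluster: cluster-b
```

An exported InferencePool and its InferencePoolImport could carry analogous status if we decide this belongs in the API rather than in implementation details.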

**@mikemorris, Aug 22, 2025:**

I think a proposed API spec for InferencePoolImport might help clarify some of these questions, like if/how it conveys addresses to reach remote EPPs or InferencePools, or whether it has any relationship to or dependency on a local EPP.

**@mikemorris, Aug 22, 2025:**

FWIW, I was thinking a potential use case for Local might be to create a local InferencePoolImport resource to incrementally migrate Gateways in the local cluster from a local InferencePool over to InferencePoolImport, without yet exposing their local InferencePool to remote traffic. (Now that I'm thinking about this more, though, the UX is actually a bit wonky: if the same name/namespace InferencePool on another cluster was already exported to the ClusterSet, traffic could still immediately start routing off-cluster, and would just additionally have local endpoints available. So maybe that's actually more confusing than helpful...)

#### InferencePoolImport

A new API that mirrors ServiceImport from the MCS API. This allows anyone in a connected cluster to reference a Multi-Cluster InferencePool, even if the local cluster does not have a local InferencePool. In the context of Gateway API, that means that a Gateway could be configured to reference an InferencePoolImport, even if that cluster did not contain an InferencePool.
This API will be used almost exclusively for tracking endpoints, but unlike MCS, we actually have two distinct sets of endpoints that we could track:

1. Endpoint Pickers
1. Model Server Endpoints

**Contributor:**

Would routing directly to model server endpoints bypass the remote EPP? Does that imply that the EPP is operating with a partial view of resources in its cluster and their load? Is that reasonable/desirable?


**@danehans (Contributor), Aug 14, 2025:**

* We should also consider routing to a remote EPP through a remote cluster Gateway (see the original design doc appendix).
* Why does InferencePoolImport need to know about the model server endpoints in a remote cluster? I would expect the local Gateway to route to remote EPPs, or through a remote Gateway, based on one of the following conditions:
  * A local InferencePool exists and no local GPUs are available, e.g. the EPP returns a 503/429.
  * A local InferencePool exists and the local Gateway decides the request is better served by an InferencePoolImport.
  * No local InferencePool exists, but an InferencePoolImport exists with a status that indicates available GPU resources.

Note: The EPP protocol spec should be updated when this design is finalized (please create a tracker issue).
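
To anchor the discussion above, here is a purely illustrative sketch of what an InferencePoolImport might carry. Every field below is a placeholder assumption, not a settled API:

```yaml
apiVersion: inference.networking.k8s.io/v1alpha1   # hypothetical
kind: InferencePoolImport
metadata:
  # Same name/namespace as the exported InferencePool(s), relying on
  # namespace sameness across the ClusterSet.
  name: vllm-llama3-8b
  namespace: default
status:
  # Clusters that exported a matching InferencePool.
  clusters:
  - name: cluster-a
    # In the happy path only EPP endpoints are propagated; model server
    # endpoints would only matter for FailOpen-style fallback.
    endpointPicker:
      address: 10.0.12.34
      port: 9002
  - name: cluster-b
    endpointPicker:
      address: 10.1.56.78
      port: 9002
```

A Gateway in any cluster could then reference the import as an HTTPRoute backendRef, the same way it references a local InferencePool today.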

## Implementation Details

In the happy path, the only type of endpoint that a Gateway would need to know about is Endpoint Pickers. Ultimately, each Gateway will be sending requests to Endpoint Pickers, and then following the directions of that Endpoint Picker. As long as an Endpoint Picker is available, there’s no need to actually propagate the model server endpoints.

**Contributor:**

This seems to suggest the following:

1. The gateway in cluster A routes to the EPP in cluster B.
2. The EPP returns (e.g.) a local endpoint in cluster B.
3. The gateway in cluster A sends the request directly to the endpoint in cluster B.

This requires 2x RTT/BW plus cross-cluster coordination, routing... Couldn't GW(A) delegate to GW(B) and then leave the rest local to B?

**Contributor Author:**

The Gateway in the cluster doesn't do any actual routing, in either single- or multi-cluster. When the Gateway resource is created in the cluster, the gateway controller creates load balancer resources, so when the client sends a request, it's received by the L7 LB and routed to the EPP in the appropriate cluster. It's never actually routed to the cluster holding the Gateway resource.

**@elevran (Contributor), Aug 19, 2025:**

I was referring to the L7 LB in each cluster as "the Gateway", not the Gateway resource in the k8s API.
Do you mean to say that there's an additional LB, besides the Envoy/nginx/etc. proxy, that routes directly to the EPP service? Or is it done via the proxy, which is programmed via the Gateway API?
Also, LoadBalancer resources are often true of cloud deployments and not necessarily of on-premise clusters.

**Comment:**

> Ultimately, each Gateway will be sending requests to Endpoint Pickers, and then following the directions of that Endpoint Picker.

I don't really follow this. So the gateway sends to many EPPs, and somehow it follows a single EPP to the model server endpoints? How does the gateway pick which EPP to follow?


### Failure Mode

If the Endpoint Picker is unavailable and the failure mode is configured as “FailOpen”, we could take one of several approaches:

#### Honor FailOpen configuration

This seems to require the Gateway to be aware of at least some model server endpoints, which requires more endpoint propagation.

#### Fail over to other cluster/Endpoint Picker

In a world where there are multiple clusters/Endpoint Pickers to choose from, it may be desirable to fail over to another cluster. Ultimately, though, if all Endpoint Pickers are unavailable, we end up back at the same problem of needing to be aware of model server endpoints.

#### Consider FailOpen “Extended” support for multi-cluster

Given the potential complexity of supporting a FailOpen mode for multi-cluster, we could consider this “Extended” or optional support.

**Contributor:**

+1 on MC failover being "Extended" support.


### Cluster/Endpoint Picker Selection

It’s likely that each Gateway implementation will have some different logic here, but there will likely be at least two common paths here:

#### Metrics from model server endpoints

In the case where a Gateway is aware of all model server endpoints, it could theoretically also track metrics for each of these endpoints.

**Contributor:**

This duplicates the work of the EPP.

**@bexxmodd (Contributor Author), Aug 18, 2025:**

> This duplicates the work of the EPP.

Isn't the EPP tracking metrics for the individual pods? Maybe have the Gateway collect aggregated metrics for the InferencePool?

**@mikemorris, Aug 22, 2025:**

Agreed that this risks an unclear separation of concerns (or duplication of functionality) between a Gateway and EPP.


#### Metrics from Endpoint Picker

Since Gateways are ultimately deciding which Endpoint Picker to send traffic to, it could make sense for Endpoint Pickers to report back load/utilization data to the Gateway to help inform that decision. (This would reflect the utilization of model server Pods within the local InferencePool managed by each EPP).

**Contributor:**

The EPP already exposes InferencePool metrics. It will be up to the implementation how to use these metrics to make a routing decision.

**@mikemorris, Aug 22, 2025:**

I'm a bit confused here - how or in what circumstance would a Gateway potentially be evaluating InferencePool metrics to make routing decisions?

If multiple InferencePoolImport or InferencePool backendRefs are available under a single HTTPRouteRule, does this imply a need for some sort of dynamic weight on backendRef instead of static configuration? If an EPP is only called to select an endpoint after the target InferencePool has been selected, how would metrics be considered?

Would an HTTPRouteRule extensionRef filter potentially be useful for this selection somehow?

Is this the theoretical future enhancement referenced below and currently out of scope?


#### PreferClose/PreferLocal

Route to the local cluster by default; fail over if it is out of capacity.
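
There is precedent for expressing this declaratively: core Services use `spec.trafficDistribution: PreferClose` for topology-aware routing. A multi-cluster InferencePool could adopt an analogous knob; the annotation below is hypothetical:

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b
  annotations:
    inference.networking.k8s.io/export: ClusterSet
    # Hypothetical: prefer the local cluster's EPP, spilling over to remote
    # clusters only when local capacity is exhausted.
    inference.networking.k8s.io/traffic-distribution: PreferLocal
spec: {}  # pool spec elided
```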

### Theoretical Future Enhancement: Multi-Cluster Endpoint Pickers

In the future, a more advanced implementation could allow Endpoint Pickers to pick from endpoints in other clusters (relying on the same underlying infrastructure that propagates endpoints for this multi-cluster model). We're intentionally excluding that from the initial scope, as it's both more complicated to implement and unlikely to be scalable, given the need for Endpoint Pickers to have a very tight feedback loop (usually via frequent scraping of metrics) with each model server Pod in the InferencePool. Extending that model across clusters could become quite costly.

**Pros**:

* Reuses existing MCS model
* Simplest possible API model
* “Export” configuration lives on InferencePool and clearly applies to the entire pool, not just EPP
* Can clearly reference an InferencePool in other clusters without having one locally

**Cons**:

* Does not reuse MCS API (unclear if this is a con)

## Alternative 1: MCS API for EPP

If we lean into the idea that the only thing a Gateway needs to know is the Endpoint Picker endpoints and what cluster(s) they're associated with, we could build this on top of the MCS API. With this approach, the Endpoint Picker is exposed with a Multi-Cluster Service:

![MCS API for EPP](images/mcs-api-epp.png)

**Pros**:

* Reuses existing MCS infrastructure.
* Likely relatively simple to implement.

**Cons**:

* Referencing InferencePools in other clusters requires you to create an InferencePool locally.
* Significantly more complex configuration (more YAML at least).
* "FailOpen" mode becomes ~impossible if implementations don't actually have some model server endpoints to fall back to.
* In this model, you don't actually choose to export an InferencePool; you export the Endpoint Picker, which could lead to significant confusion.
* InferencePool is meant to be a replacement for a Service so it may seem counterintuitive for a user to create a Service to achieve multi-cluster inference.

## Alternative 2: New MCS API

One of the key pain points we’re seeing here is that the current iteration of the MCS API requires a tight coupling between name/namespace and kind, with Service being the only kind of backend supported right now. This goes against the broader SIG-Network direction of introducing more focused kinds of backends (like InferencePool). To address this, we could create a resource that has an `exportRef` that allows for exporting different types of resources.

While we were at it, we could combine the separate `export` and `import` resources that exist today, with `export` acting as the (optional) spec of this new resource, and `import` acting as the `status` of the resource. Instead of `import` resources being automatically created, users would create them wherever they wanted to reference or export something as a MultiClusterService.

Here’s a very rough example:

```yaml
apiVersion: networking.k8s.io/v1
kind: MultiClusterService
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  exportRef:
    group: ""        # core API group (Service); group and version are distinct
    kind: Service
    name: bookinfo
  scope: ClusterSet
status:
  conditions:
  - type: Accepted
    status: "True"
    message: "MultiClusterService has been accepted"
    lastTransitionTime: "2025-03-30T01:33:51Z"
  targetCount: 1
  ports:
  - protocol: TCP
    appProtocol: HTTP
    port: 8080
```
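
Under this model, exporting an InferencePool rather than a Service would just be a different `exportRef`, e.g.:

```yaml
apiVersion: networking.k8s.io/v1
kind: MultiClusterService
metadata:
  name: vllm-llama3-8b
  namespace: default
spec:
  exportRef:
    group: inference.networking.k8s.io
    kind: InferencePool
    name: vllm-llama3-8b
  scope: ClusterSet
```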

### Open Questions

* How can we ensure that cross-cluster connections to the EPP are secure? (Requires resolution of https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/735#issuecomment-3133302612.)
* Can we find a way to configure preferences for where a request should be routed?

### Prior Art

* [GEP-1748: Gateway API Interaction with Multi-Cluster Services](https://gateway-api.sigs.k8s.io/geps/gep-1748/)
* [Envoy Gateway with Multi-Cluster Services](https://gateway.envoyproxy.io/latest/tasks/traffic/multicluster-service/)
* [Multicluster Service API](https://multicluster.sigs.k8s.io/concepts/multicluster-services-api/)
* [Submariner](https://submariner.io/)

### References

* [Original Doc for MultiCluster Inference Gateway](https://docs.google.com/document/d/1QGvG9ToaJ72vlCBdJe--hmrmLtgOV_ptJi9D58QMD2w/edit?tab=t.0#heading=h.q6xiq2fzcaia)