diff --git a/keps/sig-multicluster/5939-mcs-cluster-selection/README.md b/keps/sig-multicluster/5939-mcs-cluster-selection/README.md new file mode 100644 index 000000000000..283ac53d7bf0 --- /dev/null +++ b/keps/sig-multicluster/5939-mcs-cluster-selection/README.md @@ -0,0 +1,1233 @@ + +# KEP-5939: Cluster Selection for Multi-Cluster Services + + + + + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [User Stories](#user-stories) + - [Story 1: Locality-based failover](#story-1-locality-based-failover) + - [Story 2: Multi-dimension selection with arbitrary properties](#story-2-multi-dimension-selection-with-arbitrary-properties) + - [Story 3: Data sovereignty with no fallback](#story-3-data-sovereignty-with-no-fallback) + - [Notes/Constraints/Caveats](#notesconstraintscaveats) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [API Overview](#api-overview) + - [Label Selector](#label-selector) + - [Validation](#validation) + - [Cluster Selection Algorithm](#cluster-selection-algorithm) + - [Interaction with Service Traffic Distribution](#interaction-with-service-traffic-distribution) + - [Reading ClusterProperties from Constituent Clusters](#reading-clusterproperties-from-constituent-clusters) + - [Reference Implementation in kubernetes-sigs/mcs-api](#reference-implementation-in-kubernetes-sigsmcs-api) + - [Conflict Resolution](#conflict-resolution) + - [Test Plan](#test-plan) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Alpha -> Beta graduation](#alpha---beta-graduation) + - [Beta -> GA graduation](#beta---ga-graduation) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew 
Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) +- [Infrastructure Needed (Optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +Items marked with (R) are required *prior to targeting to a milestone / release*. + +- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR) +- [ ] (R) KEP approvers have approved the KEP status as `implementable` +- [ ] (R) Design details are appropriately documented +- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors) + - [ ] e2e Tests for all Beta API Operations (endpoints) + - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) + - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free +- [ ] (R) Graduation criteria is in place + - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) within one minor version of promotion to GA +- [ ] (R) Production readiness review completed +- [ ] (R) Production readiness review approved +- [ ] "Implementation History" section is up-to-date for milestone +- [ ] User-facing documentation has been created in 
[kubernetes/website], for publication to [kubernetes.io] +- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes + + + +[kubernetes.io]: https://kubernetes.io/ +[kubernetes/enhancements]: https://git.k8s.io/enhancements +[kubernetes/kubernetes]: https://git.k8s.io/kubernetes +[kubernetes/website]: https://git.k8s.io/website +[MCS-API]: /keps/sig-multicluster/1645-multi-cluster-services-api +[ClusterProperty]: /keps/sig-multicluster/2149-clusterid +[PlacementDecision]: /keps/sig-multicluster/5313-placement-decision-api +[mcs-api-repo]: https://github.com/kubernetes-sigs/mcs-api + +## Summary + + + +The [Multi-Cluster Services API (MCS-API)][MCS-API] lets users export +a Kubernetes `Service` with `ServiceExport` so that it becomes +consumable across a clusterset. When the same `Service` is exported +from multiple clusters, each importing cluster gets a single +automatically generated `ServiceImport`. + +Today there is no standard way to express a preference over which +clusters should receive traffic. A consumer in `eu-west` has no way +to say "prefer my own cluster, then my region, then anywhere else". + +This KEP introduces a **cluster selection** concept for directing +traffic. It adds an ordered `clusterSelectors` field to +`ServiceExport` that lets users select subsets of constituent +clusters based on [`ClusterProperty`][ClusterProperty] values. + +Each entry can use a standard Kubernetes `metav1.LabelSelector` with +a special `@SameAsImporter@` value. If no entry matches clusters with +available endpoints, configurable fallback behavior applies and +defaults to all clusters. + +## Motivation + + + +MCS-API already supports existing Service fields that help steer +traffic such as `trafficDistribution` (for example, `PreferSameZone`) +and `internalTrafficPolicy`. These work across clusters. However, they +only function at the host or zone level. 
+ +Real-world multi-cluster deployments span regions, countries, and +continents. Users need the ability to keep traffic local for latency +and cost reasons, with well-defined fallback behavior when local +endpoints are unavailable. Without a standard API for this, every +MCS-API implementation has its own mechanism, typically through +annotations (for example, Cilium's +[Service Affinity](https://docs.cilium.io/en/stable/network/clustermesh/affinity/) +annotation which is limited to only select the local cluster or all remote +clusters). A standard CRD field allows more expressive cluster selection +and makes configurations portable across implementations. + +### Goals + + + +- Allow users to steer traffic to a number of preferred clusters. +- Act as an additional layer on top of `trafficDistribution` and + similar Service API fields, not a replacement. +- Use [`ClusterProperty`][ClusterProperty] as the source of cluster + metadata for selection. +- Support arbitrary (user-defined) `ClusterProperty` keys, not only + a pre-defined set. +- Support various selection criteria over `ClusterProperty`, + including "same value as the importing cluster" and arbitrary + values. +- Define an unambiguous algorithm that can be implemented by any + MCS-API implementation and by third-party integrations such as + Gateway API. + +### Non-Goals + + + +- Select individual endpoints. This KEP is only about selecting + clusters. +- Introduce complex routing rules, as those should be handled by + service-mesh or Gateway API implementations. + +## Proposal + + + +This KEP builds on two existing building blocks: + +1. **[MCS-API]**: provides `ServiceExport`/`ServiceImport` and the conflict + resolution model. + +2. **[About API / `ClusterProperty`][ClusterProperty]**: provides a standard + way to attach key/value metadata to clusters. This KEP uses those + properties as the data source for cluster selection. 
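For reference, the cluster metadata consumed by this selection mechanism is expressed as `ClusterProperty` objects on each constituent cluster, along these lines (a sketch; the `about.k8s.io/v1alpha1` group/version and the property values shown are illustrative):

```yaml
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: cluster.clusterset.k8s.io
spec:
  value: cluster-eu-west-1
---
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: region.topology.k8s.io
spec:
  value: eu-west
```

When selectors are evaluated, each object's `metadata.name` acts as the key and `spec.value` as the value.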
+ +New fields `clusterSelectors` and `clusterSelectorsFallbackPolicy` are added +to `ServiceExport.spec`. They are globally reconciled with an exact match +resolution to `ServiceImport.spec` so that all constituent clusters get +the same behavior. In case of conflict, the values of these fields +are resolved to those of the oldest `ServiceExport`. + +The `clusterSelectors` field is an ordered list of entries. Each +entry defines a set of clusters whose endpoints should be preferred. +The list is evaluated in order: the first entry that matches clusters +with at least one available endpoint is used. The +`clusterSelectorsFallbackPolicy` field controls what happens when no +entry matches. It defaults to `"AllClusters"` (fallback to all +constituent clusters) and can be set to `"None"` to suppress fallback +entirely. + +Each entry in `clusterSelectors` can select clusters based on different +criteria. Currently only a standard Kubernetes `metav1.LabelSelector` +(as used in `NetworkPolicy`, `Deployment`, and similar APIs) +evaluated against `ClusterProperty` key/value pairs is supported. + +### User Stories + + + +#### Story 1: Locality-based failover + +A platform team exports `api` from multiple clusters across regions. +They want each cluster to prefer its own endpoints first, then the same +region, then the same continent, then fallback to all clusters. + +```yaml +apiVersion: multicluster.x-k8s.io/v1beta1 +kind: ServiceExport +metadata: + name: api +spec: + clusterSelectors: + - propertySelector: + matchExpressions: + - key: cluster.clusterset.k8s.io + operator: In + values: + - "@SameAsImporter@" + - propertySelector: + matchLabels: + region.topology.k8s.io: "@SameAsImporter@" + - propertySelector: + matchLabels: + continent.topology.k8s.io: "@SameAsImporter@" +``` + +This can be combined with the Service-level `trafficDistribution: PreferSameZone` +to further prefer same-zone endpoints within the selected cluster(s). 
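To make the `@SameAsImporter@` substitution concrete, here is a minimal, dependency-free Go sketch (illustrative only, not the mcs-api implementation) that resolves the placeholder against the importing cluster's properties and then matches candidate clusters via simple `matchLabels`-style equality:

```go
package main

import "fmt"

const sameAsImporter = "@SameAsImporter@"

// resolve replaces @SameAsImporter@ in matchLabels values with the
// importing cluster's own property value for that key (or the empty
// string if the importer lacks the key), per the Label Selector
// semantics described in this KEP.
func resolve(matchLabels, importerProps map[string]string) map[string]string {
	out := make(map[string]string, len(matchLabels))
	for k, v := range matchLabels {
		if v == sameAsImporter {
			v = importerProps[k]
		}
		out[k] = v
	}
	return out
}

// matches reports whether a candidate cluster's properties satisfy the
// resolved matchLabels (exact key/value equality, as a label selector
// would evaluate them).
func matches(props, resolved map[string]string) bool {
	for k, v := range resolved {
		if props[k] != v {
			return false
		}
	}
	return true
}

func main() {
	importer := map[string]string{"region.topology.k8s.io": "eu-west"}
	selector := map[string]string{"region.topology.k8s.io": sameAsImporter}

	resolved := resolve(selector, importer)
	fmt.Println(resolved["region.topology.k8s.io"]) // eu-west

	euCluster := map[string]string{"region.topology.k8s.io": "eu-west"}
	usCluster := map[string]string{"region.topology.k8s.io": "us-east"}
	fmt.Println(matches(euCluster, resolved), matches(usCluster, resolved)) // true false
}
```

Because the placeholder resolves against each importer independently, every cluster exports the identical selector list yet prefers its own region.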
+ +#### Story 2: Multi-dimension selection with arbitrary properties + +Some workloads require selection rules combining multiple dimensions. +For example, an ML inference service may run across clusters with +different GPU capabilities. Rules combining some form of locality +and GPU tiering might be needed. + +The following rules prefer clusters in the same continent with +high-end GPUs, then clusters in the same continent with any GPU, +then any cluster with a GPU. + +```yaml +apiVersion: multicluster.x-k8s.io/v1beta1 +kind: ServiceExport +metadata: + name: inference-server +spec: + clusterSelectors: + # Same continent with high-end GPU + - propertySelector: + matchLabels: + continent.topology.k8s.io: "@SameAsImporter@" + gpu-tier.mycompany.com: "high" + # Same continent with any GPU + - propertySelector: + matchLabels: + continent.topology.k8s.io: "@SameAsImporter@" + matchExpressions: + - key: gpu-tier.mycompany.com + operator: Exists + # Any cluster with a GPU + - propertySelector: + matchExpressions: + - key: gpu-tier.mycompany.com + operator: Exists +``` + +Note that `continent.topology.k8s.io` is not a standard key as of today. + +#### Story 3: Data sovereignty with no fallback + +Some regulatory constraints require that traffic never reaches certain +clusters, even when preferred ones are unavailable. Rather than +splitting into separate clustersets, teams can keep a single clusterset +and use `clusterSelectorsFallbackPolicy: None` to prevent fallback: + +```yaml +apiVersion: multicluster.x-k8s.io/v1beta1 +kind: ServiceExport +metadata: + name: customer-data-api +spec: + clusterSelectorsFallbackPolicy: None + clusterSelectors: + - propertySelector: + matchLabels: + compliance.mycompany.com: "gdpr" +``` + +### Notes/Constraints/Caveats + + + +Cluster selection only applies to `ClusterIP` (non-headless) +multi-cluster services. Headless `ServiceImport`s return individual +pod IPs via DNS and are not compatible with cluster selection. 
If +`clusterSelectors` is set on a `ServiceExport` for a headless +service, a condition type `Valid` with reason `InvalidServiceType` +and status `"False"` will be set on the `ServiceExport`. + +How implementations gather `ClusterProperty` data from constituent +clusters is implementation specific. This KEP does not prescribe +infrastructure requirements that may not fit all deployment models and +may be revisited later if a standard mechanism is needed. + +CEL expressions as additional selection criteria (alongside +`propertySelector`) are planned for a future iteration of this KEP. +The design space (per-candidate boolean vs list-based evaluation, +CEL cost estimation, CEL library configuration) requires further +discussion and expertise. Each entry in `clusterSelectors` is +designed to be extended with an `expression` field later. + +Externally-driven cluster selection (for example, having an +external controller populate the selection via a +[`PlacementDecision`][PlacementDecision]) is out of scope for now. +The current version of this KEP only provides inline selection on +`ServiceExport` via `ClusterProperty` data. + +Future selection criteria (such as CEL expressions or +externally-driven selection) would be added as additional fields +on each entry in `clusterSelectors`. Within a single entry, +criteria would be mutually exclusive: for example, a user could +not combine `propertySelector` and a CEL expression in the same +entry, but could use `propertySelector` in one entry and a CEL +expression in another with the usual ordered fallback between them. + +### Risks and Mitigations + + + +## Design Details + + + +### API Overview + +The `ServiceExport.spec` gains `clusterSelectors` and +`clusterSelectorsFallbackPolicy` fields: + +```go +type ServiceExportSpec struct { + // ... existing fields ... + + // ClusterSelectors is an ordered list of cluster selectors. + // Each entry defines a preferred set of clusters. 
The first + // entry whose matching clusters have ready or serving endpoints + // is used. If no entry matches, the behavior depends on + // ClusterSelectorsFallbackPolicy. + // +optional + // +listType=atomic + // +kubebuilder:validation:MaxItems:=16 + ClusterSelectors []ClusterSelector `json:"clusterSelectors,omitempty"` + + // ClusterSelectorsFallbackPolicy controls what happens when no + // ClusterSelector entry matches. + // + // "AllClusters" (default): fall back to all constituent clusters. + // "None": do not fall back. The service has no endpoints + // and is effectively unavailable. + // + // +optional + // +kubebuilder:default="AllClusters" + // +kubebuilder:validation:Enum=AllClusters;None + ClusterSelectorsFallbackPolicy *FallbackPolicy `json:"clusterSelectorsFallbackPolicy,omitempty"` +} + +// FallbackPolicy defines the behavior when no ClusterSelector matches. +// +kubebuilder:validation:Enum=AllClusters;None +type FallbackPolicy string + +const ( + // FallbackAllClusters falls back to all constituent clusters. + FallbackAllClusters FallbackPolicy = "AllClusters" + // FallbackNone does not fall back. The service has no endpoints + // and is effectively unavailable. + FallbackNone FallbackPolicy = "None" +) +``` + +The `ServiceImport.spec` gains the same fields, populated by the +mcs-controller after conflict resolution: + +```go +type ServiceImportSpec struct { + // ... existing fields ... + + // ClusterSelectors is the reconciled ordered list of cluster + // selectors, derived from the constituent ServiceExports. + // +optional + // +listType=atomic + // +kubebuilder:validation:MaxItems:=16 + ClusterSelectors []ClusterSelector `json:"clusterSelectors,omitempty"` + + // ClusterSelectorsFallbackPolicy is the reconciled fallback + // policy, derived from the constituent ServiceExports. 
+ // +optional + // +kubebuilder:default="AllClusters" + // +kubebuilder:validation:Enum=AllClusters;None + ClusterSelectorsFallbackPolicy *FallbackPolicy `json:"clusterSelectorsFallbackPolicy,omitempty"` +} +``` + +The `ClusterSelector` type is defined as follows: + +```go +// ClusterSelector selects a set of clusters. +type ClusterSelector struct { + // PropertySelector selects clusters by matching ClusterProperty + // key/value pairs using standard label selector semantics. + // The special value "@SameAsImporter@" in matchLabels values or + // matchExpressions values means "use the same value as the + // importing cluster for this key." + // +required + PropertySelector *metav1.LabelSelector `json:"propertySelector"` +} +``` + +#### Label Selector + +When `propertySelector` is set, it uses the same standard Kubernetes +`metav1.LabelSelector` (as in `NetworkPolicy`, `Deployment`, and +similar APIs) evaluated against each constituent cluster's +`ClusterProperty` map. Both `matchLabels` and `matchExpressions` are +supported. + +The special value `@SameAsImporter@` may be used in `matchLabels` +values and `matchExpressions` values (with `In` or `NotIn` operators). +It is replaced by the importing cluster's actual property value for +the corresponding key, or by an empty string if the importing cluster +does not have that key. Similarly to `trafficDistribution`, this lets +users say "prefer clusters in the same region as me" without +hard-coding a specific region, while keeping the MCS-API property +that all `ServiceExport`s for a given service carry the same +configuration across all constituent clusters. + +Implementations may choose to replace `@SameAsImporter@` with the +local cluster's actual value when populating the `ServiceImport`, so +that the resolved selectors are directly readable without further +interpretation when consuming a `ServiceImport`. + +#### Validation + +Every entry in the `clusterSelectors` list must have `propertySelector` +set. 
An entry with an empty or missing `propertySelector` is invalid. +A condition type `Valid` with reason `InvalidClusterSelector` and +status `"False"` will be set on the `ServiceExport`. + +When future selection criteria are added to entries in +`clusterSelectors` (such as a CEL expression field), each entry must +use only one selection criteria. Setting multiple fields on +the same entry (for example, both `propertySelector` and a CEL +expression) is invalid and will similarly raise the +`InvalidClusterSelector` condition. + +### Cluster Selection Algorithm + +The algorithm is evaluated in each importing cluster independently: + +1. If `clusterSelectors` is empty or unset, use all endpoints from all + constituent clusters (no filtering). +2. For each selector in `clusterSelectors` (in order): + 1. Find all constituent clusters that match and collect their + endpoints. + 2. If at least one of those endpoints is available (if it has + a `ready` or `serving` condition set to `true`), use that set + of endpoints and stop. +3. If no cluster selector matched: + - If `clusterSelectorsFallbackPolicy` is `"AllClusters"` (the + default), use all endpoints from all constituent clusters. + - If `clusterSelectorsFallbackPolicy` is `"None"`, the service + has no endpoints and is effectively unavailable. + +When the active selector changes (for example, because the preferred +clusters lost all ready endpoints), implementations should not +instantly drop existing connections. How this is handled (for example, +connection draining, graceful transition) is implementation specific, +but implementations should make a best effort to avoid abrupt +connection termination. + +### Interaction with Service Traffic Distribution + +Cluster selection runs before Service-level `trafficDistribution` +(for example, `PreferSameZone`). If `clusterSelectors` narrows the +set of eligible clusters, `trafficDistribution` still applies within +the resulting endpoints. 
+ +For the time being, cluster selection is **not** +`trafficDistribution`-aware. If the preferred clusters have available +endpoints, they are selected even if none of those endpoints are in +the same zone as the consumer. `trafficDistribution` does not cause +fallback to the next cluster selector. + +Making cluster selection `trafficDistribution`-aware could +be explored later if use cases emerge. + +### Reading ClusterProperties from Constituent Clusters + +How an implementation gathers `ClusterProperty` data from constituent +clusters is implementation specific. Implementations may, for instance: +- Query remote clusters directly and keep the data in memory. +- Use a hub cluster or central registry. +- Reflect `ClusterProfile` objects locally. + +This KEP does not prescribe a mechanism. If a standard mechanism +is needed in the future, this may be addressed in a follow-up. + +### Reference Implementation in kubernetes-sigs/mcs-api + +The [`kubernetes-sigs/mcs-api`][mcs-api-repo] repository will provide +the CRD types for the new `clusterSelectors` field on `ServiceExport` +and `ServiceImport`, along with a generic `PreparedSelector` API for +evaluating selectors: + +```go +// Cluster represents a cluster in the clusterset. Implementations +// provide their own backing type. +type Cluster interface { + // GetID returns the cluster ID (typically the value of the + // cluster.clusterset.k8s.io ClusterProperty). + GetID() string + // GetProperties returns the ClusterProperty key/value pairs. + GetProperties() map[string]string +} + +// PreparedSelector is a compiled ClusterSelector ready for evaluation. +type PreparedSelector[T Cluster] interface { + // SelectClusters returns the subset of candidates that match. + SelectClusters(candidates []T) ([]T, error) +} + +// PrepareSelector compiles a ClusterSelector for the given importer. 
+// It resolves any @SameAsImporter@ placeholders using the importer's +// properties, then parses the label selector using the standard +// Kubernetes LabelSelectorAsSelector function. The returned +// PreparedSelector is bound to this importer and can be reused +// across evaluations. +func PrepareSelector[T Cluster](selector ClusterSelector, importer T) (PreparedSelector[T], error) +``` + +The full selection algorithm (iterating selectors, checking endpoint +readiness, fallback, caching) is implementation-specific. + +### Conflict Resolution + +`clusterSelectors` and `clusterSelectorsFallbackPolicy` are global +properties of the multi-cluster service. They follow the standard +MCS-API conflict resolution policy: + +- All `ServiceExport`s for the same service must agree on + `clusterSelectors` as a whole (the entire ordered list must match + exactly) and similarly on `clusterSelectorsFallbackPolicy`. + If they disagree, precedence is given to the oldest `ServiceExport` + (by `creationTimestamp`). +- A `Conflict` condition with status `True` is set on all + `ServiceExport`s with a `ClusterSelectorsConflict` or + `ClusterSelectorsFallbackPolicyConflict` reason. +- The resolved value is written to the `ServiceImport` by the + mcs-controller. + +### Test Plan + + + +[x] I/we understand the owners of the involved components may require updates to +existing tests to make this code solid enough prior to committing the changes necessary +to implement this enhancement. + +##### Prerequisite testing updates + + + +##### Unit tests + + + + + + +##### Integration tests + + + + + + +##### e2e tests + + + + +### Graduation Criteria + + + +#### Alpha -> Beta graduation + +- At least two implementations supporting the feature. +- Some conformance tests have been implemented and at least two + implementations have passed those tests. +- Support for CEL expressions is specified as additional selection + criteria, or explicitly ruled out. 
+ +#### Beta -> GA graduation + +- KEP-1645 and KEP-2149 have both graduated to stable. +- Conformance tests are mature and extensive. +- Most MCS-API implementations support the feature and pass + the conformance tests. +- Support for external cluster selection mechanisms (for example, + `PlacementDecision`) is specified as additional selection + criteria, or explicitly ruled out. + +### Upgrade / Downgrade Strategy + + + +### Version Skew Strategy + + + +## Production Readiness Review Questionnaire + + + +### Feature Enablement and Rollback + + + +###### How can this feature be enabled / disabled in a live cluster? + + + + +###### Does enabling the feature change any default behavior? + + + +###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)? + + + +###### What happens if we reenable the feature if it was previously rolled back? + +###### Are there any tests for feature enablement/disablement? + + + +### Rollout, Upgrade and Rollback Planning + + + +###### How can a rollout or rollback fail? Can it impact already running workloads? + + + +###### What specific metrics should inform a rollback? + + + +###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? 
+ + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? + +## Implementation History + + + +## Drawbacks + + + +`@SameAsImporter@` is a placeholder value overloading standard +`metav1.LabelSelector` values, which is unusual in the Kubernetes +ecosystem. The About API (KEP-2149) also places no +character restrictions on general `ClusterProperty` values, so a +property value `@SameAsImporter@` would be indistinguishable from the +placeholder. In practice this is unlikely, and the convenience for +common use cases justifies it. 
+ +## Alternatives + + + +## Infrastructure Needed (Optional) + + diff --git a/keps/sig-multicluster/5939-mcs-cluster-selection/kep.yaml b/keps/sig-multicluster/5939-mcs-cluster-selection/kep.yaml new file mode 100644 index 000000000000..7be0a7becf52 --- /dev/null +++ b/keps/sig-multicluster/5939-mcs-cluster-selection/kep.yaml @@ -0,0 +1,22 @@ +title: Cluster Selection for Multi-Cluster Services +kep-number: 5939 +authors: + - "@MrFreezeex" +owning-sig: sig-multicluster +# TODO: check with sig-network +# participating-sigs: +# - sig-network +status: implementable +creation-date: 2026-03-01 +reviewers: + - TBD +approvers: + - "@skitt" + - "@jeremyot" + +see-also: + - "/keps/sig-multicluster/1645-multi-cluster-services-api" + - "/keps/sig-multicluster/2149-clusterid" + +latest-milestone: "0.0" +stage: alpha