1 change: 1 addition & 0 deletions pages/index.md
@@ -989,6 +989,7 @@
+ [Add IP restrictions on an OVHcloud Managed Kubernetes cluster](public_cloud/containers_orchestration/managed_kubernetes/add-ip-restrictions)
+ [Changing the security update policy on an OVHcloud Managed Kubernetes cluster](public_cloud/containers_orchestration/managed_kubernetes/change-security-update)
+ [Configuring the OIDC provider on an OVHcloud Managed Kubernetes cluster](public_cloud/containers_orchestration/managed_kubernetes/configuring-oidc-provider-config)
+ [Customising IP allocation on an OVHcloud Managed Kubernetes cluster](public_cloud/containers_orchestration/managed_kubernetes/configuring-pods-services-ip-allocation)
+ [Nodepools & Nodes](public-cloud-containers-orchestration-managed-kubernetes-k8s-configuration-nodepools-and-nodes)
+ [How to manage nodes and node pools on an OVHcloud Managed Kubernetes cluster](public_cloud/containers_orchestration/managed_kubernetes/managing-nodes)
+ [Dynamically resizing a cluster with the cluster autoscaler](public_cloud/containers_orchestration/managed_kubernetes/using-cluster-autoscaler)
@@ -0,0 +1,129 @@
---
title: Customising IP allocation on an OVHcloud Managed Kubernetes cluster
excerpt: "Find out how to configure the IP allocation policy for your pods and services on an OVHcloud Managed Kubernetes cluster with Standard plan"
updated: 2026-03-17
---

## Objective

**This guide details how to customise the IP ranges used for the pods and services in your OVHcloud Managed Kubernetes cluster with Standard plan.**

## Requirements

- An [OVHcloud Managed Kubernetes](/links/public-cloud/kubernetes) cluster

## Limits

Customising the pod and service IP allocation policy is not possible on clusters with the Free plan.

You cannot modify the IP allocation policy of a running cluster. It must be set at cluster creation or when resetting the cluster, which erases all data.

## Configuration details

Two parameters are available to control the IP allocation policy in your OVHcloud Managed Kubernetes cluster.

| Parameter | Default value | Function |
| ------------------ | --------------- | ------------------------------------------------------------------ |
| `podsIpv4Cidr` | `10.240.0.0/13` | This is the subnet used to address all the pods in the cluster |
| `servicesIpv4Cidr` | `10.3.0.0/16` | This is the subnet used to address all the services in the cluster |

> [!primary]
>
> You can find more information about the CIDR notation here: [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)
>

Keep the following rules in mind for these parameters:

- `podsIpv4Cidr` and `servicesIpv4Cidr` **must not** collide with each other, nor with the OpenStack subnets on the same VLAN in your project.
- The subnets **must** be chosen in the [private network blocks](https://en.wikipedia.org/wiki/List_of_reserved_IP_addresses).
- The minimal size allowed for the `podsIpv4Cidr` and the `servicesIpv4Cidr` subnets is `/16`.

Each node in the cluster is assigned a `/24` subnet inside `podsIpv4Cidr`; choosing a `/16` limits the cluster to 256 nodes.
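The rules and the capacity arithmetic above can be checked locally before calling the API. The following sketch uses only Python's standard `ipaddress` module; the CIDR values are illustrative:

```python
import ipaddress

def check_ip_allocation(pods_cidr: str, services_cidr: str) -> int:
    """Validate candidate podsIpv4Cidr/servicesIpv4Cidr values and
    return the maximum node count (one /24 per node in podsIpv4Cidr)."""
    pods = ipaddress.ip_network(pods_cidr)
    services = ipaddress.ip_network(services_cidr)
    for net in (pods, services):
        if not net.is_private:
            raise ValueError(f"{net} is not in a private network block")
        if net.prefixlen > 16:
            raise ValueError(f"{net} is smaller than the /16 minimum")
    if pods.overlaps(services):
        raise ValueError("podsIpv4Cidr and servicesIpv4Cidr overlap")
    # One /24 subnet is assigned per node inside podsIpv4Cidr.
    return 2 ** (24 - pods.prefixlen)

print(check_ip_allocation("172.16.0.0/12", "10.100.0.0/16"))  # 4096 nodes
print(check_ip_allocation("10.240.0.0/13", "10.3.0.0/16"))    # 2048 nodes (defaults)
```

Note that this only checks the two ranges against each other and the private blocks; collisions with the OpenStack subnets in your own project still have to be verified against your network plan.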

> [!warning]
>
> Adding nodes past the limit of possible `/24` subnets in the `podsIpv4Cidr` could render your cluster unstable.
>

## Instructions using the OVHcloud API

> [!primary]
>
> You can find more information on how to use the OVHcloud API here: [First steps with the OVHcloud API](/pages/manage_and_operate/api/first-steps).
>

### Creating a new cluster with a custom IP allocation policy

Using the following call, you can create a new cluster:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube POST /cloud/project/{serviceName}/kube
>

To set a custom IP allocation policy on pods and/or services, you can use the following example:

```json
{
  "name": "my-cluster-with-custom-ip",
  "nodepool": {
    "desiredNodes": 3,
    "flavorName": "b3-8",
    "name": "my-nodepool"
  },
  "region": "GRA11",
  "ipAllocationPolicy": {
    "podsIpv4Cidr": "172.16.0.0/12",
    "servicesIpv4Cidr": "10.100.0.0/16"
  }
}
```
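If you script this call, the same payload can be assembled programmatically. A minimal Python sketch follows; the helper name, the cluster/nodepool values, and the use of the community `python-ovh` client are illustrative, not part of the official guide:

```python
def build_kube_payload(pods_cidr: str, services_cidr: str) -> dict:
    """Assemble the cluster-creation body shown above (values illustrative)."""
    return {
        "name": "my-cluster-with-custom-ip",
        "region": "GRA11",
        "nodepool": {
            "desiredNodes": 3,
            "flavorName": "b3-8",
            "name": "my-nodepool",
        },
        "ipAllocationPolicy": {
            "podsIpv4Cidr": pods_cidr,
            "servicesIpv4Cidr": services_cidr,
        },
    }

# Hypothetical usage with the community python-ovh client (pip install ovh),
# not executed here -- credentials come from ovh.conf or the environment:
#   import ovh
#   client = ovh.Client()
#   service_name = "<your Public Cloud project ID>"
#   client.post(f"/cloud/project/{service_name}/kube",
#               **build_kube_payload("172.16.0.0/12", "10.100.0.0/16"))
```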

Once the cluster is created, use this call to verify the IP allocation policy:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube GET /cloud/project/{serviceName}/kube
>
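When checking the result in a script, you only need to read the policy back out of the cluster description. This sketch assumes the description exposes the same `ipAllocationPolicy` object used in the creation payload; the sample dictionary is illustrative, not a captured API response:

```python
def get_ip_allocation(cluster: dict) -> tuple:
    """Extract the pods/services CIDRs from a cluster description."""
    policy = cluster["ipAllocationPolicy"]
    return policy["podsIpv4Cidr"], policy["servicesIpv4Cidr"]

# Illustrative response shape:
sample = {
    "name": "my-cluster-with-custom-ip",
    "ipAllocationPolicy": {
        "podsIpv4Cidr": "172.16.0.0/12",
        "servicesIpv4Cidr": "10.100.0.0/16",
    },
}
print(get_ip_allocation(sample))  # ('172.16.0.0/12', '10.100.0.0/16')
```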

### Resetting a cluster to change its IP allocations

> [!warning]
>
> Resetting a cluster will delete all data and workload running on the cluster.
>

Using the following call, you can reset a cluster and specify a custom IP allocation policy:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube/{kubeId}/reset POST /cloud/project/{serviceName}/kube/{kubeId}/reset
>

To set a custom IP allocation policy on pods and/or services, you can use the following example:

```json
{
  "ipAllocationPolicy": {
    "podsIpv4Cidr": "172.16.0.0/12",
    "servicesIpv4Cidr": "10.100.0.0/16"
  }
}
```

Once the cluster is reset, use this call to verify the IP allocation policy:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube GET /cloud/project/{serviceName}/kube
>

## Go further

[Known limits of OVHcloud Managed Kubernetes](/pages/public_cloud/containers_orchestration/managed_kubernetes/known-limits)

[First steps with the OVHcloud API](/pages/manage_and_operate/api/first-steps)

If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts to assist you with your specific use case.

Join our [community of users](/links/community).
@@ -0,0 +1,129 @@
---
title: Customising IP allocation on an OVHcloud Managed Kubernetes cluster
excerpt: "Find out how to configure the IP allocation policy for your pods and services on an OVHcloud Managed Kubernetes cluster with the Standard plan"
updated: 2026-03-17
---

## Objective

**This guide details how to customise the IP ranges used for the pods and services in your OVHcloud Managed Kubernetes cluster with the Standard plan.**

## Requirements

- An [OVHcloud Managed Kubernetes](/links/public-cloud/kubernetes) cluster

## Limits

Customising the pod and service IP allocation policy is not possible on clusters with the Free plan.

You cannot modify the IP allocation policy of a running cluster. It must be set at cluster creation or when resetting the cluster, which erases all data.

## Configuration details

Two parameters control the IP allocation policy in your OVHcloud Managed Kubernetes cluster.

| Parameter | Default value | Function |
| ------------------ | --------------- | --------------------------------------------------- |
| `podsIpv4Cidr` | `10.240.0.0/13` | Subnet used to address the pods in the cluster |
| `servicesIpv4Cidr` | `10.3.0.0/16` | Subnet used to address the services in the cluster |

> [!primary]
>
> You can find more information about the CIDR notation here: [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)
>

Keep the following rules in mind for these parameters:

- `podsIpv4Cidr` and `servicesIpv4Cidr` **must not** collide with each other, nor with the OpenStack subnets on the same VLAN in your project.
- The subnets **must** be chosen in the [private network blocks](https://en.wikipedia.org/wiki/List_of_reserved_IP_addresses).
- The minimal size allowed for the `podsIpv4Cidr` and `servicesIpv4Cidr` subnets is `/16`.

Each node in the cluster is assigned a `/24` subnet inside `podsIpv4Cidr`; choosing a `/16` limits the cluster to 256 nodes.

> [!warning]
>
> Adding nodes past the limit of possible `/24` subnets in the `podsIpv4Cidr` could render your cluster unstable.
>

## Instructions using the OVHcloud API

> [!primary]
>
> You can find more information on how to use the OVHcloud API here: [First steps with the OVHcloud API](/pages/manage_and_operate/api/first-steps).
>

### Creating a new cluster with a custom IP allocation policy

Use the following call to create a new cluster:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube POST /cloud/project/{serviceName}/kube
>

To set a custom IP allocation policy on pods and/or services, you can use the following example:

```json
{
  "name": "my-cluster-with-custom-ip",
  "nodepool": {
    "desiredNodes": 3,
    "flavorName": "b3-8",
    "name": "my-nodepool"
  },
  "region": "GRA11",
  "ipAllocationPolicy": {
    "podsIpv4Cidr": "172.16.0.0/12",
    "servicesIpv4Cidr": "10.100.0.0/16"
  }
}
```

Once the cluster is created, use this call to verify the IP allocation policy:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube GET /cloud/project/{serviceName}/kube
>

### Resetting a cluster to change its IP allocations

> [!warning]
>
> Resetting a cluster will delete all data and workloads running on the cluster.
>

Use the following call to reset a cluster and specify a custom IP allocation policy:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube/{kubeId}/reset POST /cloud/project/{serviceName}/kube/{kubeId}/reset
>

To set a custom IP allocation policy on pods and/or services, you can use the following example:

```json
{
  "ipAllocationPolicy": {
    "podsIpv4Cidr": "172.16.0.0/12",
    "servicesIpv4Cidr": "10.100.0.0/16"
  }
}
```

Once the cluster is reset, use this call to verify the IP allocation policy:

> [!api]
>
> @api {v1} /cloud/project/{serviceName}/kube GET /cloud/project/{serviceName}/kube
>

## Go further

[Known limits of OVHcloud Managed Kubernetes](/pages/public_cloud/containers_orchestration/managed_kubernetes/known-limits)

[First steps with the OVHcloud API](/pages/manage_and_operate/api/first-steps)

If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts to assist you with your specific use case.

Join our [community of users](/links/community).
@@ -0,0 +1,2 @@
id: 9a51d31f-debc-42fe-9d5d-c6ed95121d5d
full_slug: public-cloud-kubernetes-configure-pods-services-ip-allocation-policy
@@ -1,7 +1,7 @@
---
title: Known limits
excerpt: 'Requirements and limits to respect'
updated: 2026-02-03
updated: 2026-03-17
---

<style>
@@ -16,7 +16,7 @@ updated: 2026-02-03
margin-bottom: 5px;
}
pre.console code {
b font-family: monospace !important;
font-family: monospace !important;
font-size: 0.75em;
color: #ccc;
}
Expand All @@ -34,7 +34,7 @@ updated: 2026-02-03

We have tested our OVHcloud Managed Kubernetes Service plans with a maximum number of nodes; while higher configurations might work and there are no hard limits, we recommend staying under these limits for optimal stability.

Keep in mind that the impact on the control plane isn't solely determined by the number of nodes. What truly defines a 'large cluster' depends on the combination of resources deployed (pods, custom resources, and other objects), which all contribute to control plane load. A cluster with fewer nodes but intensive resource utilization can stress the control plane more than a cluster with many nodes running minimal workloads. In such a configuration, it is recommended to switch to the Standard plan in order to benefit from higher, dedicated control plane resources.
Keep in mind that the impact on the control plane isn't solely determined by the number of nodes. What truly defines a 'large cluster' depends on the combination of resources deployed (pods, custom resources, and other objects), which all contribute to control plane load. A cluster with fewer nodes but intensive resource utilisation can stress the control plane more than a cluster with many nodes running minimal workloads. In such a configuration, it is recommended to switch to the Standard plan in order to benefit from higher, dedicated control plane resources.

While 110 pods per node is the default value defined by Kubernetes, please note that the OVHcloud teams deploy some management components on the nodes (CNI, agents, Konnectivity, etc.). These components are considered 'cluster mandatory' and reduce the per-node pod capacity available for user workloads. Because they are mandatory and consume a small amount of node resources, an overloaded node may leave some of your pods in state `Terminated` with `Reason: OOMKilled` and `Exit Code: 137`. It is therefore important to manage resources carefully for your workloads in order to avoid node overloading and instability.

@@ -160,7 +160,7 @@ To ensure proper operation of your OVHcloud Managed Kubernetes cluster, certain
| 80 (169.254.169.254/32) | TCP | Init service (OpenStack metadata) |
| 25000–31999 | TCP | TLS tunnel between pods and kubernetes API server |
| 8090 | TCP | Internal (OVHcloud) node management service |
| 123 | UDP | NTP servers synchronization (systemd-timesync) |
| 123 | UDP | NTP servers synchronisation (systemd-timesync) |
| 53 | TCP/UDP | Allow domain name resolution (systemd-resolve) |
| 111 | TCP | rpcbind (only if using NFS client) |
| 4443 | TCP | Metrics server communication |
@@ -266,16 +266,18 @@ To prevent network conflicts, it is recommended to **keep the DHCP service runni

#### Reserved IP ranges

The following ranges are used by the cluster, and should not be used elsewhere on the private network attached to the cluster:
By default, the following ranges are used by the cluster, and should not be used elsewhere on the private network attached to the cluster:

```bash
10.240.0.0/13 # Subnet used by pods
10.3.0.0/16 # Subnet used by services
```

However, these ranges can be customised either when creating a cluster or when resetting an existing one by following this guide: [Customising IP allocation on an OVHcloud Managed Kubernetes cluster (Standard plan only)](/pages/public_cloud/containers_orchestration/managed_kubernetes/configuring-pods-services-ip-allocation).

> [!warning]
>
> These ranges are fixed for now but will be configurable in a future release. Do not use them elsewhere in your private network.
> The subnet ranges cannot be modified on a running cluster without resetting it and losing all data.
>
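A planned private-network subnet can be checked against the default cluster ranges before use. A minimal sketch with Python's standard `ipaddress` module (the planned subnets are illustrative):

```python
import ipaddress

# Default ranges reserved by the cluster (see the code block above).
DEFAULT_CLUSTER_RANGES = [
    ipaddress.ip_network("10.240.0.0/13"),  # default pods subnet
    ipaddress.ip_network("10.3.0.0/16"),    # default services subnet
]

def collides_with_cluster(subnet: str) -> bool:
    """Return True if the planned subnet overlaps a default cluster range."""
    net = ipaddress.ip_network(subnet)
    return any(net.overlaps(r) for r in DEFAULT_CLUSTER_RANGES)

print(collides_with_cluster("10.1.0.0/24"))    # False: safe to use
print(collides_with_cluster("10.244.0.0/16"))  # True: inside 10.240.0.0/13
```

If your cluster uses customised ranges instead of the defaults, substitute those values in `DEFAULT_CLUSTER_RANGES`.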

## Cluster health