CNV-67916: hypershift, kubevirt, add enhancement for IP management #1855
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@qinqon: This pull request references CNV-67916 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.21.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/cc @maiqueb
/cc @orenc1
// Addresses specify IP ranges from which addresses will be allocated for this interface.
// Supports CIDR notation, hyphenated ranges, and single IPs.
// +required
Addresses []string `json:"addresses"`
This should probably use hostedcluster_types things like CIDRBlock or ipNet, or maybe define a new one that can be CEL validated if it is really necessary to have ranges and single IPs
Since this can be a CIDR, an IP range, or a single IP, I cannot use those types, so this has to be a string slice; I will add a CEL validation.
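A minimal sketch of what that CEL-validated string slice could look like (the type name, the marker placement, and the isCIDR/isIP/split CEL helpers available in recent Kubernetes API servers are assumptions about the eventual API, not the agreed definition):

```go
package api

// KubevirtInterfaceIPPool is a hypothetical container for the addresses field.
type KubevirtInterfaceIPPool struct {
	// Addresses specify IP ranges from which addresses will be allocated for this interface.
	// Supports CIDR notation, hyphenated ranges, and single IPs.
	// +kubebuilder:validation:MinItems=1
	// +kubebuilder:validation:items:XValidation:rule="isCIDR(self) || isIP(self) || (self.split('-').size() == 2 && isIP(self.split('-')[0]) && isIP(self.split('-')[1]))",message="each entry must be a CIDR, an IP range (a-b), or a single IP"
	// +required
	Addresses []string `json:"addresses"`
}
```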
**Management Cluster Components**:
- Cluster API Provider KubeVirt controller (KubevirtMachineTemplate controller): Enhanced with IP allocation logic
- HyperShift operator (NodePool controller): Updated to handle network configuration in NodePool spec
Won't it also be necessary to update the ignition generation for the netconfig?
The fact that we implement the "IP allocation logic" using virtual machine cloud-init network data is an implementation detail, so it's part of the Cluster API Provider KubeVirt controller changes.
- Adds complexity to the hypershift kubevirt controllers:
  - implementing and ip pool mechanism
  - Needs to parse and understand the openshift network config format
s/openshift/openstack/ ?
Right. I will also add that it should understand either openstack or netplan format, depending on the kind of networkData we want to use (either configdrive or nocloud).
- Symptom: New KubevirtMachine resources remain in Pending state
- Detection: Check KubevirtMachine events and status conditions
- Log output: "No available IPs in pool for interface X"
- Metric: `kubevirt_ippool_exhausted{cluster="X"}`
Is this going to be a status condition on the NodePool?
It should, I will add that
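A hedged sketch of how the NodePool controller could surface that as a condition (the helper, condition type, and reason names are illustrative; HyperShift's NodePool actually carries its own condition struct rather than metav1.Condition):

```go
package nodepool

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setIPPoolExhausted records pool exhaustion for a given interface on the
// NodePool status so operators can detect it without digging through logs.
func setIPPoolExhausted(conditions *[]metav1.Condition, observedGeneration int64, iface string) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:               "KubevirtIPPoolExhausted",
		Status:             metav1.ConditionTrue,
		Reason:             "NoAvailableIPs",
		Message:            "No available IPs in pool for interface " + iface,
		ObservedGeneration: observedGeneration,
	})
}
```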
maiqueb left a comment
Thank you.
- Enable static IP address assignment for HyperShift guest cluster nodes running on KubeVirt
- Provide a flexible API for defining IP address pools with support for various IP range formats (CIDR, hyphenated ranges, single IPs)
- Support both IPv4 and IPv6 addressing
So the kubevirt VMs, which are actually the hosted cluster nodes, can be ipv6 nodes?
Do we support single stack ipv6, or only dual-stack?
Just trying to figure out the test matrix we need to have later on.
For hypershift kubevirt we don't use the KubeVirt DHCP mechanism; we use OVN DHCPOptions instead. This allows us to support single stack ipv4/ipv6 and dual stack.
but isn't this about localnet secondary interfaces ?
AFAICT we do not have OVN DHCPOpts for localnet secondaries - only on the pod network (masquerade or primary UDN attachments).
Right, I mixed things up. Since this is static IP assignment and we don't depend on KubeVirt DHCP capabilities, we support single stack (both ipv4 and ipv6) and dual stack.
OK, that's aligned with my expectations.
OK, so the test matrix will be complex. Do we need to have anything in origin ?...
At origin we test dual stack
Migrating nodepools nodes from dynamic to static IPs is not expected, those nodes
will be reconstructed and new ip assigned.
I'm sorry, but I don't follow this.
I will reword this
- BootstrapNetworkConfig in KubevirtMachineSpec (cluster-api-provider-kubevirt)
- BootstrapNetworkConfig in nodepool's KubevirtPlatformSpec (hypershift)
I'm forced to say one of the most confusing things to me (I'm not HCP savvy) was to differentiate between these two CRDs.
When would I - as a hosted cluster "owner" - use one of these versus the other?
Which of these should I use and why ? Or do I need to use both ?
At hypershift you use the NodePool CRD, KubevirtMachine is internal to hypershift.
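A rough sketch of that layering, to make the split concrete: the hosted cluster owner only touches the NodePool's KubeVirt platform spec, and HyperShift translates it into the KubevirtMachineTemplate that cluster-api-provider-kubevirt consumes. All field shapes beyond the names quoted above are assumptions for illustration, not the final API.

```go
package api

// BootstrapNetworkConfig is a placeholder for the per-interface IP pools and
// routing settings described in this enhancement.
type BootstrapNetworkConfig struct {
	Interfaces []InterfaceConfig `json:"interfaces,omitempty"`
}

// InterfaceConfig is a hypothetical per-interface entry; only Addresses is
// taken from the quoted API snippet, the rest is illustrative.
type InterfaceConfig struct {
	Name      string   `json:"name"`
	Addresses []string `json:"addresses"`
}

// User-facing: set by the hosted cluster owner on the NodePool.
type KubevirtPlatformSpec struct {
	BootstrapNetworkConfig *BootstrapNetworkConfig `json:"bootstrapNetworkConfig,omitempty"`
}

// Internal: filled in by the HyperShift NodePool controller when it renders
// the KubevirtMachineTemplate; hosted cluster owners never edit this directly.
type KubevirtMachineSpec struct {
	BootstrapNetworkConfig *BootstrapNetworkConfig `json:"bootstrapNetworkConfig,omitempty"`
}
```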
1. **IP Pool Exhaustion**: Machine creation fails with clear error. Operator must expand pool or delete unused machines.
2. **Invalid Network Configuration**: HostedCluster creation fails validation. Operator must correct configuration.
3. **Network Config Application Failure**: Node fails to boot or is unreachable. Visible in VM console logs. Operator must verify network configuration correctness.
4. **IP Allocation Conflict**: Different nodepools using same network but with overlapping IPs should fail with clear error. Operator should fix the subnets.
why should this matter ? The way I see it, this would become an issue if the node pools are being used at the same time in the same node
Nodepools cannot be used for the same node, since they are defining those nodes; the issue is that one nodepool can contribute to the IP exhaustion of another if they have overlapping subnets.
What I don't follow is how one network can have multiple node pools.
Can you point me towards documentation about the node pool? What does it reflect, and how does it relate to the network?
So the nodepool's kubevirt-specific API has the additionalNetworks field to specify the NAD name. This enhancement adds the IPAM configuration at the same API level, and a hosted cluster can have multiple nodepools, which means multiple kubevirt nodepools can have conflicting IPAM configuration; that's what we want to detect.
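A minimal sketch of the per-network conflict detection this implies, assuming the pools have already been grouped by NAD name and normalized to CIDRs; the package and function are illustrative, not actual HyperShift code:

```go
package ipam

import "net/netip"

// anyOverlap reports whether any pair of CIDR-based pools on the same network
// overlaps. Hyphenated ranges and single IPs would first have to be normalized
// to prefixes (or compared as address ranges), which is omitted here.
func anyOverlap(pools []netip.Prefix) bool {
	for i := range pools {
		for j := i + 1; j < len(pools); j++ {
			if pools[i].Overlaps(pools[j]) {
				return true
			}
		}
	}
	return false
}
```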
- Resolution: Verify OpenStack network config format correctness, check gateway/DNS reachability

**IP Allocation Errors**:
- Symptom: Machine creation fails or succeeds with incorrect IP
is it realistic to include incorrect IP allocation in this metric ?
Asking because (in my own limited mental model) we could prevent this, instead of capturing it in a metric.
Couldn't we ?
Agreed, since we are already using CEL to ensure proper IPs are configured in the addresses; I will remove it.
- New nodes will use DHCP
- No impact on running workloads
not so fast - I'm not so sure this is this easy / there aren't impacts.
Let's say you're using static IPs, then you stop using it. You remove the NetworkConfig from the template spec.
- Your existing nodes with static IPs will continue to function.
- a new node will get its IP via DHCP
- what will prevent this new node from getting an IP which is already in use by the static IPs counter part ?
AFAIU, you don't have anything to map the static IPs from the node pool to the DHCP pool availability.
Even though I think we should not support this, it may work if NodePool is changed so bootstrap network config is removed and virtual machines are restarted.
jparrill left a comment
Things to have in mind:
- This implementation will touch CPO and HO at Hypershift level, so we need to limit the Hypershift API changes to only work on newest versions of CPO.
- Validations at multiple levels to avoid issues with the conflicting IPs.
- How will config errors fail at the reconciliation level? (block? fail and continue?) And at what level? (at the HO (for NodePools, this could short-circuit the reconciliation of other HCs) and at the CPO (for internal issues))
- How config errors will be communicated to the MGMT cluster administrator.
- Dropped some other questions in the review too
This proposal introduces static IPAM capabilities through modifications to three key components:

1. **Cluster API Provider KubeVirt**: Extend the API to support IP pool definitions and network configuration, implement IP allocation logic from defined pools
2. **HyperShift**: Add network configuration options to the KubeVirt platform specification at nodepool, implement translation from this network configuration to capk ip pool and openstack network config
Is this new API customizable day-0 or day-0 and day-2?
It's going to be immutable; I need to update the enhancement with that. If we suffer from IP exhaustion, the customer can create a new nodepool with a bigger subnet.
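A hedged sketch of how that immutability could be expressed with a CEL transition rule on the new field, reusing the hypothetical BootstrapNetworkConfig type from the earlier sketch; the marker wording is an assumption, not the agreed validation:

```go
package api

type KubevirtPlatformSpec struct {
	// BootstrapNetworkConfig is immutable after creation; to recover from IP
	// exhaustion, create a new NodePool with a bigger, non-overlapping subnet.
	// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="bootstrapNetworkConfig is immutable"
	// +optional
	BootstrapNetworkConfig *BootstrapNetworkConfig `json:"bootstrapNetworkConfig,omitempty"`
}
```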
This proposal introduces static IPAM capabilities through modifications to three key components:

1. **Cluster API Provider KubeVirt**: Extend the API to support IP pool definitions and network configuration, implement IP allocation logic from defined pools
What about the overlapping? Is that allowed? (IIRC this is relevant at machineNetwork level):
- Between different NPs
- Between MGMT and HC
- Between different HCs
What about the overlapping? Is that allowed? (IIRC this is relevant at machineNetwork level):
- Between different NPs
Overlapping is checked per network, so NodePools on the same network are not allowed to overlap, but they are allowed to if they are on different networks.
- Between MGMT and HC
We don't check that; it's up to the customer, when they decide the IPAM config at the nodepool, to not collide with their infra.
- Between different HCs
That's up to the customer; in case they want to share a network between HCs, they should make sure not to collide. From what we have gathered, that is good enough.
1. **Cluster API Provider KubeVirt**: Extend the API to support IP pool definitions and network configuration, implement IP allocation logic from defined pools
2. **HyperShift**: Add network configuration options to the KubeVirt platform specification at nodepool, implement translation from this network configuration to capk ip pool and openstack network config
3. **CoreOS Afterburn**: Enable parsing and application of OpenStack or netplan network config standard data as dracut network kernel args from cloud-init (config drive or nocloud) for KubeVirt VMs, this is similar what is done at [proxmoxve provider](https://github.com/coreos/afterburn/pull/1023).
Could this have issues with conflicting NADs already deployed in the field?
This is going to configure virtual machine networking statically, so possible NADs affecting the primary interface will be ignored.
#### Failure Handling

- **IP Pool Exhaustion**: If all IPs in the pool are allocated, machine creation will fail with a clear error message indicating pool exhaustion at the NodePool CRD status with a condition like `KubevirtIPPoolExhausted=true`
- **Invalid Network Configuration**: The API will validate network configuration format during HostedCluster creation, rejecting invalid configurations, using CEL is a perfect fit to implement this.
We need to implement this at 2 levels:
- CEL first as a api-machinery protection layer
- NetworkValidations at Hypershift level, in order to avoid conflicts regarding internal rules
| "id": "network0", | ||
| "type": "ipv4", | ||
| "link": "eno101", | ||
| "ip_address": "192.168.1.100", <-- this is the part allocated by "cluster api provider kubevirt controller" |
IIUC This should be validated in 3 phases:
- CEL
- Hypershift Networking validations
- Openstack provider validation (to avoid conflicting allocations)
IIUC This should be validated in 3 phases:
- CEL
- Hypershift Networking validations
The ip address is generated by cluster-api-provider-kubevirt so we don't need to validate it.
- Openstack provider validation (to avoid conflicting allocations)
This is already merged in fcos and will be part of rhcos.
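A small sketch of how the capk controller could render the allocated address into the OpenStack network_data it hands to the VM via cloud-init (configdrive or nocloud). The struct coverage is limited to the fields quoted above and the function is illustrative, not the actual controller code:

```go
package netdata

import "encoding/json"

// networkData mirrors the subset of the OpenStack network_data.json format
// quoted in the enhancement; field coverage is illustrative, not exhaustive.
type networkData struct {
	Networks []network `json:"networks"`
}

type network struct {
	ID        string `json:"id"`
	Type      string `json:"type"`
	Link      string `json:"link"`
	IPAddress string `json:"ip_address"`
}

// renderNetworkData injects the IP allocated from the pool into the
// network_data document embedded in the VM's cloud-init volume.
func renderNetworkData(link, allocatedIP string) ([]byte, error) {
	return json.MarshalIndent(networkData{
		Networks: []network{{
			ID:        "network0",
			Type:      "ipv4",
			Link:      link,
			IPAddress: allocatedIP,
		}},
	}, "", "  ")
}
```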
- It will be supported for ovn-kubernetes team, not only hypershift kubevirt maintainers.

Cons:
- No clear solution for implementing IPv6 RAs since there is no logical router.
Maybe an extra component like RaDVD deployed only if the Kubevirt cluster is IPv6 based could help. I don't know if Konnectivity handles the RAs properly through the tunnel to guest side.
The point is just that there is not a clear solution for that problem; we don't need to find which one.
**Performance Testing**:
- IP allocation performance with large pools (1000+ addresses)
- Scale testing with 100+ node clusters
Do you have any hint about a disaster recovery scenario? (EG: Conflicting IP entries among diff HCs on restoration)
We are going to re-fill the in-memory allocator cache on capi pod restart, so we ensure that we calculate conflicts correctly.
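A minimal sketch of that rebuild step; the allocator shape and the way already-assigned addresses are collected from existing KubevirtMachines are assumptions for illustration:

```go
package ipam

import "net/netip"

// allocator is a hypothetical in-memory view of which addresses are in use.
type allocator struct {
	inUse map[netip.Addr]bool
}

// rebuild repopulates the cache from addresses already recorded on existing
// KubevirtMachine resources (listed by the caller), so that after a capk pod
// restart no already-assigned IP can be handed out again.
func (a *allocator) rebuild(existingMachineIPs []string) {
	a.inUse = make(map[netip.Addr]bool, len(existingMachineIPs))
	for _, s := range existingMachineIPs {
		if addr, err := netip.ParseAddr(s); err == nil {
			a.inUse[addr] = true
		}
	}
}
```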
Having a choice of setting a static network subnet for each nodePool is a great feature to have! I hope that we also make sure that those subnets don't overlap between multiple nodePools (within one HCP), and I believe that this was taken into consideration already.
@AmrGanz is having the IPAM configuration immutable ok for you? If you suffer from not enough IPs you can always create another nodepool with a new range not colliding with the previous one.
@qinqon If we can allow increasing the range of the already configured subnet that would be great, but this will again require checking the new range against the used ones in other nodePools.
@qinqon: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
This enhancement proposes adding static IP address management (IPAM) capabilities to HyperShift when using the KubeVirt provider and the KubeVirt nodepool is not attached to the default pod network, but instead uses a multus layer2 network to replace it.
The implementation enables operators to define IP pools and network configurations that are automatically allocated to virtual machines during cluster provisioning. This functionality addresses the need for predictable, static network configuration in environments where DHCP is not available or desirable, particularly in on-premises and edge deployments.
https://issues.redhat.com/browse/CNV-67916