Conversation

@Jaganathancse

This change adds a new nodeset CR for networker OVS DPDK.

@openshift-ci openshift-ci bot requested review from olliewalsh and slagle January 28, 2025 10:28
@Jaganathancse Jaganathancse force-pushed the networker-dpdk branch 2 times, most recently from 1b70b33 to 6de2309 on January 28, 2025 10:40
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/69056a251ac840629e73619e1bc58c24

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 10m 19s
❌ podified-multinode-edpm-deployment-crc FAILURE in 1h 18m 46s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 1h 24m 23s
❌ openstack-operator-tempest-multinode FAILURE in 1h 56m 20s


@christophefontaine christophefontaine left a comment


The global diff should be similar to a diff between a classic compute node nic-config and a DPDK-enabled compute node nic-config.

Comment on lines 28 to 53
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  mtu: {{ min_viable_mtu }}
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
  routes: {{ ctlplane_host_routes }}
  members:
  - type: interface
    name: nic1
    mtu: {{ min_viable_mtu }}
    # force the MAC address of the bridge to this interface
    primary: true
{% for network in nodeset_networks if network not in ["external", "tenant"] %}
- type: vlan
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
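The `for ... if` filter in the loop above attaches a VLAN for every nodeset network except external and tenant; the same selection can be sketched in plain Python (the network names below are illustrative, not taken from this PR):

```python
# Mirrors the Jinja filter:
#   {% for network in nodeset_networks if network not in ["external", "tenant"] %}
nodeset_networks = ["ctlplane", "internalapi", "storage", "tenant", "external"]

# Networks that get a VLAN on the bridge; external and tenant are excluded.
vlan_networks = [n for n in nodeset_networks if n not in ("external", "tenant")]
print(vlan_networks)  # ['ctlplane', 'internalapi', 'storage']
```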


We must not mix ovs_bridge and ovs_user_bridge in a single deployment.
Here, the control plane should sit on top of a Linux interface or a Linux bond:

- type: interface
  name: nic1
  mtu: {{ ctlplane_mtu }}
  use_dhcp: false
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
  routes: {{ ctlplane_host_routes }}
- type: linux_bond
  name: bond_api
  mtu: {{ min_viable_mtu }}
  bonding_options: {{ edpm_bond_interface_ovs_options }}
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu }}

Then you would add additional VLANs (if required) for the other OSP networks: {% if network not in ["external", "tenant"] %}
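Under the reviewer's constraint, the DPDK ports would then sit on a separate ovs_user_bridge for the data plane. A minimal sketch (the bridge, port, and NIC names here are illustrative placeholders, not taken from this PR):

```yaml
# Hypothetical DPDK data-plane bridge; names (br-tenant, dpdkbond0,
# dpdk0/dpdk1, nic4/nic5) are placeholders for illustration only.
- type: ovs_user_bridge
  name: br-tenant
  mtu: 9000
  use_dhcp: false
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    mtu: 9000
    rx_queue: 4
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic4
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic5
```

Keeping the control plane on the linux_bond and the DPDK bond on its own user bridge avoids mixing ovs_bridge and ovs_user_bridge in one deployment.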

Author


We must not mix ovs_bridge and ovs_user_bridge in a single deployment. Here, the control plane should sit on top of a Linux interface or a Linux bond:

- type: interface
  name: nic1
  mtu: {{ ctlplane_mtu }}
  use_dhcp: false
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
  routes: {{ ctlplane_host_routes }}
- type: linux_bond
  name: bond_api
  mtu: {{ min_viable_mtu }}
  bonding_options: {{ edpm_bond_interface_ovs_options }}
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  members:
  - type: interface
    name: nic2
    mtu: {{ min_viable_mtu }}
    primary: true
  - type: interface
    name: nic3
    mtu: {{ min_viable_mtu }}

Then you would add additional VLANs (if required) for the other OSP networks: {% if network not in ["external", "tenant"] %}

done

@openshift-ci
Contributor

openshift-ci bot commented Jan 29, 2025

@christophefontaine: changing LGTM is restricted to collaborators

In response to this:

The global diff should be similar to a diff between a classic compute node nic-config vs a dpdk-enabled compute node nic-config.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@Jaganathancse Jaganathancse force-pushed the networker-dpdk branch 2 times, most recently from bcf917d to a6d3b76 on February 6, 2025 11:11
@openshift-ci
Contributor

openshift-ci bot commented Mar 6, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Jaganathancse
Once this PR has been reviewed and has the lgtm label, please assign lewisdenny for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Jaganathancse Jaganathancse force-pushed the networker-dpdk branch 2 times, most recently from e05774e to 384a0c8 on March 6, 2025 09:37
@Jaganathancse
Author

The global diff should be similar to a diff between a classic compute node nic-config vs a dpdk-enabled compute node nic-config.

done


@christophefontaine christophefontaine left a comment


Quick review; looks good to me.

- type: ovs_dpdk_bond
  name: dpdkbond0
  mtu: 9000
  rx_queue: 4


If my calculation is correct, we have 24 queues and 34 PMD threads for OVS, right?
If we hit any performance limit (and I doubt that we will with 10 Gb interfaces), keep in mind that we can still increase the number of queues to spread the load.
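As a rough sanity check, the queue/PMD relationship can be sketched numerically. The port count and PMD-thread count below are assumptions echoing the review discussion, not values read from this nodeset CR:

```python
# Rough sizing check for OVS-DPDK rx queues vs PMD threads.
# Assumption (illustrative): each DPDK port gets `rx_queue` queues,
# and OVS distributes those queues across the PMD threads.

def total_rx_queues(num_ports: int, rx_queue_per_port: int) -> int:
    """Total rx queues OVS will poll across all DPDK ports."""
    return num_ports * rx_queue_per_port

def queues_per_pmd(total_queues: int, pmd_threads: int) -> float:
    """Average number of queues each PMD thread must service."""
    return total_queues / pmd_threads

# Hypothetical figures: 6 DPDK ports x rx_queue=4 -> 24 queues, 34 PMDs.
queues = total_rx_queues(num_ports=6, rx_queue_per_port=4)
load = queues_per_pmd(queues, pmd_threads=34)
print(queues, round(load, 2))  # 24 0.71
```

With more PMD threads than queues, each thread polls fewer than one queue on average, so raising `rx_queue` would spread the load further before the PMDs saturate.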

Author


If my calculation is correct, we have 24 queues and 34 PMD threads for OVS, right? If we hit any performance limit (and I doubt that we will with 10 Gb interfaces), keep in mind that we can still increase the number of queues to spread the load.

ack

@Jaganathancse Jaganathancse force-pushed the networker-dpdk branch 2 times, most recently from 31cb949 to 7176a78 on March 6, 2025 10:50
@Jaganathancse
Author

recheck


@christophefontaine christophefontaine left a comment


ack. lgtm.

@Jaganathancse
Author

recheck

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/f68fc3ea1ef646aa9f43a384300f2c42

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 07m 16s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 12m 08s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 1h 35m 36s
❌ openstack-operator-tempest-multinode FAILURE in 1h 50m 47s

@Jaganathancse
Author

recheck

1 similar comment
@Jaganathancse
Author

recheck

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/3dd23c7785de41a793e818791f73ad55

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 43m 54s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 13m 40s
❌ cifmw-crc-podified-edpm-baremetal RETRY_LIMIT in 13m 39s
✔️ openstack-operator-tempest-multinode SUCCESS in 1h 28m 13s

This change adds a new nodeset CR for networker OVS DPDK.
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/48d08eb74349485a96fc41d274c4a952

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 49m 05s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 09m 35s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 1h 32m 33s
✔️ openstack-operator-tempest-multinode SUCCESS in 1h 29m 06s
❌ openstack-operator-kuttl FAILURE in 28m 42s (non-voting)

@openshift-ci
Contributor

openshift-ci bot commented Aug 11, 2025

@Jaganathancse: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/openstack-operator-build-deploy-kuttl 3561d1b link true /test openstack-operator-build-deploy-kuttl

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
