* **Use minimum-sized (64-byte) packets.** Q: Would a packet-size test make sense? A: Since we are already saturating the PCI bus, it isn't needed; packet size doesn't matter here.

```
packet_len=64
start -f stl/udp_for_benchmarks.py -t packet_len=64 --port 0 -m 100%
```

* **Disable hyper-threading** to control how many CPUs are online (https://www.golinuxhub.com/2018/05/how-to-disable-or-enable-hyper/):

```
echo 0 > /sys/devices/system/cpu/<cpu_id>/online
```
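
A sketch for taking every hyper-thread sibling offline while keeping the first thread of each physical core; it assumes the usual sysfs topology files:

```sh
# Take HT siblings offline: keep only the first CPU listed in each
# core's thread_siblings_list (handles "0,4" and "0-1" style formats)
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    first=$(cut -d, -f1 "$cpu/topology/thread_siblings_list" | cut -d- -f1)
    if [ "$cpu" != "/sys/devices/system/cpu/cpu$first" ]; then
        echo 0 | sudo tee "$cpu/online" > /dev/null
    fi
done
```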

* **Configure hardware Receive Side Scaling (RSS).** The queue-to-CPU map is changed by writing a CPU bitmask to `/sys/class/net/<dev>/queues/rx-<n>/rps_cpus` (strictly speaking this file controls software RPS; the hardware queue count is set with `ethtool -L` below). For example, to make the queue use the first 3 CPUs of an 8-CPU system, construct the bitmask 00000111 = 0x7 and write it:

```
# echo 7 > /sys/class/net/eth0/queues/rx-0/rps_cpus
```

https://garycplin.blogspot.com/2017/06/linux-network-scaling-receives-packets.html

```
sudo ethtool -L DEVNAME combined N
```

This controls how many queues (and therefore how many CPUs) process packets for a given interface.
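
To derive the bitmask above for the first N CPUs in general, a quick sketch:

```sh
# Mask with the lowest N bits set, e.g. N=3 -> 7 (CPUs 0-2)
N=3
printf '%x\n' $(( (1 << N) - 1 ))
```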

* **Ethernet flow control** (pause frames; see the sketch below)
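
Pause frames can throttle the sender and distort drop-rate measurements. Assuming the NIC/driver supports it, flow control can be inspected and disabled with ethtool (interface name illustrative):

```sh
# Show current pause-frame settings, then turn RX/TX flow control off
sudo ethtool -a eth1
sudo ethtool -A eth1 rx off tx off
```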

* **Receive queue size** (see the sketch below)
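
The RX ring size can be read and, if the driver allows it, enlarged with ethtool (the 4096 value and interface name are illustrative):

```sh
# Show ring sizes and their hardware maximums, then grow the RX ring
sudo ethtool -g eth1
sudo ethtool -G eth1 rx 4096
```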

## Building

The XDP paper uses the kernel BPF samples from the kernel source tree to run its tests.

The XDP Project uses libbpf and custom programs: https://nakryiko.com/posts/libbpf-bootstrap/

We use the kernel sample programs:

https://elixir.bootlin.com/linux/v5.15/source/samples/bpf/README.rst
https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/#24478279

**XDP native not supported**

```
vagrant@xdp-DUT:~/linux-5.15.0/samples/bpf$ sudo ./xdp1 eth0
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
link set xdp fd failed
```

https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md#xdp
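
Since the VM's e1000 NIC (below) lacks native XDP support, the program has to be attached in generic (skb) mode instead. Two ways to do that, sketched; the `-S` flag is the skb-mode switch in the v5.15 xdp1 sample, and the object/section names in the iproute2 variant are illustrative:

```sh
# Run the xdp1 sample in generic/skb mode instead of native mode
sudo ./xdp1 -S eth0

# Or attach an XDP object in generic mode via iproute2
# (the section name must match the one used in the object file)
sudo ip link set dev eth0 xdpgeneric obj xdp1_kern.o sec xdp
```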

```
vagrant@xdp-DUT:~/linux-5.15.0$ sudo lspci -v | grep -A9 'Ethernet'
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)
	Subsystem: Intel Corporation PRO/1000 MT Desktop Adapter
	Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 19
	Memory at f0000000 (32-bit, non-prefetchable) [size=128K]
	I/O ports at d010 [size=8]
	Capabilities: [dc] Power Management version 2
	Capabilities: [e4] PCI-X non-bridge device
	Kernel driver in use: e1000
	Kernel modules: e1000
```

## Network settings

```
vagrant@xdp-DUT:~/linux-5.15.0$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 eth0
10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0
_gateway        0.0.0.0         255.255.255.255 UH    100    0        0 eth0
10.0.2.3        0.0.0.0         255.255.255.255 UH    100    0        0 eth0
192.168.56.0    0.0.0.0         255.255.255.0   U     0      0        0 eth3
192.168.253.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.254.0   0.0.0.0         255.255.255.0   U     0      0        0 eth2
```

## DROP Test

```
+-----------------------------+                          +-----------------------------+
| Root namespace              |                          | Testenv namespace 'test01'  |
|                             |      From 'test01'       |                             |
|                    +--------+ TX->                RX-> +--------+                    |
|                    | test01 +--------------------------+ veth0  |                    |
|                    +--------+ <-RX                <-TX +--------+                    |
|                             |      From 'veth0'        |                             |
+-----------------------------+                          +-----------------------------+
```
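
The xdp-tutorial's testenv script builds this topology automatically; a minimal manual equivalent (sketch, interface/namespace names as in the diagram):

```sh
# Create the namespace and a veth pair with one end moved inside it
sudo ip netns add test01
sudo ip link add test01 type veth peer name veth0 netns test01
sudo ip link set dev test01 up
sudo ip netns exec test01 ip link set dev veth0 up
```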

DUT output while the traffic generator pings it (protocol 58 is ICMPv6):

```
vagrant@xdp-DUT:~/linux-5.15.0/samples/bpf$ sudo ./xdp1 xdptut-6937
proto 58:          0 pkt/s
proto 58:          1 pkt/s
proto 58:          1 pkt/s
proto 58:          1 pkt/s
```

Linux baseline (raw table): `iptables -t raw -j DROP` (see the rule sketch below)
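
A concrete form of that rule, dropping everything arriving on the test interface in the raw table's PREROUTING chain, i.e. before conntrack (interface name illustrative):

```sh
sudo iptables -t raw -I PREROUTING -i eth1 -j DROP
```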

**Test parameter exceeds line rate**

```
trex>start -f stl/udp_for_benchmarks.py --port 0 -m 100mpps -t packet_len=64,stream_count=1

Removing all streams from port(s) [0._]:                     [SUCCESS]

Attaching 1 streams to port(s) [0._]:                        [SUCCESS]

Starting traffic on port(s) [0._]:                           [FAILED]

start - Port 0 : *** Expected L1 B/W: '67.2 Gbps' exceeds port line rate: '1 Gbps'

trex>start -f stl/udp_for_benchmarks.py --port 0 -m 10mpps -t packet_len=64,stream_count=1

Removing all streams from port(s) [0._]:                     [SUCCESS]

Attaching 1 streams to port(s) [0._]:                        [SUCCESS]

Starting traffic on port(s) [0._]:                           [FAILED]

start - Port 0 : *** Expected L1 B/W: '6.72 Gbps' exceeds port line rate: '1 Gbps'
```
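
These rejections follow from the L1 (on-the-wire) cost of a 64-byte frame: 64 B of frame plus 20 B of preamble and inter-frame gap, i.e. (64 + 20) × 8 = 672 bits per packet. So 100 Mpps × 672 bit = 67.2 Gbps and 10 Mpps × 672 bit = 6.72 Gbps, both far above the 1 Gbps port, whose theoretical maximum for 64-byte frames is 1 Gbps / 672 bit ≈ 1.488 Mpps. The 1 Mpps rate used next fits under that ceiling.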

**DROP Test - 1 Mpps**

```
trex>start -f stl/udp_for_benchmarks.py --port 0 -m 1mpps -t packet_len=64,stream_count=1

Removing all streams from port(s) [0._]:                     [SUCCESS]

Attaching 1 streams to port(s) [0._]:                        [SUCCESS]

Starting traffic on port(s) [0._]:                           [SUCCESS]

22.79 [ms]

trex>streams
Port 0:

    ID     |      name       |     profile     |     packet type     |  length  |       mode       |      rate       |    PG ID     |     next
-----------+-----------------+-----------------+---------------------+----------+------------------+-----------------+--------------+-------------
    4      |        -        |        _        | Ethernet:IP:UDP:Raw |       64 |    Continuous    |      1 pps      |      -       |      -
```

## DROP Test

```
# TRex server
sudo ./t-rex-64 -i  # start t-rex server in stateless mode

# TRex console
./trex-console
trex> start -f stl/udp_for_benchmarks.py --port 0 -m 1mpps -t packet_len=64,stream_count=1
trex>streams
Port 0:

    ID     |      name       |     profile     |     packet type     |  length  |       mode       |      rate       |    PG ID     |     next
-----------+-----------------+-----------------+---------------------+----------+------------------+-----------------+--------------+-------------
    5      |        -        |        _        | Ethernet:IP:UDP:Raw |       64 |    Continuous    |      1 pps      |      -       |      -

trex>stop

Stopping traffic on port(s) [0._]:                           [SUCCESS]

6.29 [ms]

trex>clear

Clearing stats :                                             [SUCCESS]

7.23 [ms]

trex>stats
```


```
# XDP DUT
Running XDP on dev:eth1 (ifindex:3) action:XDP_DROP options:no_touch
XDP stats       CPU     pps         issue-pps
XDP-RX CPU      0       81681       0
XDP-RX CPU      total   81681

RXQ stats       RXQ:CPU pps         issue-pps
rx_queue_index    0:0   81681       0
rx_queue_index    0:sum 81681
```
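
This output appears to come from the kernel's samples/bpf/xdp_rxq_info tool; assuming so, the invocation would have been along these lines (flags as in the v5.15 sample):

```sh
# Attach an XDP_DROP program and report per-CPU / per-RXQ packet rates
sudo ./xdp_rxq_info --dev eth1 --action XDP_DROP
```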

## Testing

Q: Where can I see the ENA device stats?

A: `ethtool -S DEVNAME`
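
For example, to watch the non-zero counters update live (interface name illustrative):

```sh
# Poll the NIC statistics once a second, hiding counters stuck at zero
watch -n1 "ethtool -S eth1 | grep -v ': 0$'"
```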