Performance
We use Cisco TRex to test Tempesta xFW performance.
At the moment we use two machines: a Generator producing traffic with TRex and a SUT (system under test) running Tempesta xFW, with the following specifications:
- SUT: Dell R650 with 2 x Gold 6348, 256 GB RAM, 1 x 256 GB SSD, 4 x 375 GB NVMe.
- Generator: Dell R750 with 2 x Gold 6348, 64 GB RAM, 2 x 960 GB SSD.
- Both machines have a Dell F6FXM (0F6FXM) Mellanox CX623106A ConnectX-6 Dx EN dual-port 100 Gbit Ethernet card.
You can find more details in our repository.
Tempesta xFW is configured in the dual-interface host mode (xfw.json):
{
"devices": "enp202s0f0np0 enp202s0f1np1",
"devices-mode": "native",
"verbose": true,
"sysctl-tcp-max-syn-backlog": 4096,
"sysctl-tcp-syncookies": 1
}
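A quick way to sanity-check edits to xfw.json is to load it with an ordinary JSON parser before starting xFW. A minimal sketch (field names are exactly those from the config above; nothing else is assumed):

```python
import json

# The xfw.json contents shown above, verbatim.
XFW_CONFIG = """
{
    "devices": "enp202s0f0np0 enp202s0f1np1",
    "devices-mode": "native",
    "verbose": true,
    "sysctl-tcp-max-syn-backlog": 4096,
    "sysctl-tcp-syncookies": 1
}
"""

cfg = json.loads(XFW_CONFIG)       # raises ValueError on malformed JSON
devices = cfg["devices"].split()   # two interfaces in dual-interface mode
assert len(devices) == 2
print(devices)                     # ['enp202s0f0np0', 'enp202s0f1np1']
```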
Ubuntu 24.04.3 LTS was used during the tests.
Although the SUT has two Intel Gold 6348 CPUs, only one of them is used in the tests, so the performance numbers below are for a single Intel Gold 6348.
ICMPv6 Flood
Use the Tempesta xFW configuration with a whitelist of 150K source IPs.
TRex call (use icmpv6_fix_cs.py):
./t-rex-64 -i -c 28
./trex-console
trex>start -f icmpv6_fix_cs.py -m 122000000
Results:
-Per port stats table
ports | 0 | 2
-----------------------------------------------------------------------------------------
opackets | 1704026304 | 1700013447
obytes | 139730156928 | 139401102654
ipackets | 8689 | 5
ibytes | 817218 | 922
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 64.66 Gbps | 64.32 Gbps
-Global stats enabled
Cpu Utilization : 67.4 % 8.3 Gb/core
Platform_factor : 1.0
Total-Tx : 128.97 Gbps
Total-Rx : 405.98 Kbps
Total-PPS : 196.61 Mpps
Total-CPS : 0.00 cps
Expected-PPS : 0.00 pps
Expected-CPS : 0.00 cps
Expected-BPS : 0.00 bps
Active-flows : 0 Clients : 0 Socket-util : 0.0000 %
Open-flows : 0 Servers : 0 Socket : 0 Socket/Clients : -nan
Total_queue_full : 376921385
drop-rate : 128.97 Gbps
current time : 81.5 sec
test duration : 0.0 sec
- Workload is 129 Gbps and 196 Mpps.
- No ierrors or oerrors.
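The headline numbers can be cross-checked from the per-port counters: the average frame size is obytes/opackets, and the bandwidth is PPS x frame size x 8. A quick check using the port 0 counters from the run above:

```python
# Port 0 counters from the ICMPv6 flood run above
opackets = 1_704_026_304
obytes = 139_730_156_928
total_pps = 196.61e6            # Total-PPS from the global stats

frame = obytes / opackets       # average frame size on the wire
gbps = total_pps * frame * 8 / 1e9

print(f"{frame:.0f} B/frame, {gbps:.1f} Gbps")   # 82 B/frame, 129.0 Gbps
```

This matches the reported Total-Tx of 128.97 Gbps.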
TCP/UDP Flood
Use the Tempesta xFW configuration with a whitelist of 150K source IPs.
A UDP flood with 1514-byte packets and various types of TCP flood traffic (54-byte packets):
- ACK flood
- FIN flood
- NULL flood
- RST flood
- SYN flood
- SYN-ACK flood
- URG flood
- XMAS flood (FIN-PUSH-URG)
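For reference, these flood names correspond to the following TCP flag combinations. The mapping is standard TCP semantics (flag letters in the usual tcpdump/Scapy notation), not anything Tempesta-specific:

```python
# TCP flag combinations for each flood type (tcpdump/Scapy-style letters)
FLOOD_FLAGS = {
    "ACK":     "A",
    "FIN":     "F",
    "NULL":    "",      # no flags set at all
    "RST":     "R",
    "SYN":     "S",
    "SYN-ACK": "SA",
    "URG":     "U",
    "XMAS":    "FPU",   # FIN + PUSH + URG
}

for name, flags in FLOOD_FLAGS.items():
    print(f"{name:8s} -> {flags or '(none)'}")
```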
TRex call (use tcpudp.yaml):
./t-rex-64 -f tcpudp4.yaml -m 650 -c 23
Results:
-Per port stats table
ports | 0 | 2
-----------------------------------------------------------------------------------------
opackets | 1148508744 | 1148933658
obytes | 427244952632 | 427403180584
ipackets | 2 | 2
ibytes | 338 | 338
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 88.32 Gbps | 87.30 Gbps
-Global stats enabled
Cpu Utilization : 83.6 % 9.1 Gb/core
Platform_factor : 1.0
Total-Tx : 175.62 Gbps
Total-Rx : 0.00 bps
Total-PPS : 59.01 Mpps
Total-CPS : 0.00 cps
Expected-PPS : 52.00 Mpps
Expected-CPS : 52.00 Mcps
Expected-BPS : 154.75 Gbps
Active-flows : 32000 Clients : 65535 Socket-util : 0.0008 %
Open-flows : 32000 Servers : 499 Socket : 32000 Socket/Clients : 0.5
Total_queue_full : 1081051726
drop-rate : 175.62 Gbps
current time : 40.8 sec
test duration : 3559.2 sec
- Workload is 176 Gbps and 59 Mpps.
- No ierrors or oerrors.
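The mix weights in the TRex profile are not shown here, but the average frame size from the counters lets us back out the approximate UDP share of the traffic. A sketch (the resulting share is an inference from the counters above, not a figure taken from the profile):

```python
# Port 0 counters from the TCP/UDP flood run above
opackets = 1_148_508_744
obytes = 427_244_952_632
tcp_len, udp_len = 54, 1514   # frame sizes stated for this test

avg = obytes / opackets       # average frame size, ~372 bytes
# avg = udp_share*udp_len + (1 - udp_share)*tcp_len  ->  solve for udp_share
udp_share = (avg - tcp_len) / (udp_len - tcp_len)
print(f"avg {avg:.0f} B, UDP share ~{udp_share:.0%}")   # avg 372 B, ~22%
```

This is also why the bit rate (176 Gbps) is much higher than in the ICMPv6 test while the packet rate (59 Mpps) is lower: the large UDP frames dominate the byte count.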
TCP SYN Flood
Host mode (syncookies)
Use the Tempesta xFW configuration with basic TCP SYN cookies (1-second passive and flood timers, i.e. only about half of the traffic is expected in RX).
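For background, the classic SYN-cookie scheme (as in the Linux stack; Tempesta's exact encoding may differ) keeps no per-connection state on SYN: the server encodes a coarse timestamp and a keyed hash of the connection 4-tuple into its initial sequence number, and validates the echoed value in the final ACK. A minimal sketch, with an illustrative key and field layout (real implementations also encode the client's MSS):

```python
import hashlib
import hmac
import time

SECRET = b"per-boot secret key"   # hypothetical key; real stacks rotate secrets

def syn_cookie(saddr, sport, daddr, dport, t=None):
    """Encode a 32-bit ISN: 8 bits of coarse time + 24 bits of keyed hash."""
    t = (t if t is not None else int(time.time()) >> 6) & 0xFF  # 64 s slots
    msg = f"{saddr}:{sport}-{daddr}:{dport}-{t}".encode()
    h = int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:3], "big")
    return (t << 24) | h

def check_cookie(cookie, saddr, sport, daddr, dport):
    """Validate an echoed cookie, accepting the current or previous time slot."""
    t = (cookie >> 24) & 0xFF
    now = (int(time.time()) >> 6) & 0xFF
    if (now - t) & 0xFF > 1:          # too old: outside the accepted window
        return False
    return cookie == syn_cookie(saddr, sport, daddr, dport, t)

c = syn_cookie("16.0.0.1", 1025, "48.0.0.1", 80)
assert check_cookie(c, "16.0.0.1", 1025, "48.0.0.1", 80)       # valid echo
assert not check_cookie(c ^ 1, "16.0.0.1", 1025, "48.0.0.1", 80)  # corrupted
```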
./t-rex-64 -f syn_flood.yaml -m 200000 -c 23 -d 60
Results:
-Per port stats table
ports | 0 | 2
-----------------------------------------------------------------------------------------
opackets | 2306853814 | 2307351312
obytes | 179934597492 | 179973402336
ipackets | 799734937 | 797611975
ibytes | 62379325268 | 62213734232
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 46.58 Gbps | 46.67 Gbps
-Global stats enabled
Cpu Utilization : 97.7 % 4.1 Gb/core
Platform_factor : 1.0
Total-Tx : 93.25 Gbps
Total-Rx : 32.12 Gbps
Total-PPS : 149.45 Mpps
Total-CPS : 0.00 cps
Expected-PPS : 100.00 Mpps
Expected-CPS : 100.00 Mcps
Expected-BPS : 62.40 Gbps
Active-flows : 100000 Clients : 65535 Socket-util : 0.0024 %
Open-flows : 100000 Servers : 50 Socket : 100000 Socket/Clients : 1.5
Total_queue_full : 495155776
drop-rate : 61.13 Gbps
current time : 32.8 sec
test duration : 27.2 sec
- Workload is 93 Gbps and 149 Mpps.
- No ierrors or oerrors.
Gateway scrubbing mode (TCP SYN rate limit)
Use the Tempesta xFW configuration with a basic TCP SYN rate limit of 100 Mpps. For this configuration we dropped all packets passed by xFW, so as not to overload the Linux TCP/IP stack, which would otherwise become the performance bottleneck:
iptables -t mangle -I PREROUTING 1 -s 16.0.0.0/16 -j DROP
./t-rex-64 -f syn_flood.yaml -m 150000 -c 23 -d 60
-Per port stats table
ports | 0 | 2
-----------------------------------------------------------------------------------------
opackets | 3496608453 | 3496930490
obytes | 272735459334 | 272760578220
ipackets | 2 | 2
ibytes | 338 | 338
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 46.69 Gbps | 46.52 Gbps
-Global stats enabled
Cpu Utilization : 85.8 % 4.7 Gb/core
Platform_factor : 1.0
Total-Tx : 93.21 Gbps
Total-Rx : 0.29 bps
Total-PPS : 149.37 Mpps
Total-CPS : 0.00 cps
Expected-PPS : 75.00 Mpps
Expected-CPS : 75.00 Mcps
Expected-BPS : 46.80 Gbps
Active-flows : 100000 Clients : 65535 Socket-util : 0.0024 %
Open-flows : 100000 Servers : 50 Socket : 100000 Socket/Clients : 1.5
Total_queue_full : 411381505
drop-rate : 93.21 Gbps
current time : 48.9 sec
test duration : 11.1 sec
- Workload is 93 Gbps and 149 Mpps.
- No ierrors or oerrors.
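The numbers are consistent with the configured limit: at roughly 149 Mpps offered against a 100 Mpps SYN rate limit, xFW should pass about two thirds of the packets, which the iptables rule then drops (hence Total-Rx is near zero). Rate limiters of this kind are commonly built as token buckets; a minimal sketch of the idea, not Tempesta's actual implementation:

```python
class TokenBucket:
    """Pass at most `rate` packets per second, with a burst of `burst` tokens."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Offer 149 evenly spaced packets in one simulated second against a 100 pps limit.
tb = TokenBucket(rate=100, burst=10)
passed = sum(tb.allow(now=i / 149) for i in range(149))
print(passed)   # 109: ~100 refilled over the second plus the initial burst
```

Scaled up by a factor of a million this mirrors the test above: the bucket passes the configured rate plus a small burst, and the excess ~49 Mpps is dropped at the limiter.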