.. Source file: source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_XDP.rst
XDP allows running a BPF program just before the skbs are allocated in the driver.

- XDP_TX :- Send the packet back to the same NIC, with modifications if made by the program.
- XDP_REDIRECT :- Send the packet to another NIC or to userspace through an AF_XDP socket (discussed below).

.. Image:: /images/XDP-packet-processing.png

As explained above, XDP_REDIRECT can be used to send a packet directly to userspace.
This works by using the AF_XDP socket type, which was introduced specifically for this use case.
In this process, the packet is sent directly to userspace without going through the kernel networking stack.

.. Image:: /images/xdp-packet.png
Use Cases for XDP
******************

XDP is particularly useful for these common networking scenarios:
1. **DDoS Mitigation**: High-speed filtering and dropping of malicious traffic
2. **Load Balancing**: Efficient traffic distribution across multiple servers
3. **Packet Capture**: High-performance network monitoring without performance penalties
4. **Firewalls**: Wire-speed packet filtering based on flexible rule sets
5. **Network Analytics**: Real-time traffic analysis and monitoring
6. **Custom Network Functions**: Specialized packet handling for unique requirements
How to run XDP with PRU_ICSSG
********************************

The following configs need to be enabled in the kernel config to use XDP with PRU_ICSSG:
.. code-block:: console

   CONFIG_DEBUG_INFO_BTF=y
   CONFIG_BPF_PRELOAD=y
   CONFIG_BPF_PRELOAD_UMD=y
   CONFIG_BPF_EVENTS=y
   CONFIG_BPF_LSM=y
   CONFIG_DEBUG_INFO_REDUCED=n
   CONFIG_FTRACE=y
   CONFIG_XDP_SOCKETS=y
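
As a quick sanity check, the options above can be verified against a saved config dump (for example the output of ``zcat /proc/config.gz``, if the kernel exposes it). The helper below is a minimal sketch for illustration, not part of the SDK:

```python
# Minimal sketch (illustrative): verify required kernel config options
# against a config dump. Disabled options in real configs usually appear
# as "# CONFIG_FOO is not set" rather than "CONFIG_FOO=n"; both forms
# are handled here.

REQUIRED = {
    "CONFIG_DEBUG_INFO_BTF": "y",
    "CONFIG_BPF_PRELOAD": "y",
    "CONFIG_BPF_PRELOAD_UMD": "y",
    "CONFIG_BPF_EVENTS": "y",
    "CONFIG_BPF_LSM": "y",
    "CONFIG_DEBUG_INFO_REDUCED": "n",
    "CONFIG_FTRACE": "y",
    "CONFIG_XDP_SOCKETS": "y",
}

def check_config(text: str) -> list[str]:
    """Return the required options missing or mismatched in a config dump."""
    present = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            key, _, value = line.partition("=")
            present[key] = value
        elif line.startswith("# CONFIG_") and line.endswith(" is not set"):
            present[line[2:-len(" is not set")]] = "n"
    return [k for k, v in REQUIRED.items() if present.get(k) != v]

if __name__ == "__main__":
    sample = "CONFIG_XDP_SOCKETS=y\nCONFIG_BPF_EVENTS=y\n"
    # Prints the options not satisfied by this partial config.
    print(check_config(sample))
```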
Tools for debugging XDP Applications
*************************************

Debugging tools for XDP development:
- bpftool - For loading and managing BPF programs
- xdpdump - For capturing XDP packet data
- perf - For performance monitoring and analysis
- bpftrace - For tracing BPF program execution
AF_XDP Sockets
###############

What are AF_XDP Sockets?
*************************
AF_XDP is a socket address family specifically designed to work with the XDP framework.
These sockets provide a high-performance interface for userspace applications to receive
and transmit network packets directly from the XDP layer, bypassing the traditional kernel networking stack.

Key characteristics of AF_XDP sockets include:

- Direct path from the network driver to userspace applications
- Shared memory rings for efficient packet transfer
- Minimal overhead compared to traditional socket interfaces
- Optimized for high-throughput, low-latency applications
How AF_XDP Works
*****************

AF_XDP sockets operate through a shared memory mechanism:

1. An XDP program intercepts packets at the driver level
2. The XDP_REDIRECT action sends packets to the socket
3. Shared memory rings (RX/TX/FILL/COMPLETION) manage packet data
4. The userspace application directly accesses the packet data
5. Zero or minimal copying occurs, depending on the mode used

The AF_XDP architecture uses four ring buffers:

- **RX Ring**: Received packets ready for consumption
- **TX Ring**: Packets to be transmitted
- **FILL Ring**: Pre-allocated buffers for incoming packets
- **COMPLETION Ring**: Transmitted buffers returned to the application

For more details on AF_XDP please refer to the official documentation: `AF_XDP Sockets <https://www.kernel.org/doc/html/latest/networking/af_xdp.html>`_.
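
The FILL/RX handoff described above can be sketched as a small simulation. This is conceptual only: real AF_XDP rings are mmap'd single-producer/single-consumer queues managed by the kernel, and all names below are illustrative:

```python
from collections import deque

# Conceptual sketch of the AF_XDP RX path (not real kernel behaviour):
# userspace posts empty frames on the FILL ring, the "driver" fills them
# with packet data and publishes descriptors on the RX ring.

class Umem:
    """A shared packet-buffer area divided into fixed-size frames."""
    def __init__(self, num_frames: int, frame_size: int = 2048):
        self.frame_size = frame_size
        self.buf = bytearray(num_frames * frame_size)

fill_ring: deque = deque()   # frame addresses currently owned by the "kernel"
rx_ring: deque = deque()     # (addr, length) descriptors ready for userspace

umem = Umem(num_frames=4)

# 1. Userspace donates empty frames to the kernel via the FILL ring.
for i in range(4):
    fill_ring.append(i * umem.frame_size)

# 2. The "driver" receives a packet: take a frame, write the data into the
#    shared area, publish a descriptor on the RX ring.
def driver_rx(packet: bytes) -> None:
    addr = fill_ring.popleft()
    umem.buf[addr:addr + len(packet)] = packet
    rx_ring.append((addr, len(packet)))

driver_rx(b"hello-xdp")

# 3. Userspace consumes the descriptor and reads the packet in place.
addr, length = rx_ring.popleft()
payload = bytes(umem.buf[addr:addr + length])
print(payload)  # b'hello-xdp'
```

The key point the sketch shows is that only descriptors (address/length pairs) move between the rings; the packet bytes stay in the shared UMEM area.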
Current Support Status in PRU_ICSSG
************************************

The PRU_ICSSG Ethernet driver currently supports:

- Native XDP mode
- Generic XDP mode (SKB-based)
- Zero-copy mode
XDP Zero-Copy in PRU_ICSSG
###########################

Introduction to Zero-Copy Mode
*******************************

Zero-copy mode is an optimization in AF_XDP that eliminates packet data copying between the kernel and userspace. This results in significantly improved performance for high-throughput network applications.
How Zero-Copy Works
********************

In standard XDP operation (copy mode), packet data is copied from kernel memory to userspace memory when processed. Zero-copy mode eliminates this copy operation by:

1. Using memory-mapped regions shared between the kernel and userspace
2. Allowing direct DMA from network hardware into memory accessible by userspace applications
3. Managing memory ownership through descriptor rings rather than data movement

This approach provides several benefits:

- Reduced CPU utilization
- Lower memory bandwidth consumption
- Decreased latency for packet processing
- Improved overall throughput
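
The difference between copy and zero-copy handling can be illustrated in miniature. This is a conceptual sketch, not the driver's implementation: in copy mode each packet is duplicated into a private buffer, while in zero-copy mode the consumer reads the shared buffer in place:

```python
# Conceptual sketch: copy mode duplicates packet bytes into a private
# buffer; zero-copy mode hands the consumer a view into the shared
# (UMEM-like) memory, so no packet bytes are duplicated.

shared = bytearray(64)          # stands in for DMA-able shared memory
shared[0:5] = b"pkt-1"          # "hardware" wrote a packet here

def consume_copy(buf: bytearray, off: int, length: int) -> bytes:
    """Copy mode: allocate a new object and copy the payload into it."""
    return bytes(buf[off:off + length])   # slicing copies the data

def consume_zero_copy(buf: bytearray, off: int, length: int) -> memoryview:
    """Zero-copy mode: return a view into the shared buffer."""
    return memoryview(buf)[off:off + length]

copied = consume_copy(shared, 0, 5)
view = consume_zero_copy(shared, 0, 5)

shared[0:5] = b"pkt-2"          # hardware reuses the same frame
print(bytes(copied))            # b'pkt-1' (the private copy is unaffected)
print(bytes(view))              # b'pkt-2' (the view observes shared memory)
```

This also illustrates why ownership must be tracked through descriptor rings in zero-copy mode: the consumer's view is only valid until the frame is handed back for reuse.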
Requirements for Zero-Copy
***************************

For zero-copy to function properly with PRU_ICSSG, ensure:
1. **Driver Support**: Verify that the PRU_ICSSG driver is loaded with zero-copy support enabled
2. **Memory Alignment**: Buffer addresses must be properly aligned to page boundaries
3. **UMEM Configuration**: The UMEM area must be correctly configured:

   - Properly aligned memory allocation
   - A sufficient number of packet buffers
   - Appropriate buffer sizes

4. **Hugepages**: Using hugepages for UMEM allocation is recommended for optimal performance
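
The alignment and sizing rules above reduce to simple arithmetic. In the sketch below the chunk size and frame count are illustrative defaults, not values mandated by the driver:

```python
# Conceptual sketch: UMEM layout arithmetic for AF_XDP.
# A UMEM is a contiguous memory area split into fixed-size chunks
# (frames); in aligned mode each frame offset is a multiple of the
# chunk size, and the area itself should be page-aligned.

PAGE_SIZE = 4096
CHUNK_SIZE = 2048          # common AF_XDP chunk size (illustrative)
NUM_FRAMES = 4096          # illustrative frame count

def umem_size(num_frames: int, chunk_size: int) -> int:
    """Total UMEM bytes required for the given frame count."""
    return num_frames * chunk_size

def frame_addr(index: int, chunk_size: int) -> int:
    """Offset of frame `index` inside the UMEM area."""
    return index * chunk_size

def is_page_aligned(value: int) -> bool:
    return value % PAGE_SIZE == 0

total = umem_size(NUM_FRAMES, CHUNK_SIZE)
print(total)                      # 8388608 bytes (8 MiB) of UMEM
print(is_page_aligned(total))     # True: a whole number of pages
print(frame_addr(3, CHUNK_SIZE))  # 6144
```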
Performance Comparison
***********************

Performance testing shows that zero-copy mode can provide substantial throughput improvements compared to copy mode:
The open-source `xdpsock <https://github.com/xdp-project/bpf-examples/tree/main/AF_XDP-example>`_ tool was used for testing XDP zero-copy.
AF_XDP performance using 64-byte packets, in Kpps:

.. list-table::
   :header-rows: 1

   * - Benchmark
     - XDP-SKB
     - XDP-Native
     - XDP-Native (Zero-Copy)
   * - rxdrop
     - 253
     - 473
     - 656
   * - txonly
     - 350
     - 354
     - 855
Performance Considerations
***************************

When implementing XDP applications, consider these performance factors:
1. **Memory Alignment**: Buffers should be aligned to page boundaries for optimal performance
2. **Batch Processing**: Process multiple packets in batches when possible
3. **Poll Mode**: Use poll() or similar mechanisms to avoid blocking on socket operations
4. **Core Affinity**: Bind application threads to specific CPU cores to reduce cache contention
5. **NUMA Awareness**: Consider NUMA topology when allocating memory for packet buffers
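
As an illustration of the batch-processing point above, draining a receive queue in fixed-size batches amortizes per-wakeup overhead. This is a conceptual sketch; a real application would batch ring-descriptor reads rather than Python deque operations:

```python
from collections import deque

# Conceptual sketch: process up to BATCH descriptors per wakeup instead
# of one packet per call, amortizing fixed per-wakeup costs (syscalls,
# cache refills) over many packets.

BATCH = 64

def drain_batch(ring: deque, budget: int = BATCH) -> list:
    """Pop up to `budget` entries from the ring in one pass."""
    out = []
    while ring and len(out) < budget:
        out.append(ring.popleft())
    return out

ring = deque(range(150))   # 150 pending packet descriptors
wakeups = 0
processed = []
while ring:
    processed.extend(drain_batch(ring))
    wakeups += 1

print(len(processed), wakeups)  # 150 packets handled in 3 wakeups
```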