Changing Ethernet MTU on IGEPv5

posted in IGEPv5
Monday, March 03 2014, 04:53 PM
bwolfe
Does the IGEPv5 Ethernet interface allow changing the MTU size to support jumbo packets? For example, the default MTU is 1500, but I want to change it to 7200. In Linux this would be done with the command
ifconfig eth0 mtu 7200

This is critical to an application I'm using.
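
(As an aside, on images that ship the iproute2 tools the same change can be made with ip link; a minimal sketch, assuming the interface is still named eth0:)

# set the MTU with iproute2, equivalent to the ifconfig form above
ip link set dev eth0 mtu 7200
# confirm the value the kernel actually accepted
ip link show dev eth0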

Accepted Answer

mcaro
Tuesday, March 04 2014, 09:07 AM
Yes, you can set this MTU:

root@localhost:~# ifconfig eth0 mtu 7200
root@localhost:~# ifconfig
eth0 Link encap:Ethernet HWaddr 02:21:05:a4:1a:40
inet addr:192.168.2.114 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:7200 Metric:1
RX packets:782 errors:0 dropped:0 overruns:0 frame:0
TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:107536 (107.5 KB) TX bytes:6846 (6.8 KB)
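
Note that this setting is lost on reboot. A minimal sketch of making it persistent, assuming the image uses the classic ifupdown /etc/network/interfaces configuration (adjust to whatever network manager your rootfs actually uses):

# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet dhcp
    # re-apply the jumbo MTU every time the interface is brought up
    post-up ip link set dev eth0 mtu 7200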
Responses (13)
  • Accepted Answer

    pbeeson
    Wednesday, March 19 2014, 12:00 AM
    While you can indeed change the MTU up to 9000 for the smsc75xx driver using ifconfig, the actual chipset used (unknown to me due to a lack of lspci support) only supports receiving MTUs up to 1934. I spent hours today maxing out sysctl settings and binary-searching with multiple programs I have that communicate with jumbo frames, and the eth0 device will not under any circumstance receive packets sent with an MTU size > 1934, regardless of what ifconfig says it is set to.

    This makes this device unusable for any UDP streaming applications that require large jumbo frames (e.g. MTU==7200). Unfortunate, but true.
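
    One way to double-check where the cut-off sits, assuming a Linux box on the other end of the cable, is to probe with unfragmentable ICMP payloads and binary-search the largest size that still gets a reply. The sizes below just bracket the 1934 figure reported above; <igep-ip> is a placeholder:

    # payload + 28 bytes of IPv4/ICMP header = packet size on the wire
    ping -M do -c 3 -s 1906 <igep-ip>   # 1906 + 28 = 1934, reportedly still answered
    ping -M do -c 3 -s 1907 <igep-ip>   # one byte more; no reply would confirm the limit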
  • Accepted Answer

    mcaro
    Wednesday, March 19 2014, 09:24 AM
    the actual chipset used (unknown to me due to a lack of lspci support)

    What chipset do you mean? lspci doesn't work because this board doesn't have a PCI bus ...

    I spent hours today maxing out sysctl settings and binary-searching with multiple programs I have that communicate with jumbo frames

    I don't know what you're trying to do, but when I answered bwolfe I tested the interface as:

    ifconfig eth0 mtu 7000


    And the same setup on the other end; then I executed ping with packet size = 6000 without any issue and without setting any sysctl ...

    This makes this device unusable for any UDP streaming applications that require large jumbo frames (e.g. MTU==7200). Unfortunate, but true.

    In this case your claim is false.

    * The first page of the SMSC LAN7500 manual explains that it supports jumbo packets up to 9K
    * I tested with a packet size of 8000 (with MTU = 9000), with this result:

    MacBook-Pro-Work-Manel-Caro:~ mcaro$ ping -s 8000 192.168.2.114 
    PING 192.168.2.114 (192.168.2.114): 8000 data bytes
    8008 bytes from 192.168.2.114: icmp_seq=0 ttl=64 time=1.882 ms
    8008 bytes from 192.168.2.114: icmp_seq=1 ttl=64 time=1.377 ms
    ^C
    --- 192.168.2.114 ping statistics ---
    2 packets transmitted, 2 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 1.377/1.629/1.882/0.252 ms
    


    tcpdump from the IGEPv5

    08:20:48.715921 IP 192.168.2.220 > 192.168.2.114: ICMP echo request, id 56855, seq 1, length 8008
    08:20:48.716037 IP 192.168.2.114 > 192.168.2.220: ICMP echo reply, id 56855, seq 1, length 8008


    IP: 192.168.2.114 is the IGEPv5
    I sent the ping from my PC because you said that the IGEPv5 is not able to receive packets larger than 2K ...

    That means it works ...

    For future questions, please share what steps you are trying to take or what setup you want; without this information you probably won't get the appropriate answer ...
  • Accepted Answer

    bwolfe
    Wednesday, March 19 2014, 12:53 PM
    mcaro,

    I thought the "ping -s 8000 192.168.2.114" command that you use was incorrect because the '-s' option doesn't set the packet size. The correct syntax would be "ping -l 8000 192.168.2.114". Then I noticed that you're using a Mac which uses totally different syntax. Typical Apple. Anyway...

    We have a device, let's call it a camera, that sends jumbo frames of size 7200. We can set the send packet size smaller (e.g., 1500) on the camera, but the camera's timing is so critical that the extra time required to chop up the packets into smaller chunks causes performance problems on the camera.

    With other computers that support jumbo frames, we can set the PC MTU (using "sudo ifconfig eth0 mtu 7200") and then successfully run our application on that PC to consume the data from the camera. When we do the exact same thing on the IGEPv5, the application doesn't see the packets. Short of having an Ethernet protocol analyzer, I'm not sure how to diagnose the failure. I'll try using a third computer running Wireshark to see the contents of the packets, but that will only tell me what the camera is sending, not what the IGEPv5 is receiving, so I'm not sure it will help diagnose the problem, since it is clearly the combination of our application with the IGEPv5 hardware/OS that is failing.
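
    In the meantime, one thing that might narrow it down without a protocol analyzer is watching the receive counters on the IGEPv5 itself while the camera streams; a rough sketch (the ethtool statistics are only available if the smsc75xx driver exposes them):

    # link-level counters: rising RX "dropped"/"overrun" values point at the NIC/driver,
    # flat counters suggest the frames never reach the board at all
    watch -n 1 'ip -s link show dev eth0'

    # driver-level statistics, if the driver supports them
    ethtool -S eth0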
  • Accepted Answer

    mcaro
    Wednesday, March 19 2014, 01:07 PM
    I thought the "ping -s 8000 192.168.2.114" command that you use was incorrect because the '-s' option doesn't set the packet size. The correct syntax would be "ping -l 8000 192.168.2.114". Then I noticed that you're using a Mac which uses totally different syntax. Typical Apple. Anyway...


    That is wrong ...

    MacBook-Pro-Work-Manel-Caro:~ mcaro$ ping  --help
    usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize] [-g sweepminsize]
                [-h sweepincrsize] [-i wait] [-l preload] [-M mask | time] [-m ttl]
                [-p pattern] [-S src_addr] [-s packetsize] [-t timeout]
                [-W waittime] [-z tos] host
           ping [-AaDdfLnoQqRrv] [-c count] [-I iface] [-i wait] [-l preload]
                [-M mask | time] [-m ttl] [-p pattern] [-S src_addr]
                [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
                [-z tos] mcast-group
    MacBook-Pro-Work-Manel-Caro:~ mcaro$ ping -s 8000 192.168.2.114 
    MacBook-Pro-Work-Manel-Caro:~ mcaro$ ping -s 8000 192.168.2.114 
    PING 192.168.2.114 (192.168.2.114): 8000 data bytes
    8008 bytes from 192.168.2.114: icmp_seq=0 ttl=64 time=2.157 ms
    8008 bytes from 192.168.2.114: icmp_seq=1 ttl=64 time=1.449 ms
    8008 bytes from 192.168.2.114: icmp_seq=2 ttl=64 time=1.408 ms
    8008 bytes from 192.168.2.114: icmp_seq=3 ttl=64 time=1.522 ms
    8008 bytes from 192.168.2.114: icmp_seq=4 ttl=64 time=1.422 ms
    8008 bytes from 192.168.2.114: icmp_seq=5 ttl=64 time=1.465 ms


    The tcpdump on the IGEPv5 shows the packet length is 8008 bytes, as I posted before ...

    So, I suggest you install tcpdump on the IGEPv5 and check whether the board receives the packets ...
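
    For reference, a minimal capture along those lines, assuming the camera traffic is UDP and arrives on eth0 (adjust the filter to your ports):

    # show the first 100 UDP packets at least 1500 bytes long, with lengths, no name resolution
    tcpdump -i eth0 -n -e -c 100 'udp and greater 1500'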
  • Accepted Answer

    pbeeson
    Wednesday, March 19 2014, 08:22 PM
    MCaro,

    Something (either hardware, driver, or OS socket layer) is NOT properly handling jumbo frames. Pinging is NOT a definitive test of network capability. My test is much more rigorous than yours. I have an FPGA that broadcasts THOUSANDS of MEGABYTES per second of UDP packets using whatever MTU size I want. This data, at 7200 MTU, can be received by every modern desktop computer that I've ever tried it with -- GigE with jumbo frame support connected *directly* to the streaming device. It DOES NOT work 100% with your IGEPv5 connected directly to my device at MTUs greater than 1500.

    This is a simple, yet network-intensive, test that I ran, where a single dropped UDP packet ruins an entire message of hundreds of megabytes. Pinging once a second using TCP is one thing. Having hardware that can support UDP jumbo frames coming in by the millions per second for hours at a time is something entirely different. Either this hardware, or the driver, or the Linux ARM port's socket layer cannot handle it.
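
    (For anyone else landing on this thread: the sysctl settings referred to above are typically the receive-buffer and backlog knobs; the values below are illustrative only, and did not solve the problem here:)

    # allow larger per-socket receive buffers (bytes)
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.rmem_default=8388608
    # allow a deeper queue between the driver and the IP stack
    sysctl -w net.core.netdev_max_backlog=5000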
  • Accepted Answer

    mcaro
    Wednesday, March 19 2014, 09:15 PM
    Please post your logs as attachments.

    Post the tcpdump output or log; without it we are not able to help you and you will not get a useful answer.
  • Accepted Answer

    bwolfe
    Wednesday, March 19 2014, 09:50 PM
    By "post the tcpdump output" I assume you want some specific filtering, so if you could tell me what tcpdump parameters you want to see, that might help. Of course, if you want the binary dump, I can do that, but it's going to be a big attachment.
  • Accepted Answer

    mcaro
    Wednesday, March 19 2014, 10:44 PM
    In my reply to you I posted the ping output and the tcpdump capture (as you can see it is NOT TCP; ping uses ICMP over IP in this case). I don't doubt your application is better, but I don't have your application. The best way to know if any packet is lost is to use tcpdump and filter the packets for your application ...
  • Accepted Answer

    pbeeson
    Thursday, March 20 2014, 01:06 AM
    MCaro,

    We are not saying that ping with jumbo frames does not work on our computer like it does on yours. It does work exactly like you show above. But that simply shows that low-bandwidth, large-MTU packets can be handled. This is indeed the case.

    What we are seeing is that when we stress the network (we are constantly sending UDP packets at 300Mbps or higher), we get dropped/missing packets when MTU size > 1500 (the larger the MTU, the more dropped packets). We see dropped packets in our application.

    Posting the tcpdump won't be helpful, because it will show that large UDP packets (e.g. 7140 bytes) do exist on the network. What it won't show you is that large numbers of packets that we know were sent are missing. They are being lost somewhere in either the hardware, the kernel driver, or the socket layer. tcpdump output is not adequate for demonstrating this high-bandwidth packet loss.
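
    A rough way to at least localise the loss, assuming the frames do reach the kernel, is to snapshot the UDP counters on the IGEPv5 before and after a streaming run; growth in the socket-layer error counters versus the interface counters points at different culprits:

    # UDP statistics: "packet receive errors" / RcvbufErrors indicate socket-layer drops
    netstat -su
    cat /proc/net/snmp | grep '^Udp'
    # interface statistics: RX dropped/overruns indicate NIC or driver drops
    ip -s link show dev eth0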
  • Accepted Answer

    mcaro
    Thursday, March 20 2014, 09:44 AM
    pbeeson,

    The problem is NOT the MTU; the MTU works fine. The problem is the bandwidth: the IGEPv5 cannot provide more than about 300 Mb of throughput because it uses a LAN7500 controller, which is a USB <-> Gigabit Ethernet controller. If you limit the bandwidth to 300 Mb it should work fine; if you push the bandwidth higher over UDP, the result is that several packets are lost.

    I leave here a more "realistic" test using your information:

    Test A - MTU 9000, packet size = 8000 - bw: 100Mb
    ------------------------------------------------------------
    Client connecting to 192.168.2.114, UDP port 5001
    Sending 8000 byte datagrams
    UDP buffer size: 9.00 KByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.2.220 port 62802 connected with 192.168.2.114 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec   119 MBytes   100 Mbits/sec
    [  4] Sent 15626 datagrams
    [  4] Server Report:
    [  4]  0.0-10.0 sec   119 MBytes   100 Mbits/sec   0.055 ms    0/15625 (0%)
    [  4]  0.0-10.0 sec  1 datagrams received out-of-order
    


    Test B - MTU 9000, packet size = 8000 - bw: 200Mb
    ------------------------------------------------------------
    Client connecting to 192.168.2.114, UDP port 5001
    Sending 8000 byte datagrams
    UDP buffer size: 9.00 KByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.2.220 port 61801 connected with 192.168.2.114 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec   238 MBytes   200 Mbits/sec
    [  4] Sent 31251 datagrams
    [  4] Server Report:
    [  4]  0.0-10.0 sec   238 MBytes   200 Mbits/sec   0.286 ms    1/31250 (0.0032%)
    [  4]  0.0-10.0 sec  1 datagrams received out-of-order


    Test C - MTU 9000, packet size = 8000 - bw: 300Mb
    ------------------------------------------------------------
    Client connecting to 192.168.2.114, UDP port 5001
    Sending 8000 byte datagrams
    UDP buffer size: 9.00 KByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.2.220 port 64769 connected with 192.168.2.114 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec   358 MBytes   300 Mbits/sec
    [  4] Sent 46950 datagrams
    [  4] Server Report:
    [  4]  0.0-10.0 sec   358 MBytes   300 Mbits/sec   0.179 ms    1/46949 (0.0021%)
    [  4]  0.0-10.0 sec  1 datagrams received out-of-order
    


    Test D - MTU 9000, packet size = 8000 - bw: 400Mb
    ------------------------------------------------------------
    Client connecting to 192.168.2.114, UDP port 5001
    Sending 8000 byte datagrams
    UDP buffer size: 9.00 KByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.2.220 port 61201 connected with 192.168.2.114 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec   477 MBytes   400 Mbits/sec
    [  4] Sent 62501 datagrams
    [  4] Server Report:
    [  4]  0.0-10.3 sec   383 MBytes   313 Mbits/sec  15.761 ms 12289/62500 (20%)
    [  4]  0.0-10.3 sec  1 datagrams received out-of-order
    

    This last test shows that it loses 20% of the packets ... which may be similar to your situation. Even though this test is limited to 400 Mb, if you use a higher bandwidth you will lose even more packets ...
  • Accepted Answer

    pbeeson
    Thursday, March 20 2014, 01:40 PM
    Mcaro,

    1) I think you've successfully reproduced our problem. Are you using standard Linux command line tools for the above analysis or some other program? Could you post how I can reproduce this test?

    2) My belief is that if you lower the MTU to 1500 in your Test D, you will see less packet loss. We are seeing that high bandwidth + large MTU = more dropped packets. This is why I think MTU size is still a factor in this scenario.

    Unfortunately we were hoping to use the IGEPv5 inside of a complex sensor we are planning to sell that generates altogether 1Gbps of packets (our test was only with a portion of the data).

    I would like to use your analysis above to see how high we can actually get with bandwidth so we can decide if we are still getting enough data or not.

    If not, we will need to find another dual Cortex-A15 board that actually supports the full 1Gbps bandwidth.

    Thanks so much.
  • Accepted Answer

    mcaro
    Thursday, March 20 2014, 05:01 PM
    I used iperf; you can install it using the command:

    apt-get install iperf


    I set the MTU to 9000 on my Mac and on the IGEPv5 too. I used the IGEPv5 as server and my PC as client; in both cases I set the datagram size to 8000, and in every test case I set the maximum bandwidth. Here is the command used on my PC:

    MacBook-Pro-Work-Manel-Caro:iperf-2.0.5-i686-apple-darwin10.5.0 mcaro$ ./iperf -c 192.168.2.114 -u  -b 400M -l 8000


    Where
    -u : udp
    -b : bandwidth
    -l : datagram size
    -c : client
    -s : server

    On the IGEPv5 you should execute:
    iperf -s -l 8000


    You can play with different bw values ...
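
    A small sketch of how such a sweep could be scripted on the client side to find the loss threshold (assuming the iperf server above is already running on the IGEPv5):

    # hypothetical sweep: 10 seconds per rate, read the loss from each server report
    for bw in 100M 200M 300M 350M 400M 500M; do
        echo "=== $bw ==="
        iperf -c 192.168.2.114 -u -b $bw -l 8000 -t 10
    done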

    If you need high bandwidth with a Cortex-A15, I suggest you check this:

    http://www.ti.com/product/66ak2h14

    But the cost may be high: $330 at 1K units (processor only) ...

    Manel
  • Accepted Answer

    pbeeson
    Thursday, March 20 2014, 05:24 PM
    Yes, I figured out this was iperf from googling the output, and then we verified that indeed high bandwidth is the culprit. In fact lower MTUs (1500) artificially lower the bandwidth (presumably due to the overhead of more packets), so that is why low MTUs have fewer dropped packets (a 1500 MTU enforces < 300 Mbps, regardless of how fast you ask data to be sent in iperf).

    Thanks for the pointer. We may end up needing to move to Atom and away from ARM in order to get full 1Gbps performance.
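
    The per-packet overhead point can be made concrete with a quick back-of-the-envelope calculation (UDP/IPv4 payload sizes assumed, purely illustrative):

    # packets per second needed to sustain 300 Mbit/s at two payload sizes
    echo $(( 300000000 / 8 / 1472 ))   # ~25475 pps with a 1500-byte MTU
    echo $(( 300000000 / 8 / 7172 ))   # ~5228 pps with a 7200-byte MTU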