ECN is enabled by default. Connections shaped by an ECN-enabled shaper will mark, rather than drop, ECN-enabled packets. In theory, this should improve overall TCP performance and fairness in ways that simple packet drop cannot.
We encourage more people to enable ECN throughout their networks, on their hosts and servers. As of early 2011, ECN was enabled on approximately 12% of the top 100 web servers, but it is enabled on relatively few desktops, clients, and handheld devices.
ECN enablement can also help via the web proxy server: each side of a proxied connection can negotiate ECN (or non-ECN) use independently.
Connections originating from the router have ECN enabled by default. If you have problems connecting to your router for maintenance or other purposes, ECN may be the culprit; you can easily toggle it on or off from the network parameters menu to diagnose the issue. The problem more likely lies with your client, however!
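From a shell on the router, ECN can also be inspected and toggled directly. This is a sketch using the standard Linux sysctl path; the menu mentioned above is the supported way to do this, and your build may persist the setting differently:

```shell
# Check whether ECN is enabled for TCP originating from the router
# (1 = enabled, 0 = disabled):
cat /proc/sys/net/ipv4/tcp_ecn

# Temporarily disable ECN to rule it out as the cause of a connection problem:
echo 0 > /proc/sys/net/ipv4/tcp_ecn

# Re-enable it afterwards:
echo 1 > /proc/sys/net/ipv4/tcp_ecn
```

Changes made this way do not survive a reboot, which makes them convenient for quick diagnosis.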
Bufferbloat.net is very interested in reports of ECN successes or failures with particular pieces of network equipment. We have high hopes that ECN can be more widely deployed.
TCP SACK and DSACK are enabled by default. These are optimizations that improve TCP's behavior when losses are encountered. (They apply only to connections originating from, not passing through, the router.)
In practice, enabling these seems to cause no harm in the general case, so long as smaller TCP windows are used.
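These settings live in the same sysctl tree as ECN. A sketch of inspecting and toggling them, assuming a standard Linux /proc layout:

```shell
# Inspect the current SACK and DSACK settings (1 = enabled):
cat /proc/sys/net/ipv4/tcp_sack
cat /proc/sys/net/ipv4/tcp_dsack

# Disable both, e.g. while diagnosing a middlebox that mishandles SACK blocks:
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_dsack=0
```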
The default TCP algorithm in this router is "Westwood+", which has been shown to work better than Vegas, over wired and wireless links. Available as options are Vegas, Yeah, Reno, Cubic, and Bic - the last two are the algorithms in most common use in the Linux world today.
Changing the default TCP algorithm on the router has no effect on performance unless you are using the bundled web proxy, or doing significant amounts of traffic to/from the local rsync or web server.
Packets passing through the router are governed by the TCP algorithms of the hosts and servers involved.
Changing the default TCP algorithm can, however, be very useful for emulating normal traffic during simulations. At some point in the near future we hope to have a bandwidth and latency-under-load test that more accurately measures the available bandwidth between the router and the rest of the Internet. We also hope to be able to thoroughly analyze the behavior of the QoS subsystems, as well as the behavior of all the intervening routers on the path.
Other TCP algorithms, such as TCP-Fit, are being evaluated.
To see which TCP congestion control algorithms are actually available:

cat /proc/sys/net/ipv4/tcp_available_congestion_control

If you wish to change the default TCP algorithm:

echo the_algorithm_you_want > /proc/sys/net/ipv4/tcp_congestion_control
There are many other parameters affecting TCP behavior, notably the (configured low) default limits on the amount of outstanding data in flight, and the various congestion windows, which also need to be modified to correctly emulate another machine's behavior on the router. Merely changing the algorithm will not yield valid, comparable results! TCP is very, almost chaotically, sensitive to its initial preconditions.
We strongly recommend that you capture your pre-existing sysctl values, and your iptables and traffic shaping configurations before doing a test.
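One way to capture that state before a test run; the filenames and the ge00 interface name (CeroWrt's WAN device) are only examples, so adjust for your setup:

```shell
# Snapshot the tunables and rules you will want to compare or restore later:
sysctl -a > sysctl-before.txt
iptables-save > iptables-before.txt
tc qdisc show > tc-qdisc-before.txt
tc class show dev ge00 > tc-class-before.txt   # adjust the device name as needed
```

The iptables rules can later be restored with iptables-restore; the sysctl and tc snapshots serve as a reference for manual comparison.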
Please see the evaluating TCP behavior wiki pages for more details.
Good QoS is a holy grail that has been chased across the Internet for a very long time. With the advent of streaming video delivery, alongside the increasing popularity of video teleconferencing and of handheld wireless devices, existing traffic shaping methods and passive and active queue management systems no longer work as well as they did only a few years ago.
Fixing bufferbloat alone may only be part of an answer.
This router comes with QoS mechanisms enabled that combine HTB, HFSC, SFQ, and RED. Set properly, this works fairly well, but new traffic shapers such as SFB and DRR have emerged that may dramatically simplify QoS, to the point where it could become a set-and-forget operation.
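For orientation, here is a minimal, hypothetical sketch of an HTB shaper feeding an SFQ queue, the general shape of the mechanisms named above. CeroWrt's actual QoS scripts are considerably more elaborate, and the device name and rates here are placeholders:

```shell
# Root HTB qdisc on the WAN device, with a default class:
tc qdisc add dev ge00 root handle 1: htb default 10

# One class limited to the (example) 4 Mbit upload rate:
tc class add dev ge00 parent 1: classid 1:10 htb rate 4mbit ceil 4mbit

# SFQ inside that class, so competing flows share the rate fairly:
tc qdisc add dev ge00 parent 1:10 handle 10: sfq perturb 10
```

The key idea is layering: HTB enforces the bandwidth ceiling, while the inner queue discipline decides which flow's packet goes next.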
Unfortunately, "set and forget" is not yet the case. CeroWrt QoS needs to be set to the actual up/download bandwidths available from your provider, determined by running a sufficiently long test, longer than most online bandwidth tests run.
The onboard QoS system is easily disabled and re-enabled for tests of this sort. Also, since we are shipping nearly every available shaper, we hope that you will experiment with various forms of QoS in order to get the best results for a multi-user, very mixed workload, both out to the Internet and to your wireless devices, under good-to-difficult conditions. We hope at some point to have a pluggable QoS system that 'just works', but we are a long way from that.
In the meantime, please experiment with what we've delivered today, but be sure to set it correctly for your bandwidth! The default is set up for a network that is 20Mbit down/4Mbit up, which most likely isn't your configuration.
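If your build manages QoS through the OpenWrt qos-scripts package, the bandwidths can be set from the command line via uci. The package, section, and option names below are assumptions about that package, not a description of every CeroWrt build; bandwidths are in kbit/s:

```shell
# Set 20 Mbit down / 4 Mbit up (matching the default described above):
uci set qos.wan.download=20000
uci set qos.wan.upload=4000
uci commit qos
/etc/init.d/qos restart
```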
There are multiple other projects experimenting with forms of QoS. This is a very active hotbed of research and development.
See Evaluating QoS Behavior for more up-to-date details.
CeroWrt includes the NETEM network emulator. With it, you can simulate delay, packet loss, packet reordering, and many other factors that can affect network connections. For more details about netem, please see the online documentation on its web site.
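A few representative netem invocations, using eth0 as a placeholder interface name; run them on the device whose traffic you want to impair:

```shell
# Add 100ms of delay with +/-10ms of jitter to all egress traffic:
tc qdisc add dev eth0 root netem delay 100ms 10ms

# Switch the emulation to 0.5% random packet loss instead:
tc qdisc change dev eth0 root netem loss 0.5%

# Delay with reordering: 25% of packets are sent immediately, causing reorder:
tc qdisc change dev eth0 root netem delay 10ms reorder 25% 50%

# Remove the emulation entirely:
tc qdisc del dev eth0 root
```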
Packet drops occur throughout the network stack for a variety of reasons, many of them good and more than a few bad. To date, few people have had any insight into how, when, and why packet drops happen. The kernel's packet drop monitor (which requires a separate userspace tool) can poke into the depths of the kernel to uncover this information.
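One such userspace tool is dropwatch, which talks to the kernel's drop-monitor interface. Assuming it is installed and you have root, a typical interactive session looks roughly like this:

```shell
# -l kas resolves drop locations to kernel symbol names:
dropwatch -l kas
# At the dropwatch> prompt:
#   start   - begin reporting where packets are being dropped
#   stop    - stop reporting
#   exit    - quit
```

Each report names the kernel function at which packets were dropped, which is usually enough to distinguish deliberate drops (e.g. by a queue discipline) from pathological ones.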