My webserver is the same machine I use for everything else. So when someone starts downloading 10 files at once at 200KB/s max per tcp session, my web browsing gets very slow. Unacceptable. With thttpd I could easily set webserver-wide bandwidth throttling, but its SSI features leave much to be desired. So nginx.
Throttling bandwidth with nginx is hard. Limiting speeds by connection rate and max connections doesn't really do the job. You either end up with a high max connection count where each connection gets a low $limit_rate, or only one or two max connections and a high $limit_rate. With the former, single files take forever to download with some http clients, while lots of concurrent file downloads behave 'appropriately'. With the latter, single files download fine, but additional concurrent downloads will go over the bandwidth limit. And if you avoid that by setting the max connection limit low, then pages with many files to download (say, 350+ images) will take forever or 503.
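For reference, the stock approach looks something like this. This is only a sketch of the built-in limit_conn/limit_rate directives; the zone name, connection count, and path are placeholders I picked, not my actual config.

http {
    # one slot per client IP, 10MB of shared memory for the counters
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    server {
        location /library/ {
            limit_conn perip 2;   # at most 2 connections per IP
            limit_rate 200k;      # 200KB/s per connection
            # worst case is still 2 x 200KB/s per IP, and every extra IP adds more
        }
    }
}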
My years-long search came to an end when I found https://github.com/bigplum/Nginx-limit-traffic-rate-module. It does per-IP bandwidth throttling from within nginx in a clean and flexible way. I didn't really want per-IP, but since most of my bandwidth problems are generated by one IP at a time and not many IPs at once, this should work too.
http {
    limit_traffic_rate_zone rate $remote_addr 10m; # shared ram
    server {
        location /library/ {
            limit_traffic_rate rate 200k;
        }
    }
}
Unfortunately this module is no longer maintained as of 2016. It does not play well with nginx versions 1.9 and onwards unless you manually patch the incompatibilities in the C source as they come up.
The quest for limiting *all* server traffic never really left my mind, as I'd constantly have problems with limiting by IP. One day I realized that setting the trigger variable for the nginx traffic limit zone to something that is the same for all users of the site at once, like $server_name, might be acceptable; then the entire server would be in the same pool.
limit_traffic_rate_zone thisisaratezone $server_name 10m; # enable per-server RAM for tracking rate limiting
It just needed *something* there. This seems to work better, but I'm still getting plenty of bursts 10x above the limit I actually set. But better.
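Put together it would look something like the block below. This is a sketch; the location path and 200k rate are just carried over from the per-IP example above, and I haven't re-checked the module's README for this exact combination.

http {
    # key on $server_name so every visitor lands in the same bucket
    limit_traffic_rate_zone thisisaratezone $server_name 10m;
    server {
        location /library/ {
            # all connections to this location now share one 200KB/s pool
            limit_traffic_rate thisisaratezone 200k;
        }
    }
}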
My first thought was to try the userspace tool, trickle.

$ sudo /usr/bin/trickle -u 200 /usr/sbin/nginx # for testing; I made an /etc/init.d/trickle-nginx script too

But a little testing reminded me that the preloaded library method it uses won't be inherited by the worker processes, so only the master will be rate limited... and it isn't even sending data. So trickle doesn't work.
I then tried to use linux traffic control (tc) and iptables for this, wherein iptables marks the tcp packet headers so tc knows which connections to throttle.

$ sudo tc qdisc add dev eth0 root handle 1:0 htb default 10
$ sudo tc class add dev eth0 parent 1:0 classid 1:10 htb rate 180kbps ceil 200kbps prio 0
$ sudo iptables -A OUTPUT -t mangle -p tcp --dport 80 -j MARK --set-mark 10
$ sudo tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10

This didn't seem to work either. It is important to note that tc considers "kbps" to be kilobytes per second, not kilobits.
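If the kbps-means-kilobytes thing bothers you, the same class can be spelled out in kilobits instead. A sketch of the equivalent command (1 KB/s = 8 kbit/s, so 180kbps = 1440kbit and 200kbps = 1600kbit); it's only a unit change, not a fix for the larger problem:

$ sudo tc class change dev eth0 parent 1:0 classid 1:10 htb rate 1440kbit ceil 1600kbit prio 0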
Pre-configuration defaults so I don't forget.
$ tc qdisc show
qdisc pfifo_fast 0: dev eth0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
$ tc -s qdisc ls dev eth0
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 3488978147 bytes 1169465 pkt (dropped 0, overlimits 0 requeues 1567)
 rate 0bit 0pps backlog 0b 0p requeues 1567

# how to delete and return to defaults
$ sudo tc qdisc del dev eth0 root
$ sudo iptables -t mangle -v -L --line-numbers
$ sudo iptables -t mangle -D OUTPUT <num-of-mangle-entry>
Post-setup it looks like,
$ tc qdisc show
qdisc htb 1: dev eth0 root refcnt 2 r2q 10 default 10 direct_packets_stat 36
$ tc class show dev eth0
class htb 1:10 root prio 0 rate 1440Kbit ceil 1600Kbit burst 1599b cburst 1600b
$ sudo iptables -t mangle -v -L --line-numbers
Chain OUTPUT (policy ACCEPT 278K packets, 656M bytes)
num   pkts bytes target  prot opt in   out   source     destination
1       49  6456 MARK    tcp  --  any  any   anywhere   anywhere     tcp dpt:www MARK xset 0xa/0xffffffff
But this bandwidth limit affects me on localhost as well! That's not cool. And it seems to be throttling my HTTP requests to other webservers as well as my local HTTP server's outgoing bandwidth. This means my web browsing is slow as shit. Which is what I was trying to avoid. Which means everything is shit. Fucking shit. Shit.
What if I white-list my IP addresses?
iptables -t mangle -A OUTPUT -o eth0 -p tcp -d 127.0.0.1 -j ACCEPT
iptables -t mangle -A OUTPUT -o eth0 -p tcp -d 192.168.1.121 -j ACCEPT
iptables -t mangle -A OUTPUT -o eth0 -p tcp -d <public-ip> -j ACCEPT
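One thing worth noting (my guess, not something I verified at the time): because the MARK rule was added first, -A appends these ACCEPT rules after it, so local packets get marked before they ever reach the whitelist. Inserting them at the top of the mangle OUTPUT chain should make them match first:

iptables -t mangle -I OUTPUT 1 -o eth0 -p tcp -d 127.0.0.1 -j ACCEPT
iptables -t mangle -I OUTPUT 1 -o eth0 -p tcp -d 192.168.1.121 -j ACCEPT
iptables -t mangle -I OUTPUT 1 -o eth0 -p tcp -d <public-ip> -j ACCEPT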
Even doing this does shit. Furthermore, even when I have limited everything to half of my upstream, the minute some jackass begins mirroring my webserver at full speed my latencies go up to multiple seconds. And my CPU usage goes through the roof any time data is being transferred at more than 100KB/s.
tc and iptables require you to know exactly what you are doing and what your particular, idiosyncratic setup demands. They also require a lot of CPU power. For most people, myself included, tc is shit.
I miss thttpd and its simple bandwidth throttling so much.
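For comparison (from memory, so treat the exact syntax as approximate): thttpd throttling was just a throttle file of url-pattern and bytes-per-second pairs passed with -t.

# /etc/thttpd/throttle.conf -- limits are in bytes per second
**              200000    # everything on the server shares ~200KB/s
**.jpg|**.png   50000     # or throttle just the images harder

$ thttpd -t /etc/thttpd/throttle.conf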
Since I couldn't figure out an iptables mangle rule specific enough to match just the webserver's outgoing packets, I'm turning to a third option: wondershaper. wondershaper just uses traffic control (tc) too, but apparently with more finesse. It won't be shaping just nginx traffic, though, and that's a major downside.
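Usage is about as simple as it gets. A sketch; the classic script takes an interface plus downlink and uplink limits in kilobits per second, and the numbers here are mine (1600 kbit/s uplink again corresponds to the 200KB/s target, the downlink figure is arbitrary):

$ sudo wondershaper eth0 20000 1600   # interface, downlink kbit/s, uplink kbit/s

The downlink number barely matters for this; the uplink one is what caps what nginx can push out.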
Type, "/@say/Your message here." after the end of any URL on my site and hit enter to leave a comment. You can view them here. An example would be, http://superkuh.com/rtlsdr.html/@say/Your message here.
You may not access or use the site superkuh.com if you are under 90 years of age. If you do not agree then you must leave now.
The US Dept. of Justice has determined that violating a website's terms of service is a felony under CFAA 1030(a)2(c). Under this same law I can declare that you may only use one IP address to access this site; circumvention is a felony. Absurd, isn't it?
It is my policy to regularly delete server logs. I don't log at all for the tor onion service.
search. (via google)