
14. Controlling flows

In this chapter we are going to try to control flows using the Linux kernel queueing disciplines. Depending on how you compiled your kernel, you may again need to run make menuconfig, reconfigure your options, and recompile and reinstall your kernel.

This chapter is not, and does not pretend to be, a tutorial on implementing QoS (Quality of Service) in Linux. If you have no previous experience with QoS, it is better to read the references at the end of this document first to acquire the concepts required for a QoS implementation.

That said, I am not going to explain in detail each of the commands needed to control flows in Linux, because that is not the goal of this HOWTO. However, implementing some of these techniques will let us show the behaviour of the bandwidth meter (based on libiptc).

First, check whether the QoS options are enabled in your kernel. Run make menuconfig, navigate to Networking options and look for the last entry in that menu, QoS and/or fair queueing. There, enable (or verify that they are active) these options (a quick way to check an existing configuration from the shell is shown after the list):

       [*] QoS and/or fair queueing
       <M> CBQ packet scheduler
       <M> CSZ packet scheduler
       [*] ATM pseudo-scheduler
       <M> The simplest PRIO pseudoscheduler
       <M> RED queue
       <M> SFQ queue
       <M> TEQL queue
       <M> TBF queue
       <M> GRED queue
       <M> Diffserv field marker
       <M> Ingress Qdisc
       [*] QoS support
       [*]   Rate estimator
       [*] Packet classifier API
       <M>   TC index classifier
       <M>   Routing table based classifier
       <M>   Firewall based classifier
       <M>   U32 classifier
       <M>   Special RSVP classifier
       <M>   Special RSVP classifier for IPv6
       [*]   Traffic policing (needed for in/egress)
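
If you already have a configured kernel source tree, a quick way to check these options without entering make menuconfig is to grep the kernel configuration file. A minimal sketch, assuming your kernel source lives in /usr/src/linux (adjust the path to your system):

bash# grep CONFIG_NET_SCHED /usr/src/linux/.config
CONFIG_NET_SCHED=y
bash# grep CONFIG_NET_SCH_CBQ /usr/src/linux/.config
CONFIG_NET_SCH_CBQ=m

A value of y means the option is built into the kernel and m means it is compiled as a module; if a symbol is absent or commented out, reconfigure as described above.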

Save your configuration, recompile your kernel and modules, and reinstall it. We are going to use the CBQ packet scheduler to implement some queues that control the byte flow on the NIC of PC #1.

Personally, I prefer the excellent HTB queueing discipline implemented by Martin Devera, but at present this implementation is not in the standard Linux kernel (though it will be); to use it you have to patch your kernel before recompiling, and it is better not to complicate things further here. I have to say, however, that this queueing discipline is a lot simpler to use than CBQ turns out to be. More information on the HTB queueing discipline is linked at the end of this document.

Having compiled and reinstalled your kernel, you have to install the iproute2 package, which provides the commands used to implement the queues. Download this package from ftp://ftp.inr.ac.ru/ip-routing.

I'm working with version 2.2.4-now-ss001007. To install it, follow these instructions:

bash# cp iproute2-2.2.4-now-ss001007.tar.gz /usr/local/src
bash# cd /usr/local/src
bash# tar xzvf iproute2-2.2.4-now-ss001007.tar.gz
bash# cd iproute2
bash# make

After make compiles the iproute2 package successfully, the ip utility will be in the iproute2/ip directory and the tc utility in the iproute2/tc directory. Copy both of them to the /usr/bin directory:

bash# cp ip/ip /usr/bin
bash# cp tc/tc /usr/bin
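
To verify that both utilities are installed and being picked up from /usr/bin, ask each one for its version; the exact version string will vary with your build:

bash# tc -V
tc utility, iproute2-ss001007
bash# ip -V
ip utility, iproute2-ss001007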

Now, using the tc utility, we are going to create a CBQ queue on the eth0 interface of the PC #1 computer. This queue will have 4 classes as children, and each of these classes will be used to control one of the 4 flows from 192.168.1.1 to 192.168.1.2 on ports 1001 to 1004.

Write and run the following commands:

bash# tc qdisc add dev eth0 root handle 1:0 cbq bandwidth 10Mbit \
avpkt 1000 cell 8

This command creates the main (root) cbq queue 1:0 on the eth0 interface; the bandwidth of this queue is 10Mbit/sec, matching our Ethernet interface.
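
You can verify that the queueing discipline is in place using tc itself; the output should look approximately like this:

bash# tc qdisc show dev eth0
qdisc cbq 1: rate 10Mbit (bounded,isolated) prio no-transmit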

Now write and run:

bash# tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 10Mbit \
rate 1000kbit prio 8 allot 1514 cell 8 maxburst 20 avpkt 1000 bounded

This command creates the main cbq class 1:1. The rate of this class will be 1000kbit/sec.

Now we are going to create 4 classes owned by this class; the classes will have rates of 100kbit, 200kbit, 300kbit and 400kbit respectively. Write and run these commands:

bash# tc class add dev eth0 parent 1:1 classid 1:3 cbq bandwidth 10Mbit \
rate 100kbit prio 5 allot 1514 cell 8 maxburst 20 avpkt 1000

bash# tc class add dev eth0 parent 1:1 classid 1:4 cbq bandwidth 10Mbit \
rate 200kbit prio 5 allot 1514 cell 8 maxburst 20 avpkt 1000

bash# tc class add dev eth0 parent 1:1 classid 1:5 cbq bandwidth 10Mbit \
rate 300kbit prio 5 allot 1514 cell 8 maxburst 20 avpkt 1000

bash# tc class add dev eth0 parent 1:1 classid 1:6 cbq bandwidth 10Mbit \
rate 400kbit prio 5 allot 1514 cell 8 maxburst 20 avpkt 1000
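
At this point you can list the class tree to confirm that the four child classes hang from class 1:1; the output should look approximately like this (abbreviated):

bash# tc class show dev eth0
class cbq 1: root rate 10Mbit (bounded,isolated) prio no-transmit
class cbq 1:1 parent 1: rate 1Mbit (bounded) prio 8
class cbq 1:3 parent 1:1 rate 100Kbit prio 5
class cbq 1:4 parent 1:1 rate 200Kbit prio 5
class cbq 1:5 parent 1:1 rate 300Kbit prio 5
class cbq 1:6 parent 1:1 rate 400Kbit prio 5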

Each of these classes will have an sfq queueing discipline attached to it to dispatch its packets. Write and run these commands:

bash# tc qdisc add dev eth0 parent 1:3 handle 30: sfq perturb 15
bash# tc qdisc add dev eth0 parent 1:4 handle 40: sfq perturb 15
bash# tc qdisc add dev eth0 parent 1:5 handle 50: sfq perturb 15
bash# tc qdisc add dev eth0 parent 1:6 handle 60: sfq perturb 15

These commands create 4 sfq queueing disciplines, one for each class. The sfq queueing discipline is a kind of fairness-enforcing queue: it tries to give every connection on an interface the same opportunity to have its packets dispatched.

Finally, we are going to create filters that assign the flows on ports 1001, 1002, 1003 and 1004 to classes 1:3, 1:4, 1:5 and 1:6 respectively. Write and run as follows:

bash# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
dport 1001 0xffff flowid 1:3

bash# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
dport 1002 0xffff flowid 1:4

bash# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
dport 1003 0xffff flowid 1:5

bash# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
dport 1004 0xffff flowid 1:6
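
As a reminder of the setup from the previous chapter, the test traffic can be generated with netcat. The sketch below assumes a traditional netcat; option syntax varies between versions. On PC #2 (192.168.1.2), start one listener per port:

bash# nc -l -p 1001 > /dev/null &
bash# nc -l -p 1002 > /dev/null &
bash# nc -l -p 1003 > /dev/null &
bash# nc -l -p 1004 > /dev/null &

and on PC #1, feed each connection from /dev/zero:

bash# nc 192.168.1.2 1001 < /dev/zero &
bash# nc 192.168.1.2 1002 < /dev/zero &
bash# nc 192.168.1.2 1003 < /dev/zero &
bash# nc 192.168.1.2 1004 < /dev/zero &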

After running all these commands, check your bw meter (you must have netcat listening on ports 1001 to 1004 on PC #2 and talking on PC #1, as explained in the previous chapter and sketched above, and bw running in current mode with the -c option). You will see something like this:

Current flow values ...
   1099.9k:  108.8k  196.5k  337.9k  456.8k 
   1104.2k:  115.3k  184.9k  339.9k  464.1k 
   1102.1k:  117.3k  174.7k  339.7k  470.5k 
   1114.4k:  113.6k  191.7k  340.7k  468.4k 
   1118.4k:  113.7k  194.3k  340.5k  469.9k 

bw shows us how the flows are being controlled using the queueing disciplines of the Linux kernel. As you can see, the CBQ queueing discipline is not very precise, but you get, more or less, a total flow of approximately 1000kbit (100+200+300+400) on interface eth0.
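
Besides bw, you can cross-check these numbers against the kernel's own counters: tc keeps per-class statistics (bytes, packets, drops and overlimits) that should roughly match what the meter reports:

bash# tc -s class show dev eth0

This prints, for each class, lines of the form "Sent ... bytes ... pkts (dropped ..., overlimits ...)".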

To undo all of this, write and run as follows:

bash# tc qdisc del dev eth0 root handle 1:0 cbq

on PC #1, to delete the main (root) queueing discipline together with its classes and filters.

bash# killall nc

on PC #2 and PC #1, to stop netcat.

bash# iptables -F
bash# iptables -X

on PC #1, to clear iptables rules and chains.

bash# Ctrl-C

on tty1 of PC #1, to stop the bw bandwidth meter.
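
Finally, to avoid retyping all of the above, the setup and teardown can be collected in a small shell script. This is only a convenience sketch of the commands used in this chapter; the script name and the DEV variable are mine, not part of the original setup:

#!/bin/sh
# cbq-demo.sh -- build or remove the CBQ tree used in this chapter
DEV=eth0

case "$1" in
start)
        # root cbq qdisc matching the 10Mbit Ethernet interface
        tc qdisc add dev $DEV root handle 1:0 cbq bandwidth 10Mbit \
                avpkt 1000 cell 8
        # main bounded class at 1000kbit
        tc class add dev $DEV parent 1:0 classid 1:1 cbq bandwidth 10Mbit \
                rate 1000kbit prio 8 allot 1514 cell 8 maxburst 20 \
                avpkt 1000 bounded
        # four child classes (100-400kbit), each with an sfq qdisc
        # and a u32 filter on ports 1001-1004
        i=3
        for rate in 100 200 300 400; do
                tc class add dev $DEV parent 1:1 classid 1:$i cbq \
                        bandwidth 10Mbit rate ${rate}kbit prio 5 \
                        allot 1514 cell 8 maxburst 20 avpkt 1000
                tc qdisc add dev $DEV parent 1:$i handle ${i}0: sfq perturb 15
                tc filter add dev $DEV parent 1:0 protocol ip prio 1 u32 \
                        match ip dport $((998 + i)) 0xffff flowid 1:$i
                i=$((i + 1))
        done
        ;;
stop)
        # deleting the root qdisc removes all classes, qdiscs
        # and filters hanging from it
        tc qdisc del dev $DEV root
        ;;
esac

Run it as sh cbq-demo.sh start to build the tree and sh cbq-demo.sh stop to remove it.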



