
MWRPFrottleDevDiscussion


Overview

This discussion is driving changes we are making to the frottle application. There have been three major things we wanted to address:
  1. Support for multiple interfaces on a machine
  2. Expose magic cookies in application as configuration parameters
  3. Introduce the concept of control domains

This has led to a number of other enhancements, the most significant being the desire to pass a control token from master to master. This was considered necessary to use frottle effectively in a mesh-type network.

Multiple Interfaces

Nodes that form part of a routing backbone have multiple interfaces. As there is only one QUEUE target, we needed to be able to define additional interfaces and split the incoming traffic into the appropriate set of queues for each outgoing interface.
A node may be the master of one domain on one interface and a client in a second domain via a second interface.

Flexible configuration

The initial thought was that we would be running multiple copies of frottle on a multi-interface machine, so we wanted a way to simplify the configuration. This led to adding a number of command line arguments, pre- and post-script callouts, and the externalization of some hard-coded parameters.

Control Domains

It became clear as soon as we started looking at multiple interfaces that we needed to support the concept of a control domain. Domains were added, and clients and masters are now specified in terms of the domain they belong to.

Control Token

This is new work to better support the use of frottle in a mesh environment. The problem faced here is that a mesh uses the same frequency everywhere, and frottle sucks at controlling the traffic from a node that is not directly connected. A multi-master approach (as used in a routed infrastructure) still has problems due to the single frequency used. We are now looking at a co-master approach where masters pass control to the next master in the chain. The chain is a ring that supports splitting and healing.
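
As a rough sketch of what that hand-off might look like in code (the message layout, names, and ring handling below are all hypothetical; none of this is in frottle yet):

#include <stdint.h>

#define MAX_MASTERS 16

/* Hypothetical control token passed from co-master to co-master. */
struct control_token {
    uint32_t domain_id;   /* control domain this token belongs to */
    uint32_t sequence;    /* incremented on every hand-off */
    uint32_t holder_ip;   /* master currently allowed to poll clients */
};

static uint32_t ring[MAX_MASTERS]; /* co-masters in hand-off order */
static int ring_len;

/* Pick the next live master in the ring; skipping a dead entry is
   what "healing" amounts to in this sketch. */
static uint32_t next_master(uint32_t self, int (*alive)(uint32_t))
{
    for (int i = 0; i < ring_len; i++) {
        if (ring[i] != self)
            continue;
        for (int step = 1; step <= ring_len; step++) {
            uint32_t cand = ring[(i + step) % ring_len];
            if (alive(cand))
                return cand;
        }
    }
    return self; /* alone in the ring: keep the token */
}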

Configuration


Frottle Config File

DanFlett: Here's how I'd like to be able to configure a multi-interface version of Frottle...

proposed frottle.conf
daemon 1
verbose 1
logfile /var/log/frottle.log
clientstatsfile /var/www/html/frottle/client.html
masterinfofile /var/www/html/frottle/master.html
#MODE-CLIENT  INTERFACE  MASTER IP       PRIORITY PORTS  PORT    QUEUESIZE
client        wlan0      10.10.144.49    22,53           999     100
client        eth0       10.10.255.1     22,53,5001      -       100
client        eth2       10.10.129.1     -               1001
#MODE-MASTER  INTERFACE  CLIENTMODE(SELF/CLIENT/NONE)  TIMEOUT  POLLPARAMS
master        eth1       self                          100      60000,10,6000,7,5000,5,4000
master        wlan1      -                             200

You get the idea - leaving a parameter blank (if there are no parameters to be specified after it) or specifying '-' (if there are parameters to be specified after it) sets it to its default, where appropriate. This is similar to the way Shorewall sets out its config files.
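
A config parser can handle that convention with a tiny helper; the sketch below is illustrative only (not frottle's actual parser) and treats a missing trailing field or a '-' as "use the default":

#include <string.h>

/* Return the next whitespace-separated field from the line being
   tokenized, or dflt when the field is absent or given as "-".
   Call strtok(line, " \t\n") once first, then this per column. */
static const char *field_or_default(const char *dflt)
{
    char *tok = strtok(NULL, " \t\n");
    if (tok == NULL || strcmp(tok, "-") == 0)
        return dflt;
    return tok;
}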

MW frottle patch

OK, here it is, everything you asked for and a little bit more. The major change from above is the introduction of domains. Define the domains and associate each master/client instance with a domain.

frottle-mw-patch.tar
frottle_0.2.1-MwPre0.5_mipsel.ipk

This is beta code; I have tested the patch on a Minitar MNWAPB (RTL8181 MIPS) and openwrt (Broadcom mipsel).

Change log

MwPre0.3 is now built with iptables 1.3.1 and statically linked (no need for libpthread.o)

MwPre0.4 adds manual override of the RF rate setting in the client, for hosts that do not support /proc/net/wireless

MwPre0.5 adds configuration of rate bands for queues (used in conjunction with pollparams) and supports calling an external command to get the rate

MwPre0.6 adds the priority client option (allowing the specification of a bigger slice for a known high-priority machine, such as an internet gateway or a trunk) and adds multiple sets of queues to finish off the multi-domain work.

MwPre0.7 has a primitive token passing mechanism for passing off control from master to master. Masters and Domains need to be defined in the .conf file.

openwrt version

The openwrt kernel does not include the ip_queue code so it needs to be loaded as a module.
The ipkg for the correct module is/was here: iptables_extra

Install the ipkg then:
 insmod ip_queue.o
 lsmod

and away you go....

README for the patch

This patch contains a number of small changes to frottle 0.2.1

To install:

   apply patch from inside the frottle directory
   patch -p1 < path-to-patch/frottle-0.2.1-mwPre0.1.diff


The changes were done in order to make configuring frottle easier in a
multi-interface environment.

1. New command line arguments
 frottle master | client <interface> <port>
 --mode master | client
    This argument is used to select an alternative configuration file:
      master: /etc/frottle-master.conf
      client: /etc/frottle-client.conf
    Mode parameters still need to be set in the configuration file.

 --interface <interface>
    This argument allows the interface specified in the configuration
    file to be overridden.

 --port <port>
    This argument allows the master port specified in the configuration
    file to be overridden. Care should be used with this argument, as only
    a correct combination of master IP address and master port will allow
    a client to connect to a master instance.
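
For illustration, the option handling could look something like the sketch below (getopt_long-based; the real patch may parse argv differently):

#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    const char *mode = NULL, *iface = NULL;
    int port = 0;
    static struct option opts[] = {
        { "mode",      required_argument, 0, 'm' },
        { "interface", required_argument, 0, 'i' },
        { "port",      required_argument, 0, 'p' },
        { 0, 0, 0, 0 }
    };
    int c;
    while ((c = getopt_long(argc, argv, "m:i:p:", opts, NULL)) != -1) {
        switch (c) {
        case 'm': mode  = optarg; break;        /* picks the .conf file  */
        case 'i': iface = optarg; break;        /* overrides interface   */
        case 'p': port  = atoi(optarg); break;  /* overrides master port */
        default: exit(1);
        }
    }
    printf("mode=%s iface=%s port=%d\n",
           mode ? mode : "(from config)",
           iface ? iface : "(from config)", port);
    return 0;
}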

2. Support for re-named binary.

If the frottle binary is launched using the name frottlemaster
(either renamed, hard- or soft-linked) then it will default to --mode master.
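
Detecting the invocation name is a one-liner with basename(); a sketch (function name illustrative):

#include <libgen.h>
#include <string.h>

/* If launched as "frottlemaster" (renamed, hard- or soft-linked),
   behave as if --mode master had been given. */
static const char *mode_from_argv0(char *argv0)
{
    if (strcmp(basename(argv0), "frottlemaster") == 0)
        return "master";
    return NULL; /* fall back to --mode / the config file */
}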

3. New configuration file options

#pidFile
#pidfile /var/run/frottle.pid

If this option is defined, the process ID will be written to the file specified.
No checking for multiple instances is done at this stage. If the file already
exists then the pid will be appended to the file. On shutdown the file is unlinked
(this will be a problem in the case of multiple instances).
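
The behaviour described amounts to something like the following (a sketch; frottle's actual code will differ in detail):

#include <stdio.h>
#include <unistd.h>

/* Append rather than truncate, so a second instance adds its pid to
   the file instead of clobbering the first, as described above. */
static void write_pidfile(const char *path)
{
    FILE *f = fopen(path, "a");
    if (f) {
        fprintf(f, "%d\n", getpid());
        fclose(f);
    }
}

/* On shutdown the file is simply unlinked, which is where the
   multiple-instance problem noted above comes from. */
static void remove_pidfile(const char *path)
{
    unlink(path);
}
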
Pre- and post-scripts are supported for master and client configurations.
The specified script will be called with the following arguments:
 scriptname interface masterport
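
Something along these lines (illustrative; the patch may build the command differently):

#include <stdio.h>
#include <stdlib.h>

/* Invoke a pre/post script as: scriptname interface masterport */
static void run_callout(const char *script, const char *iface, int port)
{
    char cmd[256];
    snprintf(cmd, sizeof(cmd), "%s %s %d", script, iface, port);
    if (system(cmd) != 0)
        fprintf(stderr, "frottle: callout failed: %s\n", cmd);
}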
 

#Master pre-script
#masterprescript /usr/local/frottle_master_pre.sh

#Client pre-script
#clientprescript /usr/local/frottle_client_pre.sh

#Master post-script
#masterpostscript /usr/local/frottle_master_post.sh

#Client post-script
#clientpostscript /usr/local/frottle_client_post.sh

#Client re-register timeout
#clientreregister 10

The client retry time was hard-coded; this is now configurable. If the client
does not hear from the master after this time, it assumes it has been dropped
and will try to re-connect. 10 was the hard-coded default, which may be a little high.

#Client timeout
#clienttimeout 60

The client will time out if nothing is heard from the master after this time,
and will then fall out of the frottle configuration. This was hard-coded and
is now configurable.

#Client link speed
#clientlink 0

In some cases the IOCTL that is used to return the link speed will fail in
the client. This leaves the client at the fastest link-speed default, which
may not be desirable. Setting clientlink to a non-zero value (the link rate)
overrides the internal default. Use this to reduce the TX window of
clients on a slow link, to prevent them hogging the available time.
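
The fallback logic amounts to the sketch below, using the wireless-extensions SIOCGIWRATE ioctl (illustrative; frottle's actual code will differ):

#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/wireless.h>

/* Return the link rate in Mbit/s; a non-zero clientlink setting
   overrides the ioctl, and an ioctl failure returns 0. */
static int link_speed(const char *iface, int clientlink)
{
    struct iwreq req;
    int rate = 0;
    int fd;

    if (clientlink != 0)
        return clientlink;          /* manual override from the config */

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return 0;
    memset(&req, 0, sizeof(req));
    strncpy(req.ifr_name, iface, IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIWRATE, &req) == 0)
        rate = req.u.bitrate.value / 1000000;   /* bit/s -> Mbit/s */
    close(fd);
    return rate;
}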

4. Bug fixes

Bandwidth allocation was testing for link speed >=5, else ==2, else other.
This was changed to >=5, else >=2, else other.
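
In code the fix looks like this (variable names illustrative):

/* Before the fix, speeds 3 and 4 fell through to the lowest band
   because the middle test was "== 2" instead of ">= 2". */
static int slice_for_speed(int speed, int big, int medium, int small)
{
    if (speed >= 5)
        return big;
    if (speed >= 2)        /* was: speed == 2 */
        return medium;
    return small;
}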

Errors in the management of the client connection state meant that once a
connection was timed out, it would repeatedly go through flushing the queued
packets when this was not necessary.

5. Known Issues
Some more work is needed on the thread join for multiple interface support. This
only affects shutdown and seems to work OK.

The config file parsing logic has an error where it is looking for blank lines.
This is cosmetic but should be fixed at some stage.

The link speed setting from the wireless interface divides the value returned
from the IOCTL call by 1,000,000. It may be that it should be divided by 10,000,000.
As it stands, if the value returned is 54,000,000 then the result is 54.
In the code the time slice is allocated according to >=5, >=2, <2, so as it
stands all links will be in the top allocation category. (This needs checking.)

Netfilter interaction


Frottle uses the iptables QUEUE target to delay outgoing traffic. The QUEUE target was designed to allow userspace applications, like Frottle, to inspect and control network traffic.

The QUEUE target is explained here:
http://www.netfilter.org/documentation/HOWTO//packet-filtering-HOWTO-7.html#ss7.4

Here is the relevant excerpt:


QUEUE is a special target, which queues the packet for userspace processing. For this to be useful, two further components are required:

  • a "queue handler", which deals with the actual mechanics of passing packets between the kernel and userspace; and
  • a userspace application to receive, possibly manipulate, and issue verdicts on packets.

The standard queue handler for IPv4 iptables is the ip_queue module, which is distributed with the kernel and marked as experimental.

The following is a quick example of how to use iptables to queue packets for userspace processing:

# modprobe iptable_filter
# modprobe ip_queue
# iptables -A OUTPUT -p icmp -j QUEUE

With this rule, locally generated outgoing ICMP packets (as created with, say, ping) are passed to the ip_queue module, which then attempts to deliver the packets to a userspace application. If no userspace application is waiting, the packets are dropped.

To write a userspace application, use the libipq API. This is distributed with iptables. Example code may be found in the testsuite tools (e.g. redirect.c) in CVS.

The status of ip_queue may be checked via:

/proc/net/ip_queue

The maximum length of the queue (i.e. the number of packets delivered to userspace with no verdict issued back) may be controlled via:

/proc/sys/net/ipv4/ip_queue_maxlen

The default value for the maximum queue length is 1024. Once this limit is reached, new packets will be dropped until the length of the queue falls below the limit again. Nice protocols such as TCP interpret dropped packets as congestion, and will hopefully back off when the queue fills up. However, it may take some experimenting to determine an ideal maximum queue length for a given situation if the default value is too small.
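
Pulling the excerpt together, a minimal libipq consumer looks roughly like this (a sketch, not frottle's actual code; link with -lipq):

#include <stdio.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <linux/netfilter.h>
#include <libipq.h>

#define BUFSIZE 2048

int main(void)
{
    unsigned char buf[BUFSIZE];
    struct ipq_handle *h;

    h = ipq_create_handle(0, PF_INET);
    if (!h) {
        ipq_perror("ipq_create_handle");
        exit(1);
    }
    /* ask the kernel to copy each queued packet to userspace */
    if (ipq_set_mode(h, IPQ_COPY_PACKET, BUFSIZE) < 0) {
        ipq_perror("ipq_set_mode");
        exit(1);
    }
    for (;;) {
        if (ipq_read(h, buf, BUFSIZE, 0) < 0)
            break;
        if (ipq_message_type(buf) == IPQM_PACKET) {
            ipq_packet_msg_t *m = ipq_get_packet(buf);
            /* a real application could delay or reorder here */
            ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
        }
    }
    ipq_destroy_handle(h);
    return 0;
}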


Presently, Frottle is only capable of acting on one network interface at a time on the host on which it is running. To be part of a truly scalable network, it would be desirable for Frottle to act on any and all of a host's wireless interfaces. For instance, a host may have one interface acting as an Access Point, and two more interfaces making client connections to two remote APs. It would be desirable for Frottle to be able to simultaneously act as a Master on the AP interface and as a Client on each of the client interfaces.


As mentioned in the excerpt above, the QUEUE target requires:
a userspace application to receive, possibly manipulate, and issue verdicts on packets.

Frottle does this, but currently it is only able to manipulate the transmit order of packets in the QUEUE based on port number. This is controlled by the "HIPORT" parameter in the Frottle config file.

For Frottle to be able to act on different interfaces, it needs to be able to manipulate packets in the QUEUE based on their destination IP or Ethernet MAC address as well. Packets being sent out one interface will be within a certain IP address range; packets being sent out another interface will be within a different IP address range.



LIBIPQ is the API that allows userspace applications to inspect and manipulate the packets in the QUEUE. Doing a search on Google turns up some info:
Google Search - libipq
Google Search - "libipq" address
A good place to start is here:
Quick Intro to libipq

From reading the various pages out there on LIBIPQ, it appears it can read a field called "hw_addr" from the "ipq_packet_msg" record. This is the source MAC address of the packet. This is useful to us, as it tells us which interface the packet is destined to leave the host on. It is unclear at this point whether or not packets sent to the QUEUE from the FORWARD and OUTPUT chains will have "hw_addr" set (it looks like they don't). Perhaps QUEUEing packets from the POSTROUTING chain would work better (if this is allowed).



OK, we don't need "hw_addr", iptables MARK, routing table lookups or anything like that. The name of the outgoing interface is contained in the packet message, and LIBIPQ can read it from packets in the QUEUE.
From SuperHac.com:

ipq_packet_msg structure
Just a dump of the ipq_packet_msg structure.

ipq_packet_msg structure defined in /usr/include/linux/netfilter_ipv4/ip_queue.h:


typedef struct ipq_packet_msg {
   unsigned long packet_id;        /* ID of queued packet */
   unsigned long mark;             /* Netfilter mark value */
   long timestamp_sec;             /* Packet arrival time (seconds) */
   long timestamp_usec;            /* Packet arrival time (+useconds) */
   unsigned int hook;              /* Netfilter hook we rode in on */
   char indev_name[IFNAMSIZ];      /* Name of incoming interface */
   char outdev_name[IFNAMSIZ];     /* Name of outgoing interface */
   unsigned short hw_protocol;     /* Hardware protocol (network order) */
   unsigned short hw_type;         /* Hardware type */
   unsigned char hw_addrlen;       /* Hardware address length */
   unsigned char hw_addr[8];       /* Hardware address */
   size_t data_len;                /* Length of packet data */
   unsigned char payload[0];       /* Optional packet data */
} ipq_packet_msg_t;

"outdev_name" is what we're after. We can use this to put packets into subqueues (inside Frottle, in userspace) based on outgoing interface name.

Ongoing discussion

