Monday, 4 January 2016

Dealing with Ultra High Packet Loss

"The Brown Fox was quick, even in the face of obstacles"
  - Ian Munsie, 2016

Over the last couple of weeks my Internet connection has developed a fault which is resulting in rather high packet loss. Even a simple ping test shows up to about 26% packet loss! Think about that - roughly 1 in every 4 packets might get dropped. A technician from my ISP visited last week and a Telstra technician (the company responsible for the copper phone lines) is coming this Friday to hopefully sort it out, but in the meantime I'm stuck with this lossy link.

Trying to use the Internet with such high packet loss really reveals just how poorly TCP is designed for this situation. See, TCP is designed around the assumption that a lost packet means the link is congested and that it should slow down. But that is not the case for my link - a packet has a high chance of being dropped even when there is no congestion whatsoever.

That leads to a situation where anything using TCP will slow down after only a few packets have been sent and at least one has been lost, then a few packets later it will slow down again, and again, and again... While my connection should be able to maintain 300KB/s (theoretically more, but that's a ballpark figure it has achieved in practice in the past), right now I'm getting closer to 3KB/s, and some connections just hang indefinitely (it's hit or miss whether a download can even finish). Interactive traffic is also affected, but fares slightly better - an HTML page probably only needs one packet for the HTML, so there's a 3/4 chance it will load on the first attempt... but every javascript or css file it links to also only has a 3/4 chance of loading, and every image has a lower chance still (since they are larger and will take several packets or more) - some of those will be retried, but some will ultimately give up when several retries are also lost.
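That 3/4-per-packet intuition can be made concrete: if each packet is lost independently with probability p, an object that needs n packets arrives intact on the first attempt with probability (1-p)^n. A quick sketch (illustrative numbers only, not taken from any real measurement):

```python
# Chance that an n-packet object arrives with no retransmissions on a
# link with independent per-packet loss probability `loss`.
def first_try_success(n_packets, loss=0.26):
    return (1 - loss) ** n_packets

# ~74% for a single packet, ~30% for four, and vanishingly small for a
# large image spanning dozens of packets:
for n in (1, 4, 16, 64):
    print(n, first_try_success(n))
```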

Now, TCP is a great protocol - the majority of the Internet runs on it and things generally work pretty well, but it's just not designed for this situation (and there are a couple of other situations it is not suited to either, such as very high latency links - it will not be a good choice as we advance further into space, for example). The advent of WiFi led to some improvements in congestion avoidance protocols and tunables so that it doesn't immediately assume that packet loss means congestion, but even then it can only tolerate a very small amount of packet loss before performance starts to suffer - and experimenting with different algorithms and tunables made no appreciable difference to my situation whatsoever.

So, I started thinking - what we need is a protocol that does not take packet loss to mean congestion. This protocol would instead base its estimate of the available bandwidth on how much data was actually being received, and more to the point - how that figure changed as it varied how much data was being transmitted.

So, for instance, suppose it started transmitting at (let's pick an arbitrary number) 100KB/s and the receiver replied to tell the sender that it was receiving 75KB/s (25% packet loss). At this point TCP would go "oh shit, congestion - slow down!", but our theoretical protocol would instead try sending 125KB/s to see what happens - if the receiver replies to say it is now receiving 100KB/s then the sender knows it has not yet hit the bandwidth limit and the discrepancy is just down to packet loss. It could then increase to 200KB/s, then 300KB/s, until it finally finds the point where the receiver is no longer able to receive any more data.

It could also try reducing the data being sent - if there is no change in the amount being received then it knows it was sending too fast for no good reason, while if the received rate drops then it knows the original rate was ok. The results would of course need to be smoothed out to cope with real world fluctuations, and the algorithm would have to periodically repeat this experiment to cope with changes in actual congestion, but with some tuning the result should be quite a bit better than what we can achieve with TCP in this situation (at least for longer downloads over relatively low latency links that can respond to changes in bandwidth faster - this would still not be a good choice for space exploration).
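The probing idea above could be sketched roughly like this. This is a toy illustration, not the actual protocol - the class name, the 0.9 tolerance, and the 1.25x/0.8x step sizes are all made-up constants. The key observation it encodes is that random loss keeps the delivery ratio roughly constant as the send rate rises, while real congestion makes it fall:

```python
class ProbingRateController:
    """Toy rate controller: probe upwards while the delivery ratio
    (received rate / sent rate) holds steady, back off when it falls.
    All constants are arbitrary illustrations."""

    def __init__(self, start_rate=100_000):
        self.rate = start_rate      # bytes/sec we are currently sending
        self.last_ratio = None      # delivery ratio from the last period

    def update(self, recv_rate):
        ratio = recv_rate / self.rate
        if self.last_ratio is None or ratio >= self.last_ratio * 0.9:
            # Delivery kept pace with the higher rate, so the gap is
            # just random loss - keep probing upwards.
            self.rate = int(self.rate * 1.25)
        else:
            # Delivery ratio fell when we sped up - we passed the real
            # bottleneck, so back off.
            self.rate = int(self.rate * 0.8)
        self.last_ratio = ratio
        return self.rate
```

With 26% random loss the controller keeps ramping up (the ratio stays near 0.74 at every rate), and only backs off once the received rate stops tracking the send rate.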

This protocol would need to keep track of which packets have been transmitted but not yet acknowledged, and just resend them after a while. It should not stop and wait for acknowledgements - if it has other packets that haven't been sent yet it can just send those and resend unacknowledged packets a little later, or if only a few packets remain it should opportunistically resend them until they are acknowledged. It would want to be a little smart in how acknowledgements themselves are handled - in this situation an acknowledgement has just as much chance of being lost as a data packet, and each lost acknowledgement would mean the packets it was trying to acknowledge will be resent. But we can make some of these redundant, acknowledging each packet several times, so that the sender has the best chance of seeing at least one acknowledgement before it tries to resend the packet.
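Those two bookkeeping jobs - the sender tracking unacknowledged packets and the receiver repeating acknowledgements - might look something like this sketch (class names, the REDUNDANCY count and the resend interval are all assumptions, not the real implementation):

```python
import time

class SenderWindow:
    """Track sent-but-unacknowledged chunks; resend any that have
    been outstanding longer than `resend_after` seconds."""

    def __init__(self, resend_after=0.5):
        self.pending = {}            # seq -> (payload, last_sent_time)
        self.resend_after = resend_after

    def sent(self, seq, payload):
        self.pending[seq] = (payload, time.monotonic())

    def acked(self, seqs):
        for seq in seqs:
            self.pending.pop(seq, None)   # duplicate ACKs are harmless

    def due_for_resend(self):
        now = time.monotonic()
        return [seq for seq, (_, t) in self.pending.items()
                if now - t > self.resend_after]

class ReceiverAcker:
    """Repeat each received sequence number in REDUNDANCY successive
    ACK packets, so a single lost ACK doesn't cause a resend."""
    REDUNDANCY = 3

    def __init__(self):
        self.recent = []             # [seq, sends_remaining] pairs

    def received(self, seq):
        self.recent.append([seq, self.REDUNDANCY])

    def next_ack_packet(self):
        seqs = [s for s, n in self.recent]
        for entry in self.recent:
            entry[1] -= 1
        self.recent = [e for e in self.recent if e[1] > 0]
        return seqs
```

With 26% loss per packet, three redundant copies of an ACK drop the chance of all of them being lost to about 0.26^3 ≈ 1.8%, at the cost of slightly larger ACK packets.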

So, I've started working on an implementation of this in Python. This is very much a first cut and should largely be considered a highly experimental proof of concept - it currently just transfers a file over UDP between two machines, has no heuristics to estimate available bandwidth (I just tell it what rate to run at), and its acknowledgement & resend systems need some work to reduce the number of unnecessary packets being resent, but given this download was previously measured in Bytes per second (and Steam was estimating this download would take "more than 1 year", so I had my remote server download it using steamcmd), I'd say this is a significant improvement:

Sent: 928M/2G 31% (238337 chunks, 69428 resends totalling 270M, 101 not acked) @ 294K/s
Received: 928M (238322 chunks, 5672 duplicates) @ 245K/s

The "101 not acked" is just due to my hardcoding that as the maximum number of unacknowledged packets that can be pending before it starts resending - it needs to be switched to use the length of time that has elapsed since the packet was last sent, compared to the latency of the network, plus some heuristics. With some work I should also be able to get the number of unnecessary resends down quite a bit (though 5672 / 69428 is only about 8%, which is actually pretty good - this is resending 30% of the packets on a link with 26% packet loss).

Feel free to check out the code - just keep in mind that it is still highly experimental (and one 3GB file I transferred earlier ended up corrupt and had to be repaired with rsync - still need to investigate exactly what happened there) and the usage is likely to change so I won't document how to use it (hint: it supports --help):

Thursday, 24 December 2015

Stereo Photography

I have two main hobbies at the moment - I'm one of the top currently active shaderhackers that make video games work in stereo 3D and one of the developers on 3DMigoto to make this possible, and I am also into photography. I sometimes combine these two hobbies in the form of stereo photography, and was recently asked about this subject.

Stereo photography can become a rather tricky subject due to some (unsolvable) technical issues I'll touch on a little below, but it can be quite fun nevertheless.


The camera I mostly use for this is a Fujifilm FinePix Real 3D W3:

It includes two lenses separated by a distance similar to that between human eyes (actually a little wider than mine) and takes two photos of the same subject simultaneously from different perspectives (it has other 3D modes as well, but nothing that couldn't be done with a regular camera). It also has a glasses-free 3D display built into the camera ("sweet spot" based, meaning you have to look at it straight on), which lets you see in advance how the 3D photos will look, and is handy for showing subjects themselves in 3D, which they always like.

It is also possible to take stereo photographs with any camera by taking two photos from slightly different perspectives, but it can be difficult to get the orientation right between the two shots, and if the subject moves (or wind blows a leaf, etc.) the photos will not quite match up between the eyes. There are various rigs available to remove some of the error from this process.

It is also possible to use two individual (preferably identical) cameras simultaneously if their settings (focal point, focal length, f-stop) are identical and their shutters are synchronised. At some point, I'd really like to try this set up using two DSLRs with "tilt-shift" lenses rather than ordinary lenses, as my experience working with stereo projections in computer graphics leads me to believe that could result in a superior stereo photograph if set up correctly with a known display size, but trying that would be somewhat expensive and I have never heard of anyone else doing it.

Viewing Options

There are a number of options available to view a stereo photograph, each with their advantages and disadvantages: computer monitors, TVs or projectors using either active shutter glasses or passive polarised glasses, anaglyph (red-cyan) glasses with any display, displays / photographs with a lenticular lens array over the top for glasses-free 3D viewing, or just simply using the cross-eyed or distance viewing techniques to see a 3D photo with no special display, or by using the mirror technique.

I personally have a laptop with a 3D display (no longer being manufactured), and a 3D DLP projector (BenQ W1070).

3D computer monitors usually use nvidia 3D Vision and are 120Hz (or higher) active displays, and the V2 ones feature a low-persistence backlight (it turns off while the glasses switch eyes to reduce crosstalk and increase the perceived brightness). These use nvidia's proprietary active shutter glasses, which run at 60Hz per eye. These displays are a pretty good choice, but do suffer from some degree of crosstalk and depend on nvidia's proprietary drivers (also, the documentation suggests that a Quadro card may be required for Linux, though I have seen reports that it might be possible to make it work with a GeForce card like we do in Windows).

3D televisions have several different 3D formats they may use. Side-by-side is usually the easiest option (though not necessarily the best, as it halves the horizontal resolution) and is supported by geeqie and mplayer. 3D televisions are a poor choice for stereo content as they tend to suffer from exceptionally bad crosstalk thanks to the long time it takes the pixels to change (that is, each eye can see part of the image intended for the other eye), and they tend to have pretty high latency (fine for photos, not good for gaming), but they have the advantage that they are fairly common and you may already have one. Which glasses they use and whether they are active or passive will depend on the specific TV. I believe that some use DLP glasses, which are standard.

For 3D projectors we only really consider 3D DLP projectors. These are similar to 3D TVs, but they are generally a much better choice - they have zero crosstalk thanks to the speed at which the DLP mirrors are able to switch (much faster than even the best LCD), and when used for gaming they generally have much lower latency than TVs. Their disadvantages are the space required (short throw versions are available for smaller rooms), the need to keep the room dark (or use a rather expensive black projector screen), and the need to replace the bulb every now and then. The active DLP glasses they use are a standard, so you are not forced to use the projector's own brand of glasses, though beware that the projector probably won't come with any and they will need to be purchased separately. The IR signal used to synchronise the glasses is emitted from the projector and simply bounced off the projector screen.

Given the typical screen size of a projector, these have the highest risk of violating infinity for pre-rendered content (displaying an object's left and right images further apart than your eyes), and photos may require a parallax adjustment to offset their left and right images before they can be comfortably viewed. Movies are already calibrated for a larger screen (IMAX), so there is no need to worry there (though 3D movies also generally suck as a result of this), and games can calibrate to whatever screen size they are being used with for the best result.

Anaglyph glasses are a low-cost option ($2 from ebay) that can be used with any display, but I would not recommend this for anything other than trying out 3D since the false colours and high crosstalk result in eye-strain. I cannot tolerate anaglyph for more than a few minutes, whereas I can comfortably wear active shutter glasses all day with 3D games. In Linux, geeqie and mplayer can both output stereo content in several forms of anaglyph (compromising between more realistic colours and less crosstalk between the eyes).

Displays with a lenticular lens array do not require glasses to view - the Fujifilm camera I use has one of these on the back. They usually require the viewer to have their head in a specific position ("sweet spot"), though there are some that use eye-tracking to compensate for this in real time and can support a very small number of viewers anywhere in the room (I'm not sure if any of those are consumer grade yet though).

Fujifilm also produces a 3D photo frame aimed at users of their camera, with the same sort of lenticular lens array over it. I have yet to purchase one as I have my doubts about its general usefulness - since it still has a sweet spot, the viewer must stand in a specific position and cannot enjoy the photos from anywhere in the room.

It is also possible to print out a photo with the left and right views interlaced and place a lenticular lens array on the photo itself, allowing for 3D prints. Fujifilm has a service to do this, but it is not available in Australia and I have yet to track down an alternative print service available here. Apparently it is possible to purchase the supplies to do this yourself.

The cross-eyed and distance viewing methods do not require any special displays as they are simply techniques you can use to view a pair of stereo images placed side-by-side. The images must be fairly close together and should not be more than about 7cm or so wide, perhaps even less. The further apart the images are on the screen, the harder these techniques are to achieve. They will not give you the full impact of using glasses with a full 3D display, but they don't cost anything and with a bit of practice can become easy.

This is an example of a photo I took with the left and right reversed for cross-eyed viewing. The trick is to go cross-eyed until the two images merge into one. To help practice this technique, hold your finger up half way between your face and the display and look at your finger instead of the display. Focus on your finger and slowly move it forwards or backwards until the images on the display behind it have merged together, then try to refocus your eyes on the 3D image without pointing them back at the screen. It may take a few attempts while you get used to the technique.

This image is set up for the distance viewing method. For this method you need to relax your eyes and allow them to defocus from the screen and look behind the display until the images merge, then try to refocus on the image without looking back at the screen.

The mirror technique works by placing a mirror in front of your nose (in this case facing to the left) so you can see a reflection of the image in the mirror. Focus on the image in the mirror and it should pop into 3D. This can be easier than the above techniques since it does not require your eyes to be looking in a different direction to their focus, and can comfortably be used to view larger stereo images, though it can be difficult to fit the entire image in the mirror (you may have to move your head back or forwards). Also, since most mirrors are imperfect (especially at this angle) they may show a double image (click for a larger version which may be easier to use this technique):


I've found that there are certain subjects that work well in stereo that don't work at all in 2D, yet just as many that work better in 2D than 3D. If you ever see a scene that looks really interesting to your eyes, but plain and uninteresting on a 2D photo when the depth has been lost (or replaced with a depth of field blur) it might just be a candidate to try in stereo - here's a good example of this:

[stereo photo - crosseyed / distance / mirror left / mirror right / anaglyph versions]

In 2D all the rocks blend together and it becomes a plain and uninteresting shot, but in 3D the individual rocky outcroppings can easily be distinguished from one another and the shot is interesting. Here's another example:

[stereo photo - crosseyed / distance / mirror left / mirror right / anaglyph versions]

In 2D there is nothing interesting about this shot and I would delete it, but in 3D the depth of the hole is apparent and the shot is interesting (still not really a keeper, just interesting to show the 3D).

If the subject will not gain much from 3D, it may be better shot with the additional control that a DSLR provides in 2D and without the technical problems that stereo photography brings. 3D tends to work better for closer subjects rather than those further away, and when the subject links multiple depths together.

If the subjects are too far away or too far apart they may appear as layered 2D images, which can be ok, but does not really do stereo photography justice. Zooming in on a distant subject with the camera will not provide the same stereo effect as moving closer to it (the same thing happens in 2D - you might be familiar with the dolly zoom effect, but in 3D it is far more pronounced).

For instance, this photo did not gain much from being shot in stereo as everything is just too far away and the effect is not very pronounced (displaying this on a larger screen may help a little):

[stereo photo - crosseyed / distance / mirror left / mirror right / anaglyph versions]

Stereo photography can work especially well to show detail that is lost in a 2D image - most photographers will see running water and immediately set their camera to use a longer exposure time to get that classic artistic streaking effect, but in 3D you might do the opposite and try to freeze the water in the frame so you can examine its structure in detail (I have better examples, but not that I can post here):

[stereo photo - crosseyed / distance / mirror left / mirror right / anaglyph versions]

In video games, playing in stereo brings out a lot of detail that players would usually ignore - grass, leaves and rocks are no longer just there to "not look weird because they are missing" - they now have real detail and players will stop and admire just how much effort the 3D artist put into them (or in some cases how little). The same works in a stereo photo - if I were taking these in 2D I would probably have focused on an individual flower or leaf and used depth of field to emphasise it, but in 3D the wider scene is interesting as the detail on every single flower, leaf and blade of grass is apparent (if possible, best viewed on a larger screen to see the detail more clearly):

[stereo photo - crosseyed / distance / mirror left / mirror right / anaglyph versions]

[stereo photo - crosseyed / distance / mirror left / mirror right / anaglyph versions]

[stereo photo - crosseyed / distance / mirror left / mirror right / anaglyph versions]


The Fujifilm camera should only be used in landscape orientation when both lenses are used, since the lenses must be aligned horizontally - otherwise the images will be misaligned between the eyes, causing eye-strain, and will not be pleasant to view in stereo (if possible at all). This can be corrected in post, but only to a point - if the photo was a full 90 degrees out it will not be possible to correct (you could still salvage either of the two images as a 2D photo).

That's not to say that portraits can't be taken in stereo, but the lenses have to be aligned horizontally, whether that means using a different rig, or taking a wider angle landscape shot and cropping it to portrait.

A stereo camera sees the world in much the same way our eyes do:

\   \    /   /
 \   \  /   /
  \   \/   /
   \  /\  /
    \/  \/

But the problem is that this is not beamed directly into our eyes - it has to be shown on an intermediate display, and we don't quite see that display the same way. There's not much that can be done about this in photography or cinematography, which is one of several reasons that 3D movies are usually not considered to be very good. I do think that a pair of tilt-shift lenses could help here, but even that would not help with the fact that we do not know ahead of time what size display will be used to view the image later.

The reason the display size is important is that if the left and right images of an object are displayed on the screen further apart than the viewer's eyes (regardless of how far away the display is), the object will appear to be beyond infinity, which quickly becomes uncomfortable or impossible to view. The only way to combat this is to shift the offset of the two images until nothing is more than about 7cm apart on the largest display the content might ever be shown on. Displaying the content on a smaller screen will then quickly diminish the strength of the stereo effect - this is another reason 3D movies are considered poor: they are calibrated for an IMAX screen, so their 3D effect is reduced on anything smaller, and by the time you are viewing one in a home theatre there is almost no 3D left.
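The arithmetic behind that rule is simple enough to sketch: the on-screen separation of an object is a fixed fraction of the image width, so the same photo that is comfortable on a laptop can violate infinity on a projector screen. A quick illustration (the 2% parallax and the screen widths are made-up example numbers; the ~7cm limit is the eye-separation figure used above):

```python
# Maximum comfortable on-screen separation, roughly the distance
# between human eyes (the ~7cm figure discussed in the post).
EYE_SEPARATION_CM = 7.0

def separation_on_screen(parallax_fraction, screen_width_cm):
    """On-screen distance between left and right images of an object
    whose parallax is a fixed fraction of the image width."""
    return parallax_fraction * screen_width_cm

def violates_infinity(parallax_fraction, screen_width_cm):
    return separation_on_screen(parallax_fraction, screen_width_cm) > EYE_SEPARATION_CM

# A background object offset by 2% of the image width:
print(violates_infinity(0.02, 30))    # 30cm laptop screen: 0.6cm apart - fine
print(violates_infinity(0.02, 250))   # 2.5m projector screen: 5cm - still ok
print(violates_infinity(0.02, 400))   # 4m screen: 8cm - beyond infinity
```

Shrinking the screen never violates infinity, but it shrinks every separation proportionally, which is exactly why the 3D effect fades away on smaller displays.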

But video games do not suffer this same problem - they are rendered live, know the size of the display they are being rendered on, and can use this information to skew the projection so the viewing frustum for each eye touches the edge of the screen at the point of convergence; plus they can dial the overall strength of the 3D effect and the point of convergence up and down as desired:

\-    \                           /    -/
  \-   \                         /   -/
    \-  \                       /  -/
      \- \      screen of      / -/
        \-\     known size    /-/
           \-----------------/   <-- point of convergence
            \\-           -//
             \ \-       -/ /
              \  \-   -/  /
               \   \-/   /
                \ -/ \- /
                 o     o

3D screenshots of games are still a problem however - if they are scaled up to a larger display they may violate infinity, and if they are scaled down to a smaller display they will have a reduced 3D effect. Now that you know this, here are some screenshots I have taken in various games, calibrated to a 17" display, so you can compare how they look with the photos:

Without the nvidia plugin that site is pretty useless, but I made a user script to add a download button that gets at the raw side-by-side images, which can be saved as .jps files and opened with a stereo photo viewer such as geeqie (Linux), sView (Windows, Linux, Mac, Android) or nvidia's photo viewer (Windows):

Tuesday, 28 August 2012

Nokia N9 Bluetooth PAN, USB & Dummy Networks

Please note: All of these instructions assume you have developer mode enabled and are familiar with using the Linux console. One of the variants of dummy networking I present here also requires a package to be installed with Inception or use of an open-mode kernel to disable aegis. I present an alternative method to use a pseudo-dummy network for people who do not wish to do that.


Earlier this year I bought a Nokia N9 (then took it in for service TWICE due to a defective GPS, then returned it for a refund since Nokia had returned it un-repaired both times, then bought a new one for $200 less than I originally paid, then bought a second for my fiancé).

The SIM card I use in the N9 is a pretty basic TPG $1/month deal, which is fine for the small amount of voice calls I make, but its 50MB of data per month is not really enough, so I'd like it to use alternative networks wherever possible.

When working on another computer with an Internet connection, I could simply hook up the N9 via USB networking and have the computer give it a route to the Internet. That works well, but has the problem that any applications using the N9's Internet Connectivity framework (anything designed for the platform is supposed to do this via libconic) would not know that there was an Internet connection and would refuse to work - so I had to find a way to convince them that there was an active Internet connection using a dummy network. Also, this obviously wouldn't work when I was away from a computer.

I also happen to carry a pure data SIM card in my Optus MyTab with me all the time (being my primary Internet connection), so when I'm on the go I'd like to be able to connect to the Internet on the N9 via the tablet rather than use the small amount of data from the TPG SIM.

The MyTab is running CyanogenMod 7 (I'm not a fan of Android, but at $130 to try it out the price was right), so I am able to switch on the WiFi tethering on the tablet and connect that way, but it has a couple of problems:

  • It needs to be manually activated before use
  • It needs to be manually deactivated to allow the bluetooth tethering to work
  • It isn't very stable (holding a wakelock helps a lot - the terminal application can be used for this purpose)
  • It's a bit of a battery drain (at least the tablet has a huge battery)

The MyTab also supports tethering over bluetooth PAN (which I regularly use at home), so it made a lot of sense to me to connect the N9 to the tablet using that as well when I am out and about. Unfortunately, the N9 does not come with any software to connect to a bluetooth network, and I couldn't manage to find anyone else who had successfully done this (There are a couple of threads discussing it).

Fortunately, the N9 has a normal Linux userspace under the hood (one reason I'd take this over Android any day), which includes bluez 4.x and as such I was able to use that to make it do bluetooth PAN.

USB Network

Let's start with USB Networking since it is already supported on the N9 and works out of the box once developer mode is enabled (select SDK mode when plugging in).

Here's a few tricks you can do to streamline the process of using the USB network to gain an Internet connection. You will also want to follow the steps under one of the Dummy Networking sections below to allow applications (such as the web browser) to use it.

On the host, add this section to your /etc/network/interfaces (this is for Debian based distributions, if you use something else you will have work out the equivalent):

allow-hotplug usb0
iface usb0 inet static
    up iptables -t nat -I POSTROUTING -j MASQUERADE
    up iptables -A FORWARD -i usb0 -j ACCEPT
    up iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
    up echo 1 > /proc/sys/net/ipv4/ip_forward
    down echo 0 > /proc/sys/net/ipv4/ip_forward
    down iptables -F FORWARD
    down iptables -t nat -F POSTROUTING

Next, modify the same file on the N9 so that the usb0 section looks like this (this section already exists - I've just extended it a little):

auto usb0
iface usb0 inet static
    up /usr/lib/sdk-connectivity-tool/
    down /usr/lib/sdk-connectivity-tool/ stop
    up echo nameserver >> /var/run/resolv.conf
    up echo nameserver >> /var/run/resolv.conf
    down rm /var/run/resolv.conf

Now whenever you plug in the N9 and choose SDK mode it should automatically get an Internet connection with no further interaction required and you should be able to ping hosts on the Internet :)

But, you will probably notice that most applications (like the web browser) will still bring up the "Connect to internet" dialog whenever you use them and will refuse to work. To make these applications work we need to create a dummy network that they can "connect" to, while in reality they actually use the USB network.

    USB Networking Notes:
  • The iptables commands on the host will alter the firewall and routing rules to allow the N9 to connect to the Internet through the host. If you use your own firewall with other forwarding rules you may want to remove those lines and add the appropriate rules to your firewall instead.
  • The above commands will turn off all forwarding on the host and purge the FORWARD and POSTROUTING tables when the N9 is unplugged - if your host is a router for other things you definitely will want to remove those lines.
  • The two IP addresses used for the DNS lookups on the N9 are those of OpenDNS - you might want to replace them with some other appropriate servers. OpenDNS should be accessible from any Internet connection, which is why I chose them.
  • The N9 will use the most recently modified file under /var/run/resolv.conf* (specifically those listed in /etc/dnsmasq.conf) for DNS lookups, which means that connecting to a WiFi/3G network AFTER bringing up the USB network would override the DNS settings. I suggest setting the DNS settings for your dummy network to match to avoid that problem.
  • The N9 doesn't run the down rules when it should, rather they seem to be delayed until the USB cable is plugged in again, when they are run immediately before the up rules. Because of the previous note, this isn't really an issue for the dnsmasq update, but it may be an issue if you wanted to do something more advanced.
  • Alternatively, there is an icd2 plugin for USB networking for the N900 available on gitorious. I haven't had a look at this yet to see if it works on the N9 or how it compares to the above technique. This would require installation with Inception.

Dummy Network

This approach to setting up a dummy network isn't for everyone. You are going to need to compile a package in the Harmattan platform SDK (or bug me to upload the one I built somewhere) and install it on the device with Inception, or use an open mode kernel. If you don't feel comfortable with this, you might prefer to use the technique discussed in the Alternative Dummy Network section instead.

First grab the dummy icd plugin from

[host]$ cd /scratchbox/users/$USER/home/$USER
[host]$ git clone git://
[host]$ scratchbox
[sbox]$ sb-menu
[sbox]$ cd libicd-network-dummy
[sbox]$ dpkg-buildpackage -rfakeroot

Now copy /scratchbox/users/$USER/home/$USER/libicd-network-dummy_0.14_armel.deb to the N9, then install and configure it on the N9 with:

[N9]$ /usr/sbin/incept libicd-network-dummy_0.14_armel.deb

[N9]$ gconftool-2 -s -t string /system/osso/connectivity/IAP/DUMMY/type DUMMY
[N9]$ gconftool-2 -s -t string /system/osso/connectivity/IAP/DUMMY/name 'Dummy network'

[N9]$ devel-su
[N9]# /sbin/initctl restart xsession/icd2

Next time the connect to Internet dialog appears you should see a new entry called 'Dummy network' that you can "connect" to so that everything thinks there is an Internet connection, while they really use your USB or bluetooth connection.

Alternative Dummy Network

This isn't ideal in that it enables the WiFi & creates a network that nearby people can see, but it does have the advantage that it works out of the box and does not require Inception or Open Mode.

Open up settings -> internet connection -> create new connection

Fill out the settings like this:

Connection name: dummy
Network Name (SSID): dummy
Use Automatically: No
network mode: ad hoc
Security method: None

Under Advanced settings, fill out these:

Auto-retrieve IP address: No
IP address:
Subnet mask:
Default gateway:

Auto-retrieve DNS address: No
Primary DNS address:
Secondary DNS address:

These are the OpenDNS servers - feel free to substitute your own.

Then if the 'Connect to internet' dialog comes up you can connect to 'dummy', which will satisfy that while leaving your real USB/bluetooth network alone.

Bluetooth Personal Area Networking (PAN)

This is very much a work in progress that I hope to polish up and eventually package up and turn into an icd2 plugin so that it will nicely integrate into the N9's internet connectivity framework.

First things first - you will need to enable the bluetooth PAN plugin on the N9, by finding the DisablePlugins line in /etc/bluetooth/main.conf and removing 'network' from the list so that it looks something like:


# List of plugins that should not be loaded on bluetoothd startup
# DisablePlugins = network,hal
DisablePlugins = hal

# Default adaper name

Then restart bluetooth by running:

[N9]$ devel-su
[N9]# /sbin/initctl restart xsession/bluetoothd

Until I package this up more nicely you will need to download my bluetooth tethering script from:

You will need to edit the dev_bdaddr in the script to match the bluetooth device you are connecting to. Note that I will almost certainly change this to read from a config file in the very near future, so you should double check the instructions in the script first.

Put the modified script on the N9 under /home/user/

You first will need to pair with the device you are connecting to in the N9's bluetooth GUI like usual.

Once paired, you may run the script from the terminal with develsh -c ./

The bluetooth connection will remain up until you press enter in the terminal window. Currently it does not detect if the connection goes away, so you would need to restart it in that case.

For convenience you may create a desktop entry for it by creating a file under /usr/share/applications/blue-tether.desktop with the following contents:

[Desktop Entry]
Name=Blue Net
Exec=invoker --type=e /usr/bin/meego-terminal -n -e develsh -c /home/user/

Again, this is very much an active work in progress - expect to see a packaged version soon, and hopefully an icd2 plugin before too long.

One Outstanding Graphical Niggle

You may have noticed that the dummy plugin doesn't have its own icon - in the connect to Internet dialog it seems to pick a random icon, and once connected the status bar displays it as though it were a cellular data connection. As far as I can tell, the icons (and other connectivity related GUI elements) are selected by /usr/lib/conniaptype/lib* which is loaded by /usr/lib/ which is in turn used by /usr/bin/sysuid. I haven't managed to find any API references or documentation for these, and I suspect that, being part of Nokia's GUI, they fall into the firmly closed source side of Harmattan. This would be nice to do properly if I want to create my own icd2 plugins, so if anyone has some pointers, please leave a note in the comments.

Why is Inception required for real dummy networking?

Well, it's because the Internet Connectivity Daemon requests CAP::sys_module (i.e. the capability to load kernel modules):

~ $ ariadne sh
Password for 'root':

/home/user # accli -I -b /usr/sbin/icd2

Because of this, aegis will only allow it to load libraries that originated from a source that has the ability to grant CAP::sys_module, which unfortunately (but understandably given what the capability allows) is only the system firmware by default, so attempting to load it would result in this (in dmesg):

credp: icd2: credential 0::16 not present in source SRC::9990007
Aegis: credp_kcheck failed 9990007
Aegis: verification failed (source origin check)

Ideally the developers would have thought of this and separated the kernel module loading out into a separate daemon, so that icd2 would not require this credential and would therefore allow third-party plugins to be loaded. Since that is not the case, we have to use Inception to install the dummy plugin from a source that has the ability to grant the same permissions that the system firmware enjoys. (Note that the library does not actually request any permissions, because libraries always inherit the permissions of the binary that loaded them - it just needs to have come from a source that could have granted it that permission.)

Also, if anyone could clarify what the icd2::icd2-plugin credential is for I would appreciate it - I feel like I've missed something, because its purpose as documented (to load icd2 plugins) seems rather pointless to me (icd2 loads libraries based on gconf settings, which it can do just as well without this permission... so what is the point of it?).

Thursday, 23 February 2012

Tiling tmux Keybindings

When most people use a computer, they are using either a compositing or stacking window manager - which basically means that windows can overlap. The major alternative to this model is known as a tiling window manager, where the window manager lays out and sizes windows such that they do not overlap each other.

I started using a tiling window manager called wmii some years ago after buying a 7" EeePC netbook and trying to find alternative software more suited to the characteristics of that machine. Most of the software I ended up using on that machine I now use on all of my Linux boxes, because I found that it suits my workflow so much better.

Wmii as a window manager primarily focuses on organising windows into tags (like multiple desktops) and columns. Within a column windows can either be sized evenly, or a single window can take up the whole height of the column, optionally with the title bars of the other windows visible (think minimised windows on steroids).

Wmii is very heavily keyboard driven (which is one of its strengths from my point of view), though a mouse can be used for basic navigation as well. It is also heavily extensible with scripting languages, and in fact almost all interactions with the window manager are driven by a script. It defaults to using a shell script, but also ships with equivalent python and ruby scripts (the base functionality is the same in each), and is easy to extend.

By default keyboard shortcuts provide ways to navigate left and right between columns, up and down between windows within a column, and to switch between 10 numbered tags (more tags are possible, but rarely needed). Moving a window is as simple as holding down shift while performing the same key combos used to navigate, and columns and tags are automatically created as needed (moving a window to the right of the rightmost column would create a new column for example), and automatically destroyed when no longer used.

Recent versions of wmii also work really well with multiple monitors (though there is still some room for improvement in this area), allowing windows to really easily be moved between monitors with the same shortcuts used to move windows between columns (and the way it differentiates between creating a new column on the right of the left monitor versus moving the window to the right monitor is pure genius).

Naturally with such a powerful window manager, I want to use it to manage all my windows and all my shells. The problem with this of course is SSH - specifically, when I have many remote shells open at the same time and what happens when the network goes away. You see, I've been opening a new terminal and SSH connection for each remote shell so I can use wmii to manage them, which works really great until I need to suspend my laptop or unplug it to go to a meeting, then have to spend some time re-establishing each session, getting it back to the right working directory, etc. And, I've lost the shell history specific to each terminal.

Normally people would start screen on the remote server if they expect their session to go away, and screen can also manage a number of shells simultaneously, which would be great... except that it is nowhere near as good at managing those shells as wmii is at managing windows, and if I'm going to switch it would need to be pretty darn close.

I've been aware for some time of an alternative to screen called tmux which seemed to be much more sane and feature-rich than screen, so the other day I decided to see if I could configure tmux to be a realistic option for managing many shells on a remote machine that I could detach and re-attach from when suspending my laptop.

Tmux supports multiple sessions, "windows" (like tags in wmii), and "panes" (like windows in wmii). I managed to come up with the below configuration file which sets up a bunch of keybindings similar to the ones I use in wmii (but using the Alt modifier instead of the Windows key) to move windows... err... "panes" and to navigate between them.

Unlike wmii, tmux is not focussed around columns, which technically gives it more flexibility in how the panes are arranged, but sacrifices some of the precision that the column focus gives wmii (in this regard tmux is more similar to some of the other tiling window managers available).

None of these shortcut keys need to have the tmux prefix key pressed first, as that would have defeated the whole point of this exercise:

Alt + ' - Split window vertically *
Alt + Shift + ' - Split window horizontally

Alt + h/j/k/l - Navigate left/down/up/right between panes within a window
Alt + Shift + h/j/k/l - Swap pane with the one before or after it **

Alt + Ctrl + h/j/k/l - Resize pane *** - NOTE: Since many environments use Ctrl+Alt+L to lock the screen, you may want to change these to use the arrow keys instead.

Alt + number - Switch to this tag... err... "window" number, creating it if it doesn't already exist.
Alt + Shift + number - Send the currently selected pane to this window number, creating it if it doesn't already exist.

Alt + d - Tile all panes **
Alt + s - Make selected pane take up the maximum height and tile other panes off to the side **
Alt + m - Make selected pane take up the maximum width and tile other panes below **

Alt + f - Make the current pane take up the full window (actually, break it out into a new window). Reverse with Alt + Shift + number **

Alt + PageUp - Scroll pane back one page and enter copy mode. Release the alt and keep pressing page up/down to scroll and press enter when done.

* Win+Enter opens a new terminal in wmii, but Alt+Enter is already used by xterm, so I picked the key next to it

** These don't mirror the corresponding wmii bindings because I could find no exact equivalent, so I tried to make them do something similar and sensible instead.

*** By default there is no shortcut key to resize windows in wmii (though the python version of the wmiirc script provides a resize mode which is similar), so I added some to my scripts.

~/.tmux.conf (Download Latest Version Here)

# Split + spawn new shell:
# I would have used enter like wmii, but xterm already uses that, so I use the
# key next to it.
bind-key -n M-"'" split-window -v
bind-key -n M-'"' split-window -h

# Select panes:
bind-key -n M-h select-pane -L
bind-key -n M-j select-pane -D
bind-key -n M-k select-pane -U
bind-key -n M-l select-pane -R

# Move panes:
# These aren't quite what I want, as they *swap* panes *numerically* instead of
# *moving* the pane in a specified *direction*, but they will do for now.
bind-key -n M-H swap-pane -U
bind-key -n M-J swap-pane -D
bind-key -n M-K swap-pane -U
bind-key -n M-L swap-pane -D

# Resize panes (Note: Ctrl+Alt+L conflicts with the lock screen shortcut in
# many environments - you may want to consider the below alternative shortcuts
# for resizing instead):
bind-key -n M-C-h resize-pane -L
bind-key -n M-C-j resize-pane -D
bind-key -n M-C-k resize-pane -U
bind-key -n M-C-l resize-pane -R

# Alternative resize panes keys without ctrl+alt+l conflict:
# bind-key -n M-C-Left resize-pane -L
# bind-key -n M-C-Down resize-pane -D
# bind-key -n M-C-Up resize-pane -U
# bind-key -n M-C-Right resize-pane -R

# Window navigation (Oh, how I would like a for loop right now...):
bind-key -n M-0 if-shell "tmux list-windows|grep ^0" "select-window -t 0" "new-window -t 0"
bind-key -n M-1 if-shell "tmux list-windows|grep ^1" "select-window -t 1" "new-window -t 1"
bind-key -n M-2 if-shell "tmux list-windows|grep ^2" "select-window -t 2" "new-window -t 2"
bind-key -n M-3 if-shell "tmux list-windows|grep ^3" "select-window -t 3" "new-window -t 3"
bind-key -n M-4 if-shell "tmux list-windows|grep ^4" "select-window -t 4" "new-window -t 4"
bind-key -n M-5 if-shell "tmux list-windows|grep ^5" "select-window -t 5" "new-window -t 5"
bind-key -n M-6 if-shell "tmux list-windows|grep ^6" "select-window -t 6" "new-window -t 6"
bind-key -n M-7 if-shell "tmux list-windows|grep ^7" "select-window -t 7" "new-window -t 7"
bind-key -n M-8 if-shell "tmux list-windows|grep ^8" "select-window -t 8" "new-window -t 8"
bind-key -n M-9 if-shell "tmux list-windows|grep ^9" "select-window -t 9" "new-window -t 9"

# Window moving (the sleep 0.1 here is a hack, anyone know a better way?):
bind-key -n M-')' if-shell "tmux list-windows|grep ^0" "join-pane -d -t :0" "new-window -d -t 0 'sleep 0.1' \; join-pane -d -t :0"
bind-key -n M-'!' if-shell "tmux list-windows|grep ^1" "join-pane -d -t :1" "new-window -d -t 1 'sleep 0.1' \; join-pane -d -t :1"
bind-key -n M-'@' if-shell "tmux list-windows|grep ^2" "join-pane -d -t :2" "new-window -d -t 2 'sleep 0.1' \; join-pane -d -t :2"
bind-key -n M-'#' if-shell "tmux list-windows|grep ^3" "join-pane -d -t :3" "new-window -d -t 3 'sleep 0.1' \; join-pane -d -t :3"
bind-key -n M-'$' if-shell "tmux list-windows|grep ^4" "join-pane -d -t :4" "new-window -d -t 4 'sleep 0.1' \; join-pane -d -t :4"
bind-key -n M-'%' if-shell "tmux list-windows|grep ^5" "join-pane -d -t :5" "new-window -d -t 5 'sleep 0.1' \; join-pane -d -t :5"
bind-key -n M-'^' if-shell "tmux list-windows|grep ^6" "join-pane -d -t :6" "new-window -d -t 6 'sleep 0.1' \; join-pane -d -t :6"
bind-key -n M-'&' if-shell "tmux list-windows|grep ^7" "join-pane -d -t :7" "new-window -d -t 7 'sleep 0.1' \; join-pane -d -t :7"
bind-key -n M-'*' if-shell "tmux list-windows|grep ^8" "join-pane -d -t :8" "new-window -d -t 8 'sleep 0.1' \; join-pane -d -t :8"
bind-key -n M-'(' if-shell "tmux list-windows|grep ^9" "join-pane -d -t :9" "new-window -d -t 9 'sleep 0.1' \; join-pane -d -t :9"

# Set default window number to 1 instead of 0 for easier key combos:
set-option -g base-index 1

# Pane layouts (these use the same shortcut keys as wmii for similar actions,
# but don't really mirror its behaviour):
bind-key -n M-d select-layout tiled
bind-key -n M-s select-layout main-vertical \; swap-pane -s 0
bind-key -n M-m select-layout main-horizontal \; swap-pane -s 0

# Make pane full-screen:
bind-key -n M-f break-pane
# This isn't right, it should go back where it came from:
# bind-key -n M-F join-pane -t :0

# We can't use shift+PageUp, so use Alt+PageUp then release Alt to keep
# scrolling:
bind-key -n M-PageUp copy-mode -u

# Don't interfere with vi keybindings:
set-option -s escape-time 0

# Enable mouse. Mostly to make selecting text within a pane not also grab pane
# borders or text from other panes. Unfortunately, tmux' mouse handling leaves
# something to be desired - no double/triple click support to select a
# word/line, all mouse buttons are intercepted (middle click = I want to paste
# damnit!), no automatic X selection integration(*)...
set-window-option -g mode-mouse on
set-window-option -g mouse-select-pane on
set-window-option -g mouse-resize-pane on
set-window-option -g mouse-select-window on

# (*) This enables integration with the clipboard via termcap extensions. This
# relies on the terminal emulator passing this on to X, so to make this work
# you will need to edit your X resources to allow it - details below.
set-option -s set-clipboard on
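The ten window-navigation bindings above (and the matching window-moving ones) got their for-loop wish granted, sort of: you can generate them with the shell instead of typing them out. A minimal sketch that builds the navigation bindings as text, which you could redirect into a fragment loaded with tmux's source-file command:

```shell
# Build the Alt+number window-navigation bindings rather than typing them out.
bindings=""
for i in 0 1 2 3 4 5 6 7 8 9; do
    bindings="${bindings}bind-key -n M-${i} if-shell \"tmux list-windows|grep ^${i}\" \"select-window -t ${i}\" \"new-window -t ${i}\"
"
done
printf '%s' "$bindings"
```

The same loop with the shifted symbols ')!@#$%^&*(' in place of the digits would produce the window-moving bindings.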

You may also need to alter your ~/.Xresources file to make some things work (this is for xterm):

~/.Xresources (My Personal Version)

/* Make Alt+x shortcuts work in xterm */
XTerm*.metaSendsEscape: true
UXTerm*.metaSendsEscape: true

/* Allow tmux to set X selections (ie, the clipboard) */
XTerm*.disallowedWindowOps: 20,21,SetXprop
UXTerm*.disallowedWindowOps: 20,21,SetXprop

/* For some reason, this gets cleared when reloading this file: */
*customization: -color

To reload this file without logging out and back in, run:
xrdb ~/.Xresources

There's a pretty good chance that I'll continue to tweak this, so I'll try to update this post anytime I add something cool.

Edit 27/02/2012: Added mouse & clipboard integration & covered changes to .Xresources file.

Friday, 17 February 2012

SSH passwordless login WITHOUT public keys

I was recently in a situation where I needed SSH & rsync over SSH to be able to log into a remote site without prompting for a password (as it was being called from within a script and would have been non-trivial to make the script pass in a password, especially as OpenSSH does not provide a trivial mechanism for scripts to pass in passwords - see below).

Normally in this situation one would generate a public / private keypair and use that to log in without a prompt, either by leaving the private key unencrypted (ie, not protected by a passphrase), or by loading the private key into an SSH agent prior to attempting to log in (e.g. with ssh-add).

Unfortunately the server in question did not respect my ~/.ssh/authorized_keys file, so public key authentication was not an option (boo).

Well, it turns out that you can pre-authenticate SSH sessions such that an already open session is used to authenticate new sessions (actually, new sessions are basically tunnelled over the existing connection).

The option in question needs a couple of things set up to work, and it isn't obviously documented as a way to allow passwordless authentication - I had read the man page multiple times and hadn't realised what it could do until Mikey at work pointed it out to me.

To get this to work you first need to create (or modify) your ~/.ssh/config as follows:

Host *
  ControlPath ~/.ssh/master_%h_%p_%r
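The %h, %p and %r tokens expand to the remote hostname, port and username, so each destination gets its own control socket. A quick illustration of the path that results (example.com, 22 and alice are placeholder values):

```shell
# Mimic ssh's ControlPath token expansion: %h = host, %p = port, %r = user.
host=example.com; port=22; user=alice
socket=$(printf '~/.ssh/master_%s_%s_%s' "$host" "$port" "$user")
echo "$socket"   # → ~/.ssh/master_example.com_22_alice
```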

Now, manually connect to the host with the -M flag to ssh and enter your password as normal:

ssh -M user@host

Now, as long as you leave that connection open, further normal connections (without the -M flag) will use that connection instead of creating their own, and will not require authentication.

Note that you may instead edit your ~/.ssh/config as follows to have SSH always create and use master connections automatically, without having to specify -M. However, some people prefer to choose explicitly when to share a connection, so that low latency interactive sessions and high throughput upload/download sessions don't end up on the same link - mixing the two can have a huge impact on the interactive session.

Host *
  ControlPath ~/.ssh/master_%h_%p_%r

  ControlMaster auto

Alternate method, possibly useful for scripting

Another method I was looking at using was specifying a program to return the password in the SSH_ASKPASS environment variable. Unfortunately, this environment variable is only used in some rare circumstances (namely, when no tty is present, such as when a GUI program calls SSH or rsync), and would not normally be used when running SSH from a terminal (or in the script as I was doing).

Once I found out about the -M option I stopped pursuing this line of thinking, but it may be useful in a script if the above pre-authentication method is not practical (perhaps for unattended machines).

To make SSH respect the SSH_ASKPASS environment variable when running from a terminal, I wrote a small LD_PRELOAD library that intercepts calls to open("/dev/tty") and causes them to fail.

If anyone is interested, the code for this is in my junk repository ( & You will also need a small script that echos the password (I hope it goes without saying that you should check the permissions on it) and point the SSH_ASKPASS environment variable to it.

Git trick: Deleting non-ancestor tags

Today I cloned the git tree for the pandaboard kernel, only to find that it didn't include the various kernel version tags from upstream, so running things like git describe or git log v3.0.. didn't work.

My first thought was to fetch just the tags from an upstream copy of the Linux kernel I had on my local machine:

git fetch -t ~/linus

Unfortunately I hadn't thought that through very well, as that local tree also contained all the tags from the linux-next tree and the tip tree, as well as a whole bunch more from various distro trees and several other random ones, which I didn't want cluttering up my copy of the pandaboard kernel tree.

This led me to try to find a way to delete all the non-ancestor tags (compared to the current branch) to simplify the tree. This may be useful to others to remove unused objects and make the tree smaller after a git gc - that didn't factor into my needs as I had specified ~/linus to git clone with --reference, so the objects were being shared.

Anyway, this is the script I came up with, note that this only compares the tags with the ancestors of the *current HEAD*, so you should be careful that you are on a branch with all the tags you want to keep first. Alternatively you could modify this script to collate the ancestor tags of every local/remote branch first, though this is left as an exercise for the reader.


#!/bin/sh
ancestor_tags=$(mktemp)

echo -n Looking up ancestor tags...\ 
git log --simplify-by-decoration --pretty='%H' > $ancestor_tags
echo done.

for tag in $(git tag --list); do
 echo -n "$tag"
 commit=$(git show "$tag" | awk '/^commit [0-9a-f]+$/ {print $2}' | head -n 1)
 echo -n ...\ 
 if [ -z "$commit" ]; then
  echo has no commit, deleting...
  git tag -d "$tag"
 elif grep $commit $ancestor_tags > /dev/null; then
  echo is an ancestor
 else
  echo is not an ancestor, deleting...
  git tag -d "$tag"
 fi
done

rm -fv $ancestor_tags
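The trickiest part of the script is extracting the commit a tag points at: git show on an annotated tag prints the tag object before the commit, so the awk/head pipeline grabs only the first "commit" line. A self-contained check of that pipeline against canned git show output (the hash is made up):

```shell
# Simulated `git show <tag>` output for an annotated tag; only the
# "commit <sha1>" line should be extracted.
sample='tag v1.0
Tagger: Someone <someone@example.com>

commit 0123456789abcdef0123456789abcdef01234567
Author: Someone <someone@example.com>'
commit=$(printf '%s\n' "$sample" | awk '/^commit [0-9a-f]+$/ {print $2}' | head -n 1)
echo "$commit"   # → 0123456789abcdef0123456789abcdef01234567
```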

Also note that this may still leave unwanted tags in if they are a direct ancestor of the current HEAD - for instance, I found a bunch of tags from the tip tree had remained afterwards, but they were much more manageable to delete with a simple for loop and a pattern.

Sunday, 14 November 2010

Bluetooth 3G Modems on Debian Linux: Chatscripts and rfcomm bluez

I've been using 3G mobile broadband as my primary Internet connection for a couple of years now, and ever since I moved out of college it has become my only Internet connection at home - it's saved me the cost, delays and headache of dealing with Telstra to sort out some kind of wired link.

In my particular setup I removed the 3G data SIM card from the USB modem that came with my plan and placed it in my Nokia N900, which I use as a bluetooth modem for my various computers (my N95 used to fill this role) as well as having the convenience of having the N900 itself connected wherever and whenever I want.

Every now and again I get asked about my setup - a lot of people seem to have had trouble setting up bluetooth modems in Linux. This is understandable - last time I checked out Network Manager I found that it could set up a USB 3G modem pretty easily but had zero provisions to set up a bluetooth modem, and the Linux bluetooth stack (bluez) also leaves something to be desired (try using bluez 4 to pair to something without X... fail). I've previously been directing these people to some posts I made on the CLUG mailing list that had my configuration files, but it's clear that it will be easier to direct people to a blog post.

The quickstart guide for those people would be: scan this post, grab the file excerpts and place them where they belong, restart bluetooth and run the pon <profile> command to try to bring up the 3G connection. Then, when that inevitably doesn't work, read the rest of the article to figure out what you need to change to make it work. I should note that I'm using Debian, so some of this article may not apply to non-Debian-derived distributions (the pon and poff commands came from Debian, for example).

Firstly, a little background on the technical details we care about: 3G modems provide PPP (Point-to-Point Protocol) links to your ISP, just like the dial-up modems of old did. We even use the same protocol and method to talk to them that we used to use to talk to dial-up modems - the AT command set over some kind of serial like interface (itself encapsulated in a USB or bluetooth link).

A few things have changed though - for one they are much faster than dial-up modems. Authentication is also handled differently - we no longer (typically) use a username and password, instead handling the authentication in the SIM card. And instead of calling a phone number for your local ISP, we instead call a special number (such as *99#) to establish the link. Added to this, we now also have something called an APN (Access Point Name) to identify the IP packet data network that we want to communicate with.

There are a few important consequences of all of this. Firstly, we are using the same infrastructure (ppp, chatscripts, wvdial, ...) in Linux to connect to 3G that we used to use to connect old dial-up connections. Secondly, despite not requiring a username and password any more we still have to provide something in their stead to make everything happy even though they are ignored. We also still have the same nonsense of every ISP having a subtle difference in their authentication that affects how we connect to them. There can also be subtle differences in the AT commands we need to communicate with different modems to get them to do what we want.

Some people like using wvdial to establish their ppp links. If that works for you that's great, but my experience has been that wvdial fails in many circumstances, and getting it to work in those cases is quite often impossible, so I'm going to cover a much more tunable back to basics method: ppp + chatscripts + rfcomm.

Firstly, make sure ppp is installed (apt-get install ppp)... I hope you have some other connection than your 3G link to get that... Perhaps whatever you are reading this blog on?

We'll start with a USB connection - no sense adding the extra complexities of a bluetooth link to the mix until we have that working. I'll show the profiles I use for both the Huawei E220 USB modem that came with the plan and the USB link to my N900 (or N95).

We need two configuration files for each profile - the configuration for the ppp side of the link goes under /etc/ppp/peers/<profile> and the chatscript which tells the modem how to establish the ppp link under /etc/chatscripts/<profile>. The chatscript is referenced from the ppp configuration file, so it is possible to use one chatscript for multiple profiles, assuming the profiles are talking to the same modem (or at least that one modem doesn't require special treatment) and using the same APN.

The chatscript is responsible for initialising the modem and getting the connection to the point where pppd can take over, so I'll start with that. Here's the chatscript that I use for my Nokia N900 (USB and bluetooth), Nokia N95 and Huawei E220 USB modem:

"" "ATZ"
OK "ATE1V1&D2&C1S0=0+IFC=2,2"

OK "ATDT*99#"


IMPORTANT: Replace <APN> with the APN for your connection (for me on Optus post-paid mobile broadband that is "connect", for Lucy on Three pre-paid mobile broadband that is "3services" - refer to the documentation that came with your plan to find out what it is for you). If you don't you will run into inexplicable problems later.

I said above that some modems need to be treated specially in the chatscript. I used to have to use this on my Huawei E220 because I could not find one script that would satisfy both it and my N95 (the AT+IPR line below was necessary for the E220, but caused the N95 to fail), but the differences no longer seem to be necessary (firmware upgrade? Some other change I made and forgot about? Phase of the moon? I can't recall), but it might help someone so here it is:

"" "ATZ"
OK AT+CGDCONT=1,"ip","connect"
OK "ATE1V1&D2&C1S0=0+IFC=2,2"
OK "AT+IPR=115200"


"" "ATD*99#"


Now we need a profile for ppp that references that chatscript and contains all the settings necessary to establish a successful ppp link. I have a number of these for different profiles, depending on which modem I'm using and whether I'm using my Optus link or Lucy's Three link, but they are all pretty similar and include some common elements, so I'll just show one combined file with comments for the differences between them. All these options and more are described in man pppd:

# This can help track down problems:

# The modem device to talk to:
/dev/ttyACM0 # N900/N95 USB
#/dev/ttyUSB0 # Huawei USB
#/dev/rfcomm0 # N900 Bluetooth

# In some cases it may be necessary to specify a baud rate,
# but generally it's best to let ppp detect this:
#... etc

# Optus requires both of these options, Three requires neither.
# Other ISPs may have different authentication requirements:

# When to detach from the console:

# These are generally necessary:

# If the connection drops out try to reopen it:

# We want this to be the default internet connection:

# Get DNS settings from the ISP:

# not used, but we must provide something:
user "na"
password "na"

# Playing with these compression options *may* improve
# performance, but get it working first:

#What chatscript we are using in this profile:
connect "/usr/sbin/chat -s -S -V -f /etc/chatscripts/optus-n900"
#connect "/usr/sbin/chat -s -S -V -f /etc/chatscripts/optus-huawei"

Got that? Great, let's give it a go! Connect your modem by USB, do whatever magic incantations you need to get your modem to reveal its modem aspects to Linux (for Nokia phones this is usually selecting PC suite mode when you plug it in; some people report having to do strange things with kernel modules and udev to poke their Huawei E220 modems, though I have never found that necessary myself), shut down your network manager and run this in a terminal:

pon <profile>

All going well hopefully you will see some output like this:

CONNECTchat: Nov 14 13:12:13 CONNECT
Serial connection established.
Using interface ppp0
Connect: ppp0 <--> /dev/rfcomm0
PAP authentication succeeded
Cannot determine ethernet address for proxy ARP
local IP address
remote IP address
primary DNS address
secondary DNS address

Obviously the exact output will vary, but usually if you see some IP and DNS addresses you have successfully connected. Otherwise you really should try to get this working before continuing to the bluetooth part. If you got as far as the CONNECT... "Serial connection established." your modem and chatscripts are probably working (assuming the APN in your chatscript is correct) and you may need to look at the ppp configuration, though you might just try a few times first - sometimes my connections take a few attempts to come up successfully.

If you haven't got as far as the CONNECT you'll need to check your modem, coverage and chatscripts to try to locate the problem. Also double check that you have specified the correct device in the ppp configuration. If you are using a phone as your modem you might try rebooting it. If you get a NO CARRIER you are likely out of coverage or your modem couldn't connect to a nearby base station for some other reason (such as it being full), though the symptoms for that are unfortunately not always consistent - failing to connect to the modem at all can also be a symptom of that (and a host of other possible causes) for instance.

There's just too many things that can go wrong by this point for me to cover here. Google is your friend. You may be able to find other people's chatscripts and ppp configuration for your modem and/or ISP that you could try.

Now you've successfully got a connection with ppp + chatscripts, it's time to add bluetooth into the mix. Serial connections over bluetooth are handled with the rfcomm protocol. They are controlled with the rfcomm program and once bound show up as /dev/rfcomm0 and similar. A device can have different serial services listening on different rfcomm "channels" (like IP ports), and there is no guarantee for which services appear on which rfcomm channel. My Nokia N95 reveals its modem on rfcomm channel 2 and its GPS on rfcomm channel 5 (via ExtGPS), while my N900 reveals its modem on rfcomm channel 1 (in fact it is actually running rfcomm -S -- listen -1 1 /usr/bin/pnatd {}). You can use an rfcomm scanner like rfcomm_scan from Collin Mulliner's BT Audit suite, or do some trial and error to find the channel you need (there are only 30 channels and it's usually a low number).
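If you don't have an rfcomm scanner handy, brute force is perfectly viable given only 30 channels exist. A throwaway loop that prints one connect command per channel to try by hand (AA:BB:CC:DD:EE:FF is a placeholder address - substitute your device's):

```shell
# Generate one `rfcomm connect` command per possible channel (1-30);
# AA:BB:CC:DD:EE:FF is a placeholder bluetooth address.
cmds=$(for ch in $(seq 1 30); do
    echo "rfcomm connect 0 AA:BB:CC:DD:EE:FF $ch"
done)
printf '%s\n' "$cmds"
```

A channel hosting a modem will connect and stay up; the rest will fail more or less immediately.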

Add a section like the following to your /etc/bluetooth/rfcomm.conf:

rfcomm0 {
	bind yes;
	device AA:BB:CC:DD:EE:FF;
	channel 1;
	comment "N900 Data";
}

Replace the bluetooth address and channel number as appropriate, then tell rfcomm to bind rfcomm0 to this device with rfcomm bind 0 (this will also happen automatically at boot).
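If you'd rather experiment before committing anything to rfcomm.conf, the same binding can be done as a one-off from the command line (run as root; the address and channel are placeholders as above):

```shell
# One-off equivalent of the rfcomm.conf stanza above
rfcomm bind 0 AA:BB:CC:DD:EE:FF 1

# Check the binding, and release it again when done experimenting
rfcomm show 0
rfcomm release 0
```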

You should now see a new device file /dev/rfcomm0, which we use to communicate with the modem over bluetooth. Make a copy of the /etc/ppp/peers/<profile> you were using earlier and change the new profile to use /dev/rfcomm0.
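Something like the following works for me - a sketch only, since the profile names are placeholders and the sed pattern assumes the serial device appears at the start of a line, as it conventionally does in pppd peers files:

```shell
# Copy a ppp peers file and point the copy at the rfcomm device.
make_rfcomm_peer() {
    cp "$1" "$2"
    # Replace whichever /dev node the old profile used with /dev/rfcomm0
    sed -i 's|^/dev/[^ ]*|/dev/rfcomm0|' "$2"
}

# e.g. as root:
# make_rfcomm_peer /etc/ppp/peers/myisp /etc/ppp/peers/myisp-blue
```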

Now we need to pair the devices together and tell the phone to trust the computer to connect whenever it wants. Pairing in bluez is still a bit hairy, particularly if you aren't using KDE or GNOME (like me), which provide their own bluez agents - without them you don't have many options available. Bluez 3 had a hack whereby you could specify a PIN to pair with under /var/lib/bluetooth/<device>/pincodes to allow pairing without an agent, however that does not work in bluez 4. Bluez provides an example console agent in its examples directory, but I have never managed to get it to work reliably with bluez 3 or bluez 4, so we need a proper bluez agent - and lacking any decent console/curses agents, that means we need X (FAIL). This nonsense is now true even of HID devices, which could previously be paired and activated with a simple hidd --search, but are now not trusted to re-pair to the computer, so they stop working as soon as they start power saving (FAIL). Sigh, one day I'll get around to writing a decent ncurses bluez agent if no one beats me to it, but I digress.

If you aren't using GNOME or KDE you might try the GTK bluez agent blueman instead. You'll need its system tray applet (blueman-applet) running for blueman-manager to work properly (FAIL - I don't have a system tray. At least it doesn't actually need to show the tray icon to work, though if you want one, "trayer" or "stalonetray" can provide a temporary system tray).

Anyway, once you have some kind of bluez agent running, be it KDE's kbluetooth, gnome-bluetooth or blueman, you can try to pair your phone. I say "try" because even with an agent, pairing with bluez is still hairy. In theory, running the pon <profile> command will attempt to open the bluetooth link and initiate pairing, causing both phone and computer to ask for a PIN to authenticate each other - enter the same PIN on each. If you're really lucky they might even remember that they have been paired so you don't have to do it again next time. If you're unlucky and that didn't work, you can try deleting any existing pairing from both the computer and the phone, then using your bluetooth agent's interface to initiate a new pairing. Rebooting and walking around your computer in circles while chanting "all hail bluez" over and over may also help - I wish you luck.

The good news is that you only need the bluez agent while pairing - once you successfully pair and manage to get the 3G link up (and down and up a second time to make sure it remembered what to do) you usually don't have to touch bluez again and things get a lot easier. Unless one of the devices' pairings gets lost or confused... or your bluetooth address changes, or...

If by this stage you have successfully managed to pair your computer and phone, you should be able to use the pon and poff commands to bring the connection up and down as above. Congratulations, you're done! You can stop reading now. If you are getting a "host is down" error, you have not successfully paired or the bluetooth link has otherwise failed. Another symptom of (non-pairing) bluetooth related problems that I've seen is getting no OK response after the initial ATZ. If you are pairing OK but only getting partway through the connection sequence, you may have to go back to debugging your chatscripts and ppp options as described above.

The (broadcom) bluetooth dongle I use on my EeePC introduces another complexity to the process - every time it is plugged in, a couple of bits in its bluetooth address change at random for no good reason (check with hciconfig), which as you can imagine makes it rather hard to maintain a pairing between it and anything else. I've also come across some (broadcom) bluetooth dongles with a bluetooth address of 00:00:00:00:00:00. Oddly enough, very few devices like pairing with them, and fewer still will re-pair with them automatically. If you have this problem you can tell broadcom they suck and buy a CSR dongle, or you might try the bdaddr utility in the bluez source to force them to use a particular bluetooth address (if they support changing it through software, which of course is no guarantee). The script I use on my EeePC to connect shuts down my network manager and any running DHCP client, changes the bluetooth address on the dongle and opens the 3G connection:

#!/bin/sh
# Stop the network manager and any DHCP clients so they don't interfere
/etc/init.d/wicd stop
killall dhclient
killall dhclient3

# Force a fixed bluetooth address on the dongle, then reset it
/usr/local/sbin/bdaddr AA:BB:CC:DD:EE:FF
hciconfig hci0 reset

# Bring up the 3G connection
pon optus-blue