<h2>Let's Encrypt on OPNSense, using a local Bind server because I'm too cheap for Namecheap API</h2>
<p><i>DarkStarSword, 2023-05-09</i></p>
<p>I've recently been migrating my home network to use a ProxMox + OPNSense based router. I used to use a fairly high end consumer grade tri-band router/AP flashed with dd-wrt, but I've long been frustrated with the fact that it basically could not be updated - whenever I tried a newer version of dd-wrt it always ended in major stability issues forcing me to downgrade, and even if that wasn't an issue, dd-wrt recommends erasing the nvram when applying an update, which effectively means wiping all the settings and having to configure it again from scratch. This means that even if those stability issues have since been resolved I can't really afford to try updating to find out, and as a result I'm effectively running firmware that is almost a decade old, with who knows what kind of security vulnerabilities as a result.</p>
<p>I've been pondering what to do about this for years, but a few recent factors have finally pushed me to upgrade:</p>
<ul>
<li>We have a smart home now, and the number of devices trying to connect to the 2.4GHz WiFi simultaneously was overwhelming our consumer grade WiFi devices and we'd often find a device unable to connect ("Kettle isn't responding", or we'd see one of the esphome fallback hotspots show up). Our TpLink router provided by our old ISP has a hard limit of 30 devices, and I don't think my other consumer grade APs were doing much better. When every light switch/bulb is a device on your network, this becomes an issue very quickly.</li>
<li>We recently upgraded to NBN Fibre to the Premises with gigabit down, and our old WiFi devices were nowhere near this fast. Even the brand new TpLink WiFi 6 router provided by our new ISP cannot actually handle this speed - on WiFi with the largest channel width it supports (80MHz) it maxes out just shy of 700mbps even at point blank range.</li>
<li>We had a recent incident where our dd-wrt access point/router mysteriously locked up for several hours paralyzing our home network and smart home, and nothing I could do would make it responsive. WiFi was down, the switch was down, I couldn't even get to the admin page to find out what in the blazes was going on, and no amount of rebooting would help - actually, it seemed like every time it was about to bring up the WiFi the fault light illuminated and it rebooted itself. After a few hours it mysteriously started working again, and since dd-wrt doesn't save logs I have no idea what happened, but given how old the firmware was it wouldn't surprise me at all if it was the victim of a wireless Denial of Service attack. Unfortunately I didn't have any other devices that supported monitor mode ready to run Kismet or similar to prove this.</li>
</ul>
<p>So, given that consumer grade WiFi+router combo devices tend to be poor at both tasks we've now separated them - our WiFi is now on a Ubiquiti WiFi 6 Pro access point, which is capable of doing around 1.5gbps on the 5GHz network (to nearby devices on a 160MHz channel, but even the 80MHz channel can do over 900mbps, whipping the ISP provided TpLink) and claims to be able to support 300+ simultaneous devices, which should hopefully sort out our smart home connectivity issues for the foreseeable future (though we might still need a second for devices with poor signal strength on the other side of the house - still using a consumer grade AP for those...).</p>
<p>As for the router component - that's now an OPNSense software router running in a virtual machine under ProxMox on <a href="https://www.aliexpress.com/item/1005004680185160.html">one of these mini routers</a> from AliExpress.</p>
<p>As for choosing OPNSense over PFSense - for the moment that choice is made for me as PFSense doesn't yet support the 2.5gbps network ports on this device. When that changes I may consider it as I do generally value stability over bleeding edge, and OPNSense has not exactly been bug free so far (though the development team have responded near instantly to the bug reports I've filed so far, so that's a huge plus). The nice thing about running these under ProxMox is that I'll be able to shut down the OPNSense VM and boot up a PFSense VM in its place when it's ready to try out, and I can easily switch back if need be.</p>
<p>Since installing the new router I've been slowly migrating services over to it from my previous router and old HP Microserver - Dynamic DNS, regular DNS and DHCP are now on OPNSense (not exactly without incident - but a DHCP bug report was filed and the OPNSense dev team had fixed the issue in under 2 hours. I do miss being able to just edit a dnsmasq config file directly as we could do in dd-wrt, but realistically the web forms work fine in OPNSense). The unifi controller is now in one ProxMox container and frigate is in another. I've still got a few other services to move like Home Assistant and Plex, but there's a few others I want to set up that will need signed SSL certificates, so today's task was figuring out how to get Let's Encrypt working in OPNSense... and oh gawd this turned out to be not such an easy task. This was very much a one thing after another after another after another... And this is why I'm writing this blog post now, while it's still fresh in my mind, so that next time I go through this I can refer back to it.</p>
<p>Previously I've had this all working on Debian on my HP Microserver, where it basically places a challenge file on the web server to prove to Let's Encrypt that I own the web server that the domain name points to, and I remember it taking me a while to figure out how to make that work, but I remember that it wasn't too difficult in the end - at least I didn't deem that experience worthy of a blog post! OPNSense's os-acme-client plugin supports essentially this same method so my first thought was to use that... but there were a couple of problems that meant I ultimately did not attempt using this:</p>
<ul>
<li>The introduction page in the OPNSense ACME plugin says these challenges are "not recommended" and that "Other challenge types should be preferred".</li>
<li>This method requires that the acme plugin temporarily takes over port 80 / 443 on the router, leading to some brief downtime when this happens. My current setup under Debian is not subject to this as the plugin is able to use the running apache web server so can complete the challenge with no downtime. In reality this probably isn't much concern for a home network, as the downtime would be infrequent and brief, and home internet doesn't exactly have the best uptime anyway... but it is still not desirable.</li>
<li>They have three settings "IP Auto-Discovery", "Interface" and "IP Address" that all state "NOTE:This will ONLY work if the official IP addresses are LOCALLY configured on your OPNsense firewall", which is not currently the case for me as I still have the ISP provided router between my OPNSense router and the Internet (so my OPNSense router has a private IP on its WAN interface), as it is needed to provide a VoIP service (why this ISP doesn't use one of the UNI-V ports on the NBN NTD Box like my previous ISP I don't know).</li>
<li>Even if I bypassed my ISP router so that the OPNSense router would have a public IP, if the "IP Address" field is mandatory (which is unclear, possibly one or both of the other settings would suffice in its place), my IP address is not static (ISP charges extra for that), and I do not want to have to edit anything if my IP changes (this will be a recurring theme throughout the rest of this post).</li>
</ul>
<p>Ok, that leaves... DNS-01 as the only option... that or forgoing setting this up on OPNSense altogether, but I also want to play with using OpenVPN under OPNSense at a later date, and as I understand it that needs a signed SSL certificate so I have multiple reasons to push on (Edit: DO NOT use Let's Encrypt for OpenVPN, there are serious security concerns with doing so. Always use your own personal CA for OpenVPN)...</p>
<p>My darkstarsword.net domain is registered through Namecheap, and Namecheap is supported by the acme.sh/Let's Encrypt script, and it looks very simple to use - only needing a user and API key filled out. I already have an API key that I use for dynamic DNS and I don't even need to fill out my IP address - perfect!!! Or at least that's what I would be saying if I hadn't read the acme script's <a href="https://github.com/acmesh-official/acme.sh/wiki/dnsapi#53-use-namecheap">documentation on Namecheap</a> first or noted some bug reports warning of dynamic DNS entries being wiped out after running the script. The API key they want is not the one used for dynamic DNS - it's a business / dev tools API key that is only available if your account has more than $50 credit (the fact that I've already paid 10 years in advance doesn't count apparently) or meets some other requirements. And you DO need to fill in your IP address on Namecheap's side - and as noted earlier, I don't want to go and edit anything when my IP changes.</p>
<p>So, that's out.</p>
<p>What are my options? Migrate to a different DNS provider that doesn't have such arduous requirements? Self hosting a name server doesn't seem viable - again, my IP address is not static, and I want darkstarsword.net to be stable as many of the subdomains I've added point to various cloud servers that should be available even if my home internet is down - like, for instance, this blog. The acme.sh documentation does talk about a <a href="https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mode">DNS Alias mode</a>, but that suggests it needs a second domain, and then I'd need to register that at another name provider, which doesn't seem much better than just migrating my existing domain... but wait, why does it need a separate domain? It's just setting up a CNAME record pointing at the other domain - couldn't that point to a subdomain of my existing domain instead? Could that subdomain have its nameserver self hosted on my own equipment, and could OPNSense then update that? Yes, yes it can.</p>
<p>To try to clarify things I'm going to substitute some of the fun hostnames I'm using for more descriptive ones. In namecheap (or whatever other DNS provider you are using) you want similar to the following entries:</p>
<ul>
<li>Type="A+ Dynamic DNS Record" Host="dyndns" - This will be dynamically updated to point to your home IP.</li>
<li>Type="NS Record" Host="home_subdomain" Value="dyndns.example.net." - This creates a subdomain managed by a nameserver running on your home IP.</li>
<li>Type="CNAME Record" Host="_acme-challenge.dyndns" Value="_acme-challenge.home_subdomain.example.net." - This tells the Let's Encrypt acme.sh challenge script to look for the challenge TXT record in your home_subdomain when creating an SSL certificate for "dyndns.example.net".</li>
</ul>
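<p>Put together, the three records above amount to something like the following in plain zone-file syntax (illustrative only - the names are the placeholders from this post and the IP is a documentation address, not a real host):</p>

```text
dyndns                  IN A      203.0.113.7   ; kept current by dynamic DNS
home_subdomain          IN NS     dyndns.example.net.
_acme-challenge.dyndns  IN CNAME  _acme-challenge.home_subdomain.example.net.
```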
<p>The A+ Dynamic DNS record type is specific to namecheap I think, other providers might work differently. On OPNSense this is updated via the os-ddclient plugin - install via System -> Firmware -> Plugins and configure under Services -> Dynamic DNS. This was reasonably straightforward to set up and I didn't encounter any issues here. Make sure that the name is resolving to your home IP before proceeding.</p>
<p>You can add additional CNAME records for additional hosts that you want certificates for, just substituting "....dyndns" in the Host field, or if you want to create a wildcard certificate just use Host="_acme-challenge" instead.</p>
<p>Next step is to install a DNS server on OPNSense... well, it already has Unbound and/or dnsmasq for your internal DNS, but AFAIK neither of those will work and so we need another one, and of course we can't just replace them because there's a bunch of features in OPNSense that only work with one or both of those, so... we'll be running two DNS servers on different ports. Some people elect to have one of these forward requests to the other, but I'm not going to do that as my internal network has no need of BIND, and the Internet has no need of my internal DNS, so at least for now I'll keep them independent of each other.</p>
<p>Head over to System -> Firmware -> Plugins and install os-bind. Start setting it up under Services -> BIND -> Configuration.</p>
<p>In the ACLs tab, create a new ACL, call it "anywhere" and set networks to "0.0.0.0/0" (maybe we can lock this down to just Let's Encrypt IPs + localhost/LAN?).</p>
<p>Back in the General tab, enable the plugin, change "Listen IPs" from "0.0.0.0" to "any" (this will be unnecessary soon - I spotted they fixed this in github earlier today), change "Allow Query" to the "anywhere" ACL you just created and save. At this point you might want to verify that you can connect to BIND from your LAN - I was stuck here for some time until I worked out the issue with Listen IPs:</p>
<pre>dig @192.168.1.1 -p 53530 example.com +short
93.184.216.34</pre>
<p>Now, head over to the Primary Zones tab (I guess this used to be called Master Zones?) and create a zone for your home subdomain. Following the naming examples above and substituting with your own, set "Zone Name" to "home_subdomain.example.net", "Allow Query" to the "anywhere" ACL, "Mail Admin" to your email, and "DNS Server" to "dyndns.example.net".</p>
<p>Now create an NS record in this zone - without this BIND will refuse to load the zone. Leave the "Name" field blank, set "Type" to "NS" and set "Value" to "dyndns.example.net." - note, the trailing . is important here to indicate this is a fully qualified domain name, otherwise it would point to a sub-sub-sub...sub?-domain and BIND would complain about that too. Note that just because you need the trailing . here doesn't mean you need it elsewhere, and there's probably a few places that would break if you add it (and some where it won't matter or gets automatically added if it's missing, like on namecheap).</p>
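<p>To illustrate the trailing dot pitfall, here's how BIND would interpret the same value with and without it (names are the placeholders from this post):</p>

```text
; Inside the zone home_subdomain.example.net:
@   IN  NS  dyndns.example.net.   ; FQDN - what we want
@   IN  NS  dyndns.example.net    ; relative to the zone - silently becomes
                                  ; dyndns.example.net.home_subdomain.example.net.
```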
<p>Now go and look at the Log Files section for BIND, and make sure you see "zone home_subdomain.example.net/IN: loaded serial ..." and not some error.</p>
<p>Next head on over to Firewall -> NAT -> Port Forward and add a new entry. Interface should be "WAN" (probably already set), Protocol needs to be changed to "TCP/UDP" (important, DNS needs both), Destination should be "WAN Address", "Destination Port Range" should have both From and To set to "DNS", "Redirect Target IP" should be "127.0.0.1" and "Redirect Target Port" should be "(other)" 53530. Put something meaningful in the Description field, such as "External DNS -> BIND (for ACME LetsEncrypt)", and save, then apply changes to the firewall when prompted.</p>
<p>At this point you might want to test whether this is working - I added a "test" A record to my zone in BIND with a recognisable IP address and was able to confirm that "test.home_subdomain.example.net" successfully resolved to that IP, and I didn't have to explicitly point dig at my name server - it was able to follow the breadcrumb trail through namecheap to my BIND server and find the record. I did this test from an external server, but since we didn't set up any forwarding between Unbound and BIND, testing from your LAN should be nearly equivalent.</p>
<p>Alright, home stretch - all that's left is setting up the ACME Plugin to use Let's Encrypt and start issuing certificates. Unfortunately this part went anything but smoothly for me, but given how quickly OPNSense devs move, the issues I encountered will likely already be fixed for you by the time you read this - they're already in github while I'm writing this.</p>
<p>Over in System -> Firmware -> Plugins install os-acme-client. Then head on over to Services -> ACME Client to configure it. Under Settings enable the plugin and apply. Under Accounts create two new accounts, one with the ACME CA set to "Let's Encrypt" and the second set to "Let's Encrypt Test CA" - the former is the real one, the latter we use to make sure things work without worrying about being rate limited if something goes wrong. Give them distinct names so you can tell them apart at a glance and fill out your email. You can ignore the EAB fields.</p>
<p>Take a detour over to System -> Access -> Users and edit the root user. Find "API Keys" near the bottom and click the plus to add a new one. This will give you an apikey.txt file that you should open as you will need it in a moment.</p>
<p>Head back over to Services -> ACME Client -> Challenge Types and add a new entry. I named mine "OPNSense Bind Plugin" and set the type to "DNS-01" and "DNS Service" to "OPNSense BIND Plugin". I left "OPNSense Server (FQDN)" set to "localhost" (this is for the dns update script running on OPNSense to find the OPNSense API, it's not used by Let's Encrypt so I don't see any reason to use anything other than localhost here) and "OPNSense Server Port" on 443 - you may need to change this if you are using that port for another service like nginx and have relocated the OPNSense web interface to another port (in my case 443 is still being port forwarded to my old server, though this will likely change soon). "User API key" and "User API token" should be filled out with the "key=....." and "secret=....." (without the literal "key=" and "secret=" part) values from the apikey.txt file you obtained in the previous step. Save.</p>
<p>Almost done - under Certificates create a new certificate. Set the "Common Name" to "dyndns.example.net" (substituting for your own host and domain, obviously). If you are going to create a test certificate first (recommended), write something like "test" in the Description field and set the account to the "Let's Encrypt Test CA" from earlier. "Challenge Type" should be "OPNSense Bind Plugin" and "DNS Alias Mode" should be "Challenge Alias Mode" (meaning the CNAME record you added in Namecheap a few pages ago is pointing to a record in your home subdomain named "_acme-challenge" - you can use the other option here if you decided you were too cool for that name. Automatic might work too - I haven't tried it), and "Challenge Alias" should be "home_subdomain.example.net".</p>
<p>Save. Make sure your certificate is enabled and click the "Issue/Renew All Certificates" button (or the one next to the certificate if you want to do it individually). Check the logs (both system + ACME), see if it worked. For me it didn't - I got an <a href="https://github.com/opnsense/plugins/issues/3420">"Invalid domain" error</a> that cost me a few hours of debugging to find it was fallout from the global movement to strike the potentially insensitive terms "master" and "slave" from general use, but that's fixed now (in github at the time of writing, hopefully live by the time anyone reads this).</p>
<p>If that worked, then duplicate the certificate, change the description and account to the real live "Let's Encrypt" CA, save, disable the test certificate and issue the real one. Also maybe delete the test certificate from System -> Trust -> Certificates.</p>
<p>That's as far as I've got for now - I haven't actually started using the certificate for anything yet (hopefully that part will be a bit easier), but I think this is enough for one blog post. Before I go though, some food for thought - while setting this up I have been wondering if there might be any security concerns with this setup, and potentially there could be. If an attacker was using the same ISP as you they could potentially try to take your IP - say they went to your house and shut off your power at your breaker box, then started rapidly connecting and disconnecting their own internet hoping to be randomly assigned the IP address that you were using, and that your dynamic DNS entry still points to until you get back online to refresh it. If they succeed they would potentially be able to issue certificates for your domains that they could then use to masquerade as your servers in future MITM attacks. Maybe it's a good idea not to set up the wildcard _acme-challenge, so they are limited to hijacking names you intended for your home services, which are probably not going to be of much use to them anyway - sure, they could theoretically MITM you on coffee shop WiFi while you're connecting back to your home servers, but if they are capable of that you have much bigger problems on your hands. I don't think most people should be overly concerned about this, and if you are, consider asking your ISP for a static IP address - after all, if this is of legitimate concern in your threat model, it's worth remembering that there are a host of other similar issues possible with using a dynamic IP.</p>
<h2>Dealing with Ultra High Packet Loss</h2>
<p><i>DarkStarSword, 2016-01-04</i></p>
<p>"The Brown Fox was quick, even in the face of obstacles"<br />
- Ian Munsie, 2016</p>
<p>Over the last couple of weeks my Internet connection has developed a fault
which is resulting in rather high packet loss. Even a simple ping test to
8.8.8.8 shows up to about 26% packet loss! Think about that - it means that
roughly 1 in every 4 packets gets dropped. A technician from my ISP visited
last week, and a Telstra technician (the company responsible for the copper
phone lines) is coming this Friday to hopefully sort it out, but in the
meantime I'm stuck with this lossy link.</p>
<p>Trying to use the Internet with such high packet loss really reveals just how
poorly TCP is designed for this situation. See, TCP is designed around the
assumption that a lost packet means the link is congested and that it should
slow down. But that is not the case for my link - a packet has a high chance of
being dropped even when there is no congestion whatsoever.</p>
<p>That leads to a situation where anything using TCP will slow down after only a
few packets have been sent and at least one has been lost, and then a few
packets later it will slow down again, and then again, and again, and again...
While my connection should be able to maintain 300KB/s (theoretically more, but
that's a ball park figure that it has been able to achieve in practice in the
past), right now I'm only getting closer to 3KB/s, and some connections just
hang indefinitely (it's hit or miss whether I can even finish a speedtest.net
run). Interactive traffic is also affected, but fares slightly better - an HTML
page probably only needs one packet for the HTML, so there's a 3/4 chance it
will load in the first attempt... but every javascript or css file it links
only has a 3/4 chance of loading and every image has a lower chance (since they
are larger and will take several packets or more) - some of those will try
again, but some will ultimately give up when several retries also get lost.</p>
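<p>The arithmetic behind that is simple but brutal - assuming each packet is lost independently with 26% probability and ignoring retries, the chance of a resource arriving intact shrinks exponentially with its size:</p>

```python
# Chance that all n packets of a resource survive a link with
# independent per-packet loss p (ignoring retransmissions).
def success_probability(n_packets, loss=0.26):
    return (1 - loss) ** n_packets

for n in (1, 3, 10, 50):
    print(f"{n:3d} packets: {success_probability(n):7.2%} chance of arriving intact")
```

<p>A one-packet HTML page has the 3/4 chance mentioned above, but a 10-packet image is already down to around a 5% chance of making it through without any retries.</p>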
<p>Now, TCP is a great protocol - the majority of the Internet runs on it and
things generally work pretty well, but it's just not designed for this
situation (and there's a couple of other situations which it is not suitable
for either, such as very high latency links - it will not be a good choice as
we advance further into space for example). The advent of WiFi led to some
improvements in congestion avoidance protocols and tunables so that it doesn't
immediately assume that packet loss means congestion, but even then it can only
tolerate a very small amount of packet loss before performance starts to suffer
- and experimenting with different algorithms and tunables made no appreciable
difference to my situation whatsoever.</p>
<p>So, I started thinking - what we need is a protocol that does not take packet
loss to mean congestion. This protocol would instead base its estimate of the
available bandwidth on how much data is actively being received, and more to
the point - on how that figure changes as the transmission rate changes.</p>
<p>So, for instance, say it starts transmitting at (let's pick an arbitrary
number) 100KB/s and the receiver replies to tell the sender that it is
receiving 75KB/s (25% packet loss). At this point TCP would go "oh shit,
congestion - slow down!", but our theoretical protocol would instead try
sending 125KB/s to see what happens - if the receiver replies to say it is now
receiving 100KB/s then it knows that it has not yet hit the bandwidth limit and
the discrepancy is just down to packet loss. It could then increase to 200KB/s,
then 300KB/s, until finally it finds the point where the receiver is no longer
able to receive any more data.</p>
<p>It could also try reducing the data being sent - if there is no change in the
amount being received then it knows that it was sending too fast for no good
reason, while if there is a change then it knows that the original rate was ok.
The results would of course need to be smoothed out to cope with real world
fluctuations and the algorithm would have to periodically repeat this
experiment to cope with changes in actual congestion, but with some tuning the
result should be quite a bit better than what we can achieve with TCP in this
situation (at least for longer downloads over relatively low latency links that
can respond to changes in bandwidth faster - this would still not be a good
choice for space exploration).</p>
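<p>As a rough sketch of that probing idea (entirely hypothetical code, not from my implementation - the simulated link, the 25% probe step and the 0.9 tolerance are all arbitrary choices for illustration):</p>

```python
def make_lossy_link(loss=0.26, capacity=300.0):
    """Simulated link in KB/s: delivers (1 - loss) of what is sent,
    up to the link's real capacity."""
    def measure_receive_rate(send_rate):
        return min(send_rate, capacity) * (1 - loss)
    return measure_receive_rate

def probe_rate(send_rate, measure_receive_rate, step=1.25, tolerance=0.9):
    """One probing round: raise the send rate by `step` and check whether
    the receiver's reported rate rises roughly in proportion."""
    baseline = measure_receive_rate(send_rate)
    observed = measure_receive_rate(send_rate * step)
    if observed >= baseline * step * tolerance:
        return send_rate * step   # shortfall was just loss - keep probing up
    return send_rate              # receive rate stopped following - hold here

link = make_lossy_link()
rate = 100.0
for _ in range(10):
    rate = probe_rate(rate, link)
print(f"settled around {rate:.0f} KB/s")   # converges near the 300 KB/s capacity
```

<p>Despite 26% of packets going missing, the probe keeps ramping up until the receive rate stops tracking the send rate, which is exactly the behaviour the lossy link needs.</p>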
<p>This protocol would need to keep track of which packets have been transmitted
but not yet acknowledged, and just resend them after a while. It should not
slow down until all acknowledgements have been received - if it has other
packets that haven't been sent yet it could just send them and resend
unacknowledged packets a little later, or if there's only a few packets it
should just opportunistically resend them until they are acknowledged. It would
want to be a little smart in how acknowledgements themselves are handled - in
this situation an acknowledgement itself has just as much chance of being lost
as a data packet, and each lost acknowledgement would mean the packets it was
trying to acknowledge will be resent. But we can make some of these redundant
and acknowledge a packet several times to have the best chance that the sender
will see at least one acknowledgement before it tries to resend the packet.</p>
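<p>Redundant acknowledgements pay off quickly here - if each copy is lost independently, the chance the sender sees none of them drops geometrically (again assuming this link's 26% loss):</p>

```python
# Chance that ALL k copies of an acknowledgement are lost at loss rate p,
# i.e. the chance the sender needlessly resends the acknowledged packets.
p = 0.26
for k in (1, 2, 3):
    print(f"{k} cop{'y' if k == 1 else 'ies'}: {p**k:6.2%} chance none arrive")
```

<p>One extra copy already cuts the chance of a spurious resend from 26% to under 7%.</p>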
<p>So, I've started working on an implementation of this in Python. This is very
much a first cut and should largely be considered a highly experimental proof
of concept - it currently just transfers a file over UDP between two machines,
has no heuristics to estimate available bandwidth (I just tell it what rate to
run at), and its acknowledgement & resend systems need some work to
reduce the number of unnecessary packets being resent, but given this download
was previously measured in Bytes per second (and Steam was estimating this
download would take "more than 1 year", so I had my remote server download it
using steamcmd), I'd say this is a significant improvement:</p>
<pre><code>Sent: 928M/2G 31% (238337 chunks, 69428 resends totalling 270M, 101 not acked) @ 294K/s
Received: 928M (238322 chunks, 5672 duplicates) @ 245K/s
</code></pre>
<p>The "101 not acknowledged" is just due to my hardcoding that as the maximum
number of unacknowledged packets that can be pending before it starts resending
- it needs to be switched to use a length of time that has elapsed since the
packet was last sent compared to the latency of the network and some
heuristics. With some work I should also be able to get the number of
unnecessary packets being resent down quite a bit (but 5672 / 69428 is close to
10%, which is actually pretty good - this is resending 30% of the packets and
the link has 26% packet loss).</p>
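<p>Those numbers roughly line up with what the loss rate predicts - since resends can themselves be lost, a 26% loss link needs about p/(1-p) ≈ 35% resend traffic on top of the original sends no matter how clever the protocol is (the observed figure is a little under that, likely because the loss rate fluctuates and some chunks were still in flight):</p>

```python
# Minimum resend overhead on a link with independent loss p: each chunk
# needs 1/(1-p) transmissions on average, i.e. p/(1-p) extra sends.
p = 0.26
print(f"predicted minimum overhead: {p / (1 - p):.0%}")

# Figures from the transfer above:
print(f"observed resend ratio:      {69428 / 238337:.0%}")
print(f"wasted (duplicate) resends: {5672 / 69428:.0%} of resends")
```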
<p>Feel free to check out the code - just keep in mind that it is still highly
experimental (and one 3GB file I transferred earlier ended up corrupt and had
to be repaired with rsync - still need to investigate exactly what happened
there) and the usage is likely to change so I won't document how to use it
(hint: it supports --help):</p>
<p><a href="https://raw.githubusercontent.com/DarkStarSword/junk/master/quickfox.py">https://raw.githubusercontent.com/DarkStarSword/junk/master/quickfox.py</a></p>
<h2>Stereo Photography</h2>
<p><i>DarkStarSword, 2015-12-24</i></p>
<!-- If editing this remember to use the original markdown on darkstarsword.net -->
<script><!--
function change_viewing_mode(img, mode)
{
var url = "http://valen.darkstarsword.net/photos/stereo/misc/" + mode + "/" + img;
document.getElementById(img + "-img").src = url + ".jpg";
document.getElementById(img + "-a").href = url + ".html";
return false; // return from onclick to prevent a tag navigating to top of page
}
//--></script>
<p>I have two main hobbies at the moment - I'm one of the <a href="https://forums.geforce.com/member/1966196/">top currently
active</a> <a href="http://wiki.bo3b.net">shaderhackers</a> that <a href="http://helixmod.blogspot.com">make video games work in stereo 3D</a> and
one of the developers on <a href="https://github.com/bo3b/3Dmigoto/releases">3DMigoto</a> to make this possible, and I am also
into photography. I sometimes combine both of these hobbies as well, in the
form of stereo photography and was recently asked about this subject.</p>
<p>Stereo photography can become a rather tricky subject due to some (unsolvable)
technical issues I'll touch on a little below, but it can be quite fun
nevertheless.</p>
<h3>Camera</h3>
<p>The camera I mostly use for this is a Fujifilm FinePix Real 3D W3:</p>
<p><a href="https://en.wikipedia.org/wiki/Fujifilm_FinePix_Real_3D">https://en.wikipedia.org/wiki/Fujifilm_FinePix_Real_3D</a></p>
<p>It includes two lenses separated by some distance similar to human eyes (it's
actually a little wider than my eyes) and takes two photos of the same subject
simultaneously from different perspectives (it has other 3D modes as well, but
nothing that couldn't be done with a regular camera). It also has a
glasses-free 3D display built into the camera ("sweet spot" based, meaning you
have to look at it straight on), which allows you to see in advance how the 3D
photos will look, and is handy to show subjects themselves in 3D, which they
always like.</p>
<p>It is also possible to take stereo photographs with any camera by taking two
photos from slightly different perspectives, but this can be difficult to get
the orientation right between the two, and if the subject moves (or wind blows
a leaf, etc) it means the photos will not quite match up between both eyes.
There are various rigs available to remove some of the error from this process.</p>
<p>It is also possible to use two individual (preferably identical) cameras
simultaneously if their settings (focal point, focal length, f-stop) are
identical and their shutters are synchronised. At some point, I'd really like
to try this set up using two DSLRs with "tilt-shift" lenses rather than
ordinary lenses as my experience working with stereo projections in computer
graphics leads me to believe that could result in a superior stereo photograph
if setup correctly with a known display size, but trying that would be somewhat
expensive and I have never heard of anyone else doing it.</p>
<h3>Viewing Options</h3>
<p>There are a number of options available to view a stereo photograph, each with
their advantages and disadvantages: computer monitors, TVs or projectors using
either active shutter glasses or passive polarised glasses, anaglyph (red-cyan)
glasses with any display, displays / photographs with a lenticular lens array
over the top for glasses-free 3D viewing, or just simply using the cross-eyed
or distance viewing techniques to see a 3D photo with no special display, or by
using the mirror technique.</p>
<p>I personally have a laptop with a 3D display (no longer being manufactured),
and a 3D DLP projector (BenQ W1070).</p>
<p>3D computer monitors usually use nvidia 3D Vision and are 120Hz (or higher)
active displays and the V2 ones feature a low-persistence backlight (turns off
while the glasses change eyes to reduce crosstalk and increase the perceived
brightness). These use nvidia's proprietary active shutter glasses, which are
60Hz per eye. These types of displays are a pretty good choice, but do suffer
from some degree of crosstalk, and depend on nvidia's proprietary drivers
(also, for Linux the documentation suggests that a Quadro card may be
required, though I have seen reports that it might be possible to make it
work with a GeForce card like we do in Windows).</p>
<p>3D televisions have several different 3D formats they may use. Side-by-side is
usually the easiest option (though not necessarily the best as it halves the
horizontal resolution) and is supported by geeqie and mplayer. 3D televisions
are a poor choice for stereo content as they tend to suffer from exceptionally
bad crosstalk thanks to the long time it takes the pixels to change (that is,
each eye can see part of the image intended for the other eye), and they tend
to have pretty high latency (fine for photos, not good for gaming), but have
the advantage that they are fairly common and you may already have one. Which
glasses they use and whether they are active or passive will depend on the
specific TV. I believe that some use DLP glasses, which are standard.</p>
<p>For 3D projectors we only really consider 3D DLP projectors. These are similar
to 3D TVs, but they are generally a much better choice - they have zero
crosstalk thanks to the speed at which the DLP mirrors are able to switch (much
faster than even the best LCD) and when used for gaming are generally much
lower latency than TVs. Their disadvantages are the space required (short throw
versions are available for smaller rooms), the need to keep the room dark (or
use a rather expensive black projector screen), and the need to replace the
bulb every now and then. The active DLP glasses they use follow a standard, so
you are not forced to use the projector's brand of glasses, though beware that
the projector probably won't come with any and they will need to be purchased
separately. The IR signal used to synchronise the glasses is emitted from the
projector and simply bounced off the projector screen.</p>
<p>Given the typical screen size of a projector, these have the highest risk of
violating infinity for pre-rendered content (displaying the left and right
images of an object further apart than your eyes), and photos may require a
parallax adjustment to offset their left and right images before they can be
viewed comfortably. Movies are already calibrated for a larger screen (IMAX),
so there is no need to worry there (but 3D movies also generally suck as a
result of this), and games can calibrate to whatever screen size they are being
used with for the best result.</p>
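<p>As a rough illustration of the parallax adjustment described above, here is a
small Python sketch. The function name and every number in it (image width,
screen width, separation, the 6.5cm interocular figure) are illustrative
assumptions, not measurements from any particular setup:</p>
<div class="scriptexcerpt"><pre>
```python
# Hypothetical numbers - substitute measurements for your own display and photos.
INTEROCULAR_CM = 6.5  # typical adult eye separation

def required_parallax_shift(max_sep_px, image_width_px, screen_width_cm):
    """Pixels to shift the left/right images towards each other so the
    widest on-screen separation in the photo stays within the viewer's eyes."""
    cm_per_px = screen_width_cm / image_width_px
    max_sep_cm = max_sep_px * cm_per_px
    if max_sep_cm <= INTEROCULAR_CM:
        return 0  # already comfortable - no adjustment needed
    excess_cm = max_sep_cm - INTEROCULAR_CM
    return int(excess_cm / cm_per_px + 0.5)

# A 1920px wide photo on a 200cm wide projector screen, where the most
# distant object is 80px apart between the two images:
print(required_parallax_shift(80, 1920, 200))   # needs a shift
print(required_parallax_shift(80, 1920, 50))    # fine on a small monitor
```
</pre></div>
<p>The same photo that is comfortable on a monitor can need a substantial offset
on a projector, which is why the adjustment depends on the display rather than
the camera.</p>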
<p>Anaglyph glasses are a low-cost option ($2 from ebay) that can be used with any
display, but I would not recommend this for anything other than trying out 3D
since the false colours and high crosstalk result in eye-strain. I cannot
tolerate anaglyph for more than a few minutes, whereas I can comfortably wear
active shutter glasses all day with 3D games. In Linux, geeqie and mplayer can
both output stereo content in several forms of anaglyph (compromising between
more realistic colours and less crosstalk between the eyes).</p>
<p>Displays with a lenticular lens array do not require glasses to view - the
Fujifilm camera I use has one of these on the back. They usually require the
viewer to have their head in a specific position ("sweet spot") however, though
there are some that use eye-tracking to compensate for this in real time and
can support a very small number of viewers anywhere in the room (I'm not sure
if any of those are consumer grade yet though).</p>
<p>Fujifilm also produces a 3D photo frame aimed at users of their camera,
with the same sort of lenticular lens array over it. I have yet to purchase
this as I have my doubts about its general usefulness: the fact that it
still has a sweet spot means the viewer must stand in a specific spot and
cannot enjoy the photos from anywhere in the room.</p>
<p>It is also possible to print out a photo with the left and right views
interlaced and place a lenticular lens array on the photo itself, allowing for
3D prints. Fujifilm has a service to do this, but it is not available in
Australia and I have yet to track down an alternative print service available
here. Apparently it is possible to purchase the supplies to do this yourself.</p>
<p>The cross-eyed and distance viewing methods do not require any special displays
as they are simply a technique you can use to view a pair of stereo images
placed side-by-side. The images must be fairly close together and should not be
more than about 7cm or so wide, perhaps even less. The further apart the images
are on the screen, the harder these techniques are to achieve. These will not
give you the full impact of glasses with a full 3D display, but they
don't cost anything and with a bit of practice can become easy.</p>
<p>This is an example of a photo I took with the left and right reversed for
cross-eyed viewing. The trick is to go cross-eyed until the two images merge
into one. To help practice this technique, hold your finger up half way between
your face and the display and look at your finger instead of the display. Focus
on your finger and slowly move it forwards or backwards until the images on the
display behind it have merged together, then try to refocus your eyes on the 3D
image without pointing them back at the screen. It may take a few attempts
while you get used to the technique.</p>
<p><a href="http://valen.darkstarsword.net/photos/stereo/workshops/crosseyed/reaching.html">
<img src="http://valen.darkstarsword.net/photos/stereo/workshops/crosseyed/reaching.jpg" style="width: 100%;" />
</a></p>
<p>This image is set up for the distance viewing method. For this method you need
to relax your eyes and allow them to defocus from the screen and look behind
the display until the images merge, then try to refocus on the image without
looking back at the screen.</p>
<p><a href="http://valen.darkstarsword.net/photos/stereo/misc/distance/DSCF0292.html">
<img src="http://valen.darkstarsword.net/photos/stereo/misc/distance/DSCF0292.jpg" style="width: 100%;">
</a></p>
<p>The mirror technique works by placing a mirror in front of your nose (in this
case facing to the left) so you can see a reflection of the image in the
mirror. Focus on the image in the mirror and it should pop into 3D. This can be
easier than the above techniques since it does not require your eyes to be
looking in a different direction to their focus, and can comfortably be used to
view larger stereo images, though it can be difficult to fit the entire image
in the mirror (you may have to move your head back or forwards). Also, since
most mirrors are imperfect (especially at this angle) they may show a double
image (click for a larger version, which may make this technique easier to use):</p>
<p><a href="http://valen.darkstarsword.net/photos/stereo/misc/mirrorl/DSCF1270.html">
<img src="http://valen.darkstarsword.net/photos/stereo/misc/mirrorl/DSCF1270.jpg" style="width: 100%;">
</a></p>
<h3>Subjects</h3>
<p>I've found that there are certain subjects that work well in stereo that don't
work at all in 2D, yet just as many that work better in 2D than 3D. If you ever
see a scene that looks really interesting to your eyes, but plain and
uninteresting on a 2D photo when the depth has been lost (or replaced with a
depth of field blur) it might just be a candidate to try in stereo - here's a
good example of this:</p>
<p><a id="DSCF0074-a" href="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF0074.html">
<img id="DSCF0074-img" src="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF0074.jpg" style="width: 100%;"></a><br />
<a href="#" onclick="return change_viewing_mode('DSCF0074', 'crosseyed');">Crosseyed</a>
<a href="#" onclick="return change_viewing_mode('DSCF0074', 'distance');">Distance</a>
<a href="#" onclick="return change_viewing_mode('DSCF0074', 'mirrorl');">Mirror Left</a>
<a href="#" onclick="return change_viewing_mode('DSCF0074', 'mirrorr');">Mirror Right</a>
<a href="#" onclick="return change_viewing_mode('DSCF0074', 'anaglyph');">Anaglyph</a>
</p>
<p>In 2D all the rocks blend together and it becomes a plain and uninteresting
shot, but in 3D the individual rocky outcroppings can easily be distinguished
from one another and the shot is interesting. Here's another example:</p>
<p><a id="DSCF1991-a" href="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1991.html">
<img id="DSCF1991-img" src="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1991.jpg" style="width: 100%;"></a><br />
<a href="#" onclick="return change_viewing_mode('DSCF1991', 'crosseyed');">Crosseyed</a>
<a href="#" onclick="return change_viewing_mode('DSCF1991', 'distance');">Distance</a>
<a href="#" onclick="return change_viewing_mode('DSCF1991', 'mirrorl');">Mirror Left</a>
<a href="#" onclick="return change_viewing_mode('DSCF1991', 'mirrorr');">Mirror Right</a>
<a href="#" onclick="return change_viewing_mode('DSCF1991', 'anaglyph');">Anaglyph</a>
</p>
<p>In 2D there is nothing interesting about this shot and I would delete it, but
in 3D the depth of the hole is apparent and the shot is interesting (still not
really a keeper, just interesting to show the 3D).</p>
<p>If the subject will not gain much from 3D, it may be better shot with the
additional control that a DSLR provides in 2D and without the technical
problems that stereo photography brings. 3D tends to work better for closer
subjects rather than those further away, and when the subject links multiple
depths together.</p>
<p>If the subjects are too far away or too far apart they may appear as layered 2D
images, which can be ok, but does not really do stereo photography justice.
Zooming in on a distant subject with the camera will not provide the same
stereo effect as moving closer to it (the same thing happens in 2D - you might
be familiar with the dolly zoom effect, but in 3D it is far more pronounced).</p>
<p>For instance, this photo did not gain much from being shot in stereo as
everything is just too far away and the effect is not very pronounced
(displaying this on a larger screen may help a little):</p>
<p><a id="DSCF1318-a" href="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1318.html">
<img id="DSCF1318-img" src="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1318.jpg" style="width: 100%;"></a><br />
<a href="#" onclick="return change_viewing_mode('DSCF1318', 'crosseyed');">Crosseyed</a>
<a href="#" onclick="return change_viewing_mode('DSCF1318', 'distance');">Distance</a>
<a href="#" onclick="return change_viewing_mode('DSCF1318', 'mirrorl');">Mirror Left</a>
<a href="#" onclick="return change_viewing_mode('DSCF1318', 'mirrorr');">Mirror Right</a>
<a href="#" onclick="return change_viewing_mode('DSCF1318', 'anaglyph');">Anaglyph</a>
</p>
<p>Stereo photography can work especially well to show detail that is lost in a 2D
image - most photographers will see running water and immediately set their
camera to use a longer exposure time to get that classic artistic streaking
effect, but in 3D you might do the opposite and try to freeze the water in the
frame so you can examine its structure in detail (I have better examples, but
not that I can post here):</p>
<p><a id="DSCF1317-a" href="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1317.html">
<img id="DSCF1317-img" src="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1317.jpg" style="width: 100%;"></a><br />
<a href="#" onclick="return change_viewing_mode('DSCF1317', 'crosseyed');">Crosseyed</a>
<a href="#" onclick="return change_viewing_mode('DSCF1317', 'distance');">Distance</a>
<a href="#" onclick="return change_viewing_mode('DSCF1317', 'mirrorl');">Mirror Left</a>
<a href="#" onclick="return change_viewing_mode('DSCF1317', 'mirrorr');">Mirror Right</a>
<a href="#" onclick="return change_viewing_mode('DSCF1317', 'anaglyph');">Anaglyph</a>
</p>
<p>In video games, playing in stereo brings out a lot of detail that players would
usually ignore - grass, leaves and rocks are no longer just there to "not look
weird because they are missing" - they now have real detail and players will
stop and admire just how much effort the 3D artist put into them (or in some
cases how little). The same works in a stereo photo - if I were taking these in
2D I would probably have focused on an individual flower or leaf and used depth
of field to emphasise it, but in 3D the wider scene is interesting as the
detail on every single flower, leaf and blade of grass is apparent (if
possible, best viewed on a larger screen to see the detail more clearly):</p>
<p><a id="DSCF1217-a" href="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1217.html">
<img id="DSCF1217-img" src="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1217.jpg" style="width: 100%;"></a><br />
<a href="#" onclick="return change_viewing_mode('DSCF1217', 'crosseyed');">Crosseyed</a>
<a href="#" onclick="return change_viewing_mode('DSCF1217', 'distance');">Distance</a>
<a href="#" onclick="return change_viewing_mode('DSCF1217', 'mirrorl');">Mirror Left</a>
<a href="#" onclick="return change_viewing_mode('DSCF1217', 'mirrorr');">Mirror Right</a>
<a href="#" onclick="return change_viewing_mode('DSCF1217', 'anaglyph');">Anaglyph</a>
</p>
<p><a id="DSCF1224-a" href="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1224.html">
<img id="DSCF1224-img" src="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1224.jpg" style="width: 100%;"></a><br />
<a href="#" onclick="return change_viewing_mode('DSCF1224', 'crosseyed');">Crosseyed</a>
<a href="#" onclick="return change_viewing_mode('DSCF1224', 'distance');">Distance</a>
<a href="#" onclick="return change_viewing_mode('DSCF1224', 'mirrorl');">Mirror Left</a>
<a href="#" onclick="return change_viewing_mode('DSCF1224', 'mirrorr');">Mirror Right</a>
<a href="#" onclick="return change_viewing_mode('DSCF1224', 'anaglyph');">Anaglyph</a>
</p>
<p><a id="DSCF1214-a" href="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1214.html">
<img id="DSCF1214-img" src="http://valen.darkstarsword.net/photos/stereo/misc/crosseyed/DSCF1214.jpg" style="width: 100%;"></a><br />
<a href="#" onclick="return change_viewing_mode('DSCF1214', 'crosseyed');">Crosseyed</a>
<a href="#" onclick="return change_viewing_mode('DSCF1214', 'distance');">Distance</a>
<a href="#" onclick="return change_viewing_mode('DSCF1214', 'mirrorl');">Mirror Left</a>
<a href="#" onclick="return change_viewing_mode('DSCF1214', 'mirrorr');">Mirror Right</a>
<a href="#" onclick="return change_viewing_mode('DSCF1214', 'anaglyph');">Anaglyph</a>
</p>
<h3>Issues</h3>
<p>The Fujifilm camera should only be used in landscape orientation when both
lenses are used since the lenses must be aligned horizontally - otherwise the
images will be misaligned between the eyes and will cause eye-strain and will
not be pleasant to view in stereo (if possible at all). This can be corrected
in post, but only to a point - if the photo was a full 90
degrees out it will not be possible to correct (you could still salvage either
of the two images as a 2D photo).</p>
<p>That's not to say that portraits can't be taken in stereo, but the lenses have
to be aligned horizontally, whether that means using a different rig, or taking
a wider angle landscape shot and cropping it to portrait.</p>
<p>A stereo camera sees the world in much the same way our eyes do:</p>
<pre><code>\ \ / /
\ \ / /
\ \/ /
\ /\ /
\/ \/
</code></pre>
<p>But the problem is that this view is not beamed directly into our eyes - it has
to be displayed on an intermediate display, and we don't quite see that display
the same way. There's not much that can be done about this in photography or
cinematography, which is one of several reasons that 3D movies are usually not
considered to be very good. I do think that a pair of tilt-shift lenses could
help here, but even that would not help with the fact that we do not know ahead
of time what size display will be used to view the image later.</p>
<p>The reason the display size is important is that if the left and right images
of an object are displayed on the screen further apart than the viewer's eyes
(regardless of how far away the display is), the object will appear to be
beyond infinity, which quickly becomes uncomfortable or impossible to view. The
only way to combat this is to shift the offset of the two images until nothing
is more than 7cm apart on the largest display it might ever be displayed on.
Displaying the content on a smaller screen will quickly diminish the strength
of the stereo effect - this is another reason that 3D movies are considered
poor: calibrated for screens like the IMAX theatre in Sydney, their 3D effect
is reduced on anything smaller, and by the time you are viewing one in a home
theatre there is almost no 3D left.</p>
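<p>The linear scaling is easy to see with a few made-up numbers. This Python
sketch (the function name, the 37cm laptop width and the other screen widths
are all illustrative assumptions) scales the on-screen separation of the most
distant object between displays and flags when the roughly 7cm limit is
violated:</p>
<div class="scriptexcerpt"><pre>
```python
MAX_SEP_CM = 7.0  # roughly the viewer's eye separation

def separation_on_screen(calibrated_sep_cm, calibrated_width_cm, target_width_cm):
    """On-screen separation scales linearly with display width."""
    return calibrated_sep_cm * target_width_cm / calibrated_width_cm

# Content calibrated so the furthest object sits 6.5cm apart on a
# 37cm wide 17" laptop display:
for width_cm, name in [(37, '17" laptop'), (90, '40" TV'), (250, 'projector')]:
    sep = separation_on_screen(6.5, 37, width_cm)
    status = "violates infinity" if sep > MAX_SEP_CM else "ok"
    print(f"{name}: {sep:.1f}cm ({status})")
```
</pre></div>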
<p>But video games do not suffer this same problem - they are rendered live and
know the size of the display they are being rendered on, and can use this
information to skew the projection so the viewing frustum for each eye will
touch the edge of the screen at the point of convergence, plus they can dial
the overall strength of the 3D effect and the point of convergence up and down
as desired:</p>
<pre><code>\- \ / -/
\- \ / -/
\- \ / -/
\- \ screen of / -/
\-\ known size /-/
\-----------------/ <-- point of convergence
\\- -//
\ \- -/ /
\ \- -/ /
\ \-/ /
\ -/ \- /
o o
</code></pre>
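<p>The usual way this is expressed is a per-eye horizontal offset that is zero at
the convergence depth (the screen plane) and approaches the full separation at
infinity. This Python sketch follows that commonly described formula - the
function name and the separation/convergence values are illustrative
assumptions, not taken from any particular driver:</p>
<div class="scriptexcerpt"><pre>
```python
def eye_parallax(depth, separation, convergence):
    """Horizontal offset applied for one eye. Objects at the convergence
    depth get zero parallax (they sit at the screen plane); objects at
    infinity approach the full separation; closer objects go negative
    (they pop out of the screen)."""
    return separation * (1.0 - convergence / depth)

# separation/convergence in arbitrary world units, illustrative only:
sep, conv = 0.06, 10.0
for depth in (5.0, 10.0, 100.0, 1e9):
    print(f"depth {depth:g}: parallax {eye_parallax(depth, sep, conv):+.4f}")
```
</pre></div>
<p>Because the offset saturates at the separation value, a game can clamp the
effect so nothing on screen ever exceeds the viewer's eye separation,
regardless of scene depth.</p>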
<p>3D screenshots of games are still a problem however - if they are scaled up to
a larger display they may violate infinity, and if they are scaled down to a
smaller display they will have a reduced 3D effect. Now that you know this,
here are some screenshots I have taken in various games that are calibrated to
a 17" display for a comparison of how they look compared to the photos:</p>
<p><a href="http://photos.3dvisionlive.com/DarkStarSword/">http://photos.3dvisionlive.com/DarkStarSword/</a></p>
<p>Without the nvidia plugin that site is pretty useless, but I made a user script
to add a download button to it to get at the raw side-by-side images that can
be saved as .jps files and opened with a stereo photo viewer such as geeqie
(Linux), sView (Windows, Linux, Mac, Android) or nvidia photo viewer (Windows):</p>
<p><a href="https://github.com/DarkStarSword/3d-fixes/raw/master/3dvisionlive_download_button.user.js">https://github.com/DarkStarSword/3d-fixes/raw/master/3dvisionlive_download_button.user.js</a></p>
DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com3tag:blogger.com,1999:blog-6485167114445349071.post-16212974722598129352012-08-28T16:09:00.001+10:002012-08-28T17:53:39.237+10:00Nokia N9 Bluetooth PAN, USB & Dummy Networks<p>Please note: All of these instructions assume you have developer mode enabled
and are familiar with using the Linux console. One of the variants of dummy
networking I present here also requires a package to be installed with
Inception or use of an open-mode kernel to disable aegis. I present an
alternative method to use a pseudo-dummy network for people who do not wish to
do that.</p>
<h3>Background</h3>
<p>Earlier this year I bought a Nokia N9 (then took it in for service TWICE due to
a <a href="http://talk.maemo.org/showthread.php?t=82011">defective GPS</a>, then returned it for a refund since Nokia had returned it
un-repaired both times, then bought a new one for $200 less than I originally
paid, then bought a second for my fiancé).</p>
<p>The SIM card I use in the N9 is a pretty basic TPG $1/month deal, which is fine
for the small amount of voice calls I make, but its 50MB of data per month is
not really enough, so I'd like it to use alternative networks wherever possible.</p>
<p>When working on another computer with an Internet connection, I could simply
hook up the N9 via <a href="#usb">USB networking</a> and have the computer give it a route to the
Internet. That works well, but has the problem that any applications using the
N9's Internet Connectivity framework (anything designed for the platform is
supposed to do this via libconic) would not know that there was an Internet
connection and would refuse to work - so I had to find a way to convince them
that there was an active Internet connection using a <a href="#dummy">dummy
network</a>. Also, this obviously wouldn't work when I was away from a
computer.</p>
<p>I also happen to carry a pure data SIM card in my Optus MyTab with me all the
time (being my primary Internet connection), so when I'm on the go I'd like to
be able to connect to the Internet on the N9 via the tablet rather than use the
small amount of data from the TPG SIM.</p>
<p>The MyTab is running CyanogenMod 7 (I'm not a fan of Android, but at $130 to
try it out the price was right), so I am able to switch on the WiFi tethering
on the tablet and connect that way, but it has a couple of problems:</p>
<ul>
<li> It needs to be manually activated before use</li>
<li> It needs to be manually deactivated to allow the bluetooth tethering to work</li>
<li> It isn't very stable (holding a wakelock helps a lot - the terminal application can be used for this purpose)</li>
<li> It's a bit of a battery drain (at least the tablet has a huge battery)</li>
</ul>
<p>The MyTab also supports tethering over bluetooth PAN (which I regularly use
at home), so it made a lot of sense to me to connect the N9 to the tablet using
that as well when I am out and about. Unfortunately, the N9 does not come with
any software to connect to a bluetooth network, and I couldn't manage to find
anyone else who had successfully done this (there are a couple of threads
discussing it).</p>
<p>Fortunately, the N9 has a normal Linux userspace under the hood (one reason I'd
take this over Android any day), which includes bluez 4.x and as such I was
able to use that to make it do <a href="#blue_pan">bluetooth PAN</a>.</p>
<a name="usb"><h3>USB Network</h3></a>
<p>Let's start with USB Networking since it is already supported on the N9 and
works out of the box once developer mode is enabled (select SDK mode when
plugging in).</p>
<p>Here's a few tricks you can do to streamline the process of using the USB
network to gain an Internet connection. You will also want to follow the steps
under one of the Dummy Networking sections below to allow applications (such as
the web browser) to use it.</p>
<p>On the host, add this section to your <code>/etc/network/interfaces</code>
(this is for Debian based distributions; if you use something else you will
have to work out the equivalent):</p>
<div class="scriptexcerpt"><pre>
allow-hotplug usb0
iface usb0 inet static
address 192.168.2.14
netmask 255.255.255.0
up iptables -t nat -I POSTROUTING -j MASQUERADE
up iptables -A FORWARD -i usb0 -j ACCEPT
up iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
up echo 1 > /proc/sys/net/ipv4/ip_forward
down echo 0 > /proc/sys/net/ipv4/ip_forward
down iptables -F FORWARD
down iptables -t nat -F POSTROUTING
</pre></div>
<p>Next, modify the same file on the N9 so that the usb0 section looks like this (this section already exists - I've just extended it a little):</p>
<div class="scriptexcerpt"><pre>
auto usb0
iface usb0 inet static
address 192.168.2.15
netmask 255.255.255.0
gateway 192.168.2.14
up /usr/lib/sdk-connectivity-tool/usbdhcpd.sh 192.168.2.14
down /usr/lib/sdk-connectivity-tool/usbdhcpd.sh stop
up echo nameserver 208.67.222.222 >> /var/run/resolv.conf
up echo nameserver 208.67.220.220 >> /var/run/resolv.conf
down rm /var/run/resolv.conf
</pre></div>
<p>Now whenever you plug in the N9 and choose SDK mode it should automatically
get an Internet connection with no further interaction required and you should
be able to ping hosts on the Internet :)</p>
<p>But, you will probably notice that most applications (like the web browser)
will still bring up the "Connect to internet" dialog whenever you use them and
will refuse to work. To make these applications work we need to create a <a href="#dummy">dummy
network</a> that they can "connect" to, while in reality they actually use the USB
network.</p>
<ul>
<lh><b>USB Networking Notes:</b></lh>
<li>The iptables commands on the host will alter the firewall and routing rules
to allow the N9 to connect to the Internet through the host. If you use your
own firewall with other forwarding rules you may want to remove those lines
and add the appropriate rules to your firewall instead.</li>
<li> The above commands will turn off all forwarding on the host and flush the
FORWARD and POSTROUTING chains when the N9 is unplugged - if your host is a
router for other things you will definitely want to remove those lines.</li>
<li> The two IP addresses used for the DNS lookups on the N9 are those of
<a href="http://www.opendns.org">OpenDNS.org</a> - you might want to replace
them with some other appropriate servers. OpenDNS should be accessible from
any Internet connection, which is why I chose them.</li>
<li> The N9 will use the most recently modified file under /var/run/resolv.conf*
(specifically those listed in /etc/dnsmasq.conf) for DNS lookups. Which means
that connecting to a WiFi/3G network AFTER bringing up the USB network would
override the DNS settings. I suggest setting the DNS settings for your dummy
network to match to avoid that problem.</li>
<li> The N9 doesn't run the down rules when it should, rather they seem to be
delayed until the USB cable is plugged in again, when they are run
immediately before the up rules. Because of the previous note, this isn't
really an issue for the dnsmasq update, but it may be an issue if you wanted
to do something more advanced.</li>
<li> Alternatively, there is an icd2 plugin for USB networking for the N900
available on <a href="https://maemo.gitorious.org/icd2-network-modules">gitorious</a>. I haven't had
a look at this yet to see if it works on the N9 or how it compares to the
above technique. This would require installation with Inception.</li>
</ul>
<a name="dummy"><h3>Dummy Network</h3></a>
<p>This approach to setting up a dummy network isn't for everyone. You are going
to need to compile a package in the Harmattan <em>platform</em> SDK (or bug me to
upload the one I built somewhere) and <a href="#inception">install it on the device with
Inception</a>, or use an open mode kernel. If you don't feel comfortable with
this, you might prefer to use the technique discussed in the <a href="#dummy_alt">Alternative Dummy
Network</a> section instead.</p>
<p>First grab the dummy icd plugin from <a href="https://maemo.gitorious.org/icd2-network-modules">https://maemo.gitorious.org/icd2-network-modules</a></p>
<code><pre>
[host]$ cd /scratchbox/users/$USER/home/$USER
[host]$ git clone git://gitorious.org/icd2-network-modules/libicd-network-dummy.git
[host]$ scratchbox
[sbox]$ sb-menu
Select -> HARMATTAN_ARMEL
[sbox]$ cd libicd-network-dummy
[sbox]$ dpkg-buildpackage -rfakeroot
</pre></code>
<p>Now copy /scratchbox/users/$USER/home/$USER/libicd-network-dummy_0.14_armel.deb
to the N9, then install and configure it on the N9 with:</p>
<code><pre>
[N9]$ /usr/sbin/incept libicd-network-dummy_0.14_armel.deb
[N9]$ gconftool-2 -s -t string /system/osso/connectivity/IAP/DUMMY/type DUMMY
[N9]$ gconftool-2 -s -t string /system/osso/connectivity/IAP/DUMMY/name 'Dummy network'
[N9]$ devel-su
[N9]# /sbin/initctl restart xsession/icd2
</pre></code>
<p>Next time the connect to Internet dialog appears you should see a new entry
called 'Dummy network' that you can "connect" to so that everything thinks
there is an Internet connection, while they really use your USB or bluetooth
connection.</p>
<a name="dummy_alt"><h3>Alternative Dummy Network</h3></a>
<p>This isn't ideal in that it enables the WiFi and creates a network that
nearby people can see, but it does have the advantage that it works out
of the box and does not require Inception or Open Mode.</p>
<p>Open up <code>settings -> internet connection -> create new connection</code></p>
<p>Fill out the settings like this:</p>
<div class="scriptexcerpt"><pre>
Connection name: dummy
Network Name (SSID): dummy
Use Automatically: No
network mode: ad hoc
Security method: None
</pre></div>
<p>Under Advanced settings, fill out these:</p>
<div class="scriptexcerpt"><pre>
Auto-retrieve IP address: No
IP address: 0.0.0.0
Subnet mask: 0.0.0.0
Default gateway: 0.0.0.0
Auto-retrieve DNS address: No
Primary DNS address: 208.67.222.222
Secondary DNS address: 208.67.220.220
</pre></div>
<p>These are the <a href="http://opendns.org">OpenDNS.org</a> DNS servers -
feel free to substitute your own.</p>
<p>Then if the 'Connect to internet' dialog comes up you can connect to 'dummy',
which will satisfy that while leaving your real USB/bluetooth network
alone.</p>
<a name="blue_pan"><h3>Bluetooth Personal Area Networking (PAN)</h3></a>
<p>This is very much a work in progress that I hope to polish up and eventually
package up and turn into an icd2 plugin so that it will nicely integrate into
the N9's internet connectivity framework.</p>
<p>First things first - you will need to enable the bluetooth PAN plugin on the N9, by finding the line DisablePlugins in <code>/etc/bluetooth/main.conf</code> and removing 'network' from the list so that it looks something like:</p>
<div class="scriptexcerpt"><pre>
[General]
# List of plugins that should not be loaded on bluetoothd startup
# DisablePlugins = <strike>network,</strike>hal
DisablePlugins = hal
# Default adaper name</pre><i>...</i></div>
<p>Then restart bluetooth by running:</p>
<code><pre>
[N9]$ devel-su
[N9]# /sbin/initctl restart xsession/bluetoothd
</pre></code>
<p>Until I package this up more nicely you will need to download my bluetooth
tethering script from:</p>
<a href="https://raw.github.com/DarkStarSword/junk/master/blue-tether.py">https://raw.github.com/DarkStarSword/junk/master/blue-tether.py</a>
<p>You will need to edit the dev_dbaddr in the script to match the bluetooth
device you are connecting to. Note that I will almost certainly change this to
read from a config file in the very near future, so you should double check the
instructions in the script first.</p>
<p>Put the modified script on the N9 under /home/user/blue-tether.py</p>
<p>You first will need to pair with the device you are connecting to in the N9's
bluetooth GUI like usual.</p>
<p>Once paired, you may run the script from the terminal with <code>develsh -c ./blue-tether.py</code></p>
<p>The bluetooth connection will remain up until you press enter in the terminal
window. Currently it does not detect if the connection goes away, so you would
need to restart it in that case.</p>
<p>For convenience you may create a desktop entry for it by creating a file under /usr/share/applications/blue-tether.desktop with this contents:</p>
<div class="scriptexcerpt"><pre>
[Desktop Entry]
Type=Application
Name=Blue Net
Categories=System;
Exec=invoker --type=e /usr/bin/meego-terminal -n -e develsh -c /home/user/blue-tether.py
Icon=icon-m-bluetooth-lan
</pre></div>
<p>Again, this is very much an active work in progress - expect to see a packaged
version soon, and hopefully an icd2 plugin before too long.</p>
<h3>One Outstanding Graphical Niggle</h3>
<p>You may have noticed that the dummy plugin doesn't have its own icon - in the
connect to Internet dialog it seems to pick a random icon, and once connected
the status bar displays it as though it was a cellular data connection. As far
as I can tell, the icons (and other connectivity related GUI elements) are
selected by /usr/lib/conniaptype/lib*iaptype.so which is loaded by
/usr/lib/libconinetdui.so which is in turn used by /usr/bin/sysuid. I haven't
managed to find any API references or documentation for these, and I suspect
that, being part of Nokia's GUI, they fall into the firmly closed source side of
Harmattan. This would be nice to do properly if I want to create my own icd2
plugins, so if anyone has some pointers for this, please leave a note in the
comments.</p>
<ul>
<lh>The icd2 API (i.e. the non-GUI parts) are documented here:</lh>
<li><a href="http://harmattan-dev.nokia.com/docs/platform-api-reference/showdoc.php?pkn=icd2-public&wb=daily-docs&url=Li94bWwvZGFpbHktZG9jcy9pY2QyLXB1YmxpYw%3D%3D">icd2-public</a></li>
<li><a href="http://harmattan-dev.nokia.com/docs/platform-api-reference/xml/daily-docs/icd2-public/group__network__module__api.html">icd2 Network Module API</a></li>
</ul>
<a name="inception"><h3>Why is Inception required for real dummy networking?</h3></a>
<p>Well, it's because the Internet Connectivity Daemon requests CAP::sys_module
(i.e. The capability to load kernel modules):</p>
<code><pre>
~ $ ariadne sh
Password for 'root':
/home/user # accli -I -b /usr/sbin/icd2
Credentials:
UID::root
GID::root
CAP::kill
CAP::net_bind_service
CAP::net_admin
CAP::net_raw
CAP::ipc_lock
<b>CAP::sys_module</b>
SRC::com.nokia.maemo
AID::com.nokia.maemo.icd2.
icd2::icd2
icd2::icd2-plugin
Cellular
</pre></code>
<p>Because of this, aegis will only allow it to load libraries that originated
from a source that has the ability to grant CAP::sys_module, which
unfortunately (but understandably given what the capability allows) is only
the system firmware by default, so attempting to load it would result in this
(in dmesg):</p>
<code><pre>
credp: icd2: credential 0::16 not present in source SRC::9990007
Aegis: credp_kcheck failed 9990007 libicd_network_dummy.so
Aegis: libicd_network_dummy.so verification failed (source origin check)
</pre></code>
<p>Ideally the developers would have thought of this and separated the kernel
module loading out into a separate daemon so that icd2 would not require this
credential and therefore would allow third-party plugins to be loaded, but
since that is not the case we have to use Inception to install the dummy plugin
from a source that has the ability to grant the same permissions that the
system firmware enjoys (Note that the library does not actually request any
permissions because libraries always inherit the permissions of the binary that
loaded them - it just needs to have come from a source that could have granted
it that permission).</p>
<p>Also, if anyone could clarify what the icd2::icd2-plugin credential is for I
would appreciate it - I feel like I've missed something because its purpose as
documented (to load icd2 plugins) seems rather pointless to me (icd2 loads
libraries based on gconf settings, which it can do just as well without this
permission... so what is the point of this?).</p>
DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com17tag:blogger.com,1999:blog-6485167114445349071.post-35667982648999121562012-02-23T15:31:00.010+11:002012-09-05T01:19:39.680+10:00Tiling tmux KeybindingsWhen most people use a computer, they are using either a compositing or stacking window manager - which basically means that windows can overlap. The major alternative to this model is known as a tiling window manager, where the window manager lays out and sizes windows such that they do not overlap each other.<br /><br />I started using a tiling window manager called <a href="http://wmii.suckless.org/">wmii</a> some years ago after buying a 7" EeePC netbook and trying to find alternative software more suited to the characteristics of that machine. Most of the software I ended up using on that machine I now use on all of my Linux boxes, because I found that it suits my workflow so much better.<br /><br />Wmii as a window manager primarily focuses on organising windows into tags (like multiple desktops) and columns. Within a column windows can either be sized evenly, or a single window can take up the whole height of the column, optionally with the title bars of the other windows visible (think minimised windows on steroids).<br /><br />Wmii is very heavily keyboard driven (which is one of its strengths from my point of view), though a mouse can be used for basic navigation as well. It is also heavily extensible with scripting languages and in fact almost all interactions with the window manager are actually driven by the script. It defaults to using a shell script, but also ships with equivalent python and ruby scripts (the base functionality is the same in each), and is easy to extend.<br /><br />By default keyboard shortcuts provide ways to navigate left and right between columns, up and down between windows within a column, and to switch between 10 numbered tags (more tags are possible, but rarely needed). 
Moving a window is as simple as holding down shift while performing the same key combos used to navigate, and columns and tags are automatically created as needed (moving a window to the right of the rightmost column would create a new column for example), and automatically destroyed when no longer used.<br /><br />Recent versions of wmii also work really well with multiple monitors (though there is still some room for improvement in this area) allowing windows to really easily be moved between monitors with the same shortcuts used to move windows between columns (and the way it differentiates between creating a new column on the right of the left monitor versus moving the window to the right monitor is pure genius).<br /><br />Naturally with such a powerful window manager, I want to use it to manage all my windows and all my shells. The problem with this of course is SSH - specifically, when I have many remote shells open at the same time and what happens when the network goes away. You see, I've been opening a new terminal and SSH connection for each remote shell so I can use wmii to manage them, which works really great until I need to suspend my laptop or unplug it to go to a meeting, then have to spend some time re-establishing each session, getting it back to the right working directory, etc. And I've lost the shell history specific to each terminal.<br /><br />Normally people would start screen on the remote server if they expect their session to go away, and screen can also manage a number of shells simultaneously, which would be great... 
except that it is nowhere near as good at managing those shells as wmii can manage windows and if I'm going to switch it would need to be pretty darn close.<br /><br />I've been aware for some time of an alternative to screen called tmux which seemed to be much more sane and feature-rich than screen, so the other day I decided to see if I could configure tmux to be a realistic option for managing many shells on a remote machine that I could detach and re-attach from when suspending my laptop.<br /><br />Tmux supports multiple sessions, "windows" (like tags in wmii), and "panes" (like windows in wmii). I managed to come up with the below configuration file which sets up a bunch of keybindings similar to the ones I use in wmii (but using the Alt modifier instead of the Windows key) to move windows... err... "panes" and to navigate between them.<br /><br />Unlike wmii, tmux is not focused around columns, which technically gives it more flexibility in how the panes are arranged, but sacrifices some of the precision that the column focus gives wmii (in this regard tmux is more similar to some of the other tiling window managers available).<br /><br />None of these shortcut keys need to have the tmux prefix key pressed first, as that would have defeated the whole point of this exercise:<br /><br /><code>Alt + '</code> - Split window vertically <b>*</b><br /><code>Alt + Shift + '</code> - Split window horizontally<br /><br /><code>Alt + h/j/k/l</code> - Navigate left/down/up/right between panes within a window<br /><code>Alt + Shift + h/j/k/l</code> - Swap window with the one before or after it <b>**</b><br /><br /><code>Alt + Ctrl + h/j/k/l</code> - Resize pane <b>***</b> - NOTE: Since many environments use Ctrl+Alt+L to lock the screen, you may want to change these to use the arrow keys instead.<br /><br /><code>Alt + number</code> - Switch to this tag... err... 
"window" number, creating it if it doesn't already exist.<br /><code>Alt + Shift + number</code> - Send the currently selected pane to this window number, creating it if it doesn't already exist.<br /><br /><code>Alt + d</code> - Tile all panes <b>**</b><br /><code>Alt + s</code> - Make selected pane take up the maximum height and tile other panes off to the side <b>**</b><br /><code>Alt + m</code> - Make selected pane take up the maximum width and tile other panes below <b>**</b><br /><br /><code>Alt + f</code> - Make the current pane take up the full window (actually, break it out into a new window). Reverse with Alt + Shift + number <b>**</b><br /><br /><code>Alt + PageUp</code> - Scroll pane back one page and enter copy mode. Release the alt and keep pressing page up/down to scroll and press enter when done.<br /><br /><small><b>*</b> Win+Enter opens a new terminal in wmii, but Alt+Enter is already used by xterm, so I picked the key next to it</small><br /><br /><small><b>**</b> These don't mirror the corresponding wmii bindings because I could find no exact equivalent, so I tried to make them do something similar and sensible instead.</small><br /><br /><small><b>***</b> By default there is no shortcut key to resize windows in wmii (though the python version of the wmiirc script provides a resize mode which is similar), so I added some to my scripts.</small><br /><br /><br />~/.tmux.conf (<a href="https://raw.github.com/DarkStarSword/junk/master/config/home/.tmux.conf">Download Latest Version Here</a>)<div class="scriptexcerpt"><br /># Split + spawn new shell:<br /># I would have used enter like wmii, but xterm already uses that, so I use the<br /># key next to it.<br />bind-key -n M-"'" split-window -v<br />bind-key -n M-'"' split-window -h<br /><br /># Select panes:<br />bind-key -n M-h select-pane -L<br />bind-key -n M-j select-pane -D<br />bind-key -n M-k select-pane -U<br />bind-key -n M-l select-pane -R<br /><br /># Move panes:<br /># These aren't quite 
what I want, as they *swap* panes *numerically* instead of<br /># *moving* the pane in a specified *direction*, but they will do for now.<br />bind-key -n M-H swap-pane -U<br />bind-key -n M-J swap-pane -D<br />bind-key -n M-K swap-pane -U<br />bind-key -n M-L swap-pane -D<br /><br /># Resize panes (Note: Ctrl+Alt+L conflicts with the lock screen shortcut in<br /># many environments - you may want to consider the below alternative shortcuts<br /># for resizing instead):<br />bind-key -n M-C-h resize-pane -L<br />bind-key -n M-C-j resize-pane -D<br />bind-key -n M-C-k resize-pane -U<br />bind-key -n M-C-l resize-pane -R<br /><br /># Alternative resize panes keys without ctrl+alt+l conflict:<br /># bind-key -n M-C-Left resize-pane -L<br /># bind-key -n M-C-Down resize-pane -D<br /># bind-key -n M-C-Up resize-pane -U<br /># bind-key -n M-C-Right resize-pane -R<br /><br /># Window navigation (Oh, how I would like a for loop right now...):<br />bind-key -n M-0 if-shell "tmux list-windows|grep ^0" "select-window -t 0" "new-window -t 0"<br />bind-key -n M-1 if-shell "tmux list-windows|grep ^1" "select-window -t 1" "new-window -t 1"<br />bind-key -n M-2 if-shell "tmux list-windows|grep ^2" "select-window -t 2" "new-window -t 2"<br />bind-key -n M-3 if-shell "tmux list-windows|grep ^3" "select-window -t 3" "new-window -t 3"<br />bind-key -n M-4 if-shell "tmux list-windows|grep ^4" "select-window -t 4" "new-window -t 4"<br />bind-key -n M-5 if-shell "tmux list-windows|grep ^5" "select-window -t 5" "new-window -t 5"<br />bind-key -n M-6 if-shell "tmux list-windows|grep ^6" "select-window -t 6" "new-window -t 6"<br />bind-key -n M-7 if-shell "tmux list-windows|grep ^7" "select-window -t 7" "new-window -t 7"<br />bind-key -n M-8 if-shell "tmux list-windows|grep ^8" "select-window -t 8" "new-window -t 8"<br />bind-key -n M-9 if-shell "tmux list-windows|grep ^9" "select-window -t 9" "new-window -t 9"<br /><br /># Window moving (the sleep 0.1 here is a hack, anyone know a better 
way?):<br />bind-key -n M-')' if-shell "tmux list-windows|grep ^0" "join-pane -d -t :0" "new-window -d -t 0 'sleep 0.1' \; join-pane -d -t :0"<br />bind-key -n M-'!' if-shell "tmux list-windows|grep ^1" "join-pane -d -t :1" "new-window -d -t 1 'sleep 0.1' \; join-pane -d -t :1"<br />bind-key -n M-'@' if-shell "tmux list-windows|grep ^2" "join-pane -d -t :2" "new-window -d -t 2 'sleep 0.1' \; join-pane -d -t :2"<br />bind-key -n M-'#' if-shell "tmux list-windows|grep ^3" "join-pane -d -t :3" "new-window -d -t 3 'sleep 0.1' \; join-pane -d -t :3"<br />bind-key -n M-'$' if-shell "tmux list-windows|grep ^4" "join-pane -d -t :4" "new-window -d -t 4 'sleep 0.1' \; join-pane -d -t :4"<br />bind-key -n M-'%' if-shell "tmux list-windows|grep ^5" "join-pane -d -t :5" "new-window -d -t 5 'sleep 0.1' \; join-pane -d -t :5"<br />bind-key -n M-'^' if-shell "tmux list-windows|grep ^6" "join-pane -d -t :6" "new-window -d -t 6 'sleep 0.1' \; join-pane -d -t :6"<br />bind-key -n M-'&' if-shell "tmux list-windows|grep ^7" "join-pane -d -t :7" "new-window -d -t 7 'sleep 0.1' \; join-pane -d -t :7"<br />bind-key -n M-'*' if-shell "tmux list-windows|grep ^8" "join-pane -d -t :8" "new-window -d -t 8 'sleep 0.1' \; join-pane -d -t :8"<br />bind-key -n M-'(' if-shell "tmux list-windows|grep ^9" "join-pane -d -t :9" "new-window -d -t 9 'sleep 0.1' \; join-pane -d -t :9"<br /><br /># Set default window number to 1 instead of 0 for easier key combos:<br />set-option -g base-index 1<br /><br /># Pane layouts (these use the same shortcut keys as wmii for similar actions,<br /># but don't really mirror its behaviour):<br />bind-key -n M-d select-layout tiled<br />bind-key -n M-s select-layout main-vertical \; swap-pane -s 0<br />bind-key -n M-m select-layout main-horizontal \; swap-pane -s 0<br /><br /># Make pane full-screen:<br />bind-key -n M-f break-pane<br /># This isn't right, it should go back where it came from:<br /># bind-key -n M-F join-pane -t :0<br /><br /># We can't use 
shift+PageUp, so use Alt+PageUp then release Alt to keep<br /># scrolling:<br />bind-key -n M-PageUp copy-mode -u<br /><br /># Don't interfere with vi keybindings:<br />set-option -s escape-time 0<br /><br /># Enable mouse. Mostly to make selecting text within a pane not also grab pane<br /># borders or text from other panes. Unfortunately, tmux's mouse handling leaves<br /># something to be desired - no double/triple click support to select a<br /># word/line, all mouse buttons are intercepted (middle click = I want to paste<br /># damnit!), no automatic X selection integration(*)...<br />set-window-option -g mode-mouse on<br />set-window-option -g mouse-select-pane on<br />set-window-option -g mouse-resize-pane on<br />set-window-option -g mouse-select-window on<br /><br /># (*) This enables integration with the clipboard via termcap extensions. This<br /># relies on the terminal emulator passing this on to X, so to make this work<br /># you will need to edit your X resources to allow it - details below.<br />set-option -s set-clipboard on</div><br /><br />You may also need to alter your ~/.Xresources file to make some things work (this is for xterm):<br /><br />~/.Xresources (<a href="https://raw.github.com/DarkStarSword/junk/master/config/home/.Xresources">My Personal Version</a>)<div class="scriptexcerpt"><br />/* Make Alt+x shortcuts work in xterm */<br /> XTerm*.metaSendsEscape: true<br />UXTerm*.metaSendsEscape: true<br /><br />/* Allow tmux to set X selections (ie, the clipboard) */<br /> XTerm*.disallowedWindowOps: 20,21,SetXprop<br />UXTerm*.disallowedWindowOps: 20,21,SetXprop<br /><br />/* For some reason, this gets cleared when reloading this file: */<br />*customization: -color</div><br />To reload this file without logging out and back in, run:<br /><code>xrdb ~/.Xresources</code><br /><br />There's a pretty good chance that I'll continue to tweak this, so I'll try to update this post anytime I add something cool.<br /><br /><b>Edit 27/02/2012:</b> 
Added mouse & clipboard integration & covered changes to .Xresources file.
DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-69802548984441557092012-02-17T12:08:00.013+11:002012-02-17T13:29:31.575+11:00SSH passwordless login WITHOUT public keysI was recently in a situation where I needed SSH & rsync over SSH to be able to log into a remote site without prompting for a password (as it was being called from within a script and would have been non-trivial to make the script pass in a password, especially as OpenSSH does not provide a trivial mechanism for scripts to pass in passwords - see below).<br /><br />Normally in this situation one would generate a public / private keypair and use that to log in without a prompt, either by leaving the private key unencrypted (ie, not protected by a passphrase), or by loading the private key into an SSH agent prior to attempting to log in (e.g. with ssh-add).<br /><br />Unfortunately the server in question did not respect my ~/.ssh/authorized_keys file, so public key authentication was not an option (boo).<br /><br /><br />Well, it turns out that you can pre-authenticate SSH sessions such that an already open session is used to authenticate new sessions (actually, new sessions are basically tunnelled over the existing connection).<br /><br />The option in question needs a couple of things set up to work, and it isn't obviously documented as a way to allow passwordless authentication - I had read the man page multiple times and hadn't realised what it could do until Mikey at work pointed it out to me.<br /><br />To get this to work you first need to create (or modify) your ~/.ssh/config as follows:<br /><br /><code>Host *<br /> ControlPath ~/.ssh/master_%h_%p_%r</code><br /><br />Now, manually connect to the host with the -M flag to ssh and enter your password as normal:<br /><br /><code>ssh -M user@host</code><br /><br />Now, as long as you leave that connection open, further normal connections (without the -M 
flag) will use that connection instead of creating their own, and will not require authentication.<br /><br /><br /><b>Edit:</b><br />Note that you may instead edit your ~/.ssh/config as follows to have SSH always create and use Master connections automatically without having to specify -M. However, some people like to manually specify when to use shared connections so that the bandwidth between the low latency interactive sessions and high throughput upload/download sessions doesn't mix, as that can have a huge impact on the interactive session.<br /><br /><code>Host *<br /> ControlPath ~/.ssh/master_%h_%p_%r<br /> ControlMaster auto</code><br /><br /><br /><br /><H4>Alternate method, possibly useful for scripting</H4><br />Another method I was looking at using was specifying a program to return the password in the SSH_ASKPASS environment variable. Unfortunately, this environment variable is only used in some rare circumstances (namely, when no tty is present, such as when a GUI program calls SSH or rsync), and would not normally be used when running SSH from a terminal (or in the script as I was doing).<br /><br />Once I found out about the -M option I stopped pursuing this line of thinking, but it may be useful in a script if the above pre-authentication method is not practical (perhaps for unattended machines).<br /><br />To make SSH respect the SSH_ASKPASS environment variable when running from a terminal, I wrote a small LD_PRELOAD library libnotty.so that intercepts calls to open("/dev/tty") and causes them to fail.<br /><br />If anyone is interested, the code for this is in my junk repository (libnotty.so & notty.sh). 
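<p>To sketch how the pieces fit together (the helper path and password here are of course placeholders):</p>

```shell
#!/bin/sh
# Hypothetical SSH_ASKPASS helper - it just echoes the password, so
# keep the permissions on it locked down.
cat > /tmp/askpass-example.sh <<'EOF'
#!/bin/sh
echo 'correct horse battery staple'
EOF
chmod 700 /tmp/askpass-example.sh

# With libnotty.so hiding /dev/tty, ssh should fall back to the helper
# (ssh also requires DISPLAY to be set before it will use SSH_ASKPASS):
# DISPLAY=:0 SSH_ASKPASS=/tmp/askpass-example.sh LD_PRELOAD=./libnotty.so ssh user@host
```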
You will also need a small script that echoes the password (I hope it goes without saying that you should check the permissions on it) and point the SSH_ASKPASS environment variable to it.<br /><br /><a href="https://github.com/DarkStarSword/junk">https://github.com/DarkStarSword/junk</a>DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-29990363509129692342012-02-17T11:38:00.004+11:002012-02-17T12:00:34.172+11:00Git trick: Deleting non-ancestor tagsToday I cloned the git tree for the pandaboard kernel, only to find that it didn't include the various kernel version tags from upstream, so running things like git describe or git log v3.0.. didn't work.<br /><br />My first thought was to fetch just the tags from an upstream copy of the Linux kernel I had on my local machine:<br /><br /><code>git fetch -t ~/linus</code><br /><br />Unfortunately I hadn't thought that through very well, as that local tree also contained all the tags from the linux-next tree, the tip tree as well as a whole bunch more from various distro trees and several other random ones, which I didn't want cluttering up my copy of the pandaboard kernel tree.<br /><br />This led me to try to find a way to delete all the non-ancestor tags (compared to the current branch) to simplify the tree. This may be useful to others to remove unused objects and make the tree smaller after a git gc -- that didn't factor into my needs as I had specified ~/linus to git clone with --reference so the objects were being shared.<br /><br />Anyway, this is the script I came up with; note that this only compares the tags with the ancestors of the *current HEAD*, so you should be careful that you are on a branch with all the tags you want to keep first. 
Alternatively you could modify this script to collate the ancestor tags of every local/remote branch first, though this is left as an exercise for the reader.<br /><br /><code><br />#!/bin/sh<br /><br />ancestor_tags=$(mktemp)<br />echo -n Looking up ancestor tags...\ <br />git log --simplify-by-decoration --pretty='%H' > $ancestor_tags<br />echo done.<br /><br />for tag in $(git tag --list); do<br /> echo -n "$tag"<br /> commit=$(git show "$tag" | awk '/^commit [0-9a-f]+$/ {print $2}' | head -n 1)<br /> echo -n ...\ <br /> if [ -z "$commit" ]; then<br /> echo has no commit, deleting...<br /> git tag -d "$tag"<br /> continue<br /> fi<br /> if grep $commit $ancestor_tags > /dev/null; then<br /> echo is an ancestor<br /> else<br /> echo is not an ancestor, deleting...<br /> git tag -d "$tag"<br /> fi<br />done<br /><br />rm -fv $ancestor_tags<br /></code><br /><br />Also note that this may still leave unwanted tags in if they are a direct ancestor of the current HEAD - for instance, I found a bunch of tags from the tip tree had remained afterwards, but they were much more manageable to delete with a simple for loop and a pattern.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-32507706550637647022010-11-14T10:47:00.018+11:002010-11-14T17:44:02.173+11:00Bluetooth 3G Modems on Debian Linux: Chatscripts and rfcomm bluezI've been using 3G mobile broadband as my primary Internet connection for a couple of years now, and ever since I moved out of college it has become my only Internet connection at home - It's saved me the cost, delays and headache of dealing with Telstra to sort out some kind of wired link.<br /><br />In my particular setup I removed the 3G data SIM card from the USB modem that came with my plan and placed it in my Nokia N900, which I use as a bluetooth modem for my various computers (my N95 used to fill this role), in addition to the convenience of having the N900 
itself connected wherever and whenever I want.<br /><br />Every now and again I get asked about my setup - a lot of people seem to have had trouble setting up bluetooth modems in Linux. This is understandable - last time I checked out Network Manager I found that it could set up a USB 3G modem pretty easily but had zero provisions to set up a bluetooth modem, and the Linux bluetooth stack (bluez) also leaves something to be desired (try using bluez 4 to pair to something without X... fail). I've previously been directing these people to some posts I made on the CLUG mailing list that had my configuration files, but it's clear that it will be easier to direct people to a blog post.<br /><br />The quickstart guide for those people would be to scan this post, grab the file excerpts and place them where they belong, restart bluetooth and run the <code>pon <profile></code> command to try to bring up the 3G connection. Then when that inevitably doesn't work, read the rest of the article to figure out what you need to change to make it work. I should note that I'm using Debian so some of this article may not apply to other non-Debian-derived distributions (the pon and poff commands came from Debian, for example).<br /><br />Firstly, a little background on the technical details we care about: 3G modems provide <a href="http://en.wikipedia.org/wiki/Point-to-Point_Protocol">PPP</a> (Point-to-Point Protocol) links to your ISP, just like the dial-up modems of old did. We even use the same protocol and method to talk to them that we used to use to talk to dial-up modems - the <a href="http://en.wikipedia.org/wiki/AT_command_set">AT command set</a> over some kind of serial-like interface (itself encapsulated in a USB or bluetooth link).<br /><br />A few things have changed though - for one they are much faster than dial-up modems. Authentication is also handled differently - we no longer (typically) use a username and password, instead handling the authentication in the SIM card. 
And instead of calling a phone number for your local ISP, we instead call a special number (such as *99#) to establish the link. Added to this, we now also have something called an <a href="http://en.wikipedia.org/wiki/Access_Point_Name">APN</a> (Access Point Name) to identify the IP packet data network that we want to communicate with.<br /><br />There are a few important consequences of all of this. Firstly, we are using the same infrastructure (ppp, chatscripts, wvdial, ...) in Linux to connect to 3G that we used to use to connect old dial-up connections. Secondly, despite not requiring a username and password any more, we still have to provide something in their stead to make everything happy even though they are ignored. We also still have the same nonsense of every ISP having a subtle difference in their authentication that affects how we connect to them. There can also be subtle differences in the AT commands we need to communicate with different modems to get them to do what we want.<br /><br />Some people like using wvdial to establish their ppp links. If that works for you that's great, but my experience has been that wvdial fails in many circumstances, and getting it to work in those cases is quite often impossible, so I'm going to cover a much more tunable back-to-basics method: ppp + chatscripts + rfcomm.<br /><br />Firstly, make sure ppp is installed (apt-get install ppp)... I hope you have some other connection than your 3G link to get that... Perhaps whatever you are reading this blog on?<br /><br />We'll start with a USB connection - no sense adding the extra complexities of a bluetooth link to the mix until we have that working. 
I'll show the profiles I use for both the Huawei E220 USB modem that came with the plan and the USB link to my N900 (or N95).<br /><br />We need two configuration files for each profile - the configuration for the ppp side of the link goes under /etc/ppp/peers/<profile> and the chatscript which tells the modem how to establish the ppp link under /etc/chatscripts/<profile>. The chatscript is referenced from the ppp configuration file, so it is possible to use one chatscript for multiple profiles, assuming the profiles are talking to the same modem (or at least that one modem doesn't require special treatment) and using the same APN.<br /><br />The chatscript is responsible for initialising the modem and getting the connection to the point where pppd can take over, so I'll start with that. Here's the chatscript that I use for my Nokia N900 (USB and bluetooth), Nokia N95 and Huawei E220 USB modem:<br /><br />/etc/chatscripts/optus-n900<br /><div class="scriptexcerpt">ABORT BUSY<br />ABORT ERROR<br />ABORT 'NO CARRIER'<br />REPORT CONNECT<br />TIMEOUT 10<br />"" "ATZ"<br />OK "ATE1V1&D2&C1S0=0+IFC=2,2"<br />OK AT+CGDCONT=1,"IP","<APN>"<br />OK "ATE1"<br /><br />OK "ATDT*99#"<br /><br />CONNECT \c</div><br /><span style="font-weight:bold;">IMPORTANT:</span> Replace <APN> with the APN for your connection (for me on Optus post-paid mobile broadband that is "connect", for Lucy on Three pre-paid mobile broadband that is "3services" - refer to the documentation that came with your plan to find out what it is for you). If you don't, you will run into inexplicable problems later.<br /><br />I said above that some modems need to be treated specially in the chatscript. I used to have to use this on my Huawei E220 because I could not find one script that would satisfy both it and my N95 (the AT+IPR line below was necessary for the E220, but caused the N95 to fail), but the differences no longer seem to be necessary (firmware upgrade? Some other change I made and forgot about? 
Phase of the moon? I can't recall), but it might help someone, so here it is:<br /><br />/etc/chatscripts/optus-huawei<br /><div class="scriptexcerpt">ABORT BUSY<br />ABORT ERROR<br />ABORT 'NO CARRIER'<br />REPORT CONNECT<br />TIMEOUT 10<br />"" "ATZ"<br />OK AT+CGDCONT=1,"ip","connect"<br />OK "ATE1V1&D2&C1S0=0+IFC=2,2"<br />OK "AT+IPR=115200"<br /><br />OK "ATE1"<br /><br />TIMEOUT 60<br />"" "ATD*99#"<br /><br />CONNECT \c</div><br />Now we need a profile for ppp that references that chatscript and contains all the settings necessary to establish a successful ppp link. I have a number of these for different profiles, depending on which modem I'm using and whether I'm using my Optus link or Lucy's Three link, but they are all pretty similar and include some common elements, so I'll just show one combined file with comments for differences between them. All these options and more are described in man pppd:<br /><br />/etc/ppp/peers/<profile><br /><div class="scriptexcerpt"># This can help track down problems:<br />#debug<br /><br /># The modem device to talk to:<br />/dev/ttyACM0 # N900/N95 USB<br />#/dev/ttyUSB0 # Huawei USB<br />#/dev/rfcomm0 # N900 Bluetooth<br /><br /># In some cases it may be necessary to specify a baud rate,<br /># but generally it's best to let ppp detect this:<br />#115200<br />#230400<br />#460800<br />#... 
etc<br /><br /># Optus requires both of these options, Three requires neither.<br /># Other ISPs may have different authentication requirements:<br />refuse-chap<br />require-pap<br /><br /># When to detach from the console:<br />updetach<br />#nodetach<br /><br /># These are generally necessary:<br />crtscts<br />noauth<br />noipdefault<br /><br /># If the connection drops out try to reopen it:<br />persist<br /><br /># We want this to be the default internet connection:<br />defaultroute<br />replacedefaultroute<br /><br /># Get DNS settings from the ISP:<br />usepeerdns<br /><br /># not used, but we must provide something:<br />user "na"<br />password "na"<br /><br /># Playing with these compression options *may* improve<br /># performance, but get it working first:<br />noccp<br />nobsdcomp<br />novj<br />#nodeflate<br /><br />#What chatscript we are using in this profile:<br />connect "/usr/sbin/chat -s -S -V -f /etc/chatscripts/optus-n900"<br />#connect "/usr/sbin/chat -s -S -V -f /etc/chatscripts/optus-huawei"</div><br /><br />Got that? Great, let's give it a go! 
Connect your modem by USB, do whatever magic incantations you need to get your modem to reveal its modem aspects to Linux (for Nokia phones this usually means selecting the PC suite mode when you plug it in; some people report having to do strange things with kernel modules and udev to poke their Huawei E220 modems, though I have never found that necessary myself), shut down your network manager and run this in a terminal:<br /><code><br />pon <profile><br /></code><br />All going well hopefully you will see some output like this:<br /><code><br />ATZ<br />OK<br />ATE1V1&D2&C1S0=0+IFC=2,2<br />OK<br />AT+CGDCONT=1,"IP","connect"<br />OK<br />ATE1<br />OK<br />ATDT*99#<br />CONNECTchat: Nov 14 13:12:13 CONNECT<br />Serial connection established.<br />Using interface ppp0<br />Connect: ppp0 <--> /dev/rfcomm0<br />PAP authentication succeeded<br />Cannot determine ethernet address for proxy ARP<br />local IP address www.xxx.yyy.zzz<br />remote IP address 10.6.6.6<br />primary DNS address 211.29.132.12<br />secondary DNS address 61.88.88.88<br /></code><br />Obviously the exact output will vary, but usually if you see some IP and DNS addresses you have successfully connected. Otherwise you really should try to get this working before continuing to the bluetooth part. If you got as far as the CONNECT... "Serial connection established.", your modem and chatscripts are probably working (assuming your APN in the chatscript is correct) and you may need to look at the ppp configuration, though you might just try a few times first - sometimes my connections take a few attempts to come up successfully.<br /><br />If you haven't got as far as the CONNECT you'll need to check your modem, coverage and chatscripts to try to locate the problem. Also double check that you have specified the correct device in the ppp configuration. If you are using a phone as your modem you might try rebooting it. 
If you get a NO CARRIER you are likely out of coverage or your modem couldn't connect to a nearby base station for some other reason (such as it being full), though the symptoms for that are unfortunately not always consistent - failing to connect to the modem at all can also be a symptom of that (and a host of other possible causes) for instance.<br /><br />There are just too many things that can go wrong by this point for me to cover here. Google is your friend. You may be able to find other people's chatscripts and ppp configuration for your modem and/or ISP that you could try.<br /><br />Now that you've successfully got a connection with ppp + chatscripts it's time to add bluetooth into the mix. Serial connections over bluetooth are handled with the <a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Radio_frequency_communication_.28RFCOMM.29">rfcomm</a> protocol. They are controlled with the rfcomm program and once bound show up as /dev/rfcomm0 and similar. A device can have different serial services listening on different rfcomm "channels" (like IP ports), and there is no guarantee for which services appear on which rfcomm channel. My Nokia N95 reveals its modem on rfcomm channel 2 and its GPS on rfcomm channel 5 (via ExtGPS), while my N900 reveals its modem on rfcomm channel 1 (in fact it is actually running <code>rfcomm -S -- listen -1 1 /usr/bin/pnatd {}</code>). 
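If your phone advertises a Dial-Up Networking service you can also just ask it: sdptool (part of bluez) browses a device's service records, and the channel can be pulled out of its output. A hedged sketch - the "Service Name" wording varies between phones, so treat the pattern as a starting point:

```shell
# Extract the RFCOMM channel advertised for Dial-Up Networking from
# `sdptool browse` output. Assumes the usual bluez layout where a
# "Service Name:" line is followed by a "Channel:" line in the same
# service record; adjust the pattern if your phone words it differently.
dun_channel() {
    awk '/Service Name:.*[Dd]ial.?[Uu]p [Nn]etworking/ { found = 1 }
         found && /Channel:/ { print $2; exit }'
}

# Usage (needs a bluetooth adapter and bluez):
#   sdptool browse AA:BB:CC:DD:EE:FF | dun_channel
```

This saves scanning all 30 channels by hand, though not every phone registers its modem in SDP, so rfcomm_scan or trial and error remain the fallback.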
You can use an rfcomm scanner like rfcomm_scan from <a href="http://mulliner.org/">Collin Mulliner</a>'s <a href="http://www.betaversion.net/btdsd/download/">BT Audit</a> suite or do some trial and error to find the channel you need (there's only 30 channels and it's usually a low number).<br /><br />Add a section like the following to your /etc/bluetooth/rfcomm.conf:<br /><br />/etc/bluetooth/rfcomm.conf:<br /><div class="scriptexcerpt">rfcomm0 {<br /> bind yes;<br /> device AA:BB:CC:DD:EE:FF;<br /> channel 1;<br /> comment "N900 Data";<br />} </div><br />Replacing the bluetooth address and channel number as appropriate. Then tell rfcomm to bind rfcomm0 to this device with <code>rfcomm bind 0</code> (this will also happen automatically at boot).<br /><br />You should now see a new file /dev/rfcomm0 which we use to communicate with the modem over bluetooth. You should make a copy of the /etc/ppp/peers/<profile> you were using earlier and change the new profile to use /dev/rfcomm0 so that it connects over bluetooth.<br /><br />Now, we need to pair the devices together and tell the phone to trust the computer to connect whenever it wants. Pairing in bluez is still a bit hairy, particularly if you aren't using KDE or GNOME (like me) which provide their own bluez agents. In that case you don't have many options available to you. Bluez 3 used to have a hack in which you could specify a PIN to pair with under /var/lib/bluetooth/<device>/pincodes to allow pairing without an agent, however that does not work in bluez 4. Bluez provides an example console agent in the examples directory, but I have never managed to get it to work reliably with bluez 3 or bluez 4, so we now need a bluez agent, which, lacking any decent console/curses agents, means we need X (FAIL). 
This nonsense is now true even of HID devices, which could previously be paired and activated with a simple hidd --search; bluez no longer trusts them to re-pair to the computer, so they stop working as soon as they start power saving (FAIL). Sigh, one day I'll get around to writing a decent ncurses bluez agent if no one beats me to it, but I digress.<br /><br />If you aren't using GNOME or KDE you might try using the GTK bluez agent <a href="http://blueman-project.org/">blueman</a> instead. You'll need to have its system tray applet (blueman-applet) running for blueman-manager to work properly (FAIL - I don't have a system tray. At least it doesn't actually need to show the tray icon to work, though if you want that "trayer" or "stalonetray" can be used to provide a temporary system tray).<br /><br />Anyway, once you have some kind of bluez agent running, be it KDE's kbluetooth, gnome-bluetooth or blueman, you can try to pair your phone. I say "try" because even with an agent, pairing with bluez is still hairy. In theory running the pon <profile> command will attempt to open the bluetooth link and initiate pairing, causing both phone and computer to ask for a PIN to authenticate each other - enter the same on each. If you're really lucky they might even remember that they have been paired so you don't have to do it again the next time. If you're unlucky and that didn't work you can try deleting any existing pairing from the computer and phone then using your bluetooth agent's interface to initiate a pairing. Rebooting and walking around your computer in circles while chanting "all hail bluez" over and over may also help - I wish you luck.<br /><br />The good news is that you only need the bluez agent while pairing - once you successfully pair and manage to get the 3G link up (and down and up a second time to make sure it remembered what to do) you usually don't have to touch bluez again and things get a lot easier. 
Unless one of the devices' pairings gets lost or confused... Or your bluetooth address changes, or ...<br /><br />Hopefully by this stage you have successfully managed to pair your computer and phone, and you should be able to use the pon and poff commands to bring the connection up and down as above. Congratulations, you're done! You can stop reading now. If you are getting a "host is down" error you have not successfully paired or the bluetooth link has otherwise failed. Another symptom of (non-pairing) bluetooth related problems that I've seen was getting no OK response after the initial ATZ. If you are pairing OK but only getting partway through the connection sequence you may have to go back to debugging your chatscripts and ppp options like I talked about above.<br /><br /><br />The (broadcom) bluetooth dongle I use on my EeePC introduces another complexity to the process - every time it is plugged in a couple of bits in its bluetooth address change at random for no good reason (check with hciconfig), which as you can imagine makes it rather hard to maintain a pairing between it and anything else. I've also come across some (broadcom) bluetooth dongles with a bluetooth address of 00:00:00:00:00:00. Oddly enough, very few devices like pairing with them, and fewer still will re-pair with them automatically. If you have this problem <strike>tell broadcom they suck</strike> <strike>buy a CSR dongle</strike> you might try the dbaddr utility in the bluez source to force them to use a particular bluetooth address (if they support changing it through software, which of course is no guarantee). 
The script I use on my EeePC to connect shuts down my network manager and any running DHCP client, changes the bluetooth address on the dongle and opens the 3G connection:<br /><br /><div class="scriptexcerpt">/etc/init.d/wicd stop<br />killall dhclient<br />killall dhclient3<br /><br />/usr/local/sbin/dbaddr AA:BB:CC:DD:EE:FF<br />hciconfig hci0 reset<br /><br />pon optus-blue</div>DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-2170071900826930722010-11-09T18:12:00.014+11:002010-11-10T11:52:46.389+11:00Remind+wyrd events in other timezones & other tricksWhen I bought my EeePC I challenged myself to wherever possible find lightweight (console/curses if possible) and keyboard friendly alternatives to the software I had been using. What I discovered was that I quickly began to prefer that way of interacting with the computer to my previous KDE centric setup, so now almost all of my desktops and laptops have the same setup.<br /><br />One application which I sought to replace was a calendar. I discovered a lightweight console calendar program called "<a href="http://www.roaringpenguin.com/products/remind">remind</a>" with an ncurses frontend known as "<a href="http://pessimization.com/software/wyrd/">wyrd</a>":<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_xxiLBp-WqaI/TNnmnMP9RcI/AAAAAAAAA50/XGBqfh2TEw8/s1600/wyrd.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 230px;" src="http://1.bp.blogspot.com/_xxiLBp-WqaI/TNnmnMP9RcI/AAAAAAAAA50/XGBqfh2TEw8/s320/wyrd.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5537710777806177730" /></a><br />A basic event in a file processed by remind might look something like this:<br /><br /><div class="scriptexcerpt">REM Nov 09 2010 AT 18:00 MSG Write a blog entry</div><br />That should be reasonably self explanatory. 
You can also specify some quite advanced recurring events in fairly natural ways:<br /><br /><div class="scriptexcerpt">REM Mon Tue Wed Thu Fri AT 9:00 MSG Go to work<br /><br />REM Dec 25 MSG Christmas!</div><br />Or to specify the fourth Thursday of every month (technically the next Thursday on or after the 22nd of any month):<br /><br /><div class="scriptexcerpt">REM Thursday 22 AT 19:00 DURATION 3:00 MSG Canberra Linux Users Group Meeting</div><br />There are also syntaxes for advanced reminders (+) and repetition (*) - but this isn't a full remind tutorial, read the man pages or search google (tip: add wyrd in your search to narrow the results down).<br /><br />You may have noticed that I never specified a timezone in those examples. Unfortunately remind was written a long time ago on a hermit-like platform that knew nothing of how time worked elsewhere in the world (DOS) and as a result doesn't have any support for events in other timezones built in. Just defining the event in local time may not be suitable depending on what both timezones do with daylight savings.<br /><br />But there is another thing you should know about remind - it's not just a calendar domain specific language (though as you can see from those examples it certainly includes plenty of DSL constructs), it is in fact a calendar oriented programming language and we can use that to work around this limitation.<br /><br />Seriously, let me say that one more time. My calendar is specified in a programming language. That is awesome. I can specify events to only occur once every blue moon---for real. I could shell out and have reminders only occur if my IP address indicates I'm at the office. 
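The office-IP idea is less far-fetched than it sounds - remind has a shell() built-in that captures a command's output. A hedged sketch; the network test and the 10.1. prefix are entirely hypothetical placeholders, and you'd want to sanity check shell()'s exact return value on your version:

```
# Hypothetical: only fire the stand-up reminder when the default route
# looks like the office network. shell() runs a command and returns its
# output as a string; the ip/grep test here is a made-up example.
IF shell("ip route get 8.8.8.8 2>/dev/null | grep -c 'src 10\\.1\\.'") != "0"
    REM Mon Tue Wed Thu Fri AT 9:30 MSG Morning stand-up
ENDIF
```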
Seriously, it could remind me to catch the bus only if I haven't already done so (note to self: make it do that, that would be cool).<br /><br />Specifying a one off event in another timezone isn't in itself terribly difficult:<br /><br /><div class="scriptexcerpt">REM [trigger(tzconvert('2010-09-11@18:20', "US/Pacific"))] +30 DURATION 1:00 MSG Look up</div><br />The problem with this method is that there is no way to specify advanced recurrence. tzconvert takes a datetime and returns a datetime. There's no way to say "every monday in that timezone" or "every fortnight commencing on x in that timezone" or "on the last Sunday of October every year in that timezone", which remind has no trouble doing for local events.<br /><br />Remind's programming language capability is unfortunately somewhat limited - mixing the DSL grammar and functions together is a bit kludgey. It's easy to cast the output of a function to a string and use it in the grammar (as above), but going the other way is a little more difficult. For instance, variables are set using the SET command, but if there is any way to set a variable from a function it has escaped me. Functional programming techniques may be usable to work around this, but I get the impression that remind's author didn't exactly design it with that in mind - for one thing recursive calls are explicitly disallowed.<br /><br />But, we can INCLUDE another file, which will then be executed by remind (even if it's included multiple times) and will be able to use the DSL commands and have access to any variables already defined, so we can use that mechanism to create a function that will do what we want. 
After a bit of playing around today I finally settled on this:<br /><br /><div class="scriptexcerpt"># USAGE:<br /># SET these variables then INCLUDE this script:<br />#<br /># tz_src - the timezone the event is in<br /># tz_src_date - the date component of the event as would be passed to REM,<br /># including any repetition and reminders<br /># tz_src_time - the time component of the event in hh:mm form<br /># tz_src_trem - any time repetition, reminders, DURATION, etc. as passed into<br /># REM (if not desired, set to "")<br /># tz_msg - The message to print.<br />#<br /># Afterwards tz_dst_time will be set for *today's* occurrence of the event in<br /># localtime, or unset if no event occurs.<br /><br /><br /># Find next date in src timezone that occurs today() in localtime:<br />REM [tz_src_date] SCANFROM [trigger(today()-2)] UNTIL [trigger(today()+2)] SATISFY \<br /> coerce("DATE", tzconvert(datetime(trigdate(), tz_src_time), tz_src)) == today()<br />IF trigvalid()<br /> # We know local date is today from SATISFY, convert time to local:<br /> SET __dst_dt tzconvert(datetime(trigdate(), tz_src_time), tz_src)<br /> SET tz_dst_time coerce("TIME", __dst_dt)<br /><br /> REM [trigger(today())] AT [tz_dst_time] [tz_src_trem] MSG [tz_msg]<br />ELSE<br /> UNSET tz_dst_time<br />ENDIF</div><br />That searches for a date the event occurs on the other timezone that satisfies the condition that the event occurs today() in the local timezone (today() is not necessarily the actual system date, it could be a specific date being looked up or the date of a calendar entry being computed). The source date can be specified with any of the usual remind recurrence constructs, just like an ordinary event. I've noticed some parse errors using this with a one off event on days the event does not occur - I think it might be a bug in remind for non-recurring events with a SATISFY clause that returns 0, but if someone can see something I've done wrong there I'd welcome the feedback. 
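For concreteness, a hedged example of invoking that include for a hypothetical weekly event (the path and event details are made up; the variables are the ones documented in the header above):

```
# A meeting every Monday at 6pm US/Pacific, shown in local time:
SET tz_src "US/Pacific"
SET tz_src_date "Mon"
SET tz_src_time "18:00"
SET tz_src_trem "DURATION 1:00"
SET tz_msg "Weekly call (6pm Pacific)"
INCLUDE /home/user/.remind/tzevent.rem
```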
Anyway, for one off events you can just use the more concise syntax above; I've tried a few different forms of recurring events and haven't yet seen it with any of them.<br /><br /><br />The title of this post says "and other tricks", so I should probably show you some. I have a weekly meeting whose time varies depending on daylight savings (to better accommodate people elsewhere in the world who call in), so I've come up with this trick that checks whether each Friday is in (local) daylight savings time (try doing this in iCal!):<br /><br /><div class="scriptexcerpt">REM Fri SATISFY 1<br />IF isdst(trigdate())<br /> REM [trigger(trigdate())] +2 SKIP AT 09:30 DURATION 0:30 Some meeting<br />ELSE<br /> REM [trigger(trigdate())] +2 SKIP AT 08:30 DURATION 0:30 Some meeting<br />ENDIF</div><br /><br /><br />Finally, for anyone in Canberra, here is a list of public holidays you can import into your remind file. These should take care of any of the floating public holidays as well, and you can use the SKIP keyword to have events automatically be cancelled if they fall on a public holiday, or the BEFORE or AFTER keywords to move them to another day. 
The only thing these can't predict is any meddling from the Government:<br /><br /><div class="scriptexcerpt"># Public Holidays<br />FSET next_monday(x) x + (7-wkdaynum(x-1))<br />FSET next_monday_inc(x) x + (7-wkdaynum(x-1))%7<br />FSET weekend(x) wkdaynum(x) == 0 || wkdaynum(x) == 6<br /><br />OMIT Jan 1 SPECIAL COLOR 255 255 255 New Year's Day<br />REM Jan 1 SCANFROM [trigger(today()-7)] SATISFY weekend(trigdate())<br />OMIT [trigger(next_monday_inc(trigdate()))] SPECIAL COLOR 255 255 255 New Year's Day Holiday<br />OMIT Jan 26 SPECIAL COLOR 255 255 255 Australia Day<br />REM Jan 26 SCANFROM [trigger(today()-7)] SATISFY weekend(trigdate())<br />OMIT [trigger(next_monday_inc(trigdate()))] SPECIAL COLOR 255 255 255 Australia Day Holiday<br />REM Mon Mar 8 SCANFROM [trigger(today()-7)] SATISFY 1<br />OMIT [trigger(trigdate())] SPECIAL COLOR 255 255 255 Canberra Day<br />SET easter EASTERDATE(YEAR(TODAY()))<br />OMIT [TRIGGER(easter-2)] SPECIAL COLOR 255 255 255 Good Friday<br />REM [TRIGGER(easter-1)] SPECIAL COLOR 255 255 255 Easter Saturday<br />REM [TRIGGER(easter)] SPECIAL COLOR 255 255 255 Easter Sunday<br />OMIT [TRIGGER(easter+1)] SPECIAL COLOR 255 255 255 Easter Monday<br />OMIT Apr 25 SPECIAL COLOR 255 255 255 Anzac Day<br />REM Apr 25 SCANFROM [trigger(today()-7)] SATISFY weekend(trigdate())<br />OMIT [trigger(next_monday_inc(trigdate()))] SPECIAL COLOR 255 255 255 Anzac Day Holiday<br />REM Mon Jun 8 SCANFROM [trigger(today()-7)] SATISFY 1<br />OMIT [trigger(trigdate())] SPECIAL COLOR 255 255 255 Queen's Birthday<br />REM Mon Oct SCANFROM [trigger(today()-7)] SATISFY 1<br />OMIT [trigger(trigdate())] SPECIAL COLOR 255 255 255 Labour Day<br />OMIT 25 Dec SPECIAL COLOR 255 255 255 Christmas<br />OMIT 26 Dec SPECIAL COLOR 255 255 255 Boxing Day<br />REM 25 Dec SCANFROM [trigger(today()-7)] SATISFY weekend(trigdate())<br />IF trigvalid()<br /> OMIT [trigger(next_monday_inc(trigdate()) )] SPECIAL COLOR 255 255 255 Christmas Holiday<br /> OMIT 
[trigger(next_monday_inc(trigdate())+1)] SPECIAL COLOR 255 255 255 Boxing Day Holiday<br />ENDIF<br />REM 26 Dec SCANFROM [trigger(today()-7)] SATISFY wkdaynum(trigdate()) == 6<br />OMIT [trigger(next_monday_inc(trigdate()))] SPECIAL COLOR 255 255 255 Boxing Day Holiday</div>DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-30906488731994356912010-07-14T11:43:00.015+10:002010-07-14T14:15:38.226+10:00Fun with Foreign Debian BootstrappingYesterday I found myself booting Linux on a device with no attached permanent storage - all I had was several gigabytes of RAM and the ability to netboot it through TFTP. I had been using a very minimal root filesystem inside the kernel image, but I began to wonder if it would be possible to have an entire Debian installation in the ramdisk instead - the box certainly had enough RAM to fit a minimal installation.<br /><br />Ordinarily one could just use debootstrap to set up a minimal Debian installation inside a directory and make a ramdisk from that, but this was further complicated by the fact that this was a PowerPC device. Debootstrap does have a --foreign option to perform the first part of the installation on a different architecture, but the --second-stage still needs to be run as root on native hardware and assumes that it is being run from within an existing Linux installation with a bunch of standard tools available to it.<br /><br />The only machines I had root on were all x86 (other than the device in question, but the ramdisk I had been using had some limitations that would have complicated matters) and some other test boxes (which I would have had to wait to requisition). 
So instead I decided to do a partial debootstrap on my local x86 box and complete the installation using only my local x86 box and that partial image on the PowerPC box.<br /><br />If you are following this article as a guide I should note that it assumes you are able to compile and boot your own kernel and have a decent familiarity with Linux in general.<br /><br />So first, begin the debootstrap process, but use --foreign to only perform the first part of the bootstrapping process (NOTE: almost everything here needs to be run as root, signified by the # at the start of each line):<br /><code><br /># mkdir deb-ppc<br /># debootstrap --arch=powerpc --foreign squeeze deb-ppc http://<mirror>/debian<br /></code><br />After this command completes you have an incomplete Debian installation in deb-ppc - some basic tools are installed (but not configured) and some packages have been downloaded but not installed. I did not select any additional packages into the initial root disk at this stage, though had I been thinking ahead it would have been useful to also include openssh-server and rsync, but that was not a major setback for me. You might want to include them, and if you don't like vi or nano you might also want to install your console editor of choice. At the moment the root disk is not bootable, so let's fix that:<br /><code><br /># ln -s /bin/bash deb-ppc/init<br /></code><br />This still won't boot into a full Debian installation - after the kernel finishes its initialisation and tries to spawn the init userspace process to take over booting, it will instead spawn an interactive shell which can be used to complete the bootstrapping process. Since I'm bundling this inside the kernel image as an initramfs as opposed to an initrd loaded separately, I link an interactive shell into /init. 
If you were doing this with an initrd you would instead link it to /linuxrc.<br /><br />Before we can make a ramdisk image from that directory we need to save this script as mkinitramfs.sh from Documentation/filesystems/ramfs-rootfs-initramfs.txt in the kernel sources:<br /><div class="scriptexcerpt">#!/bin/sh<br /><br /># Copyright 2006 Rob Landley <rob@landley.net> and TimeSys Corporation.<br /># Licensed under GPL version 2<br /><br />if [ $# -ne 2 ]<br />then<br /> echo "usage: mkinitramfs directory imagename.cpio.gz"<br /> exit 1<br />fi<br /><br />if [ -d "$1" ]<br />then<br /> echo "creating $2 from $1"<br /> (cd "$1"; find . | cpio -o -H newc | gzip) > "$2"<br />else<br /> echo "First argument must be a directory"<br /> exit 1<br />fi<br /></div><br />NOTE: when using this script be sure you are calling this script and not a separate program also named mkinitramfs from your distribution.<br /><br />Let's bundle the root disk into a cpio image:<br /><code><br /># ./mkinitramfs.sh deb-ppc ramdisk.cpio.gz<br /></code><br />Now you need to compile the kernel and netboot it - I'll leave the details of how to actually do that out of this article - there's plenty of good resources for that around already and the netboot procedure may vary depending on your setup (if you are netbooting at all). If you are doing this with an initramfs like I am you will need to point CONFIG_INITRAMFS_SOURCE to that image - once you have configured the kernel edit the .config file and remove the 'CONFIG_INITRAMFS_SOURCE=""' line. Then run make oldconfig which will ask you to set that option as well as some UID and GID mapping (which you can leave as 0 since the image should already have the correct ownership). After that you can run make and wait for the kernel to build. I'll also assume you know which zImage is the correct one to boot on your hardware.<br /><br />Once you have successfully booted the kernel you should find yourself at a bash prompt. 
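If you don't get a prompt, one thing worth ruling out before building another kernel is a malformed or incomplete image - you can list the archive members without unpacking it. A runnable sketch, using a throwaway directory in place of deb-ppc:

```shell
# Pack a miniature root directory the same way mkinitramfs.sh does,
# then list the archive contents with `cpio -t`. Substitute your real
# deb-ppc directory and ramdisk.cpio.gz to audit the actual image.
tmp=$(mktemp -d)
mkdir -p "$tmp/root/bin"
printf '#!/bin/bash\n' > "$tmp/root/init"
(cd "$tmp/root"; find . | cpio -o -H newc | gzip) > "$tmp/ramdisk.cpio.gz"
# Lists ., ./bin and ./init (order may vary):
zcat "$tmp/ramdisk.cpio.gz" | cpio -t
rm -rf "$tmp"
```

In particular, check that /init appears at the top level - if it's missing or nested under an extra directory the kernel will panic instead of handing you a shell.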
You should be aware that the environment is extremely limited at this point - for one thing there is no job control so don't try to spawn a process that you need to ctrl+c out of (I made the mistake of pinging a host to check that the network was up).<br /><br />The debootstrap --second-stage did not work for me, so instead I completed the installation manually:<br /><code><br /># export PATH=/usr/sbin:/usr/bin:/sbin:/bin<br /># dpkg --force-depends --install /var/cache/apt/archives/*.deb<br /></code><br />A few things may complain during that and you may need to tell apt to fix up any problems:<br /><code><br /># apt-get -f install<br /></code><br />Now you will have a much more complete userspace - including vi. There's a few more things we need to do to get the system usable. Firstly, let's edit /etc/fstab and add an entry for /proc since so much userspace depends on it:<br /><code><br /># vi /etc/fstab</code><br /><div class="scriptexcerpt">proc /proc proc defaults 0 0</div><br /><code># mount /proc<br /></code><br />Now we should probably get networking set up (I'm assuming you are using DHCP and your interface is eth0):<br /><code><br /># vi /etc/network/interfaces</code><br /><div class="scriptexcerpt">auto lo<br />iface lo inet loopback<br /><br />auto eth0<br />iface eth0 inet dhcp</div><br /><code># vi /etc/hostname<br /># ifup lo<br /># ifup eth0<br /></code><br />Do not make the mistake I made of checking if the interface is up by pinging something. 
You can run ifconfig to make sure your IP address looks right.<br /><br />And set up apt (note the /debian suffix in sources.list which isn't in the template provided by debootstrap - I spent around 10 minutes contemplating the 403 I was getting before I noticed that):<br /><code><br /># vi /etc/apt/sources.list</code><br /><div class="scriptexcerpt">deb http://<mirror>/debian squeeze main</div><br /><code># vi /etc/apt/apt.conf.d/10local</code><br /><div class="scriptexcerpt">APT::Install-Recommends "0";<br />APT::Install-Suggests "0";</div><br /><code># apt-get update<br /></code><br />Now you can install any additional packages you may need (if you didn't do this in the initial debootstrap), so let's install what we need to be able to copy our changes out of the machine (interactive SSH won't work just yet, but file copying will):<br /><code><br /># apt-get install openssh-server rsync<br /># passwd<br /></code><br />Note that if you are interacting with the machine via serial it may be a bit awkward to interact with the configuration for some packages (such as localepurge) so just install the bare essentials for the moment. 
After installing some packages it's probably a good idea to clean the apt cache since we are likely pretty tight on RAM:<br /><code><br /># apt-get clean<br /></code><br />Speaking of serial, if you are logging into the machine via serial (as I was) you may want to spawn a console on the serial line:<br /><code><br /># vi /etc/inittab</code><br /><div class="scriptexcerpt">T0:2345:respawn:/sbin/getty -L ttyS0 57600 vt100</div><br />Back on the x86 box we can now copy all those changes back into the ramdisk and make it actually boot Debian:<br /><code><br /># rsync -avx <host>:/ deb-ppc</code> (NOTE: the x is important, otherwise /proc will be copied as well)<code><br /># rm deb-ppc/init<br /># ln -s /sbin/init deb-ppc/init<br /># ./mkinitramfs.sh deb-ppc ramdisk.cpio.gz</code> (again, the mkinitramfs from the kernel doc, not a distro)<br /><br />Again, compile the kernel and boot it. You will need to do this last part every time you make a change in the ramdisk that you want to make persistent.<br /><br />Once booted you will be able to interactively SSH into it and will find you now have a complete Debian installation you can do whatever you like with within the constraints of the available RAM. With full SSH, job control and proper TTY management you can now perform some changes that would have been a little tricky earlier, such as reconfiguring any packages you couldn't configure properly earlier (tzdata for me) and stripping out unneeded locales (this messed up a little for me since locales wasn't installed before localepurge. 
I haven't tested this and it's probably longer than it needs to be, but I think it will work):<br /><code><br /># apt-get install locales<br /># locale-gen en_AU.UTF-8<br /># dpkg-reconfigure locales<br /># apt-get install localepurge<br /># localepurge<br /># apt-get clean<br /></code><br />You might also want to strip out some unneeded packages, for example with:<br /><code><br /># apt-get purge logrotate mac-fdisk rsyslog yaboot info install-info man-db manpages nano<br /></code><br />Remember to follow the above instructions to make those changes persistent if you are happy with them. Later I'll probably play around with docpurge (from maemo) and look at other ways of reducing the size of the image (disabling logging is probably a good place to start).<br /><br />If you're after some further reading on booting the kernel with initial ramdisks, check out Documentation/early-userspace/README and Documentation/filesystems/ramfs-rootfs-initramfs.txt in the kernel source.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-48329367995992757242009-02-04T02:08:00.017+11:002009-02-11T14:57:03.278+11:00Of Rips and Magical Musical DVDsIf there is one thing that irks me almost as much as mistagged mp3s, it's poorly encoded videos. Why is it that AVI is still so popular when Matroska containers are superior in every way? Why is MP3 still being used for the audio when any device that can play the video certainly will have enough grunt to play OGG Vorbis (codec support notwithstanding)? God forbid something encoded in ... [gasp] MPEG2 - H.264 people, H.264 (patent law notwithstanding)! Even mplayer on my Eee 701SD (running Debian Lenny) can handle all that without missing a frame!<br />This rant comes about as a result of me trying to buy a certain music DVD for about 5 months now. 
I've had it on backorder for months, I've tried JB HiFi and some other local shops and, not really trusting eBay for these kinds of purchases, I eventually decided to just use my left over internet quota for the month and download the thing.<br />On a side note, record companies - if you want to make money, why do you make it so difficult to buy things from you? "On backorder, you will receive an email when the item is back in stock", "Unfortunately xxxxx are sold out. Would you like to...", "still out of stock, there's some sorta licensing issue which is taking forever to resolve." - these are just some of the quotes I've heard as a consumer over the last year. Then there was that incident with those CDs stuck in US customs for those two months without anyone knowing where they were and costing more money as replacements were sent, then more money as they were returned after customs finally released them (So, the <span style="font-style:italic;">only</span> thing that Free Trade Agreement did was stuff up our legal system then?)... Yeah, I'm a little off buying CDs over the Internet by now, but of course the alternative of buying in a store is quite difficult given that the bands practically have to have achieved worldwide fame to have a snowball's chance in hell of actually being in stock (ok, I am exaggerating that a <span style="font-style:italic;">little</span>).<br />Now, since I live in Australia and have limits on how much I can download in a month, I strongly prefer not downloading any files larger than ~700mb - and why should I? <span style="font-style:italic;">All</span> the music DVDs I've ripped myself sound (and to a lesser extent, look) superb at that size, surely it couldn't be <span style="font-style:italic;">that much</span> worse than mine, right? Wrong.<br />Now, ask yourself this - if you were ripping a <span style="font-weight:bold;">music</span> DVD, you would make sure that you set a decent bitrate on the audio track wouldn't you? 
I certainly would - at least 192kbps, perhaps even as high as 256kbps for those of you with ultra sensitive hearing. Well, let's just say that this particular download was a tad less than that and not go into the details too much. I won't even mention just how excruciatingly painful it was to try to listen to.<br />Now, it's not <span style="font-style:italic;">that</span> hard to do a decent encoding, but it is important to have a reasonable understanding of what's actually involved in the process. It is important to know your source media - is it interlaced? Does it need to be cropped? Is there a subtitle track that you should rip as well? Is there just the one audio track; which one is the right one? Does the aspect ratio need to be fixed?<br />Many of those answers will vary from situation to situation and from DVD to DVD, so there isn't a one-size-perfectly-fits-all solution. Of course there are graphical tools to do this for you and some of them are no doubt pretty good, though they do not remove the need to have at least a basic understanding of what is actually happening if you want good results. I'm not going to cover any graphical tool though; I learned how to do this on the command line years ago and have stuck with that, merely expanding my knowledge as new codecs and options came out.
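A good first step in answering those know-your-media questions is to interrogate the disc with mplayer's -identify output, which lists every audio and subtitle track. A minimal sketch of pulling the interesting IDs out of that output - the sample ID_ lines below are illustrative, not from any particular disc; on a real DVD you'd capture them with something like <code>mplayer -identify dvd://1 -vo null -ao null -frames 0</code>:

```shell
# Sample -identify style output (illustrative values, not from a real disc):
identify='ID_AUDIO_ID=128
ID_AID_128_LANG=en
ID_AUDIO_ID=129
ID_AID_129_LANG=ja
ID_SUBTITLE_ID=0
ID_SID_0_LANG=en
ID_LENGTH=3125.40'

# List the audio track IDs (candidates for mplayer's -aid option):
echo "$identify" | awk -F= '/^ID_AUDIO_ID/ {print "audio track: " $2}'

# List the subtitle track IDs (candidates for -sid):
echo "$identify" | awk -F= '/^ID_SUBTITLE_ID/ {print "subtitle track: " $2}'
```

The same grep-for-ID_-lines approach drives the length detection in the script below.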
This shell script (which I know needs work - patches welcome) is my current best practice for ripping DVDs for my personal use.<br />It does make a few assumptions - that the DVD is interlaced and that you want it de-interlaced (because you will be playing it on a computer monitor as opposed to a TV), that there is no subtitle track that you want to extract (if you do, add "-sid n" without the quotes, where n is the subtitle track you want, usually 0, to the end of each line starting with mencoder, though also note that there are "better" ways to do this), that this is a music DVD and not a movie (I recommend lowering the audiobitrate to 128 if it is a movie), that you only want the one default audio track (if not, specify it with mplayer's -aid option and find the appropriate ID with mplayer's -identify option), and that it doesn't need to be cropped (too error prone to automate - look at the -vf cropdetect and -vf crop options in mplayer if you need it).<br />You will need a few dependencies: the Matroska tools, Vorbis tools and x264 libraries. You will also need to make sure that you have mplayer AND mencoder built with x264 support and able to play your DVD. This probably means you will need to compile them from source, which is outside the scope of this article on account of me needing to sleep soon. Also note that depending on your location you may find that you have legal issues regarding the patents surrounding the H.264 codec.
Not to mention that you may live in a country where you cannot legally format shift or where breaking Technological Protection Measures (such as encrypted DVDs) is plain illegal - I leave it to the reader to verify that they can legally do these things or go away and complain loudly to their Government if they can't, just don't go and drag me into it all, I'm just not in the mood.<br /><br />So, if you've kept reading instead of going to complain to someone in authority then I guess that you are bearing the responsibility and want to know how to actually use this.<br />Save it as something like rip.sh and use it like<br /><code>./rip.sh filename track</code><br />where filename is the base filename you will end up with and track is the DVD track number to extract - if you leave the track blank then it will rip whatever would have played with mplayer dvd://<br /><code><br />#!/bin/bash<br /><br />targetfilesize=$(( 700 * 1024 * 1024 ))<br />audiobitrate=256<br /><br />file=$1<br />dvddump="dvd://$2"<br />rawaudio="$file-rawaudio.wav"<br />compressedaudio="$file-compressedaudio.ogg"<br />pass1out="$file-pass1.avi"<br />pass2out="$file-pass2.avi"<br />finalcut="$file.mkv"<br /><br />#extract audio<br />mplayer "$dvddump" -vc null -vo null -ao pcm:file="$rawaudio":fast </dev/null<br /><br />#compress audio<br />oggenc "$rawaudio" -b $audiobitrate -o "$compressedaudio"<br />rm "$rawaudio"<br /><br />#Sometimes the length of the video is misreported, so use the length of the audio track instead since it was just encoded and is therefore more likely to be accurate:<br />#NOTE: There is a rare situation where the audio track is really not the same length as the video track - if that is the case you will need to alter this section appropriately<br />videolength=`echo \`mplayer -identify "$dvddump" -vo null -ao null -frames 0 2>/dev/null |awk -F= '/ID_LENGTH/ {print $2}'\` / 1 + 1 | bc`<br />audiolength=`echo \`mplayer -identify "$compressedaudio" -vo null -ao null -frames 0 
2>/dev/null |awk -F= '/ID_LENGTH/ {print $2}'\` / 1 + 1 | bc`<br />echo videolength: $videolength<br />echo audiolength: $audiolength<br />length=$audiolength<br />echo length: $length<br /><br />#calculate video bitrate<br />videotargetsize=$(( $targetfilesize - `du -b "$compressedaudio" | awk '{print $1}'` ))<br />videobitrate=`echo "$videotargetsize * 8 / $length / 1000" | bc`<br />echo video bitrate: $videobitrate<br /><br />#video pass 1<br />rm -f divx2pass.log<br />mencoder "$dvddump" -vf kerndeint,scale -ovc x264 -oac lavc -lavcopts abitrate=64 -x264encopts bitrate=$videobitrate:threads=auto:pass=1:turbo=1 -o "$pass1out"<br /><br />#video pass 2<br />mencoder "$dvddump" -vf kerndeint,scale -ovc x264 -oac lavc -lavcopts abitrate=64 -x264encopts bitrate=$videobitrate:threads=auto:pass=2 -o "$pass2out"<br /><br />#compile<br />mkvmerge -o "$finalcut" -A "$pass2out" "$compressedaudio"<br /></code><br />If anyone does want to submit patches for this, the main features I've been intending to implement are more flexible command line usage, <span style="text-decoration:line-through;">a better way to extract the audio (that doesn't have the same risk of pressing left/right yet still produces perfectly synced audio),</span> getting all the subtitle tracks embedded into the mkv file and converting the DVD chapters into a format that can be embedded into the mkv.<br /><span style="font-weight:bold;">Update:</span> I can't believe that I didn't think of this earlier - simply redirecting stdin from /dev/null solves the keyboard input issue when dumping the audio with mplayer.<br /><br />As for me, well, I guess I'll just eBay it after all, hoping it's not a bootleg, and go to sleep.<br /><br /><span style="font-weight:bold;">Update:</span> I'm just going to go over an issue I mentioned in this post - how to deal with media that needs its aspect ratio corrected.
The symptoms of this are generally that while you are watching a video everything just feels slightly distorted - in many cases this will be your imagination playing tricks on you, but if you are fairly certain that it isn't, read on. I'm going to use the music video for "Stick Together" which was on the bonus DVD from the album "Rock Music" by "The Superjesus" as an example. Every time I watched this it looked distorted, so today I paused it at this frame and used the GIMP to take a screenshot:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_xxiLBp-WqaI/SZI7uni_CDI/AAAAAAAAASw/s121dW4pbsM/s1600-h/aspect.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 252px;" src="http://2.bp.blogspot.com/_xxiLBp-WqaI/SZI7uni_CDI/AAAAAAAAASw/s121dW4pbsM/s320/aspect.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5301365383444236338" /></a><br />Now, the reason I took the screenshot here is that there is a fairly large (easier to measure) drum (circular object) reasonably close to the centre of the screen (not too heavily distorted by the camera's lens) and facing the camera almost perfectly straight on (avoids perspective distortion). Using the measure tool in the GIMP I find that the drum is approximately 170 pixels wide but about 184 pixels high - clearly the aspect ratio is way out and in this case it wasn't just my imagination (phew).<br />You will also notice the large black bars above and below the image - these need to be cropped.
And here's another reason I chose this video - take a look at this screenshot:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_xxiLBp-WqaI/SZI92sOy2yI/AAAAAAAAAS4/VRoukFip1f0/s1600-h/sticktogether.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 252px;" src="http://1.bp.blogspot.com/_xxiLBp-WqaI/SZI92sOy2yI/AAAAAAAAAS4/VRoukFip1f0/s320/sticktogether.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5301367721163938594" /></a><br />Notice where the black bars are and how large they are this time? This is exactly why it's important to know your source media. If I simply run mplayer -vf cropdetect over this it's going to change its mind 3 times during playback - within the first second as that fades in it changes from crop=560:496:80:38 to crop=576:496:72:38. Then when the widescreen video starts it decides on crop=688:496:18:38. None of these are correct - the first two would cut off the left and right of the video and the last one will still leave small black bars at the top and bottom. This is one of the reasons why I mentioned that automating cropping is just too error prone. So, what's the solution? Tell mplayer to start playback after the intro artwork is gone of course!
If, hypothetically, you wanted to modify my above script to attempt to detect how to crop it, I would suggest adding a line something like this (and using the crop variable in the appropriate place in the video filter chain - I cover this later):<br /><code><br />crop=`mplayer "$dvddump" -vf cropdetect -vo null -ao null -fps 1000 -ss 60 -endpos 5|grep CROP|tail -n 1|sed 's/^.*(-vf //'|sed 's/).*$//'`<br /></code><br />This starts the playback one minute in, quickly runs the video for 5 seconds and gives me a crop parameter of crop=688:432:18:72 - checking this with mplayer -vf crop=688:432:18:72 video.vob looks about right so it's time to move back to the problem of the aspect ratio (you could also crop the video after changing the aspect ratio - just remember to keep your video filters, <span style="font-style:italic;">including cropdetect</span>, in the same order that you are working with).<br />So, let's see - I have a width of 688 pixels with the drum 170 pixels wide, and a height of 432 pixels with the drum 184 pixels high. Personally, I want to keep the width as is and scale the height to adjust the aspect. So, the current aspect ratio is about 1.6 (688/432) and I probably want about 1.7 (scaling the height to 432*170/184 ≈ 399 gives 688/399 ≈ 1.72) - plugging this value into mplayer still doesn't look quite right, but I know that this is close to the standard 16:9 (1.<span style="text-decoration: overline;">7</span>) aspect and a little more eyeballing tells me that's probably a bit closer. What I'm trying to get at here is that despite your best measuring efforts, it's quite difficult to get this exact and you eventually will need to just eyeball it and see if it looks good enough.<br />So, all together now the filter chain will look something like this:<br /><code><br />mplayer -vf kerndeint,crop=688:432:18:72,dsize=16:9,scale=-1:-2<br /></code><br />Breaking that down:<br />1. 
Deinterlace the video before any other processing (the absolute last thing you would ever want to do is scale first and then try to deinterlace, unless of course you like to make your eyes bleed).<br />2. Crop the black bars away (again, if you altered the aspect ratio before cropping the video this would be at the end of the chain).<br />3. dsize is used to change the <span style="font-style:italic;">intended</span> aspect ratio used by all the following video filters (but doesn't change the aspect ratio itself).<br />4. Actually change the aspect ratio: a width of -1 tells it to use the original width (688 pixels), and a height of -2 tells it to scale the height using the other dimension and the intended aspect ratio.<br />Ok, I have lied a little - my source media is actually not interlaced in this case, so I did not use the kerndeint filter, but I wanted to drive home the point about the importance of getting the video filter order correct - I've seen it done wrong. My eyes started to bleed.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-29080309095114291622009-01-16T04:05:00.019+11:002009-02-05T14:01:51.408+11:00New Years Resolution: Massive Music Tag CleanupOnce again I find that months have passed since my last entry. The blog will be a year old in a little over a week and I will once again be attending linux.conf.au, this time down in Hobart. I've got myself some new gadgets - in particular an Eee PC 701SD which only cost $327 AU from JB Hifi so I have a decent computer for the conference. I'll be posting a lot more about it in the coming weeks, but am just mentioning it now as it is linked to today's post. Allow me to explain - while I've kept the default Xandros install on the internal 8 gig solid state drive I've installed Debian on a 2 gig SD card. 2 gig. yep. small, isn't it? and encrypted, but that's for another post. 
The point is that I've been looking for lightweight alternatives to all the software that I traditionally use in my day to day tasks, so while I'll happily leave Amarok alone on Xandros, I didn't really want to pull in all the KDE dependencies to have it on Debian, and I've come across a nice little ncurses music player called cmus to use instead.<br /><br />Now, on my desktop and main laptop I use Amarok pretty much exclusively and have tried to keep all the tags in my music collection accurate - I try to check the track listing, the genre, the year and that the capitalisation complies with English capitalisation rules (except when it is apparent that the odd capitalisation is a conscious decision on the part of the artist and forms part of the art). I'm well aware that I've missed some - some of the artists that have been in my collection for longer still have bad capitalisation and I've only started to check the accuracy of the album years recently.<br /><br />But there is a larger problem - Amarok doesn't reveal every tag to me. While that doesn't matter in the least as long as I'm only using Amarok, it can matter when I use other media players. I'm not worried about any of those albums I own a physical copy of - they're all in ogg (but if you do need a powerful ogg tag editor, tagtool's advanced mode _looks_ promising), but rather the music I've downloaded and have left in mp3. I've been aware of the issue for a while because I occasionally observe some of the symptoms on the various media players available on the Internet Tablet. I have looked at dedicated tag editors, but until now I haven't been able to find one that would show me *every* tag - not just the ones it's programmed to recognise, not just the id3v2.3 tags, but all of them. And not just the first 30 characters of them either.<br /><br />Why is this *so* important, they're just extra tags, right? 
Well, my biggest annoyance is that cmus uses the contents of the TPE2 tag, if it is present, for the Artist in its library view rather than the TPE1 tag which Amarok uses. TPE1 is defined as "Lead performer(s)/Soloist(s)", while TPE2 is defined as "Band/orchestra/accompaniment". Now, the TPE2 tag may well be perfectly valid and correct, but it is not a tag that I have been organising or validating so far with Amarok, so I'd like to get everything consistent and delete the TPE2 tags. While I'm at it, why not remove all the cover art from the mp3s - I've always felt it wasteful to keep 12 copies of the same image when I could and do just put a single image in the same folder. In fact, why not go and remove all the tags that aren't recognised by Amarok - do I really care that it was encoded with lame? I might be happy to leave the 'free download from http://www.last.fm' comment tags alone and I certainly don't want to destroy any comments that I've added, but do I really want any of the other comment tags in there?<br /><br />So I finally found an id3 tag editing tool that can show me most of the tags - <a href="http://eyed3.nicfit.net/">eyeD3</a>. It's still not perfect - <span style="text-decoration:line-through;">there isn't any support for id3v2.2,</span> it doesn't show me the tags that replaygain uses and it did crash while parsing some of the mp3s - I dare say I'll have to come back to those later with another tool, even if it is hexedit. 
<span style="font-weight:bold;">Edit:</span> As the author pointed out, eyeD3 is in fact able to read id3v2.2 tags, just not write them, and those crashes will doubtless be solved in no time.<br /><br />The first step was to find out what tags are actually present in my collection:<br /><code><br />find music -iname "*.mp3" -exec eyeD3 -v {} \; | tee index<br />sort -u index | awk -F\): '/^<.*$/ {print $1}' | uniq | awk -F\)\> '{print $1}' | awk -F\( '{print $(NF)}' > tags<br /></code><br />So, that gives me a list of all the different types of tags in my collection - 44 unique tags in my case. Next step is to work out which ones are used by Amarok and whether I want to keep any of the others. While I could go through and speculate on which of the three tags I can immediately see that might be a year, it's probably a better idea to look at the source code.<br /><code><br />apt-get source amarok libtag1c2a<br />view amarok-1.4.9.1/amarok/src/metabundle.cpp<br />view taglib-1.4/taglib/mpeg/id3v2/id3v2tag.cpp<br /></code><br />Some tags are immediately obvious because the source names their identifiers directly: TPOS (Disc number), TBPM (beats per minute), TCOM (Composer - admittedly this is one tag that I have not been validating), TPE2 (which is marked as a non-standard MS/Apple extension - so it is aware of it, but since it's messing up my collection and Amarok doesn't seem to display it anywhere I'm getting rid of it anyway) and TCMP (Compilation album, ie, show under various artists. 
Unfortunately cmus doesn't appear to use this tag, though it does seem to have some logic for compilation albums - this is a matter I will need to investigate further later on).<br />Digging deeper to look past the nice friendly names that the programmers can recognise to the harsh id3 reality I also identify that I'll need to keep title (TIT2), artist (TPE1), album (TALB), comment (COMM), genre (TCON), year (TDRC) and track (TRCK) - as well as anything that is used when playing the file that isn't identified here.<br /><br />Though Amarok can use images embedded in the mp3s, I don't want any - I much prefer to use Amarok's cover manager combined with <a href="http://www.kde-apps.org/content/show.php/CopyCover+(amaroK+Script)?content=22517">copycover-offline.py</a> to copy them into the appropriate directory (look through the comments for useful patches - hmmm, should probably submit my fix for albums with Various Artists come to think of it).<br /><br />So, I made a list of these tags, one per line in a file called amaroktags. 
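For reference, that file is nothing fancy - one frame ID per line. A sketch of creating it from the Amarok-recognised frames identified above (substitute whatever your own source-diving turns up):

```shell
# One frame ID per line; these are the Amarok-recognised frames named above.
printf '%s\n' COMM TALB TBPM TCMP TCOM TCON TDRC TIT2 TPE1 TPOS TRCK > amaroktags
```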
Then found all the tags in my collection that aren't supported by Amarok:<br /><code><br />cat amaroktags tags | sort | uniq -u<br />view taglib-1.4/taglib/mpeg/id3v2/id3v2.4.0-frames.txt<br /></code><br /><br />Which left me with a list of tags that I wanted to keep:<br />COMM, TALB, TBPM, TCMP, TCOM, TCON, TDRC, TIT2, TPE1, TPOS, TRCK, MCDI (Music CD Identifier), TFLT (File type), TLEN (length, used for seeking), TSRC (International Standard Recording Code - the only album using it in my collection is Nine Inch Nails' Ghosts I-IV)<br /><br />And an even larger list of tags to zap:<br />TPE2, APIC (Attached picture), TDTG (Tagging time), GEOB (arbitrary file), PCNT (Play count), POPM (Popularimeter), PRIV (private textual & binary data), TCOP (copyright), TDEN (encoding timestamp), TENC (Encoded by), TIT1 (content group description), TIT3 (Description refinement), TLAN (language), TMED (Media type), TOAL (Original title), TOFN (original filename), <br />TPUB (publisher), TSSE (encoding settings), TXXX (User defined text), UFID (unique file identifier), USLT (lyrics), WCOM (commercial info), WOAR (artist web page), WXXX (other URL)<br /><br />As well as these ones that I couldn't identify, so I'll zap 'em and hope nothing breaks:<br />NCON, TAGC (appears to be a timestamp)<br /><br />And a couple to manually check later:<br />TOPE (Original artist - I notice that <a href="http://dkcproject.ocremix.org/">Kong in Concert</a> uses these for the original track names, though not accurately - they should probably be in TOAL), TYER and TDRL (years with subtly different meanings - taglib does seem to fall back and use these, but I will need to check for conflicts)<br /><br />So, now I have a pretty definitive list of tags, it's time to zap 'em (after backing up in case something blows up in my face of course). 
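As an aside, the uniq -u trick doing the heavy lifting there is worth spelling out: with both lists concatenated and sorted, any tag appearing in both lists shows up at least twice and is suppressed, so only the tags unique to one list survive. A toy illustration (made-up tag lists):

```shell
# Toy keep-list and collection-list (made-up frame IDs for illustration):
printf '%s\n' TPE1 TALB COMM > keep
printf '%s\n' TPE1 TALB COMM TPE2 APIC > found

# Lines occurring exactly once across both lists - i.e. the tags to zap,
# provided every keep-list entry actually occurs in the collection:
cat keep found | sort | uniq -u   # prints APIC then TPE2
```

Listing the keep-tags twice (as the striptags.sh script further down does with $oktags $oktags) makes the trick robust: a keep-tag that never occurs in the collection still appears twice and so can't sneak into the strip list.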
Although not immediately obvious, it appears that using --set-text-frame, specifying the 4-letter name of the frame and no contents, will remove it, even if it isn't a text frame. Now, this doesn't appear to actually conserve any space in the file - it shuffles the rest of the tags upwards and zeroes out the gap (presumably conserving the space would be possible, but I don't know an easy way off the top of my head - suggestions welcome). There may be some tags that you want to have more intelligent processing on - maybe only remove some of the images or maybe only remove some of the GEOBs - and if that is the case read the eyeD3 documentation, but for me I'm sick of them all and want them gone:<br /><br /><code><br />find music -iname "*.mp3" -exec eyeD3 --set-text-frame=TAGC: --set-text-frame=TPE2: --set-text-frame=TDTG: --set-text-frame=TCOP: --set-text-frame=TDEN: --set-text-frame=TENC: --set-text-frame=TIT1: --set-text-frame=TIT3: --set-text-frame=TLAN: --set-text-frame=TMED: --set-text-frame=TOAL: --set-text-frame=TOFN: --set-text-frame=TPUB: --set-text-frame=TSSE: --set-text-frame=TXXX: --set-text-frame=UFID: --set-text-frame=USLT: --set-text-frame=WCOM: --set-text-frame=WOAR: --set-text-frame=WXXX: --set-text-frame=NCON: --set-text-frame=APIC: --set-text-frame=GEOB: --set-text-frame=PCNT: --set-text-frame=POPM: --set-text-frame=PRIV: --set-text-frame=TCMP: {} \; | tee log<br /></code><br /><br />Depending on how large your collection is, at this stage you may choose to blink, stretch your arms, get some coffee, go to bed or take a vacation. 
Personally, I wrote a blog post.<br /><br />I still have some things I know I'll have to fix up - the Deus Ex Soundtracks all seem to have multiple redundant comments, and there are some non-English comment fields, but you should by this stage have a decent understanding of how to do this - that is of course, if this whole article didn't just go over your head (congrats if it did and you still read this far though :)<br /><br /><span style="font-weight:bold;">update:</span> It turns out that the TCMP frame is not actually set by Amarok, so my solution is to remove all the TCMP flags from the library (I've added it to the above list; the tracks where it is 1 in my collection are correct, but very few of the other tracks in the same albums are tagged in the same way, which would explain some odd behaviour when importing the albums), then to manually add them for all relevant tracks, which hopefully will ease future migration. Unfortunately, as best I can tell, cmus doesn't appear to have any concept of compilation albums in its id3.c. OGG files will supposedly get them since their tags don't require almost one thousand lines of C code to process (by contrast, cmus' vorbis.c file has a mere 285 lines including 33 lines of tag parsing), which raises the question as to why only 1 of my OGG compilation albums is marked as such in cmus.<br /><code><br />find music/V/Various\ Artists/ -iname "*.mp3" -exec eyeD3 --set-text-frame=TCMP:1 {} \;<br /></code><br /><br /><span style="font-weight:bold;">update:</span> I've written a simple shell script to do this automatically, just save this as striptags.sh and execute it from your music directory:<br /><code><br />#!/bin/sh<br /><br />oktags="COMM TALB TBPM TCMP TCOM TCON TDRC TIT2 TPE1 TPOS TRCK MCDI TFLT TLEN TDTG"<br /><br />indexfile=`mktemp`<br /><br />#Determine tags present:<br />find . 
-iname "*.mp3" -exec eyeD3 -v {} \; > $indexfile<br />tagspresent=`sort -u $indexfile | awk -F\): '/^<.*$/ {print $1}' | uniq | awk -F\)\> '{print $1}' | awk -F\( '{print $(NF)}' | awk 'BEGIN {ORS=" "} {print $0}'`<br /><br />rm $indexfile<br /><br />#Determine tags to strip:<br />tostrip=`echo -n $tagspresent $oktags $oktags | awk 'BEGIN {RS=" "; ORS="\n"} {print $0}' | sort | uniq -u | awk 'BEGIN {ORS=" "} {print $0}'`<br /><br />#Confirm action:<br />echo<br />echo The following tags have been found in the mp3s:<br />echo $tagspresent<br />echo These tags are to be stripped:<br />echo $tostrip<br />echo The tags will also be converted to ID3 v2.4 where appropriate<br />echo<br />echo -n Press enter to confirm, or Ctrl+C to cancel...<br />read dummy<br /><br />#Strip 'em<br />stripstring=`echo $tostrip | awk 'BEGIN {FS="\n"; RS=" "} {print "--set-text-frame=" $1 ": "}'`<br />find . -iname "*.mp3" -exec eyeD3 --to-v2.4 $stripstring {} \; | tee -a striptags.log<br /></code>DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com4tag:blogger.com,1999:blog-6485167114445349071.post-18701290361045884552008-09-19T08:25:00.005+10:002008-09-19T09:23:42.489+10:00Nokia to Release 3G Internet TabletThe short version of this article is simply "I am excited". For the important part, scroll down to the bold text below; for the background, read on.<br /><br />Out of *all* the gadgets that I have ever owned, the one that stands out literally miles above the rest is my Nokia N800 Internet tablet - it's well designed and can do just about anything under the sun.<br />My biggest (note: blowing this way out of proportion for the sake of this article) problem with it has been that the only practical ways that it can access the Internet (what with being an Internet Tablet and all) is either through a wireless access point, or by using a 3G bluetooth modem. 
Wireless access points are quite common, but wireless access points I can legally use without annoying restrictions are comparatively rare out in the wild.<br />My solution to this problem was to purchase a 3G data plan (since voice + 3G data is so expensive in Australia) and put the SIM card in a Nokia N95 instead of the USB modem they gave me. The advantages of this are:<br />- Between both devices I can do anything!<br />- I can get on the Internet anywhere, any time, relatively cheaply (but still not as cheaply as those Americans can).<br />- I have a nice (for a phone) 5 mega-pixel camera that fits in my pocket, unlike my (better quality) Kodak camera.<br />- Not only can I use the GPS from the N95 in the N95, but I can also export it as a bluetooth GPS and use it in the mapping apps on the N800.<br />- In an emergency, I can use Gizmo or Skype to make up for the fact that I can't make ordinary phone calls, though thanks to the packet loss and high latency, this is not always practical.<br />- Should my N800 run out of power, I sometimes still have power left in one of the two batteries for the N95 and can therefore continue listening to music.<br />But, this set-up has disadvantages too:<br />- I'm always carrying around 2 devices<br />- I'm always carrying around a spare battery for the N95 because it often doesn't make it through the day on just one.<br />- The N95 needs rebooting all the time to resolve connectivity issues, especially while sharing its Internet connection over bluetooth. It's to the point where I have NStarter installed so I can reboot faster.<br /><br />Now, as I said I am very happy with the N800, and saw no reason to spend money upgrading to the N810 when it came out (although the backlit keyboard did tempt me, a lot). The one thing that would definitely make me upgrade, I said, was if Nokia added 3G support to their next Internet Tablet. 
Failing that I would have to take a long hard look at the specs and my money to decide.<br />I thought it was pretty likely that they would add 3G - it would make sense now with the iPhone out, as it would put the tablet in as a direct competitor, but of course Nokia remained silent as always.<br /><br />Finally, the Maemo summit arrives and I start to see an influx of posts. "The Internet Tablet line may be ending in name but the Maemo platform is going strong"? That doesn't surprise me actually. I've been speculating that their long term plans may involve Maemo ending up on their phones. Although not confirmed, it makes sense given their purchase of Trolltech and their pledge to open source Symbian - both just happen to be written in C++ and they will be able to satisfy the licences to share code between them and satisfy most of the open source community at the same time.<br />Also, they only ever promised 5 iterations of Internet Tablets anyway, of which 4 have been released - 770, N800, N810 and N810 Wimax - though I have a feeling that they said one of those didn't count towards the 5, but I can't remember the details off hand. Whatever the future of the tablets, I think it's a safe bet that we can expect to see Maemo more and more in the future.<br /><br /><br /><span style="font-weight:bold;">Now, at last I see the post I have been waiting for - Maemo 5 will have High Speed Packet Access built in - that's a 3G Tablet promised right there! </span>They've even gone so far as to release the patches for the Linux kernel necessary to support it, so it's pretty much guaranteed now! It will also have a high definition camera, and I doubt that they would drop the GPS that they introduced in the N810, so <span style="font-weight:bold;">this next tablet officially obsoletes everything I'm using my current N95 and N800 for</span>! 
Well, that is of course assuming they don't ditch something else important to me, but I think that the only disadvantage will be less potential storage space upgrades.<br /><br />One final point - the software that runs on the Internet Tablets is now even more Open! I don't know the full details, but the wireless drivers and low level hardware monitoring drivers (ooh, can I fix that DSME now?) are included among the released code.<br /><br />Ahh, isn't the future exciting?DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-14327087938090912682008-09-16T04:43:00.004+10:002008-09-16T04:53:02.389+10:00Command of the Hour: Top VariantsSo, I'm trying to start something that I've dubbed "Command of the Hour" on my local Canberra Linux Users Group mailing list. Quite simply, everyone just chimes in and tells everyone else about some random, obscure and useful command that they know of. Doesn't matter what, doesn't have to be related to any previous post, it just has to be something that they've found useful or can see that others might find useful.<br /><br />But then I thought, why limit this to just my local LUG list? Sure it's great to test drive the idea, but why not try aiming for a wider audience - so here I am copying my initial get-the-ball-rolling post with some top variants here:<br /><br /><br />atop - I just had an issue where gnome wasn't logging in, but seemed to be stuck constantly accessing the hard drive. This command saved me by showing me exactly which program was using the hard drive, and a quick aptitude remove mlocate later my system was working perfectly again. It monitors CPU, memory, disk and network, highlighting any that are particularly stressed, and shows the processes responsible. Processes are only displayed if they have done something interesting since the last update. 
Optional kernel patches are available to enhance the experience if one is so inclined.<br /><br />htop - Awesome ncurses graphical top. Looks pretty and coloured, and simply highlighting a process and pressing 'S' will attach strace to it to see what that runaway process is actually up to. Tag multiple processes and alter the niceness of them all at once, or just kill 'em all. 'T' toggles between process tree view and ordinary top view.<br /><br />powertop - I'm sure lots of people know about this one by now, but for anyone who doesn't: it can show you various information about what is chewing up energy in your system and provide some recommendations for conserving power.<br /><br />iftop - top for network traffic. Shows the traffic going to and fro on every individual transfer, totalled down the bottom in ncurses bar graph style. Amounts are displayed for the last 2, 10 and 40 seconds. Filters can be applied if one is only interested in a subset of the total traffic, and it can naturally do hostname lookups and show port numbers/service names.<br /><br />ntop - another network top, but this one starts a web server on port 3000 to display its results with pretty graphs. It has the advantage that it provides much more detail - it breaks packets down by size, protocol, etc. It has many displays to analyse the data in varying and sometimes entertaining ways. Of course, being heavyweight as it is, if all you need to know is that traffic is flowing from A to B, firing this one up may be overkill, though it would easily suit as a very quick and dirty network monitoring solution.<br /><br /><br />And a few others that I haven't found so useful myself, but someone else might:<br /><br />itop - top for interrupts. I can imagine it would be useful for checking if hardware is getting the computer's attention when it should be.<br /><br />jnettop - this is another network top. 
I prefer iftop since it gives me a graphical display (and its help page is somewhat more detailed than "I must write something here... :)").DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com2tag:blogger.com,1999:blog-6485167114445349071.post-91578289730975400142008-08-30T22:42:00.008+10:002008-09-16T04:58:16.448+10:00MythTV scavenges Scrapheap Challenge episodesMy main desktop box has been running this nifty little program called <a href="http://mythtv.org/">MythTV</a> for over a year now. MythTV is a home-brew Personal Video Recorder for Linux, which essentially means it's kind of like a VCR, but on steroids. It downloads TV guide data from <a href="http://www.oztivo.net">OzTivo</a> and once a week I go through the program guide sorted by genre to see if there is anything on that sounds like it might be worth watching. Then I check its list of Upcoming Recordings to make sure I don't have any conflicts, resolving them if necessary, and go away.<br />Every now and again when I have some time I bring up its recorded programs view, pick something to watch, check that its automatic commercial detection has done its job, making any necessary corrections (which usually takes less than a minute), sit back, hit transcode (to permanently remove those commercials as well as reduce the file size) and enjoy the show.<br />If I'm not at my computer at the time, I can access MythTV remotely using the MythWeb plugin, which lets me change my recording schedules and even stream recorded programs to me. It also integrates with the MythMusic plugin, which I have found handy on a number of occasions when I've been working on an assignment in a computer lab on campus and wanted some decent music to listen to ;)<br />Some months ago, I noticed a program in the guide called <a href="http://en.wikipedia.org/wiki/Scrapheap_challenge">"Scrapheap Challenge"</a> airing daily on ABC2. 
The description sounded interesting, so I told MythTV to record it on a daily basis. Since then MythTV has not missed a single episode: all of season 9 was aired, then ABC2 rolled back to the original season 1 from 1998 and has shown every episode since (at the time of writing it is now up to season 7).<br />Each episode, Scrapheap Challenge pits two teams, each of three members and an expert, against each other to build some contraption that has to perform a specific task out of the junk they can find in a scrapheap, in only 10 hours (except for some special episodes). The idea for the show came from a scene in Apollo 13 where the astronauts only had a short time to construct a carbon dioxide filter from whatever parts they could find on their space capsule. The show is hosted by Robert Llewellyn (who it took me a long time to realise played Kryten in Red Dwarf all those years ago) and Lisa Rogers (prior to season 5 it was Cathy Rogers), who offer entertaining (and sometimes ridiculous, especially in the episode introductions) commentary throughout the show.<br />Anyway, the reason for this post is mostly a plug for the show, as I have been thoroughly enjoying it and highly recommend it to any aspiring engineer or indeed anyone with a technical mindset.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-52949822119426175152008-08-17T05:30:00.005+10:002008-08-17T13:01:18.790+10:00It's amazing what technology can do these daysSo I was at a friend's, stopping the night, but, unable to sleep, I eventually decided to call a cab to take me home. Small dilemma: I didn't want to disturb everyone who was asleep in the house any more than I had to. 
But not to worry, Nokia came to my rescue!<br />First, I didn't know any more detail than the suburb I was in, so I started Nokia Maps on my N95 (my "Mobile Modem", but that's for another post), which fairly quickly acquired a GPS fix even though I was indoors and gave me the street and even the number I was at.<br />The web browser on the N95 is somewhat limited, and although I could browse to the Canberra Cabs website with it and get their phone number, I could not use the online booking facility. Calling them would clearly disturb those sleeping around me (and I'm not even sure it would be possible given the setup I've got), so I pulled out my Nokia N800 Internet Tablet. Using my N95 as a Bluetooth modem, I pointed the Mozilla-based browser to the Canberra Cabs website, filled in my details, and within just a few minutes was in a cab spending the rest of my money for the night getting home.<br /><br />Now to sleep.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-62949898910357527392008-08-11T14:30:00.006+10:002008-08-11T14:46:05.717+10:00BSoD Advertising at LCA08My good friend <a href="http://blog.christophersmart.com/">Chris</a> just sent me this photo taken at linux.conf.au earlier this year. 
I'm on the left, Jason's on the right, and well, you can see what's in the middle ;)<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_xxiLBp-WqaI/SJ_BB2Ic4qI/AAAAAAAAAJ4/JjAaSC6sOY8/s1600-h/00005.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="http://1.bp.blogspot.com/_xxiLBp-WqaI/SJ_BB2Ic4qI/AAAAAAAAAJ4/JjAaSC6sOY8/s320/00005.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5233113529482797730" /></a>DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com1tag:blogger.com,1999:blog-6485167114445349071.post-251420541892515712008-07-14T00:04:00.007+10:002008-09-23T00:48:25.566+10:00N800 as Remote Speaker and Overdue LCA OverviewWell, it has been a long time; I really don't have any excuse. So much for blogging about LCA, ey? I actually have a half-written blog post about it still sitting on my Zaurus, but I never got around to finishing it. My Internet Tablet functioned beautifully (except for a bug regarding the Bluetooth keyboard requiring me to reboot it every now and again to get the on-screen keyboard to come back) and was all I really needed there. The only use the Zaurus got was a little bit of showing off: typing that unpublished blog post when I didn't want to pull out that bulky Dell keyboard, and listening to music on the train when I didn't want to exhaust the N800's batteries. The only time a full-sized laptop would have been particularly handy was during some of the tutorials where it was impossible to keep up on the tablet.<br />It was awesome being able to meet so many people, including some of Nokia's employees, and seeing the N810 in the flesh, as well as Linus himself. If you must know, I saw him four times - after a kernel talk I saw him sneak into an unmarked room, when he walked in on the kernel dev panel talking about kernel debuggers ("Linus! We were just talking about... ice cream.... 
Would you like some ice cream Linus?"), when he ducked up the stairs to grab a snack, and during the meal on the last day.<br />By now there isn't much point in posting info about any of the talks since it has well and truly been blogged to death. If you haven't already, go watch the vids - in particular I recommend Tux' Angels: Incident Response Revealed (about IT forensics using open source tools) and Vik Olliver's talk on the RepRap (I think it was titled The Replicators are Coming or something).<br /><br />Nightwish was *awesome*. I did have to miss out on the Penguin Dinner that night to make it, but ohh was it worth it. Unfortunately I missed out on a shirt since they had stopped selling them before interval, when I was planning on getting one. No matter - I have since ordered some Nightwish and Sonata Arctica merchandise from overseas. Google paying for the bar tab at the students' party followed by gelato was pretty sweet - I think that brings the total meals of mine that Google has paid for to 5 :-)<br /><br /><br />But anyway, moving on to today's post: I have now found yet another use for my Internet Tablet!<br /><br />When it's late at night and I'm at college and want to watch a video or listen to music without disturbing my neighbours, I use a 5 metre headphone extension cable to reach my bed. I've been meaning to look into a Bluetooth headset but haven't got around to it yet. Anyway, I've come back home for a few weeks while uni is on break, and it's late at night and I want to watch a video without disturbing my mother. Problem is, I forgot to pack my headphone extension cable and it's too cold and uncomfortable to sit next to the computer watching it.<br />So I started wondering if there was any way that I could use my Internet Tablet as remote speakers across the room - then I could plug my earphones into it and effectively have (almost) wireless earphones. 
Turns out that it is possible, and not too difficult at all.<br />The two are connected via wireless using my Linksys WRT54GL - which wouldn't be necessary if I could get my laptop to use ad-hoc properly or if either device supported functioning as master. Another option that I haven't looked into yet is using Bluetooth PAN, or possibly hacking the tablet to look like a Bluetooth A2DP headset, but that's for another time and probably another person to hack into existence.<br />On my tablet I opened a terminal and ran:<br /><code>esd -public -tcp -nobeeps</code><br />I then opened a second terminal window and SSHed into my laptop. There I ran:<br /><code>export DISPLAY=:0<br />export ESPEAKER=192.168.1.101  # my Internet Tablet's IP<br />mplayer -ao esd -delay -0.3 video.avi</code><br />I had to use the <code>-delay -0.3</code> parameter as the audio was slightly delayed due to the overhead of sending it over the network, and I found that number gave pretty good lip sync - you would probably have to fiddle around with it to find the optimal setting for your situation. This technique should work for any application that can use esd to output sound.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com1tag:blogger.com,1999:blog-6485167114445349071.post-46334118163651867942008-01-25T05:01:00.000+11:002008-01-25T06:44:11.196+11:00Music Piracy in my Life and Producers' PocketsI am living proof that peer to peer file-sharing networks _can_ increase music sales. 
I'm not saying it will in every case, but if I had to guess at its overall effect on sales, I'd say it would be positive.<br />In the beginning there was almost no music I would listen to - all the radio stations played rubbish almost exclusively 24/7, and I certainly didn't enjoy the music that my sister or mother listened to, so quite frankly the best music around was to be found in certain video games as far as I was concerned.<br />Then one day while I was chatting with a friend, the subject of music came up and I mentioned that I didn't really listen to much because there wasn't very much mainstream music that I liked, so he gave me a copy of his entire music collection. Even going through that, I found myself going from one artist to another - there were certainly artists there that I enjoyed more than what I heard on the radio 99% of the time, but there are probably only about 3 or 4 artists in his entire extensive collection that I still listen to today. It was around this time that I got my first few albums on CD - I didn't buy any of them myself, mind you - they were all gifts. It has been a long, long time since I've added any of the songs from those albums into my playlist.<br />Keep in mind that I was still on dialup back then, so I was only minimally engaging in filesharing, as it could take hours to download a single song - my music knowledge was limited to little more than the radio I never listened to and the large collection of another person's musical tastes.<br />Two years ago my Internet situation changed and I gained much better access to filesharing networks, such that p2p became a viable and attractive option. 
I was able to try out a much wider variety of musical genres, and used Amarok's last.fm related-artists functionality and recommendations from other users of the same p2p network to help me find out that what I really like is in fact Symphonic and Power Metal and, to a lesser extent, some alternative rock and Celtic music - far from the punk rock tastes of my friend, and even further from the pop crap so many radio stations love so much (no offence pop fans, your tastes are your own). Since making this discovery, which I attribute almost exclusively to filesharing networks, I have purchased no fewer than nine albums with my own money, all of which I continue to listen to extensively today. That's nine albums that I would not have bought if it wasn't for filesharing networks. I will also be attending my second ever concert that I have paid for (I've been to a few others where other people shouted or entry was free) next week, when Nightwish perform in Melbourne.<br />Even now that I know the genres that I like, I would not just go out and buy a random album labelled as Power Metal from JB Hi-Fi, because there are a lot of bands in the genre that I dislike, mainly due to their firm belief that since they use metal instruments, their vocals should all be shouted as hard as possible - I do have respect for their throats being able to cope with that much yelling though. No, I would have to either preview them on last.fm or, failing that, download an album or two of theirs to try them out first.<br />I know that I've mentioned last.fm several times and some of you may wonder why I pirated music at all when I could have just used their free service. Well, I've only had an account with them for less than a month - before that I only used the related-artist functionality built in to Amarok to tell me what was similar to music I already had. 
Also, they don't have previews of every artist around, and for the most part they are just that - 30-second previews - not enough to get a complete feel for an artist. Actually, come to think of it, I don't think I've pirated any music since joining up - but then again that's not really unusual for me in one month.<br /><br />And yet, despite the fact that I'm more satisfied with my music collection and the producers and bands of the albums I've purchased have deeper pockets now because of filesharing, I cannot recommend anyone engage in illegal downloading over these networks. I can, however, strongly recommend last.fm as an excellent and completely legal substitute.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-71194964582098998482008-01-25T04:34:00.000+11:002008-01-25T05:40:49.582+11:00Fight off RSI with Workrave and Ergonomic KeyboardsWhile I was typing up that last post, Workrave popped up and reminded me to take a 5-minute rest break. Workrave is an excellent little program for Linux and Windows designed to assist in the recovery from and prevention of RSI; it sits in the tray monitoring keyboard and mouse usage, reminding you to take a break every now and again. I won't cover it in much detail because fsckin w/ linux just did an excellent article on it <a href="http://www.fsckin.com/2008/01/22/howto-help-prevent-rsi-the-silent-killer-with-workrave/">here.</a><br />Grab Workrave from <a href="http://www.workrave.org/">www.workrave.org</a> if it's not in your distro's package repository.<br /><br />I don't suffer from RSI yet, but after some talk on the Canberra Linux Users Group mailing list last year, I decided that I should take some simple steps to minimise the chances of getting RSI, so I purchased the Microsoft Natural Ergonomic Keyboard 4000 - what's $80 compared to still being able to type in a decade or so? 
In addition to the usual split, angled keys found on most ergonomic keyboards, it also features an inverted slope - the front of the keyboard is higher than the back, so your wrists just sit on it at a natural angle with almost no strain. I've removed the useless and utterly annoying F-lock key from it because I kept hitting it instead of F12 when I went to pull down the YaKuake terminal emulator.<br /><br />These two steps alone should go a long way to ensuring that I won't have to end my future career early due to RSI or related injuries. I suppose I could look into things like ergonomic chairs and so forth, and perhaps someday I will, but this should be a cheap and effective start.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0tag:blogger.com,1999:blog-6485167114445349071.post-88039966587553268002008-01-24T23:13:00.001+11:002008-09-16T04:54:48.983+10:00Another Blog to Shout AboutWell, just about everyone else is blogging these days, so why not me? I've dabbled a bit in just about everything to do with computing, but not really enough in any one area to consider myself an expert. I tend to think of myself as a bit of a Jack of all computer-related trades, and this blog will probably start to reflect that, with posts on a wide variety of topics.<br /><br />But who can say, ey? Plenty of friends and people I know consider me to be a computer expert; some have even used the term "genius" to describe me. 
Maybe it's all relative - I consider Andrew Tridgell a computer genius, and I've heard that he thinks of Linus Torvalds as a genius, so the question really is: who does Linus think of as a genius, or is he really the top dog?<br /><br />I've been thinking about starting a blog for quite some time now because I've done many random things (not always ending in success, mind you) that I felt like documenting at the time but didn't - now I have somewhere to put all of that, and my only excuses left are laziness, lack of time or both. Perhaps I will persist and avoid this blog befalling the same state of disarray as so many of my past projects...<br /><br />I will be attending linux.conf.au in Melbourne next week and fully intend to blog about my experiences there. This will be my first time attending a conference like this and I don't really know what to expect - but it should be fun finding out!<br /><br />What else can I say? I'm a full-time Software Engineering student at the Australian National University in Canberra, Australia. I have a passion for Linux and Free/Libre/Open Source Software, and Linux has been my primary OS for many years now. I have a considerable collection of gadgets and machines running Linux, most notably:<br />* A GP2X Personal Entertainment Player which I primarily use for killing time playing old SNES ROMs, and which I daresay will come in handy on the train to and from Melbourne for the conference next week. Came running Linux out of the box.<br />* A Sharp Zaurus SL-C3200 currently running pdaXii13. This too came with Linux out of the box, with Qtopia for its GUI. This little beast doesn't see too much use these days, but its 6GB microdrive comes in handy to store plenty of music for when I don't want to waste the battery on my Most Valued Gadget:<br />* And the MVG (Most Valued Gadget) award goes to my Nokia N800 Internet Tablet running the Linux-based Internet Tablet OS 2008. I don't know how I lived without this gadget, I really don't. 
It does just about everything - Kagu Media Player plays music from one of the two 8GiB SD cards I have in it, while the Vagalume last.fm client streams new and exciting music from last.fm. Its built-in mapping software has helped me to find places on more than a few occasions, while Maemo Mapper does all my mapping related tasks that the built-in mapper doesn't. Its Mozilla-based web browser works like a charm, and the webcam has captured a number of amusing moments when I haven't had any other camera available. I can SSH into any of my other boxes from it. Video playback works great, although I haven't actually used that too much. Pidgin, Skype, Gizmo and the Modest email client all work great, although to be honest I don't use them anywhere near as often as everything else I have on the tablet. GPE PIM todo has my shopping list and checklist of things I need to do before I go to Melbourne next week ;) Well, this summary was a little longer than I anticipated - almost enough so to get a post of its own - perhaps I shall post a full review of this device, everything I use it for, what I like about it, and the few things I dislike about it at a later date.<br />* A Dell XPS M1710. This one didn't run Linux out of the box, but naturally it didn't take me long to set it up for dualboot with Kubuntu. Yeah, I went all out on this machine - I'm a dormant gamer (would play more if I had the time) and wanted a machine that could easily handle what my old laptop could not (last LAN with my old laptop I was quoted as saying that I "should really upgrade my framerate tower" when it dropped down to seconds per frame during the later rounds of one particular tower defence map), yet still be portable enough to take with me - even if I'm just going home for a few weeks by train or plane - something my desktop is certainly not.<br />* My desktop box. Like most computer enthusiasts I put this together myself, so I can't simply quote a brand and model for you to google. 
Currently it's still set up to dualboot 64-bit Kubuntu and XP, the latter of which hasn't been booted since I got the M1710 (it was only ever there for gaming) and I'll probably claim that space for Linux in the not too distant future. This machine is unsurprisingly used for mundane computing tasks when I'm in my room, such as typing up this blog post and listening to music. This computer doubles as my MythTV box, and most of its 1.4 terabytes of storage are filled up with things MythTV has recorded. It has an annoying tendency to only crash when I've gone away - usually half an hour before I attempt to log in remotely - so that means its next crash will probably be while I'm in Melbourne next week.<br /><br />Well, that's enough for the first post - I've got a few more things that I want to put up tonight, but they deserve their own posts.DarkStarSwordhttp://www.blogger.com/profile/14402135338628481428noreply@blogger.com0