Fail2ban Logging

Over the last few days I’ve been trying to help one of my users who had an odd connectivity issue with my server. After looking at the obvious causes it started to look more and more like he had triggered one of the fail2ban rules and was being blocked by iptables. This has happened a few times before and normally checking the rules shows the problem, but this time nothing obvious showed up.

$ sudo iptables -L
Chain fail2ban-ssh (1 references)
target     prot opt source               destination         
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
RETURN     all  --  anywhere             anywhere   

After trying a few other changes and finding the results to be very intermittent it started to look and feel more like an iptables issue, but probably one that was being triggered and then expiring, resulting in the intermittent behaviour. But how to view the IP addresses that had been blocked? There was nothing in the logs…

Don’t Do This!

I started looking for answers to enable logging in iptables, and after finding a few places that detailed how it was done I made some changes – only to lock myself out of the server! Yes, iptables is a very powerful tool and getting it wrong results in real problems when you are connected to your server remotely 🙁 However, as I hadn’t configured iptables to load rules at startup a simple reboot would have restored my access had I thought more about it at the time before using a rescue image 🙂

Logging via Fail2ban

The solution turns out to be very straightforward! My jail.conf file had this configuration

# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport

To enable logging of the actions this is simply changed to use the iptables-multiport-log action.

# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport-log

Messages are logged to /var/log/syslog on this system.

Jun 24 15:01:05 x kernel: [ 3773.082548] fail2ban-ssh:DROP IN=eth0 OUT= MAC=d4:3d:7e:ec:ea:55:cc:e1:7f:ac:56:9e:08:00 SRC= LEN=188 TOS=0x00 PREC=0x00 TTL=51 ID=14196 DF PROTO=TCP SPT=53375 DPT=22 WINDOW=296 RES=0x00 ACK PSH FIN URGP=0 
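With the entries in syslog, pulling out the offending source addresses becomes a one-liner. A minimal sketch (the sample line and the IP address in it are invented for illustration; on a live system you’d grep /var/log/syslog instead of echoing):

```shell
# a sample line in the format the iptables-multiport-log action produces
# (the IP in SRC= is invented for this example)
line='Jun 24 15:01:05 x kernel: fail2ban-ssh:DROP IN=eth0 OUT= SRC= PROTO=TCP SPT=53375 DPT=22'

# on a real system use: grep 'fail2ban-ssh:DROP' /var/log/syslog | ...
echo "$line" | grep -o 'SRC=[^ ]*' | cut -d= -f2
```

Piping the result through `sort | uniq -c` gives a quick count per address.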


Having just started migrating away from Ubuntu to FreeBSD I found the return to using /etc/rc.conf to be “quaint”, but after the issue today I have a new-found respect for it. Rather than having to spend time looking around for how and where a service is started on Ubuntu, it’s all in one place with FreeBSD. Not only that, but when using a rescue image I can mount the drive, find /etc/rc.conf and switch off an offending service quickly and easily.

Thankfully my server is still running 14.04 and so hasn’t been ruined by systemd or this wee adventure would have been far more painful. Another good reason to keep the migration moving forward.

Multiple Postfix Instances

I’ve been using Postfix for around 5 years now and it’s been a great solution for mail. Initially I used a single instance, but as the mail volume grew it started to cause bottlenecks and frustrations. The solution was to move from one instance to 3!

Basic Flow

The theory is to have each instance of Postfix performing a specific role and using the standard communication mechanism between them. Each instance operates independently and uses its own queues while sharing a common log.

Submission Instance

The submission instance listens only on localhost and has few additional checks.
This does open a potential issue if the host is compromised as it will be able to access the port directly, but if compromised then there are other issues 🙂

Input Instance

My input instance listens on the usual SMTP and SMTPS ports for all available interfaces, allowing local access as well as external. It has a lot of additional checks, spam checking, DMARC and so on as well as rate limiting.

Output Instance

All mail that is accepted by either of the input instances arrives here, either for local or external delivery. This is the only instance that delivers mail and as such only the appropriate checks are carried out.
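As a sketch of how the instances might hand mail to each other (the port number, instance names and paths here are illustrative assumptions, not the actual configuration), the receiving instances can relay everything to a localhost-only port that only the output instance listens on:

```
# /etc/postfix-in/ (illustrative)
# hand all accepted mail to the output instance
relayhost = []:10025

# /etc/postfix-out/ (illustrative)
# listen on the internal port for mail from the other instances   inet  n  -  n  -  -  smtpd
```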


The Postfix website has a page explaining how to set up the various instances and it’s a great place to start, but in my experience it takes a while to get the configuration working exactly as you want it. Adding additional checks sometimes has unexpected consequences, so the usual guidance of “make small changes, let the dust settle” applies.
As Postfix makes use of a chroot, after the initial installation I found that a number of required files weren’t available, so I had to copy them into the correct places within the directory structure created. This has meant that following updates some files and libraries become out of date and have to be updated manually. Log entries are made for such files following a restart.


Reloading the configuration doesn’t change with multiple instances; the usual command acts on all of them.

$ sudo postfix reload
postfix/postfix-script: refreshing the Postfix mail system
postfix-out/postfix-script: refreshing the Postfix mail system
postfix-in/postfix-script: refreshing the Postfix mail system

Queue Checking

After making the changes it took me a while to remember how to check the queues! It’s covered in the documentation, but the fact that the mailq command still works (reporting only on the default instance) without telling you can be a bit unsettling.

$ mailq
Mail queue is empty

Checking all the instances requires the use of postmulti.

$ postmulti -x mailq
Mail queue is empty
Mail queue is empty
Mail queue is empty

Each instance reports separately, hence the 3 lines in the response.


While it took a while to get every instance running as I wanted, the advantage of having each instance running at its own speed has been a huge increase in throughput with a reduction in the load on the machine.

PostgreSQL & FreeBSD 10.3

I’ve recently started moving back to FreeBSD from Ubuntu. As it’s a large move and I’ve not touched FreeBSD for quite a few years, baby steps are required. With that in mind I’ve started small with my home server and once I’m comfortable with that and how things work I’ll look at moving my online server. The reasons for the move will have to wait for another post 🙂

Previously the home server had both MySQL and PostgreSQL installed and running, as that reflected how the online server was set up. With this new start I’ve decided to skip the MySQL server and instead move totally to PostgreSQL. However, the change isn’t without its issues and challenges – mainly around getting it installed!


The build was easy enough. I’m using the ports to try and keep things as up to date and configurable as possible as time isn’t a large factor in this build.

$ cd /usr/ports/databases/postgresql95-server
$ sudo make install clean

After answering the various configuration choices all went well and the build completed. Reading the post-install text reveals that you need another step before you can do much.


After installing I wanted to choose a different location for the actual database files – one that would be on my zfs pools with all their extra protections. With previous installations I would have edited the configuration file to point to the new location, but looking around there were no configuration files! Hmmm. As a first step, I decided to just run the initdb command and see what was installed.

$ sudo /usr/local/etc/rc.d/postgresql initdb


After running it, all the expected files were present but the database cluster had also been created under the same tree. Not quite what I wanted. Time to look at the startup script and figure out what was going on…

In the startup script I found this block, which points to the “usual” FreeBSD configuration mechanism as being usable for the changes I wanted.

load_rc_config postgresql

# set defaults
postgresql_flags=${postgresql_flags:-"-w -s -m fast"}
eval postgresql_data=${postgresql_data:-"~${postgresql_user}/data"}
postgresql_initdb_flags=${postgresql_initdb_flags:-"--encoding=utf-8 --lc-collate=C"}
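The `${var:-default}` idiom in the block above is what makes the rc.conf override work: a value set in /etc/rc.conf wins, otherwise the script’s default applies. A minimal illustration (values taken from the script above; the eval/tilde expansion step is omitted for simplicity):

```shell
# no value set, so the script's default is used
postgresql_flags=${postgresql_flags:-"-w -s -m fast"}
printf '%s\n' "$postgresql_flags"

# as if postgresql_data had been set in /etc/rc.conf
postgresql_data="/usr/local/pgdata"
postgresql_data=${postgresql_data:-"~pgsql/data"}
printf '%s\n' "$postgresql_data"
```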


After finding that the configuration settings would be honoured (as I should have expected) I just needed to add them.

$ sudo vi /etc/rc.conf
postgresql_data="/usr/local/pgdata"

Close, but not quite…

After making the changes I tried again.

$ sudo service postgresql initdb
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locale "C".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /usr/local/pgdata ... initdb: could not create directory "/usr/local/pgdata": Permission denied

Hmm, OK, so it’s almost working but as the PostgreSQL commands are run as the pgsql user and not root, the inability to create a new directory isn’t unexpected. I guess what I need to do is create the directory, change ownership and then run the initdb command.

$ sudo mkdir /usr/local/pgdata
$ sudo chown pgsql /usr/local/pgdata
$ sudo service postgresql initdb
Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /usr/local/pgdata -l logfile start


The database was installed, the configuration files are all in the same location and, while for this post I just used /usr/local/pgdata, I can now create the database where it needs to be. Interestingly though, removing the /usr/local/pgsql directory caused the initdb script to fail, so the directory needs to be present even though it stays empty throughout the process, probably because it’s listed as the home directory of the pgsql user.

For future installs, this will be my process:

  1. build and install postgresql
  2. set postgresql variables in /etc/rc.conf
  3. create postgresql data directory and change ownership
  4. run postgresql initdb
  5. start postgresql
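The steps above boil down to a small rc.conf fragment; this is a sketch using the data directory from this post (postgresql_enable is the usual rc.conf switch for the port’s service):

```
# /etc/rc.conf additions
postgresql_enable="YES"
postgresql_data="/usr/local/pgdata"
```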


It was pointed out to me that I probably wanted to set the encoding of the database to UTF-8, so I needed to add this line to my /etc/rc.conf file

postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C"

This line is given at the top of the script but I’d missed it earlier.

postgrest views

When is a view updateable? The answer becomes important when using views to access the data via postgrest. If a view isn’t updateable then insert, update and delete operations will fail.

It’s possible to check by requesting ‘/’ from postgrest to get information about the endpoints available and looking at the insertable field.

  {u'insertable': False, u'name': u'fruits', u'schema': u'public'}, 
  {u'insertable': True, u'name': u'colours', u'schema': u'public'}

In the above, attempts to insert, update or delete from /colours will fail, but attempts for /fruits will be OK.

Simple Views

Where a view is nothing more than a select statement to return rows from a table, it should be updateable.

  CREATE VIEW colours AS
    SELECT * FROM data.colour;

Joins, Unions

Having joins or unions will require more work to make them updateable.

  CREATE VIEW fruits AS
    SELECT,, AS colour
      FROM data.fruit AS f INNER JOIN
           data.colour AS c ON = f.colour_id;

Due to the join, this view isn’t directly updateable.
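You can also ask PostgreSQL directly which views it considers automatically updatable; the standard information_schema.views view exposes this (a sketch, assuming the views live in the public schema):

```sql
-- check which views PostgreSQL will insert into / update automatically
SELECT table_name, is_insertable_into, is_updatable
  FROM information_schema.views
 WHERE table_schema = 'public';
```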

Function & Trigger

In order to make it updateable a function is needed, together with a trigger to call it.

CREATE FUNCTION insert_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
DECLARE
  colour_id int;
BEGIN
  SELECT id INTO colour_id FROM data.colour WHERE name = NEW.colour;
  INSERT INTO data.fruit (name, colour_id) VALUES (, colour_id);
  RETURN NEW;
END $$;

The trigger then tells postgresql to use the function when an insert is required.

CREATE TRIGGER fruit_action
    INSTEAD OF INSERT ON fruits
    FOR EACH ROW EXECUTE PROCEDURE insert_fruit();

Reviewing the endpoints now shows

  {u'insertable': True, u'name': u'fruits', u'schema': u'public'}, 
  {u'insertable': True, u'name': u'colours', u'schema': u'public'}

NB The insertable key refers ONLY to insert, so in this instance, with only the insert function and trigger added, update and delete operations will still fail.

Inserting data is now as simple as 2 post requests.

POST /colours
{"name": "green"}
>> 201
POST /fruits
{"name": "Apple", "colour": "green"}
>> 201

Of course any attempt to update or delete will fail, despite having “insertable” set to True.

PATCH /fruits?name=eq.Apple
{"name": "Green Apple"}
>> 500
{"hint":"To enable updating the view, provide an INSTEAD OF UPDATE trigger or an 
unconditional ON UPDATE DO INSTEAD rule.",
"details":"Views that do not select from a single table or view are not automatically updatable.",
"code":"55000","message":"cannot update view \"fruits\""}


The function required for updating a record is very similar to the insert one.

CREATE FUNCTION update_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
DECLARE
  new_colour_id int;
BEGIN
  SELECT id INTO new_colour_id FROM data.colour WHERE name = NEW.colour;
  UPDATE data.fruit SET name =, colour_id = new_colour_id WHERE id =;
  RETURN NEW;
END $$;

CREATE TRIGGER fruit_update
    INSTEAD OF UPDATE ON fruits
    FOR EACH ROW EXECUTE PROCEDURE update_fruit();

PATCH /fruits?name=eq.Apple
{"name": "Green Apple"}
>> 204

NB It’s worth pointing out that every row matched will be updated, so be careful of the filter criteria provided on the URL.


The delete function needs to return the rows that it deletes. Note that while insert and update relied on NEW, delete uses OLD.

CREATE FUNCTION delete_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
  DELETE FROM data.fruit WHERE id =;
  RETURN OLD;
END $$;

CREATE TRIGGER fruit_delete
    INSTEAD OF DELETE ON fruits
    FOR EACH ROW EXECUTE PROCEDURE delete_fruit();

With the final trigger in place, delete now works.

DELETE /fruits?id=eq.1
>> 204


Letsencrypt

The idea behind letsencrypt is great. Wanting to add an SSL certificate for one of my domains I decided it was time to see how it worked.


No package is yet available for Ubuntu, so it was onto the “less preferred” git route.

$ git clone
$ cd letsencrypt

The posts I read said to run a command, answer the questions and all would be good.

$ ./letsencrypt-auto --server auth

After answering the questions the authentication failed. Hmm, that didn’t work, despite telling it the webroot to place the auth files in.

Going Manual

The stumbling block was the lack of files to prove the domains are ones I should be asking for certificates for. That’s fine, but using the command line above gives no information to let me fix the problem. There is a manual option, so next step was to try that.

./letsencrypt-auto --server --manual auth

This time I was prompted with a string to serve and the URL location at which to make it available. That’s more like what I was expecting, and after creating the file all was well. After reading a little more it appears that using the certonly option was what I really wanted, so the command line would be

./letsencrypt-auto --server --manual certonly

Once the certificates had been created and downloaded, a small edit to the apache configuration files and I have an SSL protected website 🙂
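For reference, the apache edit amounts to something like the following; the domain is a stand-in and the paths assume letsencrypt’s default live/ layout:

```
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile      /etc/letsencrypt/live/
    SSLCertificateKeyFile   /etc/letsencrypt/live/
    SSLCertificateChainFile /etc/letsencrypt/live/
</VirtualHost>
```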


The certificates expire after 90 days, so I needed a command line that I can run via crontab. The commands above required interaction, so they wouldn’t do. Thankfully there is an easy answer.

./letsencrypt-auto --server --manual renew

Tidying up

After writing a small django app to handle auth for the django powered sites that are going to be using certificates and adding the relevant lines to crontab, I think I’m done 🙂

The A9

The A9 is an unusual road – it has its own website!

OK, to be fair, the website is actually for the A9 Road Safety Group (RSG), but their sole focus is on making the A9 safer 🙂 The site provides a lot of details and shows their suggestions (now implemented) to make the road safer, together with the various documents they have used to make their decisions. Many of these are the usual documents produced by political bodies such as the RSG, and are therefore of limited interest. One or two are useful and worth a read.

One fact that quickly comes to light is the reliance on the experiences of the A77 speed camera implementation for comparisons. The roads are very different in nature and usage, but it’s unclear how much allowance for these facts has been made.

The A9 can be a frustrating road. Large sections are single carriageway, with limited visibility through woodland. It’s a busy road with a large proportion of users unfamiliar with the road and travelling long distances. The unfamiliarity combined with the distances inevitably leads to frustration, which in turn leads to many instances of poor overtaking – usually at low speed! For regular A9 travellers the experience of rounding a corner and finding a car coming towards you on the same side of the carriageway isn’t unusual. Often the slow speeds involved are the saving grace, but the frustrations and dangers are only too apparent.

Over the past few months average speed cameras have been added to much of the A9 with the aim of reducing the number of accidents. As speed has rarely been a factor in the nearest misses I’ve experienced I find the decision a little strange.

By way of comparison, the A77 already has large sections “protected” by average speed cameras. As with many people I found myself spending too much time watching my speed rather than looking at the road when using the A77, which given the complexity of the road struck me as being a negative for safety.

One aspect shared by both the A9 and A77 is the confusing and overwhelming number and placement of signs. Approaching junctions it’s not uncommon to find 5 or more signs, all essentially giving the same information. The placement of the signs seems decreed by committee and often signs cover each other or are obscured by vegetation. Given the obsession that exists on the A77 (and in Perth and some parts of the A9) for limiting turn options for lanes, correct lane discipline is important but often awkward and a last minute decision unless familiar with the junction due to the sign issues. Couple this with obsessive watching the speed and it’s a wonder more accidents don’t happen.

Average speed cameras are “fairer” than the instant ones that used to be used, but are they really a good solution for the A9? Monitoring the speed of a vehicle provides a single data point, albeit one that can be objectively measured. Police patrols provide a more subjective measurement of a vehicle’s behaviour, but they require police officers, with all the issues that they bring. It’s a shame that the cameras, with their continuous monitoring of traffic and ability to generate as many tickets as required, have become the only solution many organisations now consider.

Of course, alongside the speed cameras the A9 group have also lifted the speed limit for HGVs in an effort to reduce tailbacks and the frustrations that accompany them. It’s an interesting approach, but the usual relationship between speed and energy applies to accidents involving HGVs, so any accidents that do take place will be more likely to cause injury. Where the balance lies between the reduction in accidents and the additional injuries caused cannot be known at present, but it will be interesting to reflect on.

Another aspect of the introduction that seems strange is the placement of some of the cameras. One of the average speed zones has its finish just before one of the most dangerous junctions I regularly pass. The addition of warning signs for turning traffic (which only rarely work and are dazzlingly bright when it’s dark) has been rendered irrelevant as cars now accelerate away from the average speed zone straight into the path of right-turning traffic. Moving the zones by a small amount would have avoided this – so why was it not done? Such inattention to detail does not bode well for a multi-million pound project that is meant to save lives.

As anyone who drives regularly will attest, the safest roads are those with a steady, predictable stream of traffic. Introducing anything that interrupts the predictability of traffic increases the risk of accidents. Sticking speed cameras at seemingly random locations on roads seems like a sure-fire way of doing just that. The sudden braking and rapid acceleration that accompany such sites are often the trigger for accidents. Following the installation of the cameras on the section of road I travel almost daily, changes in behaviour have been obvious and the near collision between a van and a car that I witnessed a few days ago was the first – and closest – I’ve seen in months. Hopefully it’s just a transitional thing and people will adjust.

I’m certain that the reports published will support the decisions made by the RSG, after all that’s the beauty of statistics 🙂 It would be nice to think that they would publish the “raw” detailed information about incidents and accidents, but so far I’ve been unable to find any place online that has such data. If anyone knows of such data then I’d love to have a look at it and try and do something with it, though I suspect that this will be a pipe dream.

All these changes have been described as temporary, meant to provide additional safety while the plans for changing the entire A9 into a dual carriageway are developed and implemented. The fact that several of the average speed camera sites are on existing dual carriageway sections would tend to imply that they will be a permanent fixture. The continuing income from the cameras will no doubt be welcome, even if they don’t provide much improvement in safety.

Detective Story

A little while ago, someone who knows we are both interested in photography gave us a camera that had been in a lost property box for a while and asked if we could find its owner. I wasn’t present when the actual exchange took place, but it ended up at our house and sat on a shelf until the other day when things were explained to me.

So, this is what we were given.

Lost Camera

It’s a Sony TX-5. The battery was totally dead when I started looking at it, but we have, by chance, a Sony charger that fitted the battery and so have been able to charge it.

The SD card was 8GB and contained a lot of pictures, starting on Dec 30th 2010 – was it a Christmas present?

There is no name given for copyright in the EXIF data, so I started looking at the pictures to try and find the owner.

I found a car with a legible number plate, but as it looked like a holiday this was probably a rental car.

Discovering that there was a wedding and that I could identify the venue, I started to get my hopes up. The venue is still going and has a web page, so I sent an email with some details in the hope they could help. Sadly they couldn’t, as the venue has changed hands and no records are available for the date.

Next stop was to look at the recurring pictures of a street. I assume it’s where the owner lives, and by looking at street signs and using Google I’ve been able to narrow the house the pictures were taken from down to 2 houses. As the house is in Oregon and I have no way of getting more information, I sent an email to the local police department – who replied and are looking into what they can do!

Sat, 27th Sept 2014

The property that I identified was in Medford, OR – which is a long way from Scotland! As I wasn’t able to follow up myself I sent an email to the Medford Police Dept hoping that they may be able to assist. The enquiry was passed to Gena Criswell, who responded in the best way possible – by offering to help! After a few emails and a few additional pictures being supplied she was able to contact someone from the pictures, who confirmed the camera belonged to her brother! Yes, after all this time its owner has been identified.

I’m waiting for her brother to get in touch and will then arrange to send the camera to him.

I still can’t really believe that this has had such a good outcome, but a large portion of that is down to the amazing efforts of Gena and the Medford Police Department.

Mon, 6th October 2014

Since hearing from the Medford police, the owner has been in touch and supplied the address for returning the camera. I had intended to send it last week, but events overtook me and so today I am packaging up the camera to send back to its owner.

The outcome couldn’t have been any better.

Roundcube Users

We’ve been using Roundcube for webmail for a while now without too many problems. It’s easy to install and simple enough to configure and our users seem to find it easy to use.

Recently one of our email accounts was compromised, leading to a spammer sending a lot of spam through our server. While trying to trace which account was the culprit it became apparent that the source of the spam was the webmail interface, but reviewing the logs proved only that there had been logins; no details were visible (these were, after all, just the apache logs).

What I needed was for Roundcube to log the users who were using the service. Some searches through Google revealed little of help, but then I came across the possibility of enabling the userlogins log file. It’s listed in the default config file, but not in many other places, so hopefully this post will help others.

To enable it, simply add the following to your configuration file.

$config['log_logins'] = true;

Once added, the file will be created with details of every login in the logs directory under the Roundcube installation. It confirmed that the user I suspected from a lot of other log reviewing was the culprit – potentially saving me several hours of effort!

Goodbye Zite

Of all the apps I have installed on my phone, the one I most frequently use is Zite. Following today’s news that will be changing and the app will soon be uninstalled. It’s a shame, but doesn’t really come as a big surprise as Zite offered a useful, free service – something that is becoming rarer and rarer.

It was my wife who first introduced me to Zite on her iPad. It was an app that filled a void in the market for me and soon became one of the few apps I would look at every day. Its ability to find stories of interest to me and display them in an easy to browse format was incredibly refreshing. When their android app went through a period of not working well I tried Flipboard.

As with most people my initial reaction to flipboard was “wow” but that faded within minutes. The odd page flipping that was initially “wow” soon became “ugh” and the limited content was annoying. Despite trying to tailor it to my interests the signal:noise ratio was too low – certainly far, far lower than Zite. I found the interface increasingly became an obstacle to the stories with Flipboard, so it was with some relief that updates to the Zite app made it usable again.

I have no idea what the business model was for Zite, but I suspect that being acquired by Flipboard will be viewed as a success by their investors. With the demise of their app I find my phone increasingly resembling and being used as just that – a phone. While the investors may celebrate, I think there will be a lot of users who will view it as a step backwards.

In fact I find myself wondering why I need a “smart phone” at all. I don’t play games. I don’t download music or movies to it. I do use the camera from time to time. The colour screen is nice, but is a camera and a nice screen ample compensation for battery life that is measured in hours compared with the days I enjoyed with a simpler phone 10 years ago?

Recent activity in the IT world has also shown that apps and services command very little user loyalty. The sudden rush of WhatsApp users to alternative services following their acquisition by Facebook may be a recent example, but it’s hardly an isolated instance. Using an app and coming to rely on it for anything seems bound to lead to disappointment. Companies now view their users as a commodity to be traded at the first opportunity for massive rewards. How did we get here?

Self Disservice

This morning I had a request for a few purchases. As WHSmith had everything I wanted I popped in and started gathering the items. The store was organised in some random manner meaning it took a few minutes to find everything, but after the exercise I headed for the tills.

Anyone who has been in recently will know that WHSmith have been investing in self-service tills rather than staff for the last few years, so I was directed to one of these machines. Despite my regularly using them in Tesco (preferable to a 15 minute wait for a manned till), the first glitch appeared after only 3 items. The dreaded “unexpected item in bagging area” message required a visit from one of the “floating” staff, but as she was busy with another machine I stood and waited for around a minute.

The next item refused to scan. I wanted to buy 5 of these, but as they wouldn’t scan and not wanting to wait for a member of staff I elected to leave them.

It strikes me as amazing just how poor my experience was this morning. Not only did I leave feeling that I had received poor service but I had spent far less than I was intending, primarily due to the self service experience. I can sympathise with the desire to cut costs, but where is the line between saving money and customer service drawn?

Where’s the mouse???

Another update window popped up in Ubuntu 13.10 a couple of days ago. Lots of packages to be updated (281 if memory serves) so as I wasn’t doing anything that required my laptop, I started the upgrade. After a while it finished and asked to restart. Nothing unusual so far. The restart went OK and the login screen popped up – but where was the mouse pointer? Hmm, that’s odd.

Another restart showed the mouse pointer was present right up to the login screen – when it vanished.

Logging in showed that the mouse was there, but the pointer wasn’t. Moving the mouse around and clicking showed it worked, but there was no pointer. A search of the web threw up a few options, but none have worked. Annoying!

This is the third or fourth time in the last 6 months that an Ubuntu upgrade has created some issue with this laptop. Every time (until now) the fix has been easy enough but has meant spending time hunting around on the web that could have been spent doing productive things. A few years ago this wasn’t an issue I had. The occasional distribution upgrade caused trouble, but generally things just worked. Upgrades and updates were seamless and could be approached without fear. Sadly this appears to no longer be the case, which brings to the fore again a question I’ve asked before – is it time for a change?

Of the OS’s I’ve had regular contact with over the last 3 years I find it amazing that the system that has required the least amount of effort is Windows 7! While it has thrown a few issues my way, they’ve all been easy enough to fix. OSX has proved the most frustrating and has caused the most hair pulling incidents, but this may well reflect my lack of experience with it (though things haven’t gotten any easier the more time I have spent). Windows 8 seems like a huge change and may be a step too far, but at least I know it works well on this laptop.

As I’ve been toying with the idea of a new laptop the choice of OS will be a huge factor, so does anyone have any advice or recommendations for me? I need a light laptop as I travel a lot but beyond that I’m open to suggestions…

Update
After much work in the terminal I have managed to get back to a desktop with a pointer.

Outlook Woes

Last month we decided to move from our older server to a newer, more powerful box. Moving the majority of services didn’t worry me, but knowing how fragile and potentially awkward the mail can be did give me pause. I spent some time and researched the settings and configuration, tested it as best I could and then made the move. All seemed fine for 75% of the users, but a small issue was troubling the rest, so I adjusted the configuration and watched the results.

As usual things were a mix of good and bad, but some spam did get sent. I quickly fixed the problem and moved on. Now 90% of the users were fine but the remaining 10% comprised the most vocal and so suddenly it felt as if 90% of the users were having troubles.

I tweaked a setting here and there over the next few days, but nothing seemed to work. The complaints grew and the language performed the usual subtle changes of tone that desperation seems to trigger. With hindsight the fact that the affected now numbered less than 5% should have signalled me to pause and take more time. Needless to say I adjusted another setting which opened the floodgates! Initially it didn’t seem like an issue as mail was being delivered and spam was being rejected.

Having removed one level of protection too many, it was inevitable that a spammer would eventually find the issue and exploit it. As always this coincided with me being away from the keyboard for 8 hours, so the server was subjected to a massive deluge of spam. As soon as I was back I stopped things and removed as many messages as I could before they were sent. Restoring the old configuration, I reviewed my changes and found the problem, adjusted the configuration and eventually restarted deliveries. This time I watched and saw that the spam flood had been stopped. Even better, the noisy 5% were now happy. Getting into bed at 3am felt good that night.

Of course that was just the start. Having been open for a short period several blacklists noticed and added the IP to their lists. Many hosts refused to talk to the server, so I started contacting the blacklist providers and attempting to restore the reputation of the server. Over the next few days most accepted the explanations and seeing no more spam originating they removed the IP. Things returned to normality – except for Outlook.

I thought that dealing with AOL was going to be the most problematic given their odd and highly aggressive anti-spam configurations, but actually following the steps on their website had the situation resolved in a matter of days. Outlook on the other hand was a whole different ball game.

The first problem is: where do you go for help in getting the issue cured for Outlook domains and addresses? The error message in the logs looks like this…

Jan 6 13:43:04 xxx: xxx: to=, relay=xxx, delay=7.5, delays=2.2/0/0.28/5.1, dsn=5.7.1, status=bounced (host[xxx] said: 550 5.7.1 Service unavailable; Client host [xxx] blocked using Blocklist 1; To request removal from this list please forward this message to (in reply to RCPT TO command))

That’s fine, but of course I didn’t send the message. The person who did send it wasn’t interested in forwarding it and simply deleted the returned message, having noted that it wasn’t delivered. Not an unusual response from an email user, I would suggest. As the person administering the server, surely there is a webpage or some such that I could use to accomplish the same thing? Every other blacklist provider has one!

After searching around I find which offers lots of interesting advice and links. Following them I jump through the hoops and sign up for the various programs they highlight. Then I send them a message via with the information that they ask for.


No explanation or further help is offered. When I reply the message is – yes you guessed it – bounced as the server is blacklisted! Oh, you couldn’t make it up. Trying again with a different email on a different service gives the same denied result and all attempts to find out why are met with a blank wall of copy and pasted text that gives no additional information.

I can sympathise that is a huge target for spammers, but making it so hard for others to interact with the service simply means that people will increasingly not interact with it. Large corporations may be able to employ people to spend the time required to deal with the issues, but smaller companies can’t afford such luxuries.

As I typed this I forwarded on a bounced mail to the email address and received two responses – one saying the message was being processed and another saying the message couldn’t be processed as they didn’t understand it! How can such a large organisation as Microsoft make things so difficult?

QGIS on Ubuntu 13.10

I’ve been asked about producing shapefiles from the geo data on Variable Pitch. This seems like a good idea, but having no experience with such files I thought maybe I should have an app to test them with. I was pointed at QGIS, but it needed to be added to the sources list for Ubuntu 13.10. This is how I did it.

I created a file /etc/apt/sources.list.d/qgis.list with the contents

deb saucy main
deb-src saucy main

Then I imported the PGP key as the sources are signed.

~$ gpg --keyserver --recv 47765B75
gpg: requesting key 47765B75 from hkp server
gpg: key 47765B75: public key “Quantum GIS Archive Automatic Signing Key (2013) <>” imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
~$ gpg --export --armor 47765B75 | sudo apt-key add -

This was all that was needed and so installation was then as simple as

~$ sudo apt-get update
~$ sudo apt-get install qgis python-qgis

There are a large number of required packages!

Sky+ Catch Up

Tonight we wanted to watch something via the Sky On Demand service, but when we tried to find the programme there was no option for ‘Catch Up’. Some scratching of heads ensued, then a quick visit to a web page or two. The solution turned out to be simply resetting the network connection and all was well, but obvious it wasn’t. Hopefully this post will help someone else.

We Need a Register

How many wind farms are there in the UK?
How many are being planned?

Simple questions which surely deserve simple answers?

Today it’s not possible to answer these questions.

  • Ofgem have some information on the stations that claim subsidies.
  • DECC have information on planning applications for renewables.
  • Renewable UK have a statistics page listing some stations with very limited information.

In order to allow the debate around renewable energy to be informed it’s necessary to be able to answer these and more questions. Knowing the details of every station will allow many questions to be answered.

“Just the Facts, ma’am”

Variable Pitch has been running for over a year and I have been contacted by people from all sides of the debate. Such a reception has confirmed my view that sites providing information should be totally impartial, regardless of the views of their authors. Opinion and commentary have no place on such sites and I would suggest that a register is certainly in that class.


At a minimum the information needed for each station is (I believe)

  • Name and address
  • Owner details
  • Dates of key events
  • Geographic location
  • Details of renewable generation
  • Number and type of installed generators
  • Contact information

The information collected should be

  • freely available to all
  • easily searchable and exportable
  • as up to date as possible
  • as accurate as possible
  • impartial

Following on from work I have done, some of this information can be obtained from Ofgem. Some of it can be obtained from DECC, and some is simply not readily available. Quite why so many companies go to great lengths to not reveal the turbines they install is an ongoing source of mystery and amusement.

Once the information has been gathered the next challenge is keeping it up to date. Most of the sources of information listed above are slow to update (if they ever are). There is nothing that forces developers of renewable energy to inform anyone of their plans until they enter the planning stages. Once complete they only need to supply the information requested by Ofgem. Asking them for information will sometimes result in a reply but often simply results in silence. For an industry that claims to be open there is a lot they don’t want to openly discuss!

Making It Happen

There is no point lobbying the government for such a resource. Even if they were interested and motivated we’d end up with another travesty like the Ofgem website (which is barely fit for any form of purpose).

Any collection of information needs to be totally impartial so none of the trade bodies or protest groups are in a position to manage it.

That leaves us.

I believe that such a site should be created. Further I believe it should be created sooner rather than later. There are people throughout the UK developing websites that collect information on developments in their areas, whether geographic or technologic. Pooling that information would be an excellent start, but we need more. We need to push for developers to be more open and for information to be updated on a more regular basis.

The success of online communities shows what’s possible. Those of us who have experience around this subject need to start talking and thinking about how we work together. If that sounds like you, get in touch!

How does this work again?

It’s been a long, long time since I last picked up the Pocket Wizards to do anything but move them from place to place, but after talking the other night about some shots that Rosie would like, it seemed as though I should try and reacquaint myself with the off camera flash toys we have around the house.

As the dogs are always willing subjects (as long as the ball is being thrown from time to time) and I had some time before needing to start cooking, I headed into the garden for a quick refresher.


The processing is a little too dark, but given how long it is since I last played I’m not unhappy with the results. The next step involves heading further into the unknown, so wish me luck!

Variable Pitch Take 2

It’s fair to say that when I started the Variable Pitch website I didn’t know a lot about the energy market or the data that would be useful and available. I had an idea and some data, and I started trying to make it available. The site started with just Scottish stations, but soon I had people asking me about English, Welsh and Northern Irish stations, so I expanded to also cover those countries.

It wasn’t just the coverage that expanded. While I started out focussing on Renewable Obligation certificates (ROC) and their value, I was soon being asked about other aspects of finance related to renewable energy. The Ofgem data provided a way of viewing the output for stations but my early focus on ROC was soon shown to be an error as other financial schemes such as the Feed In Tariff needed to be considered. Then there was the selling of electricity that I hadn’t thought about and the constraint payments made were another area that I had not originally anticipated tracking but something I added to the site.

As with all projects that grow in such an organic manner, my initial design of data structures turned out to be totally inadequate. I added the pieces I needed, but over time it grew arms and legs on its arms and legs. It worked, but adding things was getting harder and harder. Simple maintenance was proving a challenge. Developing the site was also getting harder.

As a result of everything above I finally decided that the time was right to bite the bullet and use my additional knowledge to develop a better set of structures for the data. This finally allows me to add some of the extra things that I’ve been struggling with – hopefully in a manner that also allows for better maintenance and easier upgrades!

I’ve made a good start on getting things changed and have a lot of the new design work done, so it’s just a question of finding and correcting all the small (and some not so small) bugs before pushing it live. I’m hoping to get that done this week, but it may drift into next week.

Spring Cleaning

Last week I noticed just how poorly organised the music collection on the home server had become. It’s inevitable that over the years things will change and so I was expecting it to be a little messy, but what I found was far, far worse than that. As I have some time off I resolved to fix the problems and so have spent a few hours going through and tidying up the metadata, changing formats and removing duplicates to a point where things are looking pretty good.

Of course, that’s only half the battle as the metadata is then used by the media server to serve the files – destroying much of my good work in the process 🙁 I really wish there was a decent media server for linux.

Spamassassin Storage

We’ve been using SQL for storing various Spamassassin data for the last few years. It’s allowed us to provide our users with a lot of control and has been a good solution. However, a mixture of inattention and a failure of some of the maintenance actions has led to a problem with the bayes tables growing to a size that is beyond sensible. We’ve started taking action to remedy the issue, but as we’re presently using MySQL the size of the database files isn’t shrinking 🙁 The effort involved in correcting this is large enough to make me wonder whether it’s an opportunity to migrate to Postgres. Has anyone done this, or have any opinions on whether it’s a sensible move to make?

Continuing CouchDB Experiments

I’ve been developing a small app that should be a good fit for CouchDB. It will allow me to keep a series of notes, some of which are plain text and some of which are formatted in particular ways. The data needs to be shared between a number of devices, with the ability to edit on any of them – online and offline. It’s not a hugely complex app, nor is it a unique problem, but it’s a useful place to keep looking at CouchDB.

Why do I think CouchDB is a good fit? Its NoSQL foundations mean that data can be easily arranged and stored in ways that make sense for each type. The replication features will allow the various devices to sync with each other without any extra effort. Thanks to the Android Couchbase project it’s possible to use CouchDB on Android (albeit with a larger binary than I would like), allowing for offline storage.

As I’ll be using multiple devices, I’ve decided to start by letting the server allocate its default random _id, which should avoid duplicates and not require me to devise some more complex scheme.

With a conventional SQL database the decision of how to arrange the data would be simple – a series of tables connected with joins. However, when contemplating how to arrange the data with NoSQL there doesn’t seem to be as simple a solution. This could be my inexperience with it, but having read a lot of articles I think it’s just that there are so many possible ways of arranging the data that the right solution will depend on the situation.

I’m keen to keep the complexity down but at the same time I don’t want to have a simpler database at the expense of making the interface more complex to work with. My original thought was to store all the related documents in an array embedded within a single document, for example

{
  "_id": "12345",
  "title": "Some Subject",
  "notes": [
    { "type": "text", "content": "Simple text note" },
    { "type": "text", "content": "Another simple piece of text" }
  ]
}

This is nice and simple and allows me to use a single request to get the entire document, but it adds a lot of complexity to the interface. When editing one of the notes, I would need to

  1. request the document
  2. edit the document
  3. update the entire document
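The read-modify-write cycle those steps describe can be sketched in plain JavaScript (the HTTP calls to CouchDB are omitted, and updateNote is a hypothetical helper, not part of any CouchDB API):

```javascript
// Hypothetical helper: replace the content of one note inside an
// embedded-notes document. Even though only one note changes, the
// entire updated document then has to be PUT back to CouchDB.
function updateNote(doc, index, newContent) {
  const notes = doc.notes.slice(); // copy the notes array
  notes[index] = { ...notes[index], content: newContent };
  return { ...doc, notes }; // new document body to PUT back
}

const doc = {
  _id: "12345",
  title: "Some Subject",
  notes: [
    { type: "text", content: "Simple text note" },
    { type: "text", content: "Another simple piece of text" }
  ]
};

const updated = updateNote(doc, 0, "Edited text note");
console.log(updated.notes[0].content); // "Edited text note"
```

Note that even this small sketch has to copy and rewrite the whole notes array – exactly the interface complexity the embedded approach forces on every edit.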

Far simpler from an editing point of view would be to have the notes as separate records with a link back to their parent document. For example,

{
  "_id": "12345",
  "title": "Some Subject"
}
{
  "_id": "23456",
  "parent": "12345",
  "type": "text",
  "content": "Simple text note"
}
{
  "_id": "34567",
  "parent": "12345",
  "type": "text",
  "content": "Another simple piece of text"
}

This allows me to edit each document separately and so avoids the need to request the entire parent document and then manipulate the returned data prior to updating, but it does raise the question of how I could efficiently request a document and all its related child documents. With some simple changes, a complex key in a view and careful use of parameters, it turns out to be very possible.

First I alter the documents slightly by adding a type field to every document.

{
  "_id": "12345",
  "title": "Some Subject",
  "type": "subject"
}
{
  "_id": "23456",
  "parent": "12345",
  "type": "subject:note",
  "content-type": "text",
  "content": "Simple text note"
}
{
  "_id": "34567",
  "parent": "12345",
  "type": "subject:note",
  "content-type": "text",
  "content": "Another simple piece of text"
}

Next I write a simple view function.

function(doc) {
  if (doc.type) {
    if (doc.type == "subject") {
      emit([doc._id, 0], doc);
    } else if (doc.type == "subject:note" && doc.parent) {
      emit([doc.parent, 1], doc);
    }
  }
}

Running this query (without any parameters) produces the following result,

{"total_rows": 3, "offset": 0, "rows": [
  {
    "id": "12345",
    "key": ["12345", 0],
    "value": {...}
  }, {
    "id": "23456",
    "key": ["12345", 1],
    "value": {...}
  }, {
    "id": "34567",
    "key": ["12345", 1],
    "value": {...}
  }
]}
The ‘…’ in the value fields will contain the contents of each document, so this query returns all the information I need. Filtering is also possible thanks to the usual key, startkey and endkey parameters. Additionally, I can add fields to the complex key without changing how it functions. Nice.

Getting a single subject with all the related notes is as simple as requesting the view with the parameters ?startkey=["12345"]&endkey=["12345", 2].
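A sketch of building that request URL in JavaScript; the database, design document and view names here are assumptions, and the key values need to be JSON-encoded before being URL-encoded:

```javascript
// Hypothetical database/design-doc/view names for illustration.
const base = "http://localhost:5984/notes/_design/app/_view/subjects";

// startkey/endkey values must be valid JSON, then URL-encoded.
const params = new URLSearchParams({
  startkey: JSON.stringify(["12345"]),
  endkey: JSON.stringify(["12345", 2])
});

console.log(`${base}?${params}`);
```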

If I add additional document types then I can simply adjust my view and return them as well, using the number to differentiate between them. Parsing the data for an interface becomes simple and each document can be edited/updated in isolation.
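Client-side, turning the view rows back into a subject with its notes is straightforward, because the [parent_id, 0|1] keys sort each subject row before its notes. A minimal sketch (groupRows is a hypothetical helper):

```javascript
// Group view rows of the form { id, key: [parentId, kind], value }
// into subjects, each carrying an array of its child notes.
function groupRows(rows) {
  const subjects = {};
  for (const row of rows) {
    const [parentId, kind] = row.key;
    if (kind === 0) {
      subjects[parentId] = { ...row.value, notes: [] };
    } else if (subjects[parentId]) {
      subjects[parentId].notes.push(row.value);
    }
  }
  return subjects;
}

// Sample rows in the shape returned by the view above.
const rows = [
  { id: "12345", key: ["12345", 0],
    value: { _id: "12345", title: "Some Subject", type: "subject" } },
  { id: "23456", key: ["12345", 1],
    value: { _id: "23456", parent: "12345", content: "Simple text note" } },
  { id: "34567", key: ["12345", 1],
    value: { _id: "34567", parent: "12345", content: "Another simple piece of text" } }
];

console.log(groupRows(rows)["12345"].notes.length); // 2
```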