Webpack: sass & import, names

Having started moving to sass for my project and including the required bits in my webpack configuration (blog post), the next issue I ran into was that importing didn’t seem to work as expected.

Require not Import?

One of the additions I made to my webpack config was to add a resolve section, allowing me to use more convenient and simpler require lines in my javascript.

  resolve: {
    modulesDirectories: ['node_modules', 'components', 'css', 'fonts'],
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  },

This worked exactly as expected wherever I used a require statement, so I had expected that this would transfer to import statements in css and sass files – but it didn’t. As it seems such an obvious thing to do, I had another look at the README for the sass-loader and found what I was looking for.

~, but not as you know it

For my testing I had created as simple a file as I could think of, test.scss.

@import '../components/_component.scss';

This very simple file just imports another file (which happens to be sass) that belongs to a component I have in the ‘components’ directory. Nothing special, but why do I need the full import path? This was what I needed to get things working, but after looking at the sass-loader again I realised that using ‘~’ would invoke the webpack resolve routines – which is what I was hoping for. A small change to the file,

@import '~_component.scss';

resulted in things working as I wanted.

NB the README cautions against using ~ as you might expect to (if you’re a command line groupie): ~/ implies the home directory, which probably isn’t what you want.
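To make the distinction concrete, here’s a toy sketch of the two resolution behaviours (an illustration of the idea only, not sass-loader’s actual code):

```javascript
// Toy illustration: a '~' prefix switches resolution from "relative to the
// importing file" to "search webpack's configured module directories".
// This is NOT sass-loader's real implementation, just the concept.
function candidatePaths(importPath, currentDir, moduleDirs) {
  if (importPath.charAt(0) === '~') {
    var name = importPath.slice(1);
    return moduleDirs.map(function (dir) { return dir + '/' + name; });
  }
  // a plain path stays relative to the importing file
  return [currentDir + '/' + importPath];
}

console.log(candidatePaths('~_component.scss', 'css', ['node_modules', 'components']));
// [ 'node_modules/_component.scss', 'components/_component.scss' ]
```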

Multiple Outputs?

Having decided that I don’t want css to be included in the javascript output, I added the ExtractText plugin which allowed me to bundle all css into a single css file. This is fine, but what if I wanted to have different css bundles? What if I wanted to have different javascript bundles? My current configuration didn’t seem to allow this.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ],

Thankfully, webpack has this covered. Instead of having a single entry you can have multiple, each with its own name. Additionally, I realised that the entry point doesn’t *need* to be a javascript file, as long as it’s a file that can be processed. So I changed the entry section to this.

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

Running webpack didn’t give me what I expected as I also needed to change the output definition.

  output: {
    path: path.resolve(__dirname, 'html'),
    filename: '[name].js'
  },

Using [name] in the filename pattern simply substitutes the name given in the entry definition, which offers additional possibilities. With the changes made, running webpack produces

html/
     bundle.js
     bundle.css
     test.js
     test.css

The test.js file is a little annoying and in an ideal world it wouldn’t be created, but so far I can’t find any way of preventing it from being created.
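The [name] handling is plain string templating; a one-line sketch of what webpack does with the filename pattern:

```javascript
// Each entry's key is substituted into the output filename pattern.
function outputName(pattern, entryName) {
  return pattern.replace('[name]', entryName);
}

console.log(outputName('[name].js', 'bundle')); // bundle.js
console.log(outputName('[name].js', 'test'));   // test.js
```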

To control the output location even more, changing the entry definition is all that’s required. Updating it to

  entry: {
    ...
    'css/test': [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

results in the files being created in html/css, i.e.

html/
     bundle.js
     bundle.css
     css/
         test.js
         test.css

NB when using a path the name needs to be in quotes.

Using this setup, component css is still included in the bundle.css and the only things appearing in test.css are those that I have specifically included in the entry file, which opens up a lot of possibilities for splitting things up. As I’m using bootstrap for the project one possibility is to use this to output a customised bootstrap file.
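For the bootstrap idea, a hypothetical extra entry might look something like this (the css/bootstrap-custom.scss filename is an assumption for illustration, not a file from my project):

```javascript
  entry: {
    // ...existing bundle and test entries...
    'css/bootstrap': [
      // a sass file that @imports only the bootstrap pieces actually used
      path.resolve(__dirname, 'css/bootstrap-custom.scss'),
    ]
  },
```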

Hot Reload

At present hot reloading of css doesn’t seem to be working. I changed my configuration to this

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

which still provides hot reloading of the javascript, but the css files don’t seem to work. This seems to be a common issue, but as it’s not a serious one for me at present I’m not going to spend too much time looking for solutions. If anyone knows, then I’d love to hear from you.

sass

Continuing my delve into React, webpack and so on and after adding a bunch of css files, I decided it was time to join the 21st century and switch to one of the css preprocessors. LESS was all the rage a few years ago, but now sass seems to have the mindshare and so I’m going to head down that route.

Installing

Oddly enough, it installs via npm :-)

npm install --save-dev node-sass
npm install --save-dev sass-loader

Webpack Config

The webpack config I’m using is as detailed in my post More Webpack, so the various examples I found online for adding sass support didn’t work as I was already using the ExtractTextPlugin to pull all my css into a single file. The solution turned out to be relatively simple and looks like this.

      {
        test: /\.scss$/,
        loader: ExtractTextPlugin.extract(['css', 'sass'])
      }

Additionally I need to add the .scss extension to the list of those that can be resolved, so another wee tweak.

  resolve: {
    ...
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  }

Structure?

One reason for moving to sass is to allow me to split the huge css files into more manageable chunks, but how to arrange this? Many posts on the matter have pointed me to SMACSS, and I’m going to read through the free ebook (easily found via a web search) to see what inspiration I can glean. For each React component, though, I’d like to keep the styles alongside the component: the bond between the JSX and the styling is very tight, and changing one will probably require thinking about changes to the other. As per previous experiments, the component can then require the file and it will magically appear in the bundled, generated css file, regardless of whether I’ve written it in sass or plain css.

For the “alongside” files I’ll use the same filename with the leading underscore that tells sass not to output the file directly. With the webpack setup that isn’t a concern now, but getting into the habit is likely a good idea for the future :-) This means for a component in a file named App.js I’ll add _App.scss and add a line require('_App.scss'); after the rest of the requires.

Variables

I want to use a central variables file for the project, which I can then reference in the sass files, but haven’t quite decided where it should live just yet. Hopefully after reading the ebook and looking at the project a bit more it will make sense.

Now that sass handling is in place, it’s time to start pulling apart my monolithic plain css file and creating the smaller sass files.

Webpack Dev Server

After using webpack for a few days, the attractions of switching to the dev server are obvious.

The webpack-dev-server is a little node.js Express server, which uses the webpack-dev-middleware to serve a webpack bundle.

Install

Oddly enough, it needs to be installed via npm! However, as we’re going to run it from the command line, we’ll install it globally.

sudo npm install -g webpack-dev-server

Running

After installing, simply running the server (in the same directory as the webpack.config.js file) shows that it’s working and that the bundle is built and made available. The next step is to get it serving the HTML file we’ve been using. This proves to be as simple as

$ webpack-dev-server --content-base html/

Requesting the page from http://127.0.0.1:8080/ gives the expected response. Removing the bundled files produced by webpack directly from the html directory and refreshing the page proves the files are being loaded from the dev server. Nice.

Hot Loading

Of course, having the bundle served by webpack is only the start – next I want any changes I make to my React code to be reflected straight away – hot loading! This is possible, but requires another module to be loaded.

npm install --save-dev react-hot-loader

The next steps are to tell webpack where things should be served, which means adding a couple of lines to our entry in webpack.config.js.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080',
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ],

As I’m planning on running this from the command line I’m not going to add a plugin line as some sites advise, but rather use the '--hot' command line switch. I may change this in future, but at present it seems like a better plan.

The final step needed is to add the ‘react-hot’ loader, but this is where things hit a big snag. The existing entry for js(x) files looked like this.

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react', 'es2015']
        }
      },

Adding the loader seemed simple (remembering to change loader to loaders as there was more than one!).

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader'],
        query: {
          presets: ['react', 'es2015']
        }
      },
Error: Cannot define 'query' and multiple loaders in loaders list

Whoops. The solution came from reading various posts, and eventually I settled on the following. It works for my current versions of babel but may not work for future ones. All changes below are applied to the webpack.config.js file.

Add the presets as a variable before the module.exports line.

var babelPresets = {presets: ['react', 'es2015']};

Change the loader definition to use the new variable and remove the existing definition.

      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader?'+JSON.stringify(babelPresets)],
      },
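It’s worth seeing what that concatenation actually produces – webpack treats everything after the ? as the loader’s options:

```javascript
// The presets object is serialised into a query string on the loader name.
var babelPresets = {presets: ['react', 'es2015']};
var loaderString = 'babel-loader?' + JSON.stringify(babelPresets);

console.log(loaderString);
// babel-loader?{"presets":["react","es2015"]}
```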

Now, when running webpack-dev-server --content-base html/ --hot, everything is fine and the page is served as expected.

Editing one of the components triggers a rebuild of the bundle when saved – exactly as expected.

All Change!

As I tried to get this working I discovered that react-hot-loader is being deprecated. Until that happens I’m happy with what I have, and the author promises a migration guide.

Running

To keep things simple and avoid the inevitable memory lapses (and subsequent head scratching about the lack of hot reloading), I’ve added a line to the package.json file. With this in place I can now simply type npm run dev and the expected things will happen.

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack --progress",
    "dev": "webpack-dev-server --content-base html/ --hot"
  },

React Mixins

Having written a simple test app I’m continuing to use it to try and develop my React knowledge :-) One aspect that I did find a little annoying was the amount of code I seemed to repeat. I kept thinking that I should have a base class and simply inherit from it – a pattern I have used a lot in other languages, but this is React and nothing I had seen suggested that pattern.

Enter the Mixin

A React mixin is a simple idea: a block of code that is common to one or more components. After looking at them I found it was possible to extract a lot of common functionality, resulting in this code.

var AppointmentMixin = {
  componentDidMount: function() {
    this.props.updateInterval(this.props.id, this.totalTime());
  },  
  setAdditional: function(ev) {
    this.state.additional = parseInt(ev.target.value);
    this.props.updateInterval(this.props.id, this.totalTime());
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  finishTime: function() {
    return this.props.start.clone().add(this.totalTime(), "minutes");
  },
};    

There is nothing in the above code that is specific to a single component; it’s all plain generic code. To use it in a component you need to add a ‘mixins’ line and remove the old code. This now gives me a component that looks like this.

var Hair = React.createClass({
  mixins: [ AppointmentMixin],
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Start: {this.props.start.format("HH:mm")}</p>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
        <p>Finish: {this.finishTime().format("HH:mm")}</p>
      </div>
    )
  }
});

This is much closer to what I wanted.

Uh oh…

While looking around for information on mixins I came across this line repeatedly.

Unfortunately, we will not launch any mixin support for ES6 classes in React. That would defeat the purpose of only using idiomatic JavaScript concepts.

This looked as if support would be coming, but then I found this post and also this.

Higher Order Component – huh?

Looking at some posts about the concept helped me get a better understanding of what it’s trying to do, so I decided to try changing my example to use it. Sadly it didn’t work out, as I’ve been unable to get the higher order component solution working in a manner akin to a mixin. It’s not so much a replacement as a totally different approach that requires things to be done differently.
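For reference, the general shape of a higher order component is a function that takes a component and returns a new one wrapping it. A React-free caricature of the wrapping idea (render functions standing in for components; all names here are made up for illustration):

```javascript
// Caricature of the HOC shape: wrap a "component" (here just a render
// function) so it always receives some default props. Not React code.
function withDefaults(render, defaults) {
  return function (props) {
    var merged = {};
    Object.keys(defaults).forEach(function (k) { merged[k] = defaults[k]; });
    Object.keys(props).forEach(function (k) { merged[k] = props[k]; });
    return render(merged);
  };
}

var plain = function (p) { return p.title + ': ' + p.duration + ' mins'; };
var hair = withDefaults(plain, { title: 'Hair Appointment', duration: 90 });

console.log(hair({}));                // Hair Appointment: 90 mins
console.log(hair({ duration: 120 })); // Hair Appointment: 120 mins
```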

However, always keen to learn, I rewrote things and ended up with this.

function TimedAppointment(duration, title) {
  const Appointment = React.createClass({
    getInitialState: function() {
      return {duration: duration, 
              additional: 0,
              title: title}
    },
    componentDidMount() {
      this.props.updateInterval(this.props.id, this.totalTime());
    },  
    setAdditional(ev) {
      this.state.additional = parseInt(ev.target.value);
      this.props.updateInterval(this.props.id, this.totalTime());
    },
    totalTime() {
      return this.state.duration + this.state.additional;
    },
    finishTime() {
      return this.props.start.clone().add(this.totalTime(), "minutes");
    },
    render() {
      return (
        <div>
          <h3>{this.state.title}</h3>
          <p>Start: {this.props.start.format("HH:mm")}</p>
          <p>Duration: {this.state.duration} minutes</p>
          <p>Additional duration: <input type="number" step="1" ref="additional" 
                                         value={this.state.additional}
                                         onChange={this.setAdditional}/> minutes</p>
          <p>Total Time Required: {this.totalTime()} minutes</p>
          <p>Finish: {this.finishTime().format("HH:mm")}</p>
        </div>
      )
    }    
  });
  return Appointment;
};
var Hair = TimedAppointment(90, "Hair Appointment");
var Nails = TimedAppointment(30, "Manicure");

This is much neater and certainly gives me a single set of reusable code – no mixins required. It’s possibly far closer to where it should be and is still within my limited comfort zone of understandability.

If anyone cares to point out where I could have gone or what I could have done differently, please let me know :-)

Summary

As time goes on I’m sure that the newer formats of javascript will become more of a requirement and so mixins probably won’t be much use going forward.

React Lessons Learned

I’ve been playing with React for a few days now and, as always, have run across a few things that made me scratch my head and seek help from various places. In an effort to capture some of the errors and mistaken assumptions I have made, I’ll try and show them in a very simple way below. As always, comments and suggestions for improvement are more than welcome.

A Simple Page

To try and show the issues I needed a simple app that I could write that followed what I was working on closely enough to be relevant but without a lot of the complexity that obscures the basic issues. After some thought I’ve come up with a very simple visit planner. It’s going to show a simple timeline of “visits” that can be added in any order and displayed in the same order. No attempt is made to save anything, it’s just data on a page that vanishes upon refresh.

Again, to keep it simple I’m not bothering with anything beyond a straight HTML page containing everything. As I’m not travelling while writing this I’ve used CDNJS for the libraries, but have gone with the non-minified versions to allow for easier debugging. I’m not sure it gets any simpler. I’ve not bothered with any CSS.

Starting Point

Without further fanfare, this is my starting point.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Visit Planner</title>
  </head>
  <body>
    <h1>Visit Planner</h1>
    <div id="content"></div>

    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react-dom.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.11.2/moment.min.js"></script>
    <script type="text/babel">
var Visit = React.createClass({
  getInitialState: function() {
    var start_times = [<option value="1" key="1">9 am</option>,
                       <option value="2" key="2">Midday</option>,
                       <option value="3" key="3">3 pm</option>];
    var today = moment();
    today.seconds(0).hour(9).minutes(0);
    return { start: today, start_times: start_times }
  },
  changeStartTime: function(ev) {
    switch(ev.target.value) {
      case "1": { this.setState({start: this.state.start.hour(9)}); break; }
      case "2": { this.setState({start: this.state.start.hour(12)}); break; }
      case "3": { this.setState({start: this.state.start.hour(15)}); break; }
    };
  },
  render: function() {
    return (
    <div>
      <p>
        <label>When will you be starting your visit?</label>
        <select onChange={this.changeStartTime}>
          {this.state.start_times}
        </select>
      </p>    
      <p>Visit starts at {this.state.start.format("HH:mm")}</p>
    </div>
    )
  }
});

ReactDOM.render(<Visit />, document.getElementById('content'));
    </script>
  </body>
</html>

There’s not much to say about this; it’s simple, plain React. I’m using moment for the date/time and then simply ignoring the date portion, as it provides nice functions for dealing with time intervals. As it stands it doesn’t do much :-)

Visits

For the purposes of this, each visit will simply be an object that has a description (to differentiate it from the other objects), a duration and an optional additional duration (initially set to 0). I’m going to create these as separate classes (even though they will largely be identical) as it allows me to show things more clearly. In reality they wouldn’t be done this way :-)

My first pass at getting something basic looked like this.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: {this.state.additional} minutes</p>
      </div>
    )
  }
});

var Nails = React.createClass({
  getInitialState: function() {
    return {duration: 30, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Manicure</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: {this.state.additional} minutes</p>
      </div>
    )
  }
});

Again, nothing too fancy. The next step was to add the code to add them to the main Visit object. Obviously I would need a list of the objects, so I started by changing the initial state to

    return { start: today, start_times: start_times, appointments: [] }

Next, a couple of buttons to add appointments…

      <p>
        <button id="hair-btn" onClick={this.addAppointment}>Add Hair Appointment</button>
        <button id="nail-btn" onClick={this.addAppointment}>Add Manicure</button>
      </p>  

The addAppointment function is also pretty basic – or so I thought…

Try #1 to add Appointments

This was my initial attempt.

  addAppointment: function(ev) {
    var n = this.state.appointments.length;
    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push(<Hair key={n}/>); break; }
      case "nail-btn": { this.state.appointments.push(<Nails key={n}/>); break; }
    }
    this.forceUpdate();
  },

Adding a line to render these out,

      { this.state.appointments }

This gives us what appears to be a working page. Click a button – appointment appears. All looks good, so let’s continue and add some other functionality.

Additional Time?

Part of the idea is to allow each appointment to have a certain amount of time added, so let’s add that by changing the plain output to an input and adding a total time output in render.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  setAdditional: function(ev) {
    this.setState({additional: parseInt(ev.target.value)});
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
      </div>
    )
  }
});

Having done that, it’s now possible to add multiple objects and give each its own additional time – each is a self-contained unit, exactly as we’d expect. So how do we now create the timeline aspect of the main Visit object?

Timeline

The timeline is simple enough to imagine – we know the start time and how long each appointment takes, so we need to go through them and figure out a start time for each (based on either the overall start or the previous appointment) and then get a finish time. Changing any appointment should change all those after it, and changing the overall start time should change them all. As I want each object to be as self contained as possible, perhaps passing in the start time via the props makes sense? Changing the Hair object to allow this gave me the code below.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  setAdditional: function(ev) {
    this.setState({additional: parseInt(ev.target.value)});
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  finishTime: function() {
    return this.props.start.clone().add(this.totalTime(), "minutes");
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Start: {this.props.start.format("HH:mm")}</p>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
        <p>Finish: {this.finishTime().format("HH:mm")}</p>
      </div>
    )
  }
});

Of course, unless I supply the start time it won’t do anything…

    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push(<Hair start={this.state.start} key={n}/>); break; }
      case "nail-btn": { this.state.appointments.push(<Nails start={this.state.start} key={n}/>); break; }
    }

Running this looks good to start with, but there’s a problem – changing the start time doesn’t change the appointment. Changing the additional time (which forces an update) then updates the appointment and shows the correct time. Also, at present every appointment uses the same start time, so that needs fixing :-)

Realisations

After playing with a few different things and much looking around at websites, it became obvious that while react components are self-contained objects, they can’t be used in the same way that I could use objects in other languages. So, time for a rethink.

React uses keys to determine whether an object is the same as one already rendered, so as long as the key isn’t changed objects will persist between renderings. Each time it’s rendered I can change the props I use, so when I initially add an appointment I don’t need/want to create it, just record enough details for it to be added during the render, probably with varying props.
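The key-matching behaviour can be caricatured in a few lines (an illustration of the concept only, not React’s actual reconciliation code):

```javascript
// Toy sketch: instances persist across renders when their key matches,
// and only the props are refreshed. Not React's real algorithm.
function reconcile(previous, rendered) {
  var byKey = {};
  previous.forEach(function (inst) { byKey[inst.key] = inst; });
  return rendered.map(function (el) {
    var existing = byKey[el.key];
    if (existing) {
      existing.props = el.props; // same instance, new props
      return existing;
    }
    return { key: el.key, props: el.props, state: 'fresh' }; // new instance
  });
}

var first = reconcile([], [{key: 0, props: {start: '09:00'}}]);
first[0].state = 'mounted';
var second = reconcile(first, [{key: 0, props: {start: '10:00'}}]);
console.log(second[0].state, second[0].props.start); // mounted 10:00
```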

Each appointment knows how long it needs to be (the fixed plus variable duration), but the main Visit object also needs to know. Time for the appointment to call the parent when something changes, so we need a callback.

First step: change the appointments to handle these changes. We need to trigger the callback in two cases – when the appointment is initially created (using the componentDidMount hook) and when the additional duration value is changed (via an event handler).

  componentDidMount: function() {
    this.props.updateInterval(this.props.id, this.totalTime());
  },  
  setAdditional: function(ev) {
    this.state.additional = parseInt(ev.target.value);
    this.props.updateInterval(this.props.id, this.totalTime());
  },

I found that using this.setState(…) in setAdditional always resulted in the callback seeing the previous value, I’m guessing because the state update is deferred. This led to the callback not always being called with the current value and some very odd results initially, hence my switch to simply setting the value directly and relying on the update triggered by the parent when it receives the callback.

        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
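That behaviour – reading this.state straight after setState and seeing the old value – can be simulated with a mock (this is a toy model, not React; React batches updates, which is modelled here as a deferred queue):

```javascript
// Mock of why reading this.state immediately after setState sees the old
// value: the update is queued rather than applied synchronously.
function MockComponent() {
  this.state = { additional: 0 };
  this.pending = [];
}
MockComponent.prototype.setState = function (partial) {
  this.pending.push(partial); // queued, not applied yet
};
MockComponent.prototype.flush = function () {
  var self = this;
  this.pending.forEach(function (p) {
    Object.keys(p).forEach(function (k) { self.state[k] = p[k]; });
  });
  this.pending = [];
};

var c = new MockComponent();
c.setState({ additional: 15 });
console.log(c.state.additional); // still 0 - the old value
c.flush();                       // React does this for you, later
console.log(c.state.additional); // 15
```

In real React, the second argument to setState is a callback invoked after the state has been applied, which would be a cleaner way to sequence the updateInterval call than mutating state directly.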

With these in place, the next step is to modify the main visit class. As we’re adding in strict order and not interested in a dynamic ordering ability, we will just use the length of the list as the ‘id’ for each appointment. We’ll also store the duration of each appointment in the data, ending up with this code.

  addAppointment: function(ev) {
    var n = this.state.appointments.length;
    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push({what: 'hair', key: n, interval: 0}); break; }
      case "nail-btn": { this.state.appointments.push({what: 'nails', key: n, interval: 0}); break; }
    }
    this.forceUpdate();
  },

Now that we have the data being stored, the next step is to add the callback that will update the intervals. This is a simple function that takes the index and the new interval duration, as shown below.

  updateInterval: function(idx, interval) {
    this.state.appointments[idx].interval = interval;
    this.forceUpdate();
  },

Finally we need to render the appointments with the correct start times.

  render: function() {
    var _appts = [];
    var _start = this.state.start.clone();
    this.state.appointments.map(function(app, idx) {
      var _props = {key: app.key, start: _start, id: app.key, updateInterval: this.updateInterval};
      switch(app.what) {
        case "hair": { _appts.push(<Hair {..._props}/>); break; }
        case "nails": { _appts.push(<Nails {..._props}/>); break; }
      };
      _start = _start.clone().add(app.interval, "minutes");
    }.bind(this));
    return (
      ...

We also need to render the _appts list, so one final change.

      <p>Visit starts at {this.state.start.format("HH:mm")}</p>
      { _appts }
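The heart of that render loop – each appointment starting where the previous one finishes – boils down to a few lines (plain minute offsets standing in for the moment objects):

```javascript
// Given each appointment's interval in minutes, compute each start time.
// Minutes-since-9am stand in for the moment objects used above.
function startTimes(initialStart, appointments) {
  var start = initialStart;
  return appointments.map(function (app) {
    var thisStart = start;
    start += app.interval; // the next appointment begins when this one ends
    return thisStart;
  });
}

console.log(startTimes(0, [{interval: 90}, {interval: 30}, {interval: 45}]));
// [ 0, 90, 120 ] - i.e. 9:00, 10:30, 11:00
```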

Summary

So it all works and the way it works is surprisingly flexible, if a little different than where I started :-) I’m sure there are better ways of doing all this, so get in touch if you can educate me :-)

More Webpack

Following on from yesterday’s experiments with webpack and react, I’ve changed things around a little today in an effort to make things even simpler.

webpack.ProvidePlugin

One of the annoying aspects of splitting the single file into smaller pieces was the need to add the require lines to every one. I thought there must be a simpler way – and there is! The ProvidePlugin allows you to tell webpack that certain requires should be added as needed. To use it a few changes are needed in the webpack configuration.

First you’ll need to make sure that webpack is available, so add the require at the top of the file.

var webpack = require('webpack');

Then add a plugins section with the plugin and its configuration.

  plugins: [
    new webpack.ProvidePlugin({
      React: "react",
      ReactDOM: "react-dom"
    }),
  ],

Following this change, the javascript files can be simplified by removing the require lines for react or react-dom, so /components/Main.js becomes just

module.exports = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});

This proved very useful for the project, as a few components used jquery; once this plugin had been added (with the appropriate configuration line, $: "jquery"), remembering which components needed the require line was no longer an issue.
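With jquery added, the plugin configuration would look something like this (assuming jquery has been installed via npm):

```javascript
  plugins: [
    new webpack.ProvidePlugin({
      React: "react",
      ReactDOM: "react-dom",
      $: "jquery"   // any module referencing $ gets require('jquery') injected
    }),
  ],
```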

Source Maps

Creating a .map file always seems like a good idea, and it’s as simple as adding a line telling webpack to do just that.

devtool: "source-map",

CSS

Handling CSS is something that never seems as simple as it should or could be, but initially webpack seems to offer a solution. Two new loaders are needed, so install them via npm.

npm install --save-dev style-loader css-loader

Then we need to tell webpack that we can use them for css files, by adding a section to the loaders.

  module: {
    loaders: [
      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react']
        }
      },
      {
        test: /.+.css$/,
        loader: 'style-loader!css-loader'
      }
    ]
  }

As we’re using the resolve to keep require usage simple, we should update that as well (I’ve added the css files in an inventively named css directory),

  resolve: {
    modulesDirectories: ['node_modules', 'components', 'css'],
    extensions: ['', '.js', '.jsx', '.css']
  },

After making these changes, we need to actually add a require for some css. This was done in App.js as follows,

require('style.css');

However, running webpack produced a small surprise and a touch of confusion.

$ webpack
Hash: f4a6a633c9be19bddd79
Version: webpack 1.12.13
Time: 1717ms
        Asset    Size  Chunks             Chunk Names
    bundle.js  689 kB       0  [emitted]  main
bundle.js.map  808 kB       0  [emitted]  main
    + 164 hidden modules

Where was the CSS? The answer is simple but, to my mind, not obvious: it’s merged into bundle.js, and if you open that in an editor you’ll find it there. However, I want the css in a separate file for a number of reasons, so this solution only partly works. The answer turns out to be another plugin.
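As an aside, the reason the css can live inside the javascript at all is that style-loader turns it into code that injects a style tag at runtime. A rough sketch of the effect (an illustration, not the loader’s actual generated code; the fake document below is just enough to demonstrate it):

```javascript
// Sketch of what style-loader's generated module does: the css text
// travels inside the JS bundle and is injected into the page at runtime.
function injectCss(cssText, doc) {
  var style = doc.createElement('style');
  style.appendChild(doc.createTextNode(cssText));
  doc.head.appendChild(style);
  return style;
}

// minimal fake document, just enough to demonstrate the injection
var fakeDoc = {
  head: { children: [], appendChild: function (el) { this.children.push(el); } },
  createElement: function (tag) {
    return { tag: tag, text: '', appendChild: function (n) { this.text += n; } };
  },
  createTextNode: function (t) { return t; }
};

injectCss('p { color: red; }', fakeDoc);
console.log(fakeDoc.head.children[0].text); // p { color: red; }
```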

npm install --save-dev extract-text-webpack-plugin

Once installed there are quite a few changes required to use it. First off, we need to extract the css rather than let it be bundled with the javascript. To do this we need to change the css loader.

      {
        test: /\.css$/,
        loader: ExtractTextPlugin.extract('style-loader', 'css-loader')
      }

Before we can use the ExtractTextPlugin we need to require it, so this line needs to be added at the top of the file.

var ExtractTextPlugin = require("extract-text-webpack-plugin");

Finally we also need to output our extracted css, so we need to add this entry to the plugins.

new ExtractTextPlugin("bundle.css")
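Putting the pieces together, the relevant parts of webpack.config.js now look something like this – a sketch of my reading of the changes above, with the unchanged entry/output settings elided:

```javascript
// webpack.config.js (fragment) – css is extracted into bundle.css
// instead of being inlined into bundle.js.
var ExtractTextPlugin = require('extract-text-webpack-plugin');

module.exports = {
  // entry, output and the babel loader stay as before...
  module: {
    loaders: [
      {
        test: /\.css$/,
        loader: ExtractTextPlugin.extract('style-loader', 'css-loader')
      }
    ]
  },
  plugins: [
    new ExtractTextPlugin('bundle.css')
  ]
};
```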

Running webpack now gives the expected results.

$ webpack
Hash: 55234e19dea7f3c8f04f
Version: webpack 1.12.13
Time: 1757ms
         Asset       Size  Chunks             Chunk Names
     bundle.js     679 kB       0  [emitted]  main
    bundle.css  107 bytes       0  [emitted]  main
 bundle.js.map     794 kB       0  [emitted]  main
bundle.css.map   87 bytes       0  [emitted]  main
    + 164 hidden modules
Child extract-text-webpack-plugin:
        + 2 hidden modules

With this approach I can simply add a require line into any component javascript file for the css it needs and it will be automatically bundled. This works well, but the order in which things are added isn't always as I'd wish, so more care will need to be taken with how I write the css. This approach also opens up the prospect of moving to a LESS/SASS based approach, as the css can be processed before being added to the bundle.
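To illustrate the ordering caveat (the file names here are made up): the css lands in bundle.css in the order the modules are first evaluated, not in any order you might expect from the file system.

```javascript
// Hypothetical example – base.css is emitted first only because it is
// required first; swap the lines and the override order changes.
require('base.css');   // rules emitted first
require('theme.css');  // rules emitted second, so they win ties
```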

I’m reasonably sure I don’t need a map file for the css, but I haven’t found any simple solutions yet. Answers on a postcard.

Starting with React & Webpack

In recent days I've been working on a small single page JS app using React and bootstrap. After getting over my initial issues, yesterday saw a lot of progress and I'm finding it far simpler to work with React than almost any other JS "thing" I've tried. However, (you knew it was coming didn't you?) at present my single page app is just that – a monster of a single HTML file with a bunch of required files on my hard drive. While it's allowed me to make good progress, going forward it's not what I want.

Today was time to dive back into the murky world of npm and webpack and start splitting the monster into lots of smaller pieces. After finding a lot of tutorials online that helped, I'm scribbling this to try and capture what I learned today and hopefully stop myself making the same mistakes in the future! If it helps anyone else then that's a nice bonus.

There are probably many better ways to do this, but I’ve yet to find them, so if you have suggestions or comments then pass them on as I’m keen to learn and improve my understanding.

Starting Point

For this I’m going to start with a really simple HTML page in a newly created directory containing just the files I need.

The HTML file looks like this.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Yet Another Hello World...</h1>
    <div id="example">
    </div>

    <script src="react.min.js"></script>
    <script src="react-dom.min.js"></script>
    <script src="browser.min.js"></script>
    <script type="text/babel">
var Main = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});
ReactDOM.render(<Main />, document.getElementById('example'));
    </script>
  </body>
</html>

This works and produces the expected results, so now to make something simple very much more complicated :-)

NPM

I'll be using npm, so the next step is to get everything installed in preparation. I'll skip the pain of figuring out what I missed first time around (and second, third etc) and just list everything that will be needed. I think the logic for the --save and --save-dev flags is right (--save if it's needed for the final output, --save-dev if it's only needed during development), but if it's not I'm sure someone will yell.

npm init -y
npm install --save-dev webpack
npm install --save-dev babel
npm install --save-dev babel-core
npm install --save-dev babel-loader
npm install --save-dev babel-preset-react
npm install --save-dev babel-preset-es2015
npm install --save react
npm install --save react-dom

As I’m not going to be putting this into git and it’s only for my own use, I added

  "private": true

which stopped the constant warning about no repository being listed.
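For reference, the flag just sits alongside the generated fields in package.json – a minimal sketch, with the other fields elided and the name being a placeholder:

```json
{
  "name": "my-react-app",
  "version": "1.0.0",
  "private": true
}
```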

Structure

This is the area I am still experimenting with. There are loads of boilerplate projects in existence, but while they generally agree on principles, the way they arrange things is often different. My small brain can't handle too much complexity and things can always be changed later, so I'm going to start simple as this is a simple project.

/package.json
/webpack.config.js
/components
           /App.js
           /Main.js
/html
     /index.html

The plan is that all the react components go into (guess where?) components and the HTML file into html. I'll use webpack to generate a single js file, also into html, so that I could distribute the entire directory. As I want to keep components as components, I'm also going to create an App.js file in components that will contain the top level React element.

NB I haven't decided if I'm going to use .jsx for JSX formatted files. It seems like a good idea but it's another change :-) For now I'll try and make sure things will work whichever extension I use, but here it'll be plain ol' .js

/components/Main.js

var React = require('react');

module.exports = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});

/components/App.js

var React = require('react');
var ReactDOM = require('react-dom');
var Main = require('./Main.js');

ReactDOM.render(<Main />, document.getElementById('example'));

/html/index.html

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Yet Another Hello World...</h1>
    <div id="example">
    </div>

    <script src="bundle.js"></script>
  </body>
</html>

Having split all the files, I now need to configure webpack to build the bundle.js file.

Webpack

Webpack expects to find a file called webpack.config.js in the directory you call it from, so we'll create one. We need to tell it which file is the "entry" from which it will start building, and where we want it to create the output. If that's all you supply there will be an error,

Module parse failed: /components/App.js Line 5: Unexpected token <
You may need an appropriate loader to handle this file type.

The module configuration lets you tell webpack how to handle the various files, so we need to add an entry that tells webpack how to handle ours. To do this we add a module loader entry that tests for the appropriate files and uses babel-loader. To handle the JSX formatting we also need to tell it to use the react preset.

var path = require('path');

module.exports = {
  entry: path.resolve(__dirname, "components/App.js"),
  output: {
    path: path.resolve(__dirname, "html"),
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /components\/.+\.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react']
        }
      }
    ]
  }
};

With this file in place, you should now be able to generate the required bundle.js file by running webpack.

Hash: 1a44318bada57501b499
Version: webpack 1.12.13
Time: 1484ms
    Asset    Size  Chunks             Chunk Names
bundle.js  677 kB       0  [emitted]  main
    + 160 hidden modules

After webpack runs OK, the file html/index.html can be loaded in a browser and looks exactly like the original file :-) Success.

Improvements…

While using ‘./Main.js’ for the require is OK, it seems a little messy, so it would be easier to be able to use ‘Main’. To allow this we need to tell webpack a bit more about where to find files, which we do using the resolve entry.

  resolve: {
    modulesDirectories: ['node_modules', 'components'],
    extensions: ['', '.js', '.jsx']
  },

Following this addition, we can change App.js to use

var Main = require('Main');

Using the expected config file for webpack is fine, but if it's not in the root directory you need to use the --config flag for webpack. I had this in one of my iterations and forgot it a few times – more likely with code you don't look at for a while. To avoid forgetting, it's possible to add a build command to package.json telling it to use webpack. This can also be used to provide commonly used flags.

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },

becomes

  "scripts": {
    "build": "webpack --progress --colors",
    "test": "echo \"Error: no test specified\" && exit 1"
  },

To build using the command you need to use npm run build.

Next Steps

Now that I have a structure and a working webpack I can start to split up my monster file. I also need to figure out how to handle CSS and then look at adding some extras to webpack to get minification of the output. Figuring out how to strip the CSS to the base essentials would also be nice, but that's an optimisation for the future :-)

Building postgrest

I’ve long thought that a simple REST layer sitting on top of a database, with some suitable access controls would be an ideal solution for many of the small projects I find myself tinkering with. Until recently I’d never quite found a solution that provided this, but then I came across postgrest.

Having some time to spend looking at it, and the base of an idea that might be ideally suited to using it, I decided to install it. Rather than install the binary, I cloned the repository so that I had access to the source. However, it's written in Haskell, a language I had no experience with. So, how do I build it?

1. Install haskell

$ sudo apt-get install haskell-platform

NB As this uses postgresql you also need the development libraries for postgresql installed.
$ sudo apt-get install libpq-dev

2. Build/setup the project?

Initially it wasn't clear, but after some web searches I found the documentation for the cabal build tool, which explained the standard method and led me to do this.

$ cabal sandbox init
$ cabal install -j

This took a while as the various packages were downloaded and installed.

3. Run postgrest

$ .cabal-sandbox/bin/postgrest
Usage: postgrest (-d|--db-name NAME) [-P|--db-port PORT] (-U|--db-user ROLE)
                 [--db-pass PASS] [--db-host HOST] [-p|--port PORT]
                 (-a|--anonymous ROLE) [-s|--secure] [--db-pool COUNT]
                 [--v1schema NAME] [--jwt-secret SECRET]
  PostgREST 0.2.10.1 / create a REST API to an existing Postgres database

Available options:
  -h,--help                Show this help text
  -d,--db-name NAME        name of database
  -P,--db-port PORT        postgres server port (default: 5432)
  -U,--db-user ROLE        postgres authenticator role
  --db-pass PASS           password for authenticator role
  --db-host HOST           postgres server hostname (default: "localhost")
  -p,--port PORT           port number on which to run HTTP
                           server (default: 3000)
  -a,--anonymous ROLE      postgres role to use for non-authenticated requests
  -s,--secure              Redirect all requests to HTTPS
  --db-pool COUNT          Max connections in database pool (default: 10)
  --v1schema NAME          Schema to use for nonspecified version (or explicit
                           v1) (default: "1")
  --jwt-secret SECRET      Secret used to encrypt and decrypt JWT
                           tokens (default: "secret")

Now I have it built, time to start playing around with using it :-)

afpfs-ng

At home we use atalk for the shares on the Home Media Server. It makes things simpler as there are Apple computers around. The client I use for Ubuntu is afpfs-ng by Simon Vetter. Having built it a few times (old school, eh?!) I find myself having to relearn the package dependencies, so this post is intended to fix that.

The box already has the build-essentials package installed.

Before running configure for afpfs-ng I needed to install libfuse and its development files.

sudo apt-get install libfuse-dev

Also, configure looked for the gcrypt and gmp libraries, so I installed them

sudo apt-get install libgcrypt20-dev libgmp-dev
./configure

During the initial make, I found that the readline library and development files were needed.

sudo apt-get install libreadline-dev

Then the ncurses library couldn’t be linked, which was fixed by installing the development libraries.

sudo apt-get install libncurses5-dev

Once built, I ran

sudo make install

which installed the libraries into /usr/local/lib. As this path wasn't already listed for shared libraries, I had to add a file listing the directory in /etc/ld.so.conf.d/ and then run

sudo ldconfig
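The file itself only needs the one path in it; something like this should do (the file name is my choice – any name ending in .conf works):

```shell
# Create /etc/ld.so.conf.d/afpfs-ng.conf containing the library path,
# then rebuild the shared library cache.
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/afpfs-ng.conf
sudo ldconfig
```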

Hopefully this will help me next time I need to build the apps and perhaps even help others!

Broccoli

It's fair to say that when it comes to the modern world of javascript, I'm something of a luddite. It's not a world I've spent a lot of time with, and while looking at options to start projects much of what I read may as well be double dutch. However, I have spent some time and EmberJS is slowly becoming more familiar and useful. So, now that I'm writing apps, the next step in my learning curve is deploying them. Having read about a few of the tools that are currently in use (this week at least) I chose to try Broccoli. In keeping with my "one step at a time" philosophy I elected to start simple :-)

What follows is what I did after looking at various tutorials, but is largely based on a blog post by Tim Eagan.

The first step was making sure it was installed.

npm install --save-dev broccoli
sudo npm install --global broccoli-cli

Of course this just gets you the tool, so now I needed some plugins to help it do useful stuff. To see what’s available, I looked at http://broccoliplugins.com/. Initially I installed what seemed like the basics.

npm install --save-dev broccoli-merge-trees
npm install --save-dev broccoli-uglify-js
npm install --save-dev broccoli-static-compiler
npm install --save-dev broccoli-concat

The broccoli-sass plugin failed to install for me.

Writing the Brocfile.js was the next step. This is just a javascript file and there were many examples to look at to get started. This was my first attempt.

var concatenate = require('broccoli-concat'),
    mergeTrees = require('broccoli-merge-trees'),
    pickFiles = require('broccoli-static-compiler'),
    uglifyJs = require('broccoli-uglify-js'),
    app = '.',
    appHtml,
    appJs;

appHtml = pickFiles(app, {
  srcDir : '/',
  files : ['index.html'],
  destDir : '/production'
});

appJs = concatenate(app, {
  inputFiles : ['**/*.js'],
  outputFile : '/production/app.js'
});
appJs = uglifyJs(appJs, {
  compress: true
});

module.exports = mergeTrees([appHtml, appJs], {overwrite: true});

After creating the file in the root of my project, I was able to simply run it.

broccoli build 'public'

I now had 2 files: public/production/index.html and public/production/app.js. Tim's example used the sass plugin to generate css files, but as I wasn't using that, some modifications were needed to include the css files I was using.

appHtml = pickFiles(app, {
  srcDir : '/',
  files : ['index.html', 'css/style.css'],
  destDir : '/production'
});

However, after making the changes and running the command again, it failed because the public directory already existed! Sadly, there is no option presently available to force an overwrite, so I had to manually remove the existing directory and will need to do this each time (a small shell script will simplify this!). This is a little annoying, but not too much effort.
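The shell script I have in mind is tiny; something like this, assuming the build target is public as above (the script name is just my choice):

```shell
#!/bin/sh
# rebuild.sh – hypothetical helper: clear the previous build output so
# broccoli doesn't refuse to run, then rebuild into it.
rm -rf public
broccoli build public
```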

After looking at the plugins available, I installed broccoli-manifest to simplify production of the appcache file, which works very well and automates another job for me.

I also installed the broccoli-uncss plugin to eliminate unused css.

UPDATE

There is a plugin that cures this problem, clear-broccoli-build-target. Installing it and adding to Brocfile.js has fixed the existing directory problem.

The A9

The A9 is an unusual road – it has its own website!

OK, to be fair, the website is actually for the A9 Road Safety Group (RSG) but their sole focus is on making the A9 safer :-) The site provides a lot of details and shows their suggestions (now implemented) to make the road safer together with the various documents they have used to make their decisions. Many of these are the usual documents provided by political bodies, such as the RSG, and are therefore of limited interest. One or two are useful and worth a read.

One fact that quickly comes to light is the reliance on the experiences of the A77 speed camera implementation for comparisons. The roads are very different in nature and usage, but it’s unclear how much allowance for these facts has been made.

The A9 can be a frustrating road. Large sections are single carriageway, with limited visibility through woodland. It’s a busy road with a large proportion of users unfamiliar with the road and travelling long distances. The unfamiliarity combined with the distances inevitably leads to frustration, which in turn leads to many instances of poor overtaking – usually at low speed! For regular A9 travellers the experience of rounding a corner and finding a car coming towards you on the same side of the carriageway isn’t unusual. Often the slow speeds involved are the saving grace, but the frustrations and dangers are only too apparent.

Over the past few months average speed cameras have been added to much of the A9 with the aim of reducing the number of accidents. As speed has rarely been a factor in the nearest misses I’ve experienced I find the decision a little strange.

By way of comparison, the A77 already has large sections “protected” by average speed cameras. As with many people I found myself spending too much time watching my speed rather than looking at the road when using the A77, which given the complexity of the road struck me as being a negative for safety.

One aspect shared by both the A9 and A77 is the confusing and overwhelming number and placement of signs. Approaching junctions it's not uncommon to find 5 or more signs, all essentially giving the same information. The placement of the signs seems decreed by committee, and often signs cover each other or are obscured by vegetation. Given the obsession that exists on the A77 (and in Perth and some parts of the A9) for limiting turn options for lanes, correct lane discipline is important but often awkward and a last minute decision unless you're familiar with the junction, due to the sign issues. Couple this with obsessively watching the speedometer and it's a wonder more accidents don't happen.

Average speed cameras are "fairer" than the instant ones that used to be used, but are they really a good solution for the A9? Monitoring the speed of a vehicle provides a single data point, albeit one that can be objectively measured. Police patrols provide a more subjective measurement of a vehicle's behaviour, but they require police officers, with all the issues that they bring. It's a shame that the cameras, with their continuous monitoring of traffic and ability to generate as many tickets as required, have become the only solution now considered by many organisations.

Of course, alongside the speed cameras the A9 group have also lifted the speed limit for HGV vehicles in an effort to reduce tailbacks and the frustrations that accompany them. It's an interesting approach, but the usual relationship between speed and energy applies to accidents involving HGVs, so any accidents that do take place involving HGVs will be more likely to cause injury. Where the balance lies between reducing the number of accidents and the additional injuries caused cannot be known at present, but it will be interesting to reflect on.

Another aspect of the introduction that seems strange is the placement of some of the cameras. One of the average speed zones has its finish just before one of the most dangerous junctions I regularly pass. The addition of warning signs for turning traffic (that only rarely work and are dazzlingly bright when it's dark) has been rendered irrelevant as cars now accelerate away from the average speed zone straight into the path of right turning traffic. Moving the zones by a small amount would have avoided this – so why was it not done? Such inattention to detail does not bode well for a multi million pound project that is meant to save lives.

As anyone who drives regularly will attest, the safest roads are those with a steady, predictable stream of traffic. Introducing anything that interrupts the predictability of traffic increases the risk of accidents. Sticking speed cameras at seemingly random locations on roads seems like a sure fire way of doing just that. The sudden braking and rapid acceleration that accompanies such sites is often the trigger for accidents. Following the installation of the cameras on the section of road I travel almost daily, changes in behaviour have been obvious and the near collision between a van and car that I witnessed a few days ago was the first – and closest – I’ve seen in months. Hopefully it’s just a transitional thing and people will adjust.

I’m certain that the reports published will support the decisions made by the RSG, after all that’s the beauty of statistics :-) It would be nice to think that they would publish the “raw” detailed information about incidents and accidents, but so far I’ve been unable to find any place online that has such data. If anyone knows of such data then I’d love to have a look at it and try and do something with it, though I suspect that this will be a pipe dream.

All these changes have been described as temporary, meant to provide additional safety while the plans for changing the entire A9 into a dual carriageway are developed and implemented. The fact that several of the average speed camera sites are on existing dual carriageway sections would tend to imply that they will be a permanent fixture. The continuing income from the cameras will no doubt be welcome, even if they don’t provide much improvement in safety.

Arizona Road Trip, Day 2

The Plan

From Phoenix, AZ we planned to head to the Petrified Forest National Park before continuing to Chinle, AZ.

The Route

Our route was to take the AZ-101 Loop East from north Scottsdale to E Shea Boulevard. We took E Shea Boulevard east until reaching AZ-87 North to the intersection with AZ-260 E. When we reached the intersection with AZ-277N we turned left and continued north for around 7 miles before turning onto AZ-377N to Holbrook. After lunch in Holbrook we then followed the signs to Petrified Forest entering via the southern gate.

From the Petrified Forest we were going to the I-40 E and then US 191 N to Chinle.

The Day

The weather was terrible when we left Phoenix. Much of the road from the hotel to the AZ-101 Loop was covered in brown water. The rain had started overnight and regular flash flood alerts from our mobiles seemed to suggest it wasn't stopping soon. In fact as we drove the reports on the radio suggested it was very unusual and we would later hear it referred to as the "hundred year storm".

Rain!

Our choice of car suddenly seemed like a good one as the 4 wheel drive and the additional ground clearance made the drive easier than it could have been.

Our transport

Throughout the drive to Holbrook we were dogged by rain and overcast skies, but as we approached Holbrook the sun appeared and the rain stopped. After a quick lunch in Holbrook we drove to the south gate of the Petrified Forest National Park with the idea of driving north through the park and exiting onto I-40.

Pictures from our trip through the park are online here.

Petrified Tree

The pictures we had seen before we travelled didn’t do full justice to the park. It’s a fantastic location with truly amazing scenery and seems surrounded by the phenomenal horizons that the area is famous for. Throughout our visit the sky was overcast with only occasional bursts of sunshine, so my pictures don’t really capture the colours or atmosphere.

The Tepees

The northern exit from the park is at the Painted Desert Visitor Centre and from there we took the I-40E to US-191N onwards to Chinle where we stayed at the Holiday Inn just outside the entrance to Canyon de Chelly.

Not long after we left I-40 our mobile phone service stopped, and this was the case until arriving in Page, several days later! Apparently if you have an international mobile this is the case in Navajo territory as the mobile provider does not have any international agreements. While we routinely had WiFi, the lack of easy mobile use did prove to be inconvenient on a few occasions.


Detective Story

A little while ago, someone who knows we are both interested in photography gave us a camera that had been in a lost property box for a while and asked if we could find its owner. I wasn't present when the actual exchange took place, but it ended up at our house and sat on a shelf until the other day, when things were explained to me.

So, this is what we were given.

Lost Camera

It’s a Sony TX-5. The battery was totally dead when I started looking at it, but we have, by chance, a Sony charger that fitted the battery and so have been able to charge it.

The SD card was 8GB and contained a lot of pictures, starting on Dec 30th 2010 – was it a Christmas present?

There is no name given for copyright in the EXIF data, so I started looking at the pictures to try and find the owner.

I found a car with a legible number plate, but as it looked like a holiday this was probably a rental car.

Discovering that there was a wedding and that I could identify the venue, I started to get my hopes up. The venue is still going and has a web page, so I sent an email with some details in the hope they could help. Sadly they couldn't, as the venue has changed hands and no records are available for the date.

Next stop was to look at the recurring pictures of a street. I assume it's where the owner lives, and by looking at street signs and using Google I've been able to narrow down the house the pictures were taken from to 2 houses. As the house is in Oregon and I have no way of getting more information, I sent an email to the local police department – who replied and are looking into what they can do!

Sat, 27th Sept 2014

The property that I identified was in Medford, OR – which is a long way from Scotland! As I wasn't able to follow up myself I sent an email to the Medford Police Dept hoping that they may be able to assist. The enquiry was passed to Gena Criswell, who responded in the best way possible – by offering to help! After a few emails and a few additional pictures being supplied she was able to contact someone from the pictures who confirmed the camera belonged to her brother! Yes, after all this time its owner has been identified.

I’m waiting for her brother to get in touch and will then arrange to send the camera to him.

I still can’t really believe that this has had such a good outcome, but a large portion of that is down to the amazing efforts of Gena and the Medford Police Department.

Mon, 6th October 2014

Since hearing from the Medford police, the owner has been in touch and supplied the address for returning the camera. I had intended to send it last week, but events overtook me, so today I am packaging up the camera to send back to its owner.

The outcome couldn’t have been any better.

Arizona Roadtrip

Following a summer that consisted of periods of high stress, extreme distractions and less time spent together than usual, the prospect of a holiday was very appealing. The dates were set before any of the adventures of the summer, which meant that as the dust settled there was a question mark over their availability. A few quick emails and a phone call resulted in the dates being confirmed but very little time to plan and book a holiday! So, in the 7 days we had, we planned and booked a roadtrip from and to Phoenix, AZ.

The Route

We had a total of 10 days, including travel to/from the UK, so our rough itinerary was planned thus

  1. Travel to Phoenix
  2. Phoenix to Chinle
  3. Chinle to Monument Valley
  4. Monument Valley to Page
  5. Page
  6. Page to the Grand Canyon
  7. Grand Canyon
  8. Grand Canyon to Phoenix
  9. Travel home
  10. Arrive home

Most days the drive was around 2.5 hours, so easily manageable. The first and last days had slightly more, but were still manageable. Given the areas we were heading to and the amount of time we would be spending in the car, we booked a standard SUV from Alamo, giving us plenty of room.

Preparation

We both enjoy taking pictures and both have DSLR cameras, so storage of the resulting images was going to be an issue while we travelled. I spent some time going through the laptops we were taking and removing as much as possible to give us some much needed space – which still wasn’t enough for Rosie!

Additionally we cleaned a couple of good size external drives, which together with a bag of cables covered us for the entire trip.

The cameras we took were

  1. Nikon D800
  2. Nikon D300
  3. GoPro HD Hero2
  4. GoPro Hero 3 (Black Edition)

For some of the locations we were planning to visit we’d need tripods, so we took along our tripods, further complicating the packing!

Roundcube Users

We’ve been using Roundcube for webmail for a while now without too many problems. It’s easy to install and simple enough to configure and our users seem to find it easy to use.

Recently one of our email accounts was compromised, leading to a spammer sending a lot of spam through our server. While trying to trace which account was the culprit it became apparent that the source of the spam was the webmail interface, but reviewing the logs showed that there had been logins with no details visible (these were, after all, just the apache logs).

What I needed was for Roundcube to log the users who were using the service. Some searches through Google revealed little of help, but then I came across the possibility of enabling the userlogins file. It’s listed in the default config file, but not many other places, so hopefully this post will help others.

To enable, simply add the following to your config.inc.php file.

$config['log_logins'] = true;

Once added, the file will be created with details of every login in the logs directory under the Roundcube installation. It confirmed that the user I suspected from a lot of other log reviewing was the culprit – potentially saving me several hours of effort!

Goodbye Zite

Of all the apps I have installed on my phone, the one I most frequently use is Zite. Following today's news that will be changing, and the app will soon be uninstalled. It's a shame, but doesn't really come as a big surprise as Zite offered a useful, free service – something that is becoming rarer and rarer.

It was my wife who first introduced me to Zite on her iPad. It was an app that filled a void in the market for me and soon became one of the few apps I would look at every day. Its ability to find stories of interest to me and display them in an easy to browse format was incredibly refreshing. When their android app went through a period of not working well I tried Flipboard.

As with most people my initial reaction to Flipboard was "wow", but that faded within minutes. The odd page flipping that was initially "wow" soon became "ugh" and the limited content was annoying. Despite trying to tailor it to my interests the signal:noise ratio was too low – certainly far, far lower than Zite's. I found the interface increasingly became an obstacle to the stories with Flipboard, so it was with some relief that updates to the Zite app made it usable again.

I have no idea what the business model was for Zite, but I suspect that being acquired by Flipboard will be viewed as a success by their investors. With the demise of their app I find my phone increasingly resembling and being used as just that – a phone. While the investors may celebrate, I think there will be a lot of users who will view it as a step backwards.

In fact I find myself wondering why I need a “smart phone” at all. I don’t play games. I don’t download music or movies to it. I do use the camera from time to time. The colour screen is nice, but is a camera and a nice screen ample compensation for battery life that is measured in hours compared with the days I enjoyed with a simpler phone 10 years ago?

Recent activity in the IT world has also shown that apps and services have very little user loyalty. The sudden rush of WhatsApp users to alternative services following their acquisition by Facebook may be a recent example, but it's hardly an isolated instance. Using an app and coming to rely on it for anything seems bound to lead to disappointment. Companies now view you as a commodity to be traded at the first opportunity for massive rewards. How did we get here?

Self Disservice

This morning I had a request for a few purchases. As WHSmith had everything I wanted I popped in and started gathering the items. The store was organised in some random manner meaning it took a few minutes to find everything, but after the exercise I headed for the tills.

Anyone who has been in recently will know that WHSmith have invested in self-service tills rather than staff for the last few years, so I was directed to one of these machines. Despite my regularly using them in Tesco (preferable to a 15-minute wait for a manned till), the first glitch appeared after only 3 items. The dreaded "unexpected item in bagging area" message required a visit from one of the "floating" staff, but as she was busy with another machine I stood and waited for around a minute.

The next item refused to scan. I wanted to buy 5 of these, but as they wouldn’t scan and not wanting to wait for a member of staff I elected to leave them.

It strikes me as amazing just how poor my experience was this morning. Not only did I leave feeling that I had received poor service but I had spent far less than I was intending, primarily due to the self service experience. I can sympathise with the desire to cut costs, but where is the line between saving money and customer service drawn?

Where’s the mouse???

Another update window popped up in Ubuntu 13.10 a couple of days ago. Lots of packages to be updated (281 if memory serves) so as I wasn’t doing anything that required my laptop, I started the upgrade. After a while it finished and asked to restart. Nothing unusual so far. The restart went OK and the login screen popped up – but where was the mouse pointer? Hmm, that’s odd.

Another restart showed the mouse pointer was present right up to the login screen – when it vanished.

Logging in showed that the mouse was there, but the pointer wasn’t. Moving the mouse around and clicking showed it worked, but there was no pointer. A search of the web threw up a few options, but none have worked. Annoying!

This is the third or fourth time in the last 6 months that an Ubuntu upgrade has created some issue with this laptop. Every time (until now) the fix has been easy enough but has meant spending time hunting around on the web that could have been spent doing productive things. A few years ago this wasn't an issue I had. The occasional distribution upgrade caused trouble, but generally things just worked. Upgrades and updates were seamless and could be approached without fear. Sadly this appears to no longer be the case, which brings to the fore again a question I've asked before – is it time for a change?

Of the OSes I've had regular contact with over the last 3 years, I find it amazing that the system that has required the least amount of effort is Windows 7! While it has thrown a few issues my way, they've all been easy enough to fix. OS X has proved the most frustrating and has caused the most hair-pulling incidents, but this may well reflect my lack of experience with it (though things haven't gotten any easier the more time I have spent). Windows 8 seems like a huge change and may be a step too far, but at least I know it works well on this laptop.

As I’ve been toying with the idea of a new laptop the choice of OS will be a huge factor, so does anyone have any advice or recommendations for me? I need a light laptop as I travel a lot but beyond that I’m open to suggestions…

Update
After much work in the terminal I have managed to get back to a desktop with a pointer.

Outlook Woes

Last month we decided to move from our older server to a newer, more powerful box. Moving the majority of services didn’t worry me, but knowing how fragile and potentially awkward the mail can be did give me pause. I spent some time and researched the settings and configuration, tested it as best I could and then made the move. All seemed fine for 75% of the users, but a small issue was troubling the rest, so I adjusted the configuration and watched the results.

As usual things were a mix of good and bad, but some spam did get sent. I quickly fixed the problem and moved on. Now 90% of the users were fine but the remaining 10% comprised the most vocal and so suddenly it felt as if 90% of the users were having troubles.

I tweaked a setting here and there over the next few days, but nothing seemed to work. The complaints grew and the language performed the usual subtle changes of tone that desperation seems to trigger. With hindsight the fact that the affected now numbered less than 5% should have signalled me to pause and take more time. Needless to say I adjusted another setting which opened the floodgates! Initially it didn’t seem like an issue as mail was being delivered and spam was being rejected.

Having removed one level of protection too many, a spammer eventually found the hole and exploited it. As always this coincided with me being away from keyboard for 8 hours, so the server was subjected to a massive deluge of spam. As soon as I was back I stopped things and removed as many messages as I could before they were sent. Restoring the old configuration, I reviewed my changes and found the problem, adjusted the configuration and eventually restarted deliveries. This time I watched and saw that the spam flood had been stopped. Even better, the noisy 5% were now happy. Getting into bed at 3am felt good that night.

Of course that was just the start. Having been open for a short period several blacklists noticed and added the IP to their lists. Many hosts refused to talk to the server, so I started contacting the blacklist providers and attempting to restore the reputation of the server. Over the next few days most accepted the explanations and seeing no more spam originating they removed the IP. Things returned to normality – except for Outlook.

I thought that dealing with AOL was going to be the most problematic given their odd and highly aggressive anti-spam configurations, but actually following the steps on their website had the situation resolved in a matter of days. Outlook on the other hand was a whole different ball game.

The first problem is where do you go for help in getting the problem cured for Outlook domains and addresses? The error message in the logs looks like this…

Jan 6 13:43:04 xxx: xxx: to=, relay=xxx, delay=7.5, delays=2.2/0/0.28/5.1, dsn=5.7.1, status=bounced (host xxx.mail.protection.outlook.com[xxx] said: 550 5.7.1 Service unavailable; Client host [xxx] blocked using Blocklist 1; To request removal from this list please forward this message to delist@messaging.microsoft.com (in reply to RCPT TO command))
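Before chasing the blacklist providers it helps to know exactly which recipients are affected. As a sketch (the sample log line below is hypothetical, modelled on the Postfix-style bounce shown above, so adapt the patterns to your own MTA's log format), the bounced recipients can be pulled straight out of the mail log:

```shell
#!/bin/sh
# Hypothetical mail log lines in the shape of the bounce above --
# the hostnames, IDs and addresses are made up for illustration.
cat > /tmp/maillog.sample <<'EOF'
Jan  6 13:43:04 mail postfix/smtp[1234]: ABC123: to=<user@outlook.com>, relay=host.mail.protection.outlook.com[203.0.113.5]:25, dsn=5.7.1, status=bounced (host said: 550 5.7.1 Service unavailable; Client host blocked using Blocklist 1)
Jan  6 13:44:10 mail postfix/smtp[1234]: ABC124: to=<ok@example.net>, relay=mx.example.net[198.51.100.7]:25, dsn=2.0.0, status=sent (250 2.0.0 OK)
EOF

# List the recipient addresses of bounced deliveries.
grep 'status=bounced' /tmp/maillog.sample \
  | grep -o 'to=<[^>]*>' \
  | sed 's/to=<//; s/>//'
```

Pointed at the real log this gives a quick list of who to warn (and which domains are behind which blocklist) while the delisting requests grind on.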

That’s fine, but of course I didn’t send the message. The person who did send the message wasn’t interested in forwarding it and simply deleted the returned message having noted that it wasn’t delivered. Not an unusual response from an email user I would suggest. As the person who tried to administer the server surely there is a webpage or some such that can be used to accomplish the same thing? Every other blacklist provider has one!

After searching around I find http://mail.live.com/mail/troubleshooting.aspx which offers lots of interesting advice and links. Following them I jump through the hoops and sign up for the various programs they highlight. Then I send them a message via https://support.live.com/eform.aspx?productKey=edfsmsbl3&ct=eformts with the information that they ask for.

Denied.

No explanation or further help is offered. When I reply the message is – yes you guessed it – bounced as the server is blacklisted! Oh, you couldn’t make it up. Trying again with a different email on a different service gives the same denied result and all attempts to find out why are met with a blank wall of copy and pasted text that gives no additional information.

I can sympathise that outlook.com is a huge target for spammers, but making it so hard for others to interact with the service simply means that people will increasingly not interact with it. Large corporations may be able to employ people to spend the time required to deal with the issues, but smaller companies can’t afford such luxuries.

As I typed this I forwarded on a bounced mail to the delist@messaging.microsoft.com email address and received 2 responses – one saying the message was being processed and another saying the message couldn’t be processed as they didn’t understand it! How can such a large organisation as Microsoft make things so difficult?

Quadcopter #2 is Alive

Following the arrival of the longer screws, all 4 motors were quickly attached and their directions checked. Only one was incorrect and needed the wires swapping. Despite the colours of the arms clearly showing direction I also went with green propellers at the front and black at the back (as per the first quad) to give additional indications.

After a quick tweak of the PI values and a zeroing of the receiver inputs it was time to see how it flew.

The answer was: surprisingly well. Compared to the earlier design there was far less yaw evident when lifting off, and the extra indicators of direction made figuring out corrections easier. The shorter legs mean it sits closer to the ground, which appears to make it slightly less stable just as it lifts, but that was easily corrected. The main issue I ran across was that some of the receiver leads came loose and didn't seem to be seated as well as I'd have liked, constantly becoming disconnected. As I have spare cables it's an easy fix.

As it was hovering around freezing when I was trying, I didn’t stay out long. The early signs are positive. Now if the gale force winds can just abate to allow me to tune the PI settings…