RaspberryPi & SWD

The KroozSD board now comes with a handy SWD connector, a simple 3-pin 1 mm JST located in the middle of the board. As debugging is one of the harder aspects of embedded development, the connector has always been an interesting addition, but finding a way to interact with the port has proved tricky.

[Image: kroozsd_swd]

While I have several devices that claim to offer the required interfaces, none of them turned out to be supported. However, yesterday I came across an article explaining both how to build the required software and how to physically connect the port to a RaspberryPi, and so less than an hour later I was trying to figure out SWD!

Physical Connections

Figuring out how to connect the SWD pins to the RaspberryPi was my first challenge. Thankfully this post gave a diagram, which, while not as clear as it could have been, gave me sufficient information to make a start. I used a small breadboard to allow me to put the resistor inline and attached it to the RaspberryPi via some jumper cables.

[Image: raspberrypi_swd]

Now the port was attached, it was time for the software.

Software

This Adafruit article provided exactly what I needed to build OpenOCD on the RaspberryPi. The instructions aren’t hard to follow, but I ignored the part about connecting the device, as I had already done that, and the configuration section wasn’t relevant for the STM32 board.

As per the Adafruit tutorial, I saved the config shown below to openocd.cfg.

source [find interface/raspberrypi2-native.cfg]
transport select swd

set BSTAPID 0x06413041
source [find target/stm32f4x.cfg]
reset_config srst_only srst_nogate

init
targets

After saving the file I then just ran openocd 🙂

Hello?

Initially there were just invalid responses, so I used a small connector to force the board to boot using the built-in ST bootloader rather than the Krooz bootloader (UART 3 SYS pin connected to the +3.3V pin). This brought the board to a simplified state and enabled SWD. When running this time, things looked a little more encouraging.

pi@raspberrypib:~/swd $ sudo openocd
Open On-Chip Debugger 0.10.0-dev-00319-g9728ac3 (2016-05-22-19:00)
Licensed under GNU GPL v2
For bug reports, read
	http://openocd.org/doc/doxygen/bugs.html
BCM2835 GPIO nums: swclk = 25, swdio = 24
BCM2835 GPIO config: srst = 18
srst_only separate srst_gates_jtag srst_push_pull connect_deassert_srst
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
srst_only separate srst_nogate srst_push_pull connect_deassert_srst
cortex_m reset_config sysresetreq
srst_only separate srst_nogate srst_push_pull connect_deassert_srst
Info : BCM2835 GPIO JTAG/SWD bitbang driver
Info : SWD only mode enabled (specify tck, tms, tdi and tdo gpios to add JTAG mode)
Info : clock speed 2002 kHz
Info : SWD DPIDR 0x2ba01477
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
Error: stm32f4x.cpu -- clearing lockup after double fault
Polling target stm32f4x.cpu failed, trying to reexamine
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
    TargetName         Type       Endian TapName            State       
--  ------------------ ---------- ------ ------------------ ------------
 0* stm32f4x.cpu       cortex_m   little stm32f4x.cpu       halted

I will admit the first time it ran successfully I was a little surprised and wondered what to do next, but some quick searches revealed the answer!

SWD Adventures

Given my RaspberryPi is running without a screen or keyboard, the simplest way to access it is via a remote connection. OpenOCD provides just such a connection via telnet, so gaining access from my laptop is simple enough.

$ telnet 192.168.1.95 4444
Trying 192.168.1.95...
Connected to 192.168.1.95.
Escape character is '^]'.
Open On Chip Debugger
>

I’m primarily interested in programming the flash memory, so I started by trying to get information about that.

> flash info 0                    
Device Security Bit Set
#0 : stm32f2x at 0x08000000, size 0x00100000, buswidth 0, chipwidth 0
	#  0: 0x00000000 (0x4000 16kB) protected
	#  1: 0x00004000 (0x4000 16kB) protected
	#  2: 0x00008000 (0x4000 16kB) protected
	#  3: 0x0000c000 (0x4000 16kB) protected
	#  4: 0x00010000 (0x10000 64kB) protected
	#  5: 0x00020000 (0x20000 128kB) protected
	#  6: 0x00040000 (0x20000 128kB) protected
	#  7: 0x00060000 (0x20000 128kB) protected
	#  8: 0x00080000 (0x20000 128kB) protected
	#  9: 0x000a0000 (0x20000 128kB) protected
	# 10: 0x000c0000 (0x20000 128kB) protected
	# 11: 0x000e0000 (0x20000 128kB) protected
STM32F4xx - Rev: Z
> flash banks
#0 : stm32f4x.flash (stm32f2x) at 0x08000000, size 0x00100000, buswidth 0, chipwidth 0

One thing that did catch me out initially was that this is just a telnet session, so any files I referenced needed to be on the RaspberryPi, *not* on my local machine. Once the required files were transferred across, it was time to update the firmware. Thankfully OpenOCD provides a simple way to do this.

> program ap.bin erase verify 0x08004000
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0
adapter speed: 4061 kHz
** Programming Started **
auto erase enabled
stm32x device protected
failed erasing sectors 1 to 5
embedded:startup.tcl:454: Error: ** Programming Failed **
in procedure 'program' 
in procedure 'program_error' called at file "embedded:startup.tcl", line 510
at file "embedded:startup.tcl", line 454
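The failure makes sense given the sector layout reported by flash info: ap.bin is written at 0x08004000, which is the start of sector 1, and its 245760 bytes span sectors 1 to 5. A quick sketch (plain Python, just to double-check the arithmetic against the table above) confirms it:

```python
# STM32F4 (1 MB) flash sector layout, as reported by "flash info 0":
# sectors 0-3 are 16 kB, sector 4 is 64 kB, sectors 5-11 are 128 kB.
SECTOR_SIZES = [0x4000] * 4 + [0x10000] + [0x20000] * 7

def sectors_for_range(offset, length):
    """Return the sector numbers touched by writing `length` bytes at
    `offset` (relative to the 0x08000000 flash base)."""
    touched = []
    start = 0
    for num, size in enumerate(SECTOR_SIZES):
        end = start + size
        # A sector is touched if the write range overlaps [start, end).
        if start < offset + length and end > offset:
            touched.append(num)
        start = end
    return touched

# ap.bin: 245760 bytes programmed at 0x08004000 (offset 0x4000).
print(sectors_for_range(0x4000, 245760))  # → [1, 2, 3, 4, 5]
```

Exactly the "sectors 1 to 5" reported in the error, so the write protection is the only thing in the way.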

The device can’t be programmed while it’s locked, so the next step is to unlock it with the stm32f2x command.

> stm32f2x unlock 0
stm32f2x unlocked.
INFO: a reset or power cycle is required for the new settings to take effect.
> reset halt
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0

Now the device is unlocked, I can try programming again.

> program ap.bin erase verify 0x08004000
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0
adapter speed: 4061 kHz
** Programming Started **
auto erase enabled
wrote 245760 bytes from file ap.bin in 5.491092s (43.707 KiB/s)
** Programming Finished **
** Verify Started **
verified 153560 bytes in 0.333037s (450.283 KiB/s)
** Verified OK **

Now that it has been programmed, it’s time to lock the device again!

> stm32f2x lock 0                       
stm32f2x locked

Conclusion

I’m sure there is a lot more that I can do with SWD and I now have a way of connecting and using it. Sadly I’m no further forward with getting the board working 🙁

Quad Update #7

In my earlier post I said that the bootloader for the KroozSD was luftboot, but in reality it’s available here. The version contained in that tree doesn’t appear to be fully up to date as there is no CRC and the strings reported by the device are different, but I’m assuming the current code is similar.

While I realise I could simply switch to a different laptop, I can’t help but feel this should be a simple enough problem to fix, and as I’m always interested in learning, another wee adventure begins!

USB Debugging

Having gone from working USB transfers to non-working USB transfers, I’ve been doing a little USB debugging. The stm32_mem.py script uses PyUSB, so my first stop was to enable additional debug information for the library. The library authors have made this very easy.

$ export PYUSB_DEBUG=debug
$ ./stm32_mem.py ...
2016-05-21 10:05:06,719 DEBUG:usb.backend.libusb1:_LibUSB.__init__()
2016-05-21 10:05:06,721 INFO:usb.core:find(): using backend "usb.backend.libusb1"
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.enumerate_devices()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()

The level of information is impressive and allowed me to verify that no unexpected calls were being made and that the calls matched what I thought was taking place. Having checked that, the next level was to look at whether other calls were being made by the system. For this I turned to usbmon.

The Ubuntu kernel already has the basic support needed, so the first step was simply to load the usbmon kernel module and verify that it was available.

$ sudo modprobe usbmon
$ ls /sys/kernel/debug/usb/usbmon
0s  0u  1s  1t  1u  2s  2t  2u

Finding the bus number was easy enough.

$ dmesg
...
[25450.332261] usb 1-2: new full-speed USB device number 34 using xhci_hcd
[25450.462462] usb 1-2: New USB device found, idVendor=0483, idProduct=df11
[25450.462505] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[25450.462510] usb 1-2: Product: KroozSD CRC
[25450.462514] usb 1-2: Manufacturer: S.Krukowski
[25450.462517] usb 1-2: SerialNumber: Bootloader

So, bus #1 was where to look, but which file did I need? The usbmon documentation explains that the files ending with ‘u’ are the best ones to use, so ‘1u’ was what I wanted. Getting the data into a file so I could analyse it turned out to be as easy as

$ cat /sys/kernel/debug/usb/usbmon/1u > krooz.mon

Running the stm32_mem.py script again (in a different terminal) resulted in a capture file containing a lot of information, so I used CTRL-C to stop the capture. Analysing the data turned out to be simple enough as well, thanks to Virtual USB Analyzer. The app wasn’t installed by default but was in apt, so installation and usage were simple enough.

$ sudo apt-get install vusb-analyzer
$ vusb-analyzer krooz.mon

The app displayed the information needed, but without DFU support built in some interpretation was required.
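For the interpretation, the usbmon text format is described in the kernel documentation: each line begins with a URB tag, a timestamp in microseconds, an event type (S/C/E) and an address word of the form type:bus:device:endpoint. A small hypothetical parser for those leading fields (the sample line is illustrative, not taken from my capture) looks like this:

```python
# Hypothetical parser for the leading fields of a usbmon text line
# (format per Documentation/usb/usbmon.rst in the kernel tree).
def parse_usbmon_line(line):
    tag, timestamp, event, address = line.split()[:4]
    # Address word: <transfer type + direction>:<bus>:<device>:<endpoint>,
    # e.g. "Ci:1:034:0" = control-in, bus 1, device 34, endpoint 0.
    xfer, bus, device, endpoint = address.split(":")
    return {
        "tag": tag,
        "timestamp_us": int(timestamp),
        "event": event,          # S=submission, C=callback, E=error
        "transfer": xfer,
        "bus": int(bus),
        "device": int(device),
        "endpoint": int(endpoint),
    }

# Illustrative line of the shape found in a capture (not real data).
rec = parse_usbmon_line("ffff880019cf3d80 170472 S Ci:1:034:0 s 21 03 0000 0000 0006 6 <")
print(rec["transfer"], rec["device"])  # → Ci 34
```

Even without a full decoder, pulling out the device number and transfer type is enough to match lines against the dmesg output above.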

Not an issue…

One of the recurring entries in dmesg has been

[25457.004284] usb 1-4: usbfs: interface 0 claimed by btusb while 'python' sets config #1

Looking at this in more detail, it’s not an issue; it’s merely informing me that the discovery phase of the stm32_mem.py script attempted to claim ownership of my bluetooth device (which has a DFU endpoint). It would be nice if PyUSB had a way to check whether a device was already claimed, but I couldn’t find anything that allows such a check. Still, it removes one source of concern.

Results?

Everything I looked at showed that the script does exactly what I expected and that there is no interference from anything else on my system. Not a big surprise, and certainly what I expected, but it also illustrates how easy it can be to get the additional information I wanted. As the scripts haven’t changed, I can rule them out as the source of the problem, which implies that the bootloader code has an odd bug in it somewhere. Figuring out what it is and how to fix it will be much harder.

All the bootloader code I have looked at appears to be simple enough that there is only one way the manifestation stage can be triggered.

static int usbdfu_control_request(usbd_device *usbd_dev,
                                  struct usb_setup_data *req, u8 **buf,
                                  u16 *len,
                                  void (**complete)(usbd_device *usbd_dev, struct usb_setup_data *req))
{
    if ((req->bmRequestType & 0x7F) != 0x21)
        return 0; /* Only accept class request. */

    switch (req->bRequest) {
        case DFU_DNLOAD:
            if ((len == NULL) || (*len == 0)) {
                usbdfu_state = STATE_DFU_MANIFEST_SYNC;
                return 1;
            } else {
        ...

This is meant to be triggered by an empty download request, but somehow it is being triggered after a 2050 byte download. The function is a callback supplied to the libopencm3 USB setup code:

    usbd_register_control_callback(usbd_dev,
                                   USB_REQ_TYPE_CLASS | USB_REQ_TYPE_INTERFACE,
                                   USB_REQ_TYPE_TYPE | USB_REQ_TYPE_RECIPIENT,
                                   usbdfu_control_request);

The sequence leading to the crash is always the same.

DFU_DNLOAD DFU_ERASE command
DFU_DNLOAD 2050 bytes
DFU_GETSTATUS => STATE_DFU_DNBUSY
DFU_GETSTATUS => STATE_DFU_IDLE
...
DFU_DNLOAD DFU_ERASE command
DFU_DNLOAD 2050 bytes of data
DFU_GETSTATUS => STATE_DFU_DNBUSY
DFU_GETSTATUS => STATE_DFU_MANIFEST

In all my debugging I see the download transfer complete for the erase and then for the data, and after a few transfers (normally 3) working as expected, the manifestation stage is triggered. Answers on how to debug further are welcomed on postcards (or emails) 🙂
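To convince myself of that, I sketched the DNLOAD branch of the handler as a tiny Python model (the state names are illustrative; only the transition logic mirrors the C):

```python
# Tiny model of the DFU_DNLOAD branch of usbdfu_control_request above.
# State names are illustrative; only the transition logic mirrors the C.
STATE_DFU_IDLE = "IDLE"
STATE_DFU_DNLOAD_SYNC = "DNLOAD_SYNC"
STATE_DFU_MANIFEST_SYNC = "MANIFEST_SYNC"

class DfuModel:
    def __init__(self):
        self.state = STATE_DFU_IDLE

    def dnload(self, data):
        # Mirrors: if ((len == NULL) || (*len == 0)) -> MANIFEST_SYNC
        if not data:
            self.state = STATE_DFU_MANIFEST_SYNC
        else:
            self.state = STATE_DFU_DNLOAD_SYNC
        return self.state

dfu = DfuModel()
assert dfu.dnload(b"\x41" * 2050) == STATE_DFU_DNLOAD_SYNC  # data present
assert dfu.dnload(b"") == STATE_DFU_MANIFEST_SYNC           # empty download
```

If the model is faithful, the bootloader must somehow be seeing a zero-length (or NULL len) DNLOAD after the third data transfer, which at least narrows down where to look.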

Routing D’oh!

React-router is a great addition to React, but yesterday marked the first time I had used it for a project. It led to a period of head scratching, but maybe this post will help someone else avoid the same mistake I made!

Simple Setup

Installing it was simple enough.

npm install --save react-router

Having installed it I then added the import lines,

import {Router, Route, Link, browserHistory} from 'react-router';

and then added some routes to the root component.

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}>
        <Route path="about" component={about} />
        <Route path="blah" component={blah} />
    </Route>
  </Router>
), document.getElementById('test'))

I added a few links to the components using Link so I could test navigation, and all should have been good.

    <Link to="/about">About Page</Link>

This was intended to give me the following urls,

/
/about
/blah

After making the changes, a quick refresh of the page (webpack-dev-server really does make life easy) and sure enough I got the page with links. Clicking the links gave me a surprise though – the URL changed but the page didn’t.

Children?

After some head scratching and a bit of reading around, I discovered that the routing I had added wasn’t quite what I thought in terms of the components. In my mind’s eye I envisaged each component being rendered directly, but what I had actually added was a parent/child relationship.

URL     Parent  Child(ren)
/       home    none
/about  home    about
/blah   home    blah

Fixing the problem was as simple as changing the home component to render the children by adding

<div>
{this.props.children}
</div>

Nesting

This applies to every level of nesting. For example, rewriting the routing block to

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}>
        <Route path="about" component={home}>
            <Route path="about" component={about}/>
        </Route>
        <Route path="blah" component={blah} />
    </Route>
  </Router>
), document.getElementById('test'))

This works as expected when displaying /about/about, because the home component renders this.props.children. Using the about component at the intermediate level failed, as it lacks the this.props.children usage. The relationships look like this.

URL           Parent  Child  Child
/             home    none   none
/about        home    home   none
/about/about  home    home   about
/blah         home    blah   none

Original Intention

In order to get the behaviour I actually wanted I needed to change the routing block to this.

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}/>
    <Route path="about" component={about}/>
    <Route path="blah" component={blah} />
  </Router>
), document.getElementById('test'))

postgrest lessons learned

I’ve been spending some time recently getting to grips with postgrest by writing a small schema and figuring out how it all sits together with the help of a simple python client. The plan is to continue to develop it as a react/redux app once I have postgrest and the data figured out 🙂 The following are just some things I’ve learned that would have helped me 10 days ago and may help someone else.

Roles

I’ve ended up with 3 roles.

authenticator
This role is used as the “base” role for the database. It has sufficient access to change roles and nothing else. I use this in the postgres connection url and so it needs a password to allow connections to postgres to be made.
anon
This role caused the most confusion initially as I failed to grasp that all connections from postgrest will use this role UNLESS a valid jwt_token is presented that contains a different role. This means it needs, as a minimum, access to everything involved in the login/signup process. For this database there is nothing else visible to non-authenticated users, so it has only the login/signup permissions.
user
The user role is set in the jwt_token. Once presented all further access will be as this user so the permissions need to reflect that. Additionally there are some aspects of the user management that need to be accessed by this role, e.g. when the user changes their details.
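For reference, a jwt_token is just three base64url sections signed with HS256, which the Python standard library can produce. This is only a sketch with illustrative claim names and secret; in practice the login function mints the token server-side:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # base64url without padding, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(payload, secret):
    """Mint an HS256 JWT; the 'role' claim is what postgrest uses to
    pick the postgres role for the request."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

# Illustrative claims and secret; the role must match a database role.
token = make_jwt({"role": "user", "email": "me@example.com"}, b"secret")
print(token.count("."))  # → 2
```

Whatever mints the token, the key point is the role claim: present it and all further access is as that role.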

I’m not keen on having every user as their own role, so that means adding access control in the views and using their credentials with a single role. How this will work if I switch over to Row Level Security isn’t obvious to me yet.

One schema to rule them all

When starting postgrest there is the option to pass a schema; if none is supplied, the default of “public” is used. Anything outside that schema isn’t accessible from postgrest. This gives a lot of flexibility but also caused me some confusion initially when things weren’t available. If you need to span schemas then you need to write a view – which must be a member of the schema you have chosen. So far I’ve chosen to just use the default ‘public’ schema. Call me lazy 🙂

Functions need /rpc/, views don’t

This one was pretty obvious from the documentation, but taken together with the schema point above it caused me a lot of confusion. Essentially, if you have defined a function called ‘login’ and another called ‘signup’, they are available as /rpc/login and /rpc/signup. Views are simply available, so if you create a view ‘users’ it’s available as ‘/users’.
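As a memory aid, the URL shapes can be captured in a few lines of Python; this helper is purely illustrative and not part of postgrest:

```python
def postgrest_url(base, name, is_function=False, **filters):
    """Build a postgrest URL: functions live under /rpc/, while views
    and tables sit at the top level. Filters use name=op.value syntax."""
    path = f"{base}/rpc/{name}" if is_function else f"{base}/{name}"
    if filters:
        query = "&".join(f"{col}={expr}" for col, expr in sorted(filters.items()))
        path = f"{path}?{query}"
    return path

print(postgrest_url("http://localhost:3000", "login", is_function=True))
# → http://localhost:3000/rpc/login
print(postgrest_url("http://localhost:3000", "users", id="eq.1"))
# → http://localhost:3000/users?id=eq.1
```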

Direct access or not?

Simply creating tables in the chosen schema is enough to make them available. So, if you have a table called ‘colours’ it’s directly available via postgrest as ‘/colours’ (depending on the permissions, obviously). If, however, you created it in the ‘constants’ schema, it would need a view to access it. Which is preferable depends on the design and how much extra SQL you want to write.
I’m still trying to decide which route to go down. Having the “raw” tables hidden and only accessible via views provides a level of indirection that could well be useful, though it adds a lot of extra coding – the lack of which was part of the attraction of postgrest to start with!

jwt_claims

This type needs to be defined with the information required by the app. Once supplied in a request, the information is retrieved with current_setting('postgrest.claims.xxx') (where xxx is the member name).

Commands are available

In writing the sql files, the commands that can be used in the psql command line app are available! This revelation helped me streamline my debugging 🙂 Additionally I have split the sql into separate files and used ‘\i‘ to include them in a main file. This allows me to run it all as one or just the parts I need to update.

I need to improve my postgresql knowledge

As a friend said a while ago, “postgresql is a real database”. My knowledge of writing views, functions and the other bits needed to glue everything together is getting better, but the more you know the better.

gitter

Buried in one of the various documentation pages was a link to gitter.im/begriffs/postgrest. The room isn’t too busy but seems to have enough people who know about postgrest to offer valuable help. Some of the other rooms also seem to be helpful with low amounts of noise, and their app is pretty good.

Update?

Updates are done using the PATCH HTTP verb and normally return a 204 No Content response. The data is sent as JSON, and all the usual filtering arguments can be used to select the records to be updated.

PATCH /fruits?id=eq.1
Content-Length: 23
Content-Type: application/json
Accept: application/json

{"name": "Green Apple"}

If you want the full record, you need to add the Prefer header. This should return a 200 OK status code with the record.

PATCH /fruits?id=eq.1
Content-Length: 23
Content-Type: application/json
Accept: application/json
Prefer: return=representation

{"name": "Green Apple"}
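From a Python client this maps naturally onto urllib.request. The sketch below only constructs the request – the URL, table and field names come from the example above, but actually sending it needs a running postgrest instance:

```python
import json
import urllib.request

def build_patch(base, table, filters, changes, want_record=False):
    """Construct (but don't send) a postgrest PATCH request."""
    body = json.dumps(changes).encode()
    headers = {"Content-Type": "application/json", "Accept": "application/json"}
    if want_record:
        # Ask postgrest for the updated record (200 OK) instead of 204.
        headers["Prefer"] = "return=representation"
    url = f"{base}/{table}?{filters}"
    return urllib.request.Request(url, data=body, headers=headers, method="PATCH")

# Mirrors the PATCH /fruits?id=eq.1 example above.
req = build_patch("http://localhost:3000", "fruits", "id=eq.1",
                  {"name": "Green Apple"}, want_record=True)
print(req.get_method(), req.full_url)
# → PATCH http://localhost:3000/fruits?id=eq.1
```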

Command line babel?

Babel is a great project and a really useful npm module. I’ve used it in almost all of my webpack experiments, but recently I had a need to use it from the command line. Thankfully there is the babel-cli module. However, things weren’t as simple as some of the blog posts I found suggested.

Starting with a new project and npm the initial installation is the usual breeze.

$ npm init -y
$ npm install --save-dev babel-cli

According to the blog post I should now just be able to run babel-node – erm, no.

$ babel-node
babel-node: command not found

OK, so it’s Ubuntu and bash which means the PATH isn’t pointing to the correct file. That’s reasonable given I’ve just installed it, but what is the correct path? Given that npm installs things on a per project basis it’s likely something in node_modules? A quick search reveals that to be the case.

$ ls -l node_modules/.bin
total 0
...
lrwxrwxrwx 1 david david 30 Apr 16 13:31 babel-node -> ../babel-cli/bin/babel-node.js
...

To avoid this being a recurring issue I adjusted the PATH to include that directory (as a relative path).

$ export PATH=$PATH:node_modules/.bin

Now I could run babel-node, but using funky modern javascript syntax failed. I needed the es2015 preset.

$ npm install --save-dev babel-preset-es2015

Once installed I needed to tell babel to use it, which was as simple as

$ babel-node --presets es2015 ...

However, what are the chances of me remembering that every time I need it? There must be a better way to tell babel to always use the preset, and further reading revealed two options: add a .babelrc file, or add an entry in package.json. I chose package.json for no other reason than it was one less file to create and manage. The entry is pretty simple and exactly as you’d expect.

  "babel": {
    "presets": ["es2015"]
  }

Now running is as simple as

$ babel-node index.js

However…

As I started writing more javascript I ran into problems when I used the new ES7 spread operator, ‘…’. It is supported by babel, but requires a plugin to be installed and used. Finding and adding the plugin was simple enough,

$ npm install --save-dev babel-plugin-transform-object-rest-spread

I thought that I could add its use via the package.json file, but my attempts failed, so I added a .babelrc file with the appropriate sections and all was well.

$ cat .babelrc
{
    "presets": [
      "es2015"
    ],
    "plugins": [
      "transform-object-rest-spread"
    ]
}

Hopefully this will help someone else!

Webpack: sass & import, names

Having started moving to sass for my project and including the required bits in my webpack configuration (blog post), the next issue I ran into was that importing didn’t seem to work as expected.

Require not Import?

One of the additions I made to my webpack config was to add a resolve section, allowing me to use more convenient and simpler require lines in my javascript.

  resolve: {
    modulesDirectories: ['node_modules', 'components', 'css', 'fonts'],
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  },

This worked exactly as expected wherever I used a require statement, so I had expected it to transfer to import statements in css and sass files – but it didn’t. As it seemed such an obvious thing to do, I had another look at the README for sass-loader and found what I was looking for.

~, but not as you know it

For my testing I had created as simple a file as I could think of, test.scss.

@import '../components/_component.scss';

This very simple file just imports another file (which happens to be sass) belonging to a component in the ‘components’ directory. Nothing special, but why do I need the full import path? This was what I needed to get things working, but after looking at the sass-loader README again I realised that using ‘~’ would invoke the webpack resolve routines – which is what I was hoping for. A small change to the file,

@import '~_component.scss';

resulted in things working as I wanted.

NB the README cautions that ~ doesn’t behave as you may expect (if you’re a command line groupie): ~/ implies the home directory, which probably isn’t what you want.

Multiple Outputs?

Having decided that I don’t want css to be included in the javascript output, I added the ExtractText plugin which allowed me to bundle all css into a single css file. This is fine, but what if I wanted to have different css bundles? What if I wanted to have different javascript bundles? My current configuration didn’t seem to allow this.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ]

Thankfully, webpack has this covered. Instead of a single entry you can have several, each of which can be given a name. Additionally, I realised that an entry point doesn’t *need* to be a javascript file, as long as it’s a file that can be processed. So I changed the entry section to this.

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

Running webpack didn’t give me what I expected, as I also needed to change the output definition.

  output: {
    path: path.resolve(__dirname, 'html'),
    filename: '[name].js'
  },

Using [name] simply substitutes the name from the entry definition into the output filename, which offers additional possibilities. With the changes made, running webpack produces

html/
     bundle.js
     bundle.css
     test.js
     test.css

The test.js file is a little annoying and in an ideal world it wouldn’t be created, but so far I can’t find any way of preventing it from being created.

To control the output location further, changing the entry name is all that’s required for simple changes. Updating it to

  entry: {
    ...
    'css/test': [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

results in the files being created in html/css, ie

html/
     bundle.js
     bundle.css
     css/
         test.js
         test.css

NB when using a path the name needs to be in quotes.

Using this setup, component css is still included in the bundle.css and the only things appearing in test.css are those that I have specifically included in the entry file, which opens up a lot of possibilities for splitting things up. As I’m using bootstrap for the project one possibility is to use this to output a customised bootstrap file.

Hot Reload

At present hot reloading of css doesn’t seem to be working. I changed my configuration to this

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

which still provides hot reloading of the javascript, but the css files don’t seem to work. This seems to be a common issue, but as it’s not a serious one for me at present I’m not going to spend too much time looking for solutions. If anyone knows, then I’d love to hear from you.

sass

Continuing my delve into React, webpack and so on and after adding a bunch of css files, I decided it was time to join the 21st century and switch to one of the css preprocessors. LESS was all the rage a few years ago, but now sass seems to have the mindshare and so I’m going to head down that route.

Installing

Oddly enough, it installs via npm 🙂

npm install --save-dev node-sass
npm install --save-dev sass-loader

Webpack Config

The webpack config I’m using is as detailed in my post More Webpack, so the various examples I found online for adding sass support didn’t work as I was already using the ExtractTextPlugin to pull all my css into a single file. The solution turned out to be relatively simple and looks like this.

      {
        test: /\.scss$/,
        loader: ExtractTextPlugin.extract(['css', 'sass'])
      }

Additionally I needed to add the .scss extension to the list of those that can be resolved, so another wee tweak.

  resolve: {
    ...
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  }

Structure?

One reason for moving to SASS is to allow me to split the huge css files into more manageable chunks, but how to arrange this? Many posts on the matter pointed me to SMACSS, and I’m going to read through the free ebook (easily found via a web search) to see what inspiration I can glean. For each React component, though, I’d like to keep the styles alongside the component, as the bond between the JSX and the styling is very tight and changing one will probably require thinking about changes to the other. As per previous experiments, the component can then require the file and it will magically appear in the bundled, generated css file, regardless of whether I’ve written it in sass or plain css.

For the “alongside” files I’ll use the same filename with the leading underscore that tells sass not to output the file directly. With the webpack setup that isn’t a concern now, but getting into the habit is likely a good idea for the future 🙂 This means for a component in a file named App.js I’ll add _App.scss and add a line require('_App.scss'); after the rest of the requires.

Variables

I want to use a central variables file for the project, which I can then reference in the sass files, but haven’t quite decided where it should live just yet. Hopefully after reading the ebook and looking at the project a bit more it will make sense.

Now that sass handling is in place, it’s time to start pulling apart my monolithic plain css file and creating the smaller sass files.

Webpack Dev Server

After using webpack for a few days, the attractions of switching to the dev server are obvious.

The webpack-dev-server is a little node.js Express server, which uses the webpack-dev-middleware to serve a webpack bundle.

Install

Oddly enough, it needs installed via npm! However, as we’re going to run it from the command line, we’ll install it globally.

sudo npm install -g webpack-dev-server

Running

After installing, simply running the server (in the same directory as the webpack.config.js file) shows that it’s working and that the bundle is built and made available. The next step is to get it serving the HTML file we’ve been using. This proves to be as simple as

$ webpack-dev-server --content-base html/

Requesting the page from http://127.0.0.1:8080/ gives the expected response. Removing the bundled files produced by webpack directly from the html directory and refreshing the page proves the files are being loaded from the dev server. Nice.

Hot Loading

Of course, having the bundle served by webpack is only the start – next I want any changes I make to my React code to be reflected straight away – hot loading! This is possible, but requires another module to be loaded.

npm install --save-dev react-hot-loader

The next steps are to tell webpack where things should be served, which means adding a couple of lines to our entry in webpack.config.js.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080',
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ],

As I’m planning on running this from the command line, I’m not going to add a plugin line as some sites advise, but rather use the '--hot' command line switch. I may change in future, but at present this seems like a better plan.

The final step needed is to add the ‘react-hot’ loader, but this is where things hit a big snag. The existing entry for js(x) files looked like this.

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react', 'es2015']
        }
      },

Adding the loader seemed simple (remembering to change loader to loaders as there was more than one!).

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader'],
        query: {
          presets: ['react', 'es2015']
        }
      },
Error: Cannot define 'query' and multiple loaders in loaders list

Whoops. The solution came from reading various posts; eventually I settled on the one below. It works for my current versions of babel but may not for future ones. All changes below are applied to the webpack.config.js file.

Add the presets as a variable before the module.exports line.

var babelPresets = {presets: ['react', 'es2015']};

Change the loader definition to use the new variable and remove the existing definition.

      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader?'+JSON.stringify(babelPresets)],
      },
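A quick sanity check in plain node shows what that concatenation produces; the presets just ride along as a JSON query string on the loader name:

```javascript
// The loader string built in webpack.config.js, shown standalone.
var babelPresets = {presets: ['react', 'es2015']};
var loaderString = 'babel-loader?' + JSON.stringify(babelPresets);

console.log(loaderString);
// babel-loader?{"presets":["react","es2015"]}
```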

Now, when running webpack-dev-server --content-base html/ --hot everything is fine and the page is served as expected.

Editing one of the components triggers a rebuild of the bundle when saved, exactly as hoped.

All Change!

As I tried to get this working I discovered that react-hot-loader is being deprecated. Until that happens I'm happy with what I have, and the author promises a migration guide.

Running

To keep things simple and avoid the inevitable memory lapses (and subsequent head scratching about the lack of hot reloading), I've added a line to the package.json file. With this added I can now simply type npm run dev and the expected things will happen.

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack --progress",
    "dev": "webpack-dev-server --content-base html/ --hot"
  },

React Mixins

Having written a simple test app I’m continuing to use it to try and develop my React knowledge 🙂 One aspect that I did find a little annoying was the amount of code I seemed to repeat. I kept thinking that I should have a base class and simply inherit from it – a pattern I have used a lot in other languages, but this is React and nothing I had seen suggested that pattern.

Enter the Mixin

A React mixin is a simple idea: a block of code that is common to one or more components. After looking at them I found it was possible to extract a lot of common functionality, resulting in this code.

var AppointmentMixin = {
  componentDidMount: function() {
    this.props.updateInterval(this.props.id, this.totalTime());
  },  
  setAdditional: function(ev) {
    this.state.additional = parseInt(ev.target.value);
    this.props.updateInterval(this.props.id, this.totalTime());
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  finishTime: function() {
    return this.props.start.clone().add(this.totalTime(), "minutes");
  },
};    

There is nothing in the above code that is specific to a component, it’s all plain generic code. To use it in a component you need to add a ‘mixins’ line and remove the old code. This now gives me a component that looks like this.

var Hair = React.createClass({
  mixins: [ AppointmentMixin],
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Start: {this.props.start.format("HH:mm")}</p>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
        <p>Finish: {this.finishTime().format("HH:mm")}</p>
      </div>
    )
  }
});

This is much closer to what I wanted.
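Conceptually a mixin is just a bag of functions merged into the component spec, though React is cleverer than a plain merge: lifecycle methods such as componentDidMount from each mixin are chained rather than overwritten. A deliberately simplified sketch of the merging idea:

```javascript
// Simplified sketch of what a mixin does: merge shared methods into a
// spec object. (React itself chains lifecycle methods instead of
// overwriting them, so this is only the rough idea.)
var AppointmentMixin = {
  totalTime: function() {
    return this.state.duration + this.state.additional;
  }
};

var spec = {
  state: {duration: 90, additional: 15}
};

Object.assign(spec, AppointmentMixin);

console.log(spec.totalTime());  // 105
```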

Uh oh…

While looking around for information on mixins I came across this line repeatedly.

Unfortunately, we will not launch any mixin support for ES6 classes in React. That would defeat the purpose of only using idiomatic JavaScript concepts.

This looked as if support would be coming, but then I found this post and also this.

Higher Order Component – huh?

Looking at some posts about the concept helped me get a better understanding of what it's trying to do, so I decided to try changing my example to use it. Sadly it didn't work out, as I've been unable to get the higher order component solution working in a manner akin to a mixin. It's not so much a replacement as a totally different approach that requires things be done differently.

However, always keen to learn, I rewrote things and ended up with this.

function TimedAppointment(duration, title) {
  const Appointment = React.createClass({
    getInitialState: function() {
      return {duration: duration, 
              additional: 0,
              title: title}
    },
    componentDidMount() {
      this.props.updateInterval(this.props.id, this.totalTime());
    },  
    setAdditional(ev) {
      this.state.additional = parseInt(ev.target.value);
      this.props.updateInterval(this.props.id, this.totalTime());
    },
    totalTime() {
      return this.state.duration + this.state.additional;
    },
    finishTime() {
      return this.props.start.clone().add(this.totalTime(), "minutes");
    },
    render() {
      return (
        <div>
          <h3>{this.state.title}</h3>
          <p>Start: {this.props.start.format("HH:mm")}</p>
          <p>Duration: {this.state.duration} minutes</p>
          <p>Additional duration: <input type="number" step="1" ref="additional" 
                                         value={this.state.additional}
                                         onChange={this.setAdditional}/> minutes</p>
          <p>Total Time Required: {this.totalTime()} minutes</p>
          <p>Finish: {this.finishTime().format("HH:mm")}</p>
        </div>
      )
    }    
  });
  return Appointment;
};
var Hair = TimedAppointment(90, "Hair Appointment");
var Nails = TimedAppointment(30, "Manicure");

This is much neater and certainly gives me a single set of reusable code – no mixins required. It’s possibly far closer to where it should be and is still within my limited comfort zone of understandability.

If anyone cares to point out where I could have gone or what I could have done differently, please let me know 🙂

Summary

As time goes on I'm sure the newer versions of javascript will become more of a requirement, so mixins probably won't be much use going forward.

React Lessons Learned

I’ve been playing with React for a few days now and, as always, have run across a few things that made me scratch my head and seek help from various places. In an effort to capture some of the errors and mistaken assumptions I have made, I’ll try and show them in a very simple way below. As always, comments and suggestions for improvement are more than welcome.

A Simple Page

To try and show the issues I needed a simple app that I could write that followed what I was working on closely enough to be relevant but without a lot of the complexity that obscures the basic issues. After some thought I’ve come up with a very simple visit planner. It’s going to show a simple timeline of “visits” that can be added in any order and displayed in the same order. No attempt is made to save anything, it’s just data on a page that vanishes upon refresh.

Again, to keep it simple I’m not bothering with anything beyond a straight HTML page containing everything. As I’m not travelling while writing this I’ve used CDNJS for the libraries, but have gone with the non-minified versions to allow for greater debugging. I’m not sure it gets any simpler? I’ve not bothered with any CSS.

Starting Point

Without further fanfare, this is my starting point.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Visit Planner</title>
  </head>
  <body>
    <h1>Visit Planner</h1>
    <div id="content"></div>

    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react-dom.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.11.2/moment.min.js"></script>
    <script type="text/babel">
var Visit = React.createClass({
  getInitialState: function() {
    var start_times = [<option value="1" key="1">9 am</option>,
                       <option value="2" key="2">Midday</option>,
                       <option value="3" key="3">3 pm</option>];
    var today = moment();
    today.seconds(0).hour(9).minutes(0);
    return { start: today, start_times: start_times }
  },
  changeStartTime: function(ev) {
    switch(ev.target.value) {
      case "1": { this.setState({start: this.state.start.hour(9)}); break; }
      case "2": { this.setState({start: this.state.start.hour(12)}); break; }
      case "3": { this.setState({start: this.state.start.hour(15)}); break; }
    };
  },
  render: function() {
    return (
    <div>
      <p>
        <label>When will you be starting your visit?</label>
        <select onChange={this.changeStartTime}>
          {this.state.start_times}
        </select>
      </p>    
      <p>Visit starts at {this.state.start.format("HH:mm")}</p>
    </div>
    )
  }
});

ReactDOM.render(<Visit />, document.getElementById('content'));
    </script>
  </body>
</html>

There’s not much to say about this, it’s simple plain React. I’m using moment for the date/time and then simply ignoring the date portion as it provides nice functions for dealing with time intervals. As it stands it doesn’t do much 🙂

Visits

For the purposes of this, each visit will simply be an object that has a description (to differentiate it from the other objects), a duration and an optional additional duration (initially set to 0). I'm going to create these as separate classes (even though they will largely be identical) as it allows me to show things more clearly. In reality they wouldn't be done this way 🙂

My first pass at getting something basic looked like this.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: {this.state.additional} minutes</p>
      </div>
    )
  }
});

var Nails = React.createClass({
  getInitialState: function() {
    return {duration: 30, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Manicure</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: {this.state.additional} minutes</p>
      </div>
    )
  }
});

Again, nothing too fancy. The next step was to add the code to add them to the main Visit object. Obviously I would need a list of the objects, so I started by changing the initial state to

    return { start: today, start_times: start_times, appointments: [] }

Next, a couple of buttons to add appointments…

      <p>
        <button id="hair-btn" onClick={this.addAppointment}>Add Hair Appointment</button>
        <button id="nail-btn" onClick={this.addAppointment}>Add Manicure</button>
      </p>  

The addAppointment function is also pretty basic – or so I thought…

Try #1 to add Appointments

This was my initial attempt.

  addAppointment: function(ev) {
    var n = this.state.appointments.length;
    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push(<Hair key={n}/>); break; }
      case "nail-btn": { this.state.appointments.push(<Nails key={n}/>); break; }
    }
    this.forceUpdate();
  },

Adding a line to render these out,

      { this.state.appointments }

Gives us what appears to be a working page. Click a button and an appointment appears. All looks good, so let's continue and add some other functionality.

Additional Time?

Part of the idea of each appointment is to allow each to have a certain amount of time added, so let's add that by changing the plain output to an input and adding a total time output in render.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  setAdditional: function(ev) {
    this.setState({additional: parseInt(ev.target.value)});
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
      </div>
    )
  }
});

Having done that, it's now possible to add multiple objects and give each their own additional time; each is a self-contained unit, exactly as we'd expect. So how do we now create the timeline aspect of the main Visit object?

Timeline

The timeline is simple enough to imagine: we know the start time and how long each appointment takes, so we need to go through them and figure out a start time for each (based on either the visit start or the previous appointment) and then get a finish time. Changing any appointment should change all those after it, and changing the overall start time should change them all. As I want each object to be as self-contained as possible, perhaps passing in the start time via the props makes sense? Changing the Hair object to allow this gave me the code below.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  setAdditional: function(ev) {
    this.setState({additional: parseInt(ev.target.value)});
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  finishTime: function() {
    return this.props.start.clone().add(this.totalTime(), "minutes");
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Start: {this.props.start.format("HH:mm")}</p>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
        <p>Finish: {this.finishTime().format("HH:mm")}</p>
      </div>
    )
  }
});

Of course, unless I supply the start time it won’t do anything…

    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push(<Hair start={this.state.start} key={n}/>); break; }
      case "nail-btn": { this.state.appointments.push(<Nails start={this.state.start} key={n}/>); break; }
    }

Running this looks good to start with, but there's a problem: changing the start time doesn't change the appointment. Changing the additional time (which forces an update) then updates the appointment and shows the correct time. Also, at present every appointment uses the same start time, so that needs to be fixed 🙂

Realisations

After playing with a few different things and much looking around at websites it became obvious that while React components are self-contained objects, they can't be used in the same way that I could use objects. So, time for a rethink.

React uses keys to determine whether an object is the same as one already rendered, so as long as the key isn't changed objects will persist between renderings. Each time it's rendered I can change the props I use, so when I initially add an appointment I don't need to create the component, just record enough details for it to be added during the render, probably with varying props.

Each appointment knows how long it needs to be (the fixed plus variable duration), but the main Visit object also needs to know. Time for the appointment to call the parent when something changes, so we need a callback.

First step, change the appointments to handle these changes. We need to trigger the callback in two instances: when the appointment is initially created (using the componentDidMount hook) and when the additional duration value is changed (via an event handler).

  componentDidMount: function() {
    this.props.updateInterval(this.props.id, this.totalTime());
  },  
  setAdditional: function(ev) {
    this.state.additional = parseInt(ev.target.value);
    this.props.updateInterval(this.props.id, this.totalTime());
  },

I found that using this.setState(…) in setAdditional always resulted in the value being the previous one, I'm guessing because the update hadn't yet been applied. This led to the callback not always being called with the latest value and some very odd results initially, hence my switch to simply setting the value directly and relying on the update triggered by the parent (when receiving the callback) to update things.

        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>

With these in place, the next step is to modify the main visit class. As we're adding in strict order and not interested in a dynamic ordering ability, we will just use the length of the list as the 'id' for each appointment. We'll also store the duration of each appointment in the data, ending up with this code.

  addAppointment: function(ev) {
    var n = this.state.appointments.length;
    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push({what: 'hair', key: n, interval: 0}); break; }
      case "nail-btn": { this.state.appointments.push({what: 'nails', key: n, interval: 0}); break; }
    }
    this.forceUpdate();
  },

Now that we have the data being stored, next step is to add the callback that will update the intervals. This is a simple function that takes the index and the new interval duration, as shown below.

  updateInterval: function(idx, interval) {
    this.state.appointments[idx].interval = interval;
    this.forceUpdate();
  },

Finally we need to render the appointments with the correct start times.

  render: function() {
    var _appts = [];
    var _start = this.state.start.clone();
    this.state.appointments.map(function(app, idx) {
      var _props = {key: app.key, start: _start, id: app.key, updateInterval: this.updateInterval};
      switch(app.what) {
        case "hair": { _appts.push(<Hair {..._props}/>); break; }
        case "nails": { _appts.push(<Nails {..._props}/>); break; }
      };
      _start = _start.clone().add(app.interval, "minutes");
    }.bind(this));
    return (
      ...

We also need to render the _appts list, so one final change.

      <p>Visit starts at {this.state.start.format("HH:mm")}</p>
      { _appts }

Summary

So it all works and the way it works is surprisingly flexible, if a little different than where I started 🙂 I’m sure there are better ways of doing all this, so get in touch if you can educate me 🙂

More Webpack

Following on from yesterday's experiments with webpack and react, I've changed things around a little today in an effort to make things even simpler.

webpack.ProvidePlugin

One of the annoying aspects of splitting the single file into smaller pieces was the need to add the require lines to every one. I thought there must be a simpler way – and there is! The ProvidePlugin allows you to tell webpack that certain requires should be added as needed. To use it a few changes are needed in the webpack configuration.

First you’ll need to make sure that webpack is available, so add the require at the top of the file.

var webpack = require('webpack');

Then add a plugins section with the plugin and its configuration.

  plugins: [
    new webpack.ProvidePlugin({
      React: "react",
      ReactDOM: "react-dom"
    }),
  ],

Following this change, the javascript files can be simplified by removing the require lines for react or react-dom, so /components/Main.js becomes just

module.exports = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});

This proved very useful for the project, as a few components use jquery; once the plugin was configured with the appropriate line ($: "jquery"), remembering which components needed a require line was no longer an issue.
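For reference, the jquery entry just sits alongside the others in the same plugins section, something like this (the exact shape here is my reconstruction rather than the project's config):

```javascript
  plugins: [
    new webpack.ProvidePlugin({
      React: "react",
      ReactDOM: "react-dom",
      // any module that references $ gets require('jquery') added for it
      $: "jquery"
    }),
  ],
```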

Source Maps

It always seems like a good idea to create a .map file, so it’s as simple as adding a line telling webpack to do just that.

devtool: "source-map",

CSS

Handling CSS is something that never seems as simple as it should be, but initially webpack seems to offer a solution. Two new loaders are needed, so install them via npm.

npm install --save-dev style-loader css-loader

Then we need to tell webpack that we can use them for css files, by adding a section to the loaders.

  module: {
    loaders: [
      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react']
        }
      },
      {
        test: /.+.css$/,
        loader: 'style-loader!css-loader'
      }
    ]
  }

As we’re using the resolve to keep require usage simple, we should update that as well (I’ve added the css files in an inventively named css directory),

  resolve: {
    modulesDirectories: ['node_modules', 'components', 'css'],
    extensions: ['', '.js', '.jsx', '.css']
  },

After making these changes, we need to actually add a require for some css. This was done in App.js as follows,

require('style.css');

However, running webpack produced a small surprise and a touch of confusion.

$ webpack
Hash: f4a6a633c9be19bddd79
Version: webpack 1.12.13
Time: 1717ms
        Asset    Size  Chunks             Chunk Names
    bundle.js  689 kB       0  [emitted]  main
bundle.js.map  808 kB       0  [emitted]  main
    + 164 hidden modules

Where was the CSS? The answer is simple but, to my mind, not obvious: it's merged into bundle.js, and if you open that in an editor you'll find it there. However, I want the css in a separate file for a number of reasons, so this solution only partly works. The answer turns out to be another plugin.

npm install --save-dev extract-text-webpack-plugin

Once installed there are quite a few changes required to use it. First off, we need to extract the css rather than let it be bundled with the javascript. To do this we need to change the css loader.

      {
        test: /.+.css$/,
        loader: ExtractTextPlugin.extract('style-loader', 'css-loader')
      }

Before we can use the ExtractTextPlugin we need to require it, so this line needs to be added at the top of the file.

var ExtractTextPlugin = require("extract-text-webpack-plugin");

Finally we also need to output our extracted css, so we need to add this entry to the plugins.

new ExtractTextPlugin("bundle.css")

Running webpack now gives the expected results.

$ webpack
Hash: 55234e19dea7f3c8f04f
Version: webpack 1.12.13
Time: 1757ms
         Asset       Size  Chunks             Chunk Names
     bundle.js     679 kB       0  [emitted]  main
    bundle.css  107 bytes       0  [emitted]  main
 bundle.js.map     794 kB       0  [emitted]  main
bundle.css.map   87 bytes       0  [emitted]  main
    + 164 hidden modules
Child extract-text-webpack-plugin:
        + 2 hidden modules

With this approach I can simply add a require line into any component javascript file for required css and it will be automatically bundled. This works well but the ordering of things being added isn’t always as I’d wish it, so more care will need to be taken with how I write the css. This approach also opens up the prospect of moving to a LESS/SASS based approach as the css can be processed before being added to the bundle.
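As a sketch of what the SASS route might look like (untested here, and assuming sass-loader and node-sass have been installed via npm), an scss rule would chain the sass compiler in before the css handling, using the same ExtractTextPlugin so the compiled output still lands in bundle.css:

```javascript
// Hypothetical webpack 1 rule for .scss files; requires
// npm install --save-dev sass-loader node-sass
      {
        test: /.+.scss$/,
        loader: ExtractTextPlugin.extract('style-loader', 'css-loader!sass-loader')
      }
```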

I’m reasonably sure I don’t need a map file for the css, but I haven’t found any simple solutions yet. Answers on a postcard.

Starting with React & Webpack

In recent days I’ve been working on a small single page JS app using React and bootstrap. After getting over my initial issues yesterday saw a lot of progress and I’m finding it far simpler to work with React than almost any other JS “thing” I’ve tried. However, (you knew it was coming didn’t you?) at present my single page app is just that – a monster of a single HTML file with a bunch of required files on my hard drive. While it’s allowed me to make good progress, going forward it’s not what I want.

Today it was time to dive back into the murky world of npm and webpack and start splitting the monster into lots of smaller pieces. After finding a lot of online tutorials that helped, I'm scribbling this to capture what I learned today and hopefully stop my future self from making the same mistakes! If it helps anyone else then that's a nice bonus.

There are probably many better ways to do this, but I’ve yet to find them, so if you have suggestions or comments then pass them on as I’m keen to learn and improve my understanding.

Starting Point

For this I’m going to start with a really simple HTML page in a newly created directory containing just the files I need.

The HTML file looks like this.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Yet Another Hello World...</h1>
    <div id="example">
    </div>

    <script src="react.min.js"></script>
    <script src="react-dom.min.js"></script>
    <script src="browser.min.js"></script>
    <script type="text/babel">
var Main = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});
ReactDOM.render(<Main />, document.getElementById('example'));
    </script>
  </body>
</html>

This works and produces the expected results, so now to make something simple very much more complicated 🙂

NPM

I’ll be using npm, so the next step is to get everything installed in preparation. I’ll skip the pain of figuring out what I missed first time around (and second, third etc) and just list everything that will be needed. I think the logic for the –save and –save-dev is right (–save if it’s needed for the final output, –save-dev if it’s only for the background development), but if it’s not I’m sure someone will yell.

npm init -y
npm install --save-dev webpack
npm install --save-dev babel
npm install --save-dev babel-core
npm install --save-dev babel-loader
npm install --save-dev babel-preset-react
npm install --save-dev babel-preset-es2015
npm install --save react
npm install --save react-dom

As I’m not going to be putting this into git and it’s only for my own use, I added

  "private": true

which stopped the constant warning about no repository being listed.

Structure

This is the area I am still experimenting with. There are loads of boilerplate projects in existence, but while they all generally agree on principles, the way they arrange things is often different. My small brain can't handle too much complexity and things can always be changed later, so I'm going to start simple as this is a simple project.

/package.json
/webpack.config.js
/components
           /App.js
           /Main.js
/html
     /index.html

The plan is that all the react components go into (guess where?) components and the HTML file into html. I'll use webpack to generate a single js file, also into html, so that I could distribute the entire directory. As I want to keep components as components, I'm also going to create an App.js file in components that will include the top level React element.

NB I haven’t decided if I’m going to use .jsx for JSX formatted files. It seems like a good idea but it’s another change 🙂 For now I’ll try and make sure things will work whichever extension I use, but here it’ll be plain ‘ol .js

/components/Main.js

var React = require('react');

module.exports = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});

/components/App.js

var React = require('react');
var ReactDOM = require('react-dom');
var Main = require('./Main.js');

ReactDOM.render(<Main />, document.getElementById('example'));

/html/index.html

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Yet Another Hello World...</h1>
    <div id="example">
    </div>

    <script src="bundle.js"></script>
  </body>
</html>

Having split all the files, I now need to configure webpack to build the bundle.js file.

Webpack

Webpack expects to find a file called webpack.config.js in the directory you call it from, so we’ll create one. We need to tell it which file is the “entry” where it will start building the required file from and where we want it to create the output. If that’s all you supply there will be an error,

Module parse failed: /components/App.js Line 5: Unexpected token < You may need an appropriate loader to handle this file type.

The module configuration lets you tell webpack how to handle the various files, so we need to add an entry for ours: a module loader that tests for the appropriate files and uses babel-loader. To handle the JSX formatting you also need to tell it to use the react preset.

var path = require('path');

module.exports = {
  entry: path.resolve(__dirname, "components/App.js"),
  output: {
    path: path.resolve(__dirname, "html"),
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react']
        }
      }
    ]
  }
};

With this file in place, you should now be able to generate the required bundle.js file by running webpack.

Hash: 1a44318bada57501b499
Version: webpack 1.12.13
Time: 1484ms
    Asset    Size  Chunks             Chunk Names
bundle.js  677 kB       0  [emitted]  main
    + 160 hidden modules

After webpack runs OK, the file html/index.html can be loaded in a browser and looks exactly as the original file 🙂 Success.

Improvements…

While using ‘./Main.js’ for the require is OK, it seems a little messy, so it would be easier to be able to use ‘Main’. To allow this we need to tell webpack a bit more about where to find files, which we do using the resolve entry.

  resolve: {
    modulesDirectories: ['node_modules', 'components'],
    extensions: ['', '.js', '.jsx']
  },

Following this addition, we can change App.js to use

var Main = require('Main');

Using the expected config file name for webpack is fine, but if it's not in the root directory you need to use the --config flag. I had this in one of my iterations and did forget it a few times, which is more likely with code you don't look at for a while. To avoid forgetting, it's possible to add a build command to package.json telling it to use webpack; this can also be used to provide commonly used flags.

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },

becomes

  "scripts": {
    "build": "webpack --progress --colors",
    "test": "echo \"Error: no test specified\" && exit 1"
  },

To build using the command you need to use npm run build.

Next Steps

Now that I have a structure and a working webpack I can start to split up my monster file. I also need to figure out how to handle CSS and then look at adding some extras to webpack to get minification of the output. Figuring out how to strip the CSS to the bare essentials would also be nice, but that's an optimisation for the future 🙂
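For the minification, webpack ships an UglifyJS plugin, so my first attempt will probably be just another plugins entry (untested, and the warnings option is only there to quieten the output):

```javascript
// Hypothetical addition to webpack.config.js for a minified bundle.
var webpack = require('webpack');

module.exports = {
  // ...existing entry/output/module config...
  plugins: [
    new webpack.optimize.UglifyJsPlugin({
      compress: {warnings: false}  // suppress the usual flood of warnings
    })
  ]
};
```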

Building postgrest

I’ve long thought that a simple REST layer sitting on top of a database, with some suitable access controls would be an ideal solution for many of the small projects I find myself tinkering with. Until recently I’d never quite found a solution that provided this, but then I came across postgrest.

Having some time to spend looking at it, and the base of an idea that might be ideally suited to using it, I decided to install it. Rather than install the binary, I cloned the repository so that I had access to the source. However, it’s written in Haskell, a language I had no experience with. So, how do I build it?

1. Install Haskell

$ sudo apt-get install haskell-platform

NB As this uses PostgreSQL you also need its development libraries installed.
$ sudo apt-get install libpq-dev

2. Build/setup the project?

Initially it wasn’t clear, but after some web searches I found the documentation for the cabal build tool, which explained the standard method and led me to do this.

$ cabal sandbox init
$ cabal install -j

This took a while as the various packages were downloaded and installed.

3. Run postgrest

$ .cabal-sandbox/bin/postgrest
Usage: postgrest (-d|--db-name NAME) [-P|--db-port PORT] (-U|--db-user ROLE)
[--db-pass PASS] [--db-host HOST] [-p|--port PORT]
(-a|--anonymous ROLE) [-s|--secure] [--db-pool COUNT]
[--v1schema NAME] [--jwt-secret SECRET]
PostgREST 0.2.10.1 / create a REST API to an existing Postgres database

Available options:
-h,--help Show this help text
-d,--db-name NAME name of database
-P,--db-port PORT postgres server port (default: 5432)
-U,--db-user ROLE postgres authenticator role
--db-pass PASS password for authenticator role
--db-host HOST postgres server hostname (default: "localhost")
-p,--port PORT port number on which to run HTTP
server (default: 3000)
-a,--anonymous ROLE postgres role to use for non-authenticated requests
-s,--secure Redirect all requests to HTTPS
--db-pool COUNT Max connections in database pool (default: 10)
--v1schema NAME Schema to use for nonspecified version (or explicit
v1) (default: "1")
--jwt-secret SECRET Secret used to encrypt and decrypt JWT
tokens) (default: "secret")

Now I have it built, time to start playing around with using it 🙂
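Based purely on the usage text above, a first run against a local database might look something like this. The database name, user and anonymous role here are placeholders for illustration, not values from my setup.

```shell
# flags taken from the usage output above; mydb, postgres and anon
# are hypothetical values for illustration
.cabal-sandbox/bin/postgrest -d mydb -U postgres -a anon -p 3000
```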

afpfs-ng

At home we use atalk for the shares on the Home Media Server, which makes things simpler as there are Apple computers around. The client I use for Ubuntu is afpfs-ng by Simon Vetter. Having built it a few times (old school, eh?!) I find myself having to relearn the package dependencies, so this post is intended to fix that.

The box already has the build-essential package installed.

Before running configure for afpfs-ng I needed to install libfuse and its development files.

sudo apt-get install libfuse-dev

Also, configure looked for the gcrypt and gmp libraries, so I installed them

sudo apt-get install libgcrypt20-dev libgmp-dev
./configure

During the initial make, I found that the readline libraries and development files were needed.

sudo apt-get install libreadline-dev

Then the ncurses library couldn’t be linked, which was fixed by installing the development libraries.

sudo apt-get install libncurses5-dev

Once built, I ran

sudo make install

which installed the libraries into /usr/local/lib. As this path wasn’t already listed for shared libraries, I had to add a file listing the directory in /etc/ld.so.conf.d/ and then run

sudo ldconfig
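Putting that last step together, registering the new library path looks something like this. The file name afpfs-ng.conf is just my choice; any name under /etc/ld.so.conf.d/ works.

```shell
# tell the dynamic linker about /usr/local/lib, then refresh its cache
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/afpfs-ng.conf
sudo ldconfig
```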

Hopefully this will help me next time I need to build the apps and perhaps even help others!

Broccoli

It’s fair to say that when it comes to the modern world of javascript, I’m something of a luddite. It’s not a world I’ve spent a lot of time with, and while looking at options to start projects much of what I read may as well be double dutch. However, I have spent some time and EmberJS is slowly becoming more familiar and useful. So, now that I’m writing apps, the next step in my learning curve is deploying them. Having read about a few of the tools that are currently in use (this week at least) I chose to try Broccoli. In keeping with my “one step at a time” philosophy I elected to start simple 🙂

What follows is what I did after looking at various tutorials, but is largely based on a blog post by Tim Eagan.

The first step was making sure it was installed.

npm install --save-dev broccoli
sudo npm install --global broccoli-cli

Of course this just gets you the tool, so now I needed some plugins to help it do useful stuff. To see what’s available, I looked at http://broccoliplugins.com/. Initially I installed what seemed like the basics.

npm install --save-dev broccoli-merge-trees
npm install --save-dev broccoli-uglify-js
npm install --save-dev broccoli-static-compiler
npm install --save-dev broccoli-concat

The broccoli-sass plugin failed to install for me.

Writing the Brocfile.js was the next step. This is just a javascript file and there were many examples to look at to get started. This was my first attempt.

var concatenate = require('broccoli-concat'),
    mergeTrees = require('broccoli-merge-trees'),
    pickFiles = require('broccoli-static-compiler'),
    uglifyJs = require('broccoli-uglify-js'),
    app = '.',
    appHtml,
    appJs;

appHtml = pickFiles(app, {
  srcDir : '/',
  files : ['index.html'],
  destDir : '/production'
});

appJs = concatenate(app, {
  inputFiles : ['**/*.js'],
  outputFile : '/production/app.js'
});
appJs = uglifyJs(appJs, {
  compress: true
});

module.exports = mergeTrees([appHtml, appJs], {overwrite: true});

After creating the file in the root of my project, I was able to simply run it.

broccoli build 'public'

I now had 2 files, public/production/index.html and public/production/app.js. Tim’s example used the sass plugin to generate css files, but as I wasn’t using that, some modifications were needed to include the css files I was using.

appHtml = pickFiles(app, {
  srcDir : '/',
  files : ['index.html', 'css/style.css'],
  destDir : '/production'
});

However, after making the changes and running the command again, it failed as the public directory already existed! Sadly, there is presently no option to force an overwrite, so I had to manually remove the existing directory and will need to do so each time (a small shell script will simplify this!). This is a little annoying, but not too much effort.
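The small shell script mentioned above might be as simple as this sketch, assuming the build target directory is public:

```shell
#!/bin/sh
# remove the previous build output so broccoli can recreate it
rm -rf public
broccoli build public
```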

After looking at the plugins available, I installed broccoli-manifest to simplify production of the appcache file, which works very well and automates another job for me.

I also installed the broccoli-uncss plugin to eliminate unused css.

UPDATE

There is a plugin that cures this problem, clear-broccoli-build-target. Installing it and adding it to Brocfile.js has fixed the existing directory problem.

Shapefiles

I recently had a request for a shapefile containing geographic information for some of the stations listed on Variable Pitch. I’d been meaning to create a simple export facility for a while, but had never really looked at shapefiles in any detail before. As it took me a few attempts to get them working correctly, the following may help others!

I’m using the pyshp python module.

import shapefile

Essentially I’m creating a shapefile with a list of points. Each point will have a series of data fields associated with it. At the present time I’ve not specified types or other details for the fields, but that may be something I do in the future.

writer = shapefile.Writer(shapefile.POINT)
writer.autoBalance = 1
writer.field('Station Name')
writer.field('Technology')
writer.field('Variable Pitch ID')

Now that the writer object is prepared, I loop through the stations I need to add and write the data. This is done in two steps, the first adding the point and the second the data associated with it. As the station names are stored in Unicode and the shapefile expects ascii, I have forced a simple conversion to ascii (ignoring any characters that cannot be converted).

for s in stations:
    # pyshp expects x (longitude) first; loc is the station's stored location
    writer.point(loc[0].lon, loc[0].lat)
    # one value per field declared above
    writer.record(s.name.encode('ascii', 'ignore'), s.category.desc, "%d" % s.id)
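As an illustration of that conversion, the ‘ignore’ error handler silently drops any character that has no ascii equivalent. The station name below is made up for the example.

```python
# a hypothetical station name containing an en-dash (U+2013)
name = u'Mynydd Gorddu Wind Farm \u2013 Phase 2'

# 'ignore' drops the en-dash rather than raising UnicodeEncodeError
ascii_name = name.encode('ascii', 'ignore')
print(ascii_name)
```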

At this point I have all the data in the writer object, but now need to create the 3 or 4 files that comprise a shapefile. To keep things simple I elected to create a single zipfile containing all the generated files which can then be downloaded. All generation is done in memory.

s = cStringIO.StringIO()
zf = zipfile.ZipFile(s, "w")

fobj = cStringIO.StringIO()
writer.saveShp(fobj)
zf.writestr('%s/shapefile.shp' % filename, fobj.getvalue())
fobj.close()

fobj = cStringIO.StringIO()
writer.saveShx(fobj)
zf.writestr('%s/shapefile.shx' % filename, fobj.getvalue())
fobj.close()

fobj = cStringIO.StringIO()
writer.saveDbf(fobj)
zf.writestr('%s/shapefile.dbf' % filename, fobj.getvalue())
fobj.close()

# Must close zip for all contents to be written
zf.close()

Update

My initial code simply used the lat/lon co-ordinates I already had stored for the stations, but when the file was downloaded by the person requesting it, they complained that they had expected the points to be in UK Grid Easting/Northing format. This pointed me to look at the .prj files that are often included in a shapefile and give details of the co-ordinate system used. As the format is simple formatted text (WKT), I found the appropriate definitions for the co-ordinate systems I support and insert the relevant text into shapefile.prj, which is included with the download.

The .prj file for UK Grid Easting/Northing.
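Adding the projection file to the zip follows the same pattern as the .shp/.shx/.dbf files. Here is a sketch using Python 3’s io module and a truncated placeholder rather than the real WKT definition:

```python
import io
import zipfile

# placeholder text; the real WKT for British National Grid comes from
# the .prj sources mentioned above
wkt = 'PROJCS["OSGB_1936_British_National_Grid", ...]'

buf = io.BytesIO()
zf = zipfile.ZipFile(buf, 'w')
# the .prj sits alongside the other shapefile components in the archive
zf.writestr('shapefile/shapefile.prj', wkt)
zf.close()
```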

You can find the current version of this code being used on the Variable Pitch site.

QGIS on Ubuntu 13.10

I’ve been asked about producing shapefiles from the geo data on Variable Pitch. This seems like a good idea, but having no experience with such files I thought maybe I should have an app to test them with. I was pointed at QGIS, but it needed to be added to the sources list for Ubuntu 13.10. This is how I did it.

I created a file /etc/apt/sources.list.d/qgis.list with the contents

deb http://qgis.org/debian saucy main
deb-src http://qgis.org/debian saucy main

Then I imported the PGP key as the sources are signed.

~$ gpg --keyserver keyserver.ubuntu.com --recv 47765B75
gpg: requesting key 47765B75 from hkp server keyserver.ubuntu.com
gpg: key 47765B75: public key "Quantum GIS Archive Automatic Signing Key (2013) <qgis-developer@lists.osgeo.org>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
~$ gpg --export --armor 47765B75 | sudo apt-key add -
OK

This was all that was needed and so installation was then as simple as

~$ sudo apt-get update
~$ sudo apt-get install qgis python-qgis

There are a large number of required packages!

Android Development Environment on Ubuntu 12.10

Like many people, I’m using the 64-bit version of Ubuntu 12.10. This means that when you install the various pieces you need for Android development, the tools found in the platform-tools directory won’t run. They give the simple error message

bash: android-sdk/platform-tools/adb: No such file or directory

The solution is simple enough, but it took me a few minutes to find, so maybe this post will save someone else that time.

The issue was that the apps are 32-bit and my install is 64-bit, so you need to install the 32-bit versions of the system libraries to make them happy.

sudo apt-get install ia32-libs

Et voila! The apps now run.

I’d suggest doing this before installing the Eclipse ADT as it will check whether the apps can be run and, if they can’t, will produce some very strange and unhelpful errors when you try to look at an Android project.

I did find a web page describing installing the apps via a PPA which claimed to avoid the need for the 32-bit libs, but when I tried it I couldn’t find any way of telling Eclipse where to find the apps installed via the PPA, so I was no further forward. If anyone knows the magic incantations to make this work then let me know, as I’d prefer not to need the 32-bit libs.

Additionally, when trying to run the emulator I saw the following errors

[2013-01-06 17:58:48 - Emulator] Failed to load libGL.so
[2013-01-06 17:58:48 - Emulator] error libGL.so: cannot open shared object file: No such file or directory
[2013-01-06 17:58:48 - Emulator] Failed to load libGL.so
[2013-01-06 17:58:48 - Emulator] error libGL.so: cannot open shared object file: No such file or directory

The solution was provided by this blog post.

Mainstream Fail

Making changes to a large project isn’t an easy decision, and for a company like Google I’d imagine that such decisions are debated extensively, so when such changes are made I assume they bring genuine benefit. Having recently started using a Nexus 10 with Android 4.2.1, I was stunned to find that, when connecting it to the computer, I couldn’t simply transfer documents to it. A little googling around soon showed that the USB connectivity I’m used to in Cyanogen 9 is not available. It’s been replaced by the Media Transfer Protocol.

As MTP is a Microsoft-developed protocol, connecting the tablet to Windows 7 works flawlessly and exactly as expected.

My primary desktop environment is Linux, and here the situation is less positive. Despite a lot of googling around I still can’t find a solution that works reliably. I have a script that will connect it, but that shouldn’t be needed. Simply connecting the device should enable me to view the contents and access the storage.

There are several projects and many websites that claim to show you how to do this, but having tried many of them I still can’t get it to work. Additionally, most of the guides require editing system configuration files! Last I checked it’s 2013 and yet to enable basic functionality I have to make changes like it’s 2008?

I can understand that Google made the change with the best of intentions, but I can’t help but feel disappointed that they didn’t expend some time and energy making sure that Linux (upon which Android depends) had full support.

The First Casualty

“In war, truth is the first casualty.”

Aeschylus Greek tragic dramatist (525 BC – 456 BC)

You don’t have to spend long looking at information about wind farm developments before the truth of the above statement becomes apparent. Whichever side of the argument people are on they make claims and counter claims that are difficult to verify. In such an atmosphere the smallest fact is repeated often enough that it attains religious status, resisting any attempt to validate it. Sadly this does nothing to further the debate.

When Ecotricity proposed building four 120m turbines 700m from our house my interest in the subject grew. I found myself wanting to look at the facts and figures behind the attention grabbing claims and counter claims. I kept being told how energy efficient the turbines were and that they could produce a huge proportion of the electricity required. Both claims “felt” false, and yet finding the hard figures to try and confirm or deny my feelings was hard.

Having finished some other projects I decided to spend some time trying to get the information in an effort to answer my questions. As I believe data should be free, I decided to write a small website that could be easily updated with the latest information on a regular basis so that my brief period of work would yield continuing results. So far so good, but where to get the data?

My first thought was to find a list of all the windfarms in Scotland. I was sure I’d easily be able to find such a list and that it would give me a useful starting point. Sadly I was mistaken. While I was able to find lists of windfarms, none had enough information or was complete. I had already found the Ofgem reports of output for the stations, but I failed to find a list that could be easily cross-referenced to that data. I eventually decided to use the Ofgem data for a few reasons

  • it’s the official data that official statistics should be based on
  • it covers all the wind farms that are claiming subsidies
  • it’s updated on a frequent basis

After cobbling together some scripts in Python I was able to get and parse the Ofgem reports for certificates issued and station information. These I then used to populate a database model I built in Django and suddenly I had the basics of the information and website I wanted.

I also wanted to put the stations on a map, so using the address information within the Ofgem data was a good starting point, but the data has a lot of inconsistencies and errors, so wasn’t perfect. The Ofgem data also doesn’t have information on the number of turbines installed, something which I thought would be useful to know. By searching the web the information is available and so I’ve started filling it in wherever I can find it, together with location information.

There are places where I would like to make the Ofgem data I have copied more consistent, but I haven’t. The data from the Ofgem reports is simply used as is – if there is an obvious mistake in their data it will be replicated on the website. (The one caveat is the address information which I have tidied up where needed.) The data is simply parsed and stored in a database model that allows it to be displayed on the site. I have added additional information, but have documented where that data comes from. All sources are free, available to anyone and documented on the website.

While my primary interest was the output from wind farms, that output is also their income so I have started trying to find sensible figures to allow the website to display representative figures for the income each station generates. Much of this money is paid for by electricity users and so should be easily available, but sadly the various companies make finding the required information awkward. Presently I have found a source for sensible average prices for Renewable Obligation Certificates and show that information on the site. I’d also like to show how much income the sales of electricity earned but I haven’t found a sensible average wholesale electricity price. Nor have I managed to get information about the issuance of LEC for stations as it is considered confidential.

The site is still a work in progress and as I find time and information sources I’ll update it. Hopefully it’ll prove interesting to others and I’m open to suggestions for changes and improvements. It may not work in every browser and some of the styling can be improved 🙂

Above all I hope it allows the data to speak for itself.

If you’re interested in looking then visit http://www.variablepitch.co.uk/