RaspberryPi & SWD

The KroozSD board now comes with a handy SWD connector, a simple 3 pin 1mm JST located in the middle of the board. As debugging is one of the harder aspects of embedded development, the connector has always been an interesting addition, but finding a way to interact with the port has proved tricky.

kroozsd_swd

While I have several devices that claim to offer the required interfaces, none of them have proved to be supported. However, yesterday I came across an article explaining not only how to build the required software but also how to physically connect the port to a RaspberryPi, and so less than an hour later I was trying to figure out SWD!

Physical Connections

Figuring out how to connect the SWD pins to the RaspberryPi was my first challenge. Thankfully this post gave a diagram, which, while not as clear as it could have been, gave me sufficient information to make a start. I used a small breadboard to allow me to put the resistor inline and attached it to the RaspberryPi via some jumper cables.

raspberrypi_swd

With the port attached, it was time for the software.

Software

This Adafruit article provided exactly what I needed to build OpenOCD on the RaspberryPi. The instructions aren’t hard to follow, but I ignored the part about connecting the device as I had already done that, and the configuration section wasn’t relevant for the STM32 board.

As per the Adafruit tutorial, I saved the config shown below to openocd.cfg.

source [find interface/raspberrypi2-native.cfg]
transport select swd

set BSTAPID 0x06413041
source [find target/stm32f4x.cfg]
reset_config srst_only srst_nogate

init
targets

After saving the file I then just ran openocd 🙂

Hello?

Initially there were just invalid responses, so I used a small connector to force the board to boot using the built-in ST bootloader rather than the Krooz bootloader (UART 3 SYS pin connected to the +3.3V pin). This brought the board to a simplified state and enabled SWD. When running this time, things looked a little more encouraging.

pi@raspberrypib:~/swd $ sudo openocd
Open On-Chip Debugger 0.10.0-dev-00319-g9728ac3 (2016-05-22-19:00)
Licensed under GNU GPL v2
For bug reports, read
	http://openocd.org/doc/doxygen/bugs.html
BCM2835 GPIO nums: swclk = 25, swdio = 24
BCM2835 GPIO config: srst = 18
srst_only separate srst_gates_jtag srst_push_pull connect_deassert_srst
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
srst_only separate srst_nogate srst_push_pull connect_deassert_srst
cortex_m reset_config sysresetreq
srst_only separate srst_nogate srst_push_pull connect_deassert_srst
Info : BCM2835 GPIO JTAG/SWD bitbang driver
Info : SWD only mode enabled (specify tck, tms, tdi and tdo gpios to add JTAG mode)
Info : clock speed 2002 kHz
Info : SWD DPIDR 0x2ba01477
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
Error: stm32f4x.cpu -- clearing lockup after double fault
Polling target stm32f4x.cpu failed, trying to reexamine
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
    TargetName         Type       Endian TapName            State       
--  ------------------ ---------- ------ ------------------ ------------
 0* stm32f4x.cpu       cortex_m   little stm32f4x.cpu       halted

I will admit the first time it ran successfully I was a little surprised and wondered what to do next, but some quick searches revealed the answer!

SWD Adventures

Given my RaspberryPi is running without a screen or keyboard, the simplest way to access it is via a remote connection. OpenOCD provides just such an interface over telnet, so from my laptop it’s simple enough to gain access.

$ telnet 192.168.1.95 4444
Trying 192.168.1.95...
Connected to 192.168.1.95.
Escape character is '^]'.
Open On Chip Debugger
>
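The session can also be scripted. A minimal sketch using python’s telnetlib, assuming the same address and OpenOCD’s default telnet port of 4444:

import telnetlib

# Connect to OpenOCD's telnet interface and run a single command.
tn = telnetlib.Telnet("192.168.1.95", 4444)
tn.read_until(b"> ")                  # wait for the prompt
tn.write(b"flash info 0\n")
print(tn.read_until(b"> ").decode())  # echo the response
tn.write(b"exit\n")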

I’m primarily interested in programming the flash memory, so I started by trying to get information about that.

> flash info 0                    
Device Security Bit Set
#0 : stm32f2x at 0x08000000, size 0x00100000, buswidth 0, chipwidth 0
	#  0: 0x00000000 (0x4000 16kB) protected
	#  1: 0x00004000 (0x4000 16kB) protected
	#  2: 0x00008000 (0x4000 16kB) protected
	#  3: 0x0000c000 (0x4000 16kB) protected
	#  4: 0x00010000 (0x10000 64kB) protected
	#  5: 0x00020000 (0x20000 128kB) protected
	#  6: 0x00040000 (0x20000 128kB) protected
	#  7: 0x00060000 (0x20000 128kB) protected
	#  8: 0x00080000 (0x20000 128kB) protected
	#  9: 0x000a0000 (0x20000 128kB) protected
	# 10: 0x000c0000 (0x20000 128kB) protected
	# 11: 0x000e0000 (0x20000 128kB) protected
STM32F4xx - Rev: Z
> flash banks
#0 : stm32f4x.flash (stm32f2x) at 0x08000000, size 0x00100000, buswidth 0, chipwidth 0

One thing that did catch me out initially was that this is just a telnet session, so any files that I referenced needed to be on the RaspberryPi, *not* my local machine. Once the required files were transferred across it was time to update the firmware. Thankfully OpenOCD provides a simple way to do this.

> program ap.bin erase verify 0x08004000
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0
adapter speed: 4061 kHz
** Programming Started **
auto erase enabled
stm32x device protected
failed erasing sectors 1 to 5
embedded:startup.tcl:454: Error: ** Programming Failed **
in procedure 'program' 
in procedure 'program_error' called at file "embedded:startup.tcl", line 510
at file "embedded:startup.tcl", line 454

It makes sense to not allow the device to be programmed while locked, so the next step is to unlock it using the stm32f2x command.

> stm32f2x unlock 0
stm32f2x unlocked.
INFO: a reset or power cycle is required for the new settings to take effect.
> reset halt
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0

Now the device is unlocked, I can try programming again.

> program ap.bin erase verify 0x08004000
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0
adapter speed: 4061 kHz
** Programming Started **
auto erase enabled
wrote 245760 bytes from file ap.bin in 5.491092s (43.707 KiB/s)
** Programming Finished **
** Verify Started **
verified 153560 bytes in 0.333037s (450.283 KiB/s)
** Verified OK **

Now that it’s programmed, time to lock the device again!

> stm32f2x lock 0                       
stm32f2x locked

Conclusion

I’m sure there is a lot more that I can do with SWD and I now have a way of connecting and using it. Sadly I’m no further forward with getting the board working 🙁

Quad Update #7

In my earlier post I said that the bootloader for the KroozSD was luftboot, but in reality it’s the one available here. The version contained in that tree doesn’t appear to be fully up to date, as there is no CRC and the strings reported by the device are different, but I’m assuming the current code is similar.

While I realise I could simply switch to a different laptop, I can’t help but feel this should be a simple enough problem to fix, and as I’m always interested in learning, another wee adventure begins!

USB Debugging

Having gone from working USB transfers to non-working USB transfers, I’ve been doing a little debugging of the USB. The stm32_mem.py script uses PyUSB, so my first stop was to enable additional debug information for the library. The library authors have made this very easy.

$ export PYUSB_DEBUG=debug
$ ./stm32_mem.py ...
2016-05-21 10:05:06,719 DEBUG:usb.backend.libusb1:_LibUSB.__init__()
2016-05-21 10:05:06,721 INFO:usb.core:find(): using backend "usb.backend.libusb1"
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.enumerate_devices()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()

The level of information is impressive and allowed me to check that no additional calls had been made and that what I thought was taking place was reflected in the calls being made. Having checked that, the next step was to look at whether there were other calls being made by the system. For this I turned to usbmon.

The Ubuntu kernel already has the basic support needed, so the first step was simply to load the usbmon kernel module and verify that it was available.

$ sudo modprobe usbmon
$ ls /sys/kernel/debug/usb/usbmon
0s  0u  1s  1t  1u  2s  2t  2u

Finding the bus number was easy enough.

$ dmesg
...
[25450.332261] usb 1-2: new full-speed USB device number 34 using xhci_hcd
[25450.462462] usb 1-2: New USB device found, idVendor=0483, idProduct=df11
[25450.462505] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[25450.462510] usb 1-2: Product: KroozSD CRC
[25450.462514] usb 1-2: Manufacturer: S.Krukowski
[25450.462517] usb 1-2: SerialNumber: Bootloader

So, bus #1 was where to look, but which endpoint did I need? The usbmon documentation explains that endpoints ending with ‘u’ are the best ones to use, so the ‘1u’ endpoint was what I wanted. Getting the data into a file so I could analyse it turned out to be as easy as

$ cat /sys/kernel/debug/usb/usbmon/1u > krooz.mon

Running the stm32_mem.py script again (in a different terminal) resulted in a capture file that contained a lot of information, so I used CTRL-C to stop the capture. Analysing the data turned out to be simple enough as well, thanks to Virtual USB Analyzer. The app wasn’t installed by default but was in apt, so installation and usage were simple enough.

$ sudo apt-get install vusb-analyzer
$ vusb-analyzer krooz.mon

The app displayed the information needed, but without DFU support built in some interpretation was required.

Not an issue…

One of the recurring entries in dmesg has been

[25457.004284] usb 1-4: usbfs: interface 0 claimed by btusb while 'python' sets config #1

Looking at this in more detail, it’s not an issue; it’s merely informing me that the discovery phase of the stm32_mem.py script has attempted to claim ownership of my bluetooth device (which has a DFU endpoint). It would be nice if PyUSB had a way to check whether a device was already claimed, but I couldn’t find anything that offers a complete check. It does remove one source of concern.
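The nearest thing appears to be is_kernel_driver_active(), which reports kernel drivers (such as btusb) but not claims made by other userspace programs, so it’s only a partial answer. A sketch using the bootloader IDs from the dmesg output above:

import usb.core

# Find the bootloader device and ask whether a kernel driver has
# claimed interface 0. Claims by other userspace programs are not
# reported, so this is at best a partial check.
dev = usb.core.find(idVendor=0x0483, idProduct=0xdf11)
if dev is not None and dev.is_kernel_driver_active(0):
    print("interface 0 is claimed by a kernel driver")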

Results?

Everything I looked at showed me that the script does exactly what I expected it to and there is no interference from anything else on my system. Not a big surprise and certainly what I expected, but it also illustrates how easy it can be to get the additional information I wanted. As the scripts haven’t changed I can rule them out as the source of the problem, which would imply that the bootloader code has an odd bug in it somewhere. Figuring out what it is and how to fix it will be much harder.

All the bootloader code I have looked at appears to be simple enough that there is only one way the manifestation stage can be triggered.

static int usbdfu_control_request(usbd_device *usbd_dev,
                                  struct usb_setup_data *req, u8 **buf,
                                  u16 *len,
                                  void (**complete)(usbd_device *usbd_dev, struct usb_setup_data *req))
{
    if ((req->bmRequestType & 0x7F) != 0x21)
        return 0; /* Only accept class request. */

    switch (req->bRequest) {
        case DFU_DNLOAD:
            if ((len == NULL) || (*len == 0)) {
                usbdfu_state = STATE_DFU_MANIFEST_SYNC;
                return 1;
            } else {
        ...

This is meant to be triggered by an empty download request, but somehow it is being triggered following a 2050 byte download. The function is a callback supplied to the libopencm3 USB setup code,

    usbd_register_control_callback(usbd_dev,
                                   USB_REQ_TYPE_CLASS | USB_REQ_TYPE_INTERFACE,
                                   USB_REQ_TYPE_TYPE | USB_REQ_TYPE_RECIPIENT,
                                   usbdfu_control_request);

The sequence leading to the crash is always the same.

DFU_DNLOAD DFU_ERASE command
DFU_DNLOAD 2050 bytes
DFU_GETSTATUS => STATE_DFU_DNBUSY
DFU_GETSTATUS => STATE_DFU_IDLE
...
DFU_DNLOAD DFU_ERASE command
DFU_DNLOAD 2050 bytes of data
DFU_GETSTATUS => STATE_DFU_DNBUSY
DFU_GETSTATUS => STATE_DFU_MANIFEST

In all my debugging I see the download transfer complete for the erase and then for the data, and after a few transfers (normally 3) working as expected, the manifestation stage is triggered. Answers on how to debug further are welcomed on postcards (or emails) 🙂

Quad Update #6

#6! Who knew I’d get this far and still not have a flying quad? At least I’m getting closer and learning a lot along the way…

Satellite Receiver

I’ve updated the spektrum_serial.py code so that it now converts the split channels into appropriate values, and having watched it run, what it produces seems sane. Using this as a basis, I changed the code in paparazzi to do the same conversions and rebuilt. All was well, so the next step was to flash the new firmware to the board – as usual.

Denied!

The flashing process is sometimes a little fussy and from time to time needs a few attempts, but after 30 or so attempts it was still not working. This is very unusual, and checking the board it seemed as though the firmware I had flashed a while back had been removed – I have an empty board ready and waiting for code! The bootloader is still in place and when connected the USB device appears, so it’s just the upload that’s causing the problem. Given I’ve always uploaded using this laptop, the sudden change is a little strange.
The KroozSD board uses the luftboot bootloader (or a modified version thereof) and a custom uploader which implements the DFU protocol. Running with debugging switched on didn’t show much, so I dug a little deeper. After adding some code to see the state being returned by the bootloader, the problem became apparent (shown below with my additional debug messages).

Using device : ID 0483:df11 S.Krukowski - KroozSD CRC - Bootloader
Programming memory from 0x08004000...
[0%=====================50%=====================100%]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Manifest [07]

The DFU process should only enter the final “Manifestation” state when the code is complete and a zero length download is sent. That’s what the DFU specification says and what the luftboot code looks for, so quite why it’s entering it so early is unclear. Each write until then appears to be sent OK and there is no sign of any extra zero byte requests being sent.
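The state values in that output come from the 6 byte DFU_GETSTATUS response, which PyUSB can request with a control transfer. This is a sketch rather than my actual patch to stm32_mem.py:

import usb.core

DFU_GETSTATUS = 0x03  # DFU class request

# bmRequestType 0xA1 = device-to-host class request to the interface;
# the response is 6 bytes with bState at offset 4
# (4 = dfuDNBUSY, 5 = dfuDNLOAD-IDLE, 7 = dfuMANIFEST).
dev = usb.core.find(idVendor=0x0483, idProduct=0xdf11)
status = dev.ctrl_transfer(0xA1, DFU_GETSTATUS, 0, 0, 6)
print("State = [%02d]" % status[4])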

What next?

My options appear to be switching to another computer for uploading the firmware, getting an external USB interface that doesn’t cause this problem or figuring out what’s going on and fixing it. Of those I suspect the last may be a little beyond me 🙁 I guess when I have time next week I’ll try again with a different computer.

Quad Update #5

Having spent some more time looking at the Spektrum SPM9645 satellite receiver and its output, I’ve made some progress. When trying to use it with Paparazzi I’ve had very variable results and ultimately it fails to work.

Direct Binding Doesn’t Work 🙁

I’ve been unable to get the binding via the KroozSD board to work with the satellite, and watching the output using pigpiod (as detailed in Quad Update #4) the reason became clear, as the simple series of pulses was never visible. I’m still a little unclear why, but after hooking up the RaspberryPi I managed to replicate the required pulse pattern and the satellite receiver went straight into bind mode. Binding to my Spektrum DX8 failed however – probably because the required responses weren’t sent, as these would normally be provided by the AR8000. After the failed attempt the receiver was not receiving any information, so I restarted the bind process using the AR8000 and all was well again.
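For reference, this is roughly how the pulse train can be generated with the pigpio python module – a sketch rather than my exact code, and GPIO 15 is an assumption (the pin I happened to have the signal line on):

import pigpio

GPIO = 15  # assumption: the satellite's signal line on GPIO 15

pi = pigpio.pi()  # connect to the local pigpiod
pi.set_mode(GPIO, pigpio.OUTPUT)
pi.write(GPIO, 1)  # the line idles high

# 9 pulses, 120us low then 120us high, matching the capture
# from Quad Update #4.
pulses = []
for _ in range(9):
    pulses.append(pigpio.pulse(0, 1 << GPIO, 120))  # drive low for 120us
    pulses.append(pigpio.pulse(1 << GPIO, 0, 120))  # back high for 120us

pi.wave_clear()
pi.wave_add_generic(pulses)
pi.wave_send_once(pi.wave_create())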

As this makes sense and appears logical, I think it’s safe to abandon the idea of binding my satellite receiver directly from a Paparazzi board.

DSMX

Every time I bind the receiver the transmitter reports its connection as DSMX and 22ms. Given the individual frames are sent every 11ms, this means 32 bytes are sent per complete update, which after removing the 2 byte headers leaves 28 bytes of channel data, or 14 channels available. Watching the data I often see “split channels”, where movements for one control are reflected across 2 channels. As an example, when the channel data is split, idle throttle appears as (output from spektrum_serial.py)

    0    1
 ---- ----
  354    0

Increasing the throttle position results in

    0    1
 ---- ----
  900    0

Increasing it further results in

    0    1
 ---- ----
    0  500

Every time I see this behaviour the number of bits per channel is 10, whereas when the number of bits per channel is 11 I see a single value. From this I am assuming that DSMX always uses 2048 values and hence requires 11 bit data per channel. If the transmitter/receiver decide to transmit 10 bit data then 2 channels are required to provide the required accuracy (two 10 bit channels covering the one 11 bit range) and hence the channels are split. While the throttle data is simple to understand, the other “stick” controls are a little more interesting. The figures below are for the pitch control, but similar results are seen for yaw and roll.

    4    5
 ---- ----
 1004    0

With the pitch fully forward

    4    5
 ---- ----
  320    0

And fully back

    4    5
 ---- ----
    0  657

AUX 1 is a simple toggle switch, but even that is shown across 2 channels.

    8    9                            8    9
 ---- ----   switched from 0 to 1  ---- ----
  342    0                            0  682

AUX 2 is a 3 way position switch, and the values are similarly recorded across 2 channels.

   12   13                           12   13                           12   13
 ---- ----   switched from 0 to 1  ---- ----   switched from 1 to 2  ---- ----
    0  682                            0    0                          342    0 

AUX 3 is a rotary switch, so starting with it at the 7 o’clock position,

   14   15                              14   15
 ---- ----   turning fully clockwise  ---- ----
    0  682                             342   26

These are just some values I grabbed today, but they do show the range of values that I have been seeing. Switching off the transmitter and then switching on again (while still observing data) sometimes results in the data being sent changing from 10 bit to 11 bit, which then results in single values being reported (AUX 2 was at position 2 and AUX 3 at the fully clockwise position).

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
  354  998 1006 1006  342 1706  342  342

Changing to full throttle position, AUX 1 moving from 0 to 1 and AUX 2 moving to position 0 resulted in

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
 1698  998 1006 1006 1706 1706 1706  342

With AUX 2 in position 1 and AUX 3 at the 7 o’clock position,

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
 1698  998 1006 1006 1706 1706 1024 1706

The pitch value (channel 2) changes from 322 at fully forward to 1681 fully aft.

Going Forward

Every time I switch on my transmitter it starts off sending 10 bit data, so I have to deal with the split channels. Until I can reliably detect and convert that data into the single values that are needed, I suspect it will never work reliably with paparazzi. The challenge is to convert this 10 bit data

    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14
 ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
  354    0  998    0 1006    0 1006    0  342    0    0  682    0  682  939

into these 11 bit values

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
  354 1003 1004 1011  342 1706 1706  936
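Based on the samples above, the rule looks to be that the second channel of each pair covers the upper half of the travel. That’s a guess from observation rather than anything official, but a sketch of the conversion would be:

def combine(low, high):
    # Guessed from the captures above: a non-zero second channel means
    # the upper half of the range, so add 1024; otherwise the first
    # channel is used as-is.
    return high + 1024 if high else low

# The 10 bit capture above, paired up (channel 14 stands alone).
pairs = [(354, 0), (998, 0), (1006, 0), (1006, 0),
         (342, 0), (0, 682), (0, 682), (939, 0)]
print([combine(low, high) for low, high in pairs])
# [354, 998, 1006, 1006, 342, 1706, 1706, 939] - close to the 11 bit
# capture, with the small differences looking like stick jitter
# between the two captures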

Transmitter Settings

The other option I have is to change the transmitter setting for the frame rate to force 11ms updates. Hopefully this will cause the data to always be sent as 11 bit and hence the problem will no longer be an issue. I’ll try this soon and see what happens.

UPDATE – I tried the 11ms and DSM2 options under the Frame Rate settings menu and from what I observed neither makes much difference and the behaviour doesn’t change, so I have reverted to the default settings.

spektrum_serial.py

The small python script I have written to allow me to watch the output from the satellite receiver is now available on github. As usual, any comments, suggestions welcome to improve it.

Quad Update #4

Following on from my experiments with the radio control receiver, I still found myself unable to simulate the binding process. Everything I’d read suggested that a series of pulses needed to be sent just after power up, but the code in paparazzi that did this didn’t work for me.

Capturing the pulses

The starting point was the existing AR8000: when it is powered up with a bind plug inserted and the satellite receiver attached, both go into bind mode (rapid flashing of their orange lights). It was obvious that the AR8000 sent the pulses to the satellite to trigger this, so by watching the signal line I should be able to see them. The next question was how.
Bring out the RaspberryPi 🙂

Both receivers are happy at 3.3V and the GPIO pins on the RaspberryPi are also 3.3V, so that part was easy. Using the +3.3V and GND pins supplied power and after some surgery on a cable I ended up with this (apologies for my poor drawing skills).

Bind signal setup

The next step was to watch for the signals.

pigpiod and piscope

While looking for a way to capture the changes in signal I came across a post detailing using a daemon and client to do that exact thing. The daemon is pigpiod, which needs to be installed and running on the RaspberryPi (obviously). Downloading, building and installing were very easy. Running the daemon proved to be as underwhelming as I had hoped – it just sat and did its thing 🙂 The daemon allows the output of GPIO pins to be monitored, and the data can be sent remotely, making it very flexible.

I decided to run piscope on my laptop, so grabbed the code, built and then ran it. Here I did hit a minor surprise when I was greeted with a segfault. Trying the usual command line options for help resulted in the same response. Looking at the instructions on the site provided the answer.

export PIGPIO_ADDR=192.168.1.95
./piscope

Now I was greeted with the application running.

piscope running

I opted to have only the GPIO pin I was using shown (Misc > GPIOs) and made sure that the Live feed was running.

GPIO selection. A Select All/Select None option would be nice :-)

piscope_gpio15

Powering on the AR8000 with the bind plug inserted resulted in a series of pulses. Finding the correct scale and position to look at these in the piscope app took a few attempts, but with some experimentation I had a screen showing what I wanted. There were 9 pulses sent, each 120us long, with a 120us gap between them. This agreed with the code I had seen, which meant I was a little confused as to why the binding wasn’t working when attached to the autopilot board. Once the binding was complete, the usual flow of information was visible and the 11ms separation of frames was very evident.

Having never thought about using the RaspberryPi this way before, I’ve been pleasantly surprised by how straightforward it was.

pullup not pulling up?

Paparazzi allows you to have a “bind pin” which functions much like the bind plug for the AR8000 – it starts the bind process. I had this set up, but looking at the code revealed that the bind code was always being executed, even if the bind pin wasn’t activated. Hmmm. More investigation revealed that the function call that was meant to activate the pin and leave it “pulled up” wasn’t doing that, so the pin was always being sensed as clear. Easy enough to find and fix. Once fixed, the code was only being executed when the bind pin was activated, though I still didn’t get the binding process started.

Single file pull request?

The next step would have been to issue a pull request to the main paparazzi branch, but there’s a previous change that I didn’t want to include in the pull request. I’m afraid my github fu is too weak to know how to do such a thing – answers on a postcard! I tried using the cherry-pick option, but that didn’t work and so now I’m left with my local repo in an odd state of affairs! D’oh.

This is not the GPS you want…

Having made the cable to connect my GPS to the board, I was surprised when it failed to work. Further investigation revealed that I ended up buying the wrong version! So, if you want a GPS from Navilock, the one you want should end with TTL, not ERS. All is not lost, as I can add a serial interface to the module and will therefore be able to use it in another project. I have found another supplier for a GPS module and it should be here soon.

Molex picoblade

The 1mm connectors really are as fiddly and annoying to use as their size suggests. I’ve managed to make the cables I need, but the process hasn’t been without a few failures. The crimpers seem good and are very well made. Arriving wrapped in Japanese newspaper was a nice touch 🙂

Quad Update #3

When I started updating the quad I didn’t expect it to be an instant process, but I didn’t really expect it to take as long as it is taking me 🙁 The delays are due to a number of reasons, but the time it takes for some orders to arrive certainly is a large contributor.

Radio Control

I’ve been using a Spektrum DX8 for radio control, and with the previous board the individual “servo” leads were connected directly from the AR8000 receiver. The satellite receiver, an SPM9645, was attached, but both boxes were required and took up a reasonable amount of space.

Spektrum AR8000 & SPM9645

On the KroozSD board there is no place for the individual servo leads to attach and a UART connection is discussed instead. This didn’t make much sense initially, but after reading it became apparent that the SPM9645 is a fully fledged receiver that communicates via a serial interface, so the board would only need that one small module. The space savings and simplicity of a single wire are huge advantages, but once again I ran into the issue of how to attach it. I’m not sure what the connection is on the receiver (1.5mm JST?) but the UART connector is a 4 pin Molex PicoBlade, a connector I’m still unable to make cables for. Inside the Spektrum package I did find a cable that allowed me to connect the receiver using the standard 3 pins, and as 2 of the pins are ground and power with a single cable for the communications, I started looking at whether I could use that cable and one of the PWM connectors. The answer turned out to be yes, but there is a slight issue. The power supplied via those pins is 5V but the receiver is meant to be used with 3.3V, so while it may be able to handle the voltage it’s not worth the risk long term.
I did connect it for testing and to ensure that the code I was looking at worked, but the results never quite looked right and the values being set didn’t make sense to me. As I like to understand such things more investigation was required!
Longer term I have ordered a small voltage regulator that will allow me to use the pin connection safely, though it could be a while making its way here from China!

Can we read it?

There are plenty of articles on the web about the output from the satellite receiver, but many of them deal more with the timing than the values, so I decided to see if I could read the data and see what was going on. Knowing it was a serial stream and that it needed 3.3V, I hooked up a Raspberry Pi I had and looked for the GPIO connections I needed to make.

Raspberry Pi GPIO pinouts

The pins therefore needed to be 3.3V (pin #1), GND (pin #9) and serial RX (pin #10). The cable does twist, so I was careful to ensure I connected them correctly 🙂

Raspberry Pi pinouts

Once wired up, I connected to the Pi via SSH and tried to read data, but nothing appeared and a number of messages appeared about problems with the serial port. Having read about the serial port being used for kernel messages I wasn’t surprised, but the posts I found to correct it didn’t seem to apply. After trying a few different suggestions I eventually came across a post suggesting the raspi-config command, which proved to be the solution. Switching off the option for kernel debug (under Advanced Settings) and restarting the Pi resulted in access to the serial port and data flowing from the receiver. I wrote a small python script using the excellent PySerial module (installed via pip).

Receiver connected to the Raspberry Pi

Yes we can!

Now that data was flowing, the next step was to try and interpret it. Thankfully many had gone before me and so I knew what to expect!

The data flowed from the serial port in 16 byte blocks, each of which looked similar.

00 5A 0B E9 2E AA 13 E8 1B F3 31 56 FF FF FF FF
00 5A A1 56 01 62 3A 0C FF FF FF FF FF FF FF FF
00 5A 0B E9 2E AA 13 E8 1B F3 31 56 FF FF FF FF
00 5A A1 56 01 62 3A 0C FF FF FF FF FF FF FF FF
00 5A 0B E9 2E AA 13 E8 1B F3 31 56 FF FF FF FF
00 5A A1 56 01 62 3A 0C FF FF FF FF FF FF FF FF

This was what I had been expecting: 2 bytes of “header” followed by 7 sets of channel information. As the DX8 is an 8 channel transmitter and there are only 7 sets per frame, the repetition of the 2 frames made sense. Looking at details of the “header”, I expected the first byte to be the dropped frame count and the second byte to contain information about what was to follow, namely the number of bits per channel and the number of frames required.

+------------------------+---------+---+------+------+
| dropped frame count    |         |bit|      |frames|
+------------------------+---------+---+------+------+
16                                                   0

When I bound the receiver to the transmitter it helpfully told me that it had bound using DSMX and 22ms, which should (according to what I had read) give me 11 bit channel data; hence I expected to find that the transmitter data decoded as 2 frames of 11 bit data – which it did. The next step was to look at the channel data. Each channel is sent as 16 bits, and as it was using 11 bits for the data each field should look like this.

+---+------------+-----------------------------------+
|   | channel ID | Channel value                     |
+---+------------+-----------------------------------+
16                                                   0

The channel IDs all appeared to be within a range of 0 to 15, though those with an ID of 15 I chose to ignore as that appeared to suggest the data wasn’t valid. As I only have 8 channels this seemed a larger number than I was expecting, so I decided to try and watch the values in real time using the small python script I had created. (I’m still refining the code but can put it on github if anyone is interested.)
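The heart of the script is small. A stripped down sketch (python 3), where the port name and 115200 baud rate are assumptions on my part here, using the 11 bit layout above (4 bit channel ID, 11 bit value):

import serial  # PySerial

# Assumed: the Pi's serial port on /dev/ttyAMA0 at 115200 baud. No
# attempt is made to resync on a frame boundary.
port = serial.Serial("/dev/ttyAMA0", 115200)

while True:
    frame = port.read(16)
    # frame[0] = dropped frame count, frame[1] = bits/frames info
    for i in range(2, 16, 2):
        word = (frame[i] << 8) | frame[i + 1]
        cid = (word >> 11) & 0x0F  # channel ID; 15 appears to mean unused
        value = word & 0x7FF       # 11 bit channel value
        if cid != 15:
            print(cid, value)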

It turns out that the main 4 channels – throttle, elevator, aileron and yaw – are split between 2 channel IDs, with the left and right deflection being reported separately. In the case of the throttle this results in the value for channel 0 going up until you reach the midpoint, where it drops to 0! Obviously reading only one channel would give some interesting results, so I need to look at combining the values appropriately.

GPS

The GPS and its cable have arrived and are just awaiting the addition of the connector to attach to the board.


Unrelated…

Having played with the STM32 I’ve found myself contemplating using it for some other projects I have planned. Looking around for an easy way to start experimenting I came across the STM32 Stamp which looked perfect for my initial needs, but as my skills aren’t up to constructing one myself, does anyone know where I can buy something similar? I don’t really want a discovery board as I want to be able to use it on a breadboard while I experiment with connecting it.

Quad Update #2

I’ve spent a bit more time with both Paparazzi and the KroozSD board, so these are a few more observations.

NB these ONLY apply to the KroozSD board 🙂

Configuration

After a good look through, I changed the settings to match my configuration, tweaked a few to what I hoped would be more appropriate values and rebuilt. The ESCs now make the correct noises and the telemetry looks good. The battery voltage being shown is that supplied via the BEC built into the ESCs, which is essentially 5V, so I had to adjust the battery warning levels to prevent the constant warnings. I’ll need to add a battery sensor to address this.

USB

When powered up, plugging in a USB lead doesn’t provide any information. Powering the board with a USB cable attached does provide a connection to the bootloader, as expected, but the lack of connection from the board when powered struck me as odd. Thinking about it more, there is no reason why the board should power the USB when it’s going to be as far away from a computer as it is! It’s possible to build a firmware that does have a USB connection when powered, but it would only really be useful for testing.
Having built such a firmware and loaded it, I found that my next attempt to flash firmware failed, as the USB connection was being made with the running firmware and not the bootloader. Watching the board more closely, I found that LED 1 would be solid red when the bootloader was connected and ready for new firmware to be uploaded.

Calibration

The configuration file includes some default calibration values, but these need to be replaced by values for my own board, so it was time to look at calibration. Prior to doing this I had a look at the PFD (Primary Flight Display) tab on the Paparazzi ground station to see how good/bad the initial values were. The result showed the board in a climbing left hand turn, but the right way up. Moving the board also showed the expected responses.
Calibrating the accelerometers is the first step and proved to be quite simple, though I’d suggest reading the “Basic procedure” description and generating a log file a few times before trying the actual calibrations.

Accelerometers

Put the board in the correct orientation, hold for 10 seconds, move to the next one… Relatively easy to accomplish once you figure out the order of the orientations as shown in the supplied image (below).

Reading the text and comparing it with the image made things clearer, but the image could be better. I did the tests with the board secured in the quad as it made moving it around easier and allowed me to confirm that I had it the correct way round!

Once complete, I ran the script and had some new calibration values. Once the configuration file was updated and the firmware built and uploaded, the PFD showed level wings and nose – exactly as expected.

Magnetometer

I’ll do this over the next few days, but ran out of time yesterday.

Gyroscopes

I’m sure this would give much improved results, but the procedure outlined looks…complex! I’d be interested to know how much improvement it makes.

Quad Upgrade

When I built the quad I went with a simple controller, with the intention of upgrading at some point once I had more idea of “what I was doing” 🙂 Of course, such an open ended target was a total cop out, and after some discussions with a friend, and having a little more time on my hands, I recently decided the time was right to start looking at an upgrade. This is what I had (a Hobbyking KK2.0, now replaced by the newer KK2.1.5).

KK 2.0 Board

Following the discussions and looking at a few options, I decided to stick with my open source leanings and move to the Paparazzi UAV Project. Looking at the autopilot boards (and following advice) I went with the KroozSD. Ordering it was easy enough and it arrived nicely packaged in a reasonable time. The fact it shares the same physical form factor as the old board helps greatly, making it almost a drop in replacement. Of course, nothing is ever that simple and so I’m currently getting the connections sorted out 🙂

Installing the Paparazzi software was easy enough and once installed it all ran without issue. The only thing that wasn’t clear, though it may just have been my poor attention span missing it, was that before you can run a simulation you need to use the “Build” option to create the files you will use for the simulation. Failing to do so will produce lots of windows but also a few warnings. This isn’t the most helpful as initially it appears everything is working! However, building is quick.

One aspect of the change that was new to me was the introduction of live telemetry. This is done using XBee modules and setting them up was my first task. I took a while deciding which modules to buy as there is a bewildering selection, but eventually went with 2 of these. I also bought 2 USB adapters, so once configured using the X-CTU software it was nice to see them communicating. Installing the “remote” module onto the board and applying power produced telemetry 🙂

KroozSD top / bottom side with XBee module attached.

Connectors

In case it helps anyone else, the connectors on the board are Molex PicoBlade, with the exception of the 3 pin SWD connector in the centre of the board, which is JST.
I had to guess which way round the XBee module attached, though the pictures on the webpage about the board helped!

As I discover more information I’ll pass it along, but thus far I have to congratulate Sergey on producing a great board and being very easy to deal with.

Routing D’oh!

React-router is a great addition to React, but yesterday marked the first time I had used it for a project. It led to a period of head scratching, but maybe this post will help someone else avoid the same mistake I made!

Simple Setup

Installing it was simple enough.

npm install --save react-router

Having installed it I then added the import lines,

import {Router, Route, Link, browserHistory} from 'react-router';

and then added some routes to the root component.

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}>
        <Route path="about" component={about} />
        <Route path="blah" component={blah} />
    </Route>
  </Router>
), document.getElementById('test'))

I added a few links to the components with Link to enable me to test navigation, and all should have been good.

    <Link to="/about">About Page</Link>

This was intended to give me the following URLs,

/
/about
/blah

After making the changes a quick refresh of the page (webpack-dev-server really does make life easy) and sure enough I got the page with links. Clicking on the links gave me a surprise though – the URL changed but the page didn’t.

Children?

After some head scratching and a bit of reading around, I discovered that the routing I had added wasn’t quite what I thought in terms of the components. In my mind’s eye I envisaged this as rendering each component directly, but what I had actually added was a parent/child relationship.

URL     Parent  Child(ren)
/       home    none
/about  home    about
/blah   home    blah

Fixing the problem was as simple as changing the home component to render the children by adding

<div>
{this.props.children}
</div>

Nesting

This applies for every level of nesting, so rewriting the routing block to

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}>
        <Route path="about" component={home}>
            <Route path="about" component={about}/>
        </Route>
        <Route path="blah" component={blah} />
    </Route>
  </Router>
), document.getElementById('test'))

This works as expected when displaying /about/about, due to using the home component with this.props.children. Using the about component instead failed, as it lacked the this.props.children usage. The relationships look like

URL           Parent  Child  Child
/             home    none   none
/about        home    home   none
/about/about  home    home   about
/blah         home    blah   none

Original Intention

In order to get the behaviour I actually wanted I needed to change the routing block to this.

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}/>
    <Route path="about" component={about}/>
    <Route path="blah" component={blah} />
  </Router>
), document.getElementById('test'))

postgrest views

When is a view updateable? The answer becomes important when using views to access the data via postgrest. If a view isn’t updateable then insert, update and delete operations will fail.

It’s possible to check by requesting ‘/’ from postgrest to get information about the endpoints available and looking at the insertable field.

[
  {u'insertable': False, u'name': u'fruits', u'schema': u'public'}, 
  {u'insertable': True, u'name': u'colours', u'schema': u'public'}
]

In the above, attempts to insert, update or delete via /fruits will fail, but inserts via /colours will be OK.
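From python the same check is a couple of lines with the requests module – a sketch that assumes postgrest is listening on localhost:3000:

import requests

# Ask postgrest for its endpoints and report which are insertable.
for endpoint in requests.get("http://localhost:3000/").json():
    print(endpoint["name"], endpoint["insertable"])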

Simple Views

Where a view is nothing more than a select statement to return rows from a table, it should be updateable.

CREATE OR REPLACE VIEW colours AS
  SELECT * FROM data.colour;

Joins, Unions

Having joins or unions will require more work to make them updateable.

CREATE OR REPLACE VIEW fruits AS
  SELECT f.id, f.name, c.name as colour 
    FROM data.fruit AS f INNER JOIN
    data.colour as c ON f.colour_id=c.id;

Due to the join, this view isn’t directly updateable.

Function & Trigger

In order to make it updateable a function is needed, together with a trigger to call it.

CREATE OR REPLACE FUNCTION 
insert_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
DECLARE
  colour_id int;
BEGIN
  SELECT id FROM data.colour WHERE name=NEW.colour INTO colour_id;
  INSERT INTO data.fruit (name, colour_id) VALUES (NEW.name, colour_id);
  RETURN NEW;
END
$$;

The trigger then tells postgresql to use the function when an insert is required.

CREATE TRIGGER fruit_action
    INSTEAD OF INSERT ON
      fruits FOR EACH ROW EXECUTE PROCEDURE insert_fruit();

Reviewing the endpoints now shows

[
  {u'insertable': True, u'name': u'fruits', u'schema': u'public'}, 
  {u'insertable': True, u'name': u'colours', u'schema': u'public'}
]

NB The insertable key refers ONLY to insert, so in this instance, with only the insert function and trigger added, update and delete operations will fail.

Inserting data is now as simple as 2 post requests.

POST /colours
{"name": "green"}
>> 201
POST /fruits
{"name": "Apple", "colour": "green"}
>> 201

Of course any attempt to update or delete will fail, despite having “insertable” set to True.

PATCH /fruits?name=eq.Apple
{"name": "Green Apple"}
>> 500
{"hint":"To enable updating the view, provide an INSTEAD OF UPDATE trigger or an 
unconditional ON UPDATE DO INSTEAD rule.",
"details":"Views that do not select from a single table or view are not automatically updatable.",
"code":"55000","message":"cannot update view \"fruits\""}

Update

The function required for updating a record is very similar to the insert one.

CREATE OR REPLACE FUNCTION
update_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
DECLARE
  -- named to avoid being ambiguous with the colour_id column in the UPDATE below
  new_colour_id int;
BEGIN
  SELECT id FROM data.colour WHERE name=NEW.colour INTO new_colour_id;
  UPDATE data.fruit SET name=NEW.name, colour_id=new_colour_id WHERE id=NEW.id;
  RETURN NEW;
END
$$;

CREATE TRIGGER fruit_action
    INSTEAD OF UPDATE ON
      fruits FOR EACH ROW EXECUTE PROCEDURE update_fruit();

PATCH /fruits?name=eq.Apple
{"name": "Green Apple"}
>> 204

NB It’s worth pointing out that every row matched will be updated, so be careful of the filter criteria provided on the URL.

Delete

The delete function needs to return the rows that it deletes. Note that while insert and update relied on NEW, delete uses OLD.

CREATE OR REPLACE FUNCTION
delete_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
  DELETE FROM data.fruit WHERE id=OLD.id;
  RETURN OLD;
END
$$;

CREATE TRIGGER fruit_delete
    INSTEAD OF DELETE ON
      fruits FOR EACH ROW EXECUTE PROCEDURE delete_fruit();

With the final trigger in place, delete now works.

DELETE /fruits?id=eq.1
>> 204

postgrest lessons learned

I’ve been spending some time recently getting to grips with postgrest by writing a small schema and figuring out how it all sits together, with the help of a simple python client. The plan is to continue to develop it as a react/redux app once I have postgrest and the data figured out 🙂 The following are just some things that would have helped me 10 days ago and may help someone else.

Roles

I’ve ended up with 3 roles.

authenticator
This role is used as the “base” role for the database. It has sufficient access to change roles and nothing else. I use it in the postgres connection URL, so it needs a password to allow connections to postgres to be made.
anon
This role caused the most confusion initially as I failed to grasp that all connections from postgrest will use this role UNLESS a valid jwt_token is presented that contains a different role. This means it needs, as a minimum, access to everything involved in the login/signup process. For the database there is nothing else that is visible to non-authenticated users, so I have only the login/signup permissions.
user
The user role is set in the jwt_token. Once presented, all further access will be as this user, so the permissions need to reflect that. Additionally there are some aspects of the user management that need to be accessed by this role, e.g. when the user changes their details.

I’m not keen on having every user as their own role, so that means adding access control in the views and using their credentials with a single role. How this will work if I switch over to Row Level Security isn’t obvious to me yet.

One schema to rule them all

When starting postgrest there is the option to pass a schema. If none is supplied then the default of “public” is used. Anything else isn’t accessible from postgrest. This gives a lot of flexibility but also caused me some confusion initially when things weren’t available. If you need to span schemas then you need to write a view – which must be a member of the schema you have chosen. So far I’ve chosen to just use the default schema of ‘public’. Call me lazy 🙂

Functions need /rpc/, views don’t

This one was pretty obvious from the documents, but taken together with the schema point above it caused me a lot of confusion. Essentially, if you have defined a function called ‘login’ and another called ‘signup’ then they are available as /rpc/login and /rpc/signup. Views are simply available, so if you create a view ‘users’ it’s available as ‘/users’.

Direct access or not?

Simply creating tables in the chosen schema is enough to make them available. So, if you have a table called ‘colours’ then it’s directly available via postgrest using ‘/colours’ (depending on the permissions obviously). If, however, you created it in the ‘constants’ schema then it would need a view to access it. Which is preferable depends on the design and how much extra SQL you want to write.
I’m still trying to decide which route to go down. Having the “raw” tables hidden and only accessible via views provides a level of indirection that could well be useful, though adds a lot of extra coding – the lack of which was part of the attraction of postgrest to start with!

jwt_claims

This type needs to be defined with the information required by the app. Once supplied by a request, the information is retrieved by current_setting(‘postgrest.claims.xxx’) (where xxx is the member name).

Commands are available

In writing the sql files, the commands that can be used in the psql command line app are available! This revelation helped me streamline my debugging 🙂 Additionally I have split the sql into separate files and used ‘\i‘ to include them in a main file. This allows me to run it all as one or just the parts I need to update.

I need to improve my postgresql knowledge

As a friend said a while ago, “postgresql is a real database”. My knowledge of writing views, functions and the other bits needed to glue everything together is getting better, but the more you know the better.

gitter

Buried in one of the various documentation pages was a link to gitter.im/begriffs/postgrest. The room isn’t too busy but seems to have enough people who know about postgrest to offer valuable help. Some of the other rooms also seem to be helpful with low amounts of noise, and their app is pretty good.

Update?

Updates are done using the PATCH http verb and normally return a 204 No Content response. The data can be sent as json and all usual filtering arguments can be used to select the records to be updated.

PATCH /fruits?id=eq.1
Content-Length: 23
Content-Type: application/json
Accept: application/json

{"name": "Green Apple"}

If you want the full record, you need to add the Prefer header. This should return a 200 OK status code with the record.

PATCH /fruits?id=eq.1
Content-Length: 23
Content-Type: application/json
Accept: application/json
Prefer: return=representation

{"name": "Green Apple"}

letsencrypt

The idea behind letsencrypt is great. Wanting to add an SSL certificate for one of my domains I decided it was time to see how it worked.

Installation

No package is yet available for Ubuntu, so it was onto the “less preferred” git route.

$ git clone https://github.com/letsencrypt/letsencrypt
...
$ cd letsencrypt

The posts I read said to run a command, answer the questions and all would be good.

$ ./letsencrypt-auto --server https://acme-v01.api.letsencrypt.org/directory auth

After answering the questions, the authentication failed. Hmm, that didn’t work, despite telling it the webroot in which to place the auth files.

Going Manual

The stumbling block was the lack of files to prove the domains are ones I should be asking for certificates for. That’s fine, but using the command line above gives no information to let me fix the problem. There is a manual option, so the next step was to try that.

./letsencrypt-auto --server  https://acme-v01.api.letsencrypt.org/directory --manual auth

This time I was prompted with the contents of a string and a URL location to make it available at. That’s more like what I was expecting, and after creating the file all was well. After reading a little more it appears that the certonly option was what I really wanted, so the command line would be

./letsencrypt-auto --server  https://acme-v01.api.letsencrypt.org/directory --manual certonly

Once the certificates had been created and downloaded, a small edit to the apache configuration files and I have an SSL protected website 🙂

Renewals

The certificates expire after 90 days, so I needed a command line that I can run via crontab. The above command lines required interaction, so they wouldn’t do. Thankfully there is an easy answer.

./letsencrypt-auto --server  https://acme-v01.api.letsencrypt.org/directory --manual renew

Tidying up

After writing a small django app to handle auth for the django powered sites that are going to be using certificates and adding the relevant lines to crontab, I think I’m done 🙂

Command line babel?

Babel is a great project and a really useful npm module. I’ve used it in almost all of my webpack experiments, but recently I had a need to use it from the command line. Thankfully there is the babel-cli module. However, things weren’t as simple as some of the blog posts I found suggested.

Starting with a new project and npm the initial installation is the usual breeze.

$ npm init -y
$ npm install --save-dev babel-cli

According to the blog post I should now just be able to run babel-node – erm, no.

$ babel-node
babel-node: command not found

OK, so it’s Ubuntu and bash, which means the PATH isn’t pointing to the correct file. That’s reasonable given I’ve just installed it, but what is the correct path? Given that npm installs things on a per project basis, it’s likely something in node_modules? A quick search revealed that to be the case.

$ ls -l node_modules/.bin
total 0
...
lrwxrwxrwx 1 david david 30 Apr 16 13:31 babel-node -> ../babel-cli/bin/babel-node.js
...

To avoid this being a recurring issue I adjusted the PATH to include that directory (as a relative path).

$ export PATH=$PATH:node_modules/.bin

Now I could run babel-node, but using funky modern javascript syntax failed. I needed the es2015 preset.

$ npm install --save-dev babel-preset-es2015

Once installed I needed to tell babel to use it, which was as simple as

$ babel-node --presets es2015 ...

However, what are the chances of me remembering that every time I need it? There must be a better way to tell babel to always use the preset. Further reading revealed 2 options – add a .babelrc file or add an entry in package.json. I chose the package.json file for no other reason than it was one less file to create/manage. The entry is pretty simple and exactly as you’d expect.

  "babel": {
    "presets": ["es2015"]
  }

Now running is as simple as

$ babel-node index.js

However…

As I started writing more javascript I ran into problems when I used the new ES7 spread operator, '...'. It is supported by babel, but requires a plugin to be installed and used. Finding and adding the plugin was simple enough,

$ npm install --save-dev babel-plugin-transform-object-rest-spread

I thought that I could add its use via the package.json file, but my attempts failed, so I added a .babelrc file with the appropriate sections and all was well.

$ cat .babelrc
{
    "presets": [
      "es2015"
    ],
    "plugins": [
      "transform-object-rest-spread"
    ]
}

Hopefully this will help someone else!

Webpack: sass & import, names

Having started moving to sass for my project and including the required bits in my webpack configuration (blog post), the next issue I ran into was that importing didn’t seem to work as expected.

Require not Import?

One of the additions I made to my webpack config was to add a resolve section, allowing me to use more convenient and simpler require lines in my javascript.

  resolve: {
    modulesDirectories: ['node_modules', 'components', 'css', 'fonts'],
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  },

This worked exactly as expected wherever I used a require statement, so I had expected that this would transfer to import statements in css and sass files – but it didn’t. As it seems such an obvious thing to do, I had another look at the README for the sass-loader and found what I was looking for.

~, but not as you know it

For my testing I had created as simple a file as I could think of, test.scss.

@import '../components/_component.scss';

This very simple file just imports another file (which happens to be sass) that belongs to a component I have in the ‘components’ directory. Nothing special, but why do I need the full import path? This was what I needed to get things working, but after looking at the sass-loader again I realised that using ‘~’ would invoke the webpack resolve routines – which is what I was hoping for. A small change to the file,

@import '~_component.scss';

resulted in things working as I wanted.

NB the README cautions against using ~ as you may expect (if you’re a command line groupie) as using ~/ implies the home directory and probably isn’t what you want.

Multiple Outputs?

Having decided that I don’t want css to be included in the javascript output, I added the ExtractText plugin which allowed me to bundle all css into a single css file. This is fine, but what if I wanted to have different css bundles? What if I wanted to have different javascript bundles? My current configuration didn’t seem to allow this.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ]

Thankfully, webpack has this covered. Instead of having a single entry you can have multiple entries, each of which you can give a name. Additionally I realised that an entry point doesn’t *need* to be a javascript file as long as it’s a file that can be processed. So I changed the entry section to this.

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

Running webpack didn’t give me what I expected as I also needed to change the output definition.

  output: {
    path: path.resolve(__dirname, 'html'),
    filename: '[name].js'
  },

The [name] placeholder is simply replaced with the name I used in the entry definition, which offers additional possibilities. With the changes made, running webpack produces

html/
     bundle.js
     bundle.css
     test.js
     test.css

The test.js file is a little annoying and in an ideal world it wouldn’t be created, but so far I can’t find any way of preventing it from being created.
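As an aside, [name] isn’t the only placeholder available in output filenames – webpack also supports [hash], so something like this should give cache-busting names (a sketch I haven’t tested in this setup):

  output: {
    path: path.resolve(__dirname, 'html'),
    filename: '[name]-[hash].js'
  },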

To control the output location even more, simply changing the entry name is all that’s required. Updating it to

  entry: {
    ...
    'css/test': [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

results in the files being created in html/css, i.e.

html/
     bundle.js
     bundle.css
     css/
         test.js
         test.css

NB when using a path the name needs to be in quotes.

Using this setup, component css is still included in the bundle.css and the only things appearing in test.css are those that I have specifically included in the entry file, which opens up a lot of possibilities for splitting things up. As I’m using bootstrap for the project one possibility is to use this to output a customised bootstrap file.
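A sketch of what that might look like – assuming a css/bootstrap.scss file that imports only the bootstrap pieces I need (the file name is hypothetical):

  entry: {
    ...
    'css/bootstrap': [
      path.resolve(__dirname, 'css/bootstrap.scss'),
    ]
  },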

Hot Reload

At present hot reloading of css doesn’t seem to be working. My configuration now looks like this

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

which still provides hot reloading of the javascript, but the css files don’t seem to work. This seems to be a common issue, but as it’s not a serious one for me at present I’m not going to spend too much time looking for solutions. If anyone knows, then I’d love to hear from you.

sass

Continuing my delve into React, webpack and so on and after adding a bunch of css files, I decided it was time to join the 21st century and switch to one of the css preprocessors. LESS was all the rage a few years ago, but now sass seems to have the mindshare and so I’m going to head down that route.

Installing

Oddly enough, it installs via npm 🙂

npm install --save-dev node-sass
npm install --save-dev sass-loader

Webpack Config

The webpack config I’m using is as detailed in my post More Webpack, so the various examples I found online for adding sass support didn’t work as I was already using the ExtractTextPlugin to pull all my css into a single file. The solution turned out to be relatively simple and looks like this.

      {
        test: /\.scss$/,
        loader: ExtractTextPlugin.extract(['css', 'sass'])
      }

Additionally I need to add the .scss extension to the list of those that can be resolved, so another wee tweak.

  resolve: {
    ...
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  }

Structure?

One reason for moving to sass is to allow me to split the huge css files into more manageable chunks, but how to arrange this? Many posts on the matter pointed me to SMACSS, and I’m going to read through the free ebook (easily found via a web search) to see what inspiration I can glean. For each React component, though, I’d like to keep the styles alongside the component, as the bond between the JSX and the styling is very tight and changing one will probably require thinking about changes to the other. As per previous experiments, the component can then require the file and it will magically appear in the bundled, generated css file, regardless of whether I’ve written it in sass or plain css.

For the “alongside” files I’ll use the same filename with the leading underscore that tells sass not to output the file directly. With the webpack setup that isn’t actually a concern, but getting into the habit is likely a good idea for the future 🙂 This means for a component in a file named App.js I’ll add _App.scss and add a line require(‘_App.scss’); after the rest of the requires, as sketched below.
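A sketch of what the top of such a component file might look like (names purely for illustration):

require('_App.scss'); // styles live alongside the component; webpack's resolve settings find the file

module.exports = React.createClass({
  render: function() {
    return (
      <div className="app">Hello World!</div>
    )
  }
});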

Variables

I want to use a central variables file for the project, which I can then reference in the sass files, but haven’t quite decided where it should live just yet. Hopefully after reading the ebook and looking at the project a bit more it will make sense.

Now that sass handling is in place, it’s time to start pulling apart my monolithic plain css file and creating the smaller sass files.

Webpack Dev Server

After using webpack for a few days, the attractions of switching to the dev server are obvious.

The webpack-dev-server is a little node.js Express server, which uses the webpack-dev-middleware to serve a webpack bundle.

Install

Oddly enough, it needs to be installed via npm! However, as we’re going to run it from the command line, we’ll install it globally.

sudo npm install -g webpack-dev-server

Running

After installing, simply running the server (in the same directory as the webpack.config.js file) shows it’s working and the bundle is built and made available. The next step is to get it serving the HTML file we’ve been using, which proves to be as simple as

$ webpack-dev-server --content-base html/

Requesting the page from http://127.0.0.1:8080/ gives the expected response. Removing the bundled files produced by webpack directly from the html directory and refreshing the page proves the files are being loaded from the dev server. Nice.

Hot Loading

Of course, having the bundle served by webpack is only the start – next I want any changes I make to my React code to be reflected straight away – hot loading! This is possible, but requires another module to be loaded.

npm install --save-dev react-hot-loader

The next steps are to tell webpack where things should be served, which means adding a couple of lines to our entry in webpack.config.js.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080',
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ],

As I’m planning on running this from the command line I’m not going to add a plugin line as some sites advise, but rather use the ‘--hot’ command line switch. I may change in future, but at present this seems like a better plan.

The final step needed is to add the ‘react-hot’ loader, but this is where things hit a big snag. The existing entry for js(x) files looked like this.

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react', 'es2015']
        }
      },

Adding the loader seemed simple (remembering to change loader to loaders as there was more than one!).

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader'],
        query: {
          presets: ['react', 'es2015']
        }
      },
Error: Cannot define 'query' and multiple loaders in loaders list

Whoops. The solution came from reading various posts, and eventually I settled on the approach below. It works for my current versions of babel but may not for future ones. All changes below are applied to the webpack.config.js file.

Add the presets as a variable before the module.exports line.

var babelPresets = {presets: ['react', 'es2015']};

Change the loader definition to use the new variable and remove the existing definition.

      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader?'+JSON.stringify(babelPresets)],
      },

Now, when running webpack-dev-server --content-base html/ --hot everything is fine and the page is served as expected.

Editing one of the components triggers a rebuild of the bundle when saved – exactly as expected.

All Change!

As I tried to get this working I discovered that react-hot-loader is being deprecated. Until that happens I’m happy with what I have, and the author has promised a migration guide.

Running

To keep things simple and avoid the inevitable memory lapses followed by head scratching about the lack of hot reloading, I’ve added a line to the package.json file. With this in place I can now simply type npm run dev and the expected things will happen.

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack --progress",
    "dev": "webpack-dev-server --content-base html/ --hot"
  },

React Mixins

Having written a simple test app I’m continuing to use it to try and develop my React knowledge 🙂 One aspect that I did find a little annoying was the amount of code I seemed to repeat. I kept thinking that I should have a base class and simply inherit from it – a pattern I have used a lot in other languages, but this is React and nothing I had seen suggested that pattern.

Enter the Mixin

A react mixin is a simple idea, a block of code that is common to one or more components. After looking at them I found it was possible to extract a lot of common functionality, resulting in this code.

var AppointmentMixin = {
  componentDidMount: function() {
    this.props.updateInterval(this.props.id, this.totalTime());
  },  
  setAdditional: function(ev) {
    this.state.additional = parseInt(ev.target.value);
    this.props.updateInterval(this.props.id, this.totalTime());
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  finishTime: function() {
    return this.props.start.clone().add(this.totalTime(), "minutes");
  },
};    

There is nothing in the above code that is specific to a component, it’s all plain generic code. To use it in a component you need to add a ‘mixins’ line and remove the old code. This now gives me a component that looks like this.

var Hair = React.createClass({
  mixins: [ AppointmentMixin],
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Start: {this.props.start.format("HH:mm")}</p>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
        <p>Finish: {this.finishTime().format("HH:mm")}</p>
      </div>
    )
  }
});

This is much closer to what I wanted.

Uh oh…

While looking around for information on mixins I came across this line repeatedly.

Unfortunately, we will not launch any mixin support for ES6 classes in React. That would defeat the purpose of only using idiomatic JavaScript concepts.

This made it look as if mixin support was going away, and then I found this post and also this.

Higher Order Component – huh?

Looking at some posts about the concept helped me get a better understanding of what it’s trying to do, so I decided to change my example to use it. Sadly it didn’t work out, as I’ve been unable to get the higher order component solution working in a manner akin to a mixin. It’s not so much a replacement as a totally different approach that requires things be done differently.

However, always keen to learn, I rewrote things and ended up with this.

function TimedAppointment(duration, title) {
  const Appointment = React.createClass({
    getInitialState: function() {
      return {duration: duration, 
              additional: 0,
              title: title}
    },
    componentDidMount() {
      this.props.updateInterval(this.props.id, this.totalTime());
    },  
    setAdditional(ev) {
      this.state.additional = parseInt(ev.target.value);
      this.props.updateInterval(this.props.id, this.totalTime());
    },
    totalTime() {
      return this.state.duration + this.state.additional;
    },
    finishTime() {
      return this.props.start.clone().add(this.totalTime(), "minutes");
    },
    render() {
      return (
        <div>
          <h3>{this.state.title}</h3>
          <p>Start: {this.props.start.format("HH:mm")}</p>
          <p>Duration: {this.state.duration} minutes</p>
          <p>Additional duration: <input type="number" step="1" ref="additional" 
                                         value={this.state.additional}
                                         onChange={this.setAdditional}/> minutes</p>
          <p>Total Time Required: {this.totalTime()} minutes</p>
          <p>Finish: {this.finishTime().format("HH:mm")}</p>
        </div>
      )
    }    
  });
  return Appointment;
};
var Hair = TimedAppointment(90, "Hair Appointment");
var Nails = TimedAppointment(30, "Manicure");

This is much neater and certainly gives me a single set of reusable code – no mixins required. It’s possibly far closer to where it should be and is still within my limited comfort zone of understandability.

If anyone cares to point out where I could have gone or what I could have done differently, please let me know 🙂

Summary

As time goes on I’m sure that the newer formats of javascript will become more of a requirement and so mixins probably won’t be much use going forward.

React Lessons Learned

I’ve been playing with React for a few days now and, as always, have run across a few things that made me scratch my head and seek help from various places. In an effort to capture some of the errors and mistaken assumptions I have made, I’ll try and show them in a very simple way below. As always, comments and suggestions for improvement are more than welcome.

A Simple Page

To try and show the issues I needed a simple app that I could write that followed what I was working on closely enough to be relevant but without a lot of the complexity that obscures the basic issues. After some thought I’ve come up with a very simple visit planner. It’s going to show a simple timeline of “visits” that can be added in any order and displayed in the same order. No attempt is made to save anything, it’s just data on a page that vanishes upon refresh.

Again, to keep it simple I’m not bothering with anything beyond a straight HTML page containing everything. As I’m not travelling while writing this I’ve used CDNJS for the libraries, but have gone with the non-minified versions to allow for easier debugging. I’m not sure it gets any simpler. I’ve not bothered with any CSS.

Starting Point

Without further fanfare, this is my starting point.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Visit Planner</title>
  </head>
  <body>
    <h1>Visit Planner</h1>
    <div id="content"></div>

    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react-dom.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.11.2/moment.min.js"></script>
    <script type="text/babel">
var Visit = React.createClass({
  getInitialState: function() {
    var start_times = [<option value="1" key="1">9 am</option>,
                       <option value="2" key="2">Midday</option>,
                       <option value="3" key="3">3 pm</option>];
    var today = moment();
    today.seconds(0).hour(9).minutes(0);
    return { start: today, start_times: start_times }
  },
  changeStartTime: function(ev) {
    switch(ev.target.value) {
      case "1": { this.setState({start: this.state.start.hour(9)}); break; }
      case "2": { this.setState({start: this.state.start.hour(12)}); break; }
      case "3": { this.setState({start: this.state.start.hour(15)}); break; }
    };
  },
  render: function() {
    return (
    <div>
      <p>
        <label>When will you be starting your visit?</label>
        <select onChange={this.changeStartTime}>
          {this.state.start_times}
        </select>
      </p>    
      <p>Visit starts at {this.state.start.format("HH:mm")}</p>
    </div>
    )
  }
});

ReactDOM.render(<Visit />, document.getElementById('content'));
    </script>
  </body>
</html>

There’s not much to say about this, it’s simple plain React. I’m using moment for the date/time and then simply ignoring the date portion as it provides nice functions for dealing with time intervals. As it stands it doesn’t do much 🙂

Visits

For the purposes of this each visit will simply be an object that has a description (to differentiate it from the other objects), a duration and an optional additional duration (initially set to 0). I’m going to create these as separate classes (even though they will be largely identical) as it allows me to show things more clearly. In reality they wouldn’t be done this way 🙂

My first pass at getting something basic looked like this.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: {this.state.additional} minutes</p>
      </div>
    )
  }
});

var Nails = React.createClass({
  getInitialState: function() {
    return {duration: 30, additional: 0}
  },
  render: function() {
    return (
      <div>
        <h3>Manicure</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: {this.state.additional} minutes</p>
      </div>
    )
  }
});

Again, nothing too fancy. The next step was to add the code to add them to the main Visit object. Obviously I would need a list of the objects, so I started by changing the initial state to

    return { start: today, start_times: start_times, appointments: [] }

Next, a couple of buttons to add appointments…

      <p>
        <button id="hair-btn" onClick={this.addAppointment}>Add Hair Appointment</button>
        <button id="nail-btn" onClick={this.addAppointment}>Add Manicure</button>
      </p>  

The addAppointment function is also pretty basic – or so I thought…

Try #1 to add Appointments

This was my initial attempt.

  addAppointment: function(ev) {
    var n = this.state.appointments.length;
    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push(<Hair key={n}/>>); break; }
      case "nail-btn": { this.state.appointments.push(<Nails key={n}/>); break; }
    }
    this.forceUpdate();
  },
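As an aside, the more idiomatic React version would avoid mutating state directly and let setState trigger the re-render – a sketch along these lines:

  addAppointment: function(ev) {
    var n = this.state.appointments.length;
    var appt = (ev.target.id === "hair-btn") ? <Hair key={n}/> : <Nails key={n}/>;
    // concat returns a new array, so no mutation and no forceUpdate needed
    this.setState({appointments: this.state.appointments.concat([appt])});
  },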

Adding a line to render these out,

      { this.state.appointments }

Gives us what appears to be a working page. Click a button – appointment appears. All looks good, so let’s continue to add some other functionality.

Additional Time?

Part of the idea is to allow each appointment to have a certain amount of extra time added, so let’s do that by changing the plain output to an input and adding a total time output in render.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  setAdditional: function(ev) {
    this.setState({additional: parseInt(ev.target.value)});
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
      </div>
    )
  }
});

Having done that, it’s now possible to add multiple objects and give each its own additional time – each is a self-contained unit, exactly as we’d expect. So how do we now create the timeline aspect of the main Visit object?

Timeline

The timeline is simple enough to imagine – we know the start time and how long each appointment takes, so we need to go through them and figure out a start time for each (based on either the overall start or the previous appointment) and then get a finish time. Changing any appointment should change all those after it, and changing the overall start time should change them all. As I want each object to be as self-contained as possible, perhaps passing in the start time via the props makes sense? Changing the Hair object to allow this gave me the code below.

var Hair = React.createClass({
  getInitialState: function() {
    return {duration: 90, additional: 0}
  },
  setAdditional: function(ev) {
    this.setState({additional: parseInt(ev.target.value)});
  },
  totalTime: function() {
    return this.state.duration + this.state.additional;
  },  
  finishTime: function() {
    return this.props.start.clone().add(this.totalTime(), "minutes");
  },
  render: function() {
    return (
      <div>
        <h3>Hair Appointment</h3>
        <p>Start: {this.props.start.format("HH:mm")}</p>
        <p>Duration: {this.state.duration} minutes</p>
        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
        <p>Total Time Required: {this.totalTime()} minutes</p>
        <p>Finish: {this.finishTime().format("HH:mm")}</p>
      </div>
    )
  }
});

Of course, unless I supply the start time it won’t do anything…

    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push(<Hair start={this.state.start} key={n}/>); break; }
      case "nail-btn": { this.state.appointments.push(<Nails start={this.state.start} key={n}/>); break; }
    }

Running this looks good to start with, but there’s a problem – changing the start time doesn’t change the appointment. Changing the additional time (which forces an update) then updates the appointment and shows the correct time. Also, at present every appointment uses the same start time, so that needs fixing 🙂

Realisations

After playing with a few different things and much looking around at websites, it became obvious that while react components are self-contained objects, they can’t be used in the same way as plain objects. So, time for a rethink.

React uses keys to determine whether an object is the same as one already rendered, so as long as the key isn’t changed, objects persist between renderings. Each time it’s rendered I can change the props I use, so when I initially add an appointment I don’t need (or want) to create the component – I just record enough details for it to be added during the render, probably with varying props.

Each appointment knows how long it needs to be (the fixed plus variable duration), but the main Visit object also needs to know. Time for the appointment to call the parent when something changes, so we need a callback.

First step, change the appointments to handle these changes. We need to trigger the callback in two places – when the appointment is initially created (using the componentDidMount hook) and when the additional duration value is changed (via an event handler).

  componentDidMount: function() {
    this.props.updateInterval(this.props.id, this.totalTime());
  },  
  setAdditional: function(ev) {
    this.state.additional = parseInt(ev.target.value);
    this.props.updateInterval(this.props.id, this.totalTime());
  },

I found that using this.setState(…) in setAdditional meant the value seen was always the previous one – setState doesn’t update the state immediately. This led to the callback reporting stale values and some very odd results initially, hence my switch to simply setting the value directly and relying on the update the parent triggers when it receives the callback.

        <p>Additional duration: <input type="number" step="1" ref="additional" 
                                       value={this.state.additional}
                                       onChange={this.setAdditional}/> minutes</p>
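For what it’s worth, setState does accept a completion callback as its second argument, which should avoid the stale value – a sketch I haven’t tried in this app:

  setAdditional: function(ev) {
    var additional = parseInt(ev.target.value, 10);
    // the callback fires after the state has actually been updated
    this.setState({additional: additional}, function() {
      this.props.updateInterval(this.props.id, this.totalTime());
    }.bind(this));
  },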

With these in place, the next step is to modify the main visit class. As we’re adding in strict order and not interested in a dynamic ordering ability, we will just use the length of the list as the ‘id’ for each appointment. We’ll also store the duration of each appointment in the data, ending up with this code.

  addAppointment: function(ev) {
    var n = this.state.appointments.length;
    switch(ev.target.id) {
      case "hair-btn": { this.state.appointments.push({what: 'hair', key: n, interval: 0}); break; }
      case "nail-btn": { this.state.appointments.push({what: 'nails', key: n, interval: 0}); break; }
    }
    this.forceUpdate();
  },

Now that we have the data being stored, next step is to add the callback that will update the intervals. This is a simple function that takes the index and the new interval duration, as shown below.

  updateInterval: function(idx, interval) {
    this.state.appointments[idx].interval = interval;
    this.forceUpdate();
  },

Finally we need to render the appointments with the correct start times.

  render: function() {
    var _appts = [];
    var _start = this.state.start.clone();
    this.state.appointments.forEach(function(app, idx) {
      var _props = {key: app.key, start: _start, id: app.key, updateInterval: this.updateInterval};
      switch(app.what) {
        case "hair": { _appts.push(<Hair {..._props}/>); break; }
        case "nails": { _appts.push(<Nails {..._props}/>); break; }
      }
      _start = _start.clone().add(app.interval, "minutes");
    }.bind(this));
    return (
      ...

We also need to render the _appts list, so one final change.

      <p>Visit starts at {this.state.start.format("HH:mm")}</p>
      { _appts }

Summary

So it all works and the way it works is surprisingly flexible, if a little different than where I started 🙂 I’m sure there are better ways of doing all this, so get in touch if you can educate me 🙂

More Webpack

Following on from yesterday’s experiments with webpack and react, I’ve changed things round a little today in an effort to make things even simpler.

webpack.ProvidePlugin

One of the annoying aspects of splitting the single file into smaller pieces was the need to add the require lines to every one. I thought there must be a simpler way – and there is! The ProvidePlugin allows you to tell webpack that certain requires should be added as needed. To use it a few changes are needed in the webpack configuration.

First you’ll need to make sure that webpack is available, so add the require at the top of the file.

var webpack = require('webpack');

Then add a plugins section with the plugin and its configuration.

  plugins: [
    new webpack.ProvidePlugin({
      React: "react",
      ReactDOM: "react-dom"
    }),
  ],

Following this change, the javascript files can be simplified by removing the require lines for react or react-dom, so /components/Main.js becomes just

module.exports = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});

This proved very useful for the project, as a few components use jquery; once this plugin had been added – with the appropriate configuration line ($: “jquery”) – remembering which components needed the require line was no longer an issue.
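For completeness, the jquery line just slots into the same plugin configuration, something like this (assuming jquery is installed via npm):

  plugins: [
    new webpack.ProvidePlugin({
      React: "react",
      ReactDOM: "react-dom",
      $: "jquery"
    }),
  ],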

Source Maps

It always seems like a good idea to create a .map file, so it’s as simple as adding a line telling webpack to do just that.

devtool: "source-map",

CSS

Handling CSS is something that never seems as simple as it should/could be, but initially webpack seems to offer a solution. Two new loaders are needed, so install them via npm.

npm install --save-dev style-loader css-loader

Then we need to tell webpack that we can use them for css files, by adding a section to the loaders.

  module: {
    loaders: [
      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react']
        }
      },
      {
        test: /.+.css$/,
        loader: 'style-loader!css-loader'
      }
    ]
  }

As we’re using the resolve to keep require usage simple, we should update that as well (I’ve added the css files in an inventively named css directory),

  resolve: {
    modulesDirectories: ['node_modules', 'components', 'css'],
    extensions: ['', '.js', '.jsx', '.css']
  },

After making these changes, we need to actually add a require for some css. This was done in App.js as follows,

require('style.css');

However, running webpack produced a small surprise and a touch of confusion.

$ webpack
Hash: f4a6a633c9be19bddd79
Version: webpack 1.12.13
Time: 1717ms
        Asset    Size  Chunks             Chunk Names
    bundle.js  689 kB       0  [emitted]  main
bundle.js.map  808 kB       0  [emitted]  main
    + 164 hidden modules

Where was the CSS? The answer is simple, but also, to my mind, not obvious. It’s merged into the bundle.js and if you open it in an editor you’ll find it there. However, I want the css in a separate file for a number of reasons, so this solution only partly works. The answer turns out to be another plugin.

npm install --save-dev extract-text-webpack-plugin

Once installed there are quite a few changes required to use it. First off, we need to extract the css rather than let it be bundled with the javascript. To do this we need to change the css loader.

      {
        test: /.+.css$/,
        loader: ExtractTextPlugin.extract('style-loader', 'css-loader')
      }

Before we can use the ExtractTextPlugin we need to require it, so this line needs to be added at the top of the file.

var ExtractTextPlugin = require("extract-text-webpack-plugin");

Finally we also need to output our extracted css, so we need to add this entry to the plugins.

new ExtractTextPlugin("bundle.css")

Running webpack now gives the expected results.

$ webpack
Hash: 55234e19dea7f3c8f04f
Version: webpack 1.12.13
Time: 1757ms
         Asset       Size  Chunks             Chunk Names
     bundle.js     679 kB       0  [emitted]  main
    bundle.css  107 bytes       0  [emitted]  main
 bundle.js.map     794 kB       0  [emitted]  main
bundle.css.map   87 bytes       0  [emitted]  main
    + 164 hidden modules
Child extract-text-webpack-plugin:
        + 2 hidden modules

With this approach I can simply add a require line for any css a component needs and it will be automatically bundled. This works well, but the ordering of things being added isn’t always as I’d wish, so more care will need to be taken with how I write the css. This approach also opens up the prospect of moving to a LESS/SASS based approach, as the css can be processed before being added to the bundle.

I’m reasonably sure I don’t need a map file for the css, but I haven’t found any simple solutions yet. Answers on a postcard.

Starting with React & Webpack

In recent days I’ve been working on a small single page JS app using React and bootstrap. After getting over my initial issues, yesterday saw a lot of progress and I’m finding it far simpler to work with React than almost any other JS “thing” I’ve tried. However (you knew it was coming, didn’t you?), at present my single page app is just that – a monster of a single HTML file with a bunch of required files on my hard drive. While it’s allowed me to make good progress, going forward it’s not what I want.

Today it was time to dive back into the murky world of npm and webpack, and to start splitting the monster into lots of smaller pieces. Having found a lot of helpful tutorials online, I’m scribbling this to capture what I learned today and hopefully stop myself making the same mistakes in the future! If it helps anyone else then that’s a nice bonus.

There are probably many better ways to do this, but I’ve yet to find them, so if you have suggestions or comments then pass them on as I’m keen to learn and improve my understanding.

Starting Point

For this I’m going to start with a really simple HTML page in a newly created directory containing just the files I need.

The HTML file looks like this.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Yet Another Hello World...</h1>
    <div id="example">
    </div>

    <script src="react.min.js"></script>
    <script src="react-dom.min.js"></script>
    <script src="browser.min.js"></script>
    <script type="text/babel">
var Main = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});
ReactDOM.render(<Main />, document.getElementById('example'));
    </script>
  </body>
</html>

This works and produces the expected results, so now to make something simple very much more complicated 🙂

NPM

I’ll be using npm, so the next step is to get everything installed in preparation. I’ll skip the pain of figuring out what I missed first time around (and second, third etc) and just list everything that will be needed. I think the logic for the --save and --save-dev is right (--save if it’s needed for the final output, --save-dev if it’s only needed during development), but if it’s not I’m sure someone will yell.

npm init -y
npm install --save-dev webpack
npm install --save-dev babel
npm install --save-dev babel-core
npm install --save-dev babel-loader
npm install --save-dev babel-preset-react
npm install --save-dev babel-preset-es2015
npm install --save react
npm install --save react-dom

As I’m not going to be putting this into git and it’s only for my own use, I added

  "private": true

which stopped the constant warning about no repository being listed.

Structure

This is the area I am still experimenting with. There are loads of boilerplate projects in existence, but while they all generally agree on principles, the way they arrange things is often different. My small brain can’t handle too much complexity and things can always be changed later, so I’m going to start simple as this is a simple project.

/package.json
/webpack.config.js
/components
           /App.js
           /Main.js
/html
     /index.html

The plan is that all the react components go into (guess where?) components and the HTML file into html. I’ll use webpack to generate a single js file, also into html, so that I could distribute the entire directory. As I want to keep components as components, I’m also going to create an App.js file in components that will hold the top level React element.

NB I haven’t decided if I’m going to use .jsx for JSX formatted files. It seems like a good idea but it’s another change 🙂 For now I’ll try and make sure things will work whichever extension I use, but here it’ll be plain ol’ .js

/components/Main.js

var React = require('react');

module.exports = React.createClass({
  render: function() {
    return (
      <p>Hello World!</p>
    )
  }
});

/components/App.js

var React = require('react');
var ReactDOM = require('react-dom');
var Main = require('./Main.js');

ReactDOM.render(<Main />, document.getElementById('example'));

/html/index.html

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Hello World!</title>
  </head>
  <body>
    <h1>Yet Another Hello World...</h1>
    <div id="example">
    </div>

    <script src="bundle.js"></script>
  </body>
</html>

Having split all the files, I now need to configure webpack to build the bundle.js file.

Webpack

Webpack expects to find a file called webpack.config.js in the directory you call it from, so we’ll create one. We need to tell it which file is the “entry” – where it will start building from – and where we want it to create the output. If that’s all you supply there will be an error,

Module parse failed: /components/App.js Line 5: Unexpected token < You may need an appropriate loader to handle this file type.
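For reference, that error comes from a minimal config along these lines – entry and output only, with no loaders (essentially the full config below minus the module section):

var path = require('path');

module.exports = {
  entry: path.resolve(__dirname, "components/App.js"),
  output: {
    path: path.resolve(__dirname, "html"),
    filename: 'bundle.js'
  }
};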

The module configuration allows you to tell webpack how to handle the various files, so we need to add an entry that covers ours. To do this we add a module loader entry that tests for the appropriate files and uses babel-loader. To handle the JSX formatting you also need to tell it to use the react preset.

var path = require('path');

module.exports = {
  entry: path.resolve(__dirname, "components/App.js"),
  output: {
    path: path.resolve(__dirname, "html"),
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react']
        }
      }
    ]
  }
};

With this file in place, you should now be able to generate the required bundle.js file by running webpack.

Hash: 1a44318bada57501b499
Version: webpack 1.12.13
Time: 1484ms
    Asset    Size  Chunks             Chunk Names
bundle.js  677 kB       0  [emitted]  main
    + 160 hidden modules

After webpack runs OK, the file html/index.html can be loaded in a browser and looks exactly like the original file 🙂 Success.

Improvements…

While using ‘./Main.js’ for the require is OK, it seems a little messy, so it would be easier to be able to use ‘Main’. To allow this we need to tell webpack a bit more about where to find files, which we do using the resolve entry.

  resolve: {
    modulesDirectories: ['node_modules', 'components'],
    extensions: ['', '.js', '.jsx']
  },

Following this addition, we can change App.js to use

var Main = require('Main');

Using the default config file location for webpack is fine, but if it’s not in the root directory you need to pass webpack the --config flag. I had this in one of my iterations and forgot it a few times – all the more likely with code you don’t look at for a while. To avoid forgetting, it’s possible to add a build command to package.json telling it to use webpack; this can also be used to provide commonly used flags.

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },

becomes

  "scripts": {
    "build": "webpack --progress --colors",
    "test": "echo \"Error: no test specified\" && exit 1"
  },

To build using the command you need to use npm run build.

Next Steps

Now that I have a structure and a working webpack I can start to split up my monster file. I also need to figure out how to handle CSS and then look at adding some extras to webpack to get minification of the output. Figuring out how to strip the CSS down to the bare essentials would also be nice, but that’s an optimisation for the future 🙂