Simple ZFS Usage

For years all I heard from various people I know was “ZFS”. It was the answer to almost every question, and given how few operating systems supported it, it always appeared to be the promised land over the rainbow. That was then; in recent years the availability of ZFS has grown quickly and I can now fully appreciate that what they were saying was true. I’ve used ZFS on Ubuntu and am now using it on FreeBSD without too many problems.
I’m far from an expert but this may help others make the switch.

Keeping it Simple

My main concern is preserving data rather than speed, so I’ve chosen to go with a simple “mirror” setup for my pools. I’ve also not enabled compression at present, though I may in the future.
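
If I do enable compression later it should just be a property change rather than a rebuild – something like the sketch below (tank being the pool created further down, lz4 being the usual choice, and only data written after the change being compressed):

$ sudo zfs set compression=lz4 tank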

Root

On both Ubuntu and FreeBSD the ZFS commands require root privileges, hence the use of sudo below.

Pools?

ZFS file systems live within a ZFS pool. Before you can create a mountpoint and its associated file system you need a pool, and for a pool you need at least one physical drive available.
zpool operates on pools and zfs operates on file systems.

On my system I added two 4TB drives, which after checking dmesg and fdisk I confirmed were /dev/sdd and /dev/sde. As these were new disks I didn’t create any partitions, since I was about to hand the whole drives to the pool.

$ sudo zpool create tank mirror /dev/sdd /dev/sde

Once created it was time to check their status.

$ sudo zpool status
pool: tank
 state: ONLINE
  scan: none requested
config:
 
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
 
errors: No known data errors
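
The “scan: none requested” line is a reminder that no scrub has been run yet. Since the whole point of the mirror is preserving data, it’s worth running an occasional scrub so ZFS re-checks every block against its checksum – a quick sketch using the pool above:

$ sudo zpool scrub tank
$ sudo zpool status tank

The second command shows the scrub’s progress and, once it has finished, whether anything needed repairing.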

File Systems

One aspect I didn’t fully appreciate was that file systems are automatically mounted under the pool name. This default can easily be changed, but it can also make things much easier.

To create a file system with a mountpoint of /tank/media all you need is this command.

$ sudo zfs create tank/media

However, if you want the file system to be mounted as /media you can specify it at creation.

$ sudo zfs create -o mountpoint=/media tank/media

Once created, changing your mind is simple enough as well. To change the mountpoint from /tank/media to /media is just

$ sudo zfs set mountpoint=/media tank/media

The newly created file systems then appear just like any other mounted file system.

$ mount
/dev/ada4p2 on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
tank/media on /media (zfs, local, nfsv4acls)
tank on /tank (zfs, local, nfsv4acls)

I’m far from an expert on this, but it’s all quite logical and if I can figure it out… 🙂

Fail2ban Logging

Over the last few days I’ve been trying to help one of my users who had an odd connectivity issue with my server. After looking at the obvious causes it started to look more and more like he had triggered one of the fail2ban rules and was being blocked by iptables. This has happened a few times before and normally checking the rules shows the problem, but this time nothing obvious appeared.

$ sudo iptables -L
...
Chain fail2ban-ssh (1 references)
target     prot opt source               destination         
REJECT     all  --  62-210-205-239.rev.poneytelecom.eu  anywhere             reject-with icmp-port-unreachable
RETURN     all  --  anywhere             anywhere   
...

After trying a few other changes and finding the results to be very intermittent, it started to look/feel more like an iptables issue – probably a rule that was being triggered and then expiring, which would explain the intermittent behaviour. But how to view the IP addresses that had been blocked? There was nothing in the logs…
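
As an aside, fail2ban’s own client can list what it currently has banned for a jail, which is worth checking before touching iptables at all (assuming the jail is called ssh, as the chain above suggests):

$ sudo fail2ban-client status ssh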

Don’t Do This!

I started looking for ways to enable logging in iptables, and after finding a few places that detailed how it was done I made some changes – only to lock myself out of the server! Yes, iptables is a very powerful tool and getting it wrong causes real problems when you are connected to your server remotely 🙁 However, as I hadn’t configured iptables to load rules at startup, a simple reboot would have restored my access had I thought about it at the time instead of reaching for a rescue image 🙂

Logging via Fail2ban

The solution turns out to be very straightforward! My jail.conf file had this configuration

# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport

To enable logging of the actions this is simply changed to use the iptables-multiport-log action.

# Default banning action (e.g. iptables, iptables-new,
# iptables-multiport, shorewall, etc) It is used to define
# action_* variables. Can be overridden globally or per
# section within jail.local file
banaction = iptables-multiport-log
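
For the change to take effect fail2ban needs to re-read its configuration, e.g.

$ sudo service fail2ban restart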

Messages are logged to /var/log/syslog on this system.

...
Jun 24 15:01:05 x kernel: [ 3773.082548] fail2ban-ssh:DROP IN=eth0 OUT= MAC=d4:3d:7e:ec:ea:55:cc:e1:7f:ac:56:9e:08:00 SRC=121.18.238.19 DST=aaa.aaa.aaa.aaa LEN=188 TOS=0x00 PREC=0x00 TTL=51 ID=14196 DF PROTO=TCP SPT=53375 DPT=22 WINDOW=296 RES=0x00 ACK PSH FIN URGP=0 
...
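
With entries in that format, pulling out (and counting) the offending source addresses is a standard one-liner:

$ grep 'fail2ban-ssh:DROP' /var/log/syslog | grep -o 'SRC=[0-9.]*' | sort | uniq -c | sort -rn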

Services

Having just started migrating from Ubuntu to FreeBSD I found the return to using /etc/rc.conf to be “quaint”, but after today’s issue I have a new-found respect for it. Rather than having to spend time looking around for how/where a service is started on Ubuntu, it’s all in one place with FreeBSD. Not only that, but when using a rescue image I can mount the drive, find /etc/rc.conf and switch off an offending service quickly and easily.

Thankfully my server is still running 14.04 and so hasn’t been ruined by systemd, otherwise this wee adventure would have been far more painful. Another good reason to keep the migration moving forward.

Multiple Postfix Instances

I’ve been using Postfix for around 5 years now and it’s been a great solution for mail. Initially I used a single instance, but as the mail volume grew it started to cause bottlenecks and frustrations. The solution was to move from one instance to three!

Basic Flow

[Diagram: mail flow between the Postfix instances]
The theory is to have each instance of Postfix performing a specific role and using the standard communication mechanisms between them. Each instance operates independently, using its own queues, while sharing a common log.

Submission Instance

The submission instance listens only on localhost and has few additional checks.
This does open a potential issue: anything running on the host can reach the port directly, so a compromised host could use it – but if the host is compromised then there are bigger issues 🙂

Input Instance

My input instance listens on the usual SMTP and SMTPS ports on all available interfaces, allowing local access as well as external. It has a lot of additional checks – spam checking, DMARC and so on – as well as rate limiting.

Output Instance

All mail accepted by the submission or input instances arrives here, whether for local or external delivery. This is the only instance that delivers mail, and as such only the appropriate checks are carried out.

Installation

The Postfix website has a page explaining how to set up the various instances and it’s a great place to start, but in my experience it takes a while to get the configuration working exactly as you want it. Adding additional checks sometimes has unexpected consequences, so the usual guidance of “make small changes, let the dust settle” applies.
As Postfix makes use of chroot, after the initial installation I found that a number of required files weren’t available inside the chroot, so I had to copy them across into the correct places within the directory structure. This also means that after system updates some of those files and libraries are out of date and have to be refreshed manually. Log entries are made for such files following a restart.
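
As an illustration of the sort of copy involved (resolv.conf is a common culprit; the path assumes the default queue directory of /var/spool/postfix):

$ sudo cp /etc/resolv.conf /var/spool/postfix/etc/resolv.conf
$ sudo postfix reload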

Start/Stop/Reload

This doesn’t change and acts on all instances.

$ sudo postfix reload
postfix/postfix-script: refreshing the Postfix mail system
postfix-out/postfix-script: refreshing the Postfix mail system
postfix-in/postfix-script: refreshing the Postfix mail system

Queue Checking

After making the changes it took me a while to always remember how to check the queues! It’s covered in the multi-instance documentation, but the fact that the usual command still works and doesn’t tell you it’s only looking at one instance can be a bit unsettling.

$ mailq
Mail queue is empty

Checking all the instances requires the use of postmulti.

$ postmulti -x mailq
Mail queue is empty
Mail queue is empty
Mail queue is empty

Each instance reports separately, hence the 3 lines in the response.
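
A single instance can also be targeted with the -i option and the instance name, e.g. to look only at the output instance seen in the reload example above:

$ postmulti -i postfix-out -x mailq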

Summary

While it took a while to get every instance running as I wanted, the advantage of having each instance run at its own pace has been a huge increase in throughput along with a reduction in the load on the machine.

PostgreSQL & FreeBSD 10.3

I’ve recently started moving back to FreeBSD from Ubuntu. As it’s a large move and I’ve not touched FreeBSD for quite a few years, baby steps are required. With that in mind I’ve started small with my home server and once I’m comfortable with that and how things work I’ll look at moving my online server. The reasons for the move will have to wait for another post 🙂

Previously the home server had both MySQL and PostgreSQL installed and running, as that reflected how the online server was set up. With this new start I’ve decided to skip MySQL and move entirely to PostgreSQL. However, the change isn’t without its issues and challenges – mainly around getting it installed!

Build

The build was easy enough. I’m using ports to keep things as up to date and configurable as possible, since time isn’t a large factor in this build.

$ cd /usr/ports/databases/postgresql95-server
$ sudo make install clean

After answering the various configuration choices all went well and the build completed. Reading the post install text will reveal you need another step before you can do much.

Initdb

After installing I wanted to choose a different location for the actual database files – one that would be on my zfs pools with all their extra protections. With previous installations I would have edited the configuration file to point to the new location, but looking around there were no configuration files! Hmmm. As a first step, I decided to just run the initdb command and see what was installed.

$ sudo /usr/local/etc/rc.d/postgresql initdb

Files

After running it, all the expected files were present, but the database cluster had also been created under the same tree. Not quite what I wanted. Time to look at the rc.d script and figure out what was going on…

In the startup script I found this block, which points to the “usual” FreeBSD configuration mechanism as being usable for the changes I wanted.

load_rc_config postgresql

# set defaults
postgresql_enable=${postgresql_enable:-"NO"}
postgresql_flags=${postgresql_flags:-"-w -s -m fast"}
postgresql_user=${postgresql_user:-"pgsql"}
eval postgresql_data=${postgresql_data:-"~${postgresql_user}/data"}
postgresql_class=${postgresql_class:-"default"}
postgresql_initdb_flags=${postgresql_initdb_flags:-"--encoding=utf-8 --lc-collate=C"}

Configuration

After finding that the configuration settings would be honoured (as I should have expected) I just needed to add them.

$ sudo vi /etc/rc.conf
...
postgresql_enable="YES"
postgresql_data="/usr/local/pgdata"
postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C"

Close, but not quite…

After making the changes I tried again.

$ sudo service postgresql initdb
The files belonging to this database system will be owned by user "pgsql".
This user must also own the server process.

The database cluster will be initialized with locale "C".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /usr/local/pgdata ... initdb: could not create directory "/usr/local/pgdata": Permission denied

Hmm, OK, so it’s almost working, but as the PostgreSQL commands are run as the pgsql user and not root, the inability to create a new directory isn’t unexpected. I guess what I need to do is create the directory, change its ownership and then run the initdb command.

$ sudo mkdir /usr/local/pgdata
$ sudo chown pgsql /usr/local/pgdata
$ sudo service postgresql initdb
...
Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /usr/local/pgdata -l logfile start

Success

The database was installed, the configuration files are all in the same location, and while for this post I just used /usr/local/pgdata I can now create the database where it needs to be. Interestingly though, removing the /usr/local/pgsql directory caused the initdb script to fail, so that directory needs to be present even though it stays empty throughout the process – probably because it’s listed as the home directory of the pgsql user.

For future installs, this will be my process:

  1. build and install postgresql
  2. set postgresql variables in /etc/rc.conf
  3. create postgresql data directory and change ownership
  4. run postgresql initdb
  5. start postgresql
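
Roughly, in commands (using the same paths and port build as above):

$ cd /usr/ports/databases/postgresql95-server && sudo make install clean
$ sudo vi /etc/rc.conf        # add the postgresql_* lines shown earlier
$ sudo mkdir -p /usr/local/pgdata
$ sudo chown pgsql /usr/local/pgdata
$ sudo service postgresql initdb
$ sudo service postgresql start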

Update

It was pointed out to me that I probably wanted to set the encoding of the database to UTF-8, so I needed to add this line to my /etc/rc.conf file

postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C"

This line is given at the top of the script but I’d missed it earlier.

RaspberryPi & SWD

The KroozSD board now comes with a handy SWD connector, a simple 3 pin 1mm JST located in the middle of the board. As debugging is one of the harder aspects of embedded development the connector has always been an interesting addition, but finding a way to interact with the port has proved tricky.

[Image: the KroozSD SWD connector]

While I have several devices that claim to offer the required interfaces, none of them have proved to be supported. However, yesterday I came across an article explaining both how to build the required software and how to physically connect the port to a RaspberryPi, and so less than an hour later I was trying to figure out SWD!

Physical Connections

Figuring out how to connect the SWD pins to the RaspberryPi was my first challenge. Thankfully this post gave a diagram, which while not as clear as it could have been gave me sufficient information to make a start. I used a small breadboard to allow me to put the resistor inline and attached it to the RaspberryPi via some jumper cables.

[Image: the RaspberryPi wired to the SWD connector]

Now the port was attached, it was time for the software.

Software

This Adafruit article provided exactly what I needed to build OpenOCD on the RaspberryPi. The instructions aren’t hard to follow but I ignored the part about connecting the device as I had already done that and the configuration section wasn’t relevant for the STM32 board.

As per the Adafruit tutorial, I saved the config shown below to openocd.cfg.

source [find interface/raspberrypi2-native.cfg]
transport select swd

set BSTAPID 0x06413041
source [find target/stm32f4x.cfg]
reset_config srst_only srst_nogate

init
targets

After saving the file I then just ran openocd 🙂

Hello?

Initially there were just invalid responses, so I used a small connector to force the board to boot using the built-in ST bootloader rather than the Krooz bootloader (UART 3 SYS pin connected to the +3.3V pin). This brought the board up in a simplified state and enabled SWD. When running this time things looked a little more encouraging.

pi@raspberrypib:~/swd $ sudo openocd
Open On-Chip Debugger 0.10.0-dev-00319-g9728ac3 (2016-05-22-19:00)
Licensed under GNU GPL v2
For bug reports, read
	http://openocd.org/doc/doxygen/bugs.html
BCM2835 GPIO nums: swclk = 25, swdio = 24
BCM2835 GPIO config: srst = 18
srst_only separate srst_gates_jtag srst_push_pull connect_deassert_srst
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
srst_only separate srst_nogate srst_push_pull connect_deassert_srst
cortex_m reset_config sysresetreq
srst_only separate srst_nogate srst_push_pull connect_deassert_srst
Info : BCM2835 GPIO JTAG/SWD bitbang driver
Info : SWD only mode enabled (specify tck, tms, tdi and tdo gpios to add JTAG mode)
Info : clock speed 2002 kHz
Info : SWD DPIDR 0x2ba01477
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
Error: stm32f4x.cpu -- clearing lockup after double fault
Polling target stm32f4x.cpu failed, trying to reexamine
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
    TargetName         Type       Endian TapName            State       
--  ------------------ ---------- ------ ------------------ ------------
 0* stm32f4x.cpu       cortex_m   little stm32f4x.cpu       halted

I will admit the first time it ran successfully I was a little surprised and wondered what to do next, but some quick searches revealed the answer!

SWD Adventures

Given my RaspberryPi is running without a screen or keyboard, the simplest way to access it is remotely. OpenOCD provides just such access via a telnet connection, so from my laptop it’s simple enough to get in.

$ telnet 192.168.1.95 4444
Trying 192.168.1.95...
Connected to 192.168.1.95.
Escape character is '^]'.
Open On Chip Debugger
>

I’m primarily interested in programming the flash memory, so I started by trying to get information about that.

> flash info 0                    
Device Security Bit Set
#0 : stm32f2x at 0x08000000, size 0x00100000, buswidth 0, chipwidth 0
	#  0: 0x00000000 (0x4000 16kB) protected
	#  1: 0x00004000 (0x4000 16kB) protected
	#  2: 0x00008000 (0x4000 16kB) protected
	#  3: 0x0000c000 (0x4000 16kB) protected
	#  4: 0x00010000 (0x10000 64kB) protected
	#  5: 0x00020000 (0x20000 128kB) protected
	#  6: 0x00040000 (0x20000 128kB) protected
	#  7: 0x00060000 (0x20000 128kB) protected
	#  8: 0x00080000 (0x20000 128kB) protected
	#  9: 0x000a0000 (0x20000 128kB) protected
	# 10: 0x000c0000 (0x20000 128kB) protected
	# 11: 0x000e0000 (0x20000 128kB) protected
STM32F4xx - Rev: Z
> flash banks
#0 : stm32f4x.flash (stm32f2x) at 0x08000000, size 0x00100000, buswidth 0, chipwidth 0

One thing that did catch me out initially was that this is just a telnet session, so any files that I referenced needed to be on the RaspberryPi *not* my local machine. Once the required files were transferred across it was time to update the firmware. Thankfully OpenOCD provides a simple way to do this.
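
Getting a firmware image onto the Pi is just a copy, for example (IP address and directory as seen above):

$ scp ap.bin pi@192.168.1.95:~/swd/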

> program ap.bin erase verify 0x08004000
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0
adapter speed: 4061 kHz
** Programming Started **
auto erase enabled
stm32x device protected
failed erasing sectors 1 to 5
embedded:startup.tcl:454: Error: ** Programming Failed **
in procedure 'program' 
in procedure 'program_error' called at file "embedded:startup.tcl", line 510
at file "embedded:startup.tcl", line 454

It makes sense that the device can’t be programmed while locked, so to unlock it the stm32f2x command is used.

> stm32f2x unlock 0
stm32f2x unlocked.
INFO: a reset or power cycle is required for the new settings to take effect.
> reset halt
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0

Now the device is unlocked, I can try programming again.

> program ap.bin erase verify 0x08004000
adapter speed: 2002 kHz
stm32f4x.cpu: target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0x61000003 pc: 0x2000002e msp: 0x2001fff0
adapter speed: 4061 kHz
** Programming Started **
auto erase enabled
wrote 245760 bytes from file ap.bin in 5.491092s (43.707 KiB/s)
** Programming Finished **
** Verify Started **
verified 153560 bytes in 0.333037s (450.283 KiB/s)
** Verified OK **

Now that it has been programmed, time to lock the device again!

> stm32f2x lock 0                       
stm32f2x locked

Conclusion

I’m sure there is a lot more that I can do with SWD and I now have a way of connecting and using it. Sadly I’m no further forward with getting the board working 🙁

Quad Update #7

In my earlier post I said that the bootloader for the KroozSD was luftboot, but in reality it’s available here. The version contained in that tree doesn’t appear to be fully up to date as there is no CRC and the strings reported by the device are different, but I’m assuming the current code is similar.

While I realise I could simply switch to a different laptop, I can’t help but feel this should be a simple enough problem to fix and as I’m always interested in learning another wee adventure begins!

USB Debugging

Having gone from working USB transfers to non-working USB transfers I’ve been doing a little USB debugging. The stm32_mem.py script uses PyUSB, so my first stop was to enable additional debug information for the library. The library authors have made this very easy.

$ export PYUSB_DEBUG=debug
$ ./stm32_mem.py ...
2016-05-21 10:05:06,719 DEBUG:usb.backend.libusb1:_LibUSB.__init__()
2016-05-21 10:05:06,721 INFO:usb.core:find(): using backend "usb.backend.libusb1"
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.enumerate_devices()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()
2016-05-21 10:05:06,721 DEBUG:usb.backend.libusb1:_LibUSB.get_device_descriptor()

The level of information is impressive and allowed me to confirm that no unexpected calls were being made and that what I thought was taking place was reflected in the calls actually made. Having checked that, the next step was to look at whether other calls were being made by the system. For this I turned to usbmon.

The Ubuntu kernel already has the basic support needed, so the first step was simply to load the usbmon kernel module and verify that it was available.

$ sudo modprobe usbmon
$ ls /sys/kernel/debug/usb/usbmon
0s  0u  1s  1t  1u  2s  2t  2u

Finding the bus number was easy enough.

$ dmesg
...
[25450.332261] usb 1-2: new full-speed USB device number 34 using xhci_hcd
[25450.462462] usb 1-2: New USB device found, idVendor=0483, idProduct=df11
[25450.462505] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[25450.462510] usb 1-2: Product: KroozSD CRC
[25450.462514] usb 1-2: Manufacturer: S.Krukowski
[25450.462517] usb 1-2: SerialNumber: Bootloader

So, bus #1 was where to look, but which endpoint did I need? The usbmon documentation explains that endpoints ending with ‘u’ are the best ones to use, so the ‘1u’ endpoint was what I wanted. Getting the data into a file so I could analyse it turned out to be as easy as

$ cat /sys/kernel/debug/usb/usbmon/1u > krooz.mon

Running the stm32_mem.py script again (in a different terminal) produced a capture file containing a lot of information, so I used CTRL-C to stop the capture. Analysing the data turned out to be simple enough as well, thanks to Virtual USB Analyzer. The app wasn’t installed by default but is in apt, so installation and usage were simple enough.

$ sudo apt-get install vusb-analyzer
$ vusb-analyzer krooz.mon

The app displayed the information needed, but without DFU support built in some interpretation was required.

Not an issue…

One of the recurring entries in dmesg has been

[25457.004284] usb 1-4: usbfs: interface 0 claimed by btusb while 'python' sets config #1

Looking at this in more detail, it’s not an issue; it’s merely informing me that the discovery phase of the stm32_mem.py script has attempted to claim ownership of my bluetooth device (which has a DFU endpoint). It would be nice if PyUSB had a way to check whether a device was already claimed, but I couldn’t find anything that allows such a check. Still, it removes one source of concern.

Results?

Everything I looked at showed me that the script does exactly what I expected it to and there is no interference from anything else on my system. Not a big surprise and certainly what I expected, but it also illustrates how easy it can be to get the additional information I wanted. As the scripts haven’t changed I can rule them out as the source of the problem, which would imply that the bootloader code has an odd bug in it somewhere. Figuring out what and how to fix it will be much harder.

The bootloader code I have looked at all appears to be simple enough that there is only one way the manifestation stage can be triggered.

static int usbdfu_control_request(usbd_device *usbd_dev,
                                  struct usb_setup_data *req, u8 **buf,
                                  u16 *len,
                                  void (**complete)(usbd_device *usbd_dev, struct usb_setup_data *req))
{
    if ((req->bmRequestType & 0x7F) != 0x21)
        return 0; /* Only accept class request. */

    switch (req->bRequest) {
        case DFU_DNLOAD:
            if ((len == NULL) || (*len == 0)) {
                usbdfu_state = STATE_DFU_MANIFEST_SYNC;
                return 1;
            } else {
        ...

This is meant to be triggered by an empty download request, but somehow it is being triggered following a 2050 byte download. The function is a callback supplied to the libopencm3 USB setup code,

    usbd_register_control_callback(usbd_dev,
                                   USB_REQ_TYPE_CLASS | USB_REQ_TYPE_INTERFACE,
                                   USB_REQ_TYPE_TYPE | USB_REQ_TYPE_RECIPIENT,
                                   usbdfu_control_request);

The sequence leading to the crash is always the same.

DFU_DNLOAD DFU_ERASE command
DFU_DNLOAD 2050 bytes
DFU_GETSTATUS => STATE_DFU_DNBUSY
DFU_GETSTATUS => STATE_DFU_IDLE
...
DFU_DNLOAD DFU_ERASE command
DFU_DNLOAD 2050 bytes of data
DFU_GETSTATUS => STATE_DFU_DNBUSY
DFU_GETSTATUS => STATE_DFU_MANIFEST

In all my debugging I see that the download transfer completes for the erase command and then for the data, and after a few transfers (normally 3) working as expected, the manifestation stage is triggered. Answers on how to debug further are welcomed on postcards (or emails) 🙂

Quad Update #6

#6! Who knew I’d get this far and still not have a flying quad? At least I’m getting closer and learning a lot along the way…

Satellite Receiver

I’ve updated the spektrum_serial.py code so that it now converts the split channels into appropriate values, and having watched it, what it produces seems sane. Using this as a basis I changed the code in paparazzi to do the same conversions and rebuilt. All was well, so the next step was to flash the new firmware to the board – as usual.

Denied!

The flashing process is sometimes a little fussy and from time to time needs a few attempts, but after 30 or so attempts it was still not working. This is very unusual, and checking the board it seemed as though the firmware I had flashed a while back had been removed – I have an empty board ready and waiting for code! The bootloader is still in place and when connected the USB device appears, so it’s just the upload that’s causing the problem. Given I’ve always uploaded using this laptop the sudden change is a little strange.
The KroozSD board uses the luftboot bootloader (or a modified version thereof) and a custom uploader which implements the DFU protocol. Running with debugging switched on didn’t show much, so I dug a little deeper. After adding some code to print the state being returned by the bootloader the problem became apparent (the output below includes my additional debug messages).

Using device : ID 0483:df11 S.Krukowski - KroozSD CRC - Bootloader
Programming memory from 0x08004000...
[0%=====================50%=====================100%]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Download Idle [05]
State = DFU Download Busy [04]
State = DFU Manifest [07]

The DFU process should only enter the final “Manifestation” state when the upload is complete and a zero length download is sent. That’s what the DFU specification says and what the luftboot code looks for, so quite why it’s entering it so early is unclear. Each write until then appears to be sent OK and there is no sign of any extra zero length requests being sent.

What next?

My options appear to be switching to another computer for uploading the firmware, getting an external USB interface that doesn’t cause this problem or figuring out what’s going on and fixing it. Of those I suspect the last may be a little beyond me 🙁 I guess when I have time next week I’ll try again with a different computer.

Quad Update #5

Having spent some more time looking at the Spektrum SPM9246 satellite receiver and its output I’ve made some progress. When trying to use it with Paparazzi I’ve had very variable results and it still fails to work reliably.

Direct Binding Doesn’t Work 🙁

I’ve been unable to get binding via the KroozSD board to work with the satellite, and watching the output using pigpiod (as detailed in Quad Update #4) the reason became clear: the simple series of pulses was never visible. I’m still a little unclear why, but after hooking up the RaspberryPi I managed to replicate the required pulse pattern and the satellite receiver went straight into bind mode. Binding to my Spektrum DX8 failed however – probably because the required responses weren’t sent, as these would normally be provided by the AR8000. After the failed attempt the receiver was not receiving any information, so I restarted the bind process using the AR8000 and all was well again.

As this makes sense and appears logical, I think it’s worth ignoring the possibility of binding my satellite receiver directly from a Paparazzi board.

DSMX

Every time I bind the receiver the transmitter reports its connection as DSMX and 22ms. Given the individual frames are sent every 11ms, this means 32 bytes are sent per 22ms update, which after removing the two 2 byte headers leaves 28 bytes of channel data, or 14 channels available. Watching the data I often see “split channels”, where movement of one control is reflected across 2 channels. As an example, when the channel data is split, idle throttle appears as (output from spektrum_serial.py)

    0    1
 ---- ----
  354    0

Increasing the throttle position results in

    0    1
 ---- ----
  900    0

Increasing it further, results in

    0    1
 ---- ----
    0  500

Every time I see this behaviour the number of bits per channel is 10, whereas when the number of bits per channel is 11 I see a single value. From this I am assuming that DSMX always uses 2048 values and hence needs 11 bits of data per channel. If the transmitter/receiver decide to transmit 10 bit data then 2 channels are required to carry the required accuracy (two 10 bit numbers for one 11 bit value) and hence the channels are split. While the throttle data is simple to understand, the other “stick” controls are a little more interesting. The figures below are for the pitch control, but similar results are seen for yaw and roll.

    4    5
 ---- ----
 1004    0

With the pitch fully forward

    4    5
 ---- ----
  320    0

And fully back

    4    5
 ---- ----
    0  657

AUX 1 is a simple toggle switch, but even that is shown across 2 channels.

    8    9                            8    9
 ---- ----   switched from 0 to 1  ---- ----
  342    0                            0  682

AUX 2 is a 3 way position switch, and the values are similarly recorded across 2 channels.

   12   13                           12   13                           12   13
 ---- ----   switched from 0 to 1  ---- ----   switched from 1 to 2  ---- ----
    0  682                            0    0                          342    0 

AUX 3 is a rotary switch, so starting with it at the 7 o’clock position,

   14   15                              14   15
 ---- ----   turning fully clockwise  ---- ----
    0  682                             342   26

These are just some values I grabbed today, but they do show the range of values I have been seeing. Switching the transmitter off and then on again (while still observing the data) sometimes results in the data changing from 10 bit to 11 bit, which then results in single values being reported (AUX 2 was at position 2 and AUX 3 at the fully clockwise position)

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
  354  998 1006 1006  342 1706  342  342

Changing to full throttle position, AUX 1 moving from 0 to 1 and AUX 2 moving to position 0 resulted in

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
 1698  998 1006 1006 1706 1706 1706  342

With AUX 2 in position 1 and AUX 3 at the 7 o’clock position,

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
 1698  998 1006 1006 1706 1706 1024 1706

The pitch value (channel 2) changes from 322 at fully forward to 1681 fully aft.

Going Forward

Every time I switch on my transmitter it starts off sending 10 bit data, so I have to deal with the split channels. Until I can reliably detect and convert that data into the single values that are needed, I suspect it will never work reliably with paparazzi. The challenge is to convert this 10 bit data

    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14
 ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
  354    0  998    0 1006    0 1006    0  342    0    0  682    0  682  939

into these 11 bit values

    0    1    2    3    4    5    6    7
 ---- ---- ---- ---- ---- ---- ---- ----
  354 1003 1004 1011  342 1706 1706  936

Transmitter Settings

The other option I have is to change the transmitter setting for the frame rate to force 11ms updates. Hopefully this will cause the data to be always sent as 11 bit and hence the problem will no longer be an issue. I’ll try this soon and see what happens.

UPDATE – I tried both the 11ms and DSM2 options under the Frame Rate settings menu, and from what I observed neither makes much difference; the behaviour doesn’t change, so I have reverted to the default settings.

spektrum_serial.py

The small python script I have written to allow me to watch the output from the satellite receiver is now available on github. As usual, any comments or suggestions to improve it are welcome.

Quad Update #4

Following on from my experiments with the radio control receiver I still found myself unable to simulate the binding process. Everything I’d read suggested that a series of pulses needed to be sent just after power up, but the code in paparazzi that did this didn’t work for me.

Capturing the pulses

The starting point was the existing AR8000: when this is powered up with a bind plug inserted and the satellite receiver attached, both go into bind mode (rapid flashing of their orange lights). It was obvious that the AR8000 sent the pulses to the satellite to trigger this, so by watching the signal line I should be able to see them. The next question was how.
Bring out the RaspberryPi 🙂

Both receivers are happy at 3.3V and the GPIO pins on the RaspberryPi are also 3.3V, so that part was easy. The +3.3V and GND pins supplied power, and after some surgery on a cable I ended up with this (apologies for my poor drawing skills).

Bind signal setup

The next step was to watch for the signals.

pigpiod and piscope

While looking for a way to capture the changes in signal I came across a post detailing the use of a daemon and client to do exactly that. The daemon is pigpiod, which needs to be installed and running on the RaspberryPi (obviously). Downloading, building and installing were very easy. Running the daemon proved to be as underwhelming as I had hoped – it just sat and did its thing 🙂 The daemon allows the GPIO pins to be monitored, and the data can be read remotely, making it very flexible.
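
For the record, the build boils down to the usual steps once the source has been downloaded and unpacked, followed by starting the daemon:

$ make
$ sudo make install
$ sudo pigpiod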

I decided to run piscope on my laptop, so grabbed the code, built it and then ran it. Here I hit a minor surprise when I was greeted with a segfault. Trying the usual command line options for help resulted in the same response. Looking at the instructions on the site provided the answer.

export PIGPIO_ADDR=192.168.1.95
./piscope

Now I was greeted with the application running.

piscope running

I opted to have only the GPIO pin I was using shown (Misc > GPIOs) and made sure that the live feed was running.

GPIO selection. A Select All/Select None option would be nice :-)

[Image: piscope showing GPIO 15]

Powering on the AR8000 with the bind plug inserted resulted in a series of pulses. Finding the correct scale and position to look at these in the piscope app took a few attempts, but with some experimentation I had a screen showing what I wanted. There were 9 pulses sent, each 120us long with a 120us gap between them. This agreed with the code I had seen, which left me a little confused as to why the binding wasn’t working when attached to the autopilot board. Once the binding was complete, the usual flow of information was visible and the 11ms separation of frames was very evident.

Having never thought about using the RaspberryPi this way before, I’ve been pleasantly surprised by how straightforward it was.

pullup not pulling up?

Paparazzi allows you to have a “bind pin” which functions much like the bind plug for the AR8000 – it starts the bind process. I had this set up, but looking at the code revealed that the bind code was always being executed, even when the bind pin wasn’t activated. Hmmm. More investigation revealed that the function call that was meant to activate the pin and leave it “pulled up” wasn’t doing that, so the pin was always being sensed as clear. Easy enough to find and fix. Once fixed, the code was only executed when the bind pin was activated, though I still didn’t get the binding process started.

Single file pull request?

The next step would have been to issue a pull request to the main paparazzi branch, but there’s a previous change in my tree that I didn’t want to include in the pull request. I’m afraid my github fu is too weak to know how to do such a thing – answers on a postcard! I tried using the cherry-pick option, but that didn’t work and so now I’m left with my local repo in an odd state of affairs! D’oh.
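
One approach that I believe should work (assuming the fix is a single commit, the main repository is configured as the upstream remote and my fork as origin – the branch name is just an example) is to create a clean branch from upstream and cherry-pick only that commit into it:

$ git fetch upstream
$ git checkout -b bind-pin-fix upstream/master
$ git cherry-pick <commit id of the fix>
$ git push origin bind-pin-fix

The pull request is then made from that branch, leaving the other local changes out of it.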

This is not the GPS you want…

Having made the cable to connect my GPS to the board I was surprised when it failed to work. Further investigation revealed that I had bought the wrong version! So, if you want a GPS from Navilock, the one you want should end with TTL, not ERS. All is not lost, as I can add a serial interface to the module and will therefore be able to use it in another project. I have found another supplier for a GPS module and it should be here soon.

Molex picoblade

The 1mm connectors really are as fiddly and annoying to use as their size suggests. I’ve managed to make the cables I need but the process hasn’t been without a few failures. The crimpers seem good and are very well made. Arriving wrapped in Japanese newspaper was a nice touch 🙂

Quad Update #3

When I started updating the quad I didn’t expect it to be an instant process, but I didn’t really expect it to take as long as it is taking 🙁 The delays are due to a number of reasons, but the time it takes for some orders to arrive is certainly a large contributor.

Radio Control

I’ve been using a Spektrum DX8 for radio control, and with the previous board the individual “servo” leads were connected directly from the AR8000 receiver. The satellite receiver, an SPM9645, was attached, but both boxes were required and took up a reasonable amount of space.

Spektrum AR8000 & SPM9645

On the KroozSD board there is no place for the individual servo leads to attach; only a UART connection is discussed. This didn’t make much sense initially, but after some reading it became apparent that the SPM9645 is a fully fledged receiver that communicates via a serial interface, so the board only needs that one small module. The space saving and the simplicity of a single wire are huge advantages, but once again I ran into the issue of how to attach it. I’m not sure what the connector on the receiver is (1.5mm JST?) but the UART connector is a 4 pin Molex PicoBlade, a connector I’m still unable to make cables for. Inside the Spektrum package I did find a cable that allowed me to connect the receiver using the standard 3 pins, and as 2 of the pins are ground and power with a single wire for the communications, I started looking at whether I could use that cable and one of the PWM connectors. The answer turned out to be yes, but there is a slight issue. The power supplied via those pins is 5V but the receiver is meant to run at 3.3V, so while it may be able to handle the voltage it’s not worth the risk long term.
I did connect it for testing and to ensure that the code I was looking at worked, but the results never quite looked right and the values being reported didn’t make sense to me. As I like to understand such things, more investigation was required!
Longer term I have ordered a small voltage regulator that will allow me to use the pin connection safely, though it could be a while making its way here from China!

Can we read it?

There are plenty of articles on the web about the output from the satellite receiver, but many of them deal more with the timing than the values, so I decided to see if I could read the data and see what was going on. Knowing it was a serial stream and that it needed 3.3V, I hooked up a Raspberry Pi I had and looked for the GPIO connections I needed to make.

Raspberry Pi GPIO pinouts

The pins needed were therefore 3.3V (pin #1), GND (pin #9) and serial RX (pin #10). The cable does twist, so I was careful to ensure I connected them correctly 🙂

Raspberry Pi pinouts

Once wired up I connected to the Pi via SSH and tried to read data, but nothing appeared and a number of messages appeared about problems with the serial port. Having read about the serial port being used for kernel messages I wasn’t surprised, but the posts I found to correct it didn’t seem to apply. After trying a few different suggestions I eventually came across a post suggesting the raspi-config command, which proved to be the solution. Switching off the option for kernel debug output (under Advanced Settings) and restarting the Pi resulted in access to the serial port and data flowing from the receiver. I then wrote a small python script using the excellent PySerial module (installed via pip) to read it.

[Images: the satellite receiver connected to the Raspberry Pi]
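
Before reaching for Python, a quick sanity check from the shell will also show the raw frames arriving (assuming the Pi’s serial port is /dev/ttyAMA0 and the commonly quoted Spektrum satellite settings of 115200 baud, 8N1):

$ stty -F /dev/ttyAMA0 115200 raw -echo
$ hexdump -C /dev/ttyAMA0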

Yes we can!

Now that data was flowing, the next step was to try and interpret it. Thankfully many had gone before me and so I knew what to expect!

The data flowed from the serial port in 16 byte blocks, each of which looked similar.

00 5A 0B E9 2E AA 13 E8 1B F3 31 56 FF FF FF FF
00 5A A1 56 01 62 3A 0C FF FF FF FF FF FF FF FF
00 5A 0B E9 2E AA 13 E8 1B F3 31 56 FF FF FF FF
00 5A A1 56 01 62 3A 0C FF FF FF FF FF FF FF FF
00 5A 0B E9 2E AA 13 E8 1B F3 31 56 FF FF FF FF
00 5A A1 56 01 62 3A 0C FF FF FF FF FF FF FF FF

This was what I had been expecting: 2 bytes of “header” followed by 7 sets of channel information. As the DX8 is an 8 channel transmitter and there are only 7 sets per frame, the repetition of the 2 frames made sense. Looking at details of the “header” I expected the first byte to be the dropped frame count and the second byte to contain information about what was to follow, namely the number of bits per channel and the number of frames required.

+------------------------+---------+---+------+------+
| dropped frame count    |         |bit|      |frames|
+------------------------+---------+---+------+------+
16                                                   0

When I bound the receiver to the transmitter it helpfully told me that it had bound using DSMX and 22ms, which should (according to what I had read) give me 11 bit channel data; hence I expected the transmitter data to decode as 2 frames of 11 bit data – which it did. The next step was to look at the channel data. Each channel is sent as 16 bits, and as it was using 11 bits for the value it should look like this.

+---+------------+-----------------------------------+
|   | channel ID | Channel value                     |
+---+------------+-----------------------------------+
16                                                   0

The channel IDs all appeared to be within a range of 0 to 15, though those with an ID of 15 I chose to ignore as that appeared to indicate invalid data. As I only have 8 channels this seemed a larger number than I was expecting, so I decided to try and watch the values in real time using the small python script I had created. (I’m still refining the code but can put it on github if anyone is interested.)

It turns out that the main 4 channels – throttle, elevator, aileron and yaw – are split between 2 channel IDs, with the left and right deflection being reported separately. In the case of the throttle this results in the value for channel 0 going up until you reach the midpoint, where it drops to 0! Obviously only reading one channel would give some interesting results, so I need to look at combining the values appropriately.

GPS

The GPS and its cable have arrived and are just awaiting the addition of the connector to attach to the board.

[Images: the GPS module and cable]

Unrelated…

Having played with the STM32 I’ve found myself contemplating using it for some other projects I have planned. Looking around for an easy way to start experimenting I came across the STM32 Stamp which looked perfect for my initial needs, but as my skills aren’t up to constructing one myself, does anyone know where I can buy something similar? I don’t really want a discovery board as I want to be able to use it on a breadboard while I experiment with connecting it.

Quad Update #2

I’ve spent a bit more time with both Paparazzi and the KroozSD board, so these are a few more observations.

NB these ONLY apply to the KroozSD board 🙂

Configuration

After a good look through I changed the settings to match my configuration, tweaked a few to what I hoped would be more appropriate values and rebuilt. The ESCs now make the correct noises and the telemetry looks good. The battery voltage being shown is that supplied via the BEC built into the ESCs, which is essentially 5V, so I had to adjust the battery warning levels to prevent constant warnings. I’ll need to add a battery sensor to address this.

USB

When powered up, plugging in a USB lead doesn’t provide any information. Powering the board with a USB cable attached does provide a connection to the bootloader, as expected, but the lack of a connection from the board when already powered struck me as odd. Thinking about it more, there is no reason why the board should power the USB when it’s going to be as far away from a computer as it is! It’s possible to build a firmware that does offer a USB connection when powered, but it would only really be useful for testing.
Having built such a firmware and loaded it, I found that my next attempt to flash firmware failed as the USB connection was being made with the running firmware and not the bootloader. Watching the board more closely I found that LED 1 is solid red when the bootloader is connected and ready for new firmware to be uploaded.

Calibration

The configuration file includes some default calibration values, but these need to be replaced by values for my own board, so time to look at calibration. Prior to doing this I had a look at the PFD (Primary Flight Display) tab on the Paparazzi ground station to see how good/bad the initial values were. The result showed the board in a climbing left hand turn, but the right way up. Moving the board also produced the expected responses.
Calibrating the accelerometers is the first step and proved to be quite simple, though I’d suggest reading the “Basic procedure” description and generating a log file a few times before trying the actual calibration.

Accelerometers

Put the board in the correct orientation, hold for 10 seconds, move to the next one… Relatively easy to accomplish once you figure out the order of the orientations as shown in the supplied image (below).

Reading the text and comparing it with the image made things clearer, but the image could be better. I did the tests with the board secured in the quad as it made moving it around easier and allowed me to confirm that I had it the correct way round!

Once complete, I ran the script and had some new calibration values. Once the configuration file was updated and the firmware built and uploaded, the PFD showed level wings and nose – exactly as expected.

Magnetometer

I’ll do this over the next few days, but ran out of time yesterday.

Gyroscopes

I’m sure this would give much improved results, but the procedure outlined looks…complex! I’d be interested to know how much improvement it makes.

Quad Upgrade

When I built the quad I went with a simple controller, with the intention of upgrading at some point once I had more idea “what I was doing” 🙂 Of course, such an open ended target was a total cop out, and after some discussions with a friend and having a little more time on my hands, I recently decided the time was right to start looking at an upgrade. This is what I had (a Hobbyking KK2.0, now replaced by the newer KK2.1.5).

KK 2.0 Board

Following the discussions and looking at a few options, I decided to stick with my open source leanings and move to the Paparazzi UAV Project. Looking at the autopilot boards (and following advice) I went with the KroozSD. Ordering it was easy enough and it arrived nicely packaged in a reasonable time. The fact it shares the same physical form factor as the old board helps greatly, making it almost a drop in replacement. Of course, nothing is ever that simple and so I’m currently getting the connections sorted out 🙂

Installing the Paparazzi software was easy enough and once installed it all ran without issue. The only thing that wasn’t clear, though it may just have been my poor attention span missing it, is that before you can run a simulation you need to use the “Build” option to create the files the simulation will use. Failing to do so will still produce lots of windows, but also a few warnings. This isn’t the most helpful behaviour, as initially it appears everything is working! However, building is quick.

One aspect of the change that was new to me was the introduction of live telemetry. This is done using XBee modules and setting them up was my first task. I took a while deciding which modules to buy as there is a bewildering selection, but eventually went with 2 of these. I also bought 2 USB adapters, so once configured using the X-CTU software it was nice to see them communicating. Installing the “remote” module onto the board and applying power produced telemetry 🙂

[Images: the KroozSD board, top and bottom sides, with the XBee module attached]

Connectors

In case it helps anyone else, the connectors on the board are Molex PicoBlade, with the exception of the 3 pin SWD connector in the centre of the board, which is JST.
I had to guess which way round the XBee module attached, though the pictures on the webpage about the board helped!

As I discover more information I’ll pass it along, but thus far I have to congratulate Sergey on producing a great board and being very easy to deal with.

Routing D’oh!

React-router is a great addition to React, but yesterday marked the first time I had used it in a project. It led to a period of head scratching, but maybe this post will help someone else avoid the same mistake I made!

Simple Setup

Installing it was simple enough.

npm install --save react-router

Having installed it I then added the import lines,

import {Router, Route, Link, browserHistory} from 'react-router';

and then added some routes to the root component.

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}>
        <Route path="about" component={about} />
        <Route path="blah" component={blah} />
    </Route>
  </Router>
), document.getElementById('test'))

I added a few links to the components with Link to let me test navigation, and all should have been good.

    <Link to="/about">About Page</Link>

This was intended to give me the following urls,

/
/about
/blah

After making the changes, a quick refresh of the page (webpack-dev-server really does make life easy) and sure enough I got the page with links. Clicking on the links gave me a surprise though – the URL changed but the page didn’t.

Children?

After some head scratching and a bit of reading around, I discovered that the routing I had added wasn’t quite what I thought in terms of the components. In my mind’s eye I envisaged this as rendering each component directly, but what I had actually added was a parent/child relationship.

URL     Parent  Child(ren)
/       home    none
/about  home    about
/blah   home    blah

Fixing the problem was as simple as changing the home component to render the children by adding

<div>
{this.props.children}
</div>

Nesting

This applies at every level of nesting, so rewriting the routing block to

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}>
        <Route path="about" component={home}>
            <Route path="about" component={about}/>
        </Route>
        <Route path="blah" component={blah} />
    </Route>
  </Router>
), document.getElementById('test'))

works as expected when displaying /about/about, because the outer routes use the home component which renders this.props.children. Using the about component as the parent would fail as it lacks the this.props.children usage. The relationships now look like

URL           Parent  Child  Child
/             home    none   none
/about        home    home   none
/about/about  home    home   about
/blah         home    blah   none

Original Intention

In order to get the behaviour I actually wanted I needed to change the routing block to this.

render((
  <Router history={browserHistory}>
    <Route path="/" component={home}/>
    <Route path="about" component={about}/>
    <Route path="blah" component={blah} />
  </Router>
), document.getElementById('test'))

postgrest views

When is a view updateable? The answer becomes important when using views to access the data via postgrest. If a view isn’t updateable then insert, update and delete operations will fail.

It’s possible to check by requesting ‘/’ from postgrest to get information about the endpoints available and looking at the insertable field.

[
  {u'insertable': False, u'name': u'fruits', u'schema': u'public'}, 
  {u'insertable': True, u'name': u'colours', u'schema': u'public'}
]

In the above, attempts to insert, update or delete via /fruits will fail, but the same operations via /colours will be OK.

Simple Views

Where a view is nothing more than a select statement returning rows from a single table, it should be updateable.

CREATE OR REPLACE VIEW colours AS
  SELECT * FROM data.colour;

Joins, Unions

Views containing joins or unions require more work to make them updateable.

CREATE OR REPLACE VIEW fruits AS
  SELECT f.id, f.name, c.name as colour 
    FROM data.fruit AS f INNER JOIN
    data.colour as c ON f.colour_id=c.id;

Due to the join, this view isn’t directly updateable.

Function & Trigger

In order to make it updateable a function is needed, together with a trigger to call it.

CREATE OR REPLACE FUNCTION 
insert_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
DECLARE
  colour_id int;
BEGIN
  SELECT id FROM data.colour WHERE name=NEW.colour INTO colour_id;
  INSERT INTO data.fruit (name, colour_id) VALUES (NEW.name, colour_id);
  RETURN NEW;
END
$$;

The trigger then tells postgresql to use the function when an insert is required.

CREATE TRIGGER fruit_action
    INSTEAD OF INSERT ON
      fruits FOR EACH ROW EXECUTE PROCEDURE insert_fruit();

Reviewing the endpoints now shows

[
  {u'insertable': True, u'name': u'fruits', u'schema': u'public'}, 
  {u'insertable': True, u'name': u'colours', u'schema': u'public'}
]

NB The insertable key refers ONLY to insert, so at this point, with only the insert function and trigger added, update and delete operations will still fail.

Inserting data is now as simple as two POST requests.

POST /colours
{"name": "green"}
>> 201
POST /fruits
{"name": "Apple", "colour": "green"}
>> 201
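
For reference, the same requests from the shell look something like this (assuming postgrest is listening on localhost:3000):

$ curl -s -X POST http://localhost:3000/colours \
       -H "Content-Type: application/json" \
       -d '{"name": "green"}'
$ curl -s -X POST http://localhost:3000/fruits \
       -H "Content-Type: application/json" \
       -d '{"name": "Apple", "colour": "green"}'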

Of course any attempt to update or delete will still fail, despite having “insertable” set to True.

PATCH /fruits?name=eq.Apple
{"name": "Green Apple"}
>> 500
{"hint":"To enable updating the view, provide an INSTEAD OF UPDATE trigger or an 
unconditional ON UPDATE DO INSTEAD rule.",
"details":"Views that do not select from a single table or view are not automatically updatable.",
"code":"55000","message":"cannot update view \"fruits\""}

Update

The function required for updating a record is very similar to the insert one.

CREATE OR REPLACE FUNCTION
update_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
DECLARE
  -- Named to avoid clashing with the colour_id column in the UPDATE below,
  -- which plpgsql would otherwise report as an ambiguous reference.
  new_colour_id int;
BEGIN
  SELECT id FROM data.colour WHERE name=NEW.colour INTO new_colour_id;
  UPDATE data.fruit SET name=NEW.name, colour_id=new_colour_id WHERE id=NEW.id;
  RETURN NEW;
END
$$;

CREATE TRIGGER fruit_action
    INSTEAD OF UPDATE ON
      fruits FOR EACH ROW EXECUTE PROCEDURE update_fruit();
PATCH /fruits?name=eq.Apple
{"name": "Green Apple"}
>> 204

NB It’s worth pointing out that every row matched will be updated, so be careful of the filter criteria provided on the URL.

Delete

The delete function needs to return the rows that it deletes. Note that while insert and update relied on NEW, delete uses OLD.

CREATE OR REPLACE FUNCTION
delete_fruit() RETURNS TRIGGER
LANGUAGE plpgsql AS $$
BEGIN
  DELETE FROM data.fruit WHERE id=OLD.id;
  RETURN OLD;
END
$$;

CREATE TRIGGER fruit_delete
    INSTEAD OF DELETE ON
      fruits FOR EACH ROW EXECUTE PROCEDURE delete_fruit();

With the final trigger in place, delete now works.

DELETE /fruits?id=eq.1
>> 204

postgrest lessons learned

I’ve been spending some time recently getting to grips with postgrest by writing a small schema and figuring out how it all sits together with the help of a simple python client. The plan is to continue to develop it as a react/redux app once I have postgrest and the data figured out 🙂 The following are just some things I’ve learned that would have helped me 10 days ago and may help someone else.

Roles

I’ve ended up with 3 roles.

authenticator
This role is used as the “base” role for the database. It has sufficient access to change roles and nothing else. I use this in the postgres connection url and so it needs a password to allow connections to postgres to be made.
anon
This role caused the most confusion initially as I failed to grasp that all connections from postgrest will use this role UNLESS a valid jwt_token is presented that contains a different role. This means it needs, as a minimum, access to everything involved in the login/signup process. For this database there is nothing else visible to non-authenticated users, so anon has only the login/signup permissions.
user
The user role is set in the jwt_token. Once presented all further access will be as this user so the permissions need to reflect that. Additionally there are some aspects of the user management that need to be accessed by this role, e.g. when the user changes their details.

I’m not keen on having every user as their own role, so that means adding access control in the views and using their credentials with a single role. How this will work if I switch over to Row Level Security isn’t obvious to me yet.
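
As a rough sketch of how the three roles hang together (webuser stands in for my actual user role, and the password and grants are illustrative rather than copied from my setup):

-- The "base" role used in the postgres connection url; it can log in
-- but does nothing other than switch to one of the other roles.
CREATE ROLE authenticator NOINHERIT LOGIN PASSWORD 'change-me';

-- The role used for every request that arrives without a valid jwt_token.
CREATE ROLE anon NOLOGIN;

-- The role named inside the jwt_token for authenticated requests.
CREATE ROLE webuser NOLOGIN;

GRANT anon TO authenticator;
GRANT webuser TO authenticator;

-- anon only needs the login/signup machinery, e.g.
-- GRANT EXECUTE ON FUNCTION login(text, text) TO anon;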

One schema to rule them all

When starting postgrest there is the option to pass a schema. If none is supplied then the default of “public” is used. Anything else isn’t accessible from postgrest. This gives a lot of flexibility but also caused me some confusion initially when things weren’t available. If you need to span schemas then you need to write a view – which must be a member of the schema you have chosen. So far I’ve chosen to just use the default schema of ‘public’. Call me lazy 🙂

Functions need /rpc/, views don’t

This one was pretty obvious from the documentation, but taken together with the schema point above it caused me a lot of confusion. Essentially, if you have defined a function called ‘login’ and another called ‘signup’ then they are available as /rpc/login and /rpc/signup. Views need no prefix, so if you create a view ‘users’ it’s available as ‘/users’.
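
So, with made-up login parameters, the two styles of request look something like this.

POST /rpc/login
{"email": "me@example.com", "password": "secret"}
>> 200

GET /users
>> 200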

Direct access or not?

Simply creating tables in the chosen schema is enough to make them available. So, if you have a table called ‘colours’ then it’s directly available via postgrest using ‘/colours’ (depending on the permissions obviously). If, however, you created it in the ‘constants’ schema then it would need a view to access it. Which is preferable depends on the design and how much extra SQL you want to write.
I’m still trying to decide which route to go down. Having the “raw” tables hidden and only accessible via views provides a level of indirection that could well be useful, though it adds a lot of extra coding, the lack of which was part of the attraction of postgrest to start with!

jwt_claims

This type needs to be defined with the information required by the app. Once supplied by a request, the information is retrieved by current_setting('postgrest.claims.xxx') (where xxx is the member name).
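
A minimal sketch, with member names that are purely examples:

-- The members are whatever the app needs to carry in its token.
CREATE TYPE jwt_claims AS (
  role text,
  email text
);

-- Later, inside a view or function, a claim is read back with
-- current_setting('postgrest.claims.email') and so on.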

Commands are available

When writing the sql files, the commands that can be used in the psql command line app are also available! This revelation helped me streamline my debugging 🙂 Additionally I have split the sql into separate files and used '\i' to include them in a main file. This allows me to run it all as one, or just the parts I need to update.
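
For example, the main file can be little more than a list of includes (the file names here are made up).

-- main.sql: run everything in one go with psql -f main.sql
\i roles.sql
\i tables.sql
\i views.sql
\i functions.sql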

I need to improve my postgresql knowledge

As a friend said a while ago, “postgresql is a real database”. My knowledge of writing views, functions and the other bits needed to glue everything together is getting better, but the more you know the better.

gitter

Buried in one of the various documentation pages was a link to gitter.im/begriffs/postgrest. The room isn’t too busy but seems to have enough people who know about postgrest to offer valuable help. Some of the other rooms also seem helpful with low amounts of noise, and the gitter app itself is pretty good.

Update?

Updates are done using the PATCH http verb and normally return a 204 No Content response. The data can be sent as json and all usual filtering arguments can be used to select the records to be updated.

PATCH /fruits?id=eq.1
Content-Length: 23
Content-Type: application/json
Accept: application/json

{"name": "Green Apple"}

If you want the full record, you need to add the Prefer header. This should return a 200 OK status code with the record.

PATCH /fruits?id=eq.1
Content-Length: 23
Content-Type: application/json
Accept: application/json
Prefer: return=representation

{"name": "Green Apple"}

letsencrypt

The idea behind letsencrypt is great. Wanting to add an SSL certificate for one of my domains I decided it was time to see how it worked.

Installation

No package is yet available for Ubuntu, so it was onto the “less preferred” git route.

$ git clone https://github.com/letsencrypt/letsencrypt
...
$ cd letsencrypt

The posts I read said to run a command, answer the questions and all would be good.

$ ./letsencrypt-auto --server https://acme-v01.api.letsencrypt.org/directory auth

After answering the questions the authentication failed. Hmm, that didn’t work, despite telling it the webroot to place the auth files in.

Going Manual

The stumbling block was the lack of files to prove the domains are ones I should be asking for certificates for. That’s fine, but using the command line above gives no information to let me fix the problem. There is a manual option, so next step was to try that.

./letsencrypt-auto --server  https://acme-v01.api.letsencrypt.org/directory --manual auth

This time I was prompted with a string to serve and the URL it needed to be available at. That’s more like what I was expecting, so after creating the file all was well. After reading a little more it appears that using the certonly option was what I really wanted, so the command line would be

./letsencrypt-auto --server  https://acme-v01.api.letsencrypt.org/directory --manual certonly

Once the certificates had been created and downloaded, a small edit to the apache configuration files and I have an SSL protected website 🙂

Renewals

The certificates expire after 90 days, so I needed a command line that I can run via crontab. The command lines above required interaction, so they wouldn’t do. Thankfully there is an easy answer.

./letsencrypt-auto --server  https://acme-v01.api.letsencrypt.org/directory --manual renew
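
The crontab entry then just needs the full path to the script; something along these lines (the schedule and install path are only an example).

# Attempt a renewal at 3am on the 1st of every month.
0 3 1 * * /home/david/letsencrypt/letsencrypt-auto --server https://acme-v01.api.letsencrypt.org/directory --manual renew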

Tidying up

After writing a small django app to handle auth for the django powered sites that are going to be using certificates and adding the relevant lines to crontab, I think I’m done 🙂

Command line babel?

Babel is a great project and a really useful npm module. I’ve used it in almost all of my webpack experiments, but recently I had a need to use it from the command line. Thankfully there is the babel-cli module. However, things weren’t as simple as some of the blog posts I found suggested.

Starting with a new project and npm the initial installation is the usual breeze.

$ npm init -y
$ npm install --save-dev babel-cli

According to the blog post I should now just be able to run babel-node – erm, no.

$ babel-node
babel-node: command not found

OK, so it’s Ubuntu and bash which means the PATH isn’t pointing to the correct file. That’s reasonable given I’ve just installed it, but what is the correct path? Given that npm installs things on a per project basis it’s likely something in node_modules? A quick search reveals that to be the case.

$ ls -l node_modules/.bin
total 0
...
lrwxrwxrwx 1 david david 30 Apr 16 13:31 babel-node -> ../babel-cli/bin/babel-node.js
...

To avoid this being a recurring issue I adjusted the PATH to include that directory (as a relative path).

$ export PATH=$PATH:node_modules/.bin

Now I could run babel-node, but using funky modern javascript syntax failed. I needed the es2015 preset.

$ npm install --save-dev babel-preset-es2015

Once installed I needed to tell babel to use it, which was as simple as

$ babel-node --presets es2015 ...

However, what are the chances of me remembering that every time I need it? So, there must be a better way to tell babel to always use the preset. Further reading revealed 2 options – add a .babelrc file or add an entry in package.json. I chose the package.json file for no other reason than it was one less file to create/manage. The entry is pretty simple and exactly as you’d expect.

  "babel": {
    "presets": ["es2015"]
  }

Now running is as simple as

$ babel-node index.js

However…

As I started writing more javascript I ran into problems when I used the new ES7 object spread operator, '...'. It is supported by babel, but requires a plugin to be installed and used. Finding and adding the plugin was simple enough,

$ npm install --save-dev babel-plugin-transform-object-rest-spread

I thought that I could add its use via the package.json file, but my attempts failed, so I added a .babelrc file with the appropriate sections and all was well.

$ cat .babelrc
{
    "presets": [
      "es2015"
    ],
    "plugins": [
      "transform-object-rest-spread"
    ]
}
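
With that in place, code using the spread syntax (a throwaway example) runs through babel-node without complaint.

// index.js
const defaults = { colour: 'green', count: 1 };
// The object spread below is what needed the extra plugin.
const apple = { ...defaults, name: 'Apple' };
console.log(apple);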

Hopefully this will help someone else!

Webpack: sass & import, names

Having started moving to sass for my project and including the required bits in my webpack configuration (blog post), the next issue I ran into was that importing didn’t seem to work as expected.

Require not Import?

One of the additions I made to my webpack config was to add a resolve section, allowing me to use more convenient and simpler require lines in my javascript.

  resolve: {
    modulesDirectories: ['node_modules', 'components', 'css', 'fonts'],
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  },

This worked exactly as expected wherever I used a require statement, so I had expected that this would transfer to import statements in css and sass files – but it didn’t. As it seems such an obvious thing to do, I had another look at the README for the sass-loader and found what I was looking for.

~, but not as you know it

For my testing I had created as simple a file as I could think of, test.scss.

@import '../components/_component.scss';

This very simple file just imports another file (which happens to be sass) that belongs to a component I have in the 'components' directory. Nothing special, but why do I need the full import path? This was what I needed to get things working, but after looking at the sass-loader README again I realised that a '~' prefix makes sass imports go through the webpack resolve routines, which is exactly what I was hoping for. A small change to the file,

@import '~_component.scss';

resulted in things working as I wanted.

NB the README cautions against using ~ as you may expect (if you’re a command line groupie) as using ~/ implies the home directory and probably isn’t what you want.

Multiple Outputs?

Having decided that I don’t want css to be included in the javascript output, I added the ExtractText plugin which allowed me to bundle all css into a single css file. This is fine, but what if I wanted to have different css bundles? What if I wanted to have different javascript bundles? My current configuration didn’t seem to allow this.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ],

Thankfully, webpack has this covered. Instead of having a single entry you can have multiple, each of which you can give a name. Additionally I realised that the entry point doesn’t *need* to be a javascript file, as long as it’s a file that can be processed. So I changed the entry section to this.

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

Running webpack didn’t give me what I expected as I also needed to change the output definition.

  output: {
    path: path.resolve(__dirname, 'html'),
    filename: '[name].js'
  },

Using the [name] simply replaces the name I used in the entry definition with that text, which offers additional possibilities. With the changes made, running webpack produces

html/
     bundle.js
     bundle.css
     test.js
     test.css

The test.js file is a little annoying and in an ideal world it wouldn’t be created, but so far I can’t find any way of preventing it.

To control the output location even more, changing the entry definition is all that’s required for simple changes. Updating it to

  entry: {
    ...
    'css/test': [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

results in the files being created in html/css, i.e.

html/
     bundle.js
     bundle.css
     css/
         test.js
         test.css

NB when using a path the name needs to be in quotes.

Using this setup, component css is still included in the bundle.css and the only things appearing in test.css are those that I have specifically included in the entry file, which opens up a lot of possibilities for splitting things up. As I’m using bootstrap for the project one possibility is to use this to output a customised bootstrap file.
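
For example (the file name is purely hypothetical), that could be as little as another named entry.

  entry: {
    ...
    'css/bootstrap': [
      // A sass file importing only the parts of bootstrap the project uses.
      path.resolve(__dirname, 'css/custom-bootstrap.scss'),
    ]
  },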

Hot Reload

At present hot reloading of css doesn’t seem to be working. I changed my configuration to this

  entry: {
    bundle: [
      'webpack-dev-server/client?http://127.0.0.1:8080', // WebpackDevServer host and port
      'webpack/hot/only-dev-server',
      path.resolve(__dirname, 'components/App.js'),
    ],
    test: [
      path.resolve(__dirname, 'css/app.scss'),
    ]
  },

which still provides hot reloading of the javascript, but the css files don’t seem to work. This seems to be a common issue, but as it’s not a serious one for me at present I’m not going to spend too much time looking for solutions. If anyone knows, then I’d love to hear from you.

sass

Continuing my delve into React, webpack and so on and after adding a bunch of css files, I decided it was time to join the 21st century and switch to one of the css preprocessors. LESS was all the rage a few years ago, but now sass seems to have the mindshare and so I’m going to head down that route.

Installing

Oddly enough, it installs via npm 🙂

npm install --save-dev node-sass
npm install --save-dev sass-loader

Webpack Config

The webpack config I’m using is as detailed in my post More Webpack, so the various examples I found online for adding sass support didn’t work as I was already using the ExtractTextPlugin to pull all my css into a single file. The solution turned out to be relatively simple and looks like this.

      {
        test: /\.scss$/,
        loader: ExtractTextPlugin.extract(['css', 'sass'])
      }

Additionally I need to add the .scss extension to the list of those that can be resolved, so another wee tweak.

  resolve: {
    ...
    extensions: ['', '.js', '.jsx', '.css', '.scss']
  }

Structure?

One reason for moving to SASS is to allow me to split the huge css files into more manageable chunks, but how to arrange this? Many posts on the matter have pointed me to SMACSS and I’m going to read through the free ebook (easily found via a web search) to see what inspiration I can glean. For each React component, though, I’d like to keep the styles alongside the component: the bond between the JSX and the styling is very tight, and changing one will probably require thinking about changes to the other. As per previous experiments, the component can then require the file and it will magically appear in the bundled, generated css file, regardless of whether I’ve written it in sass or plain css.

For the “alongside” files I’ll use the same filename with the leading underscore that tells sass not to output the file directly. With the webpack setup that isn’t a concern now, but getting into the habit is likely a good idea for the future 🙂 This means for a component in a file named App.js I’ll add _App.scss and add a line require('_App.scss'); after the rest of the requires.
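
In other words, something like this at the top of the component file (trimmed right down, obviously).

// components/App.js
import React from 'react';
// Picked up via the webpack resolve settings and emitted into the bundled
// css; the leading underscore stops sass producing separate output for it.
require('_App.scss');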

Variables

I want to use a central variables file for the project, which I can then reference in the sass files, but haven’t quite decided where it should live just yet. Hopefully after reading the ebook and looking at the project a bit more it will make sense.

With sass handling in place it’s time to start pulling apart my monolithic plain css file and creating the smaller sass files.

Webpack Dev Server

After using webpack for a few days, the attraction of switching to the dev server is obvious.

The webpack-dev-server is a little node.js Express server, which uses the webpack-dev-middleware to serve a webpack bundle.

Install

Oddly enough, it needs installed via npm! However, as we’re going to run it from the command line, we’ll install it globally.

sudo npm install -g webpack-dev-server

Running

After install, simply running the server (in the same directory as the webpack.config.js file) will show it’s working and the bundle is built and made available. Next step is to get it serving the HTML file we’ve been using. This proves to be as simple as

$ webpack-dev-server --content-base html/

Requesting the page from http://127.0.0.1:8080/ gives the expected response. Removing the bundled files produced by webpack directly from the html directory and refreshing the page proves the files are being loaded from the dev server. Nice.

Hot Loading

Of course, having the bundle served by webpack is only the start – next I want any changes I make to my React code to be reflected straight away – hot loading! This is possible, but requires another module to be loaded.

npm install --save-dev react-hot-loader

The next steps are to tell webpack where things should be served, which means adding a couple of lines to our entry in webpack.config.js.

  entry: [
    'webpack-dev-server/client?http://127.0.0.1:8080',
    'webpack/hot/only-dev-server',
    path.resolve(__dirname, 'components/App.js'),
  ],

As I’m planning on running this from the command line I’m not going to add a plugin line as some sites advise, but rather use the '--hot' command line switch. I may change in future, but at present this seems like a better plan.

The final step needed is to add the ‘react-hot’ loader, but this is where things hit a big snag. The existing entry for js(x) files looked like this.

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: {
          presets: ['react', 'es2015']
        }
      },

Adding the loader seemed simple (remembering to change loader to loaders as there was more than one!).

     {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader'],
        query: {
          presets: ['react', 'es2015']
        }
      },
Error: Cannot define 'query' and multiple loaders in loaders list

Whoops. The solution came from reading various posts, and eventually I settled on this. It works for my current versions of babel but may not work for future ones. All changes below are applied to the webpack.config.js file.

Add the presets as a variable before the module.exports line.

var babelPresets = {presets: ['react', 'es2015']};

Change the loader definition to use the new variable and remove the existing query block.

      {
        test: /components\/.+.jsx?$/,
        exclude: /node_modules/,
        loaders: ['react-hot', 'babel-loader?'+JSON.stringify(babelPresets)],
      },

Now, when running webpack-dev-server --content-base html/ --hot everything is fine and the page is served as expected.

Editing one of the components triggers a rebuild of the bundle when saved, exactly as expected.

All Change!

As I tried to get this working I discovered that react-hot-loader is being deprecated. Until that happens I’m happy with what I have, but the author promises to provide a migration guide.

Running

To try and keep things simpler, and to avoid the inevitable memory lapse followed by head scratching about the lack of hot reloading, I’ve added a line to the package.json file. With this added I can now simply type npm run dev and the expected things will happen.

  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack --progress",
    "dev": "webpack-dev-server --content-base html/ --hot"
  },