Home Assistant on Raspberry Pi 5 with NVMe PCIe SSD

Recently, Raspberry Pi 5 support was added to Home Assistant OS. The new model is twice as fast as its predecessor and adds new features like a PCIe bus. This started a discussion on how to use NVMe SSDs directly connected to the PCIe bus.

M.2 PCIe hats for Raspberry Pi 5

There are several PCIe hats you can buy, but there is no official Raspberry Pi hat yet. There is, however, an official Raspberry Pi 5 case and Active Cooler.

The Raspberry Pi + Active Cooler fits in the case if you remove the standard fan.

An option is to attach an SSD over USB 3. This works okay, but the performance is limited for NVMe PCIe SSDs, which have higher throughput than SATA SSD drives.

The Raspberry Pi 5 now brings a second alternative by allowing you to connect an NVMe SSD directly to the PCIe connector.

An M.2 hat exists that directly connects to the PCIe port and also fits inside the original Raspberry Pi 5 case, together with the original Raspberry Pi Active Cooler.

The Geekworm X1003 mounts on top of the Raspberry Pi 5 and Active Cooler and even allows you to install the top cover (if that is desired).

Install an NVMe SSD

The Geekworm X1003 fits 2242 (42 mm) and 2230 (30 mm) size NVMe SSDs. There is a nut at the 2242 position, but you might need a soldering iron if you want to move the nut to the 2230 position.

At the moment most NVMe SSDs are 2280, and for that size you need a different hat, e.g. the Geekworm X1001 or X1002, but those will not fit into the original case. I found a cheap Samsung 2242 SSD at Amazon. This model uses a feature named HMB (Host Memory Buffer), which requires some extra attention that I will address later. After we have mounted the SSD (make sure the PCIe FFC cable is installed correctly) we can continue with the next step.

Preparing the installation

To install Home Assistant OS we use the imager that is installed by default on the Raspberry Pi OS image.

So to start we need an SD card with Raspberry Pi OS flashed on it; do not flash Home Assistant OS on the SD card.

We can prepare this using the official Raspberry Pi Imager. After the prepared SD card is installed in our Raspberry Pi 5, we can boot Raspberry Pi OS. When the OS has started we should be able to see if the NVMe SSD is recognized. To make sure the SSD works without issues we can use the dmesg command. If there are issues with your SSD you will see the errors in the boot log this command shows.
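For example, to filter the boot log for NVMe and PCIe related messages:

# Show only NVMe/PCIe related kernel messages
sudo dmesg | grep -iE 'nvme|pcie'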

Identify possible NVMe SSD issues

In my case there were a few issues I needed to address. I needed to …

  • … disable PCIe power management (at least I thought that was a good idea).
  • … enable the PCIe port on the Raspberry Pi.
  • … enable gen3 mode.
  • … increase the CMA buffer to ensure enough contiguous memory for the Host Memory Buffer feature.

If you see no NVMe SSD popping up at all, that could mean that…

  • … the FFC cable between the board and the M.2 hat is not connected properly.
  • … your SSD is not probed correctly and you need to add PCIE_PROBE to your EEPROM config first.
~# sudo rpi-eeprom-config --edit
# Add the following line if using a non-HAT+ adapter:
PCIE_PROBE=1

Disable PCIe power management

We can disable PCIe power management when we know that the SSD is always powered, and when we see errors like:

[22609.179027] nvme 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[22609.179031] nvme 0000:01:00.0: device [1d79:2263] error status/mask=00009100/0000e000
[22609.179035] nvme 0000:01:00.0: [ 8] Rollover
[22609.179039] nvme 0000:01:00.0: [12] Timeout

To disable PCIe power management, as root edit /boot/firmware/cmdline.txt and add pcie_aspm=off to the command line.
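A minimal sketch to append the option (cmdline.txt is a single line; back up the file first):

# Keep a backup, then append pcie_aspm=off to the end of the kernel command line
sudo cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
sudo sed -i '1 s/$/ pcie_aspm=off/' /boot/firmware/cmdline.txt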

Enable the PCIe port on the Raspberry Pi

Most likely this is needed at all times. To enable the PCIe port, as root edit config.txt, which contains the kernel configuration. As root you can edit the file directly: /boot/firmware/config.txt

To enable PCIe add the following lines:

dtparam=pciex1
dtparam=nvme
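After a reboot you can verify that the PCIe link is up and the SSD is detected, for example with (lspci comes from the pciutils package):

# List PCI devices; the NVMe controller should show up
lspci
# Show the negotiated PCIe link speed (5.0 GT/s is gen2, 8.0 GT/s is gen3)
sudo dmesg | grep -i 'link up'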

If you still have errors in your log you could also enable gen3.

Enable the PCIe gen3 mode on the Raspberry Pi

The PCIe gen3 mode is not officially supported, so only enable this if your SSD does not work correctly without it.

To enable it, as root edit config.txt and add the following line:

dtparam=pciex1_gen=3

Increase the CMA buffer

In case dmesg shows cma_alloc errors like:

[    0.514198] nvme nvme0: Shutdown timeout set to 8 seconds
[    0.516601] nvme nvme0: allocated 64 MiB host memory buffer.
[    0.519481] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519494] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519506] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519513] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519523] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519530] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519539] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519546] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519735] nvme nvme0: 4/0/0 default/read/poll queues
[    0.525784] nvme nvme0: Ignoring bogus Namespace Identifiers
[    0.531980]  nvme0n1: p1 p2 p3 p4 p5 p6 p7 p8
[    0.619473] brcm-pcie 1000120000.pcie: link up, 5.0 GT/s PCIe x4 (!SSC)
[    0.619497] pci 0001:01:00.0: [1de4:0001] type 00 class 0x020000
[    0.619514] pci 0001:01:00.0: reg 0x10: [mem 0xffffc000-0xffffffff]
[    0.619524] pci 0001:01:00.0: reg 0x14: [mem 0xffc00000-0xffffffff]
[    0.619533] pci 0001:01:00.0: reg 0x18: [mem 0xffff0000-0xffffffff]
[    0.619605] pci 0001:01:00.0: supports D1
[    0.619609] pci 0001:01:00.0: PME# supported from D0 D1 D3hot D3cold
[    0.631479] pci_bus 0001:01: busn_res: [bus 01-ff] end is updated to 01

I had these errors because the HMB (Host Memory Buffer) feature needed more than the standard configured CMA memory.

To increase the CMA memory buffer I increased the default allocation of 64 MB to 96 MB.

As root we can edit config.txt and add the following line:

dtoverlay=cma,cma-96

It is possible to enable a larger block, but we should not allocate more than needed. To see the free CMA memory, as root we can use:

# cat /proc/meminfo | grep -i cma
CmaTotal:          98304 kB
CmaFree:           15684 kB

Preparing Home Assistant OS on the NVMe SSD

When we have configured our SSD and have resolved any issues, we can flash the NVMe with Home Assistant OS. We use the preinstalled Raspberry Pi Imager that is installed by default on Raspberry Pi OS and can be found in the accessories menu. Note that you can set some options, such as the WiFi credentials and hostname, in advance. Other settings we need to set manually after the image has been flashed. Make sure the NVMe SSD is flashed correctly.

Adjusting boot order, kernel parameters and configuration

After flashing Home Assistant OS, we need to adjust the boot order and make sure Home Assistant OS uses the same config.txt and cmdline.txt as we prepped for Raspberry Pi OS. There are files in the image we need to adjust before we can boot from the new image, as for now it is not possible yet to apply these settings from the UI. To access these files we need to mount the right partition. In my case the NVMe was called nvme0n1. If you are not sure about the name of your drive, as root you can use the command fdisk -l.

~# fdisk -l
...
other drive info
...
Disk /dev/nvme0n1: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: SAMSUNG MZAL4256HBJD-00BL2
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DFEFDCBD-D15E-4E82-B88C-5E6B46166328

Device           Start       End   Sectors   Size Type
/dev/nvme0n1p1    2048    133119    131072    64M EFI System
/dev/nvme0n1p2  133120    182271     49152    24M Linux filesystem
/dev/nvme0n1p3  182272    706559    524288   256M Linux filesystem
/dev/nvme0n1p4  706560    755711     49152    24M Linux filesystem
/dev/nvme0n1p5  755712   1279999    524288   256M Linux filesystem
/dev/nvme0n1p6 1280000   1296383     16384     8M Linux filesystem
/dev/nvme0n1p7 1296384   1492991    196608    96M Linux filesystem
/dev/nvme0n1p8 1492992 500118158 498625167 237.8G Linux filesystem

To access config.txt and cmdline.txt we mount the first partition:

~# mkdir boot
~# mount -t vfat /dev/nvme0n1p1 ~/boot
~# cd boot
~# ls

Use the same settings that worked for Raspberry Pi OS, and we should be okay.
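A minimal sketch, assuming you are still in the mounted boot partition and your SSD needed the same tweaks described earlier (adjust to what your SSD actually required):

# Re-apply the earlier tweaks to the freshly flashed image
echo 'dtparam=pciex1' >> config.txt
echo 'dtparam=nvme' >> config.txt
echo 'dtoverlay=cma,cma-96' >> config.txt
sed -i '1 s/$/ pcie_aspm=off/' cmdline.txt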

Finally we change the boot sequence so the Raspberry Pi will boot from the SSD. The code below shows how to do that:

# Edit the EEPROM on the Raspberry Pi 5.
sudo rpi-eeprom-config --edit

# Change the BOOT_ORDER line to:
BOOT_ORDER=0xf461
# This will try SD card first, then NVMe and finally USB

# Add the following line if using a non-HAT+ adapter:
PCIE_PROBE=1

# Press Ctrl-O, then enter, to write the change to the file.
# Press Ctrl-X to exit nano (the editor).

For me it was not needed to add PCIE_PROBE. The order is read right to left, so 0xf416 will first try 6=NVMe, then 1=SD card, then 4=USB and finally f=restart the loop. As you can see in the example, I used 0xf461 as boot order, because I prefer to be able to boot from an SD card easily, without the need to remove the NVMe SSD, if that is needed for rescue actions.

More documentation about the boot order options can be found here: https://www.raspberrypi.com/documentation/computers/raspberry-pi.html#BOOT_ORDER.

Boot Home Assistant from NVMe SSD

When the boot order includes the NVMe as the first valid boot option, we can try booting the flashed NVMe SSD.

The Home Assistant instance should initialize at http://homeassistant.local:8123. It may take some minutes to initialize. If you have a suitable HDMI cable and a keyboard and mouse, you can get access to the Home Assistant CLI.

To see if there are errors you can execute dmesg to see the boot log. To know if there are any PCIe run time errors you can use ha host logs -f. When using the CLI from the console you can omit ha, so host logs -f or host logs should just work. If you set up SSH when flashing the image, you should also be able to use SSH to open the CLI.

If everything is fine, you can start setting up Home Assistant or restore from a backup. Make sure that when you restore from a backup the same Supervisor version is used, or the backup restore will fail. To update to a specific version from the CLI use:

ha supervisor update --version 2024.03.0

To just update to the latest version:

ha supervisor update

Restoring from a backup can take quite some time if the backup file is large. The UI does not show much detail on the progress. To see some more details you can check the Supervisor logs. If anything went wrong you will be able to see it there.

ha supervisor logs

And that is all: Enjoy your Home Assistant installation on Raspberry Pi 5 and NVMe SSD!

Elro Connects

Elro Connects is a smart home brand that offers connectable fire, smoke, water and CO alarms, and some other devices. Initially these devices were sold with a K1 connector, but this type of connector is not available in stock anymore. I own some fire alarms, a water detector alarm and a K1 connector. I was able to automate and integrate these into Home Assistant.

Elro Connects K1 for Home Assistant can be found in HACS by default. HACS allows you to add custom integrations to Home Assistant.

Elro Connects socket support

The Elro Connects integration for Home Assistant now has support for Elro Connects power sockets.

An update has been made available. Note that only the K1 connector is supported. If you have a K2 connector, this will not work. If you have some other Elro Connects devices and a K1 connector, and you want to help, you can leave a reaction in the comments. If you would like to do some more technical testing, note there is a test script you can install and use from the examples of the lib-elro-connects library. The socket-demo.py script can show the status of all your Elro Connects devices connected to the K1 connector. Further, it allows you to send commands (a 2-digit hex code) to a device. That allows you to play a bit with it.

Allow DANE authentication to your mail server or website

DANE (https://datatracker.ietf.org/doc/html/rfc7671) stands for DNS-Based Authentication of Named Entities. This protocol allows clients to check the remote certificate used through TLSA DNS records. DANE requires DNSSEC (https://datatracker.ietf.org/doc/html/rfc9364).

This post is not about the client side implementation, it is about the backend. I’ll explain how to create TLSA records that use the public key of the certificate and issuer certificate used. Basically I only use openssl to create the TLSA records. The article is based on https://www.mailhardener.com/kb/how-to-create-a-dane-tlsa-record-with-openssl. The most common use is for mail servers to ensure encrypted mail transfer between MTAs, so that is what I will use in this example. The mail server that wants to deliver an email to your protected mail server should still respect the TLSA records you have published. The adoption of DANE has increased over the last years; even companies such as Microsoft have started adopting DANE.

Obtaining the certificate and chain from your mail server

With openssl you can easily retrieve the PEM encoded certificate and chain from a mail server.

echo QUIT | openssl s_client -connect mail.example.com:25 -starttls smtp -showcerts

This opens a connection over port 25 with STARTTLS, prints the certificates, and then closes the connection. To store the output in a file, just redirect the output.

The certificate chain in the output probably contains multiple certificates starting with the server certificate.

Let’s save the first certificate in the chain as server.crt, and the second as intermediate.crt.
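A minimal sketch to split the chain, assuming you redirected the s_client output to chain.pem and stripped it down to just the PEM blocks:

# Each BEGIN CERTIFICATE starts a new output file: cert1.crt, cert2.crt, ...
awk '/-----BEGIN CERTIFICATE-----/ {n++} n {print > ("cert" n ".crt")}' chain.pem
mv cert1.crt server.crt
mv cert2.crt intermediate.crt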

For a mail server we are interested in the server certificate (the first certificate in the chain) and the issuer certificate. We use the scheme 3 1 1 (DANE-EE) for the server certificate and 2 1 1 (DANE-TA) for the issuing certificate. The TLSA value published in DNS is a SHA256 hash of the public key. The public key will only change if the private key used to create the CSR of the certificate has changed.

Create SHA256 hash from the public key

From the certificate files we have obtained we can now calculate the SHA256 hash.

openssl x509 -in server.crt -pubkey -noout | openssl rsa -pubin -outform der | sha256sum

generates the public key hash for the server certificate. The output could look similar to:

writing RSA key
4648564dc7c901037f631391d765643e8f8f86622849f59dfc9564838e1e8a76  -

We only need the long hex string here. We can repeat this for the intermediate certificate.
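The same command with the intermediate certificate (if a certificate uses an EC key instead of RSA, replace openssl rsa -pubin with openssl pkey -pubin):

openssl x509 -in intermediate.crt -pubkey -noout | openssl rsa -pubin -outform der | sha256sum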

Create and publish TLSA DNS records

DANE authentication checks the published TLSA records. For our mail server we want to publish the public key SHA256 hash for the server and intermediate certificate for port 25 (you can also publish records for port 465 or 587 if you want). So let’s say we want to publish the server public key for our mail server (mail.example.com); we publish the following record:

Name: _25._tcp.mail.example.com.
Type: TLSA
TTL: 1 day
Value: 3 1 1 4648564dc7c901037f631391d765643e8f8f86622849f59dfc9564838e1e8a76

It is good practice to also publish a 2 1 1 TLSA record for the intermediate certificate. When your certificate changes (and the private key has changed too) make sure you publish a new TLSA record before installing the new certificate. You can have multiple 3 1 1 and 2 1 1 records at the same time. After the new certificate has been installed, the stale records can be removed.
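To verify what is published, you can query the record with dig (part of the bind utilities):

dig TLSA _25._tcp.mail.example.com +dnssec +short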

Using git

As a developer I am collaborating on multiple projects. In many cases it is needed to create a fork of the upstream project and open a pull request. The first steps (making a fork, etc.) are not difficult, and at first you will have an up-to-date clone of the original repository on which you can work and open pull requests to the upstream project. Most of these steps can be done using the web UI (e.g. when using GitHub). But while working on a pull request it is often needed to sync your forked clone, for instance to open a new branch. Then the trouble begins. What can happen:

  • You started a new branch, but the main branch was not in sync.
  • You need to rebase and possibly there is a conflict.

People start rebasing but often pull in unrelated commits, causing other issues. In this post I want to share some of the methods I use to deal with such cases.

Opening a fresh branch

When you want to open a new pull request you want this to happen correctly. I assume here that you have already set up a development environment with your cloned git repository, ready to push commits.

To make sure your fork is in sync you need to fetch all upstream updates. We can do that with:

git fetch --all

or if it is just upstream, we can use

git fetch upstream

which is faster.

The next step is to checkout the target branch you want to create your pull request against. This can be main, master or dev, but sometimes this can be even different. In our example we use main.

git checkout main 

Now we want to reset main to the same state as upstream/main, as this is the branch we want to target the pull request to. We need to make sure there are no uncommitted files, as these will be removed!

git reset --hard upstream/main

If we omit the --hard flag, git will keep uncommitted changes for all differences between the old and new state. This can be handy if we want to revert commits, but not when starting a new branch.

From here we can push the local branch to our fork (if your fork’s main had diverged from upstream, you may need git push --force):

git push

Now we are at the same level as the target branch and we can open a new branch that is in sync with upstream. We choose a branch name that reflects the purpose of our pull request; in this example I’ll use test-pull-request as the branch name.

git checkout -b test-pull-request

This creates a new local branch; from here you can start coding and add your commits. When you are ready you can publish your branch to your fork (I use origin as the name for the cloned repository). From then on, every new commit can be pushed to origin as well. When we publish our new local branch, we should set the upstream branch too, so we can open a pull request later.

git push --set-upstream origin test-pull-request

When all commits are done and pushed (git push), we are ready to open a pull request.

Open a new pull request

When using GitHub it is advisable to open a pull request using the web UI or using a plugin in VS Code or another development environment. In most cases there is a template that needs to be filled in. Make sure the correct commits are shown and you have selected the correct target branch!
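As a sketch, you can also open the pull request from the command line with the GitHub CLI (assuming gh is installed and authenticated, main is the target branch, and you run it from your feature branch):

gh pr create --base main --title "Test pull request" --body "Description of the change"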

Rebasing and resolving merging issues

Sometimes we need to rebase. This can happen when the PR is too old and we need to sync with our target branch. Personally I prefer to force push the original commits on top of the rebased branch instead of adding a merge commit. Force pushing makes it easier to keep track of the commits in the pull request and avoids additional merge commits. To rebase correctly we first need to fetch the latest updates. We can use:

git fetch upstream

Now we make sure we are checked out to the correct branch.

git checkout test-pull-request

To rebase with our upstream target branch we start a rebase command.

git rebase upstream/dev

If you want to select the commits that should be included (for instance when you want to revert some commits), consider using the -i flag to start an interactive rebase.

git rebase -i upstream/dev

For each included commit git will try to rebase. If this fails, you need to correct, save and stage your files to tell git which changes should be kept. After staging you can continue rebasing:

git rebase --continue

Make sure you only make changes to commits that are directly related to the merging conflicts and reflect the wanted changes for that particular commit. Repeat this for the other commits (if any) till the rebase is finished successfully.

If it all becomes a mess, there is a bail-out: abort the rebase operation.

git rebase --abort

The next step is that we do NOT just pull and push commits now, but only force push the rebased commits. This can seem illogical, but what we want is to make sure that we push the exact local situation to our origin branch.

git push --force

Now our branch should be rebased, and in the pull request we should only see the commits of our PR, or the selected commits (when using the -i option).

Creating new clean commits from previous work

If you have a larger PR with multiple files and you would like to replace or revert existing commits without losing your work, you can use git reset.

With git log we can show the commits we have made on top of our branch. To replace these commits we need to reset to the commit from where we want to start again.

Say we have two commits we want to do over. Show them with git log.

commit 6090d321e3926ad9c5ffdd026f7c2fb046cdbbf2 (HEAD -> test2)
Author: you <you@example.com>
Date:   Wed May 24 14:19:24 2023 +0000

    commit 2

commit a0ad22921fb8f5797aeb4e414cf3403b80027a3d
Author: you <you@example.com>
Date:   Wed May 24 14:18:03 2023 +0000

    commit 1

commit abf08f66a4c7e01955213c228542884951d45a11 (upstream/dev, origin/test2, origin/dev, origin/HEAD, dev)
Author: user <user@users.noreply.github.com>
Date:   Wed May 24 01:38:16 2023 -0500

    Some commit upstream

Assuming we have no uncommitted files, we can reset to commit abf08f66a4c7e01955213c228542884951d45a11 (upstream/dev, origin/test2, origin/dev, origin/HEAD, dev).

git reset abf08f66a4c7e01955213c228542884951d45a11

Now you will see that the changes of the last two commits are unstaged files. You can stage the changes for your new commit, or, if you want, do this in multiple commits until all changes are staged and committed. Now, to replace the old commits with the new ones, we can force push them to our origin branch.

git push --force

Now your commits have been replaced with the new ones.

Temporarily save uncommitted work

Sometimes you might need to switch between branches, but you have uncommitted files. If pre-commit is used, it might not be possible to commit. Instead you can stash your changes on the stack. You can get your changes back later by popping them from the stack. This also works if you want to apply uncommitted changes to a different branch.

To stash your uncommitted work:

git stash

To pop back the changes:

git stash pop

There is a lot more you can do with git stash, but I will keep it brief here.

For more git commands you can use the online documentation.

Home Assistant + `haproxy` + `LetsEncrypt` + TransIP

This post is about my (positive) experience with haproxy as a reverse proxy for Home Assistant. Remote access is needed if you want to access Home Assistant from outside of your home network.

As a prerequisite we will also need to forward port 443 (HTTPS) in our router/firewall to the system used, and make sure we have a valid DNS A/AAAA or CNAME record set up pointing to the public IP address that is forwarded.

I wanted to have LetsEncrypt certificates enrolled using a DNS challenge. For this I have used a Raspberry Pi 3b board with Raspbian (Debian) installed. Further, I installed Docker and haproxy.

Installing the haproxy package is as simple as:

sudo apt update && sudo apt install haproxy


LetsEncrypt with DNS-01 challenge and TransIP API

My Internet domains are managed by TransIP, and DNS access has API support. In the portal you can create an API key and whitelist the IP addresses that you want to use for DNS access. We use this API to enroll LetsEncrypt certificates. A big advantage is that we can enroll wildcard certificates or certificates with internal names we do not want to expose on the Internet, and we do not need to open port 80!

For the enrollment of certificates with LetsEncrypt via the TransIP API I used the Docker image (rbongers/certbot-dns-transip) of Roy Bongers, which has all the needed functionality.

Setting up LetsEncrypt

First create the folders /etc/letsencrypt, /var/log/letsencrypt and /etc/transip.
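For example:

sudo mkdir -p /etc/letsencrypt /var/log/letsencrypt /etc/transip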

As root, create the file config.php in /etc/transip. Use the link below for the template:

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-config-php-L1-L17

You need to paste in the obtained TransIP API key and your TransIP username.

Make sure you secure your key file with sudo chmod 400 /etc/transip/config.php so that it is read-only for root.

To set up the certificate for the first time we create a bash script (I placed it in /usr/local/sbin):

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-certinit-bash-L1-L10

Replace the `--cert-name` and the wildcard domain (-d certname.example.com) with any domain name you own. You can use the -d parameter multiple times. For each domain a dns-challenge will be created.

Make sure the script is executable: chmod +x certinit.bash.

Now run certinit.bash. It will download the image and start a Docker container for the set-up. This is interactive; you will be asked some questions, like your email address.

root@docker01:/usr/local/sbin# certinit.bash
Unable to find image 'rbongers/certbot-dns-transip:latest' locally
latest: Pulling from rbongers/certbot-dns-transip
330ad28688ae: Pull complete
882df4fa64e9: Pull complete
07e271639575: Pull complete
2d60c5e17079: Pull complete
f54b294a6f71: Pull complete
6f27ea6ab430: Pull complete
0c8c5a3cd6a8: Pull complete
6436ec8cd157: Pull complete
350482e0cef8: Pull complete
fcb8169b6442: Pull complete
ba9658959877: Pull complete
b9d5ffb589b1: Pull complete
a368f8fc57ed: Pull complete
Digest: sha256:faec7bc102edf00237041fbce8030249fb55f300da76b637660384c353043bff
Status: Downloaded newer image for rbongers/certbot-dns-transip:latest
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): user@example.com (your email address)


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: N

Account registered.
Requesting a certificate for xxxxxx and yyyyy
Performing the following challenges:
dns-01 challenge for xxxxx
dns-01 challenge for yyyyy
Running manual-auth-hook command: /opt/certbot-dns-transip/auth-hook
Output from manual-auth-hook command auth-hook:
[2023-03-31 08:32:03.596791] INFO: Creating TXT record for _acme-challenge with challenge 'FfGM5VPKWKkwR-6ggUhlpoZlQ5-vPGK-JIy25tekb_E' [] []
[2023-03-31 08:32:06.782752] INFO: Waiting until nameservers (ns0.transip.net, ns1.transip.nl, ns2.transip.eu) are up-to-date [] []
[2023-03-31 08:32:08.625219] INFO: All nameservers are updated! [] []

Running manual-auth-hook command: /opt/certbot-dns-transip/auth-hook
Output from manual-auth-hook command auth-hook:
[2023-03-31 08:32:09.659195] INFO: Creating TXT record for _acme-challenge with challenge 'lP_hQVRODhVCiglLiNSZvNky9voGChnTyo407N_xRU4' [] []
[2023-03-31 08:32:12.628115] INFO: Waiting until nameservers (ns0.transip.net, ns1.transip.nl, ns2.transip.eu) are up-to-date [] []
[2023-03-31 08:32:13.659641] INFO: All nameservers are updated! [] []

Waiting for verification...
Cleaning up challenges
Running manual-cleanup-hook command: /opt/certbot-dns-transip/cleanup-hook
Output from manual-cleanup-hook command cleanup-hook:
[2023-03-31 08:32:16.459605] INFO: Cleaning up record _acme-challenge with value 'FfGM5VPKWKkwR-6ggUhlpoZlQ5-vPGK-JIy25tekb_E' [] []

Running manual-cleanup-hook command: /opt/certbot-dns-transip/cleanup-hook
Output from manual-cleanup-hook command cleanup-hook:
[2023-03-31 08:32:20.031682] INFO: Cleaning up record _acme-challenge with value 'lP_hQVRODhVCiglLiNSZvNky9voGChnTyo407N_xRU4' [] []


IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/xxxxxx/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/xxxxxx/privkey.pem
   Your certificate will expire on 202x-mm-ss. To obtain a new or
   tweaked version of this certificate in the future, simply run
   certbot again. To non-interactively renew *all* of your
   certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Your certificate will be created in /etc/letsencrypt/live/{certname}.

Set up auto enrollment

We want to auto-renew the certificate and update haproxy to use the new certificate. The haproxy certificate will be placed at /etc/haproxy/cert.pem. To enroll I made a bash script, certrenew.bash:

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-certrenew-bash-L1-L68

Make sure you adjust the basecert parameter in the script, place the script as certrenew in /usr/local/sbin, and make it executable with sudo chmod +x /usr/local/sbin/certrenew.

We need to run certrenew a first time to make sure the haproxy certificate is created.

To make sure it runs every day somewhere between 11 pm and midnight, we create a cron file (`/etc/cron.d/certrenew`) for it (see link).

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-certrenew-cron-L1-L9.

Restart cron: sudo systemctl restart cron

When the script executes, it will check whether the certificate expires within 30 days; in that case a new certificate is requested. If the process was successful it will install the new certificate and restart haproxy.
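To manually check the expiry date of the certificate haproxy is using (assuming the certificate is the first PEM block in the file):

openssl x509 -enddate -noout -in /etc/haproxy/cert.pem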

Now we are all set. Make sure you check the logs to see that the whole setup is working correctly.

Set up haproxy

As a last step we can now set up haproxy (/etc/haproxy/haproxy.cfg) using the new certificate.

You can use https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-haproxy-cfg-L1-L61 as a template for your configuration. You need to change the DNS names and the internal IP address of your backend. In my case I used a Raspberry Pi with an SD card and switched off logging to ensure the SD card lasts longer. If you have an SSD attached you can change the logging.

Make sure you also install /etc/haproxy/dhparam.pem (see the comment at the last line of the config file on how to obtain it).

In the example config I have enabled the stats page at https://{your_domain_name}/stats. You can use this page to see the stats. The acl network_allowed_src src setting is used to ensure that the page is only accessible from internal IP addresses. Make sure you fill in the correct IP ranges that should have access.

If you are ready you can test the config file with:

haproxy -f /etc/haproxy/haproxy.cfg -c

If that is okay, you are all set. Restart haproxy to load the new config using: sudo systemctl restart haproxy.

Now you can access Home Assistant from outside. Make sure to set up two-factor authentication to secure the access to your network.

Elro Connects - Home Assistant

I am working on an integration for Elro Connects. The integration should allow users with Elro Connects fire, water or CO alarms to integrate those in Home Assistant. If you own Elro Connects fire alarms and would like to test, you can add the Elro Connects integration with HACS.

Add the following repo to HACS: https://github.com/jbouwh/ha-elro-connects/. You need to restart Home Assistant after downloading the custom integration. After the restart you should be able to add the integration to Home Assistant.

For each alarm a siren entity is created. If you turn it on, a test alarm request will be sent. You can turn it off to silence a (test) alarm.

Let me know if you like this integration. If your device is not supported, or not working correctly, let me know as well!

InfluxDB 2 support

Omnikdatalogger v1.11.1 now has native support for InfluxDB v2 authentication. It is no longer needed to use v1 authentication. To enable it you need to configure the config parameters org, bucket, and token. If you use HACS via AppDaemon, make sure you add influxdb-client to the Python packages.

Additionally, you can now use an SSL connection.
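A hypothetical sketch of what the InfluxDB v2 settings could look like; the section name and ssl key below are assumptions on my part, only org, bucket, and token are named by the release, so check the release notes linked below for the exact names:

# Hypothetical config sketch; verify section/key names against the release notes
[output.influxdb]
org = my-org
bucket = omnik
token = <your-influxdb-v2-token>
ssl = true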

Read more about the new release:

https://github.com/jbouwh/omnikdatalogger/releases/tag/v1.11.1-1

Omnikdatalogger now supports a new HTTP method

Omnikdatalogger’s tcpclient supports a direct connection over port 8899 to fetch the data using Wouter van der Zwan’s library. Not all inverters support this way of connecting. Some inverters support collecting the inverter data over HTTP. With the new release the tcpclient will fall back to this HTTP method when it fails to fetch the data over port 8899. Omnikdatalogger will try to fetch http://{inverter_ip}:80/js/status.js and will try to extract the data. A new setting, http_only, in the plant-specific settings allows configuring tcpclient to use only this method.
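To check whether your inverter exposes this status page, you can fetch it manually (replace the example IP with your inverter’s address):

curl -s http://192.168.1.50/js/status.js | head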

A disadvantage of this method is that it provides fewer details about the inverter and misses info about the PV DC strings.