Home Assistant on Raspberry Pi 5 with NVMe PCIe SSD

Recently, Raspberry Pi 5 support was added to Home Assistant OS. The new model is twice as fast as its predecessor and adds new features like a PCIe bus. This started a discussion on how to use NVMe SSDs connected directly to the PCIe bus.

M.2 PCIe hats for Raspberry Pi 5

There are several PCIe hats you can buy, but there is no official Raspberry Pi hat yet. There is, however, an official Raspberry Pi 5 case and Active Cooler.

The Raspberry Pi 5 with the Active Cooler fits in the case if you remove the standard fan.

One option is to attach an SSD over USB 3. This works okay, but the performance is limited for NVMe PCIe SSDs, which have a higher throughput than SATA SSDs.

The Raspberry Pi 5 now brings a second alternative: connecting an NVMe SSD directly to the PCIe connector.

An M.2 hat exists that connects directly to the PCIe port and also fits inside the original Raspberry Pi 5 case, together with the original Raspberry Pi Active Cooler.

The Geekworm X1003 mounts on top of the Raspberry Pi 5 and Active Cooler, and even allows you to install the top cover (if that is desired).

Install an NVMe SSD

The Geekworm X1003 fits 2242 (42 mm) and 2230 (30 mm) size NVMe SSDs. There is a nut at the 2242 position, but you might need a soldering iron if you want to move the nut to the 2230 position.

At the moment most NVMe SSDs are 2280, and for that size you need a different hat, e.g. the Geekworm X1001 or X1002, but those will not fit into the original case. I found a cheap Samsung 2242 SSD at Amazon. This model uses a feature named HMB (Host Memory Buffer), which requires some extra attention that I will address later. After we have mounted the SSD (make sure the PCIe FFC cable is installed correctly) we can continue with the next step.

Preparing the installation

To install Home Assistant OS we use the imager that is installed by default on the Raspberry Pi OS image.

So to start we need an SD card with Raspberry Pi OS flashed on it; do not flash Home Assistant OS on the SD card.

We can prepare this using the official Raspberry Pi Imager. After the prepared SD card is installed in our Raspberry Pi 5 we can boot Raspberry Pi OS. When the OS has started we should be able to see if the NVMe SSD is recognized. To make sure the SSD works without issues we can use the dmesg command. If there are issues with your SSD you will see the errors in the boot log this command shows.
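
For example, to filter the boot log for NVMe and PCIe related messages:

sudo dmesg | grep -iE 'nvme|pcie'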

Identify possible NVMe SSD issues

In my case there were a few issues I needed to address. I needed to…

  • … disable PCIe power management (at least I thought that was a good idea).
  • … enable the PCIe port on the Raspberry Pi.
  • … enable gen3 mode.
  • … increase the CMA buffer to ensure enough contiguous memory for the Host Memory Buffer feature.

If you see no NVMe SSD popping up at all, that could mean that…

  • … the FFC cable between the board and the M.2 hat is not connected properly.
  • … your SSD is not probed correctly and you need to add PCIE_PROBE to your EEPROM config first.
~# sudo rpi-eeprom-config --edit
# Add the following line if using a non-HAT+ adapter:
PCIE_PROBE=1

Disable PCIe power management

We can disable PCIe power management when we know that the SSD is always on and we see errors like:

[22609.179027] nvme 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
[22609.179031] nvme 0000:01:00.0: device [1d79:2263] error status/mask=00009100/0000e000
[22609.179035] nvme 0000:01:00.0: [ 8] Rollover
[22609.179039] nvme 0000:01:00.0: [12] Timeout

To disable PCIe power management, as root edit /boot/firmware/cmdline.txt and add pcie_aspm=off to the command line.
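
Since cmdline.txt is a single line, one way to append the parameter from a shell (you can also simply edit the file with a text editor) is:

sudo sed -i '1 s/$/ pcie_aspm=off/' /boot/firmware/cmdline.txt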

Enable the PCIe port on the Raspberry Pi

Most likely this is needed in all cases. To enable the PCIe port we need to edit config.txt, which contains the firmware configuration. As root you can edit the file directly: /boot/firmware/config.txt

To enable PCIe, add the following lines:

dtparam=pciex1
dtparam=nvme

If you still have errors in your log you could also enable gen3.

Enable the PCIe gen3 mode on the Raspberry Pi

The PCIe gen3 mode is not officially supported, so only enable this if your SSD does not work correctly without it.

To enable it, as root edit config.txt and add the following line:

dtparam=pciex1_gen=3

Increase the CMA buffer

In case dmesg shows cma_alloc errors like:

[    0.514198] nvme nvme0: Shutdown timeout set to 8 seconds
[    0.516601] nvme nvme0: allocated 64 MiB host memory buffer.
[    0.519481] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519494] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519506] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519513] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519523] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519530] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519539] cma: cma_alloc: linux,cma: alloc failed, req-size: 2 pages, ret: -12
[    0.519546] cma: cma_alloc: linux,cma: alloc failed, req-size: 8 pages, ret: -12
[    0.519735] nvme nvme0: 4/0/0 default/read/poll queues
[    0.525784] nvme nvme0: Ignoring bogus Namespace Identifiers
[    0.531980]  nvme0n1: p1 p2 p3 p4 p5 p6 p7 p8
[    0.619473] brcm-pcie 1000120000.pcie: link up, 5.0 GT/s PCIe x4 (!SSC)
[    0.619497] pci 0001:01:00.0: [1de4:0001] type 00 class 0x020000
[    0.619514] pci 0001:01:00.0: reg 0x10: [mem 0xffffc000-0xffffffff]
[    0.619524] pci 0001:01:00.0: reg 0x14: [mem 0xffc00000-0xffffffff]
[    0.619533] pci 0001:01:00.0: reg 0x18: [mem 0xffff0000-0xffffffff]
[    0.619605] pci 0001:01:00.0: supports D1
[    0.619609] pci 0001:01:00.0: PME# supported from D0 D1 D3hot D3cold
[    0.631479] pci_bus 0001:01: busn_res: [bus 01-ff] end is updated to 01

I had these errors because the HMB (Host Memory Buffer) feature needed more than the standard configured CMA memory.

To increase the cma memory buffer I increased the default allocation of 64 MB to 96 MB.

As root we can edit config.txt and add the following line:

dtoverlay=cma,cma-96

It is possible to allocate a larger block, but we should not allocate more than needed. To see the free CMA memory, as root we can use:

# cat /proc/meminfo | grep -i cma
CmaTotal:          98304 kB
CmaFree:           15684 kB

Preparing Home Assistant OS on the NVMe SSD

When we have configured our SSD and have resolved any issues, we can flash the NVMe SSD with Home Assistant OS. We use the Raspberry Pi Imager that is installed by default on Raspberry Pi OS and can be found in the accessories menu. Note that you can set some options, such as WiFi and hostname, in advance. Other settings we need to set manually after the image has been flashed. Make sure the NVMe SSD is flashed correctly.

Adjusting boot order, kernel parameters and configuration

After flashing Home Assistant OS, we need to adjust the boot order and make sure Home Assistant OS uses the same config.txt and cmdline.txt as we prepared for Raspberry Pi OS. There are files in the image we need to adjust before we can boot from it, as for now it is not yet possible to apply these settings from the UI. To access these files we need to mount the right partition. In my case the NVMe drive was called nvme0n1. If you are not sure about the name of your drive, as root you can use the command fdisk -l.

~# fdisk -l
...
other drive info
...
Disk /dev/nvme0n1: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: SAMSUNG MZAL4256HBJD-00BL2
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DFEFDCBD-D15E-4E82-B88C-5E6B46166328

Device           Start       End   Sectors   Size Type
/dev/nvme0n1p1    2048    133119    131072    64M EFI System
/dev/nvme0n1p2  133120    182271     49152    24M Linux filesystem
/dev/nvme0n1p3  182272    706559    524288   256M Linux filesystem
/dev/nvme0n1p4  706560    755711     49152    24M Linux filesystem
/dev/nvme0n1p5  755712   1279999    524288   256M Linux filesystem
/dev/nvme0n1p6 1280000   1296383     16384     8M Linux filesystem
/dev/nvme0n1p7 1296384   1492991    196608    96M Linux filesystem
/dev/nvme0n1p8 1492992 500118158 498625167 237.8G Linux filesystem

To access config.txt and cmdline.txt we mount the first partition:

~# mkdir boot
~# mount -t vfat /dev/nvme0n1p1 ~/boot
~# cd boot
~# ls

Use the same settings that worked for Raspberry Pi OS, and we should be fine.
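
For example, if all of the tweaks above were needed, the lines carried over to the mounted boot partition would look like this (only copy what you actually needed for Raspberry Pi OS):

# Additions to config.txt
dtparam=pciex1
dtparam=nvme
dtparam=pciex1_gen=3
dtoverlay=cma,cma-96

# Appended to the existing single line in cmdline.txt
pcie_aspm=off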

Finally we change the boot sequence so the Raspberry Pi will boot from the SSD. The code below shows how to do that:

# Edit the EEPROM on the Raspberry Pi 5.
sudo rpi-eeprom-config --edit

# Change the BOOT_ORDER line to:
BOOT_ORDER=0xf461
# This will try SD card first, then NVMe and finally USB

# Add the following line if using a non-HAT+ adapter:
PCIE_PROBE=1

# Press Ctrl-O, then enter, to write the change to the file.
# Press Ctrl-X to exit nano (the editor).

For me it was not needed to add PCIE_PROBE. The order is read from right to left. So 0xf416 will first try 6=NVMe, then 1=SD card, then 4=USB and finally f=restart the loop. As you can see in the example, I used 0xf461 as the boot order, because I prefer to be able to boot easily from SD card, without the need to remove the NVMe SSD, if that is needed for rescue actions.

More documentation about the boot order options can be found here: https://www.raspberrypi.com/documentation/computers/raspberry-pi.html#BOOT_ORDER.

Boot Home Assistant from NVMe SSD

When the boot order includes the NVMe as the first valid boot option, we can try booting from the flashed NVMe SSD.

The Home Assistant instance should initialize at http://homeassistant.local:8123. It may take some minutes to initialize. If you have a suitable HDMI cable and a keyboard and mouse you can get access to the Home Assistant CLI.

To see if there are errors you can execute dmesg to see the boot log. To check for any PCIe runtime errors you can use ha host logs -f. When using the CLI from the console you can omit ha, so host logs -f or host logs should just work. If you set up SSH when flashing the image, you should also be able to use SSH to open the CLI.
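
For example:

# Over SSH to the Home Assistant CLI:
ha host logs -f
# From the console CLI the ha prefix can be omitted:
host logs -f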

If everything is fine, you can start setting up Home Assistant or restore from a backup. Make sure that when you restore from a backup the same supervisor version is used, or the restore will fail. To update to a specific version from the CLI use:

ha supervisor update --version 2024.03.0

To just update to the latest version:

ha supervisor update

Restoring from a backup can take quite some time if the backup file is large. The UI does not show much detail on the progress. To see some more detail you can check the supervisor logs. If anything went wrong you will be able to see it there.

ha supervisor logs

And that is all: Enjoy your Home Assistant installation on Raspberry Pi 5 and NVMe SSD!

Home Assistant + `haproxy` + `LetsEncrypt` + TransIP

This post is about my (positive) experience with haproxy as a reverse proxy for Home Assistant. Remote access is needed if you want to access Home Assistant from outside of your home network.

As a prerequisite we also need to forward port 443 (https) in our router/firewall to the system used, and make sure we have a valid DNS A/AAAA or CNAME record pointing to the public IP address that is forwarded.

I wanted to have LetsEncrypt certificates enrolled using a DNS challenge. For this I used a Raspberry Pi 3B board with Raspbian (Debian) installed. Further, I installed Docker and haproxy.

Installing the haproxy package is as simple as:

sudo apt update && sudo apt install haproxy


LetsEncrypt with DNS-01 challenge and TransIP API

My Internet domains are managed by TransIP, and DNS access has API support. In the portal you can create an API key and whitelist the IP addresses that you want to use for DNS access. We use this API to enroll LetsEncrypt certificates. A big advantage is that we can enroll wildcard certificates or certificates with internal names we do not want to expose on the Internet, and we do not need to open port 80!

For the enrollment of certificates with LetsEncrypt via the TransIP API I used the Docker image (rbongers/certbot-dns-transip) of Roy Bongers, which has all the needed functionality.

Setting up LetsEncrypt

First create the folders /etc/letsencrypt, /var/log/letsencrypt and /etc/transip.
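
For example:

sudo mkdir -p /etc/letsencrypt /var/log/letsencrypt /etc/transip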

As root, create the file config.php in /etc/transip. Use the link below for the template:

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-config-php-L1-L17

You need to paste in the obtained TransIP API key and your TransIP username.

Make sure you secure your key file with sudo chmod 400 /etc/transip/config.php so that it is readable only by root.

To set up the certificate for the first time we create a bash script (I placed it in /usr/local/sbin):

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-certinit-bash-L1-L10

Replace the `--cert-name` value and the wildcard domain (-d certname.example.com) with a domain name you own. You can use the -d parameter multiple times; for each domain a dns-01 challenge will be created.
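
For illustration only (the actual script in the gist wraps this in a docker run of the rbongers/certbot-dns-transip image), the relevant certbot arguments look roughly like this:

certbot certonly --manual --preferred-challenges dns \
  --manual-auth-hook /opt/certbot-dns-transip/auth-hook \
  --manual-cleanup-hook /opt/certbot-dns-transip/cleanup-hook \
  --cert-name example.com \
  -d 'example.com' -d '*.example.com'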

Make sure the script is executable: chmod +x certinit.bash.

Now run certinit.bash. It will download the image and start a Docker container for the set-up. This is interactive; you will be asked some questions, like your email address.

root@docker01:/usr/local/sbin# certinit.bash
Unable to find image 'rbongers/certbot-dns-transip:latest' locally
latest: Pulling from rbongers/certbot-dns-transip
330ad28688ae: Pull complete
882df4fa64e9: Pull complete
07e271639575: Pull complete
2d60c5e17079: Pull complete
f54b294a6f71: Pull complete
6f27ea6ab430: Pull complete
0c8c5a3cd6a8: Pull complete
6436ec8cd157: Pull complete
350482e0cef8: Pull complete
fcb8169b6442: Pull complete
ba9658959877: Pull complete
b9d5ffb589b1: Pull complete
a368f8fc57ed: Pull complete
Digest: sha256:faec7bc102edf00237041fbce8030249fb55f300da76b637660384c353043bff
Status: Downloaded newer image for rbongers/certbot-dns-transip:latest
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): user@example.com (your email address)


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: N

Account registered.
Requesting a certificate for xxxxxx and yyyyy
Performing the following challenges:
dns-01 challenge for xxxxx
dns-01 challenge for yyyyy
Running manual-auth-hook command: /opt/certbot-dns-transip/auth-hook
Output from manual-auth-hook command auth-hook:
[2023-03-31 08:32:03.596791] INFO: Creating TXT record for _acme-challenge with challenge 'FfGM5VPKWKkwR-6ggUhlpoZlQ5-vPGK-JIy25tekb_E' [] []
[2023-03-31 08:32:06.782752] INFO: Waiting until nameservers (ns0.transip.net, ns1.transip.nl, ns2.transip.eu) are up-to-date [] []
[2023-03-31 08:32:08.625219] INFO: All nameservers are updated! [] []

Running manual-auth-hook command: /opt/certbot-dns-transip/auth-hook
Output from manual-auth-hook command auth-hook:
[2023-03-31 08:32:09.659195] INFO: Creating TXT record for _acme-challenge with challenge 'lP_hQVRODhVCiglLiNSZvNky9voGChnTyo407N_xRU4' [] []
[2023-03-31 08:32:12.628115] INFO: Waiting until nameservers (ns0.transip.net, ns1.transip.nl, ns2.transip.eu) are up-to-date [] []
[2023-03-31 08:32:13.659641] INFO: All nameservers are updated! [] []

Waiting for verification...
Cleaning up challenges
Running manual-cleanup-hook command: /opt/certbot-dns-transip/cleanup-hook
Output from manual-cleanup-hook command cleanup-hook:
[2023-03-31 08:32:16.459605] INFO: Cleaning up record _acme-challenge with value 'FfGM5VPKWKkwR-6ggUhlpoZlQ5-vPGK-JIy25tekb_E' [] []

Running manual-cleanup-hook command: /opt/certbot-dns-transip/cleanup-hook
Output from manual-cleanup-hook command cleanup-hook:
[2023-03-31 08:32:20.031682] INFO: Cleaning up record _acme-challenge with value 'lP_hQVRODhVCiglLiNSZvNky9voGChnTyo407N_xRU4' [] []


IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/xxxxxx/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/xxxxxx/privkey.pem
   Your certificate will expire on 202x-mm-ss. To obtain a new or
   tweaked version of this certificate in the future, simply run
   certbot again. To non-interactively renew *all* of your
   certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Your certificate will be created in /etc/letsencrypt/live/{certname}.

Set up auto enrollment

We want to auto-renew the certificate and update haproxy to use the new certificate. The haproxy certificate will be placed at /etc/haproxy/cert.pem. To enroll I made a bash script, certrenew.bash:

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-certrenew-bash-L1-L68
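
As background, haproxy expects the certificate chain and private key combined in one PEM file, so the renewal step presumably does something equivalent to:

cat /etc/letsencrypt/live/{certname}/fullchain.pem \
    /etc/letsencrypt/live/{certname}/privkey.pem > /etc/haproxy/cert.pem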

Make sure you adjust the basecert parameter in the script, place it as certrenew in /usr/local/sbin, and make it executable with sudo chmod +x /usr/local/sbin/certrenew.
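
For example:

sudo cp certrenew.bash /usr/local/sbin/certrenew
sudo chmod +x /usr/local/sbin/certrenew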

We need to run certrenew a first time to make sure the haproxy cert is created.

To make sure it runs every day somewhere between 11pm and midnight we create a cron file (`/etc/cron.d/certrenew`) for it (see link).

https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-certrenew-cron-L1-L9.
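
A hypothetical entry along these lines would run it daily late in the evening (the gist contains the actual file):

# /etc/cron.d/certrenew (example)
17 23 * * * root /usr/local/sbin/certrenew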

Restart cron: sudo systemctl restart cron

When the script executes, it checks whether the certificate expires within 30 days; in that case a new certificate is requested. If the renewal was successful it will update the certificate and restart haproxy with the new certificate.
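
The 30-day check can be done with openssl; a sketch of what such a check looks like (not necessarily the exact code in the gist):

# 2592000 seconds = 30 days
if ! openssl x509 -checkend 2592000 -noout -in /etc/letsencrypt/live/{certname}/cert.pem; then
    echo "Certificate expires within 30 days, renewing..."
fi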

Now we are all set; make sure you check the logs to verify the whole setup is working correctly.

Set up haproxy

As a last step we can now set up haproxy (/etc/haproxy/haproxy.cfg) using the new certificate.

You can use https://gist.github.com/jbouwh/3b6042ed4ca1189e1f37d0f8ff7274e5#file-haproxy-cfg-L1-L61 as a template for your configuration. You need to change the DNS names and the internal IP address of your backend. In my case I used a Raspberry Pi with an SD card and switched off logging to ensure the SD card lasts longer. If you have an SSD attached you can change the logging.

Make sure you also install /etc/haproxy/dhparam.pem (see the comment at the last line of the config file on how to obtain it).
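
If you need to generate it yourself, a commonly used openssl command is (assuming 2048-bit parameters are acceptable; follow the comment in the config file if it differs):

sudo openssl dhparam -out /etc/haproxy/dhparam.pem 2048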

In the example config I have enabled the stats page at https://{your_domain_name}/stats. You can use this page to see the stats. The acl network_allowed_src src rule is used to ensure that the page is only accessible from internal IP addresses. Make sure you fill in the correct IP ranges that should have access.

If you are ready you can test the config file with:

haproxy -f /etc/haproxy/haproxy.cfg -c

If that is okay, you are all set. Restart haproxy to load the new config using: sudo systemctl restart haproxy.

Now you can access Home Assistant from outside. Make sure to set up two-factor authentication to secure the access to your network.

Elro Connects - Home Assistant

I am working on an integration for Elro Connects. The integration should allow users with Elro Connects fire, water or CO alarms to integrate those in Home Assistant. If you own Elro Connects fire alarms and would like to test, you could add the Elro Connects integration with HACS.

Add the following repo to HACS: https://github.com/jbouwh/ha-elro-connects/. You need to restart Home Assistant after downloading the custom integration. After the restart you should be able to add the integration to Home Assistant.

For each alarm a siren entity is created. If you turn it on, a test alarm request will be sent. You can turn it off to silence a (test) alarm.

Let me know if you like this integration. If your device is not supported, or not working correctly, let me know as well!

New gas monitoring feature

The latest version of Omnik Data Logger now supports the monitoring of gas consumption (using MQTT). Update your Home Assistant to 2021.9.x before you update Omnik Data Logger, otherwise MQTT will not work with Home Assistant.

The new Home Assistant Energy dashboard with gas monitoring

The unit of measurement for gas consumption has changed from m3 to m³ to be compliant with Home Assistant. Gas consumption per hour was m3/h and is now m³/h.

More energy coming soon: Gas!

With the latest beta release of Home Assistant 2021.9 the monitoring of gas consumption will become available. The current version of Omnikdatalogger (1.6.x) does not support this feature yet, but you can test it already with the latest beta version.

Be aware that this beta changes the unit of measurement of the gas entity. This impacts logging to InfluxDB, causing new data to be stored in a new measurement.

Omnik data logger DSMR P1 support

With the brand new release of Omnik data logger I have included support for electricity meters that are compliant with the Dutch Smart Meter Requirements (DSMR). With a simple FTDI P1 to USB serial adapter Omnik data logger now also publishes your smart meter data, including gas. With PVoutput this means we can also upload the consumed energy and calculate the net energy used/delivered.

The P1 adapter can be connected either directly to your device or over TCP. I used ser2net to be able to connect multiple sessions to my P1 meter. To make this possible you will need at least ser2net v3.5. I used a Docker container with ser2net installed (jsurf/rpi-ser2net:buster). Adding the max-connections option enables me to do some debugging while still logging my data.

/etc/ser2net.conf:

3333:raw:600:/dev/ttyUSB0:115200 NONE 1STOPBIT 8DATABITS -XONXOFF RTSCTS max-connections=3

If you publish your smart meter data to Home Assistant, this can replace the native dsmr integration included with Home Assistant. Through MQTT auto discovery Home Assistant will pick up the entities automatically. In the config file you can rename the entities for your smart meter. In addition, direct use, direct consumption and the real calculated consumption data will be published over MQTT and to InfluxDB.

HACS Default

Omnik data logger has now been added to the HACS default store. This means there is no need to add a custom repository any more. You can find Omnik data logger in the Automation section.

Make sure you have installed the AppDaemon Add-on from the official Home Assistant Add-on store.

TCPclient tested and working now

Thanks to Han Lubach the tcpclient could be tested and corrected. It seems this client, based on Wouter van der Zwan’s logger, is now usable with Omnik data logger as well.

Caching of manual inputs for Home Assistant

I found it annoying how Home Assistant recovers the manual input selects I am using for my automations. To enable a persistent state I have made a generic script that caches the state of these inputs to disk and restores them automatically when Home Assistant restarts.

The script is written in Python and is designed to work with AppDaemon.

The code shows how to use the Home Assistant integration API that comes with AppDaemon.

New release Omnik datalogger

Check https://GitHub.com/jbouwh/omnikdatalogger for the new release (0.91-beta) of Omnik datalogger.

The last days the omnik portal was down. In the meantime I found a second method to get access to your data. During the outage the portal at https://www.solarmanpv.com/portal was available the whole time. Here you can find your data as well. This proves omnikportal is just a skin and the data is stored at Solarman PV. I found an API and working code that gives access to the realtime information. This API seems to give more accurate and timely access to the most recent data that was logged. Unfortunately the API is not TLS encrypted and uses an MD5-hashed password login. Not very safe. Nevertheless this alternative API is a good alternative for the time being. In the main menu there is now a special page for the Omnik datalogger software.

Pluggable client

The new release comes with a pluggable client for logging. This makes it easy to change the logging code by just changing the configuration.

New client in development for local logging

For older Omnik inverters it is not possible to get the data straight from the inverter. Newer inverters listen at port 8899 and are able to respond directly. The data logger in the inverter sends an update about every 5 minutes to a fixed IP of a logging server in the Solarman datacenter. The IP address the inverter logs to is fixed and cannot be changed via the interface, but it is possible to intercept the message by simulating the server, using a NAT rule and a static route for that IP address in your network (see the sketch after the list below). The code to parse the logged data is already available to use, thanks to Woutrrr. The software will be able to listen on a TCP port for the rerouted data logger sessions. The code will be based on this project of t3kpunk. A tutorial on how to use this will appear on this site as soon as the code is ready for release. Local logging will give more detailed information than currently available using the APIs. Extra sensors will be:

  • AC Voltage*, Current and Frequency (3-phase if your model supports it)
  • DC Voltages and Currents for all strings (max 2)
  • The Temperature* of the inverter

* These values are submitted to PVoutput as well. Every new sensor will become available via the MQTT plugin.
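
As a rough sketch of the NAT interception mentioned above, with placeholder addresses and port (not the real Solarman endpoint), a DNAT rule on a Linux router could look like this:

# Redirect traffic the inverter sends to the (placeholder) logging server
# to a local host that runs the listener instead
sudo iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 10004 \
  -j DNAT --to-destination 192.168.1.50:10004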

DSMR integration

For output to pvoutput.org I have planned to make an integration for the Dutch Smart Meter (DSMR). An adapter from the P1 connector to a USB serial interface is generally available. Integrating makes it possible to calculate the real energy consumption. Of course all this new sensor information will be available to the MQTT output plugin as well.

Outage omnik portal

The last few days, the omnik portal, app and API have been unavailable. So data logging is not possible at the moment. I have found out Omnik data loggers log to the database of SolarmanPV. It seems you can still see your live output and history there.

The URL is https://www.solarmanpv.com/portal/LoginPage.aspx and you can use your omnik portal credentials. I am working to get the API working so the software can be updated. Unfortunately there is only a working unsecured API interface. There is a secure alternative, but an API key is needed for that. I am working on this.

Have some patience, I will come with a working update in the coming days.

Regards,

Jan

First (pre) release for the Omnik data logger

The first pre-release of the new Omnik data logger is now available. I would like to know if this works for you! If you have any comments or issues, let me know!

Release details

Version: 0.9-alpha (pre-release)

This pre-release brings a solution to integrate the actual state of your Omnik power inverter into your favorite home automation system. The script reads out omnikportal.com using your credentials and enables you to process the real-time data of your solar plant.
The real-time output can be forwarded to pvoutput.org or sent over MQTT to your favorite home automation project. The MQTT implementation supports Home Assistant automatic MQTT discovery.

You can run the script from the command line, use systemd, or make use of AppDaemon so you can integrate directly with Home Assistant.

Have fun! Let me know about your experience if you have any issues.

You can download the release here.