Using DS18B20 on RPi (python) w/bonus writing to InfluxDB

This is how I used 3 x DS18B20 digital temperature sensors wired to an RPi. I bought the versions sealed in a casing as I plan to put them outdoors. All data handling is done in Python, with readings written to InfluxDB and finally displayed in Grafana.

If you want to first familiarise yourself with Python and InfluxDB, see an earlier post.

Connect the sensors to the RPi as shown below:

Image Source: Scott Campbell https://www.circuitbasics.com/raspberry-pi-ds18b20-temperature-sensor-tutorial/

Sensors can be connected in parallel and no extra resistors are required (the single pull-up shown in the diagram serves all of them). I soldered onto the ribbon cable and used servo connectors to connect each sensor, which makes it easier to pass the cables through glands later on. See my final setup below:

Anyway, starting from a fresh install, set up the RPi:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python-influxdb

Enable the One-Wire interface for the DS18B20 by opening the below file:

sudo nano /boot/config.txt

And add the below to the bottom of the file (this enables the 1-Wire bus, by default on GPIO4):

dtoverlay=w1-gpio

Next exit (Ctrl + x) & reboot:

sudo reboot

Next let's see if the device is detected:

sudo modprobe w1-gpio
sudo modprobe w1-therm
cd /sys/bus/w1/devices
ls

In my case (I had only connected one sensor at this point; the others showed up once connected) the following is displayed:

28-3c01d60708e8 w1_bus_master1

Now enter the below, changing the X's to your own sensor address (or hit Tab to auto-complete):

cd 28-XXXXXXXXXXXX
cat w1_slave

The raw temperature reading output by the sensor will be shown as below:

4f 01 55 05 7f a5 81 66 3b : crc=3b YES
4f 01 55 05 7f a5 81 66 3b t=20937

Here the temperature reading is t=20937, which means a temperature of 20.937 degrees Celsius.

Great, so we are reading a single sensor fine. Let's create a Python file to do all of the above for us:

cd /home/pi
nano temp.py

Fill the file with the below, remembering to update:
– the IP of your InfluxDB instance along with the database details
– the addresses of your DS18B20 sensors

import os
import glob
import time

os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

from influxdb import InfluxDBClient
client = InfluxDBClient(host='192.168.1.XXX', port=8086)
#client.get_list_database()
client.switch_database('YOUR_DATABASE')

sensor_1 = '/sys/bus/w1/devices/28-3c01d60708e8/w1_slave'
sensor_2 = '/sys/bus/w1/devices/28-3c01d60711da/w1_slave'
sensor_3 = '/sys/bus/w1/devices/28-3c01d6072d92/w1_slave'

sensor_1_t = 0
sensor_2_t = 0
sensor_3_t = 0

def read_temp(sensor):
    # Read the raw output from the kernel's w1_therm interface
    f = open(sensor, 'r')
    lines = f.readlines()
    f.close()

    # The first line ends in 'YES' when the CRC check passed;
    # re-read after a short wait if the reading was bad
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        f = open(sensor, 'r')
        lines = f.readlines()
        f.close()

    # The second line holds the reading in millidegrees, e.g. 't=20937'
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos+2:]
        temp_c = float(temp_string) / 1000.0
        return temp_c

sensor_1_t = read_temp(sensor_1)
sensor_2_t = read_temp(sensor_2)
sensor_3_t = read_temp(sensor_3)

print("Sensor 1 (Inside Shed): " + str(sensor_1_t))
print("Sensor 2 (Outside): " + str(sensor_2_t))
print("Sensor 3 (Soil): " + str(sensor_3_t))

json_body = [
    {
        "measurement": "YOUR_MEASUREMENT",
        "tags": {
            "Device": "YOUR_DEVICE",
            "ID": "YOUR_ID"
        },
        "fields": {
            "i_temp": sensor_1_t,
            "o_temp": sensor_2_t,
            "s_temp": sensor_3_t
        }
    }
]
client.write_points(json_body)

Exit nano with Ctrl + x, hitting Y to save. Make the file executable:

chmod +x temp.py

Run the python file:

python temp.py

Data will be written to your database and the readings printed to the console. Okay, now let's get it to run and log every 15 minutes:

crontab -e
*/15 * * * * /usr/bin/python /home/pi/temp.py

Ctrl + x to exit

I then set up a Grafana display to show the sensors (all in the same location for now).

Settings used for above display.

That’s it!

Resources I used:
https://www.circuitbasics.com/raspberry-pi-ds18b20-temperature-sensor-tutorial/

Using the integral function on Grafana (convert Watt to kWh)

After fighting with this function for longer than I'd like to admit, I finally managed to get it working.

I use a single stat visualisation and the below queries to give me energy usage in watt-hours (Wh) from my data, which is stored in watts. Each panel uses a relative time over-ride, and the GROUP BY window is deliberately larger than the panel's time range so that the query returns a single value.

1hr Usage: (Relative time over-ride = 1h)

SELECT integral("Energy_Usage",1h) FROM "esp" WHERE ("Device" = 'esp_03') AND $timeFilter GROUP BY time(3h) 

24hr Usage: (Relative time over-ride = 24h)

SELECT integral("Energy_Usage",1h) FROM "esp" WHERE ("Device" = 'esp_03') AND $timeFilter GROUP BY time(3d) 

7 Day Usage: (Relative time over-ride = 7d)

SELECT integral("Energy_Usage",1h) FROM "esp" WHERE ("Device" = 'esp_03') AND $timeFilter GROUP BY time(21d) 
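If it helps to see what the integral() function is doing, below is a minimal Python sketch (with made-up power samples, not my real data): integral("Energy_Usage",1h) is effectively trapezoidal integration of the watt readings with time expressed in hours, which yields watt-hours.

# Minimal sketch of integral(value, 1h): trapezoidal area under the
# power curve with time in hours, giving energy in watt-hours.
# The samples below are made up for illustration.
samples = [
    (0,    100.0),  # (seconds since start, watts)
    (900,  120.0),
    (1800, 110.0),
    (2700, 130.0),
    (3600, 125.0),
]

energy_wh = 0.0
for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
    dt_hours = (t1 - t0) / 3600.0            # seconds -> hours
    energy_wh += (p0 + p1) / 2.0 * dt_hours  # average power * time

print("Energy used: %.1f Wh" % energy_wh)    # ~118.1 Wh over this hour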

That’s it!

Backup & Restore Grafana

I had the full TICK stack and Grafana running on an RPi4 for a couple of months without issue, until suddenly CPU usage went through the roof and functionality suffered (caused by InfluxDB, reason unknown), so I needed to do a full reinstall. At this point I decided to put Grafana on a separate machine (an RPi3). Below is how to export the Grafana configuration and import it onto a different machine.

Export the old config by copying the below files to an external USB drive:

/var/lib/grafana/grafana.db
/etc/grafana/grafana.ini

After installing Grafana on the new machine:
Note: You can upgrade to the highest minor release of your current Grafana version; I upgraded from 6.3 to 6.7.3. All versions viewable here.

sudo apt update
sudo apt upgrade
sudo apt-get install -y adduser libfontconfig1
wget https://dl.grafana.com/oss/release/grafana-rpi_6.7.3_armhf.deb
sudo dpkg -i grafana-rpi_6.7.3_armhf.deb
sudo systemctl unmask grafana-server.service
sudo systemctl start grafana-server
sudo systemctl enable grafana-server.service
sudo reboot

Insert the USB drive into the new machine and copy the configuration files back:

cd usb-drive/
sudo cp grafana.db /var/lib/grafana/
sudo cp grafana.ini /etc/grafana/
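If Grafana fails to start after the copy, check file ownership. Copying as root can leave grafana.db owned by root, while the Grafana service (assuming the standard package install, which runs as the grafana user) needs to own it:

sudo chown grafana:grafana /var/lib/grafana/grafana.db
sudo systemctl restart grafana-server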

The only other adjustment I had to make was updating the Grafana data source URL from the previous localhost to the InfluxDB server address, since they were now on different machines.

That’s it!

PiHole logging to InfluxDB & Grafana Dash

Building on the work of others before me, below you will find a tutorial to get PiHole logging to InfluxDB using a Python script, and from there onto a Grafana dashboard. All required code is available on my GitHub.

SSH into your PiHole: ssh pi@xxx.xxx.xxx.xxx and run the below:

Install python dependencies:

sudo apt-get install python-influxdb

Create the below Python file:

mkdir -p /home/pi/influx_scripts
sudo nano /home/pi/influx_scripts/piholestats.py

#! /usr/bin/python

# History:
# 2016: Script originally created by JON HAYWARD: https://fattylewis.com/Graphing-pi-hole-stats/
# 2016 (December) Adapted to work with InfluxDB by /u/tollsjo
# 2016 (December) Updated by Cludch https://github.com/sco01/piholestatus
# 2020 (March) Updated by http://cactusprojects.com/pihole-logging-to-influxdb-&-grafana-dash

import requests
import time
from influxdb import InfluxDBClient

HOSTNAME = "pihole" # Pi-hole hostname to report in InfluxDB for each measurement
PIHOLE_API = "http://192.168.1.XXX/admin/api.php"
INFLUXDB_SERVER = "192.168.1.XXX" # IP or hostname to InfluxDB server
INFLUXDB_PORT = 8086 # Port on InfluxDB server
INFLUXDB_USERNAME = ""
INFLUXDB_PASSWORD = ""
INFLUXDB_DATABASE = "dev_pihole"
DELAY = 10 # seconds (not used when run as a one-shot cron job)

def send_msg(domains_blocked, dns_queries_today, ads_percentage_today, ads_blocked_today):

    json_body = [
        {
            "measurement": "piholestats." + HOSTNAME.replace(".", "_"),
            "tags": {
                "host": HOSTNAME
            },
            "fields": {
                "domains_blocked": int(domains_blocked),
                "dns_queries_today": int(dns_queries_today),
                "ads_percentage_today": float(ads_percentage_today),
                "ads_blocked_today": int(ads_blocked_today)
            }
        }
    ]

    # InfluxDB host, port, username, password, database
    client = InfluxDBClient(INFLUXDB_SERVER, INFLUXDB_PORT, INFLUXDB_USERNAME, INFLUXDB_PASSWORD, INFLUXDB_DATABASE)
    # client.create_database(INFLUXDB_DATABASE) # Uncomment on first run if the database does not already exist
    client.write_points(json_body)

api = requests.get(PIHOLE_API) # URI to pihole server api
API_out = api.json()

#print (API_out) # Print out full data, there are other parameters not sent to InfluxDB

domains_blocked = (API_out['domains_being_blocked'])#.replace(',', '')
dns_queries_today = (API_out['dns_queries_today'])#.replace(',', '')
ads_percentage_today = (API_out['ads_percentage_today'])#
ads_blocked_today = (API_out['ads_blocked_today'])#.replace(',', '')

send_msg(domains_blocked, dns_queries_today, ads_percentage_today, ads_blocked_today)

Save and Exit.
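It's worth running the script once by hand before scheduling it, so any connection or API errors show up in the console:

python /home/pi/influx_scripts/piholestats.py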

I have the file run as a cron job every minute. Others set it up as a service, but a cron job works just fine for me:

crontab -e
*/1 * * * * /usr/bin/python /home/pi/influx_scripts/piholestats.py

We need to create the Influx database next. I did this through the Chronograf web interface, but it can be added through the terminal if required:

influx
create database dev_pihole
exit

Now onto Grafana Dash:

Add the “dev_pihole” database to the Grafana Data Sources list.

Next go to “Import dashboard” and paste in the JSON code from my GitHub. I tweaked a previous dashboard slightly.

All done!

OpenWRT logging to InfluxDB & Grafana Dash

Building on the work of others before me, below you will find a complete tutorial to get OpenWRT logging to InfluxDB using the collectd plugin. All required code is available on my GitHub.

SSH into your router console: ssh root@xxx.xxx.xxx.xxx and run the below:

opkg update
opkg install luci-app-statistics collectd collectd-mod-cpu \
collectd-mod-interface collectd-mod-iwinfo \
collectd-mod-load collectd-mod-memory collectd-mod-network collectd-mod-uptime collectd-mod-thermal collectd-mod-openvpn collectd-mod-dns collectd-mod-wireless
/etc/init.d/luci_statistics enable
/etc/init.d/collectd enable

Go to the router web interface and there will be a new “Statistics” tab. It's mostly set up already, but some quick configuration is needed (also see the screenshot below):

  • Go to Statistics -> Setup -> add the ‘Hostname’ field and populate it. (it doesn't exist by default for some reason)
  • Go to Statistics -> Setup -> Output plugins -> add the details of your InfluxDB server. (leave the port as 25826)

We are finished with the router now. I rebooted it, though I'm not sure that was 100% necessary.

Next SSH into your InfluxDB console: ssh xxx@xxx.xxx.xxx.xxx

Create the file /usr/local/share/collectd/types.db (contents available on my GitHub):

sudo nano /usr/local/share/collectd/types.db

We now need to enable the “collectd” plugin in InfluxDB config:

sudo nano /etc/influxdb/influxdb.conf

Configure it so it is the same as below:

[[collectd]]
   enabled = true
   bind-address = ":25826"
   database = "dev_collectd"
   retention-policy = ""
  #
  # The collectd service supports either scanning a directory for multiple types
  # db files, or specifying a single db file.
   typesdb = "/usr/local/share/collectd/types.db"
  #
   security-level = "none"
   auth-file = "/etc/collectd/auth_file"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
   batch-size = 5000

  # Number of batches that may be pending in memory
   batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
   batch-timeout = "10s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
   read-buffer = 0

  # Multi-value plugins can be handled two ways.
  # "split" will parse and store the multi-value plugin data into separate measurements
  # "join" will parse and store the multi-value plugin as a single multi-value measurement.
  # "split" is the default behavior for backward compatibility with previous versions of influxdb.
  # parse-multivalue-plugin = "split"

Exit & Save.

Add the new database in InfluxDB. I did this through the Chronograf web interface, but it can be added through the terminal if required:

influx
create database dev_collectd
exit

Restart InfluxDB to activate the new config:

sudo service influxdb restart
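Once metrics start arriving (give it a minute or two), you can sanity-check the database from Python using the same python-influxdb client as in the earlier posts. A quick sketch, with the IP as a placeholder for your InfluxDB server:

# Quick check that the router's collectd metrics are landing in InfluxDB.
# Each collectd plugin (cpu, load, interface, ...) shows up as a measurement.
from influxdb import InfluxDBClient

client = InfluxDBClient(host='192.168.1.XXX', port=8086)  # your InfluxDB IP
client.switch_database('dev_collectd')

for measurement in client.get_list_measurements():
    print(measurement['name'])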

Now onto Grafana Dash:

Add the “dev_collectd” database to the Grafana Data Sources list.

Next go to “Import dashboard” and paste in the JSON code from my GitHub. I tweaked a previous dashboard slightly.

All done!

References I used:
https://blog.christophersmart.com/2019/09/09/monitoring-openwrt-with-collectd-influxdb-and-grafana/
https://wiki.opnfv.org/display/fastpath/Installing+and+configuring+InfluxDB+and+Grafana+to+display+metrics+with+collectd

Notes on what doesn’t work:
– Can't see the number of connected wireless devices.
– OpenVPN stats are also not working.
It's on the to-do list if I can get this going again.

InfluxDB Ver 1.x Backup Database

Note: This was implemented for InfluxDB Version 1.6. I have since upgraded to InfluxDB Version 2.x and a different method is required for this version.

It makes sense to periodically back up InfluxDB to an external drive in case of corruption of the onboard memory. I am using a USB memory stick.

A simple cronjob can take care of this (every night at 2am). Open crontab:

sudo crontab -e

and insert the below line: (change for your storage device)

0 2 * * * influxd backup -portable /media/usb/drive

Backup names start with the date they were generated, but this can get messy after a few weeks, so long term it's better to run a backup script that puts backups in individual directories and catches errors etc. Create a Python file for this using the below example, and update crontab -e instead to:

sudo crontab -e
0 2 * * * python /home/pi/influx_scripts/influx_backup.py

Create our Python backup file:

mkdir -p /home/pi/influx_scripts
nano /home/pi/influx_scripts/influx_backup.py

import os
from datetime import date

# Name each backup directory after today's date, e.g. 2020_04_25
today = date.today()

d1 = today.strftime("%Y_%m_%d")
print("Date", d1)

command = "mkdir /media/usb-backup/" + d1
#print(command)
os.system(command)

# Portable backup of all InfluxDB databases into the dated directory
command = "influxd backup -portable /media/usb-backup/" + d1
os.system(command)

# Back up the Kapacitor database alongside it
command = "kapacitor backup /media/usb-backup/" + d1 + "/kapacitor.db"
os.system(command)

# Remove backup directories older than 7 days
command = "sudo find /media/usb-backup/* -mtime +7 -type d -exec rm -rf {} \\;"
os.system(command)

os.system("echo Backups Done!")

You can keep an eye on the USB memory stick usage with the below snippet of shell script, which can be logged to InfluxDB; an Influx alert then warns if it is getting close to capacity. (The backup script above already deletes all backups over 7 days old.)

DIRECTORY="/media/usb/drive"
if [ -d "$DIRECTORY" ]; then
    usb_mem_usage=$(du -s $DIRECTORY | awk 'NR==1{print $1}')
else
    usb_mem_usage="-1"
fi
echo $usb_mem_usage
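For completeness, here is a rough sketch of how that reading could be pushed to InfluxDB from Python, in the same style as the earlier scripts. The database and measurement names below are placeholders, not my actual setup:

# Hypothetical sketch: log the USB drive usage (in KB, as reported by du -s)
# to InfluxDB so an alert can watch it. Names below are placeholders.
import subprocess
from influxdb import InfluxDBClient

try:
    output = subprocess.check_output(['du', '-s', '/media/usb/drive'])
    usb_mem_usage = int(output.split()[0])  # first column is the size in KB
except subprocess.CalledProcessError:
    usb_mem_usage = -1  # directory missing, same convention as above

client = InfluxDBClient(host='192.168.1.XXX', port=8086)
client.switch_database('YOUR_DATABASE')
client.write_points([{
    "measurement": "usb_backup",
    "fields": {"usage_kb": usb_mem_usage}
}])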

All done!

Resources I used:
– https://stackoverflow.com/questions/31389483/find-and-delete-file-or-folder-older-than-x-days

Setup HTTPS for Grafana

By default Grafana operates over HTTP, but for added security you can run it over HTTPS. For my use case I am using a self-generated certificate, since I am not using a public domain.

Generate the keys (key.pem and cert.pem files will be generated; the below creates a self-signed certificate with no passphrase, valid for 10 years):

openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem

Put the keys in the /home/pi/ directory. It did not work for me in the /etc/grafana/ directory.

sudo mv cert.pem /home/pi/
sudo mv key.pem /home/pi/

Change permissions of the keys:

sudo chmod -R 777 /home/pi/cert.pem
sudo chmod -R 777 /home/pi/key.pem 

Edit Grafana Config File:

sudo nano /etc/grafana/grafana.ini

Ensure the server protocol is updated and the key locations listed:

[server]
# Protocol (http, https, socket)
protocol = https

# https certs & key file
cert_file = /home/pi/cert.pem
cert_key = /home/pi/key.pem

Reboot the system and all done!
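Once Grafana is back up you can verify the certificate is being served using curl (the -k flag accepts a self-signed certificate):

curl -k https://localhost:3000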

Grafana Setup on RPi Zero

So you will have InfluxDB installed and data stored in the database; now we are going to visualize this data in Grafana. Click on the images to see the detail.

Install Grafana by:

wget https://raw.githubusercontent.com/trashware/grafana-rpi-zero/master/grafana_6.0.1_armhf.deb
sudo apt-get install adduser libfontconfig
sudo dpkg -i grafana_6.0.1_armhf.deb
sudo update-rc.d grafana-server defaults
sudo service grafana-server start

Grafana will now be running, so in a browser you can navigate to the IP of your device on port 3000, for example 192.168.1.2:3000. The default user and password are both admin. We will create our first graph later on, but for now back to the Raspberry Pi.

After a reboot, check whether Grafana starts up like it should (mine didn't):

service grafana-server status

If it does not show as “active (running)” then run the below:

sudo systemctl enable grafana-server.service

Okay, now let's start creating a graph. In a browser, go to the device (for example 192.168.1.2:3000) and log in; the default user and password are both admin.

Now we need to add a database: click on the cog wheel, select Data Sources and then click “Add data source”.

Set up your database like the below. rpi_01 is the name of the database I created in the previous tutorial. Then click Save & Test. Everything should work.

Now let's create a graph: go to Dashboard -> Add Panel (top right area) -> Choose Visualization -> Graph. Set up the four settings tabs like mine below:

Now you will have a single graph like the top graph of mine below. Read on to understand how to efficiently show the data.

The above graphs are all showing the same data, but the top graph is by far the easiest to read. It displays a moving average (10 samples) of the mean of the data. The middle graph displays a moving average (10 samples) of the distinct data values. The bottom chart just shows the distinct values.

Another important setting is the Group By. Grafana only fetches as much data as it needs if you leave the Group By as time($__interval); otherwise it will fetch far more data than required over long time ranges, and visualizations may fail to load.
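For reference, the InfluxQL that Grafana generates for the top graph looks something like the below; the measurement and field names here are illustrative, yours will differ:

SELECT moving_average(mean("temperature"), 10) FROM "your_measurement" WHERE $timeFilter GROUP BY time($__interval)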

Resources Used:
https://www.circuits.dk/install-grafana-influxdb-raspberry/
https://www.neteye-blog.com/2017/02/how-to-tune-your-grafana-dashboards/