TTT: Temperature-based Tea Timer

Winter is coming…well…sort of. The season for hot tea is getting closer every day now. The color does not matter as long as a cup of tea makes the moment more hyggelig. Different teas need to steep for different amounts of time. Green tea usually tastes better if it steeps no longer than two minutes. Black tea may steep longer, depending on the desired strength. Herbal and fruit teas may steep even longer, and why take the peppermint out of the cup at all?

Tea timers come in handy. So why not create one using an Arduino? Simply counting elapsed time would be trivial, though. It is far more interesting to use the tea's temperature to judge whether the tea is ready.


Arduino Pro Mini 5 V
Monochrome OLED display
TMP36 temperature sensor
Piezo Buzzer
Some cables


Arduino Pro Mini   OLED Display   TMP36        Piezo Buzzer
5 V                5 V            5 V          -
A0                 -              middle leg   -
Pin 8              -              -            +


The Arduino sketch is pretty straightforward. In the loop the temperature is measured every second. The temperature is displayed on the OLED display along with the seconds passed since startup. If the desired temperature is reached, the display shows a message and the buzzer plays a melody. As a very simple safety measure the time passed is taken into account as well…
The temperatures were measured with a cup of hot water and are to be considered experimental. The measured temperatures depend on various parameters and will vary in other environments.
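For reference, the TMP36 maps temperature linearly to voltage (0.5 V at 0 °C, 10 mV per degree), so the conversion the sketch has to perform can be sketched in a few lines. This is only an illustration in Python, assuming the 10-bit ADC and 5 V reference of the Pro Mini 5 V used here:

```python
def tmp36_celsius(adc_reading, vref=5.0):
    """Convert a 10-bit ADC reading of a TMP36 into degrees Celsius."""
    # TMP36: 0.5 V offset at 0 degrees C, 10 mV per degree
    voltage = adc_reading * vref / 1024.0
    return (voltage - 0.5) * 100.0

print(tmp36_celsius(154))  # about 25 degrees C for a reading of 154
```

On the Arduino itself the same arithmetic is done on the result of analogRead(A0).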

The sketch can be uploaded to the Arduino using the Arduino IDE.

As suggested by deloarts, the Arduino sketch can be found on GitHub:

The Case

The temperature sensor should be close to the water. An idea which came up while working on this prototype is to create some kind of floating bubble with the display in the upper part and the temperature sensor in the lower half. TBD…

Prototype of the temperature tea timer

Raspberry Pi 3 JukeBox with RFID Music Selection and Gesture Control

A couple of weeks ago friends mentioned a nice project idea. Their daughter is quite young and already a huge music fan. Until she is old enough to use CDs or small MP3 players (or whatever is en vogue by then) she could use RFID-tagged objects to select her choice of music in a simple way.
The idea of an RFID-controlled Raspberry Pi 3 music player is not new. Several examples like this cool-looking music robot already exist. So here I want to add the description of my little prototype JukeBox, which uses gesture control to adjust the volume.


Raspberry Pi 3 with Jessie
Speaker with 3.5 mm jack
RFID Reader and Cards/Tags
APDS-9960 Gesture Control Chip


RFID Reader

RFID Reader   Pin #   Pin name
IRQ           -       not connected
GND           any     Ground
3.3V          1       3.3V

Gesture Control Sensor

Board Pin   Name   Remarks       Pin #   RPi Function
1           VIN    +3.3V Power   1       3.3V
2           GND    Ground        6       GND
3           SCL    Clock         5       BCM 3 (SCL)
4           SDA    Data          3       BCM 2 (SDA)
5           INT    Interrupt     16      BCM 23

Setting up the Sound

sudo raspi-config

In the advanced options select the audio settings and set the audio output to the 3.5 mm audio jack.
In /boot/config.txt the parameter dtparam=audio=on must not be commented out.


Adjusting the volume via the command line is possible with

amixer cset numid=1 -- 80%

See this page for more information on using audio on a Raspberry Pi.


A simple python script launched from /etc/rc.local controls the music being played. It uses the MFRC522 library for reading RFID tags and the VLC python bindings for playing music.


To use the MFRC522 python library first enable SPI and install the SPI library.

sudo raspi-config
# Interfacing Options > P4 SPI > enable

sudo apt-get install git python-dev --yes
git clone
cd SPI-Py
sudo python install
cd ..
git clone

Once the libraries are installed RFID tags can be read with the example program:

cd MFRC522-python
sudo python

These RFID tag IDs are used in the example python script below to select the desired MP3s.
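For illustration, the four UID bytes delivered by the reader can be joined into exactly such a dictionary key. A minimal sketch (the UID values below are placeholders, not real tags):

```python
def uid_to_key(uid):
    """Join the first four UID bytes into the 'a-b-c-d' dictionary key."""
    return '-'.join(str(b) for b in uid[:4])

mp3dict = {'123-234-456-678': 'A.mp3'}  # placeholder IDs as in the script below
print(mp3dict.get(uid_to_key([123, 234, 456, 678, 99])))  # -> A.mp3
```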


For playing MP3s I used the VLC python bindings. Numerous other options exist as well, but I chose VLC because of its documented API. The python binding can be found in the VLC git repository. Simply place the file next to your own python script.

Gesture Control with APDS-9960

For detecting gestures with the APDS-9960 sensor I found these sources on github:

The first repository provides a setup script for the library. The second repository contains an example python script for detecting gestures.

Python Scripts

Adjusting the Volume with Gesture Control

A simple way of adjusting the volume is to run a python script dedicated to detecting gestures in the background. Such a script can be launched from /etc/rc.local. The volume is adjusted with a system call.
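The core of that background script boils down to clamping the volume and issuing one amixer call; a minimal sketch (the numid matches the amixer command shown earlier):

```python
def adjust_volume(volume, delta):
    """Clamp the new volume to 0..100 % and build the amixer command."""
    volume = max(0, min(100, volume + delta))
    cmd = 'amixer cset numid=1 -- %d%%' % volume
    # on the Pi itself the command would be run via os.system(cmd)
    return volume, cmd

print(adjust_volume(50, 10))  # -> (60, 'amixer cset numid=1 -- 60%')
```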

import os
import time

from apds9960.const import *
from apds9960 import APDS9960
import RPi.GPIO as GPIO
import smbus

port = 1
bus = smbus.SMBus(port)
apds = APDS9960(bus)

def intH(channel):
    print('INTERRUPT')

GPIO.setmode(GPIO.BOARD)
GPIO.setup(7, GPIO.IN)

dirs = {
    APDS9960_DIR_NONE: "none",
    APDS9960_DIR_LEFT: "left",
    APDS9960_DIR_RIGHT: "right",
    APDS9960_DIR_UP: "up",
    APDS9960_DIR_DOWN: "down",
    APDS9960_DIR_NEAR: "near",
    APDS9960_DIR_FAR: "far",
}

volume = 50   # 0..100 %
def adjustVolume(value):
    global volume
    volume += value
    # clamp to the valid range
    if volume < 0:
        volume = 0
    elif volume > 100:
        volume = 100
    print('Adjust volume to ' + str(volume) + ' %')
    cmd = 'amixer cset numid=1 -- ' + str(volume) + '%'
    os.system(cmd)

def run():
    # Add interrupt event: falling edge
    GPIO.add_event_detect(7, GPIO.FALLING, callback = intH)
    apds.enableGestureSensor()

    while True:
        time.sleep(0.5)
        if apds.isGestureAvailable():
            motion = apds.readGesture()
            gesture = dirs.get(motion, "unknown")

            if gesture == 'up':
                adjustVolume(10)
            elif gesture == 'down':
                adjustVolume(-10)

try:
    print('Gesture Control')
    print('Press Ctrl-C to stop.')
    run()
except KeyboardInterrupt:
    print('Ctrl+C captured, ending read.')
    GPIO.cleanup()

Playing Music with RFID Tags

import vlc
import RPi.GPIO as GPIO
import MFRC522
import datetime
import os
import time

MIFAREReader = MFRC522.MFRC522()

mp3path = '/home/pi/Music/'
mp3dict = {
    '123-234-456-678' : 'A.mp3',  # tag 1
    '123-234-456-679' : 'B.mp3',  # card
    '123-234-456-670' : 'C.mp3'   # tag 2
}

PLAYERS = {}  # maps a tag UID to its vlc.MediaPlayer instance
isPlaying = False
continue_reading = True
currentUID = '-1'
lastUID = '-1'


volume = 50 # 0..100 %
def adjustVolume(value):
    global volume
    volume += value
    # clamp to the valid range
    if volume < 0:
        volume = 0
    elif volume > 100:
        volume = 100
    print('Adjust volume to ' + str(volume) + ' %')
    os.system('amixer cset numid=1 -- ' + str(volume) + '%')

def isMP3playing():
    global isPlaying
    print('Is MP3 playing? ' + str(isPlaying))
    return isPlaying

def playMP3(currentUID):
    global isPlaying, lastUID
    if not isMP3playing() and str(currentUID) != '-1':
        print('Play MP3 ' + mp3path + mp3dict[currentUID])
        p = vlc.MediaPlayer(mp3path + mp3dict[currentUID])
        lastUID = currentUID
        PLAYERS[currentUID] = p
        isPlaying = True
        print('Playing: ' + str(lastUID))
    else:
        print('Error: Play MP3 ' + str(currentUID))

def pauseMP3(currentUID):
    global isPlaying
    if isMP3playing() and str(currentUID) != '-1':
        print('Pause MP3 ' + mp3path + mp3dict[currentUID])
        if PLAYERS[currentUID] != None:
            PLAYERS[currentUID].pause()
            isPlaying = False
    else:
        print('Error: Pause MP3 ' + str(currentUID))

def stopMP3(currentUID):
    global isPlaying
    if isMP3playing() and str(currentUID) != '-1':
        print('Stop MP3 ' + mp3path + mp3dict[currentUID])
        if PLAYERS[currentUID] != None:
            PLAYERS[currentUID].stop()
            isPlaying = False
    else:
        print('Error: Stop MP3 ' + str(currentUID))

def run():
    global isPlaying, currentUID, lastUID

    a = None  # timestamp of the last successful read
    b = None

    while continue_reading:
        (status, TagType) = MIFAREReader.MFRC522_Request(MIFAREReader.PICC_REQIDL)
        if status == MIFAREReader.MI_OK:
            print('Tag detected')

        # Get the UID of the card
        (status, uid) = MIFAREReader.MFRC522_Anticoll()
        print('Status: ' + str(status) + ' [OK = ' + str(MIFAREReader.MI_OK) + ']')

        if status == MIFAREReader.MI_OK:
            a =
            if not isMP3playing():
                currentUID = str(uid[0]) + '-' + str(uid[1]) + '-' + str(uid[2]) + '-' + str(uid[3])
                print('Current UID: ' + str(currentUID) + ' / Last UID: ' + lastUID)

                if lastUID != currentUID:
                    print('Start playing MP3: ' + str(mp3dict[currentUID]))
                    playMP3(currentUID)

        elif status == MIFAREReader.MI_ERR:
            # check timestamps, this status is detected just after reading a tag successfully
            b =
            if a != None:
                print('Check time delta ' + str(a))
                c = b - a
                print('Time delta: ' + str(c) + ' ' + str(c.seconds))
                if c.seconds == 0:
                    # the tag is probably still lying on the reader
                    print('Do not stop the music')
                else:
                    print('Stop the music')
                    if isMP3playing():
                        stopMP3(lastUID)

try:
    print('My little JukeBox')
    print('Press Ctrl-C to stop.')
    adjustVolume(30) # default is 50
    run()
except KeyboardInterrupt:
    print('Ctrl+C captured, ending read.')
    continue_reading = False


Technically the same techniques described here could be used to play videos on a connected display. Perhaps this is a nice extension of such a project…

Raspberry Pi RFID Jukebox Prototype

Raspberry Pi Home Automation Project: Remote Power Plug Socket Control

From hacking dash buttons it is a small step towards further home automation, here in the sense of remotely controlling power plug sockets. This way a dash button can be used as an additional light switch.
The ingredients for such a project are

Raspberry Pi (Zero W)
Amazon Dash Button
433 MHz receiver and transmitter
Remote controlled power plug sockets (ideally with DIP switches)


On the Raspberry Pi the following libraries are required at least:

sudo pip3 install rpi-rf
sudo pip3 install scapy-python3


433MHz Receiver

Pi (Zero W)   433 MHz Receiver
3.3 V         3.3 V
GPIO 27       Data

433MHz Transmitter

Pi (Zero W)   433 MHz Transmitter
3.3 V         3.3 V
GPIO 17       Data

Power Plug Sockets

To set up the power plug sockets see their manual. The ones with DIP switches should be preferred over those without: DIP switches allow you to set the addresses of the power plug sockets precisely.


First the codes to toggle the power plug sockets are required. These can be read using the example script from the rpi-rf library.

sudo python3

Make a note of the codes for turning the power on and off for each power plug socket. These codes have to be adapted in the python script below. Also required is the MAC address of the dash button to be used as an additional light switch.

from scapy.all import *
import http.client, urllib
from rpi_rf import RFDevice

from time import sleep

# A on: 1234567  A off: 9876543

fileLRL = '/home/pi/lrl_state.txt'  # file storing the last toggle state (path is an assumption)

rfdevice = RFDevice(17)
rfdevice.enable_tx()
protocol = 1
pulselength = 350

def readFile(fileName):
  state = '0'
  try:
    target = open(fileName, 'r')
    state =
    print("Read toggle state: " + str(state))
  except FileNotFoundError:
    writeFile(fileName, state)
  return state

def writeFile(fileName, state):
  print("Store toggle state: " + str(state))
  target = open(fileName, 'w')
  return True

def toggleLight():
  state = readFile(fileLRL)
  print("LRL state " + str(state))
  if state == str(0):
    print("light is currently off, turn it on")
    rfdevice.tx_code(1234567, protocol, pulselength)
    writeFile(fileLRL, 1)
  else:
    print("light is currently on, turn it off")
    rfdevice.tx_code(9876543, protocol, pulselength)
    writeFile(fileLRL, 0)

def arp_detect(pkt):
  if pkt[ARP].op == 1: # network request
    mac = pkt[ARP].hwsrc
    mac = mac.lower()
    ip = pkt[ARP].psrc

    if mac == 'xx:xx:yy:xx:yy:xx': # dash button
      toggleLight()
      return "dash button detected\n"

try:
  print(sniff(prn=arp_detect, filter="arp", store=0))
except KeyboardInterrupt:
  rfdevice.cleanup()

This way the dash button can be used as an additional remote control in parallel to the original remote control of the power plug sockets. This solution cannot match the original remote control's response time, though: there are several steps in between which take their time…

Raspberry Pi Zero, dash button, remote power plug socket

Dash Button Hacks

Quite some time ago Amazon launched the dash buttons. Amazon intends them to be used for ordering everyday products, without even knowing the price before the order is automatically finalized! I don't want to use such a button that way.
In the end a dash button is a relatively cheap WiFi button. The first users quickly found out how to hack them and use them in other contexts. A dash button can be a doorbell, a phone finder, a tool for simple statistics (work started/stopped, …) or it could simply switch the light on.
Here is a short description of how to set up a dash button for alternative uses.


Raspberry Pi (Zero)
Amazon Dash Button

Dash Button Setup

    • Follow the setup descriptions as described here. The trick is to leave the Amazon app directly after having copied the WiFi credentials to the button.
    • Find out the dash button’s IP and MAC address, e.g. by looking at the active devices in your router’s setup.
    • The button will constantly nag in the Amazon app about not being set up completely. Block the button’s internet access in the router’s settings.


A small Python script on a Raspberry Pi "sniffs" the local network for the packets of all devices within the network. If the MAC of a dash button is found, certain actions can be triggered.

Additionally required packages:

sudo pip install scapy
sudo apt-get install tcpdump
from scapy.all import *
import httplib, urllib

def doWhatIWant():
  print "TODO"

def arp_detect(pkt):
  if pkt[ARP].op == 1: # network request
    mac = pkt[ARP].hwsrc
    mac = mac.lower()
    ip = pkt[ARP].psrc
    print "IP: " + str(ip) + ", MAC: " + str(mac) # debug info

    if mac == 'xx:xx:xx:xx:xx:xx': # dash button
      doWhatIWant()
      return "dash button detected\n"
    else:
      print "Unknown: " + str(ip) + ", " + str(mac)
      return "Unknown MAC: " + pkt[ARP].hwsrc

try:
  print sniff(prn=arp_detect, filter="arp", store=0)
except KeyboardInterrupt:
  pass

To run the script automatically after boot simply add a line to /etc/rc.local:

sudo python /home/pi/

Now the dash button is ready to be used for anything else.

To make it a doorbell or a phone finder one could use Pushover. The app is installed on a smartphone; notifications can then be sent to that smartphone via the Pushover API using an API token and a user key.

Example code

def sendNotification(message):
  conn = httplib.HTTPSConnection("")
  conn.request("POST", "/1/messages.json",
    "token": "APItoken",
    "user": "usertoken",
    "message": str(message),
    "sound": "intermission"
  }), { "Content-type": "application/x-www-form-urlencoded" })

Shutdown Switch for a Raspberry Pi – or: too many Legs

A Raspberry Pi lacks a shutdown button; to keep the price down, as I heard.
However, simply cutting the power of a running Pi might corrupt the file system. This is avoidable with a simple button.

In projects that require a Pi running headless (without display) it certainly helps to have a button which triggers a safe shutdown procedure. It might not always be possible to log on to the Pi, run the custom shutdown procedure for the specific project and manually type the shutdown command.

A script triggered by a simple button can do this! The script can trigger the custom shutdown procedure for the project and can turn off the Pi afterwards.

Here’s the description of how it works for me:


Raspberry Pi (Zero) with a running OS
Momentary switch
Some cables



The switch is connected to GND and a free pin next to the GND pin. In this case it is BCM pin 21. When the switch is pressed, an edge is detected. This signal can be used to trigger the desired actions.


Short vs. Long Press

A short and a long press can easily be distinguished using a simple counter in a callback function. The callback function is triggered when an edge is detected on pin 21.

# This script will wait for a button to be pressed and then shutdown or reboot the Raspberry Pi.
# A long press initiates a reboot, a very long press initiates a shutdown.

import time
from time import sleep
import RPi.GPIO as GPIO
import os

GPIN = 21          # BCM pin the switch is connected to
demo_mode = False  # True: only print the commands instead of running them
debug = False
##############END CONFIG######################

GPIO.setmode(GPIO.BCM)

# Pin 21 will be input and will have its pull-up resistor (to 3V3) activated
# so we only need to connect a push button to ground
GPIO.setup(GPIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
int_active = 0

print("Shutdown / Reboot script started.")

# ISR: if our button is pressed, we will have a falling edge on pin 21
# this will trigger the interrupt:
def shutdown_reboot_callback(channel):
    # button is pressed, possibly shutdown or reboot our Raspberry Pi
    global int_active

    # only react when there is no other shutdown process running
    if int_active == 0:
        int_active = 1
        pressed = 1

        # count how long the button is pressed
        counter = 0

        while pressed == 1:
            if GPIO.input(GPIN) == False:
                # button is still pressed
                counter = counter + 1
                if debug:
                    print("pressed: " + str(counter))
                # stop counting beyond 20 (a very long press is a shutdown)
                if counter >= 20:
                    pressed = 0
            else:
                # button has been released
                pressed = 0
            sleep(0.1)

        # button has been released, count cycles and determine action
        if counter < 2:
            # short press, do nothing
            int_active = 0
            if debug:
                print("short press, do nothing")
        else:
            if debug:
                print("else " + str(counter))
            # longer press
            if counter < 10:
                # medium length press, initiate system reboot
                if debug:
                    print("rebooting..")
                # run the reboot command
                if demo_mode:
                    print("sudo reboot now")
                else:
                    os.system("sudo reboot now")
            elif counter >= 10:
                # (very) long press, initiate system shutdown
                if debug:
                    print("shutting down..")
                # run the shutdown command
                if demo_mode:
                    print("sudo shutdown now")
                else:
                    os.system("sudo shutdown now")

# Program pin 21 as an interrupt input:
# it will react on a falling edge and call our interrupt routine "shutdown_reboot_callback"
GPIO.add_event_detect(GPIN, GPIO.FALLING, callback = shutdown_reboot_callback, bouncetime = 500)

while True:
    if debug:
        print(".")
    sleep(1)


To launch the script at startup of the Raspberry Pi place it on the Pi (e.g. in /usr/bin) and add this line to /etc/rc.local just before exit 0:

sudo python /usr/bin/
Shutdown / Reboot switch on a Raspberry Pi Zero

And here is the resolution of the remark on too many legs for the attentive reader: a common momentary switch has four legs and a square footprint. It is therefore a matter of orientation whether the switch works as expected…

A smart Raspberry Pi Zero DIY Text Clock

So this is the project I had in mind when I was experimenting with a NeoPixel strip on a Raspberry Pi Zero. The original text clock was invented a couple of years ago. With its elegant and timeless (yes, literally) design the QLOCKTWO is simultaneously a beautiful and useful piece of art.
It is not that I exactly needed yet another clock – but I got intrigued and wanted to create my own, smart version of a text clock.

Numerous examples of DIY versions and manuals on how to build a text clock are available on the internet. Some manuals involve soldering a lot of LEDs. I wanted to skip this step and went for a NeoPixel strip. In total I calculated 92 NeoPixels: one for each letter that can be lit.

My version of the text clock should not only display the time in a unique way but should also indicate something more, it should be smart! This is why I chose a Raspberry Pi Zero instead of a microcontroller as a base. This way I’m able to easily get more information using a Python script along with some ready-made libraries.

My smart text clock indicates whether I have unread emails in my inbox by changing the colour of the LEDs. If desired the smart text clock is also able to indicate the weather developments depending on the outside temperature or any other criteria. The weather data is taken from OpenWeatherMap, as in former projects.

One could also try to indicate whether a train one needs to catch regularly is on time. Or the smart text clock could be used as a traffic monitor for commuters (similar to this project idea).

Updating the smart text clock every 5 minutes should be precise enough for me. It is definitely more precise than a fuzzy clock which indicates bright and dark only.
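The five-minute precision effectively rounds the current minute to the nearest multiple of five, which can be sketched as:

```python
def nearest_five(minute):
    """Round a minute value (0..59) to the nearest multiple of five (0..60)."""
    return int(round(minute / 5.0) * 5)

print([nearest_five(m) for m in (0, 2, 3, 43, 58)])  # -> [0, 0, 5, 45, 60]
```

The same boundaries (2.5, 7.5, 12.5, …) reappear in the translateTime method of the script further below.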


The hardware list of the last blog entry can be extended by the picture frame which is often used for DIY text clocks. A suitable one is sold by a well-known Swedish furniture store.
Additionally some paper is useful for dispersing the light from the LEDs behind the letters.

Adhesive foil with precisely cut letters can be put on the glass to match the LEDs from the strip. Here I had professional help by friends owning a cutting plotter.

The LED strip is cut and soldered together appropriately to match the letters' positions. The strip is glued with its adhesive back to the picture frame's back plate.

The LEDs are separated by a 3D-printed grid behind the glass. This grid helps to avoid light bleeding between the different letters.

A piece of transparent paper between the glass and the grid makes the letters look smooth. Without it the individual LEDs would be directly visible. A bit of diffusion makes it look better…


A straightforward python script is run automatically every five minutes. First the current time is determined. The time is translated into words with a five minute precision.

The words are mapped to the LED indices from the NeoPixel strip. These are the ones to alight to display the time.

Colour Definition

To determine which colour to use for the alighted LEDs some (optional) checks are built-in:

  • Approximately every hour the weather data is fetched from OpenWeatherMap using the python owm library (pyowm). The temperature is extracted along with the weather code. The results are used for defining the colour of the LEDs. Other parameters can be taken into account as well.
  • The number of unread emails is checked using the Python imap library. If the number is greater than zero the LED color is changed.

During night time the brightness of the LEDs is lowered. That way the smart text clock serves as a convenient night light as well.

Source Code

# -*- coding: cp1252 -*-

import time

import imaplib

import pyowm
import json
import pprint

from neopixel import *


OWM_APYKEY = 'get one from'
OWM_ID = 0 # the city ID number

# file to store weather state
fileName = '/home/pi/weather_state.txt' # path is an assumption

EMAIL_NAME = "username"
EMAIL_PASS = "password"

# LED strip configuration:
LED_COUNT = 92 # Number of LED pixels.
LED_PIN = 18 # GPIO pin connected to the pixels (must support PWM!).
LED_FREQ_HZ = 800000 # LED signal frequency in hertz (usually 800khz)
LED_DMA = 5 # DMA channel to use for generating signal (try 5)
LED_BRIGHTNESS = 128 # Set to 0 for darkest and 255 for brightest
LED_INVERT = False # True to invert the signal (when using NPN transistor level shift)


_start = "IT IS "
_end = " O\'CLOCK"
_numbers = ('ONE', 'TWO', 'THREE', 'FOUR', 'FIVE', 'SIX', 'SEVEN', 'EIGHT', 'NINE', 'TEN', 'ELEVEN', 'TWELVE')
_past = ' PAST '
_to = ' TO '
_fivepast = 'FIVE PAST '
_tenpast = 'TEN PAST '
_aquarter = 'A QUARTER '
_twenty = ' TWENTY'
_twentyfive = ' TWENTYFIVE'
_half = ' HALF'
_fiveto = 'FIVE TO '
_tento = 'TEN TO '

# Letter grid (11 letters per row) and the LED index of each usable letter:
# I T L I S A S T I M E    0,1, 2,3
# A C Q U A R T E R D C    4,5,6,7,8,9,10,11
# T W E N T Y F I V E X    12,13,14,15,16,17, 18,19,20,21
# H A L F B T E N F T O    22,23,24,25, 26,27,28, 29,30
# P A S T E R U N I N E    31,32,33,34, 35,36,37,38
# O N E S I X T H R E E    39,40,41, 42,43,44, 45,46,47,48,49
# F O U R F I V E T W O    50,51,52,53, 54,55,56,57, 58,59,60
# E I G H T E L E V E N    61,62,63,64,65, 66,67,68,69,70,71
# S E V E N T W E L V E    72,73,74,75,76, 77,78,79,80,81,82
# T E N S E O C L O C K    83,84,85, 86,87,88,89,90,91
# map time to precise LED indices
_timeLightMap = {
'IT IS ' : (0,1,2,3),
' HALF' : (22,23,24,25),
' PAST ' : (31,32,33,34),
' TO ' : (29,30),
'FIVE PAST ' : (18,19,20,21, 31,32,33,34),
'TEN PAST ' : (26,27,28, 31,32,33,34),
'A QUARTER ' : (4, 5,6,7,8,9,10,11),
' TWENTY' : (12,13,14,15,16,17),
' TWENTYFIVE' : (12,13,14,15,16,17, 18,19,20,21),
' HALF PAST ' : (22,23,24,25, 31,32,33,34),
' TWENTYFIVE TO ' : (12,13,14,15,16,17, 18,19,20,21, 29,30),
' TWENTY TO ' : (12,13,14,15,16,17, 29,30),
'TEN TO ' : (26,27,28, 29,30),
'FIVE TO ' : (18,19,20,21, 29,30),
'ONE' : (39,40,41),
'TWO' : (58,59,60),
'THREE' : (45,46,47,48,49),
'FOUR' : (50,51,52,53),
'FIVE' : (54,55,56,57),
'SIX' : (42,43,44),
'SEVEN' : (72,73,74,75,76),
'EIGHT' : (61,62,63,64,65),
'NINE' : (35,36,37,38),
'TEN' : (83,84,85),
'ELEVEN' : (66,67,68,69,70,71),
'TWELVE' : (77,78,79,80,81,82),
' O\'CLOCK' : (86,87,88,89,90,91)
}

class SmartTextClock():

    def check_googlemail(self, login, password):
        # return the number of unread emails
        obj = imaplib.IMAP4_SSL('', '993')
        obj.login(login, password)'INBOX')
        nofUnreadMessages = len(, 'UnSeen')[1][0].split())
        print "Unread emails: " + str(nofUnreadMessages)
        return nofUnreadMessages

    def clock(self):
        t = time.strftime("%H:%M")
        print t
        return t

    def translateHour(self, hour, offset):
        # %H yields '00'..'23'; map it to the words ONE..TWELVE.
        # With offset=True the next hour is returned (for the 'to' times).
        h = int(hour) % 12      # 0 stands for twelve o'clock
        if offset:
            h = (h + 1) % 12
        if h == 0:
            return _numbers[11] # TWELVE
        return _numbers[h - 1]

    # time format: HH:mm
    def translateTime(self, time):
        t = time.split(':', 1)
        print t
        h = str(t[0])
        m = str(t[1])
        print h + ":" + m

        indices = (1,2)

        if float(m) >= 0.0 and float(m) <= 2.5:
            indices = _timeLightMap[_start] + _timeLightMap[self.translateHour(h, False)] + _timeLightMap[_end]
        elif float(m) > 2.5 and float(m) <= 7.5:
            indices = _timeLightMap[_start] + _timeLightMap[_fivepast] + _timeLightMap[self.translateHour(h, False)] + _timeLightMap[_end]
        elif float(m) > 7.5 and float(m) <= 12.5:
            indices = _timeLightMap[_start] + _timeLightMap[_tenpast] + _timeLightMap[self.translateHour(h, False)] + _timeLightMap[_end]
        elif float(m) > 12.5 and float(m) <= 17.5:
            indices = _timeLightMap[_start] + _timeLightMap[_aquarter] + _timeLightMap[_past] + _timeLightMap[self.translateHour(h, False)] + _timeLightMap[_end]
        elif float(m) > 17.5 and float(m) <= 22.5:
            indices = _timeLightMap[_start] + _timeLightMap[_twenty] + _timeLightMap[_past] + _timeLightMap[self.translateHour(h, False)] + _timeLightMap[_end]
        elif float(m) > 22.5 and float(m) <= 27.5:
            indices = _timeLightMap[_start] + _timeLightMap[_twentyfive] + _timeLightMap[_past] + _timeLightMap[self.translateHour(h, False)] + _timeLightMap[_end]
        elif float(m) > 27.5 and float(m) <= 32.5:
            indices = _timeLightMap[_start] + _timeLightMap[_half] + _timeLightMap[_past] + _timeLightMap[self.translateHour(h, False)] + _timeLightMap[_end]
        elif float(m) > 32.5 and float(m) <= 37.5:
            indices = _timeLightMap[_start] + _timeLightMap[_twentyfive] + _timeLightMap[_to] + _timeLightMap[self.translateHour(h, True)] + _timeLightMap[_end]
        elif float(m) > 37.5 and float(m) <= 42.5:
            indices = _timeLightMap[_start] + _timeLightMap[_twenty] + _timeLightMap[_to] + _timeLightMap[self.translateHour(h, True)] + _timeLightMap[_end]
        elif float(m) > 42.5 and float(m) <= 47.5:
            indices = _timeLightMap[_start] + _timeLightMap[_aquarter] + _timeLightMap[_to] + _timeLightMap[self.translateHour(h, True)] + _timeLightMap[_end]
        elif float(m) > 47.5 and float(m) <= 52.5:
            indices = _timeLightMap[_start] + _timeLightMap[_tento] + _timeLightMap[self.translateHour(h, True)] + _timeLightMap[_end]
        elif float(m) > 52.5 and float(m) <= 57.5:
            indices = _timeLightMap[_start] + _timeLightMap[_fiveto] + _timeLightMap[self.translateHour(h, True)] + _timeLightMap[_end]
        elif float(m) > 57.5:
            indices = _timeLightMap[_start] + _timeLightMap[self.translateHour(h, True)] + _timeLightMap[_end]
        return indices

    def selectColor(self, weatherCondition):
        # Email: Lime Green 50-205-50
        color = Color(255, 222, 173)
        # weatherCondition is one of 'good', 'fair', 'bad'
        if weatherCondition == 'fair':
            color = Color(255, 222, 173) # Navajo White 255-222-173 # Lemon Chiffon 255-250-205
        if weatherCondition == 'good':
            color = Color(255, 127, 80) # Coral 255-127-80 # Light Salmon 255-160-122
        if weatherCondition == 'bad':
            color = Color(70, 130, 180) # Steel Blue 70-130-180
        return color

    def getWeatherFromOWM(self):
        owm = pyowm.OWM(OWM_APYKEY, version='2.5')
        # Search for current weather
        print "Weather @ID"
        obs = owm.weather_at_id(OWM_ID)
        w1 = obs.get_weather()
        print w1.get_status()

        weatherCondition = 'fair' # 'good', 'fair', 'bad'

        # get general meaning for weather codes
        weatherCode = w1.get_weather_code()
        print weatherCode
        print w1.get_sunset_time('iso')
        temperature = w1.get_temperature('celsius')['temp']
        print str(temperature) + " C"

        # simple: judge weather on temperature
        if temperature <= 10.0:
            weatherCondition = 'bad'
        if temperature > 10.0 and temperature < 20.0:
            weatherCondition = 'fair'
        if temperature >= 20.0 and temperature < 35.0:
            weatherCondition = 'good'
        if temperature >= 35.0:
            weatherCondition = 'bad'

        return weatherCondition

    def readSavedWeatherCondition(self):
        weatherCondition = 'fair' # default
        try:
            print 'Read file ' + fileName
            target = open(fileName, 'r')
            weatherCondition =
            print weatherCondition
        except IOError:
            print fileName + " does not exist yet. Creating a default file."
            self.saveWeatherConditionToFile(weatherCondition)
        return weatherCondition

    def saveWeatherConditionToFile(self, weatherCondition):
        try:
            print 'Write file ' + fileName
            target = open(fileName, 'w')
        except IOError:
            print 'File ' + fileName + ' could not be written.'

    # Note: the strip expects GRB channel order, hence the comments below
    #strip.setPixelColor(i, Color(0,0,120)) # B
    #strip.setPixelColor(i, Color(0,120,0)) # R
    #strip.setPixelColor(i, Color(120,0,0)) # G
    def alight(self, LEDindices, color):
        print 'Alight indices ' + str(LEDindices)
        for i in LEDindices:
            strip.setPixelColor(i, color)

if __name__ == "__main__":
    app = SmartTextClock()

    # check for unread emails
    unreadEmails = app.check_googlemail(EMAIL_NAME, EMAIL_PASS)

    # get time and determine LED indices
    now = app.clock()
    indices = app.translateTime(now)
    print "Indices " + str(indices)

    # investigate weather data
    weatherCondition = app.readSavedWeatherCondition()

    # update weather data every hour
    theTime = now.split(":")
    # this minute range should be met from time to time
    if int(theTime[1]) >= 0 and int(theTime[1]) <= 7:
        weatherCondition = app.getWeatherFromOWM()
        app.saveWeatherConditionToFile(weatherCondition)

    # create NeoPixel object with appropriate configuration
    strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS)
    # intialize the library (must be called once before other functions)
    strip.begin()

    # low light during 19-8 o'clock
    if int(theTime[0]) < 8 or int(theTime[0]) > 19:
        strip.setBrightness(16)

    stripColor = Color(120,120,120)

    # select color depending on weather condition
    if weatherCondition == 'bad':
        stripColor = Color(0, 120, 0)
    if weatherCondition == 'fair':
        stripColor = Color(120, 120, 120)
    if weatherCondition == 'good':
        stripColor = Color(0,0,120)

    if unreadEmails > 0:
        stripColor = Color(205,50,50)

    app.alight(indices, stripColor)


Audio Book: Raspberry Pi Zero Internet Radio with pHAT BEAT

A couple of days ago I laid my hands on a pHAT BEAT and two small speakers. Together with a Raspberry Pi Zero (and an Internet connection, of course) this makes building an internet radio easy. And yes, inspiration for this project was also the Pirate Radio Kit.
The pHAT BEAT comes with stereo output, an amplifier, a couple of buttons for adjusting the volume, play/pause, forward/backward and powering off, and a number of bright and shiny LEDs. Just the perfect audio hardware component for an internet radio.


Raspberry Pi Zero with Micro SD card and up-to-date OS
USB WiFi stick (not needed if a Raspberry Pi Zero W is used)
small speakers
some cables
USB power supply

Assemble the hardware as required. This implies some soldering for the headers of the Raspberry Pi Zero and the pHAT BEAT as well as the connections to the speakers. This tutorial is a good guideline to see what to do.


Once the Raspberry Pi Zero is accessible headless in the local WLAN network (see this blog post for setup instructions) install the pHAT BEAT Python library.

Luckily, the software for an internet radio project already exists. The setup is as simple as running the provided setup script, which installs the required software and adjusts the whole configuration on the Raspberry Pi Zero. See the project’s documentation for further reference.

Once the installation is complete, reboot. After reboot the internet radio will be automatically started and will play some example music.

The pHAT BEAT’s buttons directly work with the example project’s software. Adjusting the volume or switching between different items on a configurable playlist (see configuration below) is directly possible. Even the off button works immediately: it turns off the radio and fully shuts down the Raspberry Pi Zero.


Configure Internet Radio Streams

Collect the URLs of your favourite internet radio streams. Create the file /home/pi/.config/vlc/playlist.m3u. Insert the URLs into the playlist as in this example:

Example playlist.m3u
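The file itself is just a list of stream URLs, optionally preceded by an #EXTM3U header. A minimal sketch with placeholder URLs (substitute the real stream addresses of your stations):

```
#EXTM3U
http://streams.example.com/station-one.mp3
http://streams.example.com/station-two.mp3
http://streams.example.com/station-three.mp3
```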

Alternatively create a playlist containing the radio stream URLs of your choice in VLC and save the playlist to a file. This file can be copied to the Raspberry Pi Zero to /home/pi/.config/vlc/playlist.m3u.

After reboot the forward/backward buttons of the pHAT BEAT can be used to switch between the different internet radio streams.

Wrapping: The Result

The wrapping was simple in this case: an old book became a nice „audio book“! Similar to my ‚book book shelves‘, an old book is hollowed out inside with a sharp knife so the hardware fits in.
The sound of the speakers inside the book is surprisingly good!
All I need now is to find a way to operate the small buttons of the pHAT BEAT…

Info & Links


NeoPixels Strip on Raspberry Pi Zero

Looking into my desk’s drawer I found the remainder of an Adafruit NeoPixel strip I used in another project. And an unused Raspberry Pi Zero from last year. Does that work together? Well, yes, it does! At least after fiddling a bit with hard- and software and circumventing some common traps.

Searching the web I found a tutorial for driving a NeoPixel strip with a first generation Raspberry Pi. Technically it should also work with a more recent model, but initially it did not.
Here is the description of how it all worked out in the end:


Raspberry Pi Zero with up-to-date Raspbian Jessie Pixel
Mini USB WiFi Adapter (if the brand new Raspberry Pi Zero W is not used)
Raspberry Pi Zero adapter cables + power supply
Adafruit NeoPixel strip
1000 μF capacitor
330 Ω resistor
1N4001 diode
5 V breadboard power supply
breadboard, cables


  • 5V power supply GND : 1000 μF capacitor (short leg)
  • 5V power supply 5V : 1000 μF capacitor
  • 5V power supply GND : NeoPixel strip GND
  • 5V power supply 5V : NeoPixel strip 5V via 1N4001 diode (side with stripe goes to 5V input of the strip)
  • 5V power supply GND : Raspberry Pi Zero GND (physical pin 6)
  • Raspberry Pi Zero (physical pin 12) : NeoPixel strip data line via 330 Ω resistor

The available pins of the Raspberry Pi Zero are listed here. WiringPi pin #1 corresponds to physical pin 12, which is BCM #18. The latter is used in the Python software.
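As a memory aid, here are the three numbering schemes for the data pin, taken from the mapping above (the dictionary itself is just an illustration, not part of the project code):

```python
# three ways to refer to the same data pin on the Raspberry Pi Zero header
data_pin = {
    "physical": 12,  # position on the 40-pin header
    "bcm": 18,       # Broadcom/GPIO number, used by the NeoPixel library
    "wiringpi": 1,   # numbering used by the `gpio readall` tool
}
print(data_pin["bcm"])  # 18
```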


It is not recommended to power the NeoPixel strip directly from the 5V output of the Raspberry Pi Zero. The pixels might draw too much current and could damage the pin. It would have been way too convenient… so: an additional 5V power supply is strongly recommended.


Running Headless: Setting up WiFi

To run the Raspberry Pi Zero headless (without display), set up the WiFi connection first. For this step an HDMI display and a keyboard are required. Open the file

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Put the network configuration at the end of the file:

network={
    ssid="WiFi network name"
    psk="WiFi password"
}

Use raspi-config to allow SSH connections and to adjust the Pi’s hostname, the password, the time settings etc.

SSH to Zero

When attaching the Mini USB WiFi Adapter instead of the keyboard and rebooting, the Raspberry Pi Zero connects to the desired WiFi network and it is possible to SSH to it. To find the IP address in the local network, check which devices are logged into the network at your router’s access point. Or kindly ask your network admin to check. 😉

GPIO Checks

To see the available GPIO pins on the Raspberry Pi Zero run

gpio readall

NeoPixel Python Library

To set up the Python library for driving NeoPixels on a Raspberry follow this tutorial. Jeremy Garff’s Python library for NeoPixels is working like a charm.

Disabling Audio

To be able to use the PWM pins as data pins for the NeoPixel strip I disabled audio by commenting the line

# Enable audio (loads snd_bcm2835)
#dtparam=audio=on  # disable audio for PWM pin usage

in the file /boot/config.txt.

Whether audio is disabled can be checked using

aplay -l

If audio is disabled properly, the result is an error message („aplay: device_list:268: no soundcards found…“).


Once the NeoPixel library is set up and the hardware is connected properly, run one of the examples from the rpi_ws281x/python/examples section.


That’s it! The NeoPixel strip finally can be driven by a Raspberry Pi Zero.


While this example is working I definitively have a new project in mind…


Making a Raspberry Pi speak: Alexa

Speaking with devices (and making them answer or do something) seems to be a trend of the time. Some up-to-date smartphones and tablets allow using speech to trigger internet searches, to write short messages or e-mails (sometimes with funny results), to ask for something in the region, to turn on the smartphone’s light, … .

In addition to voice control on smartphones, well-known companies have started to launch devices that enable voice control @home. However, the commercial solutions perform speech recognition on their own servers, AFAIK due to the computing power requirements of the AI behind them.
In this case one has to live with the fact that a constant internet connection is inevitable and that one’s own voice samples are uploaded somewhere else for analysis.

Still, speech control can be extremely useful. My favourite example for illustration is setting a timer while being busy with something else.
In a smart home there are many more applications for speech control: light, heating, media, … . Even for elderly, handicapped or visually impaired people, controlling everyday procedures by voice can be a huge advantage in daily life.

Open Source Solution: Jasper

The open source solution for voice control, Jasper, offers the possibility to work offline, but the setup of the software is not trivial. It looks like the manual is outdated, and some experimental but required libraries are no longer easy to find. This is why I turned to the API of a commercial solution to play with speech recognition on my Raspberry Pi 3.

At the moment speech recognition devices such as Amazon’s Alexa are not sold everywhere yet. It is possible to order them in Europe, but they are not shipped yet. As rumour has it: regions in which stronger accents are spoken are served first. 🙂

Amazon’s Alexa

The voice service that is used by Amazon’s Alexa devices can be tested relatively easily on a Raspberry Pi 3. For a couple of weeks now, wake word detection has been possible on the Raspberry Pi 3 as well.


Raspberry Pi 3 (incl. power supply, display, keyboard and mouse for setup)

USB microphone

Non-bluetooth speaker


Amazon Developer Account Settings

An Amazon developer account is required for using the voice service. The registration is free. After the registration an Alexa device has to be created along with security and web settings. On this page the required steps are explained. Save the client ID and secret for later.

Raspberry Pi

This github project contains the required installation software for download:

git clone

The setup of the software is performed running the automated_install shell script. It has to be completed with the product name, client ID and secret. The script guides through the configuration and setup.

After successful installation the companion service, the AVS client and the desired wake word agent have to be launched in three separate terminals.

The AVS client requires authorization by signing in using the Amazon developer account. On request the default browser is opened and Alexa is ready to listen in after the confirmation.

Playing around

On the Raspberry Pi Alexa starts to listen more closely either on the push of a button or by hearing the wake word ‚Alexa‘. It confirms with a sound that it is listening. The next spoken words (should be English) are then analyzed. A longer break between words marks the end of the sentence.
Alexa’s answers are returned quickly! Out of the box it is possible to ask for the current weather at a specific location, to ask for a joke, to convert units, to look up something on Wikipedia, etc. Alexa can be connected to a calendar, it can calculate and it knows its „birthday“ (being the day it was first sold). That’s not all…

My low-cost microphone in combination with Alexa was a surprise. The first tests on various operating systems were devastating: I had to speak from a distance of 1 cm to be heard at all, independent of the recording settings. I took it as some kind of safety precaution that I had to be close to the microphone to use speech recognition… but Alexa immediately worked from a distance of 2 m as well. It felt a bit slower, but still, it worked…

When I played the video recording of my running system telling a joke, it just started itself again when hearing the wake word from the video! It has already been shown that infinite loops of voice control can be set up easily. Alexa might also react to its wake word spoken on TV, as recently learnt from the Verge’s doll house article!

Alexa is extensible with custom skills for own applications. Perhaps this is the thing to try next.


Raspberry Pi 3 Gesture controlled digital Picture Frame

The first time I realised the possibilities of gesture controlled devices I was dazzled. It was the Kickstarter project of a smart bike assistant, the Haiku. The Haiku’s idea is to use flick gestures to switch between functions or to handle notifications, independent of possibly covered fingertips. No direct touching is required!

How simple is it to steer something without touching? Without taking off gloves or anything else that hampers control, as it might with common touch sensitive devices such as smartphones or tablets? Without leaving fingerprints or accidentally painting the unlock pattern on a smudgy touchscreen (take a look at a smartphone in grazing light!)?
Gesture control sounds highly convenient and probably safer than touch control to me. Although one could accidentally trigger something by moving a bit too close to a gesture control input. That is truly a side effect…

During a hacking night with fellow workers I learnt about Pimoroni’s skywriter. Attachable to either an Arduino controller or a Raspberry Pi, the skywriter recognises gestures such as flicks and taps and provides X/Y/Z 3D position sensing within a range of 15 cm. I ordered one to test it for another project… so why not use a skywriter to flick through the photographs in a directory? Displaying images in a digital picture frame with a convenient input method.

Components used

Raspberry Pi 3

Raspberry Pi 7″ Touch Screen (any other monitor for a Pi will do)

Skywriter breakout or HAT


The wiring is described in pimoroni’s github repository. The Raspberry Pi 3 pinouts are described here.



At pimoroni’s github repository some very good examples can be found for using the skywriter either on an Arduino controller or on a Raspberry. On the Raspberry the required libraries have to be installed, e.g. with the shell command given in the readme.

For the Raspberry I started with the Python example. Each recognised gesture and a move’s coordinates are printed on the console. The print statement in the move() method obscures the results of the recognised gestures, so it can be commented out.

Python’s TKinter toolkit is used to display pictures in a window. The TKinter mainloop runs in a separate thread.
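Stripped of all GUI details, the thread pattern is: subclass threading.Thread, call the base constructor, put the long-running work into run(), and launch it with start(). A minimal, display-free sketch (the class and attribute names here are illustrative only):

```python
import threading

class BackgroundLoop(threading.Thread):
    """Illustrative stand-in for the thread that runs the Tkinter mainloop."""

    def __init__(self):
        threading.Thread.__init__(self)  # must be called before start()
        self.ticks = 0

    def run(self):
        # in the picture frame script, root.mainloop() runs here instead
        for _ in range(5):
            self.ticks += 1

loop = BackgroundLoop()
loop.start()  # executes run() in a separate thread
loop.join()   # the real script instead keeps the main thread for gestures
print(loop.ticks)  # 5
```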


The usage should be natural and intuitive.

  • Flicking from left to right will display the next image in the directory.
  • Flicking from right to left will switch back to the previous image.
  • Tapping into the centre of the skywriter will close the program.
  • Tapping onto the lower end will minimise the image window.
  • Guess how to maximise the image window again!

This is the whole Python script to move through (holiday) pictures using gesture control on the digital picture frame:


"""
Switch between different images in a directory using the skywriter.

Swipe west to east (left to right): display next image in directory
Swipe south to north: display next image in directory
Swipe east to west (right to left): display previous image in directory
Swipe north to south: display previous image in directory

Tap the skywriter's south end to minimize the image window
Tap the skywriter's north end to maximize the image window

Press CTRL+C, ESC or tap the skywriter's center to exit.
"""
# use a Tkinter label as a panel/frame with a background image
# note that Tkinter only reads gif and ppm images
# use the Python Image Library (PIL) for other image formats
# give Tkinter a namespace to avoid conflicts with PIL
# (they both have a class named Image)

import Tkinter as tk
from PIL import Image, ImageTk
from ttk import Frame, Button, Style
import pygtk
pygtk.require('2.0')
import gtk
import time

import sys
import os
import signal
import skywriter
import threading
import random

pathToPictures = "/home/pi/Desktop/images/"

class ImageDisplay(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.root = None

    def callback(self):
        self.root.quit()

    def run(self):
        if self.root == None:
            self.root = tk.Tk()
            self.root.title('My Photographs')
            self.root.mainloop()

    def setImageName(self, name):
        self.imageName = name

    def showImage(self, path):
        self.original = Image.open(path)

        # make the root window the size of the screen
        screen_size = getScreenSize()
        self.root.geometry("%dx%d+%d+%d" % (screen_size["width"], screen_size["height"], 0, 0))
        #self.root.attributes("-fullscreen", False)
        #self.root.geometry("{0}x{1}+0+0".format(self.root.winfo_screenwidth(), self.root.winfo_screenheight()))
        self.root.focus_set()  # <-- move focus to this widget
        self.root.bind("<Escape>", lambda e: e.widget.quit())

        self.resized = self.original.resize((screen_size["width"], screen_size["height"]), Image.ANTIALIAS)
        self.image = ImageTk.PhotoImage(self.resized) # keep a reference, prevent GC

        # root has no image argument, so use a label as a panel
        self.panel1 = tk.Label(self.root, image = self.image)
        self.display = self.image
        self.panel1.pack(side=tk.TOP, fill=tk.BOTH, expand=tk.YES)
        print "Display image " + path

    def updateImage(self, path):
        self.original = Image.open(path)
        # resize to the screen size
        screen_size = getScreenSize()
        self.root.geometry("%dx%d+%d+%d" % (screen_size["width"], screen_size["height"], 0, 0))

        self.resized = self.original.resize((screen_size["width"], screen_size["height"]), Image.ANTIALIAS)
        self.image = ImageTk.PhotoImage(self.resized) # keep a reference, prevent GC

        # swap the image shown on the label panel
        self.panel1.configure(image = self.image)
        self.display = self.image
        print "Display image " + path

    def minimize(self):
        self.root.iconify()

    def maximize(self):
        self.root.deiconify()

    def stopThread(self):
        self.do_run = False  # stop thread

def getScreenSize():
    window = gtk.Window()
    screen = window.get_screen()
    print "width = " + str(screen.get_width()) + ", height = " + str(screen.get_height())
    screen_size = {}
    screen_size["width"] = screen.get_width()
    screen_size["height"] = screen.get_height()
    return screen_size

def findImages(directory):
    imageList = []
    for file in os.listdir(directory):
        if file.endswith(('.jpg','.JPG','.jpeg','.JPEG')):
            imageList.append(file)
    return imageList

def increaseIndex():
    global index, images
    index += 1
    # start again with index 0
    if index >= len(images):
        index = 0

def decreaseIndex():
    global index, images
    index -= 1
    # start again with index max
    if index < 0:
        index = len(images) - 1

def nextImage():
    global imageDisplay, images, index, pathToPictures
    increaseIndex()
    print "image " + images[index] + ", index=" + str(index) + "(" + str(len(images)) + ")"
    imageDisplay.updateImage(pathToPictures + images[index])

def previousImage():
    global imageDisplay, images, index, pathToPictures
    decreaseIndex()
    print "image " + images[index] + ", index=" + str(index) + "(" + str(len(images)) + ")"
    imageDisplay.updateImage(pathToPictures + images[index])

#---detect gestures on skywriter---#
@skywriter.flick()
def flick(start, finish):
    print 'Got a flick!', start, finish
    if (start == "west" and finish == "east") or (start == "south" and finish == "north"):
        print "Display next image in directory"
        nextImage()
    if (start == "east" and finish == "west") or (start == "north" and finish == "south"):
        print "Display previous image in directory"
        previousImage()

@skywriter.touch()
def touch(position):
    print 'Touched!', position
    if (position == "center"):
        print "Exit image display"
        imageDisplay.callback()
    if (position == "south"):
        print "minimize image window"
        imageDisplay.minimize()
    if (position == "north"):
        print "maximize image window"
        imageDisplay.maximize()

# parse picture folder
images = findImages(pathToPictures)

# reset index
index = 0

# launch image window as thread
imageDisplay = ImageDisplay()

def main():
    global imageDisplay, pathToPictures
    try:
        print "Skywriter image display launched"
        print "Images found: "
        for i in images:
            print i
        imageDisplay.start()
        time.sleep(1)  # give the Tkinter window thread time to come up
        imageDisplay.showImage(pathToPictures + images[index])
        signal.pause()  # wait for gesture callbacks
    except KeyboardInterrupt:
        print "Exit"
        imageDisplay.stopThread()

if __name__ == '__main__':
    main()

Gesture controlled Picture Frame