Raspberry Pi 3 JukeBox with RFID Music Selection and Gesture Control

A couple of weeks ago, friends mentioned a nice project idea. Their daughter is quite young and already a huge music fan. Until she is old enough to use CDs or small MP3 players (or whatever is en vogue by then), she could use RFID-tagged objects to select her choice of music in a simple way.
The idea of an RFID-controlled Raspberry Pi 3 music player is not new. Several examples, like this cool-looking music robot, already exist. So here I want to add the description of my little prototype JukeBox, which uses gesture control to adjust the volume.


Raspberry Pi 3 with Raspbian Jessie
Speaker with 3.5 mm jack
RFID Reader and Cards/Tags
APDS-9960 Gesture Control Chip


RFID Reader

RFID Reader  RPi Pin #  Pin Name
IRQ          none       not connected
GND          any        any Ground pin
3.3V         1          3.3V

Gesture Control Sensor

Board Pin Name Remarks Pin # RPi Function
1 VIN +3.3V Power 1 3.3V
2 GND Ground 6 GND
3 SCL Clock 5 BCM 3 (SCL)
4 SDA Data 3 BCM 2 (SDA)
5 INT Interrupt 16 BCM 23

Setting up the Sound

sudo raspi-config

In the advanced options select the audio settings and set the audio output to the 3.5 mm audio jack.
In /boot/config.txt the parameter dtparam=audio=on must not be commented out.


Adjusting the volume via the command line is possible with

amixer cset numid=1 -- 80%

See this page for more information on using audio on a Raspberry Pi.
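The same call can be scripted. A minimal sketch, assuming the step and clamping behaviour (the numid=1 control is the volume control used in the command above; the helper only builds the command string, executing it on the Pi is left as a comment):

```python
import os

def set_volume(percent):
    """Clamp to the 0..100 % range and build the amixer call shown above."""
    percent = max(0, min(100, percent))
    cmd = 'amixer cset numid=1 -- ' + str(percent) + '%'
    return cmd

print(set_volume(80))   # amixer cset numid=1 -- 80%
print(set_volume(120))  # clamped: amixer cset numid=1 -- 100%
# on the Pi: os.system(set_volume(80))
```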


A simple Python script launched from /etc/rc.local controls the music being played. It uses the MFRC522 library for reading RFID tags and the VLC Python bindings for playing music.


To use the MFRC522 Python library, first enable SPI and install the SPI library.

sudo raspi-config
# Interfacing Options > P4 SPI > enable

sudo apt-get install git python-dev --yes
git clone https://github.com/lthiery/SPI-Py.git
cd SPI-Py
sudo python setup.py install
cd ..
git clone https://github.com/mxgxw/MFRC522-python.git

Once the libraries are installed RFID tags can be read with the example program:

cd MFRC522-python
sudo python Read.py

These RFID tag IDs are used in the example python script below to select the desired MP3s.
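For illustration, the mapping could look like this sketch (the UIDs are the placeholder values from the script below, not real tag IDs; replace them with the IDs printed by Read.py):

```python
# hypothetical tag UIDs -- replace with the IDs printed by Read.py
mp3dict = {
    '123-234-456-678': 'A.mp3',
    '123-234-456-679': 'B.mp3',
}

def track_for_tag(uid_bytes):
    """Join the UID bytes the way the JukeBox script does and look them up."""
    uid = '-'.join(str(b) for b in uid_bytes)
    return mp3dict.get(uid)  # None for unknown tags

print(track_for_tag([123, 234, 456, 678]))  # -> A.mp3
```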


For playing MP3s I used the VLC Python bindings. Numerous other possibilities exist as well, but I chose VLC because of its documented API. The Python binding can be found in the VLC git repository. Simply place the file vlc.py next to your own Python script.

Gesture Control with APDS-9960

For detecting gestures with the APDS-9960 sensor I found these sources on github:



The first repository provides a setup script for the library. The second repository contains an example Python script for detecting gestures.

Python Scripts

Adjusting the Volume with Gesture Control

A simple way of adjusting the volume is to run a Python script dedicated to detecting gestures in the background. Such a script can be launched from /etc/rc.local. The volume is adjusted with a system call.

import os
import time

from apds9960.const import *
from apds9960 import APDS9960
import RPi.GPIO as GPIO
import smbus

port = 1
bus = smbus.SMBus(port)
apds = APDS9960(bus)

def intH(channel):
    print('INTERRUPT')

GPIO.setmode(GPIO.BOARD)
GPIO.setup(7, GPIO.IN)

dirs = {
    APDS9960_DIR_NONE: "none",
    APDS9960_DIR_LEFT: "left",
    APDS9960_DIR_RIGHT: "right",
    APDS9960_DIR_UP: "up",
    APDS9960_DIR_DOWN: "down",
    APDS9960_DIR_NEAR: "near",
    APDS9960_DIR_FAR: "far",
}

volume = 50   # 0..100 %

def adjustVolume(value):
    global volume
    volume += value
    if volume < 0:
        volume = 0
    elif volume > 100:
        volume = 100
    print('Adjust volume to ' + str(volume) + ' %')
    cmd = 'amixer cset numid=1 -- ' + str(volume) + '%'
    os.system(cmd)

def run():
    # Add interrupt event: falling edge on the INT pin
    GPIO.add_event_detect(7, GPIO.FALLING, callback=intH)
    apds.enableGestureSensor()

    while True:
        time.sleep(0.5)
        if apds.isGestureAvailable():
            motion = apds.readGesture()
            gesture = dirs.get(motion, "unknown")

            if gesture == 'up':
                adjustVolume(10)    # step size of 10 % per gesture
            elif gesture == 'down':
                adjustVolume(-10)

try:
    print('Gesture Control')
    print('Press Ctrl-C to stop.')
    run()
except KeyboardInterrupt:
    print('Ctrl+C captured, ending read.')
finally:
    GPIO.cleanup()

Playing Music with RFID Tags

import vlc
import RPi.GPIO as GPIO
import MFRC522
import datetime
import os
import time

MIFAREReader = MFRC522.MFRC522()

mp3path = '/home/pi/Music/'
mp3dict = {
    '123-234-456-678': 'A.mp3',    # tag 1
    '123-234-456-679': 'B.mp3',    # card
    '123-234-456-670': 'C.mp3'     # tag 2
}

PLAYERS = {}          # one VLC player per tag UID
isPlaying = False
continue_reading = True
currentUID = '-1'
lastUID = '-1'

volume = 50  # 0..100 %

def adjustVolume(value):
    global volume
    volume += value
    if volume < 0:
        volume = 0
    elif volume > 100:
        volume = 100
    print('Adjust volume to ' + str(volume) + ' %')
    cmd = 'amixer cset numid=1 -- ' + str(volume) + '%'
    os.system(cmd)

def isMP3playing():
    print('Is MP3 playing? ' + str(isPlaying))
    return isPlaying

def playMP3(currentUID):
    global isPlaying, lastUID
    if not isMP3playing() and str(currentUID) != '-1':
        print('Play MP3 ' + mp3path + mp3dict[currentUID])
        p = vlc.MediaPlayer(mp3path + mp3dict[currentUID])
        p.play()
        lastUID = currentUID
        PLAYERS[currentUID] = p
        isPlaying = True
        print('Playing: ' + str(lastUID))
    else:
        print('Error: Play MP3 ' + str(currentUID))

def pauseMP3(currentUID):
    global isPlaying
    if isMP3playing() and str(currentUID) != '-1':
        print('Pause MP3 ' + mp3path + mp3dict[currentUID])
        if PLAYERS.get(currentUID) is not None:
            PLAYERS[currentUID].pause()
            isPlaying = False
    else:
        print('Error: Pause MP3 ' + str(currentUID))

def stopMP3(currentUID):
    global isPlaying
    if isMP3playing() and str(currentUID) != '-1':
        print('Stop MP3 ' + mp3path + mp3dict[currentUID])
        if PLAYERS.get(currentUID) is not None:
            PLAYERS[currentUID].stop()
            isPlaying = False
    else:
        print('Error: Stop MP3 ' + str(currentUID))

def run():
    global currentUID, lastUID
    a = None
    b = None

    while continue_reading:
        (status, TagType) = MIFAREReader.MFRC522_Request(MIFAREReader.PICC_REQIDL)
        if status == MIFAREReader.MI_OK:
            print('Tag detected')

        # Get the UID of the card
        (status, uid) = MIFAREReader.MFRC522_Anticoll()
        print('Status: ' + str(status) + ' [OK = ' + str(MIFAREReader.MI_OK) + ']')

        if status == MIFAREReader.MI_OK:
            a = datetime.datetime.now()
            if isMP3playing() == False:
                currentUID = str(uid[0]) + '-' + str(uid[1]) + '-' + str(uid[2]) + '-' + str(uid[3])
                print('Current UID: ' + str(currentUID) + ' / Last UID: ' + lastUID)

                if lastUID != currentUID:
                    print('Start playing MP3: ' + str(mp3dict[currentUID]))
                    playMP3(currentUID)

        elif status == MIFAREReader.MI_ERR:
            # check timestamps: this status is detected just after reading a tag successfully
            b = datetime.datetime.now()
            if a is not None:
                c = b - a
                print('Time delta: ' + str(c) + ' ' + str(c.seconds))
                if c.seconds == 0:
                    # the tag is probably still on the reader
                    print('Do not stop the music')
                else:
                    # the tag has been removed
                    print('Stop the music')
                    if isMP3playing() == True:
                        stopMP3(currentUID)

try:
    print('My little JukeBox')
    print('Press Ctrl-C to stop.')
    adjustVolume(30)  # default is 50
    run()
except KeyboardInterrupt:
    print('Ctrl+C captured, ending read.')
    continue_reading = False


Technically the same techniques described here could be used to play videos on a connected display. Perhaps this is a nice extension of such a project…

Raspberry Pi RFID Jukebox Prototype

Raspberry Pi Home Automation Project: Remote Power Plug Socket Control

From hacking dash buttons it is a small step towards further home automation, in the sense of remotely controlling power plug sockets. This way, a dash button can be used as an additional light switch.
The ingredients for such a project are

Raspberry Pi (Zero W)
Amazon Dash Button
433 MHz receiver and transmitter
Remote controlled power plug sockets (ideally with DIP switches)


On the Raspberry Pi, at least the following libraries are required:

sudo pip3 install rpi-rf     # https://github.com/milaq/rpi-rf
sudo pip3 install scapy-python3   # https://github.com/phaethon/scapy


433MHz Receiver

Pi (Zero W)  433 MHz Receiver
3.3 V        3.3 V
GPIO 27      Data

433MHz Transmitter

Pi (Zero W)  433 MHz Transmitter
3.3 V        3.3 V
GPIO 17      Data

Power Plug Sockets

To set up the power plug sockets, see their manual. Sockets with DIP switches should be preferred over those without: DIP switches allow you to select the addresses of the power plug sockets precisely.


First the codes to toggle the power plug sockets are required. These can be read using the example script from the rpi-rf library.

sudo python3 rcv.py

Make a note of the codes for turning the power on and off for each power plug socket. The codes have to be adapted in the Python script below. Also required is the MAC address of the dash button to be used as an additional light switch.
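As a sketch, the recorded codes could be kept in a small lookup table. The code values here are the placeholders from the script below, not real codes; replace them with the ones printed by rcv.py:

```python
# hypothetical on/off codes read with rcv.py -- replace with your own
SOCKET_CODES = {
    'A': {'on': 1234567, 'off': 9876543},
}

def code_for(socket, turn_on):
    """Return the RF code to transmit for the given socket and target state."""
    return SOCKET_CODES[socket]['on' if turn_on else 'off']

print(code_for('A', True))   # -> 1234567
print(code_for('A', False))  # -> 9876543
```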

from scapy.all import *
import http.client, urllib
from rpi_rf import RFDevice

from time import sleep

# A on: 1234567, A off: 9876543

fileLRL = '/home/pi/toggle_state.txt'   # file storing the last known light state

rfdevice = RFDevice(17)
rfdevice.enable_tx()
protocol = 1
pulselength = 350

def readFile(fileName):
  state = '0'
  try:
    target = open(fileName, 'r')
    state = target.read()
    target.close()
    print("Read toggle state: " + str(state))
  except FileNotFoundError:
    writeFile(fileName, state)
  return state

def writeFile(fileName, state):
  print("Store toggle state: " + str(state))
  target = open(fileName, 'w')
  target.write(str(state))
  target.close()
  return True

def toggleLight():
  state = readFile(fileLRL)
  print("LRL state " + str(state))
  if state == str(0):
    print("light is currently off, turn it on")
    rfdevice.tx_code(1234567, protocol, pulselength)
    writeFile(fileLRL, 1)
  else:
    print("light is currently on, turn it off")
    rfdevice.tx_code(9876543, protocol, pulselength)
    writeFile(fileLRL, 0)

def arp_detect(pkt):
  if pkt[ARP].op == 1: # network request
    mac = pkt[ARP].hwsrc.lower()
    ip = pkt[ARP].psrc

    if mac == 'xx:xx:yy:xx:yy:xx': # dash button
      toggleLight()
      return "dash button detected\n"

try:
  print( sniff(prn=arp_detect, filter="arp", store=0))
except KeyboardInterrupt:
  rfdevice.cleanup()

This way the dash button can be used as an additional remote control in parallel to the original remote control of the power plug sockets. This solution cannot keep up with the original remote control regarding response time, though. There are several steps in between which take their time…

Raspberry Pi Zero, dash button, remote power plug socket

Dash Button Hacks

Quite some time ago Amazon launched the dash buttons. Amazon intends them to be used for ordering everyday products, without even knowing the price before the order is automatically finalized! I don't want to use such a button that way.
In the end a dash button is a relatively cheap WiFi button. Quickly the first users found out how to hack them and use them in different contexts. A dash button can be a doorbell, a phone finder, a tool for doing statistics (work started/stopped, …) or it could simply switch the light on.
Here is a short description on how to set up a dash button for alternative uses.


Raspberry Pi (Zero)
Amazon Dash Button

Dash Button Setup

    • Follow the setup steps as described here. The trick is to leave the Amazon app directly after having copied the WiFi credentials to the button.
    • Find out the dash button's IP and MAC address, e.g. by looking at the active devices in your router's setup.
    • The button will constantly nag in the Amazon app about its incomplete setup. Block the button's internet access in the router's configuration.


A small Python script on a Raspberry Pi "sniffs" the local network for packets of all devices within the network. If the MAC of a dash button is found, certain actions can be triggered.

Additionally required packages:

sudo pip install scapy # http://www.secdev.org/projects/scapy/
sudo apt-get install tcpdump
from scapy.all import *
import httplib, urllib

def doWhatIWant():
  print "TODO"

def arp_detect(pkt):
  #pkt.show() # debug info
  if pkt[ARP].op == 1: # network request
    mac = pkt[ARP].hwsrc.lower()
    ip = pkt[ARP].psrc
    print "IP: " + str(ip) + ", MAC: " + str(mac)

    if mac == 'xx:xx:xx:xx:xx:xx': # dash button
      doWhatIWant()
      return "dash button detected\n"
    else:
      print "Unknown: " + str(ip) + ", " + str(mac)
      return "Unknown MAC: " + pkt[ARP].hwsrc

try:
  print sniff(prn=arp_detect, filter="arp", store=0)
except KeyboardInterrupt:
  pass

To run the script automatically after boot simply add a line to /etc/rc.local:

sudo python /home/pi/sniff.py&

Now the dash button is ready to be used for anything else.

To make it a doorbell or a phone finder, one could use Pushover. The app is installed on a smartphone; it is then possible to send notifications to this smartphone via the Pushover API using an API token and a user key.

Example code

def sendNotification(message):
  conn = httplib.HTTPSConnection("api.pushover.net:443")
  conn.request("POST", "/1/messages.json",
    urllib.urlencode({
      "token": "APItoken",
      "user": "usertoken",
      "message": str(message),
      "sound": "intermission"
    }), { "Content-type": "application/x-www-form-urlencoded" })

Shutdown Switch for a Raspberry Pi – or: too many Legs

A Raspberry Pi lacks a shutdown button. To keep the price down I heard.
However, simply cutting off the power of a running Pi might damage the system. This is avoidable with a simple button.

In projects that require a Pi running headless (without display) it certainly helps to have a button which triggers a safe shutdown procedure. It might not always be possible to log on to the Pi, run the custom shutdown procedure for the specific project and manually type the shutdown command.

A script triggered by a simple button can do this! The script can trigger the custom shutdown procedure for the project and can turn off the Pi afterwards.

Here’s the description of how it works for me:


Raspberry Pi (Zero) with running OS etc
Momentary switch
Some cables



The switch is connected to GND and a free pin next to the GND pin. In this case it is BCM pin 21. When the switch is pressed, an edge is detected. This signal can be used to trigger the desired actions.


Short vs. Long Press

A short and a long press can easily be distinguished using a simple counter in a callback function. The callback function is triggered when an edge is detected on pin 21.

# This script will wait for a button to be pressed and then shutdown or reboot the Raspberry Pi.
# A long press initiates a reboot, a very long press initiates a shutdown.

import time
from time import sleep
import RPi.GPIO as GPIO
import os

GPIN = 21        # BCM pin the button is connected to
demo_mode = False
debug = False
##############END CONFIG######################

GPIO.setmode(GPIO.BCM)

# Pin 21 will be input and will have its pull-up resistor (to 3V3) activated
# so we only need to connect a push button with a current limiting resistor to ground
GPIO.setup(GPIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
int_active = 0

print "Shutdown / Reboot script started."

# ISR: if our button is pressed, we will have a falling edge on pin 21
# this will trigger the interrupt:
def shutdown_reboot_callback(channel):
    # button is pressed
    # possibly shutdown our Raspberry Pi
    global int_active

    # only react when there is no other shutdown process running
    if (int_active == 0):
        int_active = 1
        pressed = 1

        # count how long the button is pressed
        counter = 0
        while (pressed == 1):
            if (GPIO.input(GPIN) == False):
                # button is still pressed
                counter = counter + 1
                if debug:
                    print "pressed: " + str(counter)
                # break if we count beyond 20 (very long press is a shutdown)
                if (counter >= 20):
                    pressed = 0
            else:
                # button has been released
                pressed = 0
            sleep(0.1)  # poll every 100 ms, so counter counts tenths of a second

        # button has been released, count cycles and determine action
        if (counter < 2):
            # short press, do nothing
            int_active = 0
            if debug:
                print "short press, do nothing"
        else:
            if debug:
                print "else " + str(counter)
            # longer press
            if (counter < 10):
                # medium length press, initiate system reboot
                if debug:
                    print "rebooting.."
                # run the reboot command (in demo mode just print it)
                if demo_mode:
                    print("sudo reboot now")
                else:
                    os.system("sudo reboot now")
            elif (counter >= 10):
                # long press, initiate system shutdown
                if debug:
                    print("shutting down..")
                # run the shutdown command (in demo mode just print it)
                if demo_mode:
                    print("sudo shutdown now")
                else:
                    os.system("sudo shutdown now")

# Program pin 21 as an interrupt input:
# it will react on a falling edge and call our interrupt routine "shutdown_reboot_callback"
GPIO.add_event_detect(GPIN, GPIO.FALLING, callback = shutdown_reboot_callback, bouncetime = 500)

while True:
    if debug:
        print "."
    sleep(1)


To launch the script at startup of the Raspberry Pi, place it on the Pi (e.g. in /usr/bin) and add this line to /etc/rc.local just before exit 0:

sudo python /usr/bin/shutdown_reboot.py&
Shutdown / Reboot switch on a Raspberry Pi Zero

And here is the solution for the attentive reader regarding the remark on too many legs: a common momentary switch has four legs and is square. It is actually a matter of orientation whether the switch works as expected…

Making a Raspberry Pi speak: Alexa

Speaking with devices (and making them answer or do something) seems to be a trend of the time. Many up-to-date smartphones and tablets allow using speech to trigger internet searches, to write short messages or e-mails (sometimes with funny results), to ask for something in the region, to turn on the smartphone's light, … .

In addition to voice control on smartphones, well-known companies have started to launch devices to enable voice control at home. However, the commercial solutions perform speech recognition on their own servers, presumably due to the computing power requirements of the AI behind them.
In this case one has to live with the fact that a constant internet connection is inevitable and that one's own voice samples are uploaded elsewhere for analysis.

Still, speech control can be extremely useful. My favourite example is setting a timer while being busy with something else.
In a smart home there are many more applications for speech control: light, heating, media, … . Even for elderly, handicapped or visually impaired people, controlling everyday procedures by voice can be a huge advantage in daily life.

Open Source Solution: Jasper

Jasper, the open source solution for voice control, offers the possibility to work offline, but the setup of the software is not trivial. The manual looks outdated, and some experimental but required libraries are no longer easy to find. This is why I turned to the API of a commercial solution to play with speech recognition on my Raspberry Pi 3.

At the moment speech recognition devices such as Amazon’s Alexa are not sold everywhere yet. It is possible to order them in Europe, but they are not shipped yet. As rumour has it: regions in which stronger accents are spoken are served first. 🙂

Amazon’s Alexa

The voice service that is used by Amazon's Alexa devices can be tested relatively easily on a Raspberry Pi 3. Since a couple of weeks, wake word detection is possible on the Raspberry Pi 3 as well.


Raspberry Pi 3 (incl. power supply, display, keyboard and mouse for setup)

USB microphone

Non-bluetooth speaker


Amazon Developer Account Settings

An Amazon developer account is required for using the voice service. The registration is free. After the registration an Alexa device has to be created along with security and web settings. On this page the required steps are explained. Save the client ID and secret for later.

Raspberry Pi

This github project contains the required installation software for download:

git clone https://github.com/alexa/alexa-avs-sample-app.git

The setup of the software is performed running the automated_install shell script. It has to be completed with the product name, client ID and secret. The script guides through the configuration and setup.

After successful installation the companion service, the AVS client and the desired wake word agent have to be launched in three separate terminals.

The AVS client requires authorization by signing in with the Amazon developer account. On request the default browser is opened, and after confirmation Alexa is ready to listen.

Playing around

On the Raspberry Pi, Alexa starts to listen more closely either on the push of a button or upon hearing the wake word 'Alexa'. It confirms with a sound that it is listening. The next spoken words (which should be English) are then analyzed. A longer break between words marks the end of the sentence.
Alexa's answers are returned quickly! Out of the box it is possible to ask for the current weather at a specific location, to ask for a joke, to convert units, to look up something in Wikipedia, etc. Alexa can be connected to a calendar, it can calculate, and it knows its "birthday" (being the day it was first sold). That's not all…

My low-cost microphone in combination with Alexa was a surprise. The first tests on various operating systems were devastating: I had to speak from a distance of 1 cm to be heard at all, independent of the recording settings. I told myself that having to be close to the microphone was also some kind of safety precaution… but Alexa immediately worked from a distance of 2 m as well. It felt a bit slower, but still, it worked…

When I played back a video recording of my running system telling a joke, the system simply started again upon hearing the wake word from the video! It has already been shown that infinite loops of voice control are easy to set up: https://www.youtube.com/watch?v=ZfCfTYZJWtI . Alexa might also react to its wake word spoken on TV, as recently learnt from the Verge's doll house article!

Alexa is extensible with custom skills for own applications. Perhaps this is the thing to try next.


Raspberry Pi 3 Gesture controlled digital Picture Frame

The first time I realised the possibilities of gesture controlled devices I was dazzled. It was the Kickstarter project of a smart bike assistant, the Haiku. The Haiku's idea is to use flick gestures to switch between functions or to handle notifications, independent of possibly covered fingertips. No direct touching is required!

How simple is it to steer something without touching? Without taking off gloves or anything else that hampers control, as it might with common touch sensitive devices such as smartphones or tablets? Without leaving fingerprints or accidentally painting the unlock pattern on a smudgy touchscreen (take a look at a smartphone in grazing light!)?
Gesture control sounds highly convenient and probably safer than touch control to me. Although one could accidentally trigger something by moving a bit too close to a gesture control input. That is truly a side effect…

During a hacking night with fellow workers I learnt about Pimoroni's skywriter. Attachable to either an Arduino controller or a Raspberry Pi, the skywriter recognises gestures such as flicks and taps and offers X/Y/Z 3D position sensing within a range of 15 cm. I ordered one to test it for another project… so why not use a skywriter to flick through the photographs in a directory? Displaying images in a digital picture frame with a convenient input method.

Components used

Raspberry Pi 3

Raspberry Pi 7″ Touch Screen (any other monitor for a Pi will do)

Skywriter breakout or HAT


The wiring is described in pimoroni’s github repository. The Raspberry Pi 3 pinouts are described here.



At pimoroni's github repository some very good examples can be found for using the skywriter on either an Arduino controller or a Raspberry Pi. On the Raspberry the required libraries have to be installed, e.g. with the shell command given in the readme.

For the Raspberry I started with the Python example touch.py. Each recognised gesture and a move's coordinates are printed to the console. The print statement in the move() method obscures the results of the recognised gestures; it can be commented out.

Python’s TKinter toolkit is used to display pictures in a window. The TKinter mainloop runs in a separate thread.


The usage should be natural and intuitive.

  • Flicking from left to right will display the next image in the directory.
  • Flicking from right to left will switch back to the previous image.
  • Tapping into the centre of the skywriter will close the program.
  • Tapping onto the lower end will minimise the image window.
  • Guess how to maximise the image window again!

This is the whole Python script to move through (holiday) pictures using gesture control on the digital picture frame:


"""
Switch between different images in a directory using the skywriter.

Swipe left to right: display next image in directory
Swipe north to south: display next image in directory
Swipe right to left: display previous image in directory
Swipe south to north: display previous image in directory

Tap the lower left corner to minimize the image window
Tap the upper right corner to maximize the image window

Press CTRL+C, ESC or tap the skywriter's center to exit.
"""
# use a Tkinter label as a panel/frame with a background image
# note that Tkinter only reads gif and ppm images
# use the Python Image Library (PIL) for other image formats
# free from [url]http://www.pythonware.com/products/pil/index.htm[/url]
# give Tkinter a namespace to avoid conflicts with PIL
# (they both have a class named Image)

import Tkinter as tk
from PIL import Image, ImageTk
from ttk import Frame, Button, Style
import gtk, pygtk
import time

import sys
import os
import signal
import skywriter
import threading
import random

pathToPictures = "/home/pi/Desktop/images/"

class ImageDisplay(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.root = None

    def callback(self):
        # quit the Tkinter main loop
        self.root.quit()

    def run(self):
        if self.root == None:
            self.root = tk.Tk()
            self.root.title('My Photographs')
        self.root.protocol("WM_DELETE_WINDOW", self.callback)
        self.root.mainloop()


    def setImageName(self, name):
        self.name = name

    def showImage(self, path):
        self.original = Image.open(path)

        # make the root window the size of the screen
        screen_size = getScreenSize()
        self.root.geometry("%dx%d+%d+%d" % (screen_size["width"], screen_size["height"], 0, 0))
        #self.root.attributes("-fullscreen", False)
        self.root.focus_set()  # <-- move focus to this widget
        self.root.bind("<Escape>", lambda e: e.widget.quit())

        self.resized = self.original.resize((screen_size["width"], screen_size["height"]), Image.ANTIALIAS)
        self.image = ImageTk.PhotoImage(self.resized) # keep a reference, prevent GC

        # root has no image argument, so use a label as a panel
        self.panel1 = tk.Label(self.root, image = self.image)
        self.display = self.image
        self.panel1.pack(side=tk.TOP, fill=tk.BOTH, expand=tk.YES)
        print "Display image " + path

    def updateImage(self, path):
        self.original = Image.open(path)
        # resize
        screen_size = getScreenSize()
        self.root.geometry("%dx%d+%d+%d" % (screen_size["width"], screen_size["height"], 0, 0))

        self.resized = self.original.resize((screen_size["width"], screen_size["height"]), Image.ANTIALIAS)
        self.image = ImageTk.PhotoImage(self.resized) # keep a reference, prevent GC

        # update the existing label with the new image
        self.panel1.configure(image = self.image)
        self.display = self.image
        print "Display image " + path

    def minimize(self):
        self.root.iconify()

    def maximize(self):
        self.root.deiconify()

    def stopThread(self):
        self.do_run = False  # stop thread

def getScreenSize():
    window = gtk.Window()
    screen = window.get_screen()
    print "width = " + str(screen.get_width()) + ", height = " + str(screen.get_height())
    screen_size = {}
    screen_size["width"] = screen.get_width()
    screen_size["height"] = screen.get_height()
    return screen_size

def findImages(directory):
    imageList = []
    for file in os.listdir(directory):
        if file.endswith(('.jpg','.JPG','.jpeg','.JPEG')):
            imageList.append(file)
    return imageList

def increaseIndex():
    global index, images
    index += 1
    # start again with index 0
    if index >= len(images):
        index = 0

def decreaseIndex():
    global index, images
    index -= 1
    # start again with index max
    if index < 0:
        index = len(images) - 1

def nextImage():
    global imageDisplay, images, index, pathToPictures
    increaseIndex()
    print "image " + images[index] + ", index=" + str(index) + " (" + str(len(images)) + ")"
    imageDisplay.updateImage(pathToPictures + images[index])

def previousImage():
    global imageDisplay, images, index, pathToPictures
    decreaseIndex()
    print "image " + images[index] + ", index=" + str(index) + " (" + str(len(images)) + ")"
    imageDisplay.updateImage(pathToPictures + images[index])

#---detect gestures on skywriter---#
@skywriter.flick()
def flick(start, finish):
    print('Got a flick!', start, finish)
    if (start == "west" and finish == "east") or (start == "south" and finish == "north"):
        print "Display next image in directory"
        nextImage()
    if (start == "east" and finish == "west") or (start == "north" and finish == "south"):
        print "Display previous image in directory"
        previousImage()

@skywriter.touch()
def touch(position):
    print('Touched!', position)
    if (position == "center"):
        print "Exit image display"
        imageDisplay.callback()
    if (position == "south"):
        print "minimize image window"
        imageDisplay.minimize()
    if (position == "north"):
        print "maximize image window"
        imageDisplay.maximize()

# parse picture folder
images = findImages(pathToPictures)

# reset index
index = 0

# launch image window as thread
imageDisplay = ImageDisplay()

def main():
    try:
        print "Skywriter image display launched"
        print "Images found: "
        for i in images:
            print i
        global imageDisplay, pathToPictures
        imageDisplay.start()   # run the Tkinter mainloop in its own thread
        time.sleep(1)          # give the thread time to create the root window
        imageDisplay.showImage(pathToPictures + images[index])
        signal.pause()         # wait for skywriter events
    except KeyboardInterrupt:
        print "Exit"
        imageDisplay.stopThread()

if __name__ == '__main__':
    main()


Gesture controlled Picture Frame

Raspberry Pi 3 wears a Display-O-Tron HAT

Two weeks ago I laid my hands on a Raspberry Pi 3. For sure I did not buy the 10,000,000th one. I ordered just a week before this milestone, which was a good excuse for discounts, but anyway, a Raspberry Pi is a nice and versatile thing to play with.

Since there are already a huge number of tutorials on how to set up and configure a Raspberry Pi, I will spare the details. For this example of using the Display-O-Tron HAT, let's assume your Pi is already set up with a Linux distribution, knows Python and is connected to the internet.


An Idea

My idea is to use the Display-O-Tron HAT as display for several purposes. The Display-O-Tron HAT comes with a three line LCD display, a row of very bright LEDs on the right and 6 touch buttons. Each button can trigger the display of different information such as

  • system statistics
  • the number of unread emails, signaling incoming messages
  • process states (dead or alive?)
  • date and time
  • head lines of a news feed

The Sources

Based on the examples for the Display-O-Tron HAT on github I created a simple, straightforward, quick and dirty Python script.

On the push of a button the LCD display is lit up for a couple of seconds in a different colour and the desired information is retrieved and displayed. After a couple of seconds the backlight LEDs are turned off again. This is realized using threads that finish after the desired time has passed.
The implemented functionality is a check for unread emails on googlemail's IMAP servers, a simple date and time display, a quick internet-connection and system-status check, and a status check of certain processes.
If unread emails are detected, the LEDs on the (b)right side blink three times in a row. Every minute the display switches back to the default view: date and time.
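The timed turn-off could also be sketched with `threading.Timer`, which runs a callback once after a delay. This is a minimal, stand-alone sketch of that idea, not the script's actual thread classes:

```python
import threading

switched = []

def backlight_off():
    # in the real script this would call backlight.rgb(0, 0, 0)
    # and backlight.graph_off(); here we just record the event
    switched.append("off")

# one-shot timer: fires the callback once after 0.2 seconds
timer = threading.Timer(0.2, backlight_off)
timer.daemon = True
timer.start()
timer.join()  # wait here only for demonstration purposes
print(switched)  # -> ['off']
```

The script below uses polling threads instead, which makes it easy to restart the timeout on every button press.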

#!/usr/bin/env python
'''Switch between different views using the Display-o-Tron HAT buttons.

Main button: Display date and time
Right: Display number of unread emails
Left: Display system statistics
Down: Display process status
Up: TODO find something else to display

Press CTRL+C to exit.
'''

import dothat.touch as touch
import dothat.lcd as lcd
import dothat.backlight as backlight
import signal
import sys

import psutil
import urllib2
import subprocess
import imaplib
import time
from datetime import date
import calendar
import threading


'''
Captouch provides the @touch.on() decorator
to make it super easy to attach handlers to each button.

The handler will receive 'channel' ( corresponding to a particular
button ID ) and 'event' ( corresponding to press/release ) arguments.
'''

'''DISPLAY utilities'''
# store time of last button touch
# to be able to reset backlight LEDs
oldTime = time.time()
turnedOff = True

def resetBacklightLEDs():
    global oldTime
    oldTime = time.time()
    global turnedOff
    turnedOff = False

def clearDOT():
    lcd.clear()

def colorDOT(r, g, b):
    backlight.rgb(r, g, b)

def turnOnLEDsDOT():
    # turn on LEDs one by one in a row
    for led in range(6):
        backlight.graph_set_led_state(led, 1)

def turnOffDOT():
    colorDOT(0, 0, 0)  # backlight off
    backlight.graph_off()  # side LEDs off
    global turnedOff
    turnedOff = True

def dotClock():

    d = time.strftime("%d.%m.%Y")
    t = time.strftime("%H:%M")
    the_date = date.today()
    day = calendar.day_name[the_date.weekday()]
    print day + ", " + d + " / " + t

    lcd.set_cursor_position(3, 0)
    lcd.write(day)
    lcd.set_cursor_position(3, 1)
    lcd.write(d)
    lcd.set_cursor_position(5, 2)
    lcd.write(t)

def dotMails(login, password):
    nofUnreadEmails = check_googlemail(login, password)
    if nofUnreadEmails > 0:
        lcd.write("Unread Emails: " + str(nofUnreadEmails))
        showNewMessages(200, 0, 0)
    else:
        lcd.write("No new mail")

def dotSystemStats():
    lcd.set_cursor_position(0, 0)
    lcd.write(check_internet())

    lcd.set_cursor_position(0, 1)
    lcd.write("CPU: " + check_CPU())

    lcd.set_cursor_position(0, 2)
    lcd.write("Memory: " + check_memory())

def check_internet():
    try:
        # ping google to check whether internet connection works
        response = urllib2.urlopen('http://www.google.com', timeout=1)
        return "Internet: OK"
    except urllib2.URLError:
        pass
    return "Internet connection broken"

def check_CPU():
    cpu_usage = str(psutil.cpu_percent(interval=None)) + " %"
    print "CPU usage: " + cpu_usage
    return cpu_usage

def check_memory():
    mem = psutil.virtual_memory()
    # print "Memory: " + str(mem)
    memory_used = str(mem.percent) + " %"
    print "Memory used: " + memory_used
    THRESHOLD = 100 * 1024 * 1024  # 100 MB
    if mem.available < THRESHOLD:
        print("Warning, memory low")
        return "Warning, memory low"
    return memory_used

def get_pid(name):
    try:
        pids = subprocess.check_output(["pidof", name])
    except subprocess.CalledProcessError as err:
        print "error code", err.returncode, err.output
        return []
    return map(int, pids.split())

def get_single_pid(name):
    return int(subprocess.check_output(["pidof", "-s", name]))

def check_process(name):
    PID = get_pid(name)
    if len(PID) == 1:
        print "PID " + name + ": " + str(PID[0])
        p = psutil.Process(PID[0])
        status = ""
        if p.status == psutil.STATUS_ZOMBIE:
            status = "Process " + name + " died"
            print status
        else:
            status = "Process " + name + " OK"
            print status
        return status
    return ""

def dotProcessStats():
    lcd.set_cursor_position(0, 0)
    process1 = check_process('geany')
    if len(process1) > 1:
        lcd.write(process1)
    else:
        lcd.write("geany is dead.")

    lcd.set_cursor_position(0, 1)
    process2 = check_process('bash')
    if len(process2) > 1:
        lcd.write(process2)
    else:
        lcd.write("bash is dead.")

    lcd.set_cursor_position(0, 2)
    process3 = check_process('firefox')
    if len(process3) > 1:
        lcd.write(process3)
    else:
        lcd.write("firefox is dead.")

def check_googlemail(login, password):
    # if new mail return # emails
    obj = imaplib.IMAP4_SSL('imap.gmail.com', '993')
    obj.login(login, password)
    obj.select()  # select the inbox before searching
    nofUnreadMessages = len(obj.search(None, 'UnSeen')[1][0].split())
    print "Unread emails: " + str(nofUnreadMessages)
    return nofUnreadMessages

class ShowNewMessagesThread (threading.Thread):
    red, green, blue = 0, 0, 0  # static elements, it means, they belong to the class

    def run (self):
        colorDOT(self.red, self.green, self.blue)
        for i in range(0, 3):
            # blink the side LEDs three times in a row
            turnOnLEDsDOT()
            time.sleep(0.5)
            backlight.graph_off()
            time.sleep(0.5)
            if i == 2:
                print "stop ShowNewMessagesThread"
                self.do_run = False  # stop thread
snmt = ShowNewMessagesThread()
def showNewMessages(r, g, b):
    global snmt
    if snmt.is_alive():
        return
    snmt = ShowNewMessagesThread()
    snmt.red = r
    snmt.green = g
    snmt.blue = b
    snmt.daemon = True  # enable stop of thread along script with Ctrl+C
    snmt.start()

class AlightThread (threading.Thread):
    red, green, blue = 0, 0, 0  # static elements, it means, they belong to the class

    def run (self):
        colorDOT(self.red, self.green, self.blue)
        while True:
            if turnedOff == False and time.time() - oldTime > 5:
                turnOffDOT()
                print "stop AlightThread"
                self.do_run = False  # stop thread
                break
            time.sleep(0.1)

at = AlightThread()
def alightDisplay(r, g, b):
    global at
    resetBacklightLEDs()  # restart the turn-off timeout
    if at.is_alive():
        return
    at = AlightThread()
    at.red = r
    at.green = g
    at.blue = b
    at.daemon = True  # enable stop of thread along script with Ctrl+C
    at.start()

class ClockThread (threading.Thread):
    def run (self):
        print "run update clock thread " + str(self)
        dotClock()
        self.do_run = False  # stop thread

    def stopClockThread(self):
        self.do_run = False  # stop thread

ct = ClockThread()
def updateClock():
    global ct
    if ct.is_alive():
        print "Clock thread " + str(ct) + " is alive."
        return
    print "Launching clock thread " + str(ct)
    ct = ClockThread()
    ct.daemon = True  # enable stop of thread along script with Ctrl+C
    ct.start()

'''DOT touch button handlers'''
@touch.on(touch.UP)
def handle_up(ch, evt):
    print("Up pressed: TODO find another useful display idea")
    alightDisplay(255, 0, 255)
    lcd.clear()
    lcd.write("Up up and away: TODO")

@touch.on(touch.DOWN)
def handle_down(ch, evt):
    print("Down pressed: display process states")
    alightDisplay(255, 0, 0)
    lcd.clear()
    dotProcessStats()

@touch.on(touch.LEFT)
def handle_left(ch, evt):
    print("Left pressed: display system statistics")
    alightDisplay(0, 100, 200)
    lcd.clear()
    dotSystemStats()

@touch.on(touch.RIGHT)
def handle_right(ch, evt):
    print("Right pressed, check for new email")
    alightDisplay(100, 200, 255)
    lcd.clear()
    dotMails('email address', 'password')

@touch.on(touch.BUTTON)
def handle_button(ch, evt):
    print("Main button pressed: show date and time")
    alightDisplay(255, 255, 255)
    lcd.clear()
    dotClock()

@touch.on(touch.CANCEL)
def handle_cancel(ch, evt):
    print("Cancel pressed!")
    backlight.rgb(0, 0, 0)
    alightDisplay(20, 20, 20)

if __name__ == '__main__':
    try:
        while True:
            timeDiff = time.time() - oldTime
            # print "time diff: " + str(timeDiff)

            # update clock every minute (and once right after start)
            if timeDiff > 59 or timeDiff < 0.5:
                oldTime = time.time()
                alightDisplay(255, 255, 255)
                updateClock()
            time.sleep(1)
    except KeyboardInterrupt:
        print "exit"
        turnOffDOT()



Raspberry Pi 3 Model B

Display-O-Tron HAT