Raspberry Pi Camera Guide
First published in 2020 by Raspberry Pi Trading Ltd, Maurice Wilkes Building,
St. John's Innovation Park, Cowley Road, Cambridge, CB4 0DS
ISBN: 978-1-912047-52-9
Welcome to
The Official Raspberry Pi Camera Guide
One of the most popular add-ons for the Raspberry Pi, the official Camera Module –
or the new High Quality Camera – turns your favourite single-board computer into
a powerful digital camera. Launched back in 2013, the original Camera Module was
succeeded by the higher-spec v2 in April 2016. The High Quality Camera was launched in April
2020, offers Ultra HD image resolution, and enables you to attach any C- or CS-mount lens.
In this book we’ll show you how to get started with your Raspberry Pi camera, taking photos
and videos from the command line and writing Python programs to automate the process.
We’ll reveal how to create time-lapse and slow-motion videos, before moving on to exciting
projects including a Minecraft photo booth, wildlife camera trap, and smart door with video.
There are just so many things you can do with a Raspberry Pi camera!
Contents
Chapter 1: Getting started
Set up and connect your camera and start taking shots
Chapter 1
Getting started
Find out how to connect your High Quality Camera or
Camera Module, enable it, and take your first shots
In this chapter, we show you how to connect the High Quality Camera or Camera Module
to your Raspberry Pi using the supplied ribbon cable. We will then enable it in Raspbian,
before entering some commands in a Terminal window to start shooting photos and
video. Let’s get started…
01 Fitting the lens to the camera
The 6 mm lens is a CS-mount device, so it
has a short back focus and does not need the
C-CS adapter that comes with the HQ Camera.
Rotate the lens clockwise all the way into the
back focus adjustment ring.
04 Focus
To adjust focus, hold the camera
with the lens facing away from you. Hold the
outer two rings of the lens; this is easier if the
aperture is locked as described above. Turn
the camera and the inner ring anti-clockwise
relative to the two outer rings to focus on a
nearby object. Turn them clockwise to focus
on a distant object. You may find you need to
adjust the aperture again after this.
01 Fitting the C-CS adapter
Ensure the C-CS adapter that comes with the
HQ Camera is fitted to the 16 mm lens. The
lens is a C-mount device, so it has a longer
back focus than the 6 mm lens and therefore
requires the adapter.
04 Aperture
To adjust the aperture, hold the camera
with the lens facing away from you. Turn the
inner ring, closest to the camera, while holding
the camera steady. Turn clockwise to close the
aperture and reduce image brightness. Turn anti-
clockwise to open the aperture. When happy with
the light level, tighten the screw on the side of
the lens to lock the aperture into position.
05 Focus
To adjust focus, hold the camera with the
lens facing away from you. Turn the focus ring,
labelled ‘NEAR FAR’, anti-clockwise to focus
on a nearby object. Turn it clockwise to focus on
a distant object. You may find you need to adjust
the aperture again after this.
To open a Terminal, click the Raspberry Pi menu button, then Accessories > Terminal. A black
window with green and blue writing in it will appear (Figure 4): this is the
Terminal, which allows you to access the command-line interface.
To take a test shot, type the following into the Terminal:
raspistill -o test.jpg
As soon as you hit the ENTER key, you’ll see a large picture of what the camera sees
appear on-screen (Figure 5). This is called the live preview and, unless you tell raspistill
otherwise, it will last for five seconds. After those five seconds are up, the camera will
capture a single still picture and save it in your home folder under the name test.jpg. If you
want to capture another, type the same command again – but make sure to change the
output file name, after the -o, or you’ll save over the top of your first picture.
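For example, assuming an illustrative output name of test2.jpg:

```shell
# Wait 15 seconds (15,000 ms) before capturing, instead of the default five
raspistill -t 15000 -o test2.jpg
```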
The -t option changes the delay before the picture is taken, from the default five seconds
to whatever time you give it in milliseconds – in this case, you have a full 15 seconds to get
your shot arranged perfectly after you press ENTER. You can explore more camera options
in the next chapter, or by referring to Chapter 17.
SHOOTING VIDEO
For shooting video, raspivid is what you need. Try it out with this Terminal command:
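Assuming the illustrative output name video.h264, the command looks like this:

```shell
# Record ten seconds (10,000 ms) of video at the default 1920 x 1080 resolution
raspivid -t 10000 -o video.h264
```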
This records a ten-second video (10,000 milliseconds) at the 1920 × 1080 resolution. You can also
shoot slow-mo video at 640 × 480 by using:
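A command of this form fits the description; the 90fps frame rate is an assumption, so check Chapter 17 for your camera model's maximum:

```shell
# VGA-resolution capture at a high frame rate, for slow-motion playback
raspivid -w 640 -h 480 -fps 90 -t 10000 -o slowmo.h264
```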
You can use VLC to play the videos back – see Chapter 4 for more details.
Precise
camera control
Use command-line switches to access a variety
of camera options and effects
So, you’ve connected the HQ Camera or Camera Module to your Raspberry Pi and
learned how to take still photos and shoot videos from the command line. Now
let’s explore the raspistill and raspivid commands further, including the many
switches and options available. We’ll also take a look at the raspiyuv command, which sends
its unencoded YUV or RGB output directly from the camera component to a file.
01 Preview mode
When taking stills or shooting video, one of the first things you might want to alter
is the preview window that appears by default on the screen. First of all, if it’s upside-down,
just add -rot 180 to your raspistill or raspivid command to rotate it. Also, adding -hf
and/or -vf will flip the image horizontally and/or vertically.
Using the -p switch, you can set the window’s on-screen position, along with its height and
width. The -p switch takes four parameters: x co-ordinate, y co-ordinate, width, and height. So,
for example:
raspistill -o image.jpg -p 20,100,1280,720
…would place the preview window’s top-left corner at co-ordinate (20,100), with a width of
1280 pixels and height of 720 pixels.
Note that if you only want to see a preview without taking a shot, you can simply omit the
-o image.jpg part. The -t switch sets the duration of the preview: you can set it to 0 to make
it stay on screen until you press CTRL+C.
If you want a full-screen preview, this is easily achieved using the -f switch. The -op switch
can be used to adjust the preview’s opacity, from 0 (invisible) to 255 (solid). If you want to
disable the preview window completely, use the -n switch.
The preview can be resized and positioned manually, and can also have its opacity adjusted
03 Keypress mode
If you’d like to take a still photo at an exact time, rather than having to wait for the
-t switch delay time to elapse, keypress mode is your friend. Just add the -k switch to your
raspistill command, then press the ENTER key to take the shot: it acts like a shutter button.
To exit the procedure, press X followed by ENTER.
By adding %04d to the end of your file name in the command, you can save every shot you
have taken before aborting:
raspistill -o keypress%04d.jpg -k
Each shot will have a four-digit sequential number added to its file name, so you’ll get
keypress0000.jpg, keypress0001.jpg, keypress0002.jpg, etc. This is a useful technique for
time-lapses using the -tl switch, too: see Chapter 3 for more details.
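To see how the %04d placeholder expands, here is a minimal Python sketch (the file names are illustrative):

```python
# Each value of i is zero-padded to four digits in the file name
for i in range(3):
    print('keypress%04d.jpg' % i)
```

This prints keypress0000.jpg, keypress0001.jpg, and keypress0002.jpg.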
04 Image effects
A whole bunch of effects can be added to the camera in real-time, shown in the
preview window. This is achieved by using the -ifx switch followed by one of the following
terms: none, negative, solarise, posterise, sketch, denoise, emboss, oilpaint, hatch,
gpen (graphite sketch effect), pastel, watercolour, film, blur, saturation (adjust colour
saturation of the image), colorswap, washedout, colorpoint, colorbalance, or cartoon.
If you’d like to take monochrome images, you can use the -cfx (colour effect) switch to
achieve this, using the following setting: -cfx 128:128.
To increase contrast between dark and light areas using DRC (dynamic range compression),
use the -drc switch to set the level: low, med, or high (it’s off by default).
05 Still options
Let’s take a look at some options that are specific to the raspistill command. As
already mentioned, we use -o followed by a file name to output to a file, and the -t switch sets
the shutter delay in milliseconds. For example, to save a photo taken after two seconds, use:
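With an illustrative output name, the command takes this form:

```shell
# Wait two seconds (2000 ms), then save the capture as photo.jpg
raspistill -t 2000 -o photo.jpg
```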
You can set the width and height of the image with -w and -h, each followed by a value – up to
4056 and 3040 (HQ Camera), 3280 and 2464 (Camera Module v2), or 2592 and 1944 (CM v1).
You can also set the quality of the JPEG image, using -q, from 0 to 100 – the latter is almost
completely uncompressed. Alternatively, to save it as a lossless PNG (slower than using JPG),
use -e (encoding) followed by png:
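For example, with an illustrative file name:

```shell
# Save a lossless PNG instead of the default JPEG
raspistill -e png -o image.png
```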
For a full list of options, see Chapter 17. The raspiyuv command works in a similar fashion
and offers most of the same options, apart from adding EXIF tags, but sends its YUV or RGB
output directly from the camera component to file. To use RGB, add the -rgb switch.
06 Shooting video
The raspivid command is used to shoot video. In this case, the -t switch sets the
duration in milliseconds. The bitrate is set using -b, in bits per second (so, 25Mbps is -b
25000000), while -fps sets the frame rate – see Chapter 17 for the maximum bitrate and
frame rate for your camera model. For example, to shoot five seconds of video at 1080p
(1920 × 1080), with a bitrate of 15Mbps and frame rate of 30fps, use:
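Putting those values together (output name illustrative):

```shell
# Five seconds of 1080p video at 15Mbps and 30fps
raspivid -t 5000 -w 1920 -h 1080 -b 15000000 -fps 30 -o video.h264
```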
Time-lapse
photography
Make a device to capture photographs at regular
intervals, then turn these images into a video
Time-lapse photography reveals exciting things about the world which you wouldn’t
otherwise be able to see. These are things that happen too slowly for us to perceive:
bread rising and plants growing; the clouds, sun, moon, and stars crossing the sky;
shadows moving across the land. In this chapter, we’ll be making a Raspbian-based device
that lets you watch things that are too slow to observe with the naked eye. To do this, we will
capture lots of still photographs and combine these frames into a video with FFmpeg/libav,
which can then be accessed via a web browser.
raspistill -o testimage.jpg
After five seconds (if using an original Camera Module v1, its red LED should light up during
this time), a JPEG image will be saved to the current directory. If the camera is mounted
upside-down, you can use the rotate command-line switch (-rot 180) to account for this.
sudo rm /var/www/html/index.html
Visit the IP address of your Raspberry Pi (e.g. http://192.168.1.45 – you can find this by using
hostname -I) on another computer and you should see an empty directory listing. If you run
the following command and refresh the page, you should see an image file listed. You run this
as a superuser (by using sudo) so you can write to the directory.
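The command in question writes a still straight into the web server's directory, along these lines (file name illustrative):

```shell
# Capture a photo directly into the web server's root
sudo raspistill -o /var/www/html/image.jpg
```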
Click on the file link on the remote computer and you’ll see the image in your browser.
The width and height have been changed to capture a smaller image in 16:9 aspect ratio. This
makes things easier later. The top and bottom are cropped, so make sure that your subject is
in frame. Run this to start the capture:
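A capture command along these lines matches the description; the interval and duration are taken from the figures given later in this chapter, while the 16:9 size and file names are assumptions:

```shell
# One 1280 x 720 frame every 10 s (10,000 ms) for three hours (10,800,000 ms),
# numbered sequentially into the web server's directory
raspistill -t 10800000 -tl 10000 -w 1280 -h 720 -o /var/www/html/frame%04d.jpg
```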
This installs a fork of FFmpeg, but you can also use the original FFmpeg.
To copy the images to a remote machine, you can download them from the web server using
wget or curl. For example:
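A recursive wget keeps only the image files (IP address as above):

```shell
# -r recurse, -nd don't recreate directories, -A fetch only .jpg files
wget -r -nd -A jpg http://192.168.1.45/
```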
When the rendering process has finished, you’ll be able to view the video in your browser.
The default frame rate is 25fps. This compresses three hours of images taken at ten-second
intervals to about 40 seconds of video. You can adjust this with the -framerate command-
line option. The bitrate (-b) has been set high, and the Constant Rate Factor (-crf) has been
kept low, to produce a good-quality video.
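Putting those options together, the rendering command will have been of this general shape; the input pattern, codec, and output name are assumptions:

```shell
# Combine numbered frames into a 25fps H.264 video; low CRF for good quality
ffmpeg -framerate 25 -i frame%04d.jpg -c:v libx264 -crf 17 -b:v 10M timelapse.mp4
```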
Running the rendering process on a Raspberry Pi. This will take some time, so you may prefer to use a faster machine
High-speed
photography
All you need to make dazzling slow-motion clips of exciting
events is a Raspberry Pi and HQ Camera or Camera Module
At first glance it seems counter-intuitive, but in order to create a smooth slow-motion
movie, you need a high-speed camera. Essentially, a movie is just a
collection of still photos, or frames, all played one after the other at a speed that
matches the original action. A slow-motion clip is produced by recording more frames than
are normally needed and then playing them back at a reduced speed. Normal film is typically
recorded at 24 frames per second (fps), with video frame rates varying between 25 and 29fps
depending on which format/region is involved. So if you record at 50fps and play back at
25fps, the action will appear to be taking place at half the original speed. It’s actually a little
more complicated than that with the use of interlaced frames, but you don’t really need to
consider them here.
Right: now that you’ve recorded your movie clip, how can you play it back? One easy way is to
use the free VLC player, which is now installed by default in the full Raspbian ‘with desktop and
recommended software’ image. If it’s not, you can install it with:
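It’s a standard package:

```shell
sudo apt-get install vlc
```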
The version on Raspberry Pi has some handy features which can be accessed by checking the
‘Advanced Controls’ option under the View menu. These include the extremely useful ‘Frame
by Frame’ button. You can also alter the playback speed to slow things down even further.
01 Lights
Get your scene lined up and lit, then test how it looks by using the camera preview
mode for five seconds:
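Omitting -o means no file is saved, so this shows a preview only:

```shell
# Five-second live preview, no capture
raspistill -t 5000
```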
02 Camera
Type the command, ready for execution (but don’t press ENTER yet):
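It will be a high-frame-rate raspivid command along these lines; the frame rate shown is an assumption (see Chapter 17 for your model's maximum) and the file name is illustrative:

```shell
raspivid -w 640 -h 480 -fps 90 -t 10000 -o action.h264
```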
03 Action
When everything is ready, hit ENTER and then release the car / drop the egg / burst
the balloon. You’ll have footage before and after the event, which can be trimmed with some
post‑production editing.
So far, we’ve looked at using the Camera Module or HQ Camera from the command
line. This is all very well and good, but what if you want to control it from a Python
program? This is where the picamera library comes in, enabling you to access all
the camera’s features in Python. In this chapter, we’ll take a look at how to use it to take stills,
shoot videos, alter settings, and add effects.
01 Getting started
The picamera library comes pre-installed in the most recent versions of Raspbian. If
it’s not present already, you can install it manually. In a Terminal window, enter:
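```shell
sudo apt-get install python3-picamera
```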
With your camera already connected and enabled in Raspberry Pi Configuration, open
Programming > Thonny from the Raspbian desktop menu. Create a new file by clicking
File > New file. Save it with File > Save, naming it ch5listing1.py. Note: Never name a file
picamera.py, as this is the file name for the picamera library itself!
Now enter the code from ch5listing1.py. Save it with CTRL+S and run with F5. The full-
screen camera preview should be shown for ten seconds, and then close. Note: To be able
to see the preview when using VNC for remote access from another computer, open the VNC
Server menu and go to Options > Troubleshooting, then select ‘Enable direct capture mode’.
If the preview appears upside-down, add the line camera.rotation = 180 just above
camera.start_preview(). Other possible rotation values are 90 and 270.
You can alter the transparency level of the preview by entering an alpha value – from 0 to
255 – within the latter command’s brackets; e.g. camera.start_preview(alpha=200).
It’s also possible to change the position and size of the preview. For example, to place its
top corner 50 pixels right and 150 down, and resize it to 1024 × 576:
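With picamera, the preview’s position and size are passed to start_preview (this assumes the camera object from the listings below):

```python
# A windowed preview with its top-left corner at (50, 150), sized 1024 x 576
camera.start_preview(fullscreen=False, window=(50, 150, 1024, 576))
```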
02 Take a photo
Now let’s take a still photo. We can do this by adding the line:
camera.capture('/home/pi/Desktop/image.jpg')
…just after the sleep in our code, so it looks like ch5listing2.py. Run the code and after a
preview of five seconds (as set by sleep), it’ll capture a photo as image.jpg. You may see the
preview adjust to a different resolution momentarily as the picture is taken. In this example,
the resulting image file will appear on the desktop; double-click its icon to open it.
You can alter the file name and directory path in the code, along with the sleep time.
Remember, though, that it should be at least five seconds, to give the camera sensor enough
time to adjust its light levels.
03 Make a loop
The great thing about using Python with the picamera library is that it makes it easy
to use a loop to take a sequence of photos. In Thonny, create a new file and enter the code
from ch5listing3.py.
After initiating the camera preview, we add a for loop with a range of 5, so it will run five times
to take five photos. The sleep command sets the time between shots, captured using the line:
camera.capture('/home/pi/Desktop/image%s.jpg' % i)
Here, the %s token is replaced by whatever we add after the % following the file name – in this
case, the variable i set by our for loop. Note that i will range from 0 to 4, so the images will
be saved as image0.jpg, image1.jpg, and so on. Once they’re all taken, the preview will close.
In this example, you’ll see the five files on your desktop; double-click to open them.
You can also use a for loop to alter camera setting levels such as brightness over time. For
more details, see Step 04.
camera.brightness = 50 (0 to 100)
camera.sharpness = 0 (-100 to 100)
camera.contrast = 0 (-100 to 100)
camera.saturation = 0 (-100 to 100)
camera.iso = 0 (automatic) (100 to 800)
camera.exposure_compensation = 0 (-25 to 25)
camera.exposure_mode = 'auto'
camera.meter_mode = 'average'
camera.awb_mode = 'auto'
camera.rotation = 0
camera.hflip = False
camera.vflip = False
camera.crop = (0.0, 0.0, 1.0, 1.0)
The maximum resolution for photos is 4056 × 3040 (HQ Camera), 3280 × 2464 (Camera
Module v2), or 2592 × 1944 (Camera Module v1). Note: you may need to increase gpu_mem in
/boot/config.txt to achieve full resolution with the Camera Module v2.
06 Shoot a video
To shoot video footage, we replace the camera.capture() command with
camera.start_recording(), and use camera.stop_recording() to stop. Enter the
example code from ch5listing6.py.
When you run the code, it records ten seconds of video before closing the preview. To play
the resulting file, open a Terminal window from the desktop and enter:
omxplayer video.h264
(Or you can use VLC instead.) Note that it may well play faster than the original frame rate. It’s
possible to convert videos to MP4 format and adjust the frame rate using the MP4Box utility
(installed with sudo apt-get install gpac), like so:
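For example, assuming a 30fps frame rate:

```shell
# Wrap the raw H.264 stream in an MP4 container, tagged at 30fps
MP4Box -fps 30 -add video.h264 video.mp4
```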
All of the image effects and most of the camera settings can be applied while shooting video.
You can also turn on video stabilisation, which compensates for camera motion, by adding the
following line to your Python program:
camera.video_stabilization = True
ch5listing1.py / Python 3
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(10)
camera.stop_preview()
ch5listing2.py / Python 3
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(5)
camera.capture('/home/pi/Desktop/image.jpg')
camera.stop_preview()
ch5listing3.py / Python 3
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
for i in range(5):
    sleep(5)
    camera.capture('/home/pi/Desktop/image%s.jpg' % i)
camera.stop_preview()
ch5listing4.py / Python 3
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
camera.image_effect = 'colorswap'
sleep(5)
camera.capture('/home/pi/Desktop/colorswap.jpg')
camera.stop_preview()
ch5listing5.py / Python 3
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
for effect in camera.IMAGE_EFFECTS:
    camera.image_effect = effect
    camera.annotate_text = "Effect: %s" % effect
    sleep(5)
camera.stop_preview()
ch5listing6.py / Python 3
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
camera.start_recording('/home/pi/video.h264')
sleep(10)
camera.stop_recording()
camera.stop_preview()
Stop-motion
and selfies
Wire up a physical push-button to take photos
Have you been reading the last few chapters and thinking you’d like to take a picture
with a Raspberry Pi camera with less hassle? In this tutorial we’ll show you how to
take a photo with a click of a button, just like a real camera. This could be useful for
many projects (for example, time-lapse photography), but in this chapter we are focusing on
stop-motion animation. We also show how to create your own selfie stick!
02 Install picamera
That’s all the hardware done. Now it’s time for the software. If you haven’t done so
already in Chapter 5, you’ll need to install the picamera library. In a Terminal window, enter:
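```shell
sudo apt-get install python3-picamera
```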
YOU’LL NEED
• Camera Module / HQ Camera
• Push-button
• Breadboard (optional)
• Jumper wires
• Raspberry Pi case with a hole for the camera
• Cable (selfie stick)
• Long wires (selfie stick)
• A stick, slim metal pole etc. (selfie stick)
If for some reason you don’t have GPIO Zero already installed (it has come pre-installed in
Raspbian for some time), do so with:
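```shell
sudo apt-get install python3-gpiozero
```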
03 Stop-motion software
Because we’re focusing on stop-motion for our first project, we’re using the camera’s
preview mode so that we can set up our shot before we take it, to ensure everything is in the
frame. Then, only when the button is pressed do we save an image file. Each image file will
have a different name based on the date and time at which it is taken. This makes it easy to
assemble all the images from the shoot for post-processing.
The wonderful GPIO Zero library is used to capture the button activity; we simply define a
function that is run whenever the button is pressed. This function uses the picamera Python
library which allows us to control the camera through code, making all the normal command-
line operations available.
Download or type up the code from ch6listing1.py and either run it through Thonny or the
command line. To quit the program, press CTRL+C.
05 Selfie stick
Next, we’ll look at making a selfie stick. A lot of people roll their eyes and complain
about vanity when it comes to the art of the selfie, but we all know it’s nothing like that. New
outfit? New glasses? Eyeliner wings perfectly symmetrical today? Why not chronicle it? It’s a
great confidence boost.
You can use a breadboard for a small button, or connect your jumper
wires directly to the pins on a bigger one
from datetime import datetime
from gpiozero import Button
import picamera

b = Button(14)
pc = picamera.PiCamera()
running = True
# pc.resolution = (1024, 768)
# use this to set the resolution if you dislike the default values

def picture():
    # time-stamp each shot so every file name is unique
    timestamp = datetime.now()
    pc.capture('pic' + str(timestamp) + '.jpg')  # taking the picture
Our test selfie stick is very DIY, but you can use anything as long
as you can attach a Raspberry Pi and have a long enough wire
Our Raspberry Pi-powered selfie stick will use a similar hardware and software setup to the
stop-motion animation project. As before, we’re wiring up a push-button to GPIO 14 and GND
pins on Raspberry Pi, but this time we need to attach the jumpers to longer wires to put the
button at the end of the ‘stick’ – we used a spatula, but anything long will do.
Your Raspberry Pi needs to be near to the camera (unless you’ve got an extra-long ribbon
cable). Attach Raspberry Pi in a case to one end of the stick with whatever means you see fit
(glue, adhesive putty, string, etc.) and then attach the button.
Flash photography
using an LED
Add an LED flash to shoot images in low light
The Raspberry Pi Camera Module or HQ Camera works really well in good lighting
conditions, but what if there’s less light available? Here we show you how to set up a simple
LED flash, which will be triggered each time you take a photo, using the picamera Python
library. We also take a look at how to shoot better images in low light when you are not
using a flash.

YOU’LL NEED
• Camera Module / HQ Camera
• White LED
• Resistor
wget https://raw.githubusercontent.com/raspberrypi/firmware/master/extra/dt-blob.dts
You’ll need to find the correct part of the code for the Raspberry Pi model you’re using; for
instance, the part for Raspberry Pi 4 is found under pins_4b {.
Here you’ll find pin_config and pin_defines sections. In the pin_config section, add a
line to configure the GPIO pin (we’re using GPIO 17) that you want to use for the flash:
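Based on the pin_define example in the next step, the added line is likely of this form:

```
pin@p17 { function = "output"; termination = "pull_down"; };
```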
03 Enable flash
Next, we need to associate the pin we added with the flash enable function by editing
it in the pin_define section. We simply change absent to internal and add a line with the pin
number, so it looks like the following:
pin_define@FLASH_0_ENABLE {
type = "internal";
number = <17>;
};
Note that it’s the FLASH_0 section that you need to alter: FLASH_1 is for an optional privacy
LED to come on after taking a picture, but we won’t bother with that.
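With the source edited, it needs compiling into a binary blob; the command is likely of this form:

```shell
# Compile the edited device tree source into dt-blob.bin
sudo dtc -I dts -O dtb -o dt-blob.bin dt-blob.dts
```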
This should output nothing. Next, you need to place the new binary on the first partition of the
microSD card. In the case of non-NOOBS Raspbian installs, this is generally /boot, so use:
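```shell
sudo cp dt-blob.bin /boot/
```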
If you installed Raspbian via NOOBS, however, you’ll need to do the following instead:
To activate the new device tree configuration, reboot your Raspberry Pi.
You need to edit the device tree source to enable a GPIO pin for the flash
06 Test it out
With the LED connected, we can now test out our flash with a short Python program.
In Thonny, create a new file and enter the code from ch7listing1.py. The camera.flash_mode
= 'on' line sets the flash to trigger when we issue the capture command below; the LED will
light up briefly before the image capture, to enable the camera to set the correct exposure level
for the extra illumination, before the flash proper is triggered.
If you want the flash to trigger automatically only when it’s dark enough, you can change the
penultimate line of the code to camera.flash_mode = 'auto'.
07 Low-light photography
In low-light scenarios where you don’t want to use a flash, you can improve capture
of images using a few tricks. By setting a high gain combined with a long exposure time, the
camera is able to gather the maximum amount of light. Note that since the shutter_speed
attribute is constrained by the camera’s frame rate, we need to set a very slow frame rate.
The code in ch7listing2.py captures an image with a six-second exposure time: this is the
maximum time for the Camera Module v1 – if you have a v2 Camera Module, it can be
extended to ten seconds, or much longer for an HQ Camera. The frame rate is set to a sixth of
a frame per second, while we set the ISO to 800 for greater exposure. A pause of 30 seconds gives the
camera enough time to set gains and measure AWB (auto white balance).
Try running the script in a very dark setting: it may take some time to run, including the
30-second pause and about 20 seconds for the capture itself. Note: if you’re getting a timeout
error, you may need to do a full Raspbian upgrade with sudo apt-get update and sudo
apt‑get dist-upgrade.
The particular camera settings in this
script are only useful for very low light
conditions: in a less dark environment, the
image produced will be heavily overexposed,
so you may need to increase the frame rate
and lower the shutter speed accordingly.
If the image has a green cast, you’ll need
to alter the white balance manually. Turn
AWB off with camera.awb_mode = 'off'.
Then set the red/blue gains manually; e.g. camera.awb_gains = (1.5, 1.5).

Even a single LED can provide illumination for close-up photography
ch7listing2.py / Python 3
from picamera import PiCamera
from time import sleep
from fractions import Fraction
# Set a framerate of 1/6fps, then set shutter
# speed to 6s and ISO to 800
camera = PiCamera(resolution=(1280, 720),
framerate=Fraction(1, 6))
camera.shutter_speed = 6000000
camera.iso = 800
# Give the camera a good long time to set gains and
# measure AWB (you may wish to use fixed AWB instead)
sleep(30)
camera.exposure_mode = 'off'
# Finally, capture an image with a 6s exposure. Due
# to mode switching on the still port, this will take
# longer than six seconds
camera.capture('dark.jpg')
Make a Minecraft
photo booth
Create a photo booth in Minecraft that takes photos
of the real world. What will you see on your travels?
Not only is Minecraft Pi great fun to play around with, you can also use Python
programming to manipulate the Minecraft world and create various structures
within it. Going beyond this, you can even have it interact with the real world. In this
chapter, we’ll be getting Minecraft to trigger the Camera Module or HQ Camera with code when
the player enters a virtual photo booth.
The first thing you need to do is import the Minecraft API (application programming
interface). This enables you to connect to Minecraft and program it with Python. You also need
to import picamera’s PiCamera class to control the camera, and the time module to add a
small delay between taking each photo.
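Assuming the standard mcpi library that ships with Minecraft Pi, those imports look like this:

```python
from mcpi.minecraft import Minecraft
from picamera import PiCamera
from time import sleep
```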
Open Minecraft from the applications menu (if it's not present under Games, install it via
the Recommended Software tool), then enter an existing world or create a new one. Move
the Minecraft window to one side of the screen. You’ll need to use the TAB key to take your
mouse’s focus away from the Minecraft window to move it. This will be needed later when you
switch between the Minecraft and Python windows.
Open Thonny from the applications menu. This will open up the code editor which you’ll use
to write the photo booth program.
Enter the code from ch8listing1.py, or download it. Save with CTRL+S and run the
program with F5. You should see the message ‘Find the photobooth’ appear in the Minecraft
world. This is the first part of the code. Stop the program running using CTRL+C.
Camera tests
Next, we’ll make sure the camera is set up. We’ve set the camera to show a two-second
preview, so that you can strike your pose and smile before the picture is taken. The image is
stored as a file called selfie.jpg in your home directory (/home/pi).
Now, you need to create a photo booth in the Minecraft environment. This is done manually,
and the booth can be built wherever you want to locate it. Using any block type, build your
photo booth. It can be any shape you like, but it should have at least one block width of free
space inside so that the player can enter.
Once you have created your photo booth, you need to be able to move your player inside and
onto the trigger block. This is the block that the player stands on to run the function that you
wrote in the first step, which will then trigger the camera. In the Minecraft environment, your
position is given in reference to the x, y, and z axes. Look at the top-right of the window and
you’ll see the x, y, and z co-ordinates of your player – for example, 10.5, 9.0, -44.3. Assuming
you are still in the photo booth, then these are also the x, y, and z co-ordinates of the trigger
block in your booth.
Steve is your ‘shutter’ in the Minecraft world: move him to the booth to take a photo

Place the booth anywhere in your world. Give it a special room in your house, or use it as a trap to see if someone is in your world
To find your position, you use the code x, y, z = mc.player.getPos(). This saves the
x, y, and z position of your player into the variables x, y, and z. You can then use print(x) to
print the x value, or print(x, y, z) to see them all if you wish, by adding it to the code. Now
you know the position of the player, you can test to see if they’re in the photo booth.
At this point we have a photo booth, the co-ordinates of the trigger block, and code to
control the camera and take a picture. The next part of the code is to test whether the program
knows when you’re in the photo booth. To do this, we create a loop which checks if your
player’s co-ordinates match the trigger block co-ordinates. If they do, then you’re standing in
the photo booth. For this, we use a simple if statement, which is known as a conditional.
Change the if line in the code to ensure the co-ordinates you enter are those of your photo
booth. Save and run your code to test it: walk into your photo booth and you should see the
message ‘You are in the photobooth!’ in the Minecraft window.
You will note that the if statement checks if the x value is greater than or equal to 10.5: this
is to ensure that it picks up the block, as it could have a value of 10.6. Remember to replace
the x, y, and z values with those from your photo booth. After the message is printed, the same
preview and camera snap will happen as before the while loop. The loop then resets itself so
you can enter it again and take another photo!
from mcpi.minecraft import Minecraft
from picamera import PiCamera
from time import sleep

mc = Minecraft.create()
camera = PiCamera()

def take_the_pic():
    camera.start_preview()
    sleep(2)
    camera.capture('/home/pi/selfie.jpg')
    camera.stop_preview()

while True:
    x, y, z = mc.player.getPos()
    # Replace these values with your own trigger block co-ordinates
    if x >= 10.5 and y == 9.0 and z == -44.3:
        sleep(3)
        take_the_pic()
        sleep(3)
We’ve all been there. You’ve gone out for the day and you know you closed your
bedroom door, but you come back and it’s slightly ajar. Who’s been in there? Were they
friend or foe? In this chapter we’ll use the Camera Module or HQ Camera as a spy camera
that takes a picture when anyone’s presence is detected by a passive infrared (PIR) sensor.
Here we’re using a Raspberry Pi Zero – which is easier to hide away due to its size – with
a special camera cable for it, but you can use any Raspberry Pi model. Unless you want to
power it from the mains, you’ll also need a portable power supply such as a mobile phone
battery pack.
YOU’LL NEED
• Camera Module / HQ Camera
• PIR sensor magpi.cc/pir
• Raspberry Pi Zero camera cable (optional) magpi.cc/zerocamcable
• Portable power supply (optional)
• Jumper wires
01 Getting started
First, connect your Camera Module or HQ Camera to Raspberry Pi. Note that if you’re
using a Raspberry Pi Zero, you’ll need a special adapter cable since its camera connector is
smaller: the cable’s silver connectors should face the Raspberry Pi circuit board. You’ll also
need to have enabled the camera in Raspberry Pi Configuration, as explained in Chapter 1.
We’ll be using the picamera Python library to trigger our spy camera, so if you haven’t yet
installed it, open a Terminal window and enter:
find their labels on the bottom of the sensor, lift off the plastic golf-ball-like diffuser and you
should see them on the top of the board. VCC needs to be connected to a 5V power pin,
GND needs to go to a ground pin, and then there’s the OUT wire which will be our input. We’re
connecting it to GPIO 14.
If your Raspberry Pi Zero has GPIO pins attached, you can use female-to-female jumper
wires to make the connections, as shown in Figure 1. Otherwise you can loop the wire around
the GPIO holes and use a bit of putty to keep them in place, or a dab of glue from a glue gun on
a low setting. Soldering is an option if you want to create a permanent spy camera device.
In a never-ending while True: loop, we use GPIO Zero’s handy wait_for_motion function
to pause the code until the PIR detects any motion. When it does, we set the photo file name
to the current time and date, then take the picture. To enable the PIR to settle, we sleep for five
seconds before returning to the top of the loop to wait for motion again.
04 Final preparations
You can run the code first to give it a test. You might want to change the sensitivity
and/or trigger time, which you can do by adjusting the little orange potentiometer screws on
the side of the PIR board: Sx adjusts sensitivity, while Tx alters the trigger time.
Once that’s done, we’ll get the program to start automatically whenever we boot up the
Raspberry Pi. To do so, open up a Terminal window and edit the profile config file with sudo
nano /etc/profile. To the bottom of the file, add this line:
from gpiozero import MotionSensor
from picamera import PiCamera
from datetime import datetime
from time import sleep

sensor = MotionSensor(14)
camera = PiCamera()

while True:
    sensor.wait_for_motion()
    filename = datetime.now().strftime("%H.%M.%S_%Y-%m-%d.jpg")
    camera.capture(filename)
    sleep(5)
In addition, to get Raspberry Pi to boot up slightly faster and, more importantly, to use a little
less power so your battery lasts longer, it’s best to get it to boot directly to the command line
rather than booting to the desktop. The easiest way to change this is to open Preferences >
Raspberry Pi Configuration from the desktop; in the default System tab, change Boot to the
‘To CLI’ option. Alternatively, open a Terminal window and enter sudo raspi-config to open
the Configuration Tool; select Boot Options > Desktop / CLI and option B2 – Console Autologin
Text console.
Smart door
Adding a Raspberry Pi to your door has magical results. Want
to see who’s at the door or know when the post has arrived?
Control the lock? Read on…
Is your door a bore? Open and close, open and close. Snoozefest. Surely it can do more
than that? How about a smart door that knows when someone approaches, when the post
arrives, and can even offer remote viewing of the peephole? You can also add intelligent
lighting, a controllable door lock, and facial recognition, all powered with your Raspberry Pi. So,
let’s ignore super-expensive door systems and build our own. You can do as much, or as little,
as you like of this project and there’s plenty of room for new and inventive uses.
YOU’LL NEED
• Raspberry Pi Touch Display magpi.cc/touch • Wired doorbell magpi.cc/wiredbell
• Camera Module / HQ Camera • PAM8302 amplifier magpi.cc/pam8302
• PIR sensor magpi.cc/pir • Speaker magpi.cc/3inspeaker
• 2 × Security door contact reed switch • Magnetic access control system
magpi.cc/doorswitch magpi.cc/magneticaccess
03 Footsteps approaching!
The first smart thing our door is going to do is detect someone approaching it. A
cheap PIR sensor is perfect for the job. These cool little geodesic domes are triggered by heat
and are the same gizmos that you find in motion-sensor lights, switches, and security systems.
Connect to Raspberry Pi as shown in Figure 1, checking whether you have a 5 V or 3.3 V
sensor. Sensitivity and duration of a ‘detection’ can be controlled by the two potentiometers on
the PIR board. Mount this outside in a suitable location to ‘watch’ your door.
05 Ding dong!
If we replace the doorbell with
our own button, we can take a photo
with the Camera Module or HQ Camera
when someone presses it and send
a notification. Way better. Mount a
standard wired doorbell, which after all
is just a momentary contact button, to
the outside door frame and wire it back
to Raspberry Pi using GPIO 13 and an
available GND pin. If you’re prototyping
on a breadboard, a tactile switch will
do fine.
06 Sounds good
There’s little point in a doorbell that makes no sound. We can use the small, but
surprisingly powerful, PAM8302 amplifier with a speaker to make some noise. Supply power
by soldering ‘Vin’ to an available 3V3 pin on Raspberry Pi, and ground to GND. To get an audio
signal, you can tap the audio connector’s signal and ground, then connect them to A+ and A-
respectively. Finally, solder the speaker to the larger + and - terminals. When prototyping, you
can skip this and use any active or passive speaker via the audio connector on Raspberry Pi.
07 Code
Double-check all your connections and then power up your Raspberry Pi. To use the
code published here (overleaf), open a Terminal and enter:
mkdir ~/smartdoor
nano ~/smartdoor/smartdoor_test.py
Now type in the code as shown. Alternatively, to download all the code:
cd
git clone https://github.com/themagpimag/cameraguideCh10
python3 ~/smartdoor/smartdoor_test.py
Watch the console output. If everything is working, you should be able to trigger the PIR, the
reed switches, and the doorbell. The camera will capture ten seconds of video when motion is
detected, and a photo when the doorbell is pressed. These are both saved to the desktop.
08 Get alerts!
Let’s make this useful. Install Pushover on your phone, head over to pushover.net,
sign up for a trial account, then log in and make a note of your User Key (a long string of
characters). Now create a new Application
and give it a name. Once created, you’ll
see an API Token. Make a note of this
too. From the GitHub repository, edit
smartdoor.py and add the User Key and
API Token where shown. Run this version
and you’ll get phone alerts for each event
and even a photo attachment when the
doorbell is pressed.
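The smartdoor.py script handles the alerts for you, but as an illustration of what a Pushover notification involves, here is a minimal stdlib-only sketch. Pushover’s message endpoint takes a POST with your user key, app token, and a message; the key and token strings below are placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def build_alert(user_key, app_token, message):
    """Build a POST request for Pushover's message API."""
    data = urlencode({
        "user": user_key,
        "token": app_token,
        "message": message,
    }).encode("utf-8")
    return Request(PUSHOVER_URL, data=data)

req = build_alert("USER_KEY", "APP_TOKEN", "Doorbell pressed")
# To actually send it: urllib.request.urlopen(req)
```

The send is left commented out so the sketch stays self-contained; in the real project you would call it whenever an event fires.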
09 Intelligent
porch light
Following on from the Trådfri lighting
tutorial in The MagPi #75 (magpi.cc/75),
if you have an external porch light, why
not make it smart! The file porch.py will
connect a Trådfri smart light to an API
that provides sunrise and sunset times
for your location. Leave the script running and the light will switch on and off at the
correct times. Additionally, it monitors the
The web app can run on the touchscreen, as well as on mobile devices or desktop browsers. Release the door from anywhere!
10 Door lock
If you’re interested in being able to control your door’s lock, you may see that some
solutions are very pricey. One that is perfect for experimentation is the magnetic hold lock,
which uses an electromagnet to hold the door closed. The one we’ve used can withstand
180 kg of force, although stronger ones are available. The magnet mounts on the door and
the electromagnet on the frame. The provided PSU contains a relay that can be powered by
Raspberry Pi by simply connecting it to a spare GPIO line and ground. Please note this is no
replacement for a proper door lock system.
11 Web app
It would be great to see what our door has been up to remotely, so a web app seems
the next logical step. In the directory called webapp is a Python script that uses Flask
to provide a web server that is usable on mobile devices. You can take a photo from the
peephole, see the last recorded video, and even control the magnetic door lock from Step 10.
Simply run the app alongside the others. Better still, set smartlights.py, porch.py, and
webapp/smartdoor.py to start on boot (see the repository README).
12 Facial recognition
Once a futuristic technology, decent facial recognition is now well within the grasp
of Raspberry Pi. Using the doorbell photo taken by Raspberry Pi, we can recognise a face
using reference photos and send an alert to Pushover with the name of the caller! In a secure
environment, a recognised face could even trigger the lock or you could play a welcome
announcement. The install process is a little complicated, so if this interests you, see the
documentation in the face_recognition directory of the ‘smartdoor’ GitHub repository.
13 Over to you
Here we’ve given you the basics to get going, but more complex events are possible.
You could alert different people based on facial recognition or play custom doorbell tones.
And, if you had problems with deliveries, video evidence can build up automatically. On a
serious note, remember a lot of this is ‘just for fun’ and designed to inspire, so unless you’re
prepared to put in the work hardening the code and including failsafes, don’t rely on this, or
possibly make it as a fun kids’ door project (but maybe without the lock!).
print('Getting smart...')

def motionDetected():
    print('Motion detected, video recording')
    os.system('DISPLAY=:0 xset s reset')  # Wakes the display up
    camera.start_preview()
    camera.start_recording('/home/pi/Desktop/motion.h264')
    sleep(10)

def motionStopped():
    print('Stopping video recording')
    camera.stop_recording()
    camera.stop_preview()

def doorOpen():
    print('Door open')

def doorClosed():
    print('Door closed')

def letterboxOpen():
    print('You got mail!')

def doorbellPressed():
    subprocess.Popen(['mpg123', '/home/pi/smartdoor/doorbell.mp3'],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Car Spy Pi
Who’s that parked on the driveway?
Find out automatically using ANPR
Automatic number-plate recognition (ANPR) is becoming more and more
commonplace. Once the exclusive realm of the police, the technology used to accurately
read car number-plates can now be found in supermarket and airport car parks. It wasn’t
long ago that this technology was extremely expensive to purchase and implement. Now,
even a Raspberry Pi has the ability to read number-plates with high accuracy using the
Camera Module (or HQ Camera) and open-source software. Let’s see what’s possible by
building a system to detect and alert when a car comes onto the driveway.
YOU’LL NEED
• Camera Module / HQ Camera
• Suitable outdoor enclosure, e.g. magpi.cc/rainberry
• Pushover account (optional) pushover.net
01 Pick a spot
First things first: where are we going to put it? Although this project has lots of
applications, we’re going to see who’s home (or not) by reading number-plates of cars
coming and going on a driveway. This means our Raspberry Pi is probably going to live
outside; therefore, many environmental constraints come into place. You’ll need USB 5 V
power to your Raspberry Pi and a mounting position suitable for reading plates, although the
software is surprisingly tolerant of angles and heights, so don’t worry too much if you can’t
get it perfectly aligned.
02 Get an enclosure
As your Raspberry Pi is going to live outside (unless you have a well-placed window),
you’ll need an appropriate enclosure. For a proper build, get an IP67 waterproof case (e.g.
magpi.cc/ip67kit). We’re opting for homemade, and are using a Raspberry Pi 3 A+ with the
RainBerry – a 3D-printable case that, once you add some rubber seals, provides adequate
protection. Make sure whatever you choose has a hole for the camera.
04 Install openALPR
Thankfully, we don’t need to be experts in machine learning and image processing to
implement ANPR. An open-source project, openALPR provides fast and accurate processing
just from a camera image. ‘ALPR’ is not a mistake: this US project is ‘Automatic License
Plate Recognition’. Thanks to APT, installation is straightforward. At the command line, enter
the following:
This may take a while, as many supporting packages need to be installed, such as Tesseract,
an open-source optical character recognition (OCR) tool. This, coupled with code that
identifies a number-plate, is what works the magic.
cd
wget http://plates.openalpr.com/ea7the.jpg
This is a sample USA plate image and a tough one too. Wget grabs the file from the web and
places it in your home directory. Let’s see if we can recognise it:
alpr -c us ea7the.jpg
All being well, you’ll see a report on screen. The most likely ‘match’ should be the same as the
file name: EA7THE.
Now test everything is working by running Python (type python) and enter the following code
line by line at the >>> prompt:
import json
from openalpr import Alpr
If you’ve not seen JSON-formatted text before, this might seem a bit much, but you should see
the correct plate number returned as the first result.
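To get a feel for that output, here is a stdlib-only sketch that picks the best match out of a report. The structure below is an abbreviated, illustrative version of openALPR’s JSON (a results list of plate/confidence entries), not a verbatim dump:

```python
import json

# Abbreviated, illustrative example of an openALPR report
raw = '''
{
  "results": [
    {"plate": "EA7THE", "confidence": 92.1},
    {"plate": "EA7TBE", "confidence": 78.4}
  ]
}
'''

def best_plate(report):
    """Return the highest-confidence plate, or None if nothing was found."""
    results = report.get("results", [])
    if not results:
        return None
    top = max(results, key=lambda r: r["confidence"])
    return top["plate"]

print(best_plate(json.loads(raw)))  # EA7THE
```

An empty results list means no plate was in view, so best_plate returns None rather than raising an error.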
08 Typing time
Now you have everything you need to create your ANPR application. Enter the code
listing shown here or download it from magpi.cc/cameragit11. Save it as anpr.py in your
home directory. Edit the file and enter your User and App tokens where prompted. Save
the file, then test by entering:
When a car arrives or leaves our driveway, we receive an alert in seconds
The code makes use of the Camera Module (or HQ Camera) and openALPR in tandem.
Every five seconds, the camera takes a picture which is passed to openALPR for analysis.
If a licence plate is found, we get the number. If there has been a change, an alert is sent to
Pushover, which is then forwarded to any registered mobile devices.
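The ‘if there has been a change’ decision is just a comparison against the previously seen plate. As a stand-alone sketch (plate_changed is an illustrative helper, not the listing’s exact code):

```python
def plate_changed(last_seen, current):
    """True when the plate in view differs from the previous check.

    current is None when no plate was detected (an empty driveway),
    so both arrivals and departures count as a change.
    """
    return current != last_seen

print(plate_changed(None, "ABC123"))      # True: a car has arrived
print(plate_changed("ABC123", "ABC123"))  # False: same car as before
print(plate_changed("ABC123", None))      # True: the car has left
```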
lookup = {
    "ABC123": "Steve McQueen",
    "ZXY123": "Lewis Hamilton"
}
lookup[number_plate]
Now you’ll get a friendly name instead. See if you can handle what happens if the plate isn’t
recognised.
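One way to tackle that exercise is dict.get() with the raw plate as the fallback, so an unknown plate no longer raises a KeyError. Assuming the same lookup table as above:

```python
lookup = {
    "ABC123": "Steve McQueen",
    "ZXY123": "Lewis Hamilton",
}

def friendly_name(number_plate):
    """Return a known owner's name, or the raw plate if unrecognised."""
    return lookup.get(number_plate, number_plate)

print(friendly_name("ABC123"))  # Steve McQueen
print(friendly_name("XXX999"))  # XXX999 (not in the table)
```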
10 Run on boot
A key part of any ‘hands-free’ Raspberry Pi installation is ensuring that in the event of
a power failure, the required services start up again. There are many ways of doing this; we’re
going to use one of the simpler methods.
Find the final line, exit 0 and enter the following on the line above:
Press CTRL+X then Y to save the file. Finally, run the earlier pip command again, using sudo
this time to install the libraries for the root user:
# Pushover settings
PUSHOVER_USER_KEY = "<REPLACE WITH USER KEY>"
PUSHOVER_APP_TOKEN = "<REPLACE WITH APP TOKEN>"

try:
    # Let's loop forever:
    while True:
        # Take a photo
        print('Taking a photo')
        camera.capture('/home/pi/latest.jpg')
        # If no results, no car!
        if len(analysis['results']) == 0:
            print('No number plate detected')
            last_seen = None
        else:
            last_seen = number_plate
except KeyboardInterrupt:
    print('Shutting down')
    alpr.unload()
Build a wildlife
camera trap
Uncover the goings-on in your garden, pond, or school
playground when no one’s looking with this easy-to-use
Raspberry Pi camera trap
Ever wondered what lurks at the bottom of your garden at night, or which furry
friends are visiting the school playground once all the children have gone home?
Using a Raspberry Pi and Camera Module (or HQ Camera), along with Google’s
Vision API, is a cheap but effective way to capture some excellent close-ups of foxes, birds,
mice, squirrels and badgers, and to tweet the results.
Using Google’s Vision API makes it really easy to get AI to classify our own images. We’ll
install and set up some motion detection, link to our Vision API, and then tweet the picture if
there’s a bird in it. It’s assumed you are using a new Raspbian installation on your Raspberry
Pi and you have your Raspberry Pi camera set up (see Chapter 1). You will also need a Twitter
account and a Google account to set up the APIs.
YOU’LL NEED
• Camera Module / HQ Camera
• Pi NoIR Camera Module (optional) magpi.cc/ircamera
• ZeroCam NightVision (optional) magpi.cc/zerocamnight
• Blu Tack, Sugru, elastic bands, carabiners
curl -L https://raw.github.com/pageauc/pi-timolo/master/source/pi-timolo-install.sh | bash
Once installed, test it by typing in cd ~/pi-timolo and then ./pi-timolo.py to run the
Python script. At this point, you should be alerted to any errors such as the camera not being
installed correctly, otherwise the script will run and you should see debug info in the Terminal
window. Check the pictures by waving your hand in front of the camera, then looking in
Pi-timolo > Media Recent > Motion. You may need to change the image size and orientation
of the camera; in the Terminal window, enter nano config.py and edit these variables:
imageWidth, imageHeight, and imageRotation.
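For instance, after editing, the relevant lines in config.py might look like this. The values here are purely illustrative; pick a size your camera supports and a rotation that matches how it is mounted:

```python
# config.py (excerpt) – illustrative values only
imageWidth = 1280
imageHeight = 720
imageRotation = 180   # flip if the camera is mounted upside-down
```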
While we’re here, if you get a lot of false positives, try changing the motionTrackMinArea
and motionTrackTrigLen variables and experiment with the values by increasing to reduce
sensitivity. See the Pi-timolo GitHub repo (magpi.cc/pitimologit) for more details.
There will also be some editing of the pi-timolo.py file, so don’t close the Terminal
window. Code needs to be added to import some Python libraries (ch12listing1.py), and
also added to the function userMotionCodeHere() to check with the Vision API before
tweeting (ch12listing2.py). We can do this now in preparation of setting up our Google and
Twitter API. You should still be in the Pi-timolo folder, so type nano pi-timolo.py and add
the imports at the top of the file. Next, press CTRL+W to use the search option to find the
userMotionCodeHere() function and where it’s called from. Add the new code into the function
(line 240), before the return line. Also locate where the function is being called from (line 1798),
to pass the image file name and path. Press CTRL+X then Y and ENTER to save. Next, we’ll set
up the APIs.
key to allow you to make calls to the API locally. Rename and move the JSON file into your
pi-timolo folder and make a note of the file path. Next, go back to pi-timolo.py and add the
line: os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path_to_your_.json_
credential_file" below import os to reference the credentials in your JSON file.
Finally, set up a Twitter account if you haven’t already and install Tweepy by entering sudo
pip install tweepy into a Terminal window. Once set up, visit apps.twitter.com and create
a new app, then click on Keys and Access Tokens. Edit the code in userMotionCodeHere()
with your own consumer and access info, labelled as ‘XXX’ in the code listing. Finally, place
your camera in front of your bird feeder and run ./pi-timolo.py. Any pictures taken of a bird
should now be tweeted! If you want to identify a different animal, change the line if "bird"
in tweetText: animalInPic = True.
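The same substitution generalises to any animal label. Here is a hedged sketch of that check as a small function, where tweet_text stands in for the label string the Vision API returned (the function name and samples are illustrative):

```python
def animal_in_pic(tweet_text, animal="bird"):
    """Return True when the target animal appears in the label text."""
    # Lower-case both sides so 'Bird' and 'bird' both match
    return animal.lower() in tweet_text.lower()

print(animal_in_pic("Bird, beak, wildlife"))       # True
print(animal_in_pic("Fox, mammal", animal="fox"))  # True
print(animal_in_pic("Garden, grass"))              # False
```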
Please note that although the API works well, it can’t always discern exactly what’s in
the picture, especially if partially in view. It also won’t distinguish between types of bird, but
you should have more success with mammals. You can test the API out with some of your
pictures at magpi.cc/visionai and visit twitter.com/pibirdbrain to see example tweets (scroll
down a bit). Good luck and happy tweeting!
ch12listing2.py / Python
# search for userMotionCodeHere.
# There will be 2 results,
# edit the second so you are passing filename to the function
userMotionCodeHere(filename)
image = types.Image(content=content)
# set up Tweepy
# consumer keys and access tokens, used for authorisation
consumer_key = 'XXX'
consumer_secret = 'XXX'
access_token = 'XXX'
access_token_secret = 'XXX'
return
There are plenty of underwater sports cameras available, but they can be quite
expensive, especially if you want to be able to control them remotely. In this
chapter we’re going to use readily available Raspberry Pi add-ons to make a cheaper,
customisable camera unit. There are lots of options and alternative sources of components
for a project like this. For example, the Pimoroni Enviro board (or earlier Enviro pHAT) can
report back information about the environment in which the camera is operating, especially
how much light is available.
YOU’LL NEED
• Camera Module / HQ Camera
• Transparent, waterproof box magpi.cc/waterproofcase
• Python Flask library
• WiFi dongle (if not using a Raspberry Pi model with built-in wireless LAN)
You can save space by using a LiPo battery (via a boost regulator) instead of a power bank
First, since the configuration files aren’t ready yet, turn off dnsmasq and hostapd:
Go to the end of the file and edit it so that it looks like the following (we're using the IP address
192.168.4.1, but you may want to choose a different one):
interface wlan0
static ip_address=192.168.4.1/24
nohook wpa_supplicant
Now restart the dhcpcd daemon and set up the new wlan0 configuration:
Next, you need to edit the /etc/dnsmasq.conf and /etc/hostapd/hostapd.conf files – see
magpi.cc/accesspoint for details – ensuring that the IP addresses are consistent with your
settings in /etc/dhcpcd.conf. Then reboot your Raspberry Pi.
The Enviro board’s library comes with some example programs; you should run some of these
to test that everything is working correctly.
Note: If you are not using an Enviro board, you will need to comment out some of the related
code in the main ch13listing1.py script. If using the Enviro pHAT instead, use the alternative
ch13listing2.py script in the GitHub repository (magpi.cc/cameragit13).
Then use the desktop File Manager to move the Flask folder within cameraCh13 to your
Raspberry Pi’s home directory (or use the mv command in a Terminal window).
To see the generated webpage from another computer, you just have to open a web browser
and enter your Raspberry Pi’s static IP address. Using the on-screen buttons, we can also
switch between recording modes (video or continuous still frames) or take photos on demand
– by selecting QuickSnap and then clicking the Take button. This control of the camera is
achieved via the picamera library, which is used for the three main functions – timelapse,
video, and snapstart – defined in our Python script. You could enhance the project by adding
additional exposure and shutter speed controls to your interface if you want.
Note: To see the latest image taken, press an on-screen button or reload the webpage.
ltr559 = LTR559()
bus = SMBus(1)
bme280 = BME280(i2c_dev=bus)
app = Flask(__name__)
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 1
cam.close()
print('exiting snapshot mode')
status = 'off'
global btn1
btn1 = 'o'
global btn2
btn2 = 'o'
message = 'All good '
print('BP:Take error')
status = 'Error'
message = 'Enable QuickSnap first'
btn1 = 'o'
else:
pass
if __name__ == "__main__":
Install a bird
box camera
Observe nesting birds without disturbing them
While it’s simple enough to set up a Camera Module in a weatherproof box to observe
wildlife in your
YOU’LL NEED
• Pi NoIR Camera Module
raspivid -t 0
You’ll notice that everything looks a little strange; this is because you’re looking at a
combination of visible light and infrared light. To test it out in darkness, turn the lights off, aim
02 Wire up an IR LED
We’ll need a suitable infrared light source in the bird box. In this example we’re using
a single IR LED, but alternatives include small IR lamps and the IR version of the LISIPAROI
(lisiparoi.com). Our 890 nm IR LED is an identical component to the ones found inside TV
remote controls; the only difference is that we’re going to keep it on constantly when shooting
video or stills in the bird box.
As usual, you should turn off your Raspberry Pi before connecting anything up. If you’ve
wired up an LED to Raspberry Pi’s GPIO pins before, then please note that this LED needs to
be done slightly differently. Since an infrared LED requires more current than the GPIO pins
can provide, it needs to be connected directly to the 5V supply of Raspberry Pi with a 220 Ω
resistor inline; without the resistor the current will be too high, and the LED will burn out after
about ten seconds.
Figure 1 The
Pi NoIR Camera
Module can see
in the dark with
infrared lighting
Figure 2 shows how the LED should be wired up. You’ll notice that the LED has two legs, one
slightly longer than the other. The longer of the two is called the anode and the shorter is the
cathode. The LED needs power to flow into the anode and out of the cathode; if you get the
polarity wrong then nothing will happen.
Use a couple of female-to-female jumper wires to make the following connections. Connect
the anode (long leg) to 5 V, which is the first pin on the outside row on the GPIO. Connect the
cathode (short leg) to the 220 Ω resistor. Connect the other side of the resistor to ground
(GND), which is the third pin in on the outside row of the GPIO.
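As a sanity check on that 220 Ω value, Ohm’s law gives the LED current. The 1.5 V forward voltage used below is an assumption (a typical figure for an IR LED; check your component’s datasheet):

```python
SUPPLY_V = 5.0        # Raspberry Pi 5 V pin
LED_VF = 1.5          # assumed IR LED forward voltage; datasheet values vary
RESISTOR_OHMS = 220

# Ohm's law: I = V / R, applied to the voltage left after the LED's drop
current_ma = (SUPPLY_V - LED_VF) / RESISTOR_OHMS * 1000
print(f"LED current: {current_ma:.1f} mA")  # LED current: 15.9 mA
```

Around 16 mA is comfortably within a standard IR LED’s rating, which is why the resistor keeps it from burning out.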
04 Adjust focus
By default, the Pi NoIR Camera Module has a fixed focal length of 50 cm and depth
of field of 50 cm to infinity. This means that objects will only appear in focus if they’re at least
50 cm away from the lens of the camera. The Gardman box we’re using in this example has an
interior height of 18 cm, so we’ll definitely need to shorten the focal length.
07 Test it again
Now reconnect your Raspberry Pi and test the focus once again. We recommend
connecting the camera flex coming from the back of the bird box to Raspberry Pi first. Then
connect the LED and resistor, followed by the screen, keyboard, and finally the power supply.
When testing this setup, it can be helpful to rest Raspberry Pi upside-down on the roof of the bird
box, but do whatever works best for you.
Boot up as usual and then start the video
preview with raspivid -t 0. With the
roof of the bird box closed, you should be
able to see the inside in black and white.
This shows that the infrared illumination is
working; you should even be able to cover
the hole and still see the inside. It will look
similar to Figure 7, but will be slightly more
zoomed in. This is because this image was
taken using the raspistill command and
not raspivid. If you can’t see anything at all, then it’s likely the LED is not wired
Figure 6 The IR LED is taped to the underside of the roof, not too close to the camera
black-on-white text into the bird box to verify the focus, such as a watch or business card.
Ensure that the text is in focus and readable; adjust the camera focus again as necessary
for the nest height. Press CTRL+C when you
Camera Module, you’ll need to disable its red LED that lights up whenever the camera is
on. Enter sudo nano /boot/config.txt and add the following line to the end of the file,
then reboot.
08 Weatherproof it
While you can attach your Raspberry Pi directly to the outside of the bird box, an
alternative is to use a longer camera cable. Either way, you’ll need to put Raspberry Pi inside a
weatherproof box. Preventing water getting into the bird box should also be a priority. The roof
could be sealed using silicone sealant, which is often used to seal the edges of windows and
bathroom sinks. Choosing a site which is beneath the overhang of an existing roof will help a
lot, so the bird box will not be rained on directly.
Lastly, you need to consider how you will get power and an internet connection to the bird
box. You could use a wireless USB dongle, or the built-in wireless LAN of a Raspberry Pi 3 /
3B+ / 4 / Zero W, but Ethernet is more reliable for streaming video, especially in built-up areas
that have a lot of wireless traffic.
09 Obtain images
With everything installed, connected, and powered up, you can SSH in to your
Raspberry Pi from another computer (see magpi.cc/ssh for details) to control it remotely.
You are then able to enter standard Terminal commands such as raspistill and raspivid
to obtain stills (including time-lapses – see
Chapter 3) and video footage. You could
also write one or more Python scripts
using the picamera library.
Note that you can’t view the live camera
preview via SSH. However, you are able to
live-stream video from the bird box. This
could be achieved using a client-server
setup, as described in Chapter 15, to pipe
the output to a video player on the client
computer. Alternatively, you could make
use of an internet video service offering
live streaming, such as YouTube (see magpi.cc/birdboxyt for details).
Figure 7 Make sure that the test object is raised up slightly and the text is in focus
Live-stream
video and stills
Stream video and regular stills to a remote computer
One of the drawbacks of using SSH or VNC to access your Camera Module- or
HQ Camera-equipped Raspberry Pi remotely from another computer is that you can’t
(typically) view the camera preview via these methods. To get around this, you’ll need to
stream live video across the network. While there are various methods available for doing
this, in this chapter we’ll show you how to create a client-server setup for video streaming
using the picamera Python library. We’ll also explore how to send a stream of stills over
the network.
YOU’LL NEED
• Camera Module / HQ Camera
• Remote computer
01 Server-side script
Note: If you are using a Linux-based computer for playback of the video stream,
there is an easier method, explained in Step 03.
First, we’ll write a Python server script, ch15listing1.py, for the remote computer that will
read the video stream (which we’ve yet to write the code to create) and pipe it to a media
player. Note that while you can use a Raspberry Pi 4 for the task of remote playback, earlier
Raspberry Pi models won't work in this role since the CPU is not powerful enough to do the
video decoding (and neither VLC nor MPlayer supports doing this using the GPU). Therefore
you may need to run this script on a faster machine, although even an Atom-powered
netbook should be quick enough for the task at non-HD resolutions.
After importing the libraries required at the top of the script, we start listening for
connections on 0.0.0.0:8000, i.e. all IP addresses on the local network. We then accept a
single connection and make a file-like object out of it.
In the try: block, we run a media player from the command line to view it – if you want to
use MPlayer instead of VLC, add a # to the start of the cmdline = ['vlc… line to comment
it out, and remove the # from the cmdline = ['mplayer… line.
The remote server script reads the video stream and pipes it to a media player, such as VLC
In the while True: loop, we repeatedly read 1kB of data from the connection and write it to
the selected media player’s stdin (standard input) to display it.
Note: If you run this script on Windows or macOS, you will probably need to provide a
complete path to the VLC or MPlayer executable/app. If you run the script on macOS, and
are using Python installed from MacPorts, please ensure you have also installed VLC or
MPlayer from MacPorts.
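The read-1 kB-and-forward loop at the heart of the server can be exercised without a camera at all. This stdlib-only sketch (not the book’s listing) streams bytes over a loopback socket and collects them the same way the server script feeds its media player; it uses an OS-assigned port rather than the scripts’ port 8000 so it can run anywhere:

```python
import socket
import threading

received = bytearray()

# Bind to a free port; the book's scripts use port 8000 instead
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def server():
    conn, _ = srv.accept()
    # Read the stream 1 kB at a time, as the playback loop does;
    # the real script writes each chunk to the media player's stdin
    while True:
        chunk = conn.recv(1024)
        if not chunk:
            break
        received.extend(chunk)
    conn.close()
    srv.close()

t = threading.Thread(target=server)
t.start()

# Client side: connect and send a stand-in for the H.264 stream
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"fake h264 frames " * 200)
client.close()
t.join()
print(len(received), "bytes arrived intact")
```

The same accept/read/close shape appears in ch15listing1.py, with the media player’s stdin in place of the received buffer.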
02 Client-side script
Now we’ll create a client script, ch15listing2.py, on our Raspberry Pi with the
Camera Module or HQ Camera equipped. This will connect to the network socket of our
server (playback) script to send a video stream to it.
After importing the required libraries at the top, we connect a client socket to
my_server:8000 – you’ll need to change my_server to the host name or IP address of your
server (the computer that will play back the stream). If you are using a Linux PC or Mac, just
type hostname -I in a Terminal window to find its IP address; in Windows, it’s the Computer Name
in Control Panel > System.
We then create a file-like object from the network socket before triggering the camera
to start recording. In this example we’re using a resolution of 640×480 with a frame rate of
24 fps, but you can adjust these numbers to your requirements. We’ve also set the camera
to record for 60 seconds with camera.wait_recording(60); again, you can change this
number to suit your preference.
Run the server script, then the client script. You should see the video stream played in
your chosen media player. You may notice some latency; this is normal and due to buffering
by the media player.
03 Quick streaming on Linux
As mentioned, if you’re using a Linux PC for playback of the video stream, there is a
much quicker and easier way to achieve what we’ve done in Steps 01 and 02. On the server
(playback) machine, enter the following command into a Terminal window:
nc -l 8000 | mplayer -fps 31 -cache 1024 -
Then, on the client – your Raspberry Pi with the Camera Module or HQ Camera – issue the
following command:
raspivid -w 640 -h 480 -t 60000 -o - | nc my_server 8000
…replacing my_server with the host name or IP address of the playback machine.
04 Switch it around
An alternative method is to reverse the direction so that Raspberry Pi acts as a
server. We can then get it to wait for a connection from the client before streaming video.
Enter the ch15listing3.py example on Raspberry Pi and run it.
The big advantage of this method is that you then only need to use a single command to
initiate playback on the remote computer:
vlc tcp/h264://my_pi_address:8000/
…replacing my_pi_address with your Raspberry Pi’s IP address (again, discovered using
hostname -I). Or, in VLC running on the desktop, go to File > Open Network and enter the
same address: tcp/h264://my_pi_address:8000/
05 Stream stills
Now let’s stream camera stills taken at regular intervals in a variation on a standard
time-lapse setup. Entered on a remote computer (which could be another Raspberry Pi), the
server script, ch15listing4.py, starts a socket to listen for a connection from your Raspberry
Pi with the camera. At the top, we import the required libraries; here we’re using PIL (you
can install it using sudo pip3 install pillow) to read JPEG files, but alternatives include
OpenCV and GraphicsMagick. The script then checks the image length and, if it is not zero,
constructs a stream to hold the image data and then reads it from the connection. The
image.show() command will open each image in the default image viewer: it can create a
lot of windows if left going for a while! Now to create a client script…
ch15listing2.py / Python 3
import socket
import time
import picamera
ch15listing3.py / Python 3
import socket
import time
import picamera

camera = picamera.PiCamera()
camera.resolution = (640, 480)
camera.framerate = 24

server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('wb')
try:
    camera.start_recording(connection, format='h264')
    camera.wait_recording(60)
    camera.stop_recording()
finally:
    connection.close()
    server_socket.close()
ch15listing4.py / Python 3
import io
import socket
import struct
from time import sleep
from PIL import Image
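The printed listing above shows only the imports. A sketch of the remainder, following the image-streaming recipe in the picamera documentation (the read_image() and serve() helper names are ours; Pillow is only needed to decode and display each JPEG), could look like this:

```python
import io
import socket
import struct

def read_image(connection):
    # Read the 4-byte little-endian length prefix; zero signals end of stream
    image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
    if not image_len:
        return None
    # Construct a stream to hold the image data and read it from the connection
    image_stream = io.BytesIO()
    image_stream.write(connection.read(image_len))
    image_stream.seek(0)
    return image_stream

def serve(port=8000):
    from PIL import Image   # Pillow is only needed here, to decode each JPEG
    server_socket = socket.socket()
    server_socket.bind(('0.0.0.0', port))
    server_socket.listen(0)
    connection = server_socket.accept()[0].makefile('rb')
    try:
        while True:
            stream = read_image(connection)
            if stream is None:
                break
            image = Image.open(stream)
            image.show()   # opens each image in the default viewer
    finally:
        connection.close()
        server_socket.close()
```

Call serve() at the bottom of the script to start listening for the client.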
ch15listing5.py / Python 3
import io
import socket
import struct
import time
import picamera

client_socket = socket.socket()
client_socket.connect(('my_server', 8000))
connection = client_socket.makefile('wb')
try:
    camera = picamera.PiCamera()
    camera.resolution = (640, 480)
    # Start a preview and let the camera warm up for 2 seconds
    camera.start_preview()
    time.sleep(2)

    start = time.time()
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, 'jpeg'):
        # Write the length of the capture and flush to ensure it gets sent
        connection.write(struct.pack('<L', stream.tell()))
        connection.flush()
        # Rewind the stream and send the image data over the wire
        stream.seek(0)
        connection.write(stream.read())
        # If we've been capturing for more than 30 seconds, quit
        if time.time() - start > 30:
            break
        # Reset the stream for the next capture
        stream.seek(0)
        stream.truncate()
    # Write a length of zero to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
Set up a security camera
Protect your home using motionEyeOS
The specialist motionEyeOS distro turns your Raspberry Pi and Camera Module or
HQ Camera into a fully fledged security camera that can stream a live view, detect
motion, and capture video and stills. In this chapter we’ll show you how to install it,
get started using it, and even send custom push notifications to your phone when
motion is detected!
YOU’LL NEED
• Camera Module / HQ Camera
• motionEyeOS
• Pushover app
• Remote computer
01 Install motionEyeOS
motionEyeOS is a Linux distribution that turns a single-board computer into a video
surveillance system. To see the list of supported devices and download the relevant distro
image, go to magpi.cc/motioneyeoslist. Note that there are four different versions available
for Raspberry Pi, so make sure you download the correct one for your model.
With the image downloaded, you can write it to a microSD card in a similar fashion to
Raspbian, by using the Raspberry Pi Imager tool (magpi.cc/imager) for example. However,
motionEyeOS’s creator provides a write utility for Linux and macOS. The advantage of using
this is that you can preconfigure the wireless network connection so that you don’t have to
connect Raspberry Pi to your router via Ethernet at first. This is particularly useful if you are
using a Raspberry Pi Zero, which lacks an Ethernet port. To download the utility and make it
executable, enter the following commands in a Terminal window:
curl -O https://raw.githubusercontent.com/ccrisan/motioneyeos/master/writeimage.sh
chmod 775 writeimage.sh
To write to the microSD card and preconfigure the wireless connection, use:
sudo ./writeimage.sh -i motioneyeos.img -d /dev/sdX -n 'your_network:your_password'
…replacing motioneyeos.img with the path of your downloaded image and /dev/sdX with your microSD card’s device name.
02 Configure WiFi manually
Alternatively, you can preconfigure the wireless connection by creating a wpa_supplicant.conf file with the following content, inserting your own network details:
update_config=1
ctrl_interface=/var/run/wpa_supplicant
network={
scan_ssid=1
ssid="your_network"
psk="your_password"
}
You will need to turn the file into an executable with chmod +x wpa_supplicant.conf
before moving it to the boot partition of your microSD card (alongside start.elf etc.). Note:
From version 20190119, WiFi configuration will be read every time the device boots.
03 Remote access
Insert the microSD card into your Raspberry Pi and boot it. There is no need to
attach it to a monitor as it won’t show much and it’s intended to be run headless. Assuming
you’ve preconfigured the wireless connection, it should connect to the router after a couple
of minutes. If you have any problems, check the wireless details you entered; if your router
broadcasts 2.4GHz and 5GHz under the same SSID, you may need to split it into separate SSIDs to
get a connection. If you’re still having trouble, you can connect to the router via an Ethernet
cable and set up a wireless connection from the remote web interface later.
Either way, to find Raspberry Pi’s IP address, just visit your router’s homepage (e.g.
192.168.1.254) and view the list of attached devices; your Raspberry Pi will appear as
meye- followed by a hex number. Enter the IP address for it in a web browser on a remote
computer. You will be presented with a login screen: just enter the default admin username
without a password. You can add a password later, as well as a standard user.
04 Camera features
Once you’re logged in, you will be able to see the live view from the camera, which
you can also expand. Open the options menu on the left (the icon is three horizontal
lines) to access numerous options; change Layout Columns to 1 to enlarge the standard
camera view.
Click the ‘switch user’ icon near the top left and enter the username ‘admin’ with no
password to reveal a host of extra options. These include camera settings such as video
resolution and rotation. You can also adjust motion detection settings and options for
capturing stills and movies, which can be viewed via the icons shown on the camera view
after you click on it. The Motion Notifications panel enables you to send yourself an email
whenever motion is detected, or call a web hook, or run a command. This last option is what
we’ll be using for our custom notifications using the Pushover service.
05 Push notifications
To send push notifications to your phone, install the Pushover app and sign up for an account at pushover.net to obtain a user key; then register an application there to get an API token. Next, connect to your Raspberry Pi via SSH and enter the following to create a script in the data folder:
cd /data
nano ch16listing1.py
Once here, you’ll need to type in the code listing, while also including your API token and
user key where required. As with any script, we need to make sure it can be executed,
otherwise it’s nothing more than a fancy collection of text! From the command line, make
sure you’re in the data folder and then type:
chmod +x ch16listing1.py
In the Motion Notifications menu, set Run A Command to the path of your script (i.e. /data/ch16listing1.py).
ch16listing1.py / Python 2
#!/usr/bin/env python
import httplib, urllib
conn = httplib.HTTPSConnection("api.pushover.net:443")
conn.request("POST", "/1/messages.json",
    urllib.urlencode({
        "token": "APP_TOKEN",    # Insert app token here
        "user": "USER_TOKEN",    # Insert user token here
        "html": "1",             # 1 for HTML, 0 to disable
        "title": "Motion Detected!",            # Title of the message
        "message": "<b>Front Door</b> camera!", # Content of the message
        "url": "http://IP.ADD.RE.SS",           # Link to be included in message
        "url_title": "View live stream",        # Text for the link
        "sound": "siren",                       # Define the sound played
    }), { "Content-type": "application/x-www-form-urlencoded" })
conn.getresponse()
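The listing uses Python 2’s httplib and urllib modules, which is what motionEyeOS provides. If you want to try the same request from a Python 3 machine, an equivalent sketch might look like this (the function names and placeholder tokens are ours):

```python
import http.client
import urllib.parse

def build_body(params):
    # Pushover expects a form-encoded POST body
    return urllib.parse.urlencode(params)

def send_notification(token, user, title, message):
    # POST to the Pushover messages API and return the HTTP status code
    conn = http.client.HTTPSConnection("api.pushover.net", 443)
    conn.request("POST", "/1/messages.json",
                 build_body({
                     "token": token,      # application API token
                     "user": user,        # user key
                     "title": title,
                     "message": message,
                     "sound": "siren",
                 }),
                 {"Content-type": "application/x-www-form-urlencoded"})
    return conn.getresponse().status
```

For example, send_notification("APP_TOKEN", "USER_TOKEN", "Motion Detected!", "Front Door camera!") should return 200 on success.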
Quick reference
To help you get to grips with your HQ Camera or
Camera Module, here’s a handy reference guide to the
hardware, commands, and picamera Python library
The first thing to note about the Raspberry Pi HQ Camera and Camera Module is that
they both feature a rolling shutter: when capturing an image, the camera reads out the
pixels from the sensor one row at a time. Unlike a DSLR camera, it also lacks a physical
shutter that covers the sensor when not in use.
In addition, the HQ Camera or Camera Module acts more like a video camera than a
stills camera, as it is rarely idle. Once initialised, it is constantly streaming frames
down the ribbon cable to Raspberry Pi for processing, while background tasks continually
adjust the gain, exposure time, and white balance. That’s why it’s best to give the camera a
couple of seconds or more once activated, to settle its exposure levels and gains before
capturing an image.
For more details on how the camera hardware works, see the picamera documentation
at magpi.cc/cameradoc.
FIXED FOCUS
Unlike the HQ Camera, the Camera Module has a fixed-focus lens with a depth of field of
50 cm to infinity. This means that objects will only appear in focus if they’re at least 50 cm away from
the camera lens. However, it is possible to alter this by using a focus adjustment tool – or fine
tweezers – to unscrew the lens slightly in order to shorten the focusing distance. See Chapter 14 Step 04
for more details.
Image modes
Sensor resolutions: Camera Module v1, 2592 × 1944 pixels (5MP); Camera Module v2,
3280 × 2464 pixels (8MP); HQ Camera, 4056 × 3040 pixels (12.3MP), with further modes at
2028 × 1520, 2028 × 1080, and 1012 × 760.
[The mode tables for the HQ Camera, Camera Module v1, and Camera Module v2 – listing
each mode’s resolution, aspect ratio, frame rates, video/image support, field of view, and
binning/scaling – are not reproduced here.]
* Video recording is limited to a maximum 1080p (1920 × 1080) resolution on all camera models.
Common options
When using raspistill or raspivid from the command line, you have access to an array
of useful switches to change numerous parameters….
No preview window
--nopreview or -n
Disables the preview window completely. Note that even though the preview is disabled, the
camera will still be producing frames, so it will be using power.
Preview opacity
--opacity or -op
Sets the opacity of the preview window; 0 = invisible, 255 = fully opaque.
Image width
--width or -w
Sets the width of the resulting image.
Image height
--height or -h
Sets the height of the resulting image. Up to 1944 (Camera Module v1), 2464 (CM v2), or 3040
(HQ Camera) – the upper limit for video footage is 1080.
Image rotation
--rotation or -rot (0 to 359)
Sets the rotation of the preview and saved image. Note that only 0, 90, 180, and 270 degree
rotations are supported (other values are rounded down).
Horizontal flip
--hflip or -hf
Flips the preview and saved image horizontally.
Vertical flip
--vflip or -vf
Flips the preview and saved image vertically. Note: Using -hf and -vf together is equivalent to
a 180° rotation.
Output to file
--output or -o
Specifies the output file name. If this is not specified, no file is saved. If the file name is ‘-’, then
all output is sent to stdout, which is handy when using another application that expects image
or video data through a standard input.
Timeout
--timeout or -t
The program will run for this length of time, specified in milliseconds; the default is five
seconds. With raspistill, a capture is taken at the end of this period if an output is specified.
If using raspivid, this is the length of the recording.
Verbose information
--verbose or -v
Outputs verbose debugging information during the run.
Sharpness
--sharpness or -sh (-100 to 100)
Sets the sharpness of the image. 0 is the default.
Contrast
--contrast or -co (-100 to 100)
Sets the contrast of the image. 0 is the default.
Saturation
--saturation or -sa (-100 to 100)
Sets the colour saturation of the image. 0 is the default.
ISO
--ISO or -ISO (100 to 800)
Sets the ISO to be used for captures. In effect, this adjusts the light sensitivity of the sensor.
EV compensation
--ev or -ev (-10 to 10)
Sets the EV compensation of the image. Default is 0.
Exposure mode
--exposure or -ex
Sets the exposure mode to any of: auto, night, nightpreview, backlight, spotlight,
sports, snow, beach, verylong (long exposure), fixedfps (for video only), antishake,
or fireworks. Not all of these settings may be implemented, depending on camera tuning.
Image effect
--imxfx or -ifx
Sets an effect to be applied to the image. Choose from the following: none, negative,
solarise, posterise, sketch, denoise, emboss, oilpaint, hatch, gpen (graphite sketch
effect), pastel, watercolour, film, blur, saturation (colour saturate the image),
colourswap, washedout, colourpoint, colourbalance, or cartoon.
Colour effect
--colfx or -cfx (U:V)
The supplied U and V parameters (range 0 to 255) are applied to the U and V (colour) channels
of the image. For example, --colfx 128:128 will result in a monochrome image.
Demo mode
--demo or -d
Cycles through the range of camera options. No capture is taken, and the demo will end at the
end of the timeout period. The time between cycles should be specified in milliseconds.
Metering mode
--metering or -mm
Specifies the metering mode used for the preview and capture. Choose from: average, spot,
backlit, or matrix.
Shutter speed
--shutter or -ss
Sets the shutter speed to the specified value (in microseconds). The upper limit is around
6000000 µs (6 s) for CM v1; 10000000 µs (10 s) for CM v2; 200000000 µs (200 s) for HQ Camera.
Image statistics
--stats or -st
This displays the exposure, analogue and digital gains, and AWB settings used.
AWB gains
--awbgains or -awbg
Sets red and blue gains (as floating point numbers) to be applied when -awb off is set. For
instance, -awbg 1.5,1.2.
Annotate
--annotate or -a
Adds text and/or metadata to the image. Metadata is selected using a bitmask value;
--annotateex or -ae additionally sets the text size and the foreground and background colours.
-a 4 Time 20:09:33
-a 8 Date 02/14/17
-a 16 Shutter Settings
-a 32 CAF Settings
-a 64 Gain Settings
Examples:
-ae 32,0xff,0x808000 -a "Text" gives size 32 white text on black background.
TWO CAMERAS
Since the Raspberry Pi has only one CSI connector for a camera, the use of two cameras is only
possible with a Compute Module (see magpi.cc/cmtwocameras). In this case, the following
commands may be used for stereoscopic images and video.
--stereo or -3d (sbs or tb)
Selects the stereoscopic mode: side-by-side (sbs) or top-bottom (tb).
--decimate or -dec
Halves the width and height of the stereoscopic image.
Photo options
The following options are only available when using the raspistill command (and most of
them also when using raspiyuv).
Time-lapse mode
--timelapse or -tl
The value specified is the time between shots in milliseconds. Note that you should specify
%04d at the point in the file name where you want a frame count number to appear. So, for
example, the following code will produce a capture every two seconds, over a total period of
30 seconds, named image0001.jpg, image0002.jpg and so on, through to image0015.jpg:
raspistill -t 30000 -tl 2000 -o image%04d.jpg
If a time-lapse value of 0 is entered, the application will take pictures as fast as possible.
Note that there’s a minimum enforced pause of 30 ms between captures to ensure that
exposure calculations can be made.
Raw data
--raw or -r
Adds raw Bayer data to JPEG metadata.
Thumbnail parameters
--thumb or -th (x:y:quality)
Allows specification of the thumbnail image inserted into the JPEG file. If not specified,
defaults are a size of 64×48 at quality 35. If --thumb none is specified, no thumbnail
information will be placed in the file; this reduces the file size slightly.
EXIF tag
--exif or -x (format as ‘key=value’)
Allows the insertion of specific EXIF tags into the JPEG image. You can have up to 32
EXIF tag entries. This is useful for tasks like adding GPS metadata. For example, to set the
longitude to 5 degrees, 10 minutes, 15 seconds, use:
--exif GPS.GPSLongitude=5/1,10/1,15/1
See EXIF documentation for more details on the range of tags. Setting --exif none will
prevent any EXIF information being stored in the file; this reduces the file size slightly.
Keypress mode
--keypress or -k
The camera is run for the requested time (-t), and a capture can be initiated throughout
that time by pressing the ENTER key. If you are using raspivid, this will pause or resume
shooting video.
Pressing X then ENTER will exit the application before the timeout is reached. If the
timeout is set to 0, the camera will run indefinitely until exited.
With raspivid, the timeout value will be used to signal the end of recording, but is only
checked after each ENTER keypress.
Signal mode
--signal or -s
The camera is run for the requested time (-t), and a capture can be initiated throughout that
time by sending a USR1 signal to the camera process; or, with raspivid, it toggles between
paused and recording. This can be done using the kill command:
kill -USR1 $(pgrep raspistill)
Burst mode
--burst or -bm
Enables burst capture mode, to capture a sequence of images (using time-lapse, -tl)
without switching back to preview mode between them. This helps to prevent dropped
frames when using a short delay.
RASPIYUV OPTIONS
The raspiyuv command uses most of the same options as raspistill. Unsupported ones are
--exif, --encoding, --thumb, --raw, and --quality.
One extra option is --rgb or -rgb. This forces the image to be saved as RGB data with 8 bits per
channel, rather than YUV420.
Note that the image buffers saved in raspiyuv are padded to a horizontal size divisible by 32, so there
may be unused bytes at the end of each line. Buffers are also padded vertically to be divisible by 16,
and in the YUV mode, each plane of Y,U,V is padded in this way.
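Those padding rules mean the size of a raw YUV420 buffer can be worked out in advance. A quick sketch (the helper name is ours, for illustration):

```python
def yuv420_buffer_size(width, height):
    # Pad the width up to a multiple of 32 and the height to a multiple of 16
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    # YUV420 stores a full-resolution Y plane plus quarter-resolution U and V
    # planes, i.e. 1.5 bytes per padded pixel
    return fwidth * fheight * 3 // 2

print(yuv420_buffer_size(1920, 1080))  # 1920 × 1088 × 1.5 = 3133440 bytes
print(yuv420_buffer_size(100, 100))    # padded to 128 × 112 = 21504 bytes
```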
Bitrate
--bitrate or -b
Sets the bitrate for the video, in bits per second; so 10Mbits/s would be -b 10000000.
For H.264 at 1080p30, a high-quality bitrate would be 15Mbits/s or more. The maximum
bitrate is 25Mbits/s (-b 25000000), but anything much over 17Mbits/s won’t show a
noticeable improvement at 1080p30.
Frame rate
--framerate or -fps
Specifies the frames per second to record. This varies depending on the camera mode used.
The maximum is 90 fps, when using a resolution of 640 × 480. See the Camera Hardware
section for more details.
Video stabilisation
--vstab or -vs
Turns on video stabilisation, which attempts to account for camera shake when it is moving.
Quantisation
--qp or -qp
Sets the initial quantisation parameter for the stream. Varies from approximately 10 to 40,
and will greatly affect the quality of the recording. Higher values reduce quality and decrease
file size. Combine this setting with a bitrate of 0 to set a completely variable bitrate.
H.264 profile
--profile or -pf
Sets the H.264 profile to be used for the encoding. Options are: baseline, main, or high.
Timed toggle
--timed or -td (on,off)
Toggles between recording and pausing at the specified on and off intervals, in milliseconds.
For example:
raspivid -o video.h264 -t 25000 -td 2500,5000
…will record for a period of 25 seconds. The recording will be over a time frame consisting
of 2500 ms (2.5 s) segments with 5000 ms (5 s) gaps, repeating over the 25 s. So the entire
recording will actually be only ten seconds long.
Segment stream
--segment or -sg
Rather than creating a single file, the file is split into segments of approximately the number
of milliseconds specified. In order to provide different file names, you should add %04d or
similar at the point in the file name where you want a segment count number to appear.
For example:
raspivid -o video%04d.h264 -t 30000 -sg 3000
…will produce video clips of approximately 3000 ms (3 s) long, named video0001.h264,
video0002.h264 etc.
Circular buffer
--circular or -c
Runs encoded data through circular buffer until triggered, then saves.
Flush buffers
--flush or -fl
Flushes buffers in order to decrease latency.
Timestamps
--save-pts or -pts
Saves timestamps to file for mkvmerge.
Codec
--codec or -cd
Specifies the codec to use: H264 (default) or MJPEG.
H.264 level
--level or -lev
Specifies H.264 level to use for encoding: 4, 4.1, or 4.2.
Raw video
--raw or -r
Outputs raw video to file when used with -o.
Raw format
--raw-format or -rf
Specifies output format for raw video: yuv, rgb, or gray.
PiCamera class
Here are some of the most commonly used methods and options of the PiCamera class.
start_preview(**options)
Displays the preview overlay. Options include fullscreen (True or False), window (x,y,w,h for
position and size), layer, and alpha.
stop_preview()
Hides the preview overlay.
wait_recording(timeout=0, splitter_port=1)
Waits for the number of seconds specified in timeout while recording continues. This method
is recommended over the standard time.sleep(), since it checks for errors during recording
and will immediately raise an exception if one occurs.
stop_recording(splitter_port=1)
Stops recording video from the camera. The optional splitter_port parameter specifies
which port of the video splitter the encoder you wish to stop is attached to. Valid values are 0
to 3 (default 1).
remove_overlay(overlay)
Removes a static overlay from the preview output. The overlay parameter specifies the
PiRenderer instance that was returned by add_overlay().
request_key_frame(splitter_port=1)
Requests the encoder, running on the specified splitter_port, to generate a key-frame (full-
image frame) as soon as possible.
analog_gain
Retrieves the current analogue gain of the camera.
annotate_text
Retrieves or sets a text annotation for all output.
awb_gains
Gets or sets the auto-white-balance gains of the camera, as a tuple (red, blue) – values are
between 0.0 and 8.0. This attribute only has an effect when awb_mode is set to 'off'.
awb_mode
Retrieves or sets the auto-white-balance mode of the camera. Possible values are:
'off', 'auto' (default), 'sunlight', 'cloudy', 'shade', 'tungsten', 'fluorescent',
'incandescent', 'flash', or 'horizon'.
brightness
Retrieves or sets the brightness setting of the camera, as an integer between 0 and 100
(default 50).
color_effects
Retrieves or sets the current colour effect applied by the camera, as a (u, v) tuple – values
are between 0 and 255. When set to (128, 128), it results in a black and white image.
contrast
Retrieves or sets the contrast setting of the camera, as an integer between -100 and 100
(default 0).
digital_gain
Retrieves the current digital gain of the camera.
drc_strength
Retrieves or sets the dynamic range compression strength of the camera. Valid values are:
'off' (default), 'low', 'medium', or 'high'.
exposure_compensation
Retrieves or sets the exposure compensation level of the camera, as an integer between -25
and 25 (default 0). Each increment represents 1/6th of a stop.
exposure_mode
Retrieves or sets the exposure mode of the camera. Valid values are: 'off', 'auto' (default),
'night', 'nightpreview', 'backlight', 'spotlight', 'sports', 'snow', 'beach',
'verylong', 'fixedfps', 'antishake', or 'fireworks'.
flash_mode
Retrieves or sets the flash mode of the camera. Valid values are: 'off' (default), 'auto',
'on', 'redeye', 'fillin', or 'torch'.
Note: You must define which GPIO pin the camera is to use for flash (and optional privacy
indicator). This is done within the device tree configuration, as detailed in Chapter 7.
frame
Retrieves information about the current frame recorded from the camera.
framerate
Retrieves or sets the frame rate at which video-port based image captures, video recordings,
and previews will run. It can be specified as an int, float, or fraction. The default is 30.
Note: The actual sensor frame rate and resolution used by the camera is influenced – but not
directly set – by this property.
hflip
Retrieves or sets whether the camera’s output is horizontally flipped. Default is False.
image_denoise
Retrieves or sets whether denoise will be applied to image captures. Default is True.
image_effect
Retrieves or sets the current image effect applied by the camera. Valid values are: 'none'
(default), 'negative', 'solarize', 'sketch', 'denoise', 'emboss', 'oilpaint',
'hatch', 'gpen', 'pastel', 'watercolor', 'film', 'blur', 'saturation', 'colorswap',
'washedout', 'posterise', 'colorpoint', 'colorbalance', 'cartoon', 'deinterlace1',
or 'deinterlace2'.
image_effect_params
Retrieves or sets the parameters for the current effect, as a tuple of numeric values up to six
elements long.
iso
Retrieves or sets the apparent ISO setting of the camera, which represents its sensitivity
to light. Lower values tend to produce less ‘noisy’ images, but operate poorly in low light
conditions. Valid values are: 0 (auto), 100, 200, 320, 400, 500, 640, or 800.
led
Sets the state of the camera’s LED (CM v1 only) via GPIO. If the RPi.GPIO library is available
and the Python process is run as root via sudo, this property can be used to set the state of
the camera’s LED as a Boolean value (True is on, False is off).
Note: This doesn’t work on the Raspberry Pi 3 or 4, due to a GPIO reconfiguration.
meter_mode
Retrieves or sets the metering mode of the camera. Valid values are: 'average' (default),
'spot', 'backlit', or 'matrix'.
recording
Returns True if the start_recording() method has been called, and no stop_recording()
call has been made yet.
resolution
Retrieves or sets the resolution at which image captures, video recordings, and previews will
be captured. It can be specified as a (width, height) tuple, a string formatted ‘WIDTHxHEIGHT’,
or as a string containing a commonly recognised display resolution name (e.g. 'VGA', 'HD',
‘1080p’, etc). The camera must not be closed, and no recording must be active when the
property is set.
rotation
Retrieves or sets the current rotation of the camera’s image. Valid values are: 0 (default), 90,
180, and 270.
saturation
Retrieves or sets the saturation setting of the camera, as an integer between -100 and 100
(default 0).
sensor_mode
Retrieves or sets the input mode of the camera’s sensor. By default, mode 0 is used, which
allows the camera to automatically select an input mode based on the requested resolution
and frame rate. Valid values are currently between 0 and 7. See the Camera Hardware section
for more details on modes.
sharpness
Retrieves or sets the sharpness setting of the camera as an integer between -100 and 100
(default 0).
timestamp
Retrieves the system time according to the camera firmware.
vflip
Retrieves or sets whether the camera’s output is vertically flipped. The default value is False.
video_denoise
Retrieves or sets whether denoise will be applied to video recordings. The default value
is True.
video_stabilization
Retrieves or sets the video stabilisation mode of the camera. The default value is False.
Note: The built-in video stabilisation only accounts for vertical and horizontal motion,
not rotation.
zoom
Retrieves or sets the zoom applied to the camera’s input, as a tuple (x, y, w, h) of floating point
values ranging from 0.0 to 1.0, indicating the proportion of the image to include in the output
(the ‘region of interest’). The default value is (0.0, 0.0, 1.0, 1.0), which indicates that everything
should be included.
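To see how those proportions map onto pixels, here is an illustrative helper (ours, not part of picamera):

```python
def zoom_to_pixels(zoom, resolution):
    # Convert an (x, y, w, h) tuple of proportions into a pixel region
    x, y, w, h = zoom
    width, height = resolution
    return (round(x * width), round(y * height),
            round(w * width), round(h * height))

# The default zoom includes the whole frame
print(zoom_to_pixels((0.0, 0.0, 1.0, 1.0), (1920, 1080)))    # (0, 0, 1920, 1080)
# Zooming into the centre quarter of the frame
print(zoom_to_pixels((0.25, 0.25, 0.5, 0.5), (1920, 1080)))  # (480, 270, 960, 540)
```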