Kodi And NextPVR Setup On The Raspberry Pi

As with everything in Pi land, following the “just install Kodi” guides left a few things that didn’t work. Extra commands and steps so far are:

  • Install the NextPVR client for Kodi on the command line with sudo apt-get install kodi-pvr-nextpvr
  • Install some other library that services like the iPlayer need with sudo apt-get install kodi-inputstream-adaptive
  • Restart Kodi
  • Install the Amazon Video plugin following the instructions here: https://www.videoconverterfactory.com/kodi/amazon-prime-video-on-kodi.html
  • Edit /boot/config.txt and add these two lines to keep HDMI on (the second line forces audio):
    hdmi_force_hotplug=1
    hdmi_drive=2
  • More instructions here that fixed permissions, especially with sound and crashing on exit: https://forums.raspberrypi.com/viewtopic.php?f=66&t=109088&start=475#p936238

Despite following about fifteen different methods I still haven’t gotten Kodi to autostart on reboot.
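For the record, the most commonly suggested shape is a systemd unit along these lines (the user and the binary path are assumptions, and this is among the things that hasn't worked for me yet):

```ini
# /etc/systemd/system/kodi.service
# Enable with: sudo systemctl enable kodi.service
[Unit]
Description=Kodi standalone
After=network-online.target

[Service]
User=pi
ExecStart=/usr/bin/kodi-standalone
Restart=on-abort

[Install]
WantedBy=multi-user.target
```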

Setting Up The Hauppauge DualTV Tuner On A Raspberry Pi

I’ve been thinking about replacing my old PVR with a Raspberry Pi based one. Setup so far has been a bit bumpy, so just in case this helps future me: this page has a lot of info, including the drivers.

The problem I had was that the tuner seemed to work on Windows but was picking up no channels on the Pi. Eventually some searching led me to the command “dmesg” on the terminal, and all kinds of red rows of “firmware not found”. Downloading and copying the missing files into /lib/firmware and rebooting did the trick.

The other thing was to find out which my nearest transmitter might be. I guessed at Crystal Palace, because I’ve seen the aerial up there… But this website gives a more authoritative answer if you enter your postcode: https://www.freeview.co.uk/help/what-transmitter-receive-signal

Next is to try to install Kodi (I think).

Detecting Pico-W LiPo Shim Battery Charge Level

I’ve got a couple of Pimoroni’s excellent LiPo SHIM for Pico. When soldered to a Pico they allow a LiPo battery to be connected, or USB power, and if the latter then the battery is charged. There’s also a power button, which is a bonus.

I wanted to programmatically determine charge level, which it appears is done by reading a voltage that diminishes as the battery level reduces, and found a seemingly ready-made script here.

Except it didn’t seem to work, even when I removed all of the code relating to the display and just logged to a file using this library.

Firstly, it didn’t detect if the device was connected to USB power. And secondly, it didn’t read voltage at all.

More searching and this post on the Pimoroni forum pointed to a solution. Apparently the WiFi on the Pico-W interferes with the voltage reader so some tweaking to Pin inputs/outputs needs to be made.

I had to set the output of the Pin to “high” on every voltage read and haven’t looked too deeply into whether there’s a better solution, because this Micropython works for my purposes:

from machine import ADC, Pin
import time
import logging


while True:
    spi_output = Pin(25, Pin.OUT)
    spi_output.value(True)
    vsys = ADC(29)                      # reads the system input voltage
    charging = Pin("WL_GPIO2", Pin.IN)  # GP24 on the Pico, WL_GPIO2 on the Pico-W: tells us whether USB power is connected
    conversion_factor = 3 * 3.3 / 65535

    full_battery = 4.2                  # reference voltages for a full/empty battery, in volts;
    empty_battery = 2.8                 # these could vary by battery size/manufacturer so you might need to adjust them
    # convert the raw ADC read into a voltage, and then a percentage
    voltage = vsys.read_u16() * conversion_factor
    percentage = 100 * ((voltage - empty_battery) / (full_battery - empty_battery))
    if percentage > 100:
        percentage = 100.00
    if charging.value() == 1:         # if it's plugged into USB power...
        logging.info("Charging!")
        
    else:                             # if not, display the battery stats
        logging.info('{:.2f}'.format(voltage) + "v")
        logging.info('{:.0f}%'.format(percentage))

    time.sleep(0.5)

I haven’t calibrated it to the battery I’m using to work out whether 4.2 and 2.8 is the correct min/max range, but this is a start.
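For what it’s worth, the voltage-to-percentage step can be pulled out into a plain function so the clamping and reference voltages are testable on their own (same assumed 4.2/2.8 figures as above):

```python
FULL_BATTERY = 4.2   # volts; assumed full-charge voltage, uncalibrated
EMPTY_BATTERY = 2.8  # volts; assumed empty cut-off, uncalibrated

def battery_percentage(voltage, full=FULL_BATTERY, empty=EMPTY_BATTERY):
    """Map a VSYS voltage reading onto a 0-100 scale, clamping
    out-of-range readings rather than returning e.g. 103%."""
    percentage = 100 * (voltage - empty) / (full - empty)
    return max(0.0, min(100.0, percentage))
```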

Moist; Moister; Moisterest (aka Three Raspberry Pico Moisture Sensors Tested)

Gardening is not my thing at all. I can do houseplants, because they’re right in front of me, wilting away as they beg for water. But anything that’s out of sight may as well not exist. And, by the end of summer, they generally don’t, and the only sign that they ever might have done is a few dry twigs.

So, of course, I do what all programmers do when faced with a simple but mundane task that would only take a few minutes a week: I spend days trying to automate it.

Ideally I’d automate the watering, but that’s too ambitious for me right now. The baby steps version is some kind of simple alert that tells me when the plants in the shed are about to expire. Because it’s what I know, I’ll be using a Raspberry Pico, and at some point I’ll send data to an API on a bigger machine, and then there’ll be a separate project to display relevant info.

In my mind I don’t need to monitor every pot out there, but ideally I’d have three or four pots monitored by a single Pico, and in the end that would be powered by a battery that’s kept topped up by a solar panel.

Without much planning I dove into moisture sensors as the first step. I had four things in my mind:

  1. Could I get them to work?
  2. Could I interpret the results?
  3. How much power do they need?
  4. How many can I attach to one Pico?

First up was the one that looked the easiest, based mostly on the fact that it’s the most expensive.

The Monk Makes Plant Monitor

A Monk Makes plant monitor

I’ve already played around with this when testing different ways of measuring temperature using the Pico, because as well as soil moisture content it also does temperature and humidity. That’s three things in one, but then it also costs more than three other mid-range moisture sensors.

In fact, it’s generally a clever piece of kit, as it does all the hard work of turning analog information (such as a moisture scale) into digital output. That means it can work with microcontrollers that don’t have analog to digital converters (ADCs), although it’s slightly redundant with the Pico, which has ADCs of its own.

Getting it working was one of the simplest things so far, with a nice library and a very compact example that uses it. Moisture level is returned as an integer between 1 and 100 and it’s up to you to work out what that means in terms of whether a plant needs more water or not, although there’s also a light on the sensor itself that goes from green, through to orange and red, depending on what it thinks of the current situation.

It was easy to use, but I have a suspicion it uses a fair amount of power (in Pico terms), not least because there are LEDs shining all the time. It also appears that because of the sophisticated way it communicates with the Pico (through UART) you can only have one per device.

The Waveshare Moisture Sensor

A Waveshare soil moisture sensor

I’m beginning to learn that whatever it is you might want to do with a Pico, Waveshare has something that will do it. It might not be top of the range, and the documentation might only just make sense, but it will generally a) work and b) be cheap.

Costing roughly a third of the Monk Makes sensor, they are also a lot simpler. It also appears they use an out-of-favour technique to measure moisture, passing a current between the two prongs: better conductivity means moister soil. This sounds fine, but the current corrodes the prongs over time. That’s not a problem for the Monk Makes sensor, which is capacitive rather than resistive like the Waveshare ones. A write-up of the differences between resistive and capacitive is here.

The code was simple, though, with demo code available to download from their website. Basically, it makes use of the analog to digital convertor on the Pico, and just like the Monk Makes sensor you get an integer back which is a kind of “moistness percent”.

Without all the library imports, the important part is:

analog_value = machine.ADC(sensor_pin_number)
reading = analog_value.read_u16()
wetness = reading / 65535 * 100

It means it has to be plugged into one of the three ADC pins on the Pico (26, 27 or 28).
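That scaling can be wrapped in a small helper which also allows calibrating against real readings taken in dry soil and in water; the dry_raw/wet_raw defaults below are my own assumption, not Waveshare’s figures:

```python
def raw_to_wetness(raw, dry_raw=0, wet_raw=65535):
    """Scale a 16-bit ADC reading to a 0-100 'moistness percent'.
    dry_raw/wet_raw can be replaced with readings taken in dry soil
    and in water for a rough calibration; results are clamped."""
    wetness = (raw - dry_raw) / (wet_raw - dry_raw) * 100
    return max(0.0, min(100.0, wetness))
```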

Summary, though, was that it was easy, seemed to work, and cheap.

Grow Moisture Sensor

Three Grow soil moisture sensors

Somewhere between the price of Monk Makes and Waveshare was the Grow sensor. (The picture above shows three together, which is how they come, for about the same price as the Monk Makes device.)

The Grow sensor is intended to be used as part of a more fully-fledged ecosystem, but the docs said it could be used standalone so, like a fool, I believed them.

To be fair, it just took a bit of searching to find a standalone Python example. Plus some soldering as I hadn’t bought the right cable…

Functionally, the Grow sensor is capacitive, like the Monk Makes, but to send data back to the Pico it uses yet another technique: Pulse Width Modulation.

I’d used this a bit as an OUT signal to change the brightness of some LEDs, but I had no idea how I’d use it with an IN signal. More searching and a small amount of code provided the answer:

last_value = sensor_pin.value()
start = time.ticks_ms()
first = None
last = None
ticks = 0
while ticks < 10 and time.ticks_diff(time.ticks_ms(), start) <= 1000:
    value = sensor_pin.value()
    if last_value != value:
        if first == None:
            first = time.ticks_ms()
        last = time.ticks_ms()
        ticks += 1
        last_value = value

if not first or not last:
    wetness = 0.0
else:
    # calculate the average time between transitions in ms
    average = time.ticks_diff(last, first) / ticks
    # scale the result to a 0...100 range where 0 is very dry
    # and 100 is standing in water
    # (clamped between 20ms and 80ms per transition)
    min_ms = 20
    max_ms = 80
    average = max(min_ms, min(max_ms, average)) # clamp range
    scaled = ((average - min_ms) / (max_ms - min_ms)) * 100
    wetness = round(scaled, 2)

The sensor_pin above is a machine.Pin object.

The good thing about this is that it can use any pin, not just the three ADC ones available. Again, the figure returned is a number between 0 and 100, and it’s down to interpretation as to what’s considered too dry or too wet.
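The scaling step at the end of that loop can be pulled out into a pure function for testing (same 20ms/80ms constants as above):

```python
def transitions_to_wetness(average_ms, min_ms=20, max_ms=80):
    """Scale the average milliseconds between PWM transitions to a
    0-100 wetness figure, clamped to the dry/wet bounds."""
    average_ms = max(min_ms, min(max_ms, average_ms))
    return round((average_ms - min_ms) / (max_ms - min_ms) * 100, 2)
```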

And The Moistest Winner?

The Monk Makes sensor is the most sophisticated, but the limitation of one per Pico, plus my assumptions about power consumption, rule this out. On the plus side it’s very easy to use, looks robust, and you get bonus info in terms of temperature and humidity.

At the other end, the Waveshare sensors are the cheapest, and they do work. The downsides are a) being resistive, which apparently means they could only have a lifespan of a year or two, and b) you can only have three per Pico. I might not need more than three, but I think that and the corrosion tips the balance.

The Grow sensor, then, was the surprise winner. I say “surprise” because it started as the least promising as I tried to find out how to interface with it outside of the rest of the Grow ecosystem. But when it does work, it does everything: it’s capacitive, the number you can have on one microcontroller is really only limited by pins, and it also looks and feels like a nice product.

Some Comparison Output Data

Finally, I plugged all three into the same Pico and put them all in the same plant pot, close together near the edge of the pot. The numbers that came back are all in approximately the same range. This is in a large pot that was watered a few days ago so should be fairly evenly moist:

Monk Makes: 29
Waveshare: 27
Grow: 24

Not too far apart, and I think as a general rule I’ll take it that anything under 20 from any sensor means that watering is required.

Now to work out how to run them all from solar so I can put them in the shed…

LCDs And Pico Memory Management

It may not be quite true to say modern computers have unlimited processing power and memory, but the high performance and huge storage space I’ve gotten used to does make things a lot easier.

Yet one of the fun things about programming for the Pico is in working within tight constraints. The Pico is advertised as “2MB of Flash memory”, which isn’t much at all by modern standards. And for many purposes that figure doesn’t really represent usable memory, as “Flash memory” is where the firmware sits, and hence isn’t readily available for programmers to use.

The important figure for the Pico is 256kb “on-chip SRAM”, which is effectively the space available both for storing program code and for running it. The latter is important and very different to developing on most devices (or most devices that I’ve worked on) where processor memory and storage memory are generally separate.

The limitation hit me hard with the next little project I started on: trying to display live tube arrivals on a tiny LCD screen.

Displaying Live Tube Arrivals

I ran into a few memory issues making the e-ink weather display and had to go through my code and eliminate some pretty lazy mistakes I’d made whilst learning the platform.

But that’s nothing compared to what I’ve had to learn on the most recent mini-project which, on the face of it, seemed very simple: to call the TFL (Transport for London) API and display the next few Central Line departures from our local Tube station.

I’d bought a Waveshare 1.3 inch LCD display with something else in mind, but I wanted to do something a little less ambitious first, and “query an API and display some numbers” seemed simple enough. Au contraire, as I was to learn.

Before I go into the problems, this is what the result looks like:

A Waveshare 1.3inch LCD showing the times of the next four Westbound and Eastbound Central Line departures from

The buttons and the little toggle do nothing at the moment; the whole app is just a display that refreshes every 60 seconds.

In terms of how it works, it’s three or four main steps, depending on how you look at it:

  1. Initialise Wifi, set datetime etc
  2. Make an API call to retrieve Central Line status. (“Good Service” as shown above.)
  3. Make an API call to retrieve arrivals at a single (hardcoded) Tube station
  4. Draw to the screen

Did it work? Did it heck.

Refactoring For Memory Management

The problems came due to the requirements of the Waveshare LCD.

It has a display size of 240×240 pixels and 65k colours, which means that each pixel on the screen requires two bytes of memory. Remember that with the Pico there’s no such thing as separate graphics storage so all of this has to be done within the 256kb of on-chip RAM. A 240×240 display has 57,600 pixels, and if each pixel requires 2 bytes then 115,200 bytes of our available memory has to be allocated to screen display.

More precisely: 115,200 bytes of memory needs to be available at the time the screen buffer is created. This is just under half the available memory, but then of the 256kb RAM in the Pico, actually only about 200kb (or a little under) ever seems to be usable.

Therefore close to 60% of available memory needs to be free just to draw some text and rectangles to a tiny 1.3 inch LCD.
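That arithmetic, as a quick sketch (the 200kb “usable” figure is the rough estimate mentioned above):

```python
def framebuffer_bytes(width, height, bytes_per_pixel=2):
    """Memory needed for a 65k-colour (RGB565) framebuffer:
    two bytes per pixel."""
    return width * height * bytes_per_pixel

USABLE_RAM = 200 * 1024                     # rough "actually usable" figure

buffer_size = framebuffer_bytes(240, 240)   # 115,200 bytes
share = buffer_size / USABLE_RAM            # a little over half
```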

Basically, as soon as I tried to run the first end-to-end version of my code, I ran into all kinds of memory errors. In particular, I got network errors when making the API call, and after some digging it turned out that memory errors in the urequests module often manifest as network errors in the output, or things like:

OSError: (-29312, 'MBEDTLS_ERR_SSL_CONN_EOF')

which does not look like a memory error to me, but lack of memory was the cause.

The reason for this was that memory for the screen was being initialised early in the code, as a byte array:

super().__init__(bytearray(height * width * 2), width, height, framebuf.RGB565)

Straight away this allocates 115,200 bytes of memory that will be used for the screen, which unfortunately left very little for the API calls that followed.

I went down some semi-productive rabbit-holes that resulted in me freezing a lot of my helper modules into the Pico’s firmware. It wasn’t as hard as I thought it would be, once I stopped trying to build the firmware on Windows, and it meant that a few 10s of k of program were now going to be run from inside the Pico’s secret 2MB Flash memory and not take up valuable on-board memory space.

It still didn’t work, of course. Code never does.

The next thing was to move things around in the code. This got the API calls working, but then screen initialisation failed.

I added some debugging code using the built-in garbage collector, which every Pico programmer must become familiar with, and cleared some run-time memory by running inline gc.collect() calls. Coupled with a bit of code refactoring to help the garbage collector do its job, and switching the API calls from https to http (yes, I know, but it worked), this freed up quite a bit of memory.

Yet it still failed, despite my debug output showing a whopping(!) 178,464 bytes of free memory right before I try to initialise the screen – which in theory only needs 2/3rds of that.

I needed 115,200 bytes, and I had 178,464 to spare, so why wasn’t it working?

Going Mad With Memory Fragmentation

I had to go further down the rabbit hole and learn about memory fragmentation. Adding a few more debug lines made it certain that fragmentation was the problem.

micropython.mem_info()

is the line of code to add and, indeed, it revealed that although I had plenty of free memory, it was “scattered” all over the 256kb available.

Memory fragmentation is not something many programmers, especially lazy web programmers like me, have needed to deal with for decades. So what is it?

The best analogy I can think of is to consider the Pico’s memory like a bookshelf. Every time Micropython needs some memory to store something, whether an object, a string, or something big like the JSON results from an API call, it looks for the first available space on the shelf in which it can fit the object. (I’m assuming it’s the first space; even if that’s not technically true it seems a viable mental model.)

Slowly the shelf fills up, adding objects big or small from left to right as they arrive, and if you keep adding things eventually you’ll run out of shelf space. That’s what had happened in the first un-optimised version of my code.

Sometimes shelf space gets cleared when an object is no longer needed. Micropython will try to manage this space itself through automatic garbage collection. When memory usage reaches a certain threshold it will look through the shelf and throw away anything that it’s sure it doesn’t need. Things like strings that were declared inside a method call and then never used again, or the raw results from an API call that have since been processed.

Running gc.collect() forces this garbage collection process to run, meaning that even if Micropython doesn’t think it’s running low on memory you can force it to sort its bookshelf out. You’ll likely do this because you know you’re about to ask it to store something big and you want to free the maximum amount of space.

The problem is that it only really does half a job of sorting its bookshelf out, because although it will clear out unwanted items it leaves gaps behind in the process.

Let’s say there are three books on the shelf and you ask the garbage collector to decide which ones to keep. The book on the left is the complete works of Dickens in a single volume and nobody’s ever going to read it, so Micropython throws it out (or takes it to the charity shop if the idea of throwing a book away quite rightly offends you). Eight inches of shelf space have been freed for future use!

The book to the right of it might be a little paperback, but it’s very popular so it will have to stay there.

To the right of that are the complete works of Shakespeare, again in a single volume. I’m not sure many people read those things outside of school, so out it goes! Another ten inches of shelf space cleared!

So by clearing out eight inch and ten inch tomes we’ve now got eighteen inches of shelf space free. That should be enough for all of Game of Thrones (if for some reason it were published in a single volume needing a whopping twelve inches of free space).

Easy, right? But not so fast: we don’t have a twelve inch space where it can fit. We have eighteen inches of space, which is more than enough, but that little paperback is sitting right in the middle of it. Wouldn’t it be ideal if Micropython could move that paperback to the left, or to the right, so it could fit Game of Thrones in? It would be great if it did, but unfortunately it won’t.

The problem is that by adding a popular paperback in exactly the wrong place we’ve introduced fragmentation, and as far as I can tell when you have it you can’t get rid of it. (Unless you can work out how to throw the paperback away without upsetting anyone.)
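The bookshelf can even be sketched as a toy first-fit allocator; this is purely illustrative and not how Micropython’s heap actually works:

```python
def first_fit(free_gaps, size):
    """Return the index of the first free gap that can hold `size`,
    or None if no single gap is big enough (fragmentation!)."""
    for i, gap in enumerate(free_gaps):
        if gap >= size:
            return i
    return None

# After throwing out Dickens (8") and Shakespeare (10"), the paperback
# left between them splits the free space into two separate gaps:
free_gaps = [8, 10]
total_free = sum(free_gaps)      # 18" free in total...
slot = first_fit(free_gaps, 12)  # ...but nowhere for a 12" volume
```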

In terms of code, what I’d written did the following:

  1. Allocate space for the first API call
  2. Garbage collect, which took us almost back to as good a state as the start
  3. Allocate space for the second API call
  4. Process the API call and return the results
  5. Garbage collect again. What this did was remove the API call data but the space just after it was now taken up by the results of processing that data. This was my small, popular paperback
  6. Try to allocate a large contiguous space for the display. Just like trying to fit Game of Thrones on the shelf, I had plenty of space, but it wouldn’t fit in the space before the API results, and it wouldn’t fit in the space after the API results either. I had plenty of space, but I had something blocking right in the middle

What To Do About Fragmentation

If I had a gripe about the Pico/Micropython, it’s that it requires rather tight memory management without offering the tools to do it.

In an ideal world, I would be able to specify where objects are stored at the time that they’re created. Imagine if I could specify to stack small books from the right hand of the shelf and large books that are often recycled from the left. I’d be much less likely to be short of space due to a small book occupying a large empty space in the middle of the shelf. Maybe I could do that if I got into the firmware myself, but I’m not that clever.

The Micropython docs encourage allocating space for large items up-front, before fragmentation can occur, for precisely that reason. (The docs also contain plenty of other useful tips.) In this scenario my screen’s bytearray would be allocated as part of initialisation, as I first had it in my code. The problem is that the API calls then don’t have enough space to run, because I’ve effectively taken 115,200 bytes of memory away from the Pico permanently. My problem with declaring the bytearray first is that I do have enough memory to achieve what I want; I just don’t have enough if I try to allocate it all at the same time.
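In code, the up-front approach the docs suggest looks something like this (the buffer size matches the 1.3 inch screen above):

```python
import gc

gc.collect()                              # tidy the heap before the big ask
SCREEN_BUFFER = bytearray(240 * 240 * 2)  # 115,200 contiguous bytes, reserved for good
# ...everything that follows (WiFi, API calls) must fit in what's left
```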

Managing fragmentation requires some quite fine tuning of the code. In my case, I created dictionary objects and empty strings before I made the second API call, effectively forcing the paperback onto the shelf before either Dickens or Shakespeare took up all the space.

The problem with this approach is that memory is allocated in all kinds of places that can be hard to spot, such as in string concatenation. If, for instance, I want to build a custom dictionary as the result of processing an API call, then I create strings every time I create a dictionary key. These could potentially fragment memory.

I wish I could force Micropython to move everything along the shelf and consolidate free space, or that I could tell it exactly where to put things as it needs to stack them, but I can’t. So as it stands, if I need a big chunk of memory for something I have to be very careful about how I use that memory in the run-up to needing it.

What Did I Learn?

I learnt quite a few specific things:

  1. The order that things are declared in in code can matter as much as the size of them
  2. HTTP urequest calls seem less demanding on the system than HTTPS
  3. Running gc.collect() after a urequest call is more-or-less mandatory in order to reclaim memory in the most efficient way possible, even if the results of the call are still available. This is because it seems garbage can be created inside Micropython modules (which seems obvious when you think about it but wasn’t to me)
  4. Putting things in small, discrete functions, so that variables have very limited scope, is not only good practice overall but makes garbage collection much easier

What Next?

As I said above, I hadn’t intended this display to be used for this purpose, but I wanted a quick win. It turned out to be anything but quick, but then that seems to be the way with microcontrollers.

What I’ve learned has changed my opinion about what to do with it next. To be useful this really needs a bigger display, such as this 2 inch LCD by Waveshare. The problem with this is that at 320×240 pixels it requires 33% more memory, or a whopping 153,600 bytes. That’s over 75% of all the memory the Pico has available, and I feel that the 115,200 bytes I’m already allocating is taking me close enough to the limit anyway. Although I’ve tried allocating the 153,600 bytes at the same stage in the code and it seems to work… But still: it might be fragile.

Knowing that, the 1.8 inch display looks a better prospect. It’s just about big enough as a screen, and only .2 of an inch smaller than the bigger display, although it would be nice to have some buttons built in like the 2 inch display does. But at a 160×128 resolution which, on the face of it, seems like a major step down, it only needs 40,960 bytes of memory for the screen display, a little over a third of what the 1.3 inch screen needs. Maybe less really is more.

That adds a fourth item to “What did I learn?”: that displaying anything on a colour screen with a Pico is a really big deal.

Measuring Temperature With The Raspberry Pico

Measuring temperature with the Pico is one of the easiest “quick wins” for getting familiar with the platform. Along with turning the onboard LED on and off, you can achieve something that feels like interaction with the real world with nothing more than a few lines of Micropython due to the Pico’s onboard temperature sensor.

A quick search for some example code brought up:

import machine
import utime
 
sensor_temp = machine.ADC(4)
conversion_factor = 3.3 / (65535)
 
while True:
    reading = sensor_temp.read_u16() * conversion_factor 
    temperature = 27 - (reading - 0.706)/0.001721
    print(temperature)
    utime.sleep(2)

It works by measuring differences in internal voltage that vary by temperature. A better explanation, and the reason for all the maths, can be found on the same page as the code.
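Pulled out as a pure function for tinkering (the constants are exactly those in the snippet above: the RP2040’s sensor reads 0.706v at 27°C and the voltage drops by 1.721mV per degree):

```python
CONVERSION_FACTOR = 3.3 / 65535  # assumes a 3.3v reference, which is the crux

def pico_temperature(raw_u16):
    """Convert a raw 16-bit ADC(4) reading into degrees C using the
    datasheet formula: 0.706V at 27 degC, -1.721mV per degC."""
    voltage = raw_u16 * CONVERSION_FACTOR
    return 27 - (voltage - 0.706) / 0.001721
```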

So far so good, until I looked at the output, which was a couple of degrees warmer than the Hive radiator thermostat that I was sitting right next to. It wasn’t beyond the realms of possibility that Hive was wrong, or was measuring a very localised temperature (although I had no reason to think so).

Some more searching raised doubts about the accuracy of the Pico, and the variability between different boards. I tried a couple of others and got similar results, all around the same range, and all warmer than Hive was telling me. I was more inclined to believe the Hive thermostat anyway, especially as it didn’t feel all that warm at the time.

A bit more searching and it appears that voltage variability can be a big factor in these calculations. Basically the conversion_factor above takes the Pico voltage at 3.3v, which it more-or-less is, but if this varies by even the smallest amount then the temperature can be out by a couple of degrees.

It seemed likely that I’d found my culprit, especially since I had the Pico plugged into a USB hub connected by USB-C to the PC, which was turning AC into DC voltage anyway. There were a number of places where a few millivolts could get lost.

A Temperature Measurement Comparison

I decided to experiment with some other measurement systems, including:

  • A very basic analog temperature sensor, the TMP36
  • A TMP235 analog temperature sensor
  • A Monk Makes plant monitor, which has its own analog to digital convertor onboard (and does things other than just temperature)

They look like this respectively:

A wooden thermometer

I can’t tell if there’s much difference in principle between the first two TMP sensors, beyond one being on a board, but they were both about the same price.

And for comparison I have a wooden thermometer, which I hope should be accurate enough. Plus I still have the trusty Hive radiator thermostat.

To cut a long story short, I ended up with all three components connected to the Pico at once, outputting a temperature on a polled basis. They were very simple to connect up and find code examples for. The wooden thermometer would just have to sit on its own.

And the results? These are averaged over 10 readings to smooth out voltage fluctuations after leaving running for a few minutes:

Hive radiator valve: 21.1°C
Wooden thermometer: just over 21°C
Pico onboard sensor: 21.9°C
TMP36: 23.0°C
TMP235: 23.4°C
Monk Makes plant monitor: 22.6°C

Maybe I’m too willing to accept an answer that happens to be believable, but to my eyes the fact that the thermometer agrees with the professionally constructed Hive radiator valve points to the four Pico measurements all being out by some degree. Reading a few more posts on forums certainly seems to indicate that none of these temperature sensors are going to be really accurate with a variable input voltage, especially since the Pico and the two TMP models explicitly need the 3.3v figure in the temperature calculation.

Changing The Temperature Calculation Formula

For all three of the others (the onboard sensor and the two TMPs) the voltage figure is in the temperature calculation equation. If the real voltage is less than 3.3v then the output temperature will show as higher than actual. For the TMP36 the equation looks like this:

adc_value = adc2.read_u16()
volt = adc_value * (3.3 / 65535)
temperature = 100 * volt - 50

Changing the 3.3 figure to something a little lower has a substantial effect on temperature. adc_value is generally quite low and therefore the voltages are quite low – somewhere between .5v (which would be 0°C), and .8v at 30°C. In the equation above a voltage change of just .01V equates to a temperature variance of 1°C.
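To see that sensitivity concretely, here’s the same conversion as a function taking the reference voltage as a parameter; the raw reading below is made up purely for illustration:

```python
def tmp36_temperature(raw_u16, vref=3.3):
    """TMP36-style conversion: 10mV per degree C with a 500mV offset
    at 0 degC. `vref` is the assumed ADC reference voltage."""
    volt = raw_u16 * (vref / 65535)
    return 100 * volt - 50

# a hypothetical mid-range reading: dropping vref from 3.3 to 3.21
# shifts the computed temperature by nearly two degrees
raw = 14000
delta = tmp36_temperature(raw, 3.3) - tmp36_temperature(raw, 3.21)
```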

For the purposes of “calibration” I’m going to assume that the only thing I need to change is the value of the input voltage. And hence if I amend the above code for the TMP36 sensor to:

adc_value = adc2.read_u16()
volt = adc_value * (3.21 / 65535)
temperature = 100 * volt - 50

then I start to get about the same temperature as Hive and the thermometer are telling me, to within a tenth of a degree anyway. Similarly, 3.20v seems to work for the TMP235.

For the Monk Makes sensor I just scale down the temperature it tells me, by 0.925.

Next Steps

The results all seem to be plausible now, but it’s not very scientific. Plus, I’ve only tested it at a very narrow temperature range. If I want to put sensors in a greenhouse I’ll need to check that the linear scale assumptions I’ve made above still hold.

Secondly, I’m assuming that it’s the voltage that’s the culprit. I have a multimeter ordered so I can test what’s actually being passed to the sensors and that’ll hopefully tell me if my estimate of 3.2v is correct or if the inaccuracy is somewhere else in the system.

Thirdly, this has been powered by a USB hub, which is much noisier than a smooth DC power source (like a battery). I did a quick experiment with a USB battery power pack and got similar readings, but there’s no real way to know if that’s outputting exactly 5v or not.

The good news? The cheap and cheerful TMP36 seems to be just as reliable (or unreliable) as any of the other options.

Playing Simple Sounds With The Pico And Adafruit STEMMA Speaker

I’m starting to get more confident with putting components together with the Pico, so without too much research I bought a simple speaker/amplifier breakout board. Still, given that every time I try something simple on the Pico it turns out to involve hours of Googling, I assumed that getting a speaker to work would be more fiddly than it should be.

But, au contraire, it turned out to be remarkably easy. Just three cables and not even a dozen lines of code (stolen by Googling, of course…) produced a simple note:

A Raspberry Pico connected to an Adafruit STEMMA Speaker by three crocodile clips

With this code:

from machine import Pin, PWM
from utime import sleep

SPEAKER_PIN = 21

# create a Pulse Width Modulation object on this pin
speaker = PWM(Pin(SPEAKER_PIN))
speaker.freq(1000)       # 1kHz tone
speaker.duty_u16(32768)  # 50% duty cycle: on half the time, off half the time
sleep(1)                 # play for a second
speaker.duty_u16(0)      # a zero duty cycle turns the PWM output off

Despite having done nothing except copy and paste, I was proud of the beep that came out of the small speaker.

The next challenge: to play an audio file, of any format. Back to Google I went, and after some dead ends (due to Micropython changing over the previous five years) I settled on a library called PicoAudioPWM by Daniel Perron.

A bit of MP3 to WAV mangling later, and this code produced a very ropey sound:

from machine import Pin
import wave
from wavePlayer import wavePlayer

try:
    player = wavePlayer()
    player.play('clip.wav')
    player.stop()
finally:
    # drive the pin low afterwards, otherwise the speaker hisses constantly
    Pin(2, Pin.OUT, Pin.PULL_DOWN)

I changed the output pin from 21 in the simple beep experiment to 2 here, as that’s the default the library uses.

For a reason that I expect is specific to the speaker, I had to add a Pin PULL_DOWN after playback was complete; otherwise the audio was followed by constant hissing from the speaker, even after the program had stopped running.

The sound quality was pretty awful, but hey-ho: at least I got noise out of the thing.

Now to go and read about what “Pulse Width Modulation” might be…
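For future me, the short version: PWM switches the pin fully on and off very fast, and the fraction of the time it spends on (the duty cycle) sets the average output level. With duty_u16 that fraction is out of 65535, so the average voltage is easy to work out, sketched here in plain Python (assuming the Pico’s 3.3V GPIO supply):

```python
# Average voltage a PWM pin produces for a given duty_u16 value,
# assuming a 3.3V supply like the Pico's GPIO pins
def pwm_average_volts(duty_u16, supply=3.3):
    return supply * duty_u16 / 65535

# A 50% duty cycle (32768 out of 65535) averages out to about half the supply
half = pwm_average_volts(32768)
```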

The Arducam Mini Camera OV2640 2MP Plus And The Pico

An Arducam Mini Camera OV2640 2MP Plus

I’m starting to realise that “compatible with the Pico” might have as much meaning as “sure it’s possible to ride a tiger”: “compatible” is not the same as “conducive” in the same way that “possible” is not the same as “advisable”.

Perhaps I was boosted by the “ease” with which I got the Pico and e-ink display working, but for my next project I have my sights on a wildlife camera for the garden. I realise there are plenty on the market, but they all seem to require a row of batteries, or need charging every few days, or a mains power cable. All of these are possible, but for me not ideal.

My ideal wildlife camera would be charged from solar (which may or may not be possible) and would automatically upload images to a server. Both these things mean that once I put it up somewhere I’ll never have to touch it again. (Hah!)

There seemed to be no reason why a Pico W, motion sensor and camera couldn’t do most of the work, and then I’d assess the viability of solar power later. Battery power, as I said, is not ideal, but not a deal-breaker. I’d also need to make sure that the Pico could still connect to the Wifi from out in the garden, because I didn’t want to have to store images locally.

Knowing that I had to take small steps, I decided to get the camera part working first. I bought a 2MP camera that claims compatibility with the Pico, the Arducam Mini OV2640 2MP Plus, and instantly hit the first hurdle when I tried to find code to run it with.

To cut a long story short, I found I needed to be using CircuitPython (what’s that?) instead of Micropython. That proved to be a non-issue, although slightly annoying as I was just getting used to the libraries available in Micropython.

The official example worked, once I’d followed the instructions to the letter, but all it did was display the output of the camera in a bespoke Windows app. At least it told me that the camera was wired up properly and the code on the Pico was working, but it didn’t give me much of a clue as to how to get images into a state where I could do something with them, preferably something like a JPG file, but at least a byte array.

More Googling brought back some clues, and eventually I had something that appeared to be reading from the camera. Except it was hard to tell what I was reading, as there was nowhere to save the images. CircuitPython mounts the Pico’s memory as an external drive, and that locks out any attempts in code to write to it. It’s possible to change it to writeable mode, but then it’s a pain to change it back again to put more code onto it.

In my simple little mind I’d thought I could solve the problem by POSTing the output of the camera to a lightweight web server somewhere. Except the problem of base64 encoding bits reared its head. No problem, I thought: installing a base64 encoding module will fix that. Except, even a 7kb JPG caused an out of memory error when I tried to encode it. Base64 encoding seemed to be right out.
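(With hindsight, the memory problem is avoidable: base64 can be produced in chunks, as long as each raw chunk is a multiple of 3 bytes, so the whole image never has to sit in RAM at once. A sketch in plain Python of the approach I didn’t end up taking:)

```python
import base64

def b64_chunks(data, chunk_size=768):
    """Yield base64 pieces of data without encoding it all in one go.

    chunk_size must be a multiple of 3 so the pieces concatenate cleanly.
    """
    for i in range(0, len(data), chunk_size):
        yield base64.b64encode(data[i:i + chunk_size])

# Joining the pieces gives the same result as encoding in one pass
payload = bytes(range(256)) * 10   # stand-in for a small JPG
assert b"".join(b64_chunks(payload)) == base64.b64encode(payload)
```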

Like Inception, I needed to go deeper, so I dug into the Sockets documentation, both for Python and CircuitPython. Half a day later I was getting somewhere, managing to pipe JPG pixels straight from the camera to a tiny socket connection on my PC and save them to the hard drive. The problem was that only about one in four images seemed to be valid.

Another long story cut short: just after the code captures the image I’ve added

time.sleep(1)

and that seems to have “fixed” it. Le shrug.

For posterity, here is the v1 of “click run and it’ll send a picture somewhere” code, with Wifi and IP details placeholdered, and using the Arducam library:

import time
import os
import board
import digitalio
import wifi
import socketpool
import gc

from Arducam import *

gc.enable()

led = digitalio.DigitalInOut(board.LED)
led.direction = digitalio.Direction.OUTPUT
led.value = False

wifi.radio.connect(ssid='SSID', password='WIFI PASSWORD')

mycam = ArducamClass(OV2640)
mycam.Camera_Detection()
mycam.Spi_Test()
mycam.Camera_Init()
time.sleep(1)
mycam.clear_fifo_flag()
mycam.OV2640_set_JPEG_size(OV2640_1600x1200)

def get_still(mycam):
    mycam.flush_fifo()
    mycam.clear_fifo_flag()
    mycam.start_capture()
    once_number = 1024
    buffer=bytearray(once_number)
    count = 0
    finished = 0
    
    time.sleep(1)

    pool = socketpool.SocketPool(wifi.radio)
    s = pool.socket(pool.AF_INET, pool.SOCK_STREAM)
    s.connect(("SERVER IP ADDRESS", SERVER PORT))
    
    length = mycam.read_fifo_length()
    mycam.SPI_CS_LOW()
    mycam.set_fifo_burst()
    
    while finished == 0:
        mycam.spi.readinto(buffer, start=0, end=once_number)
        count += once_number
        s.send(buffer)
        
        if count + once_number > length:
            count = length - count
            mycam.spi.readinto(buffer, start=0, end=count)
            s.send(buffer)
            mycam.SPI_CS_HIGH()
            finished = 1
    s.close()
    gc.collect()

get_still(mycam)

And the Python server code to receive and save the image with consecutive numbered file names, in the most basic POC (piece of crap?) form possible:

from os import curdir
from os.path import join as pjoin
import socket

HOST = IP ADDRESS  # IP of this server
PORT = PORT NUMBER  # Port to listen on
photo_num = 0

# bind once, then accept connections in a loop: one file per connection
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    while True:
        conn, addr = s.accept()
        with conn:
            with open(pjoin(curdir, 'photo_{0}.jpg'.format(photo_num)), 'ab+') as fh:
                while True:
                    part = conn.recv(1024)
                    if len(part) <= 0:
                        break
                    fh.write(part)
            conn.sendall(b"Complete")
        photo_num = photo_num + 1

So far so feasible at least, so the next step is to prototype movement sensing. In the latest edition of “famous last words”: the movement (or PIR) sensor part looks like it might be quite easy. On to that next!

The Final E-ink Weather Display

Pico project number one is finally “shipped” with the addition of a badly made wooden frame to hold the thing and let me stick it to the wall, plugged into a USB power socket.

Let’s face it: the frame is not its strong point, but just putting the raw components on the wall would look very unfinished.

Is it actually useful? Who knows, although I can’t imagine I’ll ever save as much time from not opening my phone to look at the weather as I spent making it. Was it fun? Totally.

The journey to get here:

I’ve already discovered a couple of downsides to the frame I made (apart from how rough it looks):

  1. It’s a bit chunky and projects from the wall too far
  2. I thought a USB power cable with right angle connectors would let me really shrink the space requirements, which it has, but it’s also limited the placement options
  3. It’s fixed to the wall by sticky picture hangers, which is fine, but when I spotted a problem I wanted to fix I had to take it off and then use another pair of stickers to put it back up again, i.e. it’s a real pain in the bum to update
  4. And talking of which, if the Pico inside ever needed replacing I’d basically be taking the case to pieces to swap it out

Next time I think I’ll use a longer cable so it can sit on a table or shelf rather than being fixed to the wall.

Now onto the next project.

A Raspberry Pico Bird Camera

Hot on the heels of, if not success, then at least lack of failure in my rather simple project to build a Raspberry Pico e-paper weather display, what I’d really like to have is a camera to photograph the very infrequent visits to the feeder in the garden.

There are plenty of cameras on the market but I have something quite specific in mind. It would upload images to a server, so that I didn’t have to go and check a memory card, and ideally be solar powered so I didn’t have to go and recharge or swap batteries. Many of the available cameras seem to use disposable batteries and that just seems like a bad thing in this day and age.

At least I have a clear idea of what I want, but I don’t have a clear idea of whether it’s achievable. But I have a plan.

  1. Check I can get a Pico and camera combo working. Like my first step with e-paper, given that I don’t really know what I’m doing I might not even get a basic image out
  2. Post the photos to a Wifi-connected server, which will probably just be my desktop in the first instance
  3. Get a motion detector working
  4. Put the two together to take photos based on my own movement
  5. Test to see if bird movements trigger the motion detector, and if they do see if the image quality is good enough. I’m more interested in knowing what kinds of birds come to visit and how often than winning Wildlife Photographer Of The Year
  6. Assess power consumption and feasibility of a solar supply
  7. If solar is reasonable, decide on whether to get a retail package with a micro USB output, or to go with something more home-spun that could potentially power multiple devices in the future (because I have many more plans!)
  8. Make some kind of weatherproof case, possibly just for the “camera” part, but possibly also the solar panel (and battery) depending on size requirement. I’m as useless with my hands as I am with electronics so this might be the hardest part.

For components, as well as another Pico W, I’m going for an Arducam Mini 2MP Plus OV2640 SPI camera and a Mini PIR sensor, although since they’re so cheap I’ll give the slightly larger (yet strangely cheaper) regular PIR sensor a try too. I have a very small garden, at only 9m long, and a mesh transmitter right near the door already so Wifi range shouldn’t be an issue.

An Arducam Mini 2MP Plus OV2640 SPI camera
The Arducam Mini 2MP Plus OV2640 SPI camera
The smaller Mini PIR sensor

And so a new adventure begins.