By default, sending pixels from a Raspberry Pi Pico to a Waveshare e-ink display will result in a portrait image. In the case of the 2.13inch display I started with, this means a screen of 122 pixels wide by 250 pixels high.
But what if you want to display in landscape mode? As someone who's used to the simple world of CSS, it was actually fun to get back into the world of bits and pixels again.
Just like the ZX Spectrum, where I started programming, the display is stored as a list of bytes where each byte is eight consecutive pixels, either set to 1 to display a dark pixel or 0 for white (or clear).
In MicroPython, this is stored as a byte array. So the code that sets up the display looks like this:
self.buffer_black = bytearray(self.height * self.width // 8)
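As a plain-Python illustration of that packing (no framebuf needed, and the `set_pixel` helper is mine, not part of the driver): each byte holds eight horizontal pixels, with the leftmost pixel in the most significant bit.

```python
WIDTH = 128  # a byte-aligned width keeps the arithmetic simple

# Pixel (x, y) lives at byte y * (WIDTH // 8) + x // 8, bit 7 - (x % 8)
def set_pixel(buffer, x, y):
    buffer[y * (WIDTH // 8) + x // 8] |= 1 << (7 - (x % 8))

buf = bytearray(WIDTH * 8 // 8)  # a 128x8 test strip
set_pixel(buf, 0, 0)   # leftmost pixel of row 0 -> top bit of byte 0
set_pixel(buf, 9, 1)   # second pixel of the second byte of row 1
```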
Thankfully, there’s a helper module in the shape of Framebuffer which makes changing the value of these pixels much easier. It contains methods to write text, lines, rectangles and circles and, most usefully, to draw images at arbitrary coordinates using the blit command.
Not only does blit make it easier to draw, say, a 50×50 image onto the 122×250 display, but it also makes it much easier to draw an 8-bit image onto the 1-bit display. You start by setting a Framebuffer object with its data source as the byte array, such as:
self.imageblack = framebuf.FrameBuffer(self.buffer_black, self.width, self.height, framebuf.MONO_HLSB)
Where the first argument is the byte array, width and height are self-explanatory, and framebuf.MONO_HLSB means that this framebuffer is a mono (1-bit) image, with each byte storing eight horizontal pixels (most significant bit leftmost).
To draw an image on top, assuming you have the pixel data in a variable called image_buffer, you can do something like:
self.imageblack.blit(framebuf.FrameBuffer(image_buffer, width, height, framebuf.GS8), x, y, key)
Where width and height are the width and height of the image, framebuf.GS8 signifies an 8-bit grayscale image, x and y are the coordinates to draw to, and key tells the blit method which pixels in the image it should treat as transparent. This allows you to draw just the black pixels onto the screen without the “white” pixels (or 0 bits) overwriting anything that’s already there. Because the frame buffer takes the image_buffer bytearray as a parameter but changes its pixels by reference, any changes to self.imageblack (which is a FrameBuffer object) are applied to the bytearray in self.buffer_black, which holds the actual pixel data.
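To make the key behaviour concrete, here's a plain-Python sketch of the idea (my own one-byte-per-pixel mock, not the real framebuf implementation): any source pixel equal to key is simply skipped.

```python
# Hypothetical stand-in for blit's transparency: copy an 8-bit image
# into a one-byte-per-pixel "screen", skipping pixels that match key
def blit_with_key(dest, dest_w, src, src_w, src_h, x, y, key):
    for sy in range(src_h):
        for sx in range(src_w):
            pixel = src[sy * src_w + sx]
            if pixel != key:  # key pixels are transparent
                dest[(y + sy) * dest_w + (x + sx)] = pixel

screen = bytearray([0x55] * 64)            # 8x8 screen, pre-filled pattern
sprite = bytes([0xFF, 0x00, 0x00, 0xFF])   # 2x2: white, black, black, white
blit_with_key(screen, 8, sprite, 2, 2, 3, 3, key=0xFF)
# the two 0xFF pixels leave the existing 0x55 background untouched
```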
So far so simple as far as drawing an image to the screen goes, assuming you’re still working in portrait mode. I made myself a helper function to load raw files from the on-board memory:
def drawImage(display_buffer, name, x, y, width, height, background=0xFF):
    image_buffer = bytearray(width * height)
    filename = 'pic/' + name + '.raw'
    with open(filename, "rb") as file:
        position = 0
        while position < (width * height):
            current_byte = file.read(1)
            # stop if we hit the end of the file early
            if len(current_byte) == 0:
                break
            # copy to buffer
            image_buffer[position] = ord(current_byte)
            position += 1
    display_buffer.blit(framebuf.FrameBuffer(image_buffer, width, height, framebuf.GS8), x, y, background)
Drawing a landscape image instead of portrait should be as simple as:
- Set up a Framebuffer with a width equal to the e-ink display height, and a height equal to the e-ink display width
- Draw to the Framebuffer
- Transpose columns of pixels into rows in a new bytearray
- Send to the e-ink display
It seemed like exactly the sort of thing that someone would already have tackled, and so it proved. So with some Googling and adjustment of width/height variables to suit the display I was working with I came to this:
epaper_display = bytearray(epd.height * epd.width // 8)
x = 0; y = -1; n = 0; R = 0
for i in range(0, epd.width//8):
    for j in range(0, epd.height):
        R = (n - x) + (n - y) * ((epd.width//8) - 1)
        pixel = display[n]
        epaper_display[R] = pixel
        n += 1
    x = n + i + 1
    y = n - 1
epaper_buffer = framebuf.FrameBuffer(epaper_display, epd.width, epd.height, framebuf.MONO_HLSB)
And I have to confess I didn’t worry too much about the details. Not until I ran it and the output was complete garbage.
It took a while to work out what I was doing wrong, but once I found the problem the solution made complete sense: I was inputting a Framebuffer stored as MONO_HLSB but the function above transposed the position of bytes, not individual bits. In other words, chunks of 8 pixels were being moved to the right place on the screen, but forming a column rather than a row. No wonder it looked bad.
The solution was simple: start with a Framebuffer of the type MONO_VLSB, where the bytes represent a column of 8 pixels, not a row. Then the transposition naturally mapped to MONO_HLSB.
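To convince yourself the byte-level trick works, here's a standalone sketch (pure Python, my own test harness, with dimensions chosen as multiples of 8 to sidestep the padding wrinkle): pack a few pixels as MONO_VLSB, move whole bytes so each vertical 8-pixel column becomes a horizontal run, and check the result matches the same image rotated 90° clockwise and packed as MONO_HLSB.

```python
W, H = 16, 16            # source image, both dimensions multiples of 8
PAGES = H // 8           # VLSB stores rows in pages of 8
WB = H // 8              # bytes per row in the rotated HLSB buffer

pixels = {(1, 2), (5, 9), (12, 15)}   # arbitrary test pattern

# Pack as MONO_VLSB: byte (y//8)*W + x, bit 0 nearest the top
src = bytearray(W * H // 8)
for x, y in pixels:
    src[(y // 8) * W + x] |= 1 << (y % 8)

# Move whole bytes: page i, column j of the source lands in row j of
# the destination, counting byte columns from the right-hand edge
dst = bytearray(W * H // 8)
for i in range(PAGES):
    for j in range(W):
        dst[j * WB + (WB - 1 - i)] = src[i * W + j]

# Pack the expected answer directly: rotate 90 degrees clockwise,
# (x, y) -> (H-1-y, x), stored as MONO_HLSB (bit 7 leftmost)
expected = bytearray(W * H // 8)
for x, y in pixels:
    rx, ry = H - 1 - y, x
    expected[ry * WB + rx // 8] |= 1 << (7 - (rx % 8))

assert dst == expected
```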
I ended up with something like this:
display = bytearray(epd.height * epd.width // 8)
display_buffer = Screen(display, epd.height, epd.width, framebuf.MONO_VLSB)
# draw something here
epaper_display = rotateDisplay(display)
where Screen was an extension of the FrameBuffer class, necessary because the width and height parameters aren’t accessible on a native FrameBuffer object. It just felt nicer to store them “in place” rather than pass another parameter around.
class Screen(framebuf.FrameBuffer):
    def __init__(self, display, width, height, encoding):
        super().__init__(display, width, height, encoding)
        self.display = display
        self.width = width
        self.height = height
        self.encoding = encoding
The final wrinkle was that the e-ink display was 122 pixels “wide” (or now 122 pixels high, as I was trying to work in landscape mode), and 122 doesn’t divide into a whole number of bytes. Because of the way I was treating the screen, my top-left origin ended up at the coordinates (0, 6), which felt a bit nasty but was something I could live with. The alternative would have been to shift the whole thing down 6 pixels using another FrameBuffer, but frankly I couldn’t be bothered.
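The arithmetic behind that offset, as a quick sketch: 122 rows don't fill a whole number of bytes, so the buffer rounds up to the next multiple of eight and the spare rows pile up at one edge.

```python
PHYSICAL = 122                        # pixels along the short axis
padded = ((PHYSICAL + 7) // 8) * 8    # rounded up to whole bytes: 128
offset = padded - PHYSICAL            # 6 pixel rows with no panel behind them
```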