2.8" Pico Display: loading images via USB host script

Hello gurus,
I don’t think this qualifies as a support post as I don’t really need too many specifics. I could use your ideas! 😻

End goal: I want to photograph and clean up antique stereo photographs via a Raspberry Pi 5, then load them to a set of Pico 2.8" displays to check the stereo quality in a low res stereo viewer (and for fun because it’s fun).

I was thinking I could* stream images to these display Picos over serial from a Python script on the Pi 5, since they will be connected via USB. The question then is: do I send over JPEG or PNG binary data to save as a file and then show (I saw a couple of examples for loading images), or do I just stream raw pixel data somehow and draw it on the display as it arrives?
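
For the raw-pixel route, I’m picturing something like this on the Pi 5 side (untested sketch: it assumes pyserial and Pillow, that the first Pico shows up as /dev/ttyACM0, and a made-up little protocol of a 4-byte length header followed by RGB565 pixel data):

# Rough host-side sketch (untested): convert an image to raw RGB565 and push
# it over USB serial with a tiny length header. Assumes pyserial and Pillow.
import struct
import serial               # pip install pyserial
from PIL import Image       # pip install pillow

PORT = "/dev/ttyACM0"       # assumption: where the first Pico enumerates
WIDTH, HEIGHT = 240, 320    # portrait, to match rotate=90 on the Pico

def to_rgb565(img):
    # Pack a PIL image into big-endian RGB565 bytes, row by row.
    img = img.convert("RGB").resize((WIDTH, HEIGHT))
    out = bytearray()
    for r, g, b in img.getdata():
        out += struct.pack(">H", ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3))
    return bytes(out)

def send_frame(port, img_path):
    payload = to_rgb565(Image.open(img_path))
    with serial.Serial(port, 115200, timeout=2) as ser:
        ser.write(struct.pack(">I", len(payload)))  # 4-byte length header
        ser.write(payload)
        ser.flush()

send_frame(PORT, "stereo_left.jpg")   # placeholder filename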

Secondly, will I run into issues trying to stream to two serial ports/devices? Maybe they need to say hello first and announce “I’m screen 2”, for example? For speed reasons, I’m thinking a Pico per screen will be best. My brain is a bit mangled when it comes to how to address and talk to these devices over serial (digging more today).
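
For the “who are you?” part, maybe something like this on the Pi side (also untested; the port names and the ID?/LEFT/RIGHT strings are made up for the sketch, and each Pico would need a matching few lines that answer the query):

# Rough sketch of a "who are you?" handshake. Assumes (made up here) that each
# Pico replies to b"ID?\n" with b"LEFT\n" or b"RIGHT\n" over USB serial.
import serial  # pip install pyserial

CANDIDATE_PORTS = ["/dev/ttyACM0", "/dev/ttyACM1"]  # assumption

def find_screens():
    screens = {}
    for port in CANDIDATE_PORTS:
        try:
            with serial.Serial(port, 115200, timeout=1) as ser:
                ser.write(b"ID?\n")
                reply = ser.readline().strip().decode()
        except (serial.SerialException, UnicodeError):
            continue
        if reply in ("LEFT", "RIGHT"):
            screens[reply] = port
    return screens

print(find_screens())  # e.g. {'LEFT': '/dev/ttyACM0', 'RIGHT': '/dev/ttyACM1'}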

So far, I’ve been able to write to the display pixels directly with noise just to get things figured out:

# Pretend we are streaming in pixels from serial (not happening yet) and write to all display pixels.

import machine
import random
import time
from pimoroni import RGBLED
from picographics import PicoGraphics, DISPLAY_PICO_DISPLAY_2

# set up the display and drawing constants
display = PicoGraphics(display=DISPLAY_PICO_DISPLAY_2, rotate=90)

# set the display backlight to 50% 
display.set_backlight(0.5)

WIDTH, HEIGHT = display.get_bounds()
BLACK = display.create_pen(0, 0, 0)
WHITE = display.create_pen(255, 255, 255)

led = RGBLED(26, 27, 28)

def draw_loading_message():
    display.set_pen(WHITE)
    display.text("Loading...", 60, 150, 0, 3)

def draw_noise_pixel(x, y):
    color = display.create_pen(random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
    display.set_pen(color)
    display.pixel(x, y)
    
def draw_noise_row(y):
    for x in range(WIDTH):
        draw_noise_pixel(x, y)

def draw_noise_image():
    for y in range(HEIGHT):
        draw_noise_row(y)

        # only push to the screen every few rows (a full update takes ~3.5 s;
        # updating as we go is slightly slower overall but more fun to watch)
        if (y + 1) % 4 == 0:
            display.update()
    #display.update()

while True:
    # init "loading"
    display.set_pen(BLACK)
    display.clear()
    draw_loading_message()
    led.set_rgb(100, 30, 0)
    
    # draw image (pretend we're streaming in data and drawing it as it comes in)
    draw_noise_image()
    
    # done "loading"
    led.set_rgb(0, 100, 0)

    # enjoy image until a new image is available... after a delay for now
    time.sleep(5)

It takes a short while to show the images (3-4 seconds). I’m ok with that. I haven’t tested loading image files yet. Perhaps it could be faster.
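
For the receiving end, this is the rough shape I have in mind (untested; it assumes sys.stdin.buffer works for raw reads over USB serial in MicroPython, and that the host sends the 4-byte length header plus RGB565 rows from the sketch further up):

# Untested sketch of the receive side on the Pico: read a 4-byte length header,
# then pull RGB565 rows from USB serial and plot them as they arrive.
import sys
import struct
from picographics import PicoGraphics, DISPLAY_PICO_DISPLAY_2

display = PicoGraphics(display=DISPLAY_PICO_DISPLAY_2, rotate=90)
WIDTH, HEIGHT = display.get_bounds()

def read_exact(n):
    # Keep reading until exactly n bytes have arrived (avoids short reads).
    buf = b""
    while len(buf) < n:
        chunk = sys.stdin.buffer.read(n - len(buf))
        if chunk:
            buf += chunk
    return buf

def receive_frame():
    length = struct.unpack(">I", read_exact(4))[0]
    row_bytes = WIDTH * 2                       # 2 bytes per RGB565 pixel
    for y in range(length // row_bytes):
        row = read_exact(row_bytes)
        for x in range(WIDTH):
            value = (row[2 * x] << 8) | row[2 * x + 1]
            display.set_pen(display.create_pen(
                (value >> 8) & 0xF8,            # red
                (value >> 3) & 0xFC,            # green
                (value << 3) & 0xF8))           # blue
            display.pixel(x, y)
        if y % 4 == 0:
            display.update()
    display.update()
    # a print("done") here could double as an acknowledgement back to the host

while True:
    receive_frame()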

What are your thoughts on streaming full-display 240x320 images to the Pico for 2.8" display use?

Thanks for taking a look in advance! 😻

update: I’m drafting more here (untested; I need to get out of Windows WSL and onto a Pi shortly…)

Why not connect the two displays directly to the Pi 5? This would greatly simplify the setup and the programming logic. The Pi 5 has no problem driving more than a single display.

That’s the goal. :)

I’m making pretty good progress with that now, actually controlling from a PC instead of the Pi at the moment. I’m impressed with the data transfer speed, but I’m running into a few errors with partially loaded images as I go (will post fresh code at some point soon).

I am not sure we are talking about the same thing: I am suggesting that you connect both displays via SPI directly to the Pi 5 (skipping the Pico, the serial data transfer and all the rest) and send the display commands directly over SPI to both displays.


I see what you’re saying now (and pardon the delayed response). Let me dig into SPI a bit more. I don’t think* this will conflict with the LoRa board I’ll also have connected to the Pi:
chrismyers2000/MeshAdv-Pi-Hat: A Raspberry Pi hat for Meshtastic with 1 Watt Lora module

So basically, I can skip the Pico altogether yeah? Cool. Since the viewer will be an external accessory, I’ll need to create some kind of plug for it. It doesn’t seem to be a huge issue though and SPI can run a good 10 meters it seems (I’ll have something shorter). A MIDI cable has 6 pins, 4 for SPI, 2 for power?

I’m guessing I’ll need to include Pimoroni’s flavor of MicroPython on the Pi to get this to work? (Maybe not; I’ll start digging into driving the displays.)

Thanks for the great feedback!

update: and then I find this display (square is better). Focus, focus, focus…

This is really a challenging setup with three SPI devices. I quickly checked the pinout of your LoRa HAT and I can see no pin conflicts, but you really have to verify this. Also keep in mind that if you have multiple devices on an SPI bus, you have to take care of your CS lines; you cannot leave them floating.

Pimoroni’s MicroPython won’t run on a normal Pi, but you should be able to use standard ST7789 drivers for “normal” Python on the Pi.
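
Something along these lines, for example. This is a rough sketch from memory, assuming Pimoroni’s st7789 Python library (pip install st7789) plus Pillow; the dc/backlight pins are placeholders you would match to your wiring, a second panel would go on the other chip-select, and you should double-check the details against the library’s own examples:

# Very rough sketch of driving one panel straight from the Pi with the
# st7789 Python library. Pin numbers below are placeholders.
import ST7789
from PIL import Image

left = ST7789.ST7789(
    port=0,                        # SPI0
    cs=0,                          # CE0; a second display could use cs=1 (CE1)
    dc=9,                          # data/command GPIO (placeholder)
    backlight=13,                  # backlight GPIO (placeholder)
    width=320,
    height=240,
    rotation=90,
    spi_speed_hz=60 * 1000 * 1000,
)

image = Image.open("stereo_left.jpg").resize((left.width, left.height))
left.display(image)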

The Presto display you linked to has an integrated RP2350, so if you want to use two of them, you are back to your original solution, i.e. stream data to these devices and don’t drive them directly from the Pi.

All in all, I do have problems understanding your complete project setup, especially where LoRa enters the game. LoRa is about low-power, long-range, low-bandwidth data transmission, and that is something totally different from image cleaning and stereo viewing on a high-power Pi 5. For LoRa to work, you will need a dedicated system.