I am thinking of developing some custom FOSS for the Raspberry Pi. I’d want it to be reusable by other people, not just made for myself. I’m thinking the project would assume the user has the following kit:
- Raspberry Pi 5 (8GB) - ~$80
- Pimoroni Pirate Audio: Line-out - ~$25-30: High-quality I2S DAC with 24-bit/192kHz audio, 240x240 display, 4 control buttons
- Pibow Coupe 5 Case - ~$15-20: Slim, hackable case designed for Pi 5 with Active Cooler compatibility
- Booster Header (40-pin) - ~$3-5: Needed to “lift your Pirate Audio board up a little” to work with the Pibow Coupe case
- Raspberry Pi 5 Official Active Cooler - ~$5-8: The case is “designed to fit neatly around Raspberry Pi’s new Active Cooler”
- Raspberry Pi 5 Official Power Supply (27W USB-C) - ~$12-15
Total Estimated Budget: ~$140-165
The project started out of frustration with design decisions in the Zynthian project, whose use case really didn’t align with mine. Eventually, I’d like to build a USB MIDI router that can also run some virtual instruments, but the MIDI routing is really the main feature and not something Zynthian adequately supported. As I got deeper into how I’d approach the USB MIDI routing, though, I realized there’s a more fundamental problem my project should solve first, one that applies more widely than just to musicians.
There’s a “Chicken and Egg” issue that occurs when you’re trying to get one of these SBCs working for you at a gig as a performing musician – or for gaming at a friend’s house, or on vacation, or whatever. You don’t want to have to bring a full monitor and keyboard with you, but unless you do, you can’t set up the wifi or even safely shut down the damn thing. That’s the first problem my device would need to solve; any additional functionality, like MIDI routing, could be configured later on.
The plan is that the device would run Raspberry Pi OS Lite. I’d make a C# Blazor Server web app that also runs an Avalonia UI as a service. The Avalonia UI would render to a framebuffer on the Linux system, which is what the physical display would show, and it would also be able to send frames as PNGs over SignalR to an HTML5 canvas in the web app, creating a “digital twin” of the physical display. The four hardware buttons would be clickable in the web interface too, giving a sort of “remote desktop” experience for the Pirate Audio display. That way, anything that can be configured on the physical device can also be configured on the web interface, though not the reverse. (The rest of the web interface would be dedicated to configuration that couldn’t work on the tiny screen and just four buttons.)
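To make the “digital twin” half concrete, here’s a minimal sketch assuming ASP.NET Core SignalR on the Blazor Server side. `IFrameSource`, `IButtonInput`, `DisplayHub`, and `FramePump` are all placeholder names I’m inventing; the idea is just that whatever renders the 240x240 UI (Avalonia or otherwise) can hand back a PNG snapshot, a background service pushes those frames to connected browsers, and virtual button presses flow back the other way:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

// Hypothetical abstraction over whatever renders the 240x240 UI.
public interface IFrameSource
{
    Task<byte[]> GetPngFrameAsync(CancellationToken ct); // current frame as PNG bytes
}

// Hypothetical sink that feeds virtual button presses into the UI loop.
public interface IButtonInput
{
    void Press(string button); // "A", "B", "X", "Y"
}

public class DisplayHub : Hub
{
    private readonly IButtonInput _buttons;
    public DisplayHub(IButtonInput buttons) => _buttons = buttons;

    // Browser calls this when one of the four on-screen buttons is clicked.
    public Task PressButton(string button)
    {
        _buttons.Press(button);
        return Task.CompletedTask;
    }
}

public class FramePump : BackgroundService
{
    private readonly IHubContext<DisplayHub> _hub;
    private readonly IFrameSource _frames;

    public FramePump(IHubContext<DisplayHub> hub, IFrameSource frames)
    {
        _hub = hub;
        _frames = frames;
    }

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        // ~10 fps should be plenty for a 240x240 status display.
        while (!ct.IsCancellationRequested)
        {
            var png = await _frames.GetPngFrameAsync(ct);
            await _hub.Clients.All.SendAsync("Frame", png, ct);
            await Task.Delay(100, ct);
        }
    }
}
```

On the browser side, the Blazor page would subscribe to the “Frame” event, draw the bytes onto a canvas, and wire its four on-screen buttons to `PressButton`.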
So, if you’re at a new location and want to connect the device to that location’s wifi, you could tell it to scan for wifi networks using the physical display and buttons, and it would record the available networks to a text file. Then you’d tell it to go into Access Point mode. It would show the wifi network it’s providing, that network’s password, and its IP address on the screen so you could connect with a phone or tablet to the web interface. On the web interface, you could give it the credentials it needs to join the local wifi and get connected. Or at least, that’s the plan.
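For the wifi mechanics themselves, I wouldn’t reimplement anything low-level: Raspberry Pi OS (Bookworm) uses NetworkManager by default, so scanning, hotspot mode, and joining a network can all be shelled out to nmcli. A rough, hypothetical wrapper (class and method names are placeholders):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Thin wrapper around nmcli; assumes NetworkManager is managing wlan0.
public static class Wifi
{
    private static async Task<string> RunNmcliAsync(params string[] args)
    {
        var psi = new ProcessStartInfo("nmcli") { RedirectStandardOutput = true };
        foreach (var a in args) psi.ArgumentList.Add(a);
        using var proc = Process.Start(psi)!;
        string output = await proc.StandardOutput.ReadToEndAsync();
        await proc.WaitForExitAsync();
        return output;
    }

    // Rescan and return "SSID:SIGNAL" lines, e.g. "CoffeeShop:72".
    public static async Task<string[]> ScanAsync()
    {
        await RunNmcliAsync("device", "wifi", "rescan");
        var list = await RunNmcliAsync("--terse", "--fields", "SSID,SIGNAL",
                                       "device", "wifi", "list");
        return list.Split('\n', StringSplitOptions.RemoveEmptyEntries);
    }

    // Bring up the fallback hotspot the phone/tablet connects to.
    public static Task StartHotspotAsync(string ssid, string password) =>
        RunNmcliAsync("device", "wifi", "hotspot",
                      "ifname", "wlan0", "ssid", ssid, "password", password);

    // Join the venue's network with credentials entered in the web UI.
    public static Task ConnectAsync(string ssid, string password) =>
        RunNmcliAsync("device", "wifi", "connect", ssid, "password", password);
}
```

The service would need permission to drive NetworkManager (I think membership in the netdev group is enough on Raspberry Pi OS, but that needs checking), which is a packaging detail to sort out later.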
Other options would include just staying in Access Point mode, or using Ethernet where available. A further plan would be for additional software to be configurable via the web interface, with different software configuration states saveable in the web interface and loadable from the physical screen and buttons. But that’s a longer-term goal for after the initial wifi problem is solved. I think this could become the start of a powerful platform for automation if I can make it work.
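Those saveable configuration states could stay very simple at first: just JSON profiles on disk that the web UI writes and the physical screen lists and applies. A sketch, with made-up names and a made-up storage path:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;

// One named profile = a bag of settings for whatever software is being managed.
public record Profile(string Name, Dictionary<string, string> Settings);

public static class ProfileStore
{
    private const string Dir = "/var/lib/pi-device/profiles"; // hypothetical path

    public static async Task SaveAsync(Profile profile)
    {
        Directory.CreateDirectory(Dir);
        var path = Path.Combine(Dir, profile.Name + ".json");
        await File.WriteAllTextAsync(path, JsonSerializer.Serialize(profile));
    }

    public static async Task<Profile?> LoadAsync(string name)
    {
        var path = Path.Combine(Dir, name + ".json");
        if (!File.Exists(path)) return null;
        return JsonSerializer.Deserialize<Profile>(await File.ReadAllTextAsync(path));
    }

    // Profile files the 240x240 screen can page through with the four buttons.
    public static string[] List() =>
        Directory.Exists(Dir) ? Directory.GetFiles(Dir, "*.json") : Array.Empty<string>();
}
```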
My question would be: is this feasible? Is the hardware capable of supporting such a scheme? Am I re-inventing a wheel someone else has already made? Are there existing tools I should be using instead of building parts of this myself? Am I just spinning my wheels, or am I making something there would be significant community interest in as a tool?
What should I even call this thing?