Project
Raspberry Pi car dashboard
Product design
Prototyping
AI
Coding

Challenge
To design a simple and elegant touch-based car media interface, built in Python, with a variety of themes. The interface will run on a 7-inch touchscreen display powered by a Raspberry Pi 4 Model B, with media hosted on a phone connected via Bluetooth or USB.
Why?
For older vehicles, third-party head units are often designed without regard for integrated styling or modern UX principles. While CarPlay and Android Auto are options for users who want to play media from their phones in their cars, the core interface underpinning these products can feel jarring and disconnected from the rest of the product experience.
At a glance
Company
N/A
Partners
ChatGPT
My role
Product designer
Front-end developer
Content designer
Timeline
May 2025 - present
Tools
Figma
ChatGPT

UX Objectives
The car dash UI is intended to mimic the conventional touch patterns and interface elements that users have become accustomed to from phone and tablet UX. To create this customizable interface, I have taken a two-pronged approach:
The media interface: While highly customizable, all interface elements are intended to remain in the same locations and match users' mental models. In a moving vehicle, it is important for elements to sit in expected positions on the screen so users are not required to take their attention away from driving (a small sketch of this theming approach follows this list).
Backend interface: A "core" backend interface will not match any theme and will remain consistent throughout the experience. This is where users will be able to manage devices, troubleshoot issues, or learn more about the device they are using.
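To illustrate the first point, here is a minimal, hypothetical sketch of how themes might be separated from layout so that only colors and typography change while element positions stay fixed. The theme names, keys, and apply_theme helper are illustrative assumptions, not the project's actual code.

    # Hypothetical theme table: every theme shares the same layout; only styling differs.
    THEMES = {
        "classic": {"background": "#1b1b1b", "accent": "#e0a100", "font": "Rubik"},
        "minimal": {"background": "#f4f4f4", "accent": "#222222", "font": "Inter"},
    }

    # Fixed element positions (normalized screen coordinates) shared by every theme,
    # so controls stay where drivers expect them regardless of styling.
    LAYOUT = {
        "play_pause": {"pos_hint": {"center_x": 0.5, "y": 0.05}},
        "track_info": {"pos_hint": {"center_x": 0.5, "top": 0.95}},
    }

    def apply_theme(widgets, theme_name):
        """Swap colors and typography only; positions always come from LAYOUT."""
        theme = THEMES[theme_name]
        for widget in widgets:
            widget.background_color = theme["background"]
            widget.color = theme["accent"]

Keeping layout and theme data in separate structures is one way to guarantee that a theme change can never move a control.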
Development
Working with ChatGPT, I'm developing the base code for the car dash UI in Python. Colloquially, this is described as "vibe coding," but I'm trying to also build out an established process for development that can be replicated for other products in the future.
My approach is to outline the ultimate goals of the project to ChatGPT and ask for recommendations on libraries and frameworks that will enable certain features. Then, incrementally, I ask ChatGPT to code individual features. So far, these are the elements I'm using (a short code sketch follows the list):
Python: Core functions will be built in Python.
Kivy: This will allow for multitouch and gestures. While multitouch is outside the scope of the project, it aligns with the use of a capacitive touchscreen and leaves room for future features that might require multitouch.
libimobiledevice: This allows the code to access media on the iPhone over USB. For the MVP (minimum viable product), a USB connection will be used. Bluetooth will be added later but is accounted for in the design to prevent issues down the line.
RaSpotify: To access music played via Spotify, the current approach is to integrate RaSpotify into the main code.
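To ground the stack, here is a minimal sketch, not the project's actual code, of how these pieces might fit together: a Kivy app with fixed-position controls and a simple USB check that shells out to libimobiledevice's idevice_id tool. The class names, layout values, and toggle_playback placeholder are assumptions for illustration.

    import subprocess

    from kivy.app import App
    from kivy.uix.button import Button
    from kivy.uix.floatlayout import FloatLayout
    from kivy.uix.label import Label

    def iphone_connected():
        """Check for a USB-connected iPhone by listing device UDIDs with
        libimobiledevice's idevice_id tool (assumes the tool is installed)."""
        try:
            result = subprocess.run(["idevice_id", "-l"],
                                    capture_output=True, text=True, timeout=5)
            return bool(result.stdout.strip())
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return False

    class DashRoot(FloatLayout):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            # Track info pinned near the top; its position stays constant across themes.
            self.track_label = Label(
                text="Ready" if iphone_connected() else "No device connected",
                pos_hint={"center_x": 0.5, "top": 0.95},
                size_hint=(0.8, 0.1),
            )
            self.add_widget(self.track_label)
            # Play/pause control pinned at the bottom center for predictable reach.
            self.play_button = Button(
                text="Play / Pause",
                pos_hint={"center_x": 0.5, "y": 0.05},
                size_hint=(0.3, 0.12),
            )
            self.play_button.bind(on_release=self.toggle_playback)
            self.add_widget(self.play_button)

        def toggle_playback(self, *_):
            # Placeholder: real playback control would talk to the phone or RaSpotify.
            self.track_label.text = "Playback toggled"

    class CarDashApp(App):
        def build(self):
            return DashRoot()

    if __name__ == "__main__":
        CarDashApp().run()

The sketch only covers the UI shell and a device check; actual media browsing and Spotify playback would be layered on top of this scaffold.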
Results
This project is still in development.