🖌 u&i

Screenshot


Background

This was created for PennApps XVIII, which was my first “real” hackathon. Given our entire team’s experience with web development, we sought a way to expedite the front-end design process. We also wanted to help those who are artistically gifted but may not have the technical experience required for web work.


Functionality

Users draw the layout of a website on a sheet of paper. The user’s phone sits on top of the acrylic mount, where its camera records each iteration of the sketch in real time; each capture is translated directly into a live website displayed on the user’s computer.
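The capture loop itself lives in the React Native app, but its behavior is easy to mimic when testing the server without a phone. The sketch below is a hypothetical Python stand-in, not our app code: the server URL, the webcam source, and the three-second interval are all illustrative assumptions.

```python
# Stand-in for the phone app: grab a frame from a webcam every few seconds
# and POST it to the server, the way the React Native app uploads photos.
# The URL and the 3-second interval are illustrative assumptions.
import time

import cv2
import requests

SERVER_URL = "https://example-uandi.herokuapp.com/upload"  # hypothetical URL

camera = cv2.VideoCapture(0)  # default webcam standing in for the phone camera
while True:
    ok, frame = camera.read()
    if ok:
        # JPEG-encode the frame and upload it, as the app does with each photo.
        _, encoded = cv2.imencode(".jpg", frame)
        requests.post(SERVER_URL, data=encoded.tobytes())
    time.sleep(3)  # "intermittently": roughly one photo every few seconds
```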

A mobile app built with React Native intermittently takes photos with the smartphone camera and sends them directly to a Flask API hosted on Heroku. The API processes each image of the paper layout with OpenCV, isolating its individual components. We then differentiate between component types, such as images and text, and reformat them into HTML. Finally, the rendered HTML is displayed live on the same website.
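To make that pipeline concrete, here is a minimal sketch of the server side under stated assumptions: the /upload route, the contour-area cutoff, and the aspect-ratio rule for telling text boxes from image boxes are all illustrative, not our original code. It accepts an uploaded photo, isolates rectangular regions with OpenCV, and emits a placeholder HTML element for each one.

```python
# Minimal sketch of the server-side pipeline (illustrative, not the original code).
import cv2
import numpy as np
from flask import Flask, request

app = Flask(__name__)
latest_html = "<p>Waiting for a sketch...</p>"

def components_to_html(image):
    """Map each rectangular region found in the sketch to an HTML element."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Invert-threshold so dark pen strokes on white paper become the foreground.
    _, binary = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    elements = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < 2000:  # ignore specks of noise (cutoff is a guess)
            continue
        # Crude heuristic: wide, short boxes become text; everything else an image.
        if w > 3 * h:
            elements.append(f'<p style="width:{w}px">Lorem ipsum</p>')
        else:
            elements.append(f'<img width="{w}" height="{h}" src="placeholder.png">')
    return "\n".join(elements)

@app.route("/upload", methods=["POST"])
def upload():
    global latest_html
    # The phone POSTs raw photo bytes; decode them into an OpenCV image.
    data = np.frombuffer(request.get_data(), dtype=np.uint8)
    latest_html = components_to_html(cv2.imdecode(data, cv2.IMREAD_COLOR))
    return "ok"

@app.route("/")
def live_page():
    # Reloading this page shows the newest render of the sketch.
    return latest_html
```

A fuller version would also need to sort the detected boxes top-to-bottom and left-to-right so that the generated HTML flows in the same order as the sketch.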


Comments

We ran into problems early on, when our desired hardware was no longer available. Our first plan was to use a Raspberry Pi for the processing and its associated camera module to capture the images, but there was only one camera available and it was quickly taken. We were forced to pivot to using our smartphone cameras and an app to send images to a server for processing. Additionally, we were unable to send the images over UPenn’s Wi-Fi network, as it was not secure, so we sent all the data through a cell-service hotspot. This greatly limited our bandwidth and image quality, which in turn hurt rendering results.

Our team worked well together, with each of us taking on pieces we were already relatively familiar with, though we still found ourselves rushing to connect everything in the last eight or so hours.


GitHub repo

Devpost