I've heard a lot about AI lately, both good and bad. On one side of my life, in creative fandom, there was a huge furor over the way our work had been involuntarily scraped to create something that techbros gleefully boasted would replace us, without any sort of attribution, much less monetary compensation. On the other side, as a designer in the tech industry, it was impossible to escape the constant LinkedIn posts, conference talks, and casual chatter insisting that this was a new, useful tool (and that any delay in learning it would leave me as obsolete as dinosaurs after the asteroid). For the past few years, I split the difference by keeping up with Ars Technica articles about the newest developments in AI (mostly about its pitfalls), which left me with a surprising amount of trivia about the ways it can be used and abused, and no practical knowledge of how to apply any of that information. I used ChatGPT and later Claude as a faster natural-language search tool to find solutions for specific technical issues, but that was about it.
Then I got an email from Devpost, a website that runs hackathons. They were running the Learning Hackathon: Spec Driven Development, which would guide people through the best practices of using AI to create a software program even if they didn't already know how to code. In other words, vibecoding.
Now, you have to understand. Among the sneerers, vibecoding had become a dirty word. It was synonymous with low-quality code, clueless idiots who couldn't understand or debug the things they were pushing to prod, Icarus bragging about his ace flying abilities even as his wings were melting around him. Competent engineers did not vibecode, they said. Anyone who did was hamstringing their own learning.
On the other hand, I had just heard multiple senior designers I had reached out to for advice on a project talk about how they were using AI to create even more realistic prototypes or to work in Storybook directly to make sure UI components looked the way they should. They seemed to be having fun producing cool things.
I decided that as a designer, one who attempted to learn how to code only once every few years and forgot everything in between, I was already a pretty incompetent engineer. I might as well see how far that would take me. And so it was that I signed up for the hackathon.
Starting the Hackathon
The structure of the hackathon was fairly simple. First, I would download a plugin that told Claude Code how to run the event. Then, I was expected to spend a few hours going through flipped-interaction prompting, where the AI asked me to provide long answers about my goals and ideas before producing a product. At the end, I would publish my results in the hackathon to be judged along with the other contestants.
The hackathon was supposed to take only 4-6 hours, so I went in with a fairly simple idea: a browser-based productivity tool that ran a Pomodoro timer. I had no idea how long it would take to make, but it definitely wasn't something that I had the knowledge or the will to create by myself.
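For context, the heart of a Pomodoro timer is just a countdown loop. A minimal sketch in JavaScript might look like this (the function names and durations here are illustrative, not taken from the finished project):

```javascript
// Format a number of seconds as MM:SS for display.
function formatTime(totalSeconds) {
  const m = String(Math.floor(totalSeconds / 60)).padStart(2, "0");
  const s = String(totalSeconds % 60).padStart(2, "0");
  return `${m}:${s}`;
}

// A tiny Pomodoro countdown: tick once a second, fire a callback when done.
// In the real app, onDone would be where the bell sound plays.
function createPomodoro(workMinutes = 25, onTick = () => {}, onDone = () => {}) {
  let remaining = workMinutes * 60;
  let intervalId = null;
  return {
    start() {
      intervalId = setInterval(() => {
        remaining -= 1;
        onTick(formatTime(remaining));
        if (remaining <= 0) {
          clearInterval(intervalId);
          onDone();
        }
      }, 1000);
    },
    stop() {
      clearInterval(intervalId);
    },
  };
}
```

Everything beyond this core, of course, is UI, sound, and polish, which is exactly where the real work went.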
I immediately ran into a prerequisite issue. I was supposed to interact with Claude Code through the Terminal, but I didn't have much exposure to it and had generally walked away from every previous experience with it confused. On being forced to use Terminal again, I realized that part of the reason I'd had so much trouble was that its default font size and line height were so small. I couldn't parse the information it was telling me because I literally could not read it. Cmd+plus made no difference. So, I asked Claude how to change that. With its instructions, I bumped the font size up from 11px to 18px and adjusted the line height to a slightly-more-readable 1.1em, giving the text more breathing room without breaking the ASCII art.
Developing Project Specs
The educational materials explained that I was going to be using flipped interaction, in which the AI interviewed me across multiple rounds to figure out my goals, inspirations, and feature ideas. In each round, it would create a document summarizing the most important information gained. This was to act as an external memory for it to reference when I wiped its memory between rounds, in order to avoid context rot. Only after it had a full set of documents -- scope doc, PRD, technical spec, and build checklist -- would it actually begin building. All of this was directed by the plugin provided by the event, which told Claude to act in certain ways.
Being interviewed by Claude felt remarkably similar to working with a human engineering consultant. Multiple times, I recognized the questions it was asking as the same ones I might ask of a client. I was running through the double diamond design thinking model, prompted by Claude-with-plugin.
One major difference compared to my normal process was that I couldn't pass image input into Claude. Usually, with a human, I could show them a moodboard, or a paper sketch, or a high-fidelity Figma mockup, but as far as I could figure out, there wasn't a way to do that with Claude within the scope of the hackathon. It was words or nothing.
This ended up being an issue when Claude gave me ASCII UI mockups to approve. Its suggested UI was much closer to my imagination than I expected, but there were still some detailed tweaks I wanted to make. I had the bright idea that if I couldn't use Figma, at least I could send back an edited version of the ASCII mockup with the general changes I wanted. Unfortunately, Terminal disagreed. Pasting didn't show the ASCII at all, instead presenting me with the message [pasted text] in the line where my input was. I couldn't see or edit the ASCII at all. I gave up on visual communication at that stage and resorted to describing my changes in plain text, which felt a little demeaning.
Finding Assets
As part of the interviewing phase, Claude flagged that my idea would require sound files to play and an image to serve as the background. (I hadn't thought about the sourcing of these very much, in part because I wasn't sure what Claude's capabilities were around this.) I decided to go online to find some free resources. I was already very familiar with getting stock photos from Pexels, and a quick Google search turned up Freesound for sound assets.
Being limited to what I could find online changed my idea, but not in a bad way. Originally, I'd been thinking of something vaguely fantasy, but Pexels didn't have any images of that. Instead, I grabbed a photo of a large, historic library from Michael D Beckwidth. Then I found that none of the drum sounds I could find on Freesound matched the vibe I wanted at all, so I switched to bell noises from Divinux and SergeQuadrado. Claude couldn't edit the audio files at all, so I used QuickTime to slice SergeQuadrado's "Alarm Bell" down to the part I wanted.
Honestly, I like that I had to provide the image and sound files myself. Being able to use existing Creative Commons assets feels more respectful than generating them directly.
Building
Here I ran into a slight hiccup. At the start of the build phase, Claude told me that I needed to clear its memory and run the checklist. Then, when I did so, it told me that I needed to clear its memory and run the checklist... again. I went through this loop about three times before I suspected something was wrong. I told Claude as much, and it happily agreed that I was right, running the checklist and then starting the build phase itself.
Based on what I'd heard about vibecoding, I'd expected that Claude would start generating code itself and I would have nothing to do but passively receive the output. I even got out a knitting project to pass the time with. This ended up not being the case.
To kick off the build phase, Claude-with-plugin asked me if I had any preference on what order to build features in. I answered that I didn't really care other than making things easy to test. Claude apparently took that feedback to heart and would periodically tell me about the feature it added, then ask me to test it, describing both the actions I should take and the expected output. About a third of the time, the newly created feature didn't end up working right, so I had to ask Claude to fix it before it could move on to creating the next feature.
This loop of building-testing-building-testing felt a lot like the QA testing I did with engineers after I handed off a design. However, the check-ins came much more frequently than they had with human engineers, and I was able to see what lines of code Claude wrote at every step, which significantly increased my sense of control over the process.
Building the project as initially described to Claude happened much faster than I expected. So I broadened my ambitions. I decided to make the whole site responsive and keyboard-accessible. At mobile breakpoint, I wanted the interactions of the volume slider to be completely different in order to save space on the screen. This was outside of the original specifications, and took many iterations to achieve.
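Keyboard accessibility of this kind usually boils down to a keydown handler that maps keys to state changes. Here is a hypothetical sketch, with names and key bindings of my own invention rather than the project's actual code:

```javascript
// Pure function mapping a pressed key to the next UI state.
// Keeping it pure makes the behavior easy to test without a browser.
function handleKey(key, state) {
  const next = { ...state };
  if (key === " ") {
    // Space toggles the timer running/paused.
    next.running = !state.running;
  } else if (key === "ArrowUp") {
    // Arrow keys adjust the volume slider, clamped to [0, 1].
    next.volume = Math.min(1, state.volume + 0.1);
  } else if (key === "ArrowDown") {
    next.volume = Math.max(0, state.volume - 0.1);
  }
  return next;
}

// In the page, this would be wired up roughly like:
// document.addEventListener("keydown", (e) => { state = handleKey(e.key, state); render(state); });
```

The mobile-breakpoint slider behavior would then be a matter of swapping which controls are rendered below a given viewport width, with the same state logic underneath.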
Using Claude to add new features outside of the flipped interaction model felt very quick and exciting in the moment, but in retrospect, it was not as efficient as it could have been. Unlike during the interview phase, when Claude asked me to elaborate on my ideas in detail so it implemented things as I intended, it now immediately raced to build whatever I commanded with no further questions. Several times, I asked Claude to make a UI change, and upon seeing it, immediately asked it to undo it. I also found myself sending out prompts with typos. (I don't know if the typos significantly slowed down Claude's attempts to parse what I sent, but when I noticed one, I decided to cancel Claude's processing and re-issue the command with the correct spelling.)
Interestingly enough, I struggled with getting Claude to make CSS tweaks with precise numerical values, such as adjusting margins on a given element or setting the size of text. Claude took longer than I wanted to apply these changes, and sometimes it would end up applying them to the wrong element or with the wrong value. This made it effectively impossible to do the rapid-fire tweak-until-it-looks-good editing I was used to from WYSIWYG programs. It was significantly faster to edit the CSS directly using the knowledge I already had, changing values in VS Code on one screen and refreshing the browser localhost on another.
After all that came the time to host the project online. Claude had decided the project was going to go on GitHub and then get deployed via Vercel. Surprisingly, a number of problems cropped up at this step. Things that worked on localhost did not work on Vercel, and I had zero idea why. I frantically prompted Claude to fix these issues. This was when I felt I was flying most by the seat of my pants -- unlike before, when I had some light frontend knowledge to fall back on, I didn't know backend matters well enough to understand the solutions Claude told me it was applying. All I could do was hope that it solved the mystery problems, eventually. And eventually, it did. My project went live on Vercel.
Closing Thoughts
You can see what I created at https://pomodemy.vercel.app/. In general, I'm pretty happy with it. It's not the type of thing I would have been able to create myself with my current level of knowledge, especially in the short timeframe of the hackathon. Despite all the misgivings I came in with, creating it with Claude was an overall positive experience.
As a designer, I feel like I have a unique perspective on AI. One of the biggest concerns people have about AI's use in coding is that it promotes dependence on itself. What if it creates a generation of engineers who don't know anything except how to use Claude/Copilot/Cursor/whatever, who flounder when put in situations where that's not available? Well, as a designer, I'm already used to a state of affairs where I'm dependent on finding a cooperative engineer to build my designs for me. The default is that I don't know enough to make anything at all. Being reliant on AI to execute my vision doesn't feel significantly different than being reliant on a human engineer, except the AI is more easily available for casual hobby projects.
Beyond that, I had fun using Claude to code. AI coding took out all the aspects that I had previously hated about programming, like having to memorize very specific and unintuitive strings of commands to get anything done, or playing games of hunt-the-missing-semicolon among rows and rows of code. It didn't prevent me from doing anything manually, either. I could still edit files as I pleased in VS Code, and use bash in Terminal. The difference was that when my manual programming failed to do what I wanted, I could copy-paste both my code and the error message to Claude and have it explain why I'd been unsuccessful while making the correct changes in a way I could observe. Dealing with occasional hallucinations -- new additions of code that didn't do exactly what Claude said they would -- was infinitely preferable to the typical frustrations of fully manual coding.
I've emerged from this experience more excited about coding than ever. With AI, I'm able to implement things that I previously couldn't, and the learning curve is less steep for the things I do want to learn manually. I'm definitely planning to keep using AI to code. What for? I'm not sure yet. A lot of possibilities have opened up for me, and I'm just beginning to explore them all.