Too Much of a Good Thing
I enjoy making and fixing things. Aside from failure — which often comes with making and fixing — putting something back together or creating from scratch is the best way I learn.
When I bought a house in 2021, I ran up a bill with the local plumber and electrician on what I later learned were rather trivial fixes I could've done myself. In the years since, I've replaced the U-trap under the sink (at the cost of parts and a tetanus shot), installed a light fixture, and replaced a malfunctioning microwave (after discovering why it malfunctioned), among many other household projects. Save for the roof that needs replacing over the next couple of years, I am handy enough to perform most maintenance and installation tasks in my home.
In the digital space, I could already build websites and scripts thanks to learning HTML in middle school and my not-so-secret Computer Science dropout status. I can easily debug applications and file tickets, as I know how software is made, how various processes are broken down into steps, and what steps need to be taken next — all because I've worked closely with engineers for the better part of my career.
Yet when it came to actually building software, I could never move past the first step — until the advent of "vibe coding."
The Dumbest Term
"Vibe coding" is perhaps the worst term to come out of this current AI age while remaining the greatest use of new generative tools. Because it's the accepted term for prompting a large language model (LLM) to create software from scratch, marketing and industry folks have played fast and loose with the term, repeatedly talking about "vibing" (as if we are back in the New Jack Swing era of R&B), "vibe ____" (with the blank substituted for something decidedly coding but used to draw a parallel), and other less-than-creative ways to discuss building software and tools with LLMs.
Such wordplay is fun, to put it mildly, but as with all things in tech, industry marketing and over-discussion have made the term decidedly cringeworthy. When speaking with others about creating something using an LLM, I've either avoided the term completely or breathed a deep sigh, looked down, and reluctantly said the dreaded term.
My History of AI-Assisted Software
I've used LLMs to help build HTML files since I started using such tools in 2022. Though I know HTML and CSS, I find the assistance in finishing these files helpful enough, whether for editing my own website or for making data from one place usable in an entirely new home. For instance, Apple Notes makes it notoriously difficult to export notes into other applications like Obsidian. Yet using an early version of Claude, I was able to create a script (one I couldn't build previously) that took my notes and turned them into Markdown files. (Such a tool now officially exists, built by professionals from scratch.)
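As a rough illustration of what a conversion script like that does, here is a minimal sketch in Python. This is not the script I built with Claude; it assumes notes have already been exported as plain-text files, with the note's title on the first line.

```python
from pathlib import Path


def note_to_markdown(text: str) -> str:
    """Turn a plain-text note into Markdown: the first line becomes an H1 title."""
    lines = text.strip().splitlines()
    if not lines:
        return ""
    title, *body = lines
    # Title, blank separator line, then the note body as-is.
    return "\n".join([f"# {title}", ""] + body).strip() + "\n"


def convert_folder(src: Path, dest: Path) -> int:
    """Convert every .txt note in src into a .md file in dest; return the count."""
    dest.mkdir(parents=True, exist_ok=True)
    count = 0
    for note in src.glob("*.txt"):
        markdown = note_to_markdown(note.read_text(encoding="utf-8"))
        (dest / f"{note.stem}.md").write_text(markdown, encoding="utf-8")
        count += 1
    return count
```

Pointing `convert_folder` at an export folder yields a directory of Markdown files ready to drop into Obsidian's vault. The real work in a tool like this is handling Apple Notes' actual export quirks (attachments, formatting), which this sketch deliberately skips.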
As large language models improved in ability and scope, I found new and inventive ways to use them for creating and fixing things. This includes, but is not limited to, moving my Spotify library to Apple Music, moving my Apple Music library back to Spotify, reformatting XML files to work properly in janky RSS readers, building Python scripts that power the Deepfake News Center, and plenty of other projects that ultimately improved my personal and professional quality of life.
The advent of Claude Code and similar tools (like Gemini CLI), along with the advancement (and likely peak) of large language models, has allowed for far more than simple scripts or webpages. I built iOS apps and my own RSS service to my exacting specifications — all just by repeatedly talking to an LLM on the command line. I made countless tools used behind the scenes at Reality Defender that have ultimately made my day easier without sacrificing quality of work or "the human touch."
The ability to build and fix software at will, despite lacking the ability to write a line of code myself, never existed until recently. Thanks to these new advancements, bountiful tokens and tools, and the promise they all bring, I dove headfirst into building and fixing, showing others how to do so in the process.
Vibe Coding is All I Do
This newfound ability to build, fix, and learn has generated some fantastic new tools, ones I use myself and have also distributed online. It's also how I've spent the majority of my spare time over the last eight-plus months.
As of this writing, we're nearly a third of the way through April and, aside from a few pivotal moments, I feel the year has sped by while I looked down at a keyboard. Even in the brutal months of winter and the first glimpses of spring (a false spring, given the current temperature), I spent nearly every moment outside of work plugging away at new tools, ideas, and the like in Claude Code. While the creation of my own RSS service is nice, as is the iOS app I've since stalled on, these tools already exist for free, likely with decidedly fewer security holes baked in, thanks to the expertise that built them.
I've learned a lot over the last eight months in particular, albeit at the expense of other things that normally spark joy. Though I've sent out quite a few newsletters, posted on LinkedIn, and edited my book, my writing output has dropped significantly compared to before. I also used to finish at least a couple of books a week, whereas I am now still one-third of the way through Stoner, just as I was a month ago. (I decided to supplement it with Ben Lerner's Transcription, which is so short that I'll likely finish it this evening.) Most importantly, time spent with my wife, friends, and family (in that order) has comparatively dropped, purely because I was busy telling a machine to write software and then fixing said software for hours.
Capping Out
It would be untrue and unwise to say I've learned all there is to learn about using large language models to build and improve software — or "vibe coding," as I am loath to say. As LLMs make incremental upgrades and as more powerful tools come around, I will poke and prod them, keep building things at and for work, and linger in them not one minute longer than I need to, proving I can still learn and fix without overstaying my welcome.
I've always felt that AI tools should come into play when there is a problem that needs solving and large language models offer a quick, scalable, and easy solution where all other options fall short. AI tools should not go in search of problems to solve, which is something the majority of AI companies have yet to understand as they try to find their fit in the world.
Over the last eight or so months, I did exactly that: looked at the tools provided to me, figured out what problems they could solve, and created my own bespoke solutions. This mostly resulted in bloated, unnecessary software that already exists in the world, just uniquely my own. Sure, I learned how to build these things, have shared and will continue to share my knowledge with plenty of others (most recently with a journalist in Prospect Park, where we built this in ten minutes), and will build what I and others deem necessary. Between all of that, however, are the innumerable things that brought me joy and wonder prior to my eight-month excursion, which I now have all the time in the world to experience.
Recommendation of the Week
A tried and true classic that I flock to when I need to be relatively offline and wholly enraptured by the world around me.