Large Language Models and Ethics
Checking in with some news — and a nuanced view on modern AI systems
Housekeeping
It's been some time since I wrote one of these — but I haven't stopped writing.
I launched a blog in the last month. I try to write there at least once a week. Recently, I've written a lot about technology and AI, because these are things that kind of permeate my day-to-day life (and all of our lives, in a way). I'm getting kind of sick of that, though, and will go back to writing about music soon.
If you're just here for things related to music, you should really listen to the new caroline LP or the new EP from Illuminati Hotties. The new record from Ganavya is also fantastic, but that's more or less her default mode at this point. If you're on Spotify, you can also follow my Best Of playlist for this year, which is over 100 songs by now.
If you're here for nuanced takes on things happening in the world and music, I've shared my latest blog post in full below.
-Scott
As far as I've experienced, there's no such thing as ethical usage of large language models.
At the base level, these tools consume dirty sources of power to exist and persist. Their owners and leaders say that for them to continue to exist, they need to completely ignore all existing copyright laws, which they have more or less already done. And while they are power hungry and trained on the sum total of copyrighted works, they not-so-secretly exist primarily to replace skilled labor (mostly of the white-collar variety), which makes major investments in AI infrastructure look like bets that will pay for themselves many times over in saved labor costs.
Yet I continue to use them.
Avoiding the Breadline
I believe I have a fairly strong personal code of ethics and a decent moral compass, most of which was instilled in me by my father at a young age. It is my daily goal to not be an asshole — or at the very least the asshole — in any/all situations. The keyword here is "believe," mind you, and simply talking about my beliefs and trying not to be an asshole does, in some way, make me sound like one.
AI presents a challenge to this. Knowing the many negative impacts AI has on the world is still no match for the need to use LLMs purely to survive in a growth-centric capitalist structure. It is my belief (and the belief of almost all executives today) that not using these tools, now and down the line, will make me a less desirable, less necessary, and less skilled employee, to put it mildly.
As head counts shrink while output goals remain the same or increase, proficient use of generative tools is the difference between being able to provide and thrive versus being unemployed. As someone who's dealt with unemployment and whose wife is currently several months post-layoff, avoiding a scenario where I am unemployable in a field I've spent well over a decade in is one of several reasons why I use these tools.
That's not to say that I overuse these tools or rely entirely on them to do my work. I've watched many of my contemporaries fall into the trap of over-reliance on large language models and their outputs, to the point where, if there is an outage, they are unable to get work done largely due to a lack of know-how. I am not someone who takes an output from an LLM as law, and I make a point of trying to recreate or replicate that output on my own, manually, without an LLM.
I do think we are headed toward a workforce that forgets how to do things or never learns them in the first place, which will be quite troubling sooner rather than later. Yet if using LLMs is what it takes to remain a valued member of the workforce, I will continue to use them to upskill myself. In the last several months, I have learned to be a frontend, backend, and mobile engineer (albeit not a great one), excelled at building sales enablement automations at work, learned quite a lot about specific moments in world history (with verified sources, of course), and created a much easier way for me to write (like so), among many other new or further developed abilities.
Using the Internet Sucks Right Now
I still spend the vast majority of my time using RSS readers to consume the internet. When I'm not using RSS readers, however, I use search like everyone else. The problem with search (Google, Bing, DuckDuckGo, and so on) is that it often returns crappy, lackluster results that have a high chance of being AI-generated themselves.
ChatGPT and Claude are really good replacements for Google. Google knows this and is scared out of its wits, as it should be. The difference between using a search engine and an LLM today is that the former requires you to sift through crap until you find what you might be looking for, while the latter often returns precisely what you're looking for. You need to approach both with the understanding that you might be lied to or misled if you aren't specific or if you simply take the results as law, but popular LLMs are a much better way to cut through noise than any search engine that predates them.
ChatGPT tells me that using its base model typically consumes ~10x–30x more energy per query than a standard Google search. I'm not trying to use mental gymnastics to justify my contribution to the sudden surge in energy usage. I do think that if one were to find what they want faster using LLMs like ChatGPT, it could mean significantly fewer searches (and subsequently fewer page loads) over time, which could reduce some energy usage. I do hope that the computation and energy cost of using such models decreases in the future to the point where they consume less power than the most used search engines, but a man can only dream.
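To make that trade-off concrete, here's a rough back-of-the-envelope sketch in Python. Only the ~10x–30x multiplier comes from the estimate above; the per-query and per-page energy figures (SEARCH_WH, PAGE_LOAD_WH, PAGES_PER_SEARCH) are made-up placeholders for illustration, not measured numbers.

```python
# Back-of-the-envelope sketch of the trade-off described above. Every number
# here is an assumed placeholder for illustration, not a measured figure;
# only the 10x-30x multiplier comes from the estimate quoted above.

SEARCH_WH = 0.3       # assumed watt-hours per traditional search query
PAGE_LOAD_WH = 0.1    # assumed watt-hours per result page visited while sifting
PAGES_PER_SEARCH = 3  # assumed pages clicked through per search before finding an answer


def search_session_energy(searches: int) -> float:
    """Energy for a search session: the queries plus the pages sifted through."""
    return searches * (SEARCH_WH + PAGES_PER_SEARCH * PAGE_LOAD_WH)


def llm_query_energy(multiplier: float) -> float:
    """Energy for a single LLM prompt, expressed as a multiple of one search."""
    return multiplier * SEARCH_WH


for multiplier in (10, 30):
    llm = llm_query_energy(multiplier)
    # How many searches (with their page loads) one prompt must replace to break even.
    breakeven = llm / (SEARCH_WH + PAGES_PER_SEARCH * PAGE_LOAD_WH)
    print(f"{multiplier}x: one prompt ~ {llm:.1f} Wh; breaks even if it replaces "
          f"about {breakeven:.1f} searches")
```

Under those made-up assumptions, a single prompt would need to replace somewhere between five and fifteen searches (and their page loads) to come out ahead, which feels plausible for some questions and wildly optimistic for others.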
--
Using an LLM leaves a constant sour taste in my mouth, one made slightly less horrible knowing that it is the difference between future employment and unemployment in my field. Yes, they are and will continue to be a contributing factor to a worsening climate. Yes, they are trying to put creative workers out of business while stealing their creations. And yes, they simply have no right to exist, full stop.
Yet they do exist, and thanks to everything horrible about capitalism, they are not going away. Pretending that they are not a looming cloud over everything will make participating in such a structure nearly, if not fully, impossible at a time when no viable alternatives exist or are widely implemented.
Creating an environment where creations like LLMs don't exist requires sweeping changes to our way of life that, even as an optimist, I do not see happening within my lifetime. That said, while I will continue to use these tools and learn from them to an extent, I will do so with reluctance, and I will fully support, embrace, and champion any changes on a societal level that would allow for these machines to be turned off forever.