given that you can't throw a rock without hitting an ai doomer, boomer, or gloomer these days i wanted to spend a minute writing down some thoughts about ai and the framework i use for navigating it.

what is ai anyway?

you cannot, in any way that matters, differentiate between large language models, forecasting models, classifiers, etc. there is no hidden key you can turn that removes one while keeping the others. deep learning, recurrent neural networks, reinforcement learning, whatever. it's all applied research, and there's no wand you can wave that stops one but allows the others (outside of policy levers; i'll discuss those later).

artificial intelligence is everything from chat bots to hurricane prediction to self-driving cars to autocomplete. it's pretty much anything that involves some level of non-deterministic decision making. it's everything you love about computer and everything you hate about computer.

accepting this is the first step towards radical ai centrism.

is ai going to take my job?

you cannot, in any way that matters, have an artificial intelligence system that completely replaces a human worker. humanity is an infinite machine of reconfiguration, exploration, and learning.

there's a difference between your job and your work. ai will almost certainly automate away many work tasks that you currently do. this is a good thing. what is the productive value of my time finding receipts in my inbox and attaching them to an expense report? what is the productive value of my time trying to find one-off answers to questions that i'll never ask again?

this is not a new story. when i was small, my mother worked at a newspaper. i still have a core memory of watching the paper get made; the industry was still in the transitional phase of computerization. stories would be written on a computer, sent to someone to format and print out on paper, and someone else would take all of those printed-out stories, and ads, and other copy and lay them out onto a page using this cool little machine that made things sticky. those pages were then turned into plates which were put onto a big news printing machine.

this was revolutionary to many of the people who worked there. in their lifetimes, typesetting was an actual job. you took the tiny letters out of a case and put them into a thing that would get loaded into a giant stamp and you'd press the actual characters onto paper. what did those people do? they adapted to the automation. they could do other things. the quality of the paper improved.

over the course of my youth this process changed many times. quark xpress and desktop publishing meant you didn't need the intermediate stage any more; you could just lay out the stories into a file, then print that file out onto a plate.

of course, over two decades after this, it looks different again. i don't know how many people work at that newspaper now. it got bought by some giant faceless private equity company that, by my understanding, fired pretty much everyone. this was in the 2000s? 2010s? ai didn't take their jobs, but capitalism and the destruction of local media through consolidation sure as shit did.

back to my point -- i don't recall there being a lot of stress among the typesetters about computers making their job easier. like yeah, it's a trade, but i'm not sure it's particularly ennobling to the human spirit to have to sit there, day in and day out, and place characters in a tray.

ai promises -- and there's some evidence that this is true -- that we'll be able to automate many more tasks than we currently can. it's already happening in software development. it'll certainly happen in other 'knowledge job' tasks. if your job is 'turn this spreadsheet into that spreadsheet' then it's probably at risk, yeah. but your job was already at risk from... idk, ETL? automated reporting has been a thing for a while. if your job is to answer the phones at a call center in winchester, kentucky for like twelve bucks an hour and transfer them to someone else, and the value you provided to a company was that you didn't have an accent and followed directions... well, your job was already at risk, even before they came up with a bot that can do it.

automation does lead to job loss, reassignment, and reskilling needs. thus it has always been, thus it will ever be. spitting on a particular technology because it's somehow uniquely evil vis-à-vis automation is short-sighted at best, and antisocial at worst.

accepting this is the second step towards ai centrism.

is ai going to kill the world?

what does it mean to kill the world? what is the exact, precise level of pain that we must endure or emit in order to prove to ourselves that the world is dead? human history is replete with the end times; each generation channels its own death drive into a particular crisis. laissez les bons temps rouler is memento mori's companion.

doomers will tell you that ai will kill the world because of water, or energy usage, or because it'll turn us into braindead zombies, or because it'll be a paperclip maximizer. they'll tell you that ceos are pushing it to make us all part of an eternal debtor class, yoked in practical slavery to our financial betters.

boomers will tell you that ai is the only thing that can save us. that it'll get good enough to figure out all of our problems, then solve them for us. the rest of the owl is left to your imagination.

i do not believe either of these groups is being completely honest with themselves or anyone else. i tend to believe that doomers are mostly mad about capitalism as it's practiced in the west (fair), and that boomers are mostly engaging in performative play for stupid people with money.

the problem is that stupid people with money aren't that stupid.

as of this writing the nominal returns on 'safe' investments (bonds, stocks, whatever) are about 5% annually over a 10-year horizon. this has come down a bit (and continues to shrink as interest rates are cut) but it's a nice round number for discussion. the biggest thing to keep in mind is that it's effectively free money; you put your money in very safe stuff and you get more money. what is a venture capitalist to do? they need money to make money, and to pry it loose from that baseline they need to promise at least 2x returns.
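to make that concrete, here's the back-of-the-envelope version (the 5% and the 2x are from above; reading the 5% as an annual rate, the 10-year fund life, and the 1-in-10 hit rate are my own illustrative assumptions):

```python
# hurdle-rate arithmetic, sketched with illustrative numbers.
# assumption: the ~5% 'safe' return compounds annually over a
# typical 10-year venture fund life.
safe_rate = 0.05
years = 10

# $1 parked in safe assets grows to ~$1.63 over the fund's life,
# so that's the floor a fund has to clear just to beat doing
# nothing clever at all.
safe_multiple = (1 + safe_rate) ** years
print(f"safe-asset multiple: {safe_multiple:.2f}x")  # ~1.63x

# the vc promises 2x to beat that floor. assumption: if only
# 1 in 10 portfolio bets returns anything, the winners have to
# carry the whole fund on their own.
promised_multiple = 2.0
hit_rate = 0.10
required_winner_multiple = promised_multiple / hit_rate
print(f"each winner needs: {required_winner_multiple:.0f}x")  # 20x
```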

what better, then, than to promise fantastical gains? sure, they might be wrong, but the GPs running the funds get paid either way. the institutional investors aren't gonna lose everything (they still have plenty wrapped up in 'safe' bets after all), but the key insight here imo is that most of what people negatively react to is entirely elite consensus-making for the purpose of attracting more investment to highly speculative VC funds. this has significant downstream impacts (see: why every single product has visible ai stuff in it) but a lot of it isn't really for you.

a lot has been written about the byzantine and unsustainable capital expenditures of the major ai firms, hyperscalers, and neoclouds. people fear a contagion into the broader economy. stats get trotted out like 'ai is half of GDP!' (it's not; it's maybe half of GDP growth over the past little bit, and that growth is only a percentage point or two of the whole economy.)
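to see how far apart those two claims are, run the numbers (all of these are hypothetical, chosen only for round-ish arithmetic, not official stats):

```python
# 'half of gdp growth' vs 'half of gdp', with made-up numbers
gdp = 29e12                 # assume a ~$29 trillion economy
growth_rate = 0.03          # assume ~3% annual growth
ai_share_of_growth = 0.50   # the strong version of the claim

ai_contribution = gdp * growth_rate * ai_share_of_growth

# half of growth works out to ~1.5% of the actual economy
print(f"ai share of growth: {ai_share_of_growth:.0%}")    # 50%
print(f"ai share of gdp:    {ai_contribution / gdp:.1%}")  # 1.5%
```

one of those numbers is scary; the other is a rounding error.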

i'm not an economist but i mostly think these fears are overblown (at least in terms of contagion to the larger economy). the exposure the global market had to housing in 2008 was a lot bigger than what it has to ai.

so, how else is ai gonna kill us? water usage? almonds use more water than data centers. beef uses a staggering amount. we need to green our industry and switch to renewables (and in many cases, we are). practically, though, ai is no greater an evil here than pretty much anything else. watching netflix uses water. your phone uses water. singling out one particular applied tech as somehow more evil than others is short-sighted and leaves you missing the forest for the trees.

what else, hm. AGI? superintelligence? these are a mix of ill-defined research objectives and memes with little grounding in reality. i'm glad that spec-fic authors have gainful employment in ai red teaming. i don't think AGI is a useful framing for applied ai.

accepting all of this is the third step towards ai centrism.

policy levers and the regulatory state

AGI and superintelligence are far better understood as rhetorical weapons deployed to shape elite consensus and build a moat around the frontier ai labs.

what i find unfortunate is that quite a bit of 'anti-ai' sentiment is also aligned with the goals of this elite consensus capture but from the opposite direction.

briefly, i would suggest that the people arguing for the expansion of copyright and the chilling of free expression are doing the work of the lobbyists who want to create a guaranteed backstop for openai/meta/anthropic/whoever. the idea of open-source/open-weight models is more or less anathema to the investment that has been made in these frontier labs. how are you gonna get ROI if someone is giving it away? the natural end-goal here is to have some sort of highly vetted, highly 'aligned' [derogatory], highly scrutinized set of ai providers that do little but extend our current nightmare of centralization and financialization.

there are actually fairly straightforward policy levers to address most current and future harms of automation. we tax rich people more and use that to pay for social programs. we should have UBI -- hell, it should be more than basic -- and build more housing to drive down the cost of homeownership and renting, especially in places that are desirable to live in. we should aggressively fund foundational science and research, including ai research. we should use automation to streamline and reduce regulatory burden. we should do a lot of things! automation can make some of those things much easier!

what i find most distressing about this current moment is that there are extremely real problems that are mostly going unaddressed. i think there's a crisis in the provenance of media. i think there's a crisis in the provenance of everything you read online, and most people aren't equipped to deal with it. i'm not sure there are great regulatory levers we can pull here, but i think the solutions to these problems are going to be discovered via research that's open by default rather than closed. i do not necessarily think that moving back to curation is a net good; we effectively have that now, it's just that our curators all happen to be bigots and ghouls at scale.

the only thing i really know is that we are woefully underprepared for the future, and the future will keep happening regardless of our thoughts on it.

accepting this last point is the final step towards ai centrism.

where do we go from here?

i have no frigging idea. i'm just some nerd with a keyboard. i can tell you what i mourn, though. i mourn the division that seems endemic to broader western internet society. i mourn the ignorance that's held up as strength, and the two-faced hucksters who sell hate and fear out of one side of their mouth while profiting off that image. i mourn the lack of grace in our rituals, and our general inability to touch grass. i mourn myself, for spending a few hours writing this giant stream-of-consciousness post rather than doing something edifying.

i mostly mourn people, though. i mourn the lights who have been snuffed out. i mourn the world that gave them freedom but denied them succor. i mourn the hardening of our spirit in the face of daily horrors. AI centrism will not save them, it will not save me. it is not a path that leads to redemption, i merely hope it is a path that will lead to truth.