The Rot That Is “AI” Keeps Spreading Into Things I Love

Square Enix, the company that publishes (and makes, for the most part, though it's a little more complicated than that because of how companies are structured) Final Fantasy 14, recently announced that they're partnering with a bunch of local academics in Japan to study current "AI" products in an effort to have seventy percent of all Quality Assurance (testing, aka the work I do) and debugging fully automated sometime during 2027. Now, this is, of course, patently ridiculous and not something that will actually work out, or that is even possible in the way they want it to work, but the past years of watching the tech industry get gutted and people begin to lose their jobs to "AI" bullshit have taught me the obvious lesson that it doesn't need to work for it to be adopted. It doesn't need to be good for people to lose their jobs. It just needs to be good enough that someone can pretend it's great and fire all of their QA staff in order to "reduce costs" so that shareholders can get a fraction of a decimal percent more money at the end of the fiscal year. It's bleak, I know, but all of the studies done on it, every post-mortem on "AI workflows," and almost every single company to adopt it have shown that it is a neutral move at BEST and a net loss in every other case. People hate needing to check over the work of "AI" because of how often it's wrong. People hate being forced to use tools that don't work properly. As I watch it chip away at the industry I work in, and watch it start being aimed specifically at the work I do, I can't help but feel like I'm staring down the barrel of a loaded gun every time it comes up. Which means that, every time someone jokes about it trying to take my job, I feel like they're joking about the gun pointed at me going off, and then insisting that it will never actually happen whenever I try to talk about how all this feels or why it's bad to have around.

I'm so tired of needing to get people to look at the whole picture. I brought up the issue of what Square Enix is doing in my FF14 Discord, and while a couple of people have given the sympathetic "AI sucks" type responses I was looking for, more people have tried to brush it off, saying there's no way it'll happen because it literally can't do the work required to properly test things. All while ignoring my comment that I have the job they're trying to destroy and that "AI" not working correctly has never stopped any executive from firing people or fully adopting it into their company. Yes, absolutely, it can't do my job, but that's not going to help me when someone who thinks it can do my job fires me just long enough to find out that their fancy "AI" tool actually can't do it, and then tries to hire me back for worse pay (which is a thing that is already happening in the tech industry)! I'm still going to be jobless! Whatever it was assigned to work on will still be shit! And it will happen to games workers even more easily than it will happen to me (I have a bit more job security than most testers because whatever takes my place has to be able to stand up to the pressure of people possibly dying if things go badly enough). It's not like public outcry against price increases, paid DLC, always-online "play," and all that other shit has ever stopped a games company from continuing to do that stuff. Why would one Triple-A game being a "failure" stop them from continuing to use "AI," or even doubling down on it? It's not like the shittiest game launches in history have stopped any company from continuing to launch buggy games. None of these companies learn from each other, and every single one of them is convinced that they're the exception; that they're the one that everything is going to work out for.
If one of them manages to figure it out plausibly enough to release a game, then they’re all going to do it and whatever remains of the games industry at that point will probably spiral and collapse. As will a lot of other tech industries, I’m sure, since they wouldn’t hesitate to remove all QA workers either if they think they can get away with it.

Just yesterday, literally less than 30 hours ago, I had a random thought about some software I was testing and found a bug that wasn't really a bug but a design oversight, one that could easily have killed someone if the conditions were right and it had made it out of the testing environment I use (an environment I've insisted on maintaining for years in order to, no joke, avoid bodily harm to myself or others in cases exactly like this one). No algorithm would have even had that thought. No one but a human being with my degree of expertise in the product being tested, and a great deal of knowledge about both the industry I work in (R&D) and the industry my company serves ([REDACTED]), would ever have thought of this. Hell, this software got through design, programming, and review without anyone thinking of this potential oversight until it got to me, so it also takes someone who has literally trained their brain to push in unexpected and strange directions. All of which "AI" could never do, because the current garbage-spewing text-generators can't actually do anything new or novel. All they can do is cobble together stuff that already exists into arrangements that might feel novel but surely already existed prior to that moment, given how often it just reproduces something that already exists. But that's still good enough for a lot of people making decisions at the top of a corporation who have no idea what conditions are like below them and no idea what it takes to make things. All they know is that letting people go boosts their company's numbers and "AI" is a great tool to use as an excuse to let people go. And, often, to hire them again later for less money, because the industry we all work in is slowly drying up as companies shutter or merge, so everyone is desperate for the limited pool of jobs that remains.

This is all just so draining. I'm so tired of feeling alone in my understanding of this problem. I'm so tired of needing to at least act like the people I'm talking to aren't full of shit every time they talk about some new "AI" feature they love, or how, actually, it's not that bad and I should probably play around with it, get a feel for it, and surely I'd change my tune once I saw what it could do. I love how I'm supposed to take their word for it but they're never supposed to take mine. It definitely doesn't make me want to scream and dash out into the cold fall forests of where I live and disappear until I'm either claimed by the elements or this shitty bubble has popped. I can't even engage people about the fact that every leader of every major "AI" company has openly admitted that if they were forced to pay for all the information they stole from writers, artists, and various makers of things, it would ruin them, since their business isn't viable at all and is instantly made unviable by the expectation that they pay for all the creative works they've consumed to make their shitty garbage bot capable of pretending it works. No one wants to think about that! No one wants to admit that, maybe, they got caught up in a stupid fad! No one wants to think that they were wrong to get excited about something, that they fell for the snake oil salesperson's pitch! So I've got basically no one to talk to about this stuff, and I have to content myself with ranting about it on my blog and occasionally losing my temper about it in a chat with a friend who kinda gets it but isn't faced with it as pervasively as I am and can just not think about it most of the time. Some day I'm going to leave this job and never use a computer for work again. I can't wait for that day.
