Tobold's Blog
Thursday, February 19, 2026
 
Working with cheap idiots

There is another board game drama, because the COO of a large board game company believes that in the future board games will be designed by AI. He said on LinkedIn about AI: "Three years ago it was essentially nothing but a curiosity. Now it's useful but in limited cases. In a year or two it will be as useful as any new hire. A year or two after that it will be as useful as an experienced employee. For a few dollars a day in cost."

I don't care about the drama and calls to boycott the company. But I find it interesting what top managers think about AI. And while I am in no way an expert, I am pretty certain that the above quote reveals a number of very important misunderstandings about the use of AI to replace employees.

First of all, there is the question of how useful AI is and will be as an employee. An AI research institute tried to test that by creating the Remote Labor Index benchmark: it takes real-world jobs that have been posted on freelance platforms like Fiverr and measures how various LLM AI models perform on them compared to human freelancers. In the original paper, the AI did as well as a human freelancer on only 2.5% of the projects, and in a later update with the latest LLM models it still doesn't get better than 3.75%, failing to complete over 96% of projects at a level that would be accepted as commissioned work in a realistic freelancing environment.

Will there be further progress in these LLM AI models? Most certainly! But there are already a lot of indicators that the progress is slowing down. If a future generation of LLM models, with the help of even more data centers and computing power, then manages to do 10% of tasks as well as a human freelancer, that is still not going to be good enough for companies to really replace employees with AI. And that is just for the type of work that has already been identified as well suited to being done remotely, with nothing but a computer. The "robo plumber" that comes to your house, identifies the problem with your faucet, and fixes it is still very much science fiction, and will be for decades to come.

The other big issue with the statement about the future of AI employees is the "few dollars a day" part. The current prices users pay for LLM models do not come close to covering the cost of running them, and that cost is increasing fast. AI is still very much in the honeymoon phase of the enshittification business process, where a service is sold below cost to attract and lock in users. Once the "winning" AI models have been identified and have established a monopoly or oligopoly, the companies will substantially increase prices to recoup their trillion-dollar investments.

Will AI change the economy? Absolutely it will! But we aren't heading for a future where all of us are unemployed. We are heading for a future where most work with any manual component is still done by humans, because even where an AI-controlled robot could do it, that won't be cost efficient. And the type of office work that doesn't require a robot will be done by AI supervised by human handlers. While an LLM is not actually "intelligent", there are some tasks it can clearly do faster and cheaper than a human, especially the generation of text and images. But it has been shown that hallucinations are a fundamental part of the technology, and won't be fixed by future versions. So for any work output that somebody else then has to rely on, a human needs to verify the AI's work, which often takes longer than the AI needed to produce it.

And now we come to the part nobody talks about: so-called bullshit jobs. In any large company, a significant percentage of employees do work that isn't actually essential or useful to the company's bottom line. You don't want AI designing a bridge and then have to rely on that bridge not collapsing. You might want to use AI to make PowerPoint slides for internal company use, or to write long, boring documents for some compliance paperwork task. The less actually useful your current job is, the more likely you are to be replaced by AI in the future. And even then you might be promoted to AI handler, with an increased output of PowerPoint slides and paperwork that isn't essential. The future of AI is us working with cheap idiots that nobody trusts to do the real work, because for the real work the cost of verification is higher than letting a human do the work in the first place.

Comments:
Good point about bullshit jobs. I'm not so sanguine about arts / entertainment, though. Sure, niche board games for enthusiasts will probably still be human-made. But the market for, say, Monopoly could easily be taken over by AI slop.

AI will certainly be creating a lot of the assets for computer games etc. It is already, and would be doing more except for customer resistance - but it's unlikely that the latter will last forever.

And to get outside the internet stuff, I think popular music could be in a fair bit of danger.

[On the plus side, AI scripting can only make most streaming drama better at this point...]
 
I am extremely sceptical about the idea that everybody who produces artwork is an artist. Especially in computer games, there are a lot of people who create visual representations of boring stuff like grass, bushes, and trees. On the other hand, there are people who create, for example, the art for the boss mob, which then has a major impact on the artistic direction of the whole game.

I know that for the boring stuff in indie games, existing stock assets from the graphics engine are often used, because the teams simply don't have the manpower to remake everything in their own style. I would think that AI could actually improve things there.
 
2.5% to 3.75% over one AI generation is a 50% improvement. That should impress you. And those generations are coming faster than ever, with leading LLMs now largely coding their own next version, going at it 24/7, with humans mostly directing and supervising. If 3-4 such generations are now possible per year, and each one were to deliver a 50% improvement, I think you can calculate that even the most optimistic predictions are reachable within a few years.
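As a back-of-the-envelope check on that claim (a sketch, not a prediction: it assumes the x1.5 jump per generation really keeps compounding, which two data points don't establish):

```python
import math

start = 0.025   # RLI success rate two generations ago (2.5%)
growth = 1.5    # 2.5% -> 3.75% is a x1.5 jump per generation
target = 1.0    # matching a human freelancer on every project

# generations needed if the x1.5 jump compounded indefinitely
gens = math.ceil(math.log(target / start) / math.log(growth))
print(gens)                # 10 generations from the 2.5% baseline
print(gens / 4, gens / 3)  # 2.5 to ~3.3 years at 3-4 generations/year
```

Of course the same arithmetic shows why the claim is fragile: success rates cap at 100%, so purely multiplicative growth has to stop well before those ten generations are up.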
 
Software development is a place where LLMs will see very high usage very quickly. They are already able to generate decent code very quickly and can multiply the output of an expensive developer. The dirty secret of programming is that almost none of your work is actually inventing something brand new - it is more like construction work or skilled machining than real invention. The LLMs are trained on billions of lines of code - structured languages that are designed to be understood by a machine, and whose suitability can be externally verified. Writing full programs or even methods from scratch by hand will become defunct for as long as companies are willing to pay for the AI models, with the cost/capability ratio only improving.

(I hate it, but I might as well howl at the tide)
 
This benchmark has only been around for 2 generations. But other benchmarks have shown that the improvement per generation is slowing down. It is getting more and more expensive to improve results.

You are imagining an exponential curve with no upper limit. I believe it is more likely to be an S-curve, and that we have already reached the upper half, where improvements become asymptotic, approaching an upper limit of what is possible.
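To make the shape concrete, here is a toy logistic curve (the parameters are entirely made up; the point is only the shape, not the numbers):

```python
import math

def logistic(t, ceiling=1.0, midpoint=5.0, rate=1.0):
    # S-curve: looks exponential early on, then gains flatten toward the ceiling
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# early generations: each step multiplies performance by a near-constant factor
early = [logistic(t) for t in (0, 1, 2)]
# later generations: the same step buys ever-smaller absolute gains
late = [logistic(t) for t in (8, 9, 10)]

print(early[1] / early[0])                    # ~2.7x per step, looks "exponential"
print(late[1] - late[0], late[2] - late[1])   # shrinking absolute gains
```

From inside the lower half of the curve, the two scenarios are indistinguishable; only the later, flattening part tells you which one you were on.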
 
You touched on what is the core problem with AI as it exists now. How do you make money off of it? The current model is licensing it to other companies. But that is only going to work insofar as other companies get value out of that licensing agreement.

No one is lining up to pay for completely AI generated products yet. We are very far off from wholly generated movies, books or games being anywhere near the quality where someone might actually pay to experience them as anything other than a novelty.

So if Company B is paying Company A for access to an AI model, but Company B is struggling to generate a profit off it or to see substantial productivity gains from that license, Company B will inevitably stop paying for it.

As AI companies increase the cost of accessing their models, which they will absolutely have to do since those costs are obscene and still increasing, the math just doesn't work out.

In the future, it may actually end up being cheaper for a business to pay a human to create its artwork than to pay for a license from an AI company.
 
I have worked in IT for almost 40 years now, and in that period there was a hype cycle every 5 or so years: something management believed would be the answer to all their perceived problems (in hindsight: it was not). The I in AI is a huge overstatement; the current models available to the public are not intelligent at all. A search engine on steroids, useful for research and amusing as a toy. I read somewhere that AI solved mythical math problems. Turns out the solutions were already there; the "AI" was able to wade through all the (scientific) data and present them.

At work we were given the opportunity to work with an AI to help with legacy code (written in RPG). I ran one simple program through the AI and the output was unusable. Several routines were completely missing, as was a critical call to another program. To be fair, this tool was in a beta stage, but the errors are so basic that I would call it pre-alpha at best.

I do see a future use case for this tool (after a lot of refining) and other very specialist tools, but that's about it for LLMs. For companies it is a great excuse to fire employees, the new "outsourcing".
 