Tobold's Blog
Saturday, February 10, 2024
 
General vs. specific AI

Even intelligent people rarely say intelligent things. In our day-to-day interactions with others, the level of intelligence required is extremely low, because we rarely debate complicated subjects in private conversations. We are more likely to make small talk, or stick to practical communications like "pass me the milk, please!". This is why we can chat with a generative chatbot like ChatGPT and come away thinking that this AI is intelligent: it sounds much like a human we make small talk with. We overestimate our ability to gauge another person's intelligence by talking with them, which is why companies hire people based on interviews, and only later find out that the person they hired isn't in fact suitable for the job at all.

In a widely published case last year, a lawyer used ChatGPT to write a legal brief. The document *looked* like a legal brief, but in fact the case citations in it were simply made up by the artificial intelligence. ChatGPT is a general AI, which is good at *sounding* real. But it isn't a specific AI, and has no specific knowledge. It doesn't have a legal case citation database, or any understanding of which case to cite in support of which legal opinion. It just put together words that looked like case citations. And while 2023 was a year in which AI was much talked about, and much progress was shown, it was all about general AI: machines sounding human, without actual knowledge behind their words.

I was reminded of how much specific AI is lagging behind by watching some video content from people who had been given a beta version of Millennia. They were allowed to play the game beyond turn 60, where the public demo stops, although still restricted to the third age. But the further beyond those 60 turns you play, the more two things become obvious: Millennia is a very interesting game, with some very interesting deep game mechanics and complex options; and the specific AI playing your opponents in the game is dumb as a brick, and doesn't even manage relatively simple tasks very well.

That has been a problem with 4X and grand strategy games for decades. These are games in which a single playthrough takes a rather large number of hours, which makes them hard to set up for multiplayer. So a lot of people play a lot of games against AI opponents, and these generally aren't very good, in spite of being specifically designed for just one game. They generally get by with a mixture of plain cheating and hiding their stupidity from the player behind the fog of war. Age of Wonders 4 was lauded for greatly improving its AI in a patch last year, when all that patch did was increase the priority of AI units attacking already wounded units, thus leading to some basic focus fire. Actually losing units to an AI in a pitched battle was a noticeable step up.

I really wish game developers would put more manpower into the development of the specific AI that plays the opponents. The bar is relatively low. Specific AI that plays chess at grandmaster level exists, but we neither need nor want that in our AI opponents. In most 4X and strategy games I have played, I would already be extremely happy if an AI opponent that declared war on me were able to coordinate an attack against me with several separate armies. An AI that could play as well as a totally mediocre human would be a quantum leap in specific strategy game AI.

The good news for half the working population is that general AI will not be able in the foreseeable future to take your job if your job requires specific knowledge. Even a lowly paralegal would have done a better job with that legal brief than ChatGPT. There is no way ChatGPT could repair your car or fix your sink. Only if your current job consists mostly of spouting general phrases are you in danger of being replaced by a general AI bot. Journalists, customer service representatives, and politicians, beware!

Comments:
I think that while 'general AI' may be used sometimes to indicate that ChatGPT can chat credibly about anything that humans have ever chatted about, the usual meaning is an intelligence like ours that can reason about anything.

There could be some use for that in 4X-type games, although an Alpha derivative could probably do a decent job. It's possible that an Alpha solution (as for Go and Chess) would be fragile against small variations in the rules, where a more general intelligence would quickly grasp the implications of a change.
 
Tobold: "[...] a totally mediocre human [...]"

But how do you specify and quantify a mediocre human?
Let's take chess for simplicity.
A dumb human/AI is simple: just pick a piece and perform a legal move. So you iterate over all pieces to build your set of legal moves, then pick one at random.
A grandmaster human/AI is also simple: play the winning move. You iterate over all pieces and do a path analysis on your possible moves and pick the one that will win.
But what is a mediocre chess human/AI? Your moves must look like they are smart but can't be smart enough to actually win all the time. So you have your set of moves, you know which one will win (but you don't want to pick that) and all the others are by definition "bad" moves - but they are only bad if your opponent plays optimally. If they move their king into an obvious check situation, do you then take that or do you perform a dumb move?

For us humans "mediocre" is easy to judge: you look at the outcome - but that is post hoc.
Try to deliberately do a mediocre job. Write a mediocre post, cook a mediocre meal, do the dishes mediocrely, etc.
We think our moves are the winning ones according to our judgement of the situation at the time, but we often lack all the information or don't consider important factors, so something is missing.
 
Try to deliberately do a mediocre job.

Been there, done that. And I think so has everyone who ever got a work assignment from his boss. When planning how to do a task assigned to you, you automatically consider how much effort you want to put into it. Is it something where doing a good job is likely to be noticed, or something you personally think is important? Or is it a menial task where nobody cares how well the job is done? Sometimes you just keep your powder dry, do an unimportant task with minimal effort and predictably mediocre results, and save your energy for something more important.
 
I guess when you are in a position where you can blend with the masses, that will work.
For my assignments "mediocre" will be noticed as there aren't many people around and missing the brief is visible (it also leads to me doing the corrections) but I see that in the end the sum of actions is probably also just mediocre.

But I find it hard to put that into a formula for an AI to follow: produce wasteful units? Spend resources on irrelevant things? And how do you make it look like effort has been put in?
 
I think an AI that calculates the best move for a game would give a score to each possibility. Playing mediocre would then be choosing a random move among all moves that score above a certain threshold, instead of just taking the one with the best score.
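A minimal sketch of that idea in Python (the move names, scores, and margin are invented purely for illustration): keep every move whose score is within some margin of the best one, then pick among those at random.

```python
import random

def pick_mediocre_move(scored_moves, margin=0.3):
    """Pick randomly among moves scoring within `margin` of the best one.

    scored_moves: dict mapping a move to its engine score (higher is better).
    A larger margin admits weaker moves, so it acts as a 'mediocrity' dial.
    """
    best = max(scored_moves.values())
    # Keep every move close enough to the best; random.choice blurs the rest.
    candidates = [m for m, s in scored_moves.items() if s >= best - margin]
    return random.choice(candidates)

# Invented scores for four hypothetical legal moves:
moves = {"Nf3": 0.9, "e4": 1.0, "a3": 0.2, "Qh5": 0.5}
print(pick_mediocre_move(moves))  # prints either "Nf3" or "e4"
```

Tuning the margin (or the threshold, as suggested above) would let the same engine impersonate anything from near-random to near-optimal play.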
 
The fact that most game AI still sucks so bad is one of my personal pet peeves. I remember spending days learning all the complexities of Stellaris some years back, only to learn that once I knew how to play it, the AI was so awful it was unplayable, short of letting it outproduce me by a vast margin at a very high difficulty setting. Most games don't even make an effort on their AI.

But, you're greatly underselling what AI is already capable of. Using LLMs is not an appropriate approach to play a strategy game well. The fact that 99% of computer strategy games have neither the expertise nor the budget to challenge a player on equal terms doesn't mean it couldn't be done.

Chess engines have played at grandmaster level or above for almost 30 years now! Deep Blue vs. Kasparov was 1996-97. Check what an elite custom effort like Google DeepMind's AlphaStar was able to achieve 5 years ago in StarCraft II. And that's a fast real-time game with huge fog of war and dozens of varying units in simultaneous motion, which it played at an elite level after a few months of training. That we still can't make decent AI for turn-based games in 2024 is sad. Not to argue it's simple, but doing it well needs to become a recognized, specialized discipline.
 
4x games are also just very, very complex so any attempt at a robust AI would require a lot of money and I guess devs or publishers either don't want to spend money on that or they feel it isn't worth it.

Chess is a very simple game with a limited ruleset compared to the options a "true" AI would have to calculate in any 4x game.
 
I can't remember the game and a brief Google search didn't help me, but I seem to recall that quite a few years ago (90s maybe?) there was a 4X game designed from the ground up to be able to have a good computer opponent. Anyone else know what I'm talking about?
 
I did a curious experiment and tried to play my homebrew tabletop RPG system with AI.

After I explained the rules, ChatGPT was able to use the mechanics correctly, at least if prompted immediately. It "forgets" details if you don't mention a rule for a while though, and in time the rules completely fade from its memory.

The biggest problem is that ChatGPT fixates on details, but can't formulate strategic plans, and because of that it fails as a player. It literally behaves like a disinterested player glued to their phone: they can answer an immediate question if you poll them, but have no clue about the big picture.

Interestingly, asking ChatGPT to be the GM turned out much better. While for players strategic thinking is a must, for GMs it's just nice to have. The worst-case result for a GM without strategy is a somewhat random collection of unrelated encounters that are nevertheless playable and keep the game moving. The worst-case outcome for a player who can't think strategically is the death of their character. I was quite surprised to see such results, as GMing is usually referred to as a more cognitively demanding activity than playing.

Anyways, for me as a GM, AI looks like a very useful tool for trying out rules from various existing systems and my own hacks. Experiencing your own rules from the players' perspective used to be a luxury that few homebrew designers ever got: you had to teach somebody your system, and persuade them to GM it just for the tests, with full expectations that it might be crap. AI has no problems with that.
 
>4X game designed from the ground up to be able to have a good computer opponent
Could it be AI War?
 
@Camo: maybe you never did a mediocre job at work - but surely you have at least made a mediocre meal for yourself when you need fuel but can't be bothered to put in much effort?

As for good AIs in 4X, one option is asymmetry, in which the AI plays by different rules instead of trying to be like another human player. In this case you can have a challenge without the need for a good simulation of human intelligence.
 
AI isn't really an issue of "General vs. specific AI"

The FAKE narrative (ie propaganda) nearly everyone, including "alternative news" sources, have been spreading is that the TRULY big threat is that AI just creates utter chaos in society and that it might achieve control over humans. Therefore it must be regulated.

The TRUE narrative (ie empirical reality) virtually no one talks about or spreads is that the TRULY big threat with AI is that AI allows the governing psychopaths-in-power to materialize their ultimate wet dream to control and enslave everyone and everything on the whole planet, a process that's long been ongoing in front of everyone's "awake" (=sleeping, dumb) nose .... www.CovidTruthBeKnown.com (or https://www.rolf-hefti.com/covid-19-coronavirus.html)

Like with every criminal inhumane self-concerned agenda of theirs the psychopaths-in-control sell and propagandize AI to the timelessly foolish (="awake") public with total lies such as AI being the benign means to connect, unite, transform, benefit, and save humanity.

The official narrative is… “trust official science” and "trust the authorities" but as with these and all other "official narratives" they want you to trust and believe …

"We'll know our Disinformation Program is complete when everything the American public [and global public] believes is false." ---William Casey, a former CIA director=a leading psychopathic criminal of the genocidal US regime

The proof is in the pudding... ask yourself, "how is the hacking of the planet going so far? Has it increased or crushed personal freedom?"

Since many of the same criminal establishment "expert" psychopaths, such as Musk (https://archive.ph/9ZNsL) and Harari (Harari is the psychopath affiliated with Schwab's WEF [https://www.bitchute.com/video/Alhj4UwNWp2m]) or Geoffrey Hinton, the "godfather of AI", who have for many years helped develop, promote, and invest in AI now suddenly supposedly have a change of heart (they grew a conscience overnight) and warn the public about AI, it's clear their current call for a temporary AI ban and/or its regulation is just a manipulative tactic to misdirect and deceive the public, once again.

This scheme is part of the Hegelian Dialectic in action: problem-reaction-solution.

This "warning about AI" campaign is meant to raise public fear/hype panic about an alleged big "PROBLEM" (these psychopaths helped to create in the first place!) so the public demands (REACTION) the governments regulate and control this technology =they provide the "SOLUTION' FOR THEIR OWN INTERESTS AND AGENDAS... because... all governments are owned and controlled by the leading psychopaths-in-power (see CovidTruthBeKnown.com).

What a convenient self-serving trickery ... of the ever foolish public.

"AI responds according to the “rules” created by the programmers who are in turn owned by the people who pay their salaries. This is precisely why Globalists want an AI controlled society- rules for serfs, exceptions for the aristocracy." ---Unknown

"Almost all AI systems today learn when and what their human designers or users want." ---Ali Minai, Ph.D., American Professor of Computer Science, 2023

“Who masters those technologies [=artificial intelligence (AI), chatbots, and digital identities] —in some way— will be the master of the world.” --- Klaus Schwab, at the World Government Summit in Dubai, 2023

“COVID is critical because this is what convinces people to accept, to legitimize, total biometric surveillance.” --- Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum [https://archive.md/vrZGf]

"The whole idea that humans have this soul, or spirit, or free will ... that's over." --- Yuval Noah Harari, member of the dictatorial ruling mafia of psychopaths, World Economic Forum [https://archive.md/vrZGf]
 
I wonder how the crazy conspiracy theorists found their way to this remote corner of the internet that isn't exactly in line with their beliefs.
 
Likely some script looking for key words that we just so happened to use.
 
About chess: one easy way to make a mediocre player is to limit the number of steps it predicts. A good player is able to project ahead 3 or 5 moves (I guess, I am not good), while a mediocre one can only evaluate the next move of their opponent.
For AI in general, limiting memory, the size of the neural network, or the training dataset are all different ways of generating worse results, without first having to find the optimal one. For chess, you can forbid the AI from predicting too many branches.
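The depth idea above can be sketched as a toy depth-limited minimax in Python (the game tree, node names, and scores are entirely made up for illustration): the same engine plays worse simply because it is allowed to look ahead fewer plies.

```python
def minimax(node, depth, maximizing, children, score):
    """Depth-limited minimax: lowering `depth` produces a weaker player.

    children(node) lists successor states; score(node) is a static
    evaluation from the maximizing player's point of view.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return score(node)  # stop at the search horizon or at a leaf
    values = [minimax(k, depth - 1, not maximizing, children, score) for k in kids]
    return max(values) if maximizing else min(values)

# A made-up game tree: "root" has moves "a" and "b", each with two replies.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a": 1, "b": 2, "a1": 3, "a2": 5, "b1": -1, "b2": 9}
children = lambda n: tree.get(n, [])
score = lambda n: scores.get(n, 0)

# A shallow search is fooled by "b" looking better at first glance...
print(minimax("root", 1, True, children, score))  # 2 (would prefer "b")
# ...while looking one ply deeper reveals "a" is the safer line.
print(minimax("root", 2, True, children, score))  # 3 (would prefer "a")
```

Capping the depth (or, as suggested above, the branching) is a cheap dial for producing plausibly mediocre play without ever computing the optimal move first.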
 