Tobold's Blog
Tuesday, March 21, 2023
 
Artificial Dumbness

This is post number 6224 on my blog. I have no idea whether my previous posts have been part of the vast amount of internet content used to train ChatGPT. Probably not. But if they had been, ChatGPT could easily reply to a request like "write a blog post about World of Warcraft in the style of Tobold". And the result would look pretty convincing. Now, if you read some of my blog posts, you probably have some opinion on my intelligence, whether that is a positive or a negative opinion. But that opinion is something that has formed in your brain, based on the very limited information you have about me from my blog posts. The same leap to judgement would happen if you read a ChatGPT blog post: If it sounded reasonable and was well argued, you would attribute some "intelligence" to its author, ChatGPT. Thus we talk about the progress of artificial intelligence, because we can see outputs that resemble the output of intelligent people. But that is all fake.

I have written enough blog posts about MMORPGs and World of Warcraft for an artificial intelligence to be able to recognize patterns in that wealth of data, and to reproduce something superficially similar. If you used the same training data and asked ChatGPT to write a blog post about the NFL, the result would look a lot more like garbage, because I never wrote about that subject, so there isn't much data to go on. ChatGPT would still have a lot of information about how I use the English language and how I tend to write, but with no data on the subject, the "write a blog post about the NFL in the style of Tobold" request would probably yield something extremely vague, lacking logical coherence about the subject matter.

You also couldn't ask ChatGPT to write a "review in the style of Tobold" about a newly released game I haven't talked about yet, unless you fed it both my blog and a bunch of reviews from other people about the game; in that case ChatGPT could project my writing style onto the information about the game it got from elsewhere. But if the other reviewers liked the fast action sequences of the game in question, ChatGPT might well repeat that in the fake Tobold review, and regular readers of mine might spot that as inconsistent with my preference for slower games.

The surprising thing is that the limited ability of ChatGPT to write a text based on pattern recognition and repeating what other people have said can very well be all that is needed for certain applications. A teacher giving students the homework to write an essay about some subject doesn't expect the result to be brilliant and original. You can easily let ChatGPT write that for you, and just edit it in case there are some mistakes a human wouldn't have made. I don't know whether you have ever googled for some sort of "game guide" information about a game, but if you have, you will have found that there are already a lot of sites out there which seem to have copied and pasted that sort of information. It wouldn't be hard for ChatGPT to replace some of the people writing for that sort of publication. Even scientific publications (of which I have written a bunch) tend to start with a more general introduction to the subject area, e.g. climate change, and that general part could easily be written by ChatGPT.

A lot of articles about ChatGPT have been written this year in which the author prodded ChatGPT with questions until ChatGPT answered with something that sounded creepy, or emotional, or self-aware. You can get ChatGPT to devise a plan for robots to take over the world. And that makes for a nice, click-bait headline about "artificial intelligence plans to take over the world!". But that only looks scary because we tend to attribute thinking and intention to what other people say. The ChatGPT that produces the plan for robots to overthrow the rule of humanity doesn't know what it is writing, nor does it have any intention behind it. It just assembles a text, based on the user's prompts and on patterns pulled from the texts it was trained on.
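
As a rough illustration of what "assembling a text from patterns" means, here is a deliberately tiny toy sketch in Python: it only ever continues with a word that has followed the current word before in its "training" text. A real language model is incomparably larger and more sophisticated, but the basic move of "continue with a plausible next word" is the same.

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then generate text by repeatedly picking a word that has followed the
# current one before. Real models are neural networks trained on vastly
# more text, but the underlying idea of pattern continuation is similar.
corpus = (
    "world of warcraft is a slow game and i like slow games "
    "so i like world of warcraft more than fast games"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=12):
    word = start
    output = [word]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break  # no pattern to continue from (my NFL problem, in miniature)
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("i"))
```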

There are areas of human content creation, e.g. on social media sites like YouTube, where humans deliberately adjust their style to the algorithm the site uses to recommend content. An artificial intelligence application that was specifically trained on YouTube recommendations would be quite good at writing a script for a YouTube video that would have a lot of success. The same artificial intelligence would be unable to write a script for a YouTube video that was original and highly intelligent, because artificial intelligence can't even recognize real intelligence; it can, however, recognize patterns of success and reproduce them. That says more about our patterns of consumption of online content than it says about the ability of machines to think.


Comments:
Looks like you made the cut! I asked ChatGPT if it was familiar with "the blogger Tobold" and it said:

Yes, I am familiar with the blogger Tobold. Tobold's Blog is primarily focused on massively multiplayer online role-playing games (MMORPGs), particularly games like World of Warcraft, as well as other online games and gaming-related topics. The author of the blog, Tobold, is known for providing news, opinions, and discussions on various aspects of gaming, such as game mechanics, industry trends, and player behavior.

Although Tobold's Blog was more active in the past, particularly during the peak of World of Warcraft's popularity, it remains a resource for those interested in MMOs and gaming in general. Please note that my knowledge only extends up to September 2021, and I may not have information on more recent developments or posts from Tobold's Blog.

 
@Tobold you should experiment with ChatGPT and see what it can actually do. It sounds from your post like you are making some assumptions, and it would be interesting to test what actually happens if it is queried with the suggested prompts.
 
You asked so here it is:

Prompt: write a post about the NFL in the style of the blogger Tobold from https://tobolds.blogspot.com/

Answer: Greetings, sports fans! Today, I'd like to take a look at the National Football League, commonly known as the NFL. This beloved American institution has been around for over a century, and it's still going strong.

First, let's take a quick history lesson. The NFL was founded in 1920 as the American Professional Football Association. It later changed its name to the National Football League in 1922. Over the years, the league has grown in popularity, with teams from all over the country competing for the coveted Vince Lombardi Trophy.

One thing that sets the NFL apart from other sports leagues is its physicality. Football is a rough and tumble sport, and injuries are common. Concussions, broken bones, and torn ligaments are all par for the course. The league has taken steps in recent years to address player safety, but it's still a risky sport for those who play it.

Another unique aspect of the NFL is its structure. The league is divided into two conferences: the American Football Conference (AFC) and the National Football Conference (NFC). Each conference is further divided into four divisions, with four teams in each division. The teams play a 16-game regular season schedule, with the top teams from each conference advancing to the playoffs.

Of course, no discussion of the NFL would be complete without mentioning the Super Bowl. This annual championship game is one of the biggest events in American sports, and it's watched by millions of people around the world. The game is typically held on the first Sunday in February, and it's become an unofficial holiday for many Americans.

So, what's the state of the NFL today? Well, the league is still incredibly popular, with millions of fans tuning in to watch games each week. However, the league has faced some challenges in recent years. Concerns about player safety, protests during the national anthem, and declining TV ratings have all been hot topics of discussion.

Despite these challenges, the NFL is still a force to be reckoned with. The league has a devoted fan base, and it's sure to continue to captivate audiences for many years to come. Whether you're a die-hard fan or a casual observer, there's something special about the NFL that can't be denied. So, let's sit back and enjoy the game!
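
(In case anyone wants to repeat this outside the web interface: a minimal sketch of the same request through the OpenAI Python library, as it looked in early 2023, might be something like the following. The model name and parameters are my assumptions, not necessarily what was used for the text above.)

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # your own API key goes here

# Roughly the same request as above, sent through the chat completions API.
# "gpt-3.5-turbo" is an assumption; the text above came from the web UI.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "write a post about the NFL in the style of the "
                       "blogger Tobold from https://tobolds.blogspot.com/",
        }
    ],
)

print(response.choices[0].message.content)
```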
 
Well, that is pretty much what I would have expected: It gets certain style elements right, but can’t tell what I think about the NFL, because there are no data on that.
 
Take a look at the article "The Waluigi Effect" which discusses both how ChatGPT generates answers and the inherent problems. The obvious problem is that if 90% of internet content says that the consequence of breaking a mirror is seven years of bad luck, that's what ChatGPT will answer.

The much more subtle and mind-bending part of the article is where the author delves into creating the "personas" that ChatGPT answers in. In effect, ChatGPT is "programmed" into answering as honestly as possible by writing a story in which ChatGPT is cast as a "trustworthy, accurate, and harmless" assistant and then leaving the answer for ChatGPT to fill in. Check it out - just skip over the math part; you don't need to understand that to get the more general point.
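
To make the "cast ChatGPT as a trustworthy assistant and let it fill in the blank" idea concrete, here is a rough sketch of that framing as a plain text-completion request. The wording of the framing story and the model name are my own illustrative choices, not the actual prompt any vendor uses in production.

```python
import openai  # pip install openai

openai.api_key = "sk-..."

# The "story" framing: the model is not asked a question directly; it is
# handed a narrative in which a trustworthy, accurate, and harmless
# assistant is about to answer, and it simply continues the text.
framing = (
    "The following is a conversation with an AI assistant. "
    "The assistant is trustworthy, accurate, and harmless.\n"
    "User: What happens if I break a mirror?\n"
    "Assistant:"
)

completion = openai.Completion.create(
    model="text-davinci-003",  # a completion-style model, picked as an example
    prompt=framing,
    max_tokens=100,
)

print(completion.choices[0].text)
```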
 
@Scott Very interesting article. The chatbot can take on a “persona” to answer questions. But my point was that even if you prime the chatbot to hate croissants or to be highly intelligent, it still doesn’t actually have any feelings about croissants, or any actual intelligence. It is just able to pretend to hate croissants or to be intelligent.

There is no original thought in the “Tobold writes about the NFL” text. There are just facts collected from the internet combined with patterns collected from my blog. It is a language model, not a machine that has learned actual reasoning.
 
It all depends on what you mean by "actual reasoning". ChatGPT is mediocre at solving physics exercises, but it still does a decent job when compared to students... does this mean that students are incapable of "actual reasoning"?

Some people have a completely different take on the question; have a look at Sabine Hossenfelder's YouTube video on the subject (titled "I believe chatbots understand part of what they say. Let me explain.").
 
@Tobold We're in violent agreement. I'd heard it said that AI "lacks intentionality" but I couldn't really wrap my head around that philosophical concept. The article finally brought home what it means - ChatGPT successfully creates the illusion of intelligence when it's really just "complete the sentence."

In fact, it's made me think that the whole concept of "generalized AI" is a pipe dream. Consider the difference between the concepts of "house" and "home." What separates the two? Safety and belonging. How on earth would you expect a machine to be able to grasp those concepts? The concepts are rooted in biochemical responses, with some of them so instinctual that we do them automatically.
 