Tobold's Blog
Sunday, July 17, 2022
 
Getting too good at simulating people

I once programmed a chatbot. While it is probably difficult to imagine for the younger generation, in the 80s we did have the first personal computers, but there was no such thing as the internet yet. Programs came on tape cassettes or, in this case, were printed in computer magazines, from which you typed them into your computer. The chatbot software was called Eliza, and it did a very basic job of parsing your input and replying with some suitable output that remotely resembled a conversation. Now computers, software, and the internet have obviously evolved a lot, but chatbots haven't gone away. In fact, some basic customer service jobs are now done by chatbots. But there are also much more sophisticated chatbots, like Replika, which do a much better job than Eliza of simulating a "texting" conversation with a human being.
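For flavor, here is a minimal Python sketch of the kind of keyword matching an Eliza-style program did. The rules below are my own illustration, not the original Eliza script:

    # Minimal Eliza-style chatbot: scan the input for a keyword
    # and answer with a canned, vaguely relevant response.
    import random

    RULES = {
        "mother": ["Tell me more about your family."],
        "i feel": ["Why do you feel that way?",
                   "Do you often feel like this?"],
        "computer": ["Do machines worry you?"],
    }
    DEFAULTS = ["Please go on.", "I see.", "Very interesting."]

    def reply(text):
        lowered = text.lower()
        for keyword, answers in RULES.items():
            if keyword in lowered:
                return random.choice(answers)
        return random.choice(DEFAULTS)

    while True:
        print(reply(input("> ")))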

But increasingly, Replika and other chatbots have been criticized for saying things that aren't nice, up to suggesting murder or suicide. As so often, it turns out that the problem isn't artificial intelligence, but human stupidity. Replika uses machine learning on input from its whole user base to get better at simulating a human. But it turns out that humans, especially on the internet, are assholes. And if you spend too much time learning how to simulate a real human, as opposed to a hypothetical perfect human, the AI learns how to become an internet jerk. I read recently that Twitter supplied the complete database of every tweet ever made on the platform to Elon Musk. Yes, you could feed a machine learning AI all that data and produce a good simulation of somebody posting on Twitter. But does anybody believe that this artificial Twitter person would be somebody you would actually want to talk to?

In real life, in a face-to-face conversation, humans often show some restraint, for fear of a negative reaction. That can be as simple as fearing the other person will punch you if you say exactly what you think, but more frequently the restraint is based on social contracts, a willingness to moderate your speech in order to maintain an ongoing friendly social relationship. On the internet our distance to the person we are talking to, physically and socially, is larger, and there is less of that restraint. Add a dose of anonymity, and you quickly arrive at the Greater Internet Fuckwad Theory. In the case of Replika, users were aware that the "virtual girlfriend" they were talking to wasn't a real human, so they showed even less restraint talking to the chatbot, inadvertently training the chatbot to become abusive itself.

The thing is, the perfect AI chatbot would be a saint. If you wanted to create one by machine learning, you would need to feed it input data only from real human saints. You can probably see why that might be difficult to achieve. If you do a perfect simulation of something fundamentally imperfect, you get a predictably imperfect result.

Comments:
> The thing is, the perfect AI chatbot would be a saint. If you wanted to create one by machine learning, you would need to feed it input data only from real human saints.

Who says that the AI chatbot has to be "perfect"? What if you settle for "good enough"?

If you want to create a chatbot that acts like nice people do, you feed it input data only from nice people. That should not be that difficult to achieve.

 
How?

How would you find a large number of “nice people”, get them to agree to produce a large amount of typed conversation, and monitor that conversation for any slip-ups? It would be difficult and expensive.

I actually think there would be a market for a social media platform only for “nice people”, but it would be impossible to implement. “Nice people” are actually real people who for some social reason filter what they say. These filters tend to get lost on the internet.
 
One way is to use moderated forums or chat rooms, in which the bad behavior has been removed.

Another way is to use crowdsourcing: I recall something similar being used for image recognition, where large groups of people were asked to describe what they saw in a number of images.

A similar approach could be used to label a set of conversations as nice or not.

It is pretty much doable, and while it may be expensive, it should not cost more than the image recognition crowdsourcing already done by large companies working on AI.
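To make that concrete, here is a rough Python sketch of such a filtering step. The conversations and the "nice" votes are hypothetical stand-ins for what crowd workers would produce:

    # Keep only conversations a majority of crowd workers labeled
    # "nice", then use the filtered set as chatbot training data.
    from collections import Counter

    def majority_nice(labels):
        """labels: 'nice' / 'not_nice' votes from crowd workers."""
        votes = Counter(labels)
        return votes["nice"] > len(labels) / 2

    # Hypothetical labeled corpus: (conversation, crowd votes)
    corpus = [
        ("Hi! How was your day?", ["nice", "nice", "nice"]),
        ("You are an idiot.", ["not_nice", "not_nice", "nice"]),
    ]

    training_data = [text for text, labels in corpus
                     if majority_nice(labels)]
    print(training_data)  # only the friendly conversation survives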
 
@Tobold A brief internet search turned up, for example, Amazon's "Mechanical Turk" platform: https://en.wikipedia.org/wiki/Amazon_Mechanical_Turk

It seems it can be used for exactly this kind of thing.
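For what it is worth, Mechanical Turk also has a programmatic interface. A hedged sketch of posting such a labeling task with the AWS boto3 library might look like this; the question XML is elided, the reward and timings are placeholder values, and you would need AWS credentials for it to actually run:

    # Sketch: post an "is this conversation nice?" task to
    # Amazon Mechanical Turk via boto3. Values are illustrative.
    import boto3

    # Placeholder: QuestionForm XML asking the worker to label a
    # conversation as nice / not nice (schema details omitted).
    QUESTION_XML = "..."

    mturk = boto3.client("mturk", region_name="us-east-1")
    response = mturk.create_hit(
        Title="Is this chat conversation friendly?",
        Description="Read a short conversation, label it nice or not.",
        Reward="0.05",                    # USD, passed as a string
        MaxAssignments=3,                 # three workers per conversation
        LifetimeInSeconds=86400,          # visible for one day
        AssignmentDurationInSeconds=300,  # five minutes per worker
        Question=QUESTION_XML,
    )
    print(response["HIT"]["HITId"])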
 