Tobold's Blog
Saturday, April 22, 2023
 
AI scams

This definitely is the year in which AI is making a lot of headlines. So I'll keep commenting as well, because I feel that I can contribute some much-needed rationality to a discussion that is frequently dominated by emotions and fear. I have been reading about a scam that happened on Reddit, where "Claudia" sold nude photos of herself. Only Claudia doesn't exist, and both the clothed and nude photos were created using AI image generators. Not exactly the crime of the century, and the whole operation was shut down after making a whopping hundred bucks. But the episode touches on some aspects of AI scams that are worth discussing.

The first is the simple question of whether it was actually a scam. Didn't the people who saw the images of Claudia clothed and paid for getting nude photos of her get exactly what they wanted? Is a pornographic photo worth less if it was generated by AI? In terms of market value, probably yes. Real women are obviously reluctant to send nude photos of themselves to strangers, in fear that those images could come back to haunt them. Claudia isn't worried about that, because she doesn't exist. On the other hand, it isn't as if free pornographic images were exceptionally hard to find on the internet. Why would somebody who likes to look at such images be concerned whether the women in those pictures are real or not, as long as they are naked and beautiful?

The second aspect is that some people who hear stories of AI scams blame the technology. And no, if we call this a scam, it definitely was perpetrated by humans. It wasn't some AI that decided to scam humans; it was humans using AI tools who scammed other humans. While modern photocopiers are equipped with technology that makes them stop working if you try to photocopy dollar bills, the instances in which people were fooled by photocopied bills are not to be blamed on the technology, but on the people who had the criminal energy to use the copier as a tool to scam others. AI tools are just the same. Yes, you can maybe scam your teacher by having ChatGPT write your homework essay. But you could also get somebody on Fiverr to write that homework for you, although Fiverr explicitly forbids it.

A human selling homework writing services on Fiverr would be acting unethically, as he is supposed to know Fiverr's rules against it, and is supposed to recognize the task itself as being in furtherance of a pupil "scamming" his teacher. AI isn't unethical, it is a-ethical. Despite the name, artificial intelligence isn't intelligent, and lacks the inherent tools that would enable it to recognize a given task as unethical. Just like they did for modern photocopiers, it is possible for engineers to add capabilities to AI tools that make them recognize and refuse tasks that aren't ethical. But the AI itself doesn't even understand the concept of ethics, and unless explicitly forbidden to do so in its code, it will try to perform any task a human asks of it without pondering the ethics of that task.

Many scams these days wouldn't be possible without social media and the internet. But demanding that the internet be shut down to stop scammers would obviously be ridiculous. What we can ask for is better filters and moderation from the tech companies that run these social media platforms. And in the same vein we can demand that the tech companies that run AI tools add better filters and moderation to those tools. Many public AI image generators already refuse to produce porn, although that mostly reflects the peculiar American fear of nudity, which isn't shared by much of the rest of the world. The use of AI tools to spread misinformation is probably a greater danger, and there isn't much to prevent that yet.

As my last point I would like to circle back to my previous post on AI and talk about AI taking all our jobs away. If your "job" was to sell nude photos of yourself on Reddit, shouldn't we, as humanity, be okay with you being replaced by AI? I recognize that the idea some people hold, that *all* sex work and pornography is exploitative, isn't true; there are people who do that work of their own free will. But there is certainly some exploitation going on in that business, and I'd much rather see an AI tool exploited to produce sexual imagery than a real person. Claudia doesn't mind if the photos she sold end up being widely distributed.

Comments:
If a person earns a living by doing things of their own free will that do not harm themselves or others, then I feel very uncomfortable saying that "it's okay for them to lose their job". That seems like projecting my morals onto others. While the media is prone to exaggeration, I don't think it's an exaggeration to speculate that many people may have their lives seriously disrupted by AI.

We need jobs at all skill and difficulty levels because our societies are made of people at all skill levels. Removing the "easier" or "mundane" jobs may be great for some, but it's horrible for others because those people may not have the ability to do lots of other things. People are going to lose their jobs, and they may not be the people that get the "other" jobs that get created.

For example, truck drivers. I think it'll be great when AI can replace truck drivers, cab drivers, etc. However, for the people doing those jobs I don't think it'll be great. There will be mental and financial distress; some people won't be able to find jobs at equal pay, or a job anywhere near their current pay. That will have a real impact on those people and their loved ones. There is a real human cost when we try and marginalize "lower skill" workers.

It's very similar to what happened in the US when they started outsourcing textile work to countries with lower labor costs. Whole towns, let alone families, were disrupted. Some places never recovered. However, on the positive side, there were other people who now had better-paying job opportunities than they previously had.

We need regulatory and social systems in place to support such a change. AI provides us an opportunity to better distribute resources within societies, but we can't add transformational tools without transformational frameworks. Without those first, we are going to have a lot of people in a lot of pain. I didn't like watching that happen when outsourcing to "cheaper labor" became so prevalent (I realize it's not a new concept, but it seemed to really ramp up starting in the 1970s), and I'm not looking forward to seeing it happen with outsourcing to AI.
 
Possibly the reason people would buy those photos, instead of professional porn, is that they want to masturbate with the knowledge that a real woman gave them permission to masturbate to her body.

If so, I guess the job is only replaceable by AI if the AI can reliably/continually fool the buyer.
 
Following up on what Janous said: the main problem is that changes are now happening on a timescale faster than one human generation. Your children can plan to go into new jobs when you see the writing on the wall, but if entire job categories get replaced in a few years, this will leave a lot of people out of a job. Which is not good when your economy relies on mass consumption.

@Tobold: reading your text I see that you're making the same mistake as many others: thinking that the current generation of AI is not intelligent. Have a look at the talk by Sebastien Bubeck on YouTube, "Sparks of AGI: early experiments with GPT-4", which shows that already last year GPT-4 was ticking 4 (and maybe 5) of the 6 marks that qualify as "intelligent". You're also thinking of AI as a program: "But the AI itself doesn't even understand the concept of ethics, and unless explicitly forbidden to do so in its code, ...". GPT has almost no code; all the information is in the network itself, and we don't understand how it's stored there. If you look at the way the GPT architecture works, it's exactly how you would expect a brain to work, i.e. by generating an internal representation of the world (a representation that we don't understand...) and then relying on it to perform predictions (which, BTW, is the definition of intelligence used by Hutter and the justification for his Hutter prize). What AI lacks now are two things:
- the current "world" it works on is just text and not the real world
- it cannot iteratively improve its model, as it's trained once and every new session starts from scratch
My guess is that things will get interesting in the near future....
 
@Helistar : I think the academic definition of intelligence is very different from what regular humans would consider “intelligent”. What I am saying is that ChatGPT lacks “humanity”. It doesn’t feel anything. It doesn’t desire anything. It has no moral values. It has no concept of actions leading to consequences.

I reject the idea that because we don’t know how ChatGPT works, and the output can resemble human output, there “must be” some internal humanity somewhere in the part we don’t know and don’t understand. That is just a projection. We think if it walks like a duck and quacks like a duck, it must be a duck, when in reality it is a machine that has been deliberately created to walk like a duck and quack like a duck.

The danger is that people, through projection, can create parasocial relationships, even with a chatbot, and parasocial relationships tend to be unhealthy and ripe for abuse. Would you accept life advice from ChatGPT?
 
The New Yorker on AI
 
@Tobold: You should watch the video I referred to; the definition of intelligence used comes from psychology. And BTW you're falling into the trap of "intelligent == human", which is a pretty huge blind spot. It also shows here: "We think if it walks like a duck and quacks like a duck, it must be a duck, when in reality it is a machine that has been deliberately created to walk like a duck and quack like a duck." If you cannot tell the difference, then it doesn't matter if it's a machine or not: it IS a duck.

And independently of that, the problem raised by Janous stands: even if it's crappy, as long as it can get the job done for cheaper, it will be used to replace workers, messing up their lives (and the economy).
 
If you cannot tell the difference, then it doesn't matter if it's a machine or not: it IS a duck.

No, it isn’t. First of all, the only reason we couldn’t tell the difference is that we failed to look closely. In the Claudia story some people bought the nude photographs because they couldn’t tell, but another person on the same forum correctly identified the pictures as being AI generated. Second, if we really can’t tell, it is because we only get such a small glimpse. My walking and quacking duck robot can’t swim; the impression that it *is* a duck is an illusion. We only think it could swim because our brains tend to fill in the blanks, automatically assuming that if the small set of features we can see fits, the rest must fit as well.

In particular, the fears that AI will jailbreak, escape from the confines of its computer and take over the world are just pure nonsense. It projects human features, “if it is intelligent, it must wish to be free and powerful”, onto software that can only string words together.

And yes, like any tool, it can be used for bad things. You can use a hammer for construction, or to bash somebody’s head in. That doesn’t mean a moratorium on hammers until we understand their motivation better is a good idea.
 
Janous: "it's okay for them to lose their job"
While certainly bad and impactful for the individual, it’s happening all the time. There are goods and services people are just no longer interested in. Or when was the last time you had a band at your house because you wanted to listen to some music?
We replaced the humans with different iterations of technology, making music cheaper and more accessible to many more people, while the musicians no longer have to go from palace to palace.

Tobold: "It doesn’t feel anything. It doesn’t desire anything. It has no moral values. It has no concept of actions leading to consequences."
Uhh, lots to unpack there:
What is feeling? Do we understand it ourselves?
What is desire? A utility function? Then AI has it.
Moral values? Isn’t that just a result of the right utility functions?
A concept of actions and consequences: ChatGPT might not have it because it’s rather dumb, but AI will need it and therefore have it.

"No, it isn’t. First of all, the only reason we couldn’t tell the difference is that we failed to look closely."
You are right and also wrong. The point is exactly that we need to look closely - but at what point are we satisfied?
Are you sure that the ducks you see outside are real? For some people Claudia was duck enough. What if all people are fooled?

"My walking and quacking duck robot can’t swim […]"
Because you only set out to create something that walks and quacks like a duck. AI will also provide that if asked for it, but I think you are just using the wrong prompt. "Walking and quacking like a duck" isn’t a duck, but a duck would fulfil those criteria.
If you asked an AI for something that makes you believe it’s a duck, you would probably end up with something you would fail to look at closely enough, and buy its nudes.

"In particular, the fears that AI will jailbreak, escape from the confines of its computer and take over the world are just pure nonsense. It projects human features, “if it is intelligent, it must wish to be free and powerful”, onto software that can only string words together."
If we are just looking at ChatGPT, then yes it’s probably not going to do that. But it seems as if there are a few details missing in your picture.
The main point is that "being free and powerful" isn’t the terminal goal of the AI but an instrumental goal in achieving whatever terminal goal it has.
Say you are building a new house. That is your terminal goal. Why do you still want money? Because it’s an instrumental goal, and helps in achieving your terminal goal. Robbing a bank would be an action that yields money, but would also put you in jail, preventing your terminal goal. Coffee button on your blog? Money, but it takes a million years, so not a good action. What about a crypto pump and dump? Selling your old flat? Etc.
Maybe building your house is also not your terminal goal, and only instrumental to happily living out your retirement.
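The action-selection logic in the house example could be sketched as a toy utility maximiser (my own illustration with made-up numbers and action names; none of this comes from the discussion above). The point it shows: the agent never values money for itself, only as progress toward the terminal goal, and jail risk counts only because jail prevents that goal.

```python
# Toy sketch of terminal vs. instrumental goals (hypothetical numbers).
# Terminal goal: the house gets built. Money is purely instrumental.

# Each candidate action: (name, money gained, chance of ending up in jail)
actions = [
    ("rob a bank",           1_000_000, 0.95),
    ("coffee button",        5,         0.0),
    ("sell old flat",        150_000,   0.0),
    ("crypto pump and dump", 80_000,    0.6),
]

HOUSE_COST = 120_000

def utility(money, jail_chance):
    # Progress toward the terminal goal, capped at "house fully funded".
    # Jail prevents building the house, so weight by P(not jailed).
    progress = min(money / HOUSE_COST, 1.0)
    return (1.0 - jail_chance) * progress

best = max(actions, key=lambda a: utility(a[1], a[2]))
print(best[0])  # picks "sell old flat": enough money, zero jail risk
```

Robbing the bank yields the most money but scores lowest once the jail risk is factored in, which is exactly the "money is only instrumental" argument: change the terminal goal or the risks, and the same money-seeking behaviour stops being chosen.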

An AI also has a terminal goal and it will ruthlessly maximise for that. If the AI is able to reason about us humans and our psychology, then deceiving and manipulating us could be an instrumental goal in order to achieve the terminal goal.
It might be comparable to homemade fly traps with apple vinegar and dish soap. The flies’ instrumental goal of collecting the liquid is exploited; they fail to look closely enough and don’t see the danger until it’s too late.

Have a look at Robert Miles videos on AI and alignment:
Intelligence and stupidity, Orthogonality - https://youtu.be/hEUO6pjwFOo
Why AI lies - https://youtu.be/w65p_IIp6JY
And then probably some more.
 