Friday, October 13, 2023
Trading eyeballs
Once upon a time I was subscribed to the History Channel on my cable TV. I watched documentaries about actual history, like The Men Who Built America, and history-related reality shows like Pawn Stars. But over time the History Channel changed its programming and became more and more sensationalist, producing more and more fake history content like Ancient Aliens. Apparently fake history sold better than real history, and making stuff up is often cheaper than documenting reality.
Now the BBC is warning of fake science videos on YouTube. When ChatGPT became popular, a lot of people started thinking about how to make money with publicly available AI programs. One way to do so was making "faceless" YouTube channels, with AI writing the scripts, assembling the videos, and doing the voiceover. And lurid fake science and history, about aliens, monsters, and conspiracies, is still cheaper to make and more popular than actual science.
I'm a big fan of behavioral economics, the idea that you can explain much of the world by looking at the economic incentives for doing things. Now the economic incentives for YouTube videos are how many views they get and how much engagement they get. And as the saying goes, "You get what you measure": People optimize content to maximize views and engagement, not usefulness. Usefulness and truth aren't measured, don't have economic value, and aren't incentivized.
A good example of this is X / Twitter having introduced ad revenue sharing this summer. Write a tweet that gets millions of views, and get money for it. Truth isn't measured; views and engagement are. And you get the maximum number of views and the most engagement with the most outrageous fake content. With the recent Israel Gaza war, that led to an explosion of fake "citizen journalist" tweets, using anything from AI-generated images to video game footage to make outrageous fake claims about the war. Part of that was Hamas political propaganda, but due to the high interest the subject provoked, a lot of fake news was created simply for commercial reasons, for the ad revenue sharing. It got so bad that the EU opened an investigation and is threatening X with fines or suspension of the service.
The solution to all this is clear: Measure truth, and make it part of the monetization. If you don't pay ad revenue sharing for fake news, the incentives aren't there and the activity stops. But that would require content moderation, and Elon Musk famously fired the people doing that. YouTube has a misinformation policy, but doesn't seem very good at enforcing it, probably because of the cost of doing so. The Like and Subscribe buttons are heavily advertised on YouTube, while the button to report misinformation is hidden behind another menu layer. Furthermore, science misinformation is considered less harmful than political misinformation.
Comments:
But who measures the truth?
It can be done in math, and to a lesser extent in science. When it comes to politics, or anything that intersects politics, you can be sure that those 'checking for truth' will have a heavy thumb on the scale.
With the amount of content uploaded to X (Twitter) and YouTube, it doesn't surprise me that the companies are not interested in hiring sufficient moderation staff, and would rather have their viewers do the content review for them.
There is also the issue of: if fake news makes more money for X (Twitter) and YouTube, why would they spend money and resources to reduce it? I don't think they will, not until they value the truth of content on their platform more, have an incentive to do so, or are forced to.
Gerry: "But who measures the truth?"
The trick is: you don't.
You only decide who didn't tell it afterwards and then fine them. But only if they are above a certain audience threshold. Imagine the fallout if a broadcaster or paper was caught spreading misinformation that would need to be corrected days later in a footnote on page 3...
Gerry Quinn - IMO, ideally you get a diverse group and arm them with access to subject matter experts. Then fine tune the process over time, make decisions transparent, and have external entities audit the process. It won't be perfect, but it's better than ignoring the issue.
On a more general note - this happens all the time in places that I've worked at over the years. It seems to me that the wrong metrics are tracked or weighted too highly and that leads to behaviours that can be anti-integrity. I've raised this many times to many different people and while some may agree, I've never seen a place change or re-weight those metrics though, because they roll downhill like brown smelly substance...