Tobold's Blog
Tuesday, June 25, 2024
 
Sounds like

If you turn on the radio (younger members of my audience, ask your parents what a "radio" is) and hear a country song, how do you know that this is a country song? Well, probably because it "sounds like" other country songs you have heard before. The whole classification of music into genres is built upon songs "sounding like" other songs. That makes the lawsuit of the Recording Industry Association of America against two AI music services somewhat interesting. Isn't the AI doing exactly what most music-producing artists do: having listened to other songs, producing something that "sounds like" them?

Now of course there have to be limits to that. If I created a virtual AI musician called Saylor Twift, whose music is made exclusively by "sounding like" existing Taylor Swift songs, I could see a strong justification for a copyright lawsuit. I also fully support Scarlett Johansson's rights in her dispute with OpenAI over the voice that "sounds like" her. It seems to me that this has a rather simple algorithmic solution: if we blocked AI music and voice generators from drawing more than, let's say, 10% of any generated voice from a single human voice in the training data, the result wouldn't sound like anybody specific.
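
To make that a bit more concrete, here is a minimal sketch of what such a cap could look like, assuming (and this is a big assumption) that a voice can be reduced to an embedding vector, and that similarity between embeddings is a fair proxy for "sounds like". All the function names, the normalization, and the 0.10 threshold are my own illustration, not any real generator's API:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def single_voice_shares(generated: np.ndarray,
                        training_voices: dict[str, np.ndarray]) -> dict[str, float]:
    """Attribute the generated voice across training voices by normalizing
    the (clipped) similarity scores so they sum to 1.0."""
    sims = {name: max(cosine_similarity(generated, emb), 0.0)
            for name, emb in training_voices.items()}
    total = sum(sims.values()) or 1.0
    return {name: s / total for name, s in sims.items()}

def passes_cap(generated: np.ndarray,
               training_voices: dict[str, np.ndarray],
               cap: float = 0.10) -> bool:
    """Reject any output where one training voice accounts for more
    than `cap` of the attributed similarity."""
    shares = single_voice_shares(generated, training_voices)
    return all(share <= cap for share in shares.values())
```

Of course, the normalization step is doing a lot of work here: it quietly assumes that "sounding 10% like someone" is a well-defined quantity in the first place.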

One of the Twitch streamers I sometimes watch has used an AI music generator to create several songs in different music styles, each with AI-generated lyrics that sing about how great a streamer he is. It is pretty funny. There actually used to be a time in history when lords paid bards to do exactly that: create music praising them. Some games these days actually have a "streamer music" option in their settings, so that no copyrighted music is played while a streamer is streaming that game. Sometimes music is just background, and it is good to have a copyright-free source for that, even if it just sounds generic. I doubt that AI-generated music is actually a threat to great artists. And if minor artists are threatened by AI being better at creating generic music than they are, that isn't a great loss.

Comments:
What if some artist's sound can be described as "10% of each of these 10 artists"?

Popular musicians are probably under the least threat from AI anyway, because they can perform live. (Electronic help there is frowned upon, at least to some degree; I think pitch adjustment doesn't bother fans in general, but lip-synching is a no-no.)
 
The issue here, though, is AI companies using the work of artists without their permission to train the model and then trying to profit off of that model.

At this point the genie's out of the bottle, but I actually side with the artists on this one. Companies seeking to profit off of AI models should be forced to license the materials used to train those models if they are currently under copyright protection.
 
10% is actually quite a large overlap, considering that artists have gotten into trouble for using just a few riffs without permission.

Ok, we are talking about voice here. But then could we just copy and paste the best-known phrase 1:1, and that would be fine as long as it's less than 10%?

Also, what would 10% of a "voice" even look like? Pitch, speed, ...? How would you be able to pin that to a single person? What if my "a" sounds like your "a"? Maybe not from the same word, maybe pushed through some modulation. Would it be my "a" or yours?
If you sang a cover of Taylor Swift and fed it into the AI, would it be Taylor or Tobold Swift? What if you sang it pitch-perfect?
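To illustrate the problem: suppose some speaker-embedding model turns audio into a vector (the reference embeddings below are purely hypothetical stand-ins). Attribution then collapses into a nearest-neighbor comparison, and nothing in it naturally reads as a percentage of anyone's voice:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical reference embeddings; in reality these would come from
# a speaker-embedding model applied to recordings of each singer.
references = {
    "Taylor": np.random.default_rng(1).normal(size=256),
    "Tobold": np.random.default_rng(2).normal(size=256),
}

def whose_voice(cover_embedding: np.ndarray) -> str:
    """A pitch-perfect cover just lands nearest to one reference or the
    other; the cosine score is a similarity, not a '% of a voice'."""
    return max(references, key=lambda name: cosine(cover_embedding, references[name]))
```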
 
The problem with saying it is okay "if minor artists are threatened by AI being better at creating generic music than they are, that isn't a great loss" is that all of the major artists started as minor artists. If we don't have minor artists, and the many opportunities they vie for, hone their skills on, and build themselves up with, there will never be a chance for the cream to rise to the top and become those major artists we all love.
 