Playing Along With the (AI) Radio
This is a pop song I generated from my lyrics in an early-to-mid-'70s pop style--a style I grew up with on AM radio, so this is something of a nostalgia trip. It's like the late '70s, when I would turn on the radio and play along. So I attempted to put my own bass part on it, as I would on any track; the existing bass part was almost imperceptible.
My approach to bass parts is always compositional in nature. You can't ride the root all the time, and you have to think about the overall shape of the lines, the range they are played in, and how they work as counterpoint against the other parts. For pop music like this, you don't want to play too much below the staff; stay in the low range of a guitar, then change octaves for contrast.
The playing on this is superb--session-musician level--as it would have been in the '70s. Who is this guitarist, or is it a composite of many players? (I still don't know how generation works.)
Does music matter when we generate music with AI?
I recently asked a question on Reddit: had anyone attempted to perform the songs they generated? I got only two responses, which essentially said they didn't care.
For real musicians, using AI to generate music can be seen as cheating, or anathema to the art, but it really isn't. It's a way of prototyping music. But once you reverse-engineer it, you can see how empty it is--in the sense that it leaves no room for human interaction--either because the music has no "nutrition" in it, or because you simply don't have the skills to play it.
I think we're moving (even farther) away from manually played music. We're in the era of DIY 2.0, or even DIY 3.0, where the internet is the instrument. It used to be that the studio was an instrument, and now we're using generators on the internet to create music and not even using instruments.
The instruments we hear in AI-generated music are instruments we imagine we're playing. It gives people the sense that they can make music as if they were musicians. Perhaps that's good enough. People don't want to spend the time to really work at shaping a song themselves. This part took a while to compose. The only take I did was for the video; I wasn't doing multiple takes.
I like that I can make songs without having to arrange and record them myself, but I also like that I can compose parts for them.
I probably would never write a song like this for myself. It's the kind of song I listened to in the 1970s. As I've said, AI music is primarily a listener's activity. But you could take it to the next level by emulating it and playing the parts yourself. What you came up with, though, would be something completely different. Would it be something you wanted?