Gigantic Reaction Indeed, At Least From Me. AI To Embarrass Musicians And Make Them Redundant.
You know I have been making music, so I understand somewhat what a do-it-yourself musician has to go through before a finished track can be produced. Making a good track is a long process for such a musician, for sure. Even a short track requires heavy attention, such as mastering and whatnot. Mixing is not a clean process either, because it's about making the track sound great. So, besides imbuing a track with a feeling to convey to the listeners, the track must be done right technically in order to sound great.
Every time I'm about to start mixing a track, I question myself a lot. Should the track be a happy one or a sad one? Should it be uplifting or sleepy? Should it use synthesizers or acoustic/real instruments? Should it add a piano piece, just go with the guitar, or go with both? Should it use full percussion or just a kick drum? The list goes on.
Personally, when mixing a song I go a lot by feeling. Even when picking a note or scale to start a beat with, I use feeling. Sometimes, though, I decide to go with just whatever sounds good. For example, if my voice for the track begins on the musical note B, out of laziness I just start the beats on the B key. If everything sounds good, the plan is to stick with B as the root key for the beats of the track. Of course, I could do otherwise if the B key isn't working out. I also have to think about the octave of the scale, because a higher octave gives a higher pitch and thus a much different feeling for the track. The bass should use lower octaves, I guess. Whether to use a major or minor scale is another important dimension I have to think about when I mix a track.
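As a side note, the octave-and-pitch relationship mentioned above can be sketched numerically: in standard equal temperament, each octave doubles the frequency, and a note's pitch can be computed from its MIDI note number. This is a minimal sketch assuming the common A4 = 440 Hz tuning convention, not anything specific to my workflow:

```python
# Sketch: equal-temperament pitch, assuming A4 = 440 Hz (MIDI note 69).
def note_frequency(midi_note: int) -> float:
    """Frequency in Hz; moving up 12 semitones (one octave) doubles it."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

b3 = note_frequency(59)  # B below middle C
b4 = note_frequency(71)  # B one octave higher
print(round(b3, 2), round(b4, 2))  # → 246.94 493.88, the higher octave is double
```

This is why an octave choice changes the feel so much: the same B played one octave up sits at exactly twice the frequency.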
How complex should a track be? How many layers of musical instruments should I combine to give a track a complex voice? Should a track be simple when I add a human voice (i.e., when I'm singing)? If a track has to have many layers of musical instruments, then during the mastering process I have to work hard to let my singing voice cut through the mix. It's bad when layers of musical instruments clash against one another, and it's worse when my voice can't cut through the mix. A track with heavily layered musical instruments can drown out or thin out the voice of a singer/performer, so mixing a track with human voices in it is a delicate job.
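A toy sketch of why piling on layers squeezes out the vocal: if every layer peaks at once, the peaks add up, and past full scale the mix clips. The amplitude numbers here are made up for illustration, not measurements from any real session:

```python
# Toy sketch: summed instrument layers eat into the headroom left for a vocal.
# Peak levels are hypothetical, on a 0.0-1.0 full-scale range.
def mix_peak(layers):
    """Worst-case peak if all layers hit their maximum at the same moment."""
    return sum(layers)

instruments = [0.3, 0.3, 0.25]  # three instrument layers
vocal = 0.2
total = mix_peak(instruments + [vocal])
print(total)  # → 1.05, above 1.0 full scale, so the mix would clip
```

In practice this is why adding layers usually means pulling every fader down, which in turn makes it harder for a quiet vocal to stay audible.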
One more thing: should I mix a great portion of the track in mono before switching to stereo? Sometimes the trick is to mix a track in mono first to get a coherent sound, then switch to stereo to finalize the track.
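The mono-first trick can be illustrated with a tiny sketch: folding a stereo signal down to mono by averaging the two channels. The sample values are invented for illustration; the point is that anything out of phase between left and right cancels in mono, which is exactly the kind of problem a mono check catches early:

```python
# Toy sketch: fold a stereo signal down to mono by averaging channel pairs.
# Sample values are made up; out-of-phase content cancels to zero in mono.
def stereo_to_mono(left, right):
    """Average each left/right sample pair into a single mono sample."""
    return [(l + r) / 2 for l, r in zip(left, right)]

left = [0.5, 0.3, -0.2]
right = [0.5, -0.3, 0.2]  # second and third samples are out of phase
print(stereo_to_mono(left, right))  # → [0.5, 0.0, 0.0]
```

The out-of-phase samples vanish entirely, which is why a part that sounds fine in stereo can disappear in mono if it isn't checked first.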
Anyhow, producing a track isn't easy for a human. Yet artificial intelligence is going to make the situation even more dire for human musicians. Lately, a lot of AI algorithms have come to the fore to convince the world that AI can now produce unique music in seconds. I have proof of this. I visited Jukedeck.com and created a brand new synth-pop track produced by AI. Without any human intervention from me (besides prescribing that the track should be three minutes and fifteen seconds long, in a pop genre with a melancholic feeling, with synthesized instruments), Jukedeck's AI was able to create a beautiful track, "Gigantic Reaction." You can listen to this track at https://usharemix.com/index.php?a=track&id=6. I have to say I'm very impressed by the result. A gigantic reaction from me indeed! Now, if AI can sing just like a human can, then musicians may become a thing of the past.
Perhaps, in the future, people may prefer to have AI make music rather than listen to real musicians. Who needs musicians, right? You never know, right? I don't think such a future is far off, because AI algorithms can already churn out endless musical tracks that sound great.