Interesting, I wonder why this couldn’t be accomplished with conventional techniques. I own a handful of “AI” plugins meant to achieve similar cleanup, and I feel like the output always needs to be tweaked to sound right. And that’s for a guy like me without practiced mixing ears. I wonder why real studio engineers needed AI.
Yeah. From what I understand, Giles Martin used the same technique (or at least a similar one) when making the 2022 remix of Revolver. The AI was able to make clean stems from the original tracks where the instruments were mixed together. So I would suspect this technique was able to separate the vocal from tape hiss, room noise, and other artifacts; it isn’t a deepfake of John Lennon.
From what I’ve gathered, it was just AI being used to clean up a track that was previously too poorly recorded to release.
It’s not synthesised, it’s just repaired.
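If anyone wants to poke at this class of technique themselves, open-source de-mixing models do a rough version of the same thing. Here’s a minimal Python sketch using Deezer’s Spleeter (to be clear, this is just an illustration of the general idea of source separation, not the proprietary tool used on the Beatles material, and the filename is hypothetical):

```python
# Minimal stem-separation sketch with Spleeter (pip install spleeter).
# Assumes a local file "mixed_take.wav" (hypothetical name).
from spleeter.separator import Separator

# "spleeter:2stems" splits a mix into vocals + accompaniment;
# 4- and 5-stem models also exist.
separator = Separator("spleeter:2stems")

# Writes output/mixed_take/vocals.wav and
# output/mixed_take/accompaniment.wav.
separator.separate_to_file("mixed_take.wav", "output/")
```

Even then, like the plugins mentioned above, the raw stems usually need manual cleanup, which is probably where the studio engineers still come in.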
Damn, was looking forward to AI Yoko Ono ruining the track.
It was likely a combination of AI and manual tuning, like with modern Photoshop plugins.
AI in this use case is just another tool for the engineer.