This one is going to be longer. Hopefully I won't have to break it out into multiple posts. Despite the relatively simple answers/practices that avoid these issues, I can't figure out how to make some of the explanations shorter without glossing over too much.
I've been manually pulling this back down to make it come in under -14 LUFS. Is this really necessary? Should I just let Ozone decide?
Honest opinion - neither.
-14 LUFS is not a target. No streaming level is a target. They were never supposed to be targets. Anyone who tells you to "master to" -14 LUFS (typically quoted alongside -1 dBTP) doesn't really know what they're talking about. Yes, I'm aware that I'm directly contradicting several very popular YouTubers by saying that. They're wrong.
The streaming services will do that for you at playback if they need to (assuming the listener has the setting enabled, which most seem to by default), and they'll do it with a linear negative gain change. That's it. Being louder can't hurt your audio... it's exactly the same as turning down a perfect volume control. Okay, that's not totally true: if the listener is using a 16-bit DAC and turns down a song with 16-bit dither before conversion, that can introduce quantization distortion that can be audible. It's not a common occurrence, though. I think all modern phones and computers use 24-bit DACs even when they're playing 16-bit files, which preserves your dither. And, in general, the people who can hear and care about quantization distortion and still listen to CDs have the CD "transport" output full scale and turn the volume down later. (A CD transport is just a CD player with a digital output.)
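To be concrete about what "linear negative gain change" means, here's a minimal Python sketch (my own illustration, not any streaming service's actual code, and the function names are made up): the player takes the track's measured integrated loudness, computes the dB difference to its reference level, and multiplies every sample by one constant factor. No limiting, no compression.

```python
# Sketch of playback normalization as a pure linear gain change.
# A -9 LUFS-I master played against a -14 LUFS reference just gets
# -5 dB applied to every sample -- a perfect volume control.

def normalization_gain_db(measured_lufs, reference_lufs=-14.0):
    """dB of gain the player applies so the track lands at the reference."""
    return reference_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Multiply every sample by the same linear factor (dB -> linear)."""
    factor = 10.0 ** (gain_db / 20.0)
    return [s * factor for s in samples]

gain = normalization_gain_db(-9.0)          # -> -5.0 dB for a loud master
turned_down = apply_gain([0.5, -0.25], gain)
```

The point of the sketch: nothing about the waveform's shape changes, only its scale, which is why a louder master loses nothing through normalization.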
Anyway....
Spotify used to turn quiet songs up and apply a limiter to keep it from clipping - but they screwed up basically all classical music to the point that the outcry made them stop. I don't think anyone does that anymore.
If anything, -14 LUFS-I should be a soft minimum for modern music in loud styles (rock, metal, punk, pop, hip hop, etc.), just so your track doesn't wind up quieter if it lands in a typical Spotify/Apple/Tidal playlist. That's still plenty of dynamic range for drums to smack you in the chest and for real movement between sections of a particularly dynamic song. If you want your song to be more dynamic, there's nothing wrong with that either - just make the decision artistically.
FWIW, I'm also seriously opposed to streaming normalization and literally never use it when I'm listening to music for pleasure. The way at least Tidal does it (normalizing an album to their reference level and playing its songs at those levels regardless of context) is better than normalizing every song individually. But even if you're listening on shuffle or in some weird playlist... there's no reason Elton John should be as loud/squashed as Metallica. Playing them at the same level is stupid.
I trust literally anyone who calls themselves a mastering engineer and is technically capable of creating a Red Book-valid DDP to decide on the level of an album more than any streaming service, and I have a volume control. I honestly find myself adjusting my playback volume more with that stupid setting enabled.
So, yeah...-14 as a target is stupid.
As for Ozone... I own Ozone Advanced because there are a few plugins in it that I like (mostly the Maximizer*, also Low End Focus, and very rarely Stabilizer or Master Rebalance for very small tweaks), but I don't think it's particularly hard for a human mastering engineer to do better than the Assistant. It's a fine sanity check, and it's valid to send what it does to clients as a loud reference during your approval process. But blindly trusting it... I think that can be a problem, or at least a shortcut that doesn't really work out all that well. FWIW, I put LANDR, Waves' service, PA's service, and basically every other "AI" mastering thing in the same bin. They can sound fine. I honestly think almost any human with good monitors will sound better. IDK... maybe I'm giving people too much credit.
What I'd actually do for the loud reference... is just learn one limiter inside and out and put it at the end of your mix bus. If you're not going to send the track to an actual mastering engineer, that's also what I'd do for release.
At least in theory, you've been working on the mix to make it sound as good as you possibly can. If you're not going to get the outside more listener-y perspective from an actual mastering engineer who hasn't been working on the track before and works on a very nice, full-range system...you're not really going to get anything out of using mastering processors/methods. If you were going to do something different, why wouldn't you have done it in the mix?
I am a mastering engineer. I'm not famous, and I'm not particularly successful (yet), but I do it for clients and do get paid. When I mix...that's exactly what I do. Or I send it to someone else.
* Note: the reason I like the Ozone Maximizer is that you can get it to distort the low mids similar to the way the Waves L2 hardware did running in unlinked stereo mode, and it actually works better than the L2 plugin from both sonic and workflow perspectives. It's also a fine limiter in its own right, and you can push it a lot harder than an L2 before it sounds like trash. But I prefer other limiters unless I want that specific sound. Honestly, that sound is most of the reason I own Ozone.