Having a hard time hitting -14 LUFS

I decided a while ago I have no interest in trying to compete or even fit in with most modern mixes. If I lose a listen/listener because someone has to turn the volume knob up on whatever they're listening on, so be it, but I refuse to spend the amount of time I do working on stuff just to smash the shit out of it for the sake of volume.

And it's not like I'm banging out beautiful mixes; I'm still a novice with it. But loudness isn't even something I'm interested in pursuing. I want as much dynamic range as I can possibly achieve in them. All my favorite albums have a ton of dynamics, even the heavy stuff.

Once Upon A Time In Audio, In The '90s And 2000s, There Were The Loudness Wars

I'm not saying smash your mixes to within an inch of their life, just get loudness to the point where the mix reaches its potential. Mastering is a discipline unto itself. Punch, clarity, separation, stereo field, and final loudness should all be either determined or enhanced at this stage. Rarely was this done by the person who mixed it, but home production and technology being what they are, more and more masters happen at home. Learning to master your mixes can actually reveal mix problems you may not have noticed. The big downside to DIY is that you lose that second set of ears and perspective, which is worth its weight in gold. For that, you can use friends who promise to be honest.
 
Just a note from my experience: I don't worry so much about LUFS anymore. Things like YouTube or Apple Music are going to attenuate to their desired loudness measurement anyway. From what I've experienced, it's pretty transparent, too. Master the track so you get good loudness and clarity out of it and you're done. You'll go crazy trying to master for a growing landscape of platforms that all have their own target loudness.
Big picture, it's not really that important to me. I was more curious why something like that couldn't hit the mark.
 
This one is going to be longer. Hopefully, I don't have to break it out into multiple posts.

Even though the answers/practices that avoid these issues are relatively simple, I can't figure out how to make some of the explanations shorter without glossing over too much.

I've been manually pulling this back down to make it come in under -14 LUFS. Is this really necessary? Should I just let Ozone decide?

Honest opinion - neither.

-14 LUFS is not a target. No streaming level is a target. They were never supposed to be targets. Anyone who says to "master to" -14 LUFS (typically also quoted with -1 dBTP) does not really know what they're talking about. Yes, I'm aware that I'm directly contradicting several very popular YouTubers by saying that. They're wrong.

The streaming services are going to do that for you when they play it back if they need to (assuming the listener has the setting enabled, which it seems like most do by default), and they're going to do it with a linear negative gain change. That's it. It can't hurt your audio to be louder...it's exactly the same as turning a perfect volume control down. Okay, that's not totally true: if the listener is using a 16-bit DAC and turns down a song with 16-bit dither before conversion, it can introduce quantization distortion that can be audible. That's not a common occurrence. I think all modern phones and computers use 24-bit DACs even when they're playing 16-bit files, which preserves your dither. And, in general, people who can hear and care about quantization distortion and listen to CDs have the CD "transport" output full scale and turn down the volume later. (A CD transport is just a CD player with a digital output.)
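If it helps to see how boring that gain change actually is, here's a minimal Python sketch of it. This assumes the third-party soundfile and pyloudnorm packages; the file name and the -14 reference are just example values, not anything you should master to:

```python
# Sketch: streaming normalization is nothing but a linear gain change.
# Assumes the third-party soundfile and pyloudnorm packages are installed.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")            # hypothetical file name

meter = pyln.Meter(rate)                      # ITU-R BS.1770 K-weighted meter
loudness = meter.integrated_loudness(data)    # integrated LUFS of the whole file

reference = -14.0                             # example playback reference, not a target
gain_db = reference - loudness                # a -9 LUFS master gets -5 dB here
normalized = data * (10.0 ** (gain_db / 20.0))  # pure scaling; nothing else is touched

print(f"measured {loudness:.1f} LUFS, playback gain {gain_db:+.1f} dB")
```

That multiply is the whole story: the same thing your volume knob does, just done once per track.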

Anyway....

Spotify used to turn quiet songs up and apply a limiter to keep them from clipping, but they screwed up basically all classical music to the point that the outcry made them stop. I don't think anyone does that anymore.

If anything, -14 LUFS-I should be a soft minimum for modern music in loud styles (rock, metal, punk, pop, hip hop, etc.) just so your track doesn't wind up quieter if it gets into your typical Spotify/Apple/Tidal/whatever playlist. That's still plenty of dynamic range for drums to smack you in the chest and have real movement between sections of a particularly dynamic song. If you want your song to be more dynamic, there's nothing wrong with that either; just make the decision artistically.

FWIW, I'm also seriously opposed to streaming normalization and literally never use it when I'm listening to music for pleasure. The way that at least Tidal does it (by normalizing an album to their reference level and playing songs at those levels regardless of context) is better than normalizing every song. But even if you're listening on shuffle or in some weird playlist...there's no reason Elton John should be as loud/squashed as Metallica. Playing them at the same level is stupid.

I trust literally anyone who calls themself a mastering engineer and is technically capable of creating a Red Book-valid DDP to decide on the level of an album more than any streaming service, and I have a volume control. I literally find myself adjusting my playback volume more with that stupid setting enabled.

So, yeah...-14 as a target is stupid.

As for Ozone....I own Ozone Advanced because there are a few plugins in it that I like (mostly the Maximizer*, also Low End Focus, and very rarely Stabilizer or Master Rebalance for very small tweaks), but I don't think it's particularly hard for a human mastering engineer to do better than the Assistant. It's a fine sanity check, and it's valid to send what it does to clients as a loud reference during your approval process. But blindly trusting it....I think that can be a problem, or at least a shortcut that doesn't really work out all that well. FWIW, I put LANDR, Waves' service, PA's service, and basically every other "AI" mastering thing in the same bin. They can sound fine. I honestly think almost any human with good monitors will sound better. IDK...maybe I'm giving people too much credit.

What I'd actually do for the loud reference...is just learn one limiter inside and out and put that at the end of your mix bus. If you're not going to send it to an actual mastering engineer, that's also what I would do for release.

At least in theory, you've been working on the mix to make it sound as good as you possibly can. If you're not going to get the outside more listener-y perspective from an actual mastering engineer who hasn't been working on the track before and works on a very nice, full-range system...you're not really going to get anything out of using mastering processors/methods. If you were going to do something different, why wouldn't you have done it in the mix?

I am a mastering engineer. I'm not famous, and I'm not particularly successful (yet), but I do it for clients and do get paid. When I mix...that's exactly what I do. Or I send it to someone else.

* Note: the reason I like the Ozone Maximizer is that you can get it to distort the low-mids similar to the way the Waves L2 hardware did while running in unlinked stereo mode, and it actually works better both from sonic and workflow perspectives than the L2 plugin. It's also a fine limiter in its own right, and you can push it a lot harder than an L2 before it sounds like trash. But I prefer other limiters unless I want that specific sound. That sound is honestly most of the reason I own Ozone.
 
@marsonic Did you have time to listen to the second clip I posted? Again, the first one was me saying, "I did all this to it and the one thing I wanted to work didn't work." I didn't post something with every track having a comp set to its highest level on it thinking it would sound like anything anyone would want to hear. I really appreciate you taking the time to write up all this advice, and I've learned a lot, but it would be more helpful for me to have feedback on my normal output, which I'm not saying is good either.

I did. It's definitely not trash now and shows serious "improvement" compared to the other one. I hear a big hole in the low mids (say 200-400 Hz-ish) and not really any air (above like 8-10 kHz), but part of that could just be how those things sound. It's much better to work from; very much in the realm of workable. Those two "issues" might be part of why (you said) people call your mixes dark. If your monitors/speakers have a particular kind of scoop and not super extended low end, they could be the reason.

Yes, taking an idea to its absolute extreme is a good way to learn what that idea really does. And it seems to have led to some learning, which is awesome. I feel like I started really learning clipping/limiting when I decided to compare different algorithms smashing some tracks up to like -2. They all sounded like trash, but the differences became stark. You don't really know where the edge is until you go past it. Fortunately, with a DAW, "fixing" what happens when you go too far is usually just turning a knob back the other way. You're not burning tape or (in other contexts) driving off a cliff or tearing a muscle or anything with real consequences.

The "issue" I hear with that is most obvious in the cymbals (though it's probably in other sounds). It could be really bad lossy compression, but it doesn't quite sound like that (not quite that cloud of tweety-birds sound). It seems like something you're doing is causing aliasing, and that might be why you either low-passed or just turned down the air, trying to get rid of some of that. It sounds like a "splish" (a bit more than a "splash") that's pretty consistent in the high end of that clip.

IMHO, you need to trace your signal path. Find any clipping, saturation, distortion, compression, or limiting (those are all variations of the same thing with different details), especially anything the hats and cymbals (or the overheads or room mics carrying them) are hitting. At least those specific processors need to be oversampled.

In reality, anything that's making a quick level/gain change needs to be oversampled. Yes, this is still true if you're working at high sample rates (which I define as 88.2k and up); oversampling is better for this specific thing than just working at a high sample rate. If oversampling alone doesn't eliminate that sound, the next big culprit is intermodulation distortion, and you can go a long way toward solving that by putting a (very high, say ~20k) low-pass filter right before anything that adds distortion (which includes soft saturation, compression, clipping, limiting, etc.).

That last part is actually one of the "tricks" that makes a lot of people think analog distorts/compresses better than digital. All analog distortion devices are inherently band-limited, meaning they low-pass their input and don't cause these problems (IMD and aliasing). Digital doesn't have to do that as an inherent part of the design, and a lot of designers incorrectly skip the input filtering, the oversampling, or both. FWIW, TDR Ultrasonic was made for exactly that purpose (avoiding IMD), and it's incredibly good at it.
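To make the aliasing part concrete, here's a small numpy/scipy sketch. The 15 kHz test tone, the clip threshold, and the 4x factor are arbitrary example choices; the point is where the clipping harmonics land:

```python
# Sketch: why fast nonlinearities (hard clipping here) want oversampling.
import numpy as np
from scipy.signal import resample_poly

rate = 44100
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 15000 * t)   # 15 kHz sine; clipping creates harmonics above Nyquist

def hard_clip(x, threshold=0.3):
    return np.clip(x, -threshold, threshold)

# Naive: clip at the base rate. The 45 kHz 3rd harmonic can't exist at
# 44.1k, so it folds back down to ~900 Hz -- an inharmonic alias in-band.
naive = hard_clip(tone)

# Oversampled: upsample 4x (Nyquist now 88.2 kHz), clip, then decimate.
# The harmonics land below the raised Nyquist and get removed by the
# decimation low-pass instead of folding into the audible range.
up = resample_poly(tone, 4, 1)
oversampled = resample_poly(hard_clip(up), 1, 4)
```

A low-pass right before the nonlinearity (the analog-style trick above) attacks the same problem from the input side.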

If you use a lot of oversampling, you may need to turn up your ASIO/CoreAudio buffer to give your computer enough time to do it. If you're still tracking or doing overdubs, turn it off while you're recording and just deal with the bad sound, then turn it back on when you go back to mixing. Pro Tools and Reaper have settings that do this for you; other DAWs may as well, but I haven't noticed them.

Big picture, it's not really that important to me. I was more curious why something like that couldn't hit the mark.

It seems like you learned at least some of that, so....it was a good experiment.
 
-14 LUFS is recommended (not a hard-and-fast rule) for certain streaming sites (Soundcloud, for example) because they have their own compression (and it gets applied to every track). That's not to say you can't go -12 LUFS and get good results, depending on the program material.

AFAIK, and someone please correct me if I'm wrong, no streaming services add compression or limiting anymore after Spotify caused a huge internet outcry about it. It is just a totally linear volume change.

It's also neither recommended nor necessary to actually target those levels. See above.

Just a note from my experience: I don't worry so much about LUFS anymore. Things like YouTube or Apple Music are going to attenuate to their desired loudness measurement anyway. From what I've experienced, it's pretty transparent, too. Master the track so you get good loudness and clarity out of it and you're done. You'll go crazy trying to master for a growing landscape of platforms that all have their own target loudness.

1000%.

In all honesty, make it sound good and work like it's going to go on a CD (at whatever sample rate and bit depth you want) or a standalone music player, and all the streaming services will take care of themselves. You don't have to do anything to "optimize" a master for streaming.

The only thing that streaming normalization means is that you don't have to squash your track to stupid-loud levels just so it'll be competitive. You absolutely can still do it if you want that sound, and nothing really bad will happen (mostly your drum transients just won't hit as hard as more dynamic masters). But, you can make the choice artistically instead of being afraid that your music won't "keep up".

Now...I do use LUFS all the time. But, I use the momentary and short-term values/meters because I prefer my Clarity M to every VU or RMS meter I've used. It correlates better to what I hear. I don't even look at integrated LUFS except out of curiosity or during my revision process if a client asks for something louder/quieter....or if the client says I have to hit some broadcast standard (the actual EBU R-128 spec, theoretically the Atmos spec if any of my clients ever care, etc.).
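If you don't have a hardware meter, you can get a rough trace of this in software. Here's a sketch with pyloudnorm; note that its integrated measurement applies gating, so this only approximates a true BS.1770 short-term (3 s) meter, and the file name is hypothetical:

```python
# Sketch: rough short-term (3 s) loudness trace. pyloudnorm's integrated
# measurement applies gating, so this only approximates a true BS.1770
# short-term meter -- close enough to watch a mix breathe, though.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")     # hypothetical file name
meter = pyln.Meter(rate)

window = 3 * rate                   # 3-second analysis window
hop = rate // 2                     # report twice per second
for start in range(0, len(data) - window, hop):
    block = data[start:start + window]
    print(f"{start / rate:6.1f}s  {meter.integrated_loudness(block):6.1f} LUFS")
```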

I decided a while ago I have no interest in trying to compete or even fit in with most modern mixes. If I lose a listen/listener because someone has to turn the volume knob up on whatever they're listening on, so be it, but I refuse to spend the amount of time I do working on stuff just to smash the shit out of it for the sake of volume.

+1. The upside of streaming normalization is that you have the freedom to make that choice artistically. Which is awesome. And there's no downside for people like me who don't use it; I just get the sound you wanted and set my playback level the way I always have.

And it's not like I'm banging out beautiful mixes; I'm still a novice with it. But loudness isn't even something I'm interested in pursuing. I want as much dynamic range as I can possibly achieve in them. All my favorite albums have a ton of dynamics, even the heavy stuff.

This is where LUFS meters (specifically momentary and short-term) can really help you learn. They're very closely related to how we hear.

Integrated LUFS is just a number and is almost irrelevant to this learning.

I'm not saying smash your mixes to within an inch of their life, just get loudness to the point where the mix reaches its potential. Mastering is a discipline unto itself. Punch, clarity, separation, stereo field, and final loudness should all be either determined or enhanced at this stage. Rarely was this done by the person who mixed it, but home production and technology being what they are, more and more masters happen at home. Learning to master your mixes can actually reveal mix problems you may not have noticed. The big downside to DIY is that you lose that second set of ears and perspective, which is worth its weight in gold. For that, you can use friends who promise to be honest.

Agreed.

I started learning mastering because I was just curious about it. I'd made and mixed a dance song that a local DJ really liked, and he told me to get it mastered. I had no idea what he was talking about, so I started reading.

A few weeks later, a friend was in the same situation, and I offered to try my hand at mastering it. Listening back years later, that first master kinda sucked. But...my master of his song got played on A State of Trance (an actually big internet radio show), and he got a record deal out of it. Yes, the label re-mastered it for both the vinyl and digital releases, and yes, those sounded much better than mine. But that's when the bug hit.

If you're "mastering" your own stuff...the best advice I've ever heard and will repeat here comes from a Mastering Engineer who goes by Tarekith: don't. (FWIW, he seems to not be accepting new clients, and I'm not sure why; I haven't talked to him in a few years.)

If you want to send it to someone, then send it to someone. I'd love for it to be me, but anyone you think you'll like working with is worth a shot. The important part is that it's a different person who's passionate about that part of the job: the weird mix of very constrained artistry that comes from working with a stereo track and the extreme technicality of modern delivery formats.

Otherwise, focus all your attention on the mix and then just add one good limiter to the end of your master bus to get it up to the loudness you want. Listen in 1:1 or gain-compensated mode to really hear whether you're making things better or just louder. Then turn off gain compensation, bounce, QC, and package it up for release the way you need to.
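In case "gain-compensated" is unfamiliar: it just means undoing your input drive at the output so the loudness stays matched while you A/B. A toy sketch of the idea, with a tanh soft clipper standing in for whatever limiter you actually use:

```python
# Toy sketch of gain-compensated listening: drive harder into a
# nonlinearity, then pull the output back down by the same amount.
import numpy as np

def drive_compensated(x, drive_db):
    gain = 10.0 ** (drive_db / 20.0)
    squashed = np.tanh(x * gain)   # stand-in for your limiter of choice
    return squashed / gain         # roughly level-matched; only the sound changes

# A/B x against drive_compensated(x, 6.0): with the level matched, you
# hear what the limiting does to transients and tone instead of the
# louder-always-sounds-better illusion.
```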

That's it.

You're not going to get anything out of using a bunch of stuff that says "mastering" on the box. The advantage of mastering comes from the different person, the different perspective, ideally a fantastic monitoring environment, and knowledge/workflow for delivery formats.
 
Mix into a good stereo compressor. I keep the ratio at 4:1 and compress no more than a couple dB. It'll keep your headroom in check, and a good hardware compressor really does some magic. I swear by this one and it is CHEAP as far as the hardware game goes. Great company overall. This compressor will adjust your final volume and put you somewhere in between a mix and a master. It does very good things to a mix. You can always compress and print with it as you track and mix, and then render your mix through it, too. Trust me, this thing is GOOD, and if you don't like it, it keeps its value, too.

 
Per the "Loudness Wars" Wikipedia article I previously posted:

[Image: normalization levels per streaming service]
 
Soundcloud-wise, I once uploaded a perfectly fine sounding loud track (in the days before I used LUFS level recommendations), and upon playback in Soundcloud, it turned it into distorted crap. Upon further research and following LUFS level recommendations, I never have that issue anymore.
 
Another snippet of data:

I determined that your typical .mp3 (non-streaming) is around -10 LUFS.

So, if I'm going to upload a track to Soundcloud, I keep it around -14 LUFS.
If I'm going to make an .mp3 of my track for standard player purposes, I keep it around -10 LUFS.
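If you'd rather have that check as a quick script than a plugin, something like this works. The numbers are just my own working values from above, not official specs:

```python
# Sketch: compare a measured integrated LUFS value against my own
# working numbers above (personal habits, not official platform specs).
TARGETS = {"soundcloud": -14.0, "mp3": -10.0}

def check(measured_lufs: float, destination: str, tolerance: float = 1.0) -> str:
    target = TARGETS[destination]
    delta = measured_lufs - target
    if abs(delta) <= tolerance:
        return f"on target for {destination} ({target} LUFS)"
    return f"{abs(delta):.1f} LU {'over' if delta > 0 else 'under'} for {destination}"

print(check(-12.3, "soundcloud"))   # -> 1.7 LU over for soundcloud
```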
 
There is a plugin that lets you select the streaming service from a dropdown menu and tells you if your track is over, under or on target. I forget the name of it but it exists.

There's a number of them, but one I find very useful is:

 
There's a number of them, but one I find very useful is:

I have that one. There's a super simple one that's informational only and offers no adjustments. It shows you how far off you are from optimal for the platform selected. Still can't remember the name.
 
Also, thanks to Marsonic and Drew for pointing out early on that I wasn't actually controlling my transients like I thought I was. I really should have picked up on that when I looked at the waveform after I rendered. It seems pretty obvious that letting all the transients through with a slow attack would contribute to my problem with transients, but I was so locked into thinking transients were the part that made drums actually sound like drums that I was afraid of touching them. So I ended up compressing everything around them to death, which just made everything worse. The peaks on the final waveform got a tiny bit smaller, so I thought I was doing as much as I could. When that didn't work, I slapped more instances of the wrong compressor on, and here we are. :rofl:

That's what I get for using my eyes and not my ears.
 
Also, thanks to Marsonic and Drew for pointing out early on that I wasn't actually controlling my transients like I thought I was. I really should have picked up on that when I looked at the waveform after I rendered. It seems pretty obvious that letting all the transients through with a slow attack would contribute to my problem with transients, but I was so locked into thinking transients were the part that made drums actually sound like drums that I was afraid of touching them. So I ended up compressing everything around them to death, which just made everything worse. The peaks on the final waveform got a tiny bit smaller, so I thought I was doing as much as I could. When that didn't work, I slapped more instances of the wrong compressor on, and here we are. :rofl:

That's what I get for using my eyes and not my ears.
Keep learning. Do it better next time. That's what we all do.
 
Also, thanks to Marsonic and Drew for pointing out early on that I wasn't actually controlling my transients like I thought I was. I really should have picked up on that when I looked at the waveform after I rendered. It seems pretty obvious that letting all the transients through with a slow attack would contribute to my problem with transients, but I was so locked into thinking transients were the part that made drums actually sound like drums that I was afraid of touching them. So I ended up compressing everything around them to death, which just made everything worse. The peaks on the final waveform got a tiny bit smaller, so I thought I was doing as much as I could. When that didn't work, I slapped more instances of the wrong compressor on, and here we are. :rofl:

That's what I get for using my eyes and not my ears.

I’m 100% certain that if I opened up some old Logic sessions, you’d see a shitload of plugins on every single track doing all kinds of things opposite of what I was aiming for in the track.

There's certainly a difference between what we take in for information and what we actually understand. There's a TON of gray area with mixing because there are so many strong opinions, and most of the time things are discussed as if the intent is to make something comparable to today's standard, which isn't always the goal.

Just learning how to discuss the stuff, knowing the terminology and what it’s all truly doing within a mix is a whole learning curve in itself!
 
There is a plugin that lets you select the streaming service from a dropdown menu and tells you if your track is over, under or on target. I forget the name of it but it exists.

NuGen MasterCheck?

There are also a handful of limiters that will compare your levels to streaming levels.

Just learning how to discuss the stuff, knowing the terminology and what it’s all truly doing within a mix is a whole learning curve in itself!

It absolutely is.

Add to that the fact that there are a lot of very talented musicians/engineers who have no idea how to describe what they're doing in mathematically precise or technical language, and you've got a great recipe for confusion.

Soundcloud-wise, I once uploaded a perfectly fine sounding loud track (in the days before I used LUFS level recommendations), and upon playback in Soundcloud, it turned it into distorted crap. Upon further research and following LUFS level recommendations, I never have that issue anymore.

You're right. Soundcloud does automatic "mastering", and there are reports of it doing some really bad normalization even without you paying for that. I honestly forgot it existed.
 