Good introduction/tutorial for audio mixing (possibly with a focus on podcasting)?

Thirith Registered User regular
We had something of an audio emergency today: one of the tracks for our monthly podcast was pretty awful, since Audacity switched to a different, worse mic. I did what I could to make the track work, but I very much lack knowledge when it comes to audio mixing, equalising, using audio filters etc. I've got some filters set up based on a couple of podcasting guides, but I don't know the basics and am therefore pretty much just going through the motions without much of an understanding of how these things work, how best to tweak frequencies etc.

Can anyone here point me in the direction of good web-based introductions or tutorials for that kind of thing? I don't expect to become a pro just by watching a couple of YouTube videos, but I think that it would help me and the podcast if I had more of the basic knowledge.


Posts

  • Hahnsoo1 Make Ready. We Hunt. Registered User regular
    NPR has a thing, which is pretty good (but very beginner level stuff). But you should know all of these basics before starting your journey:
    https://training.npr.org/2017/01/31/the-ear-training-guide-for-audio-producers/
    https://training.npr.org/2018/10/31/mixing-diy/

    The Pro Audio Files has good information, but it's not very concise or straightforward. Most of the articles read like ramblings from very well-educated people who don't really know how to teach:
    https://theproaudiofiles.com/podcast-editing-and-mixing/

    Another article with good advice:
    https://medium.com/better-marketing/so-you-want-to-edit-and-mix-your-own-podcast-but-dont-know-where-to-start-9b7a99ac9fa3

    ALWAYS use a reference track of how you WANT to sound so that you can come back to it when you are trying to make a new track sound like that.

    Take breaks from editing every 30 min to 1 hour. Your ears will fatigue and that affects the mix.

    You won't get golden ears overnight! It takes dedicated practice. I do mostly a cappella audio tracks, and anything outside of that I'm not great at. But I found TrainYourEars EQ Edition 2 (https://www.trainyourears.com/) very useful for training my ears to hear the differences in frequencies. I noticed a difference after just two weeks of daily training with that program.

    Finally, Audacity is great for one thing, and that's straight up recording tracks. It's lousy for pretty much every other audio purpose when it comes to polishing and mixing/mastering. You want something that allows you to adjust things in real time and has robust support for VST plug-ins. I personally use Reaper, but any DAW like Pro Tools will fit the bill. Find one you like, and stick with it.

  • Thirith Registered User regular
    I use Reaper to merge the separate audio tracks and edit the podcast; I'm only using Audacity for the recording because it's free and simple, and one of the co-hosts isn't exactly very tech savvy. Reaper would most likely scare him...

    Thanks a lot for the links and other info. They should prove very helpful.

  • Hahnsoo1 Make Ready. We Hunt. Registered User regular
    edited January 26
    Thirith wrote: »
    I use Reaper to merge the separate audio tracks and edit the podcast; I'm only using Audacity for the recording because it's free and simple, and one of the co-hosts isn't exactly very tech savvy. Reaper would most likely scare him...

    Thanks a lot for the links and other info. They should prove very helpful.
    Might I ask why you aren't using Reaper to record all of the tracks? Or do you all have separate USB microphones on separate computers or something and everyone is recording off of their own Audacity installation (not an ideal way to do this, but... hey, whatever works)? Like, the Audacity mixup that you had to fix this time would be one reason that you might want to change your setup to be something more robust or foolproof, but everyone has different hardware and aspirations.

    EDIT: For the specific problem of making a track recorded on a different mic sound like a reference track (say, a track recorded on the mic you intended), I'd probably use either an EQ or mastering plugin that has EQ matching as a feature, like FabFilter Pro-Q 3 or Ozone 9, or perhaps a standalone plug-in like Master Match or Eventide's EQuivocate. I own Ozone 9, so that's probably what I personally would roll with. It can be done with just the default ReaEQ plug-in, but that requires some time and expertise to match the sound. In all cases, you probably won't get it 100% identical, but close enough to work.
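    For the curious, the core idea behind EQ matching can be sketched crudely in Python with numpy (the function name, band count, and overall approach here are mine; real matching plugins like the ones named above use spectral smoothing and proper minimum-phase filters rather than a blunt FFT rescale):

```python
import numpy as np

def match_eq(target, reference, n_bands=32):
    # Compare the average magnitude spectra of the two tracks in coarse
    # frequency bands, then rescale the target's spectrum band by band
    # so its tonal balance moves toward the reference's.
    n = min(len(target), len(reference))
    T = np.fft.rfft(target[:n])
    R = np.fft.rfft(reference[:n])
    edges = np.linspace(0, len(T), n_bands + 1, dtype=int)
    gain = np.ones(len(T))
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi == lo:
            continue
        t_mag = np.mean(np.abs(T[lo:hi])) + 1e-12
        r_mag = np.mean(np.abs(R[lo:hi])) + 1e-12
        gain[lo:hi] = r_mag / t_mag
    return np.fft.irfft(T * gain, n)
```

    Even this crude version will pull a thin, quiet band back up toward the reference's balance, which is roughly what "matching" a mic-swap track means.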

  • Thirith Registered User regular
    Yeah, we're not physically in the same place (or even the same country, in some cases), so we talk via Google Hangouts and record each track separately. I agree that it increases the risk of a tech snafu, but at the same time I've found that there's a lot to be said for separate audio tracks when it comes to editing the podcast.

    I'll have a look at those EQ/mastering plugins you've mentioned. We're running an amateur podcast with a pretty small listenership, so I don't try to be ultra-professional, but I do want the result to be nice to listen to.

  • EggyToast Registered User regular
    One of the biggest things you can do from an audio perspective is to run your voice tracks through some compression, possibly with a hard limiter for pops and particularly loud sounds.

    A compressor will literally 'compress' the audio: viewed as a waveform, it takes the high peaks and pushes them down. You then increase the amplitude of the track (makeup gain), so the end result is that the loud parts are less loud and the quiet parts are less quiet. You can overdo it, but for a podcast you'd notice you were overdoing it pretty quickly.

    A limiter is a type of compressor that sets a hard ceiling on loudness, so if there's a sudden spike in volume, it 'limits' it, pushing it down as well. Both are tried and true techniques for controlling audio and making you sound more even, without making your listener ride the volume knob.
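    To make the two ideas concrete, here is a minimal static sketch in Python with numpy (the threshold, ratio, and makeup values are arbitrary illustrations; real compressors also smooth the gain with attack and release times, which this skips):

```python
import numpy as np

def compress(x, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    # Static compression: anything above the threshold is scaled
    # toward it by the ratio, then makeup gain lifts the whole track.
    threshold = 10 ** (threshold_db / 20)
    mag = np.abs(x)
    gain = np.ones_like(x)
    over = mag > threshold
    gain[over] = threshold * (mag[over] / threshold) ** (1 / ratio) / mag[over]
    return x * gain * 10 ** (makeup_db / 20)

def limit(x, ceiling_db=-1.0):
    # Hard limiting: a compressor with an effectively infinite ratio,
    # clamping everything to the ceiling.
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(x, -ceiling, ceiling)

# A quiet passage followed by a loud one: after processing, the quiet
# half comes up and the loud half comes down, narrowing the dynamic range.
x = np.concatenate([0.05 * np.ones(100), 0.9 * np.ones(100)])
y = limit(compress(x))
```

    Stock DAW compressors do the same basic math, just with proper envelope tracking and more controls.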

    I can't think of an audio recording tool that doesn't have some type of compressor/limiter built in, since they're simple tools. If you all agree on settings, it will go a long way toward making you sound more even.

    In general when you're working with a podcast, you don't need to worry much about EQ and other things related to music.

  • Thirith Registered User regular
    Thanks. I'm already doing those things, but I'm basically following instructions semi-blindly, tweaking a little here and there without a deep understanding of what I'm doing.

    In this case, it's specifically about EQ. I was able to make the awful recording at least somewhat clearer by boosting the midrange, but again, I'm fumbling around in the dark, mostly. It's that sort of technique that I would prefer to have a better understanding of.

  • Hahnsoo1 Make Ready. We Hunt. Registered User regular
    Thirith wrote: »
    Thanks. I'm already doing those things, but I'm basically following instructions semi-blindly, tweaking a little here and there without a deep understanding of what I'm doing.

    In this case, it's specifically about EQ. I was able to make the awful recording at least somewhat clearer by boosting the midrange, but again, I'm fumbling around in the dark, mostly. It's that sort of technique that I would prefer to have a better understanding of.
    My biggest advice, which should override all other advice, is to trust your ears. It doesn't matter if all the websites out there tell you to cut X frequency or boost Y frequency for an effect... if you listen to it and it doesn't sound right, don't make the change. Try not to apply the same EQ to every track at every recording... instead, get a reference recording and adjust your audio's EQ to match the reference, if you can. EQ is not a "set it and it's done forever" thing, as every recording will be slightly different, even under the same conditions.

    The big example I use is the high pass filter. A lot of websites will tell you to cut below 100 Hz with a 12 dB/octave (or steeper) slope. The usual justification is that the human voice doesn't go that low (I'm a bass singer, and my lowest note is C2, which is 65.41 Hz, but I digress). While that may be true, a lot of voices rumble with subharmonics that are lost when you do this (which you may STILL want to cut anyway, for intelligibility, mind you), and sometimes you can reduce the harshness of a voice overall by not using the high pass filter (a subtractive cut in the low end makes you hear the high end more, which can cause harshness). I still high pass filter a lot of things (footsteps, AC noise, and movement are easily removed with a high pass), but I always A/B test it first (listen to the unfiltered and filtered audio side by side).
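    If you want to experiment with that exact filter outside a DAW, here's a short Python/scipy sketch (my choice of tools, nothing Reaper-specific); a 2nd-order Butterworth gives roughly the 12 dB/octave slope mentioned:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100  # sample rate in Hz
# A 2nd-order Butterworth high-pass at 100 Hz rolls off at
# roughly 12 dB/octave below the cutoff.
sos = butter(2, 100, btype="highpass", fs=fs, output="sos")

t = np.arange(fs) / fs
rumble = np.sin(2 * np.pi * 50 * t)   # stand-in for room rumble / AC hum
voice = np.sin(2 * np.pi * 440 * t)   # stand-in for voice content
filtered = sosfilt(sos, rumble + voice)  # rumble attenuated, voice kept
```

    A/B testing is then just comparing `rumble + voice` against `filtered` by ear.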

    In general, I would suggest boosting wide (lower Q, a "gentler" bell curve) and cutting narrow (higher Q, a "sharper" one). For music I would recommend small changes (3-6 dB), but for podcasts and voice you can get away with starting at 6 dB for cuts and boosts, because you aren't trying to make music sound good.

    You can boost a narrow band (roughly 50 Hz wide) somewhere between 200-400 Hz (lower for deep baritone voices, higher for children or high-pitched voices) to increase fullness, but this can also increase "muddiness" (like someone speaking behind a closed window). Cuts in that range can increase clarity, but also make a voice sound thin. The 800 Hz - 1 kHz range can make someone sound nasal and whiny, so you can cut there to reduce that effect (although some people just have whiny, nasal voices, sadly... can't fix biology). You can also boost around 2-6 kHz to increase clarity, but this can boost sibilant sounds (sss, fff) as well and make a voice sound harsh (especially audio recorded over most VOIP programs).

    I usually sweep the cut or boost slowly up and down the frequency range while listening to the audio until I find the spot I like.
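    Those bell-shaped boosts and cuts are peaking filters. If you want to play with them outside a DAW, here is a small Python/scipy sketch using the standard RBJ Audio EQ Cookbook biquad formulas; the specific frequencies and Q values below just illustrate the advice, they aren't prescriptions:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs=44100):
    # Biquad peaking EQ coefficients (RBJ Audio EQ Cookbook).
    # Positive gain_db boosts a bell around f0, negative cuts it;
    # lower Q means a wider, gentler bell, higher Q a narrower one.
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Wide +6 dB boost around 300 Hz for fullness...
b_boost, a_boost = peaking_eq(300, 6.0, q=0.7)
# ...and a narrow -6 dB cut around 1 kHz to tame a nasal resonance.
b_cut, a_cut = peaking_eq(1000, -6.0, q=4.0)

def eq_chain(x):
    return lfilter(b_cut, a_cut, lfilter(b_boost, a_boost, x))
```

    ReaEQ's band types are the same family of filters, just driven from a GUI.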

    Feel free to low pass filter 15 kHz and above. Only teenagers and kids can hear in that range anyway. Erm.

    The reason old landline telephone audio sounds so bad is that the frequency response of the telephone channel is only about 300 Hz to 3.4 kHz. That means massive cuts at both the low end and the high end, so if you really want to make a voice sound like a butt, you can cut in a similar manner. I find this reference useful because it gives me a mental example of how EQing out the high and low end sounds when you overdo it.
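    That telephone sound is easy to fake with a band-pass filter; here is a quick Python/scipy sketch (the filter order is my own choice):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
# Keep only the classic landline band, roughly 300 Hz to 3.4 kHz.
sos = butter(4, [300, 3400], btype="bandpass", fs=fs, output="sos")

def telephone(x):
    # Everything below ~300 Hz and above ~3.4 kHz gets cut hard,
    # which is the over-EQed sound described above.
    return sosfilt(sos, x)
```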

    All of this comes with practice. And I get it, it's hard to get practice in without someone at your shoulder telling you "do this" or "cut here". I can highly recommend loading some dialogue files into TrainYourEars and hearing what cuts and boosts do to your audio. Once you get a baseline of what things sound like ("Hey, a cut here sounds like an AM radio broadcast. A boost here makes me sound like I'm shouting behind a door"), it becomes so much easier to fix things in your audio.
