Honestly, most of the mixes and masters from the Mangini era have been good, or at the very least fine. ADToE gives every instrument ample room to breathe, nothing steps on anything else's toes, and the mastering is neither classical-levels-of-quiet (where you have to turn the volume up to max to hear the slow movement, and then someone freaking coughs and your house explodes from the sudden jump) nor painfully loud. The Astonishing is in a similar situation. D/T tends to push Rudess a bit back in the mix, sure, but it doesn't have any problems outside of that. DT12, on the other hand, is loud. Very loud. Things are constantly fighting each other for attention, Rudess is pushed back in the mix again, and the album is so loud it's almost painful to listen to. DT12 has some issues in its mix, and it probably has the worst mastering job of any DT album.
Now let's break down some of the common complaints about poorly mixed and mastered albums:
1. Why Are Albums So Loud These Days?

First off, I want to clarify that albums being too hot is mostly a fault of the mastering, not the mixing. The mixing, arrangement, composition, and recording all play a role in the dynamics, but most of the problems arise from mastering.
It's basically common knowledge that songs are, on the whole, getting louder. While a lot of the complaints are probably exaggerated (not every song is completely brickwalled, and even most loud recordings retain a bit of dynamics), the claims aren't completely unfounded. Recordings from the 20s to the 40s tended to sit around -15 dB; levels climbed steadily from the 50s to the 80s, then accelerated during the 90s and exploded during the 00s, where they've stayed. Songs now tend to hover around -5 dB, and those sitting as low as -15 dB are very rare (though those at 0 dB are rarer still).
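To make those numbers concrete: dB values describe ratios on a logarithmic scale, with 0 dB here meaning digital full scale. A minimal Python sketch (my own illustration, not tied to any particular loudness standard) of what the jump from -15 dB to -5 dB means in linear amplitude:

```python
import math

def db_from_amplitude(a):
    """Convert a linear amplitude (1.0 = digital full scale) to dB."""
    return 20 * math.log10(a)

def amplitude_from_db(db):
    """Inverse: linear amplitude for a given dB level."""
    return 10 ** (db / 20)

# A -15 dB master vs a -5 dB master, in linear terms:
quiet = amplitude_from_db(-15)   # ~0.18 of full scale
loud = amplitude_from_db(-5)     # ~0.56 of full scale
print(round(loud / quiet, 2))    # the jump is roughly a 3.16x amplitude increase
```

So the "loudness war" climb from -15 dB to -5 dB is a bit more than a tripling of signal amplitude, which is why the difference is so obvious when shuffling between eras.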
[1]*
Now why did these jumps occur?
Well, these three jumps coincide with three moments in music history: the explosion of rock'n'roll in the 50s, the adoption of the CD in the late 80s, and the advent of streaming and digital files in the 00s. Rock'n'roll pushed the limits of what was considered an acceptable level of noise and volume, especially with its introduction of the distorted guitar and louder drum and vocal styles. Around this time, the amplifier, microphone, and 45 rpm record were all being introduced into the music industry, which allowed greater dynamic heights to be reached. [2] It also doesn't help that things like a song's length and its placement on the record affect its dynamics, bass frequency levels, and speed. [3]
The second of these three jumps happened in the late 80s, when the Compact Disc started making waves in the music industry. The CD had a unique advantage over the vinyl record and the cassette: it allowed for a much greater dynamic range than ever before. It could go as quiet as it wanted and as loud as it wanted. Interestingly enough, this also coincided with the birth of grunge and the rise of metal, two genres known for being loud.
The third of these jumps happened in the 00s, with the universal acceptance of the CD as the go-to form of music distribution and (as erwinrafael pointed out) the advent of streaming and digital files. Eliminating giant leaps in dynamic range makes for a smoother listening experience when shuffling between songs by different artists, and it's part of the reason why every music service automatically applies some sort of loudness normalization to everything uploaded. I can attest from personal experience that it's really annoying to have to turn the volume WAY up for a classical piece and then get deafened by a Dream Theater or Taylor Swift song (or by another part of that same classical piece). One thing to note is that dynamic levels haven't just gotten louder, they've gotten more homogeneous too. Almost every song tends to hit the -10 dB to -5 dB range, whereas in the 20s-40s it was really anything goes.
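As a toy illustration of that automatic leveling: the gain a service applies is essentially the difference between the track's measured level and a playback target. The -14 target below is my assumption (services differ, and real systems measure LUFS rather than a simple dB figure), so treat this as a sketch of the idea, not any service's actual algorithm:

```python
def normalization_gain_db(track_level_db, target_db=-14.0):
    """Gain (in dB) to bring a track to the playback target.
    Negative = turn it down (hot masters), positive = turn it up."""
    return target_db - track_level_db

print(normalization_gain_db(-5.0))    # a hot master gets turned DOWN by 9 dB
print(normalization_gain_db(-20.0))   # a quiet classical track gets turned UP by 6 dB
```

The upshot: on a normalized platform, a brickwalled master buys you nothing, because it just gets turned down to meet everyone else.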
Now, the question is, why have things gotten louder? After all, the ability for recorded music to get quieter has also increased in the last 30 years. Well, there are two phenomena at play here: 1) as mentioned previously, the homogenization of dynamics to fit streaming culture, and 2) the fact that loud music simply sounds better to us. [4] Radio stations, streaming apps/sites/whatever, digital music stores, and record companies all want to take advantage of this phenomenon and ensure their product sells. It's also why most live amateur DJ sets end up with blown-out stage monitors.
However, an interesting peculiarity is that this decade is actually slightly QUIETER than the one that came before it. It's only by about 1 or 2 dB, but it's still worth mentioning. Weird, huh? And, as an aside, songs have actually gotten longer over time too!
*Yes, I know this is a chart on reddit, but they DO cite their sources there, which I checked out, and they are legit.
[2] Bogdanov, Vladimir, et al. All Music Guide to Rock: The Definitive Guide to Rock, Pop and Soul. Backbeat Books, 2002.
[3] Gray, Kevin. “PRODUCING GREAT SOUNDING PHONOGRAPH RECORDS (or Why Records Don’t Always Sound Like the Master Tape).” Record Technology Incorporated, 3 May 1997, recordtech.com/prodsounds.htm.
[4] Blesser, Barry. “The Seductive (Yet Destructive) Appeal of Loud Music.” EContact!, no. 9.4, June 2007.
2. Why does the song sound muddy and cluttered?

This actually isn't the master's fault! It can arise from poor composition, arrangement, or mixing.
1) Composition: probably the rarest source of this problem in modern music. One of the biggest things you learn about in music school is counterpoint, which according to Wikipedia "...is the relationship between voices that are harmonically interdependent (polyphony) yet independent in rhythm and contour." [5] Basically, it's like how Iron Man is his own character, but he comes to form a part of the Avengers. He is simultaneously his own independent character and "a part of a much bigger universe". How does this concept relate to something sounding muddy and cluttered? Well, it could be that the song has too many distinct melodic voices occurring at the same time, or that there are a lot of similar-but-not-identical rhythms, or that the melodic voices are written in a way that makes them less independent of each other. Modern rock/pop/dance/rap/metal is not very contrapuntal, though, so this most likely isn't a big issue. In metal, however, poor rhythmic layering is probably the biggest culprit!
Here's a good explanation of rhythmic layering using Gourmet Race from Kirby Super Star as an example.
2) Arrangement: more common than you'd think! There are a number of ways in which arranging can make a song sound muddy. The first is related to the harmonic series. Humans (that's us!) have trouble distinguishing frequencies and pitches at the extremes of the audible range. So we have a harder time parsing out two separate "voices" (a fancy term for a unique part of an instrument) when they're really low or really high. That's why you'll frequently see orchestration books recommending that you keep all the crazy contrapuntal stuff in the middle-low ("tenor"), middle ("alto"), or middle-high (usually "soprano", though that can leap up into the stratosphere) registers rather than at the extremes, especially the low end. If you have the bass drum doing its own thing, the bass guitar doing its own thing, and two low rhythm guitars doing their own things, it's going to be hard to hear it as anything other than mush. I bring this up because I remember hearing a mediocre prog metal song not too long ago that did this exact thing. I can't remember what song it was, because it failed to hold my attention.
It could also be that the arrangement is too "bottom heavy", where there's a lot of stuff going on in the low end and it ends up drowning out the middle and higher parts. The Enemy Inside is a really good example of this, where the T H I C C guitar and bass end up stepping on the keyboards' and vocals' toes during the choruses. The main part of Blind Guardian's The Ninth Wave (and, to a lesser extent, And Then There Was Silence) also has this problem.
Finally, and this is related to mixing, the dynamics might be bad for a particular section or song or whatever. By this I mean there's no sense of priorities in the song. Everything is constantly demanding equal attention on the part of the listener and they end up stepping on each other's toes. Sometimes the best thing you can do is push a less important part further back so that the important stuff can shine.
3) Mixing: In mixing, we have a "holy quintinity" (dibs on THAT band name) of sorts: volume, panning, EQ, compression, and reverb. These are what help define the "3D space" that a recording or live concert's sound takes up. Volume defines the forward-backward positioning; panning, the left-right; EQ, the up-down; compression, the amount of forward-backward movement and how the instruments breathe; and reverb, the characteristics of the space and the specifics of location within it. Instruments and sounds are slotted into a 3D box using this information. Now what happens when two instruments or sounds end up overlapping each other in this box? That's right, they end up competing for attention and you get that muddy/cluttered sound.
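A rough sketch of two of those five dimensions, volume for depth and panning for left-right (the function name and numbers are mine, purely illustrative; the constant-power pan law is a standard trick so perceived loudness stays even as a sound sweeps across the stereo field):

```python
import math

def place(sample, gain_db, pan):
    """Place a mono sample in the stereo field: gain_db sets depth
    (quieter = further back), pan in [-1, 1] sets left-right using
    a constant-power pan law."""
    gain = 10 ** (gain_db / 20)
    angle = (pan + 1) * math.pi / 4     # map -1..1 to 0..pi/2
    return sample * gain * math.cos(angle), sample * gain * math.sin(angle)

center = place(1.0, 0.0, 0.0)      # dead center: ~0.707 in each channel
hard_left = place(1.0, 0.0, -1.0)  # all signal in the left channel
```

Note that dead center gives ~0.707 (not 0.5) per channel; that's the "constant power" part, since 0.707 squared in each channel sums back to full power.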
An instrument that's too loud can end up drowning out other instruments, or you have that problem I mentioned earlier where everything is trying to compete for the listener's attention because the priorities aren't set. Not mentioned nearly often enough is panning. If everything is in the left, right, or center, you're bound to get more toe-steppin' than awkward cousins at a Kentucky prom. If an instrument with a lot of frequency content (like a distorted guitar/synth/cello or a brass instrument) isn't EQ'd aggressively enough or it has too much going on elsewhere, you might get some muddiness. Compression is also a handy tool for creating space, to the point that it's abused by side-chaining the bass to the drums (side-chaining is basically "one comes in, other goes out. Can't expla-well, actually I guess you can never mind") in genres like Electrohouse. But it's best used for augmenting groove by allowing instruments to breathe. Too aggressive of a reverb can also add unwanted frequencies into a mix, and might end up stepping on another instrument's toes! Usually, you need a good combination of all 5 to get a mix to both breathe and leave enough space for Jesus in the mix.
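Here's a toy version of that side-chain ducking ("one comes in, other goes out"). Real compressors use attack/release envelopes and a ratio rather than this all-or-nothing switch, and the threshold and depth values below are made up for illustration:

```python
def sidechain_duck(bass, kick, threshold=0.5, depth=0.8):
    """Wherever the kick's level exceeds the threshold, pull the
    bass down by `depth` so the kick punches through the mix."""
    return [b * (1.0 - depth) if abs(k) > threshold else b
            for b, k in zip(bass, kick)]

# Kick hits on the first sample: the bass ducks there, then recovers.
print(sidechain_duck([1.0, 1.0, 1.0], [0.9, 0.1, 0.0]))
```

That rhythmic "pumping" of the bass against the kick is exactly the Electrohouse effect mentioned above, and in gentler doses it's the groove-enhancing breathing too.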
And I want to echo the earlier claim that dynamic range has literally nothing to do with the quality of music. It's almost completely meaningless as a quality metric. A song that centers around -15 dB and a song that centers around -5 dB might have the exact same dynamic range. It has no effect on the quality of a mix or a piece of music. If it did, then Bach wouldn't be the literal, objective greatest composer ever.
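To illustrate that point with a deliberately crude peak-based measure (real DR metrics are more involved, so this is just a sketch): scale a song up or down and its dynamic range doesn't change at all.

```python
import math

def dynamic_range_db(samples):
    """Crude dynamic range: loudest peak vs quietest (nonzero) peak, in dB."""
    peaks = [abs(s) for s in samples]
    return 20 * math.log10(max(peaks) / min(peaks))

quiet_song = [0.05, 0.1, 0.2]            # sits low in level...
loud_song = [s * 4 for s in quiet_song]  # ...same song, mastered ~12 dB hotter

print(dynamic_range_db(quiet_song))      # ~12.04 dB
print(dynamic_range_db(loud_song))       # identical ~12.04 dB
```

The absolute level cancels out of the ratio, which is why "centers around -15 dB" and "centers around -5 dB" says nothing about how dynamic, or how good, the music is.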
[5] Stewart-Macdonald, R. “The Complete Musician: An Integrated Approach to Tonal Theory, Analysis, and Listening. By Steven G. Laitz.” Music and Letters, vol. 95, no. 2, 2014, pp. 318–321., doi:10.1093/ml/gcu015.
3. Complaining About The Snare

The thing that armchair audio engineers do the most. Most complaints about the snare fall into three categories: 1) internet-intellectual gesticulating, 2) weirdly specific personal preferences, or 3) a genuinely bad snare sound (the rarest of the three). If someone starts complaining about the snare around you, it's best to lie back, think of English Rock, and wait for them to finish.
(for a live performance version, replace with old people complaining about the "drums being too loud", which is also an inevitability)