Synth FX: Part 3

Using Onboard Synth Effects By Paul Wiffen
Published June 1999

Having looked at the history and development of built‑in effects on synths over the last decade and a half, Paul Wiffen now explores the practicalities of using them, and how to get the best from the keyboards and modules which contain them. This is the third article in a four‑part series.

For our hands‑on look at onboard effects, we shall take the same chronological approach that we did when tracing the development and expansion. This has the practical advantage that the first effects I will talk about here are likely to be present on whatever synth you have, but as the article progresses the effects I describe will become more exclusive to more recent synths. Those who don't like to read about how to use what they haven't got can drop out when something they don't recognise turns up, whereas those who continue past the point where their own facilities are covered may well find something to whet their appetite for a more modern machine.

Chorus

Roland's Juno‑series analogue synths incorporated a chorus unit principally to thicken up their single‑oscillator‑per‑voice sound.

The original effect I first came across all those years ago in analogue form (see part 1 of this series from the March issue) is still going strong on all effects‑laden synths and keyboards. As a general rule, it makes thin things sound fatter, small things sound bigger and cold things sound warmer. I often think that a chorus unit in the Yamaha DX7 could have extended its life by another five years (that's essentially what everyone did with the Yamaha TX816 — which was basically eight DX7s in one box — by detuning the individual tone generators against each other), and when Yamaha's SY77 and SY99 added one, it made it much easier to use FM sounds live without a lot of processing plumbed into the mixer.

Chorus essentially takes an electronic copy of the sound (just like a very short delay unit) and then moves that copy around in time relative to the original by constantly changing (or in techspeak, modulating) the actual length of the delay time. Originally that copy of the sound would have been analogue, made by discrete circuitry as on my old Elka Synthex or a Roland Juno, but now it is a simple numerical copy of the sound in the digital domain. The clever business nowadays is in the mathematics of how the copy and the original are recombined.

However, the user interface on even the most recent machines still tends to use the terminology of the old analogue parameters. There will be an LFO of some sort (though this may just be referred to obliquely by a single Speed or Rate parameter used to alter the LFO frequency) and on more complex implementations, you may even be able to change the waveshape the LFO uses (normally between triangle and sine — sine gives a smoother transition around the extremes of delay modulation). There is also bound to be a Depth or Amount control, which simply sets how large the variation in delay times is — this is what governs how rich/warm/obvious the chorus effect is. Additional parameters may include fixed pre‑delay (two if you have a stereo chorus implementation) and the ability to change the modulation source of LFO speed, so that the speed is governed by things like keyboard aftertouch or velocity. Of course, if you have the old preset chorus amounts of the early‑'80s machines, then the designers may have chosen two or three settings they found particularly pleasing and fixed them with resistor values. Switching between them is therefore the only choice you'll have.
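For the curious, the Speed/Depth description above can be sketched in a few lines of code. This is purely a toy illustration, not any manufacturer's algorithm; the function name and the parameter values are invented, but the structure — a short delay whose length is swept by a sine LFO, with the delayed copy mixed back against the original — is exactly what the text describes.

```python
import math

def chorus(signal, sr=44100, rate_hz=0.8, depth_ms=3.0, base_ms=15.0, mix=0.5):
    """Minimal chorus sketch: mix the input with a copy whose delay time
    is swept by a sine LFO around a fixed base delay (the pre-delay)."""
    out = []
    for n, x in enumerate(signal):
        # the LFO (Speed/Rate) sweeps the delay between
        # (base - depth) and (base + depth) milliseconds (Depth/Amount)
        lfo = math.sin(2 * math.pi * rate_hz * n / sr)
        delay_samples = (base_ms + depth_ms * lfo) * sr / 1000.0
        # linear interpolation between the two nearest delayed samples
        i = n - delay_samples
        i0 = int(math.floor(i))
        frac = i - i0
        if i0 >= 0 and i0 + 1 < len(signal):
            delayed = (1 - frac) * signal[i0] + frac * signal[i0 + 1]
        else:
            delayed = 0.0
        out.append((1 - mix) * x + mix * delayed)
    return out
```

Raising `depth_ms` corresponds to turning up the Depth control — a bigger variation in delay time and a richer, more obvious effect — while `rate_hz` plays the role of the Speed parameter.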

Another name by which chorus appears on many synth DSP units is Ensemble — because, used correctly, it can make certain solo instruments sound like multiple players. Ensemble chorus is usually a variation where the parameters have been restricted or refined to help you achieve this particular effect quickly. There may be a parameter called Shimmer or Variation which actually adds a second level of very small fast modulations to the LFO'd delay, on top of the basic slower modulation. However, using this on the wrong sounds (basses in particular) can make them sound uneven, jittery or even plain out of tune, so restrict the use of Ensemble types of chorus to those instruments which are played in ensembles, like strings, brass, and so on.

The obvious sounds on which to use the more general chorus effect include the original sounds which cried out for it and led to its early inclusion on synths: strings, basses, and solo leads which sound thin or isolated in the mix. But try very small amounts of it on almost anything. The often subtle change which it makes can add character and presence to almost any instrument. Over‑use, on the other hand, can make a mix muddy, with individual instruments indistinct and blowsy (though of course, this may be exactly what you are looking for!).

Delay/ADT

Yamaha's SY99 made good the omissions of the DX7 by adding a built‑in chorus.

Delay was one of the very first digital effects to feature on synths and, as the name suggests, allows you to repeat your source sound a short time after the original for something which can produce an echo or doubling effect, depending on the delay time.

One of the problems with modern digital delays produced by DSP algorithms is that they can sound too good. In real echoes or in the old tape‑based delay units, there is a tendency for the high frequencies to die away, making each echo or repeat duller than the previous one. Of course there is nothing to stop you using artificially bright delays, but if you are after a more natural‑sounding echo, most modern DSP delay algorithms on units built into keyboards now have the wherewithal to recreate it. Refinements like High‑Frequency Damping or HFD (a low‑pass filter to you) can help reproduce the loss of high frequencies that occurs in natural echoes; to get the classic retro delay sound on modern keyboards, you may also need to add some chorus and/or tape saturation, amp simulation or decimation (sample rate/bit reduction, which we look at next time). Most modern multi‑effects in synths will allow you to do at least one of these, but hopefully the PCM source sounds in your keyboard are sufficiently well recorded and reproduced that whether you decide to go for the retro delay is a matter of taste rather than necessity, as it was with more vintage keyboard sounds. For example, the Mellotron could sound very harsh without putting it through one of these old analogue delays, and the sound that you hear on most old Mellotron recordings would probably have been treated with analogue tape delay to soften it up and make it less abrupt.
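The high‑frequency damping idea is simple enough to sketch in code. Again, this is a hypothetical toy (the function name and parameter values are mine, not any unit's actual algorithm): a feedback delay line in which every repeat passes through a simple one‑pole low‑pass filter, so that each echo comes back a little duller than the one before, just as tape does.

```python
def damped_delay(signal, sr=44100, delay_ms=350.0, feedback=0.5,
                 damping=0.4, mix=0.5):
    """Toy feedback delay with high-frequency damping: repeats are
    smoothed by a one-pole low-pass filter each time round the loop."""
    d = int(sr * delay_ms / 1000.0)
    buf = [0.0] * len(signal)  # holds input plus the fed-back repeats
    lp = 0.0                   # low-pass filter state (the "damping")
    out = []
    for n, x in enumerate(signal):
        delayed = buf[n - d] if n >= d else 0.0
        # HF damping: smooth the repeat before feeding it back, so each
        # successive echo loses a little more top end
        lp = (1 - damping) * delayed + damping * lp
        buf[n] = x + feedback * lp
        out.append((1 - mix) * x + mix * delayed)
    return out
```

Setting `damping` to zero gives the artificially bright, "too good" digital repeats the text mentions; pushing it up makes the tail progressively darker and more tape‑like.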

Another advantage of modern delay units is that they are more finely adjustable than tape‑based analogue units, so shorter and more accurate delays can be achieved. This will allow you to recreate other early studio techniques like ADT. If the delay is short enough it makes it sound like the instrument has been double‑tracked (engineers nicknamed it ADT — Automatic Double Tracking, because it was so much faster than trying to get a musician to play exactly the same thing twice). Previously, double‑tracking a lead part or a bass line had been the best way of fattening things up. Nowadays, a digital delay can be used to do the same thing more reliably in real time (and save a track on the multitrack as well). This was a very popular effect in the early '80s, and could be used to create anything from the slapback echo so beloved of Elvis Presley and John Lennon, to extremely close doubling which sounded like scat jazz musicians showing off just how tightly they can play. ADT settings on built‑in DSP units are good for giving a part a nostalgic '60s/'70s feel or just a bit more presence in the track, but for real fattening or thickening up of a sound, you need to go for chorus, as discussed earlier.

Phasing & Flanging

Korg's revolutionary M1 workstation was one of the first synths to offer built‑in reverb.

Two extreme variants on chorusing actually have their own names, drawn in one case from the technical name for what happens when the delay is made very short, and in the other from the tape‑based process by means of which the effect was originally produced.

When the length of time by which one part is delayed with respect to the other becomes less than the wavelength of the constituent components of the sound, the delayed signal is said to be out of phase with the original. As a result, certain harmonics are cancelled out and others are exaggerated. When this is a static process (ie. when the delay and therefore the phase discrepancy remains constant), the sound can become thin and strange (like when you put the signal in mono to test for phase problems in sound‑reproduction systems). But vary the delay times — and thereby the phase relationships — and a very pleasing shifting in harmonic content and character takes place, known for reasons which should now be crystal‑clear as 'phasing'. Phasing works best on sounds with an already rich harmonic content, simply because the more harmonics are present, the more the sound can afford to temporarily lose some of those harmonics through phase cancellation and the more 'meat' there is for the phasing to work on.
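If you want to see exactly which frequencies a given static delay cancels, a little arithmetic does it. A frequency is nulled when the delay equals half its period (or an odd multiple of half‑periods), so the cancelled frequencies fall at f = (2k + 1) / (2 × delay). The function below is just a worked example of that formula — the name and limits are mine, not anything you will find on a synth:

```python
def notch_frequencies(delay_ms, up_to_hz=20000.0):
    """Frequencies cancelled when a signal is mixed with a copy of
    itself delayed by delay_ms: those whose half-period (or an odd
    multiple of it) equals the delay, i.e. f = (2k + 1) / (2 * delay)."""
    delay_s = delay_ms / 1000.0
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > up_to_hz:
            break
        notches.append(f)
        k += 1
    return notches
```

A 1ms delay, for instance, puts its first notch at 500Hz, then 1.5kHz, 2.5kHz and so on up the spectrum — which is why a harmonically rich source gives the phaser so much more to chew on than a thin one.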

Phasing is not ideal for sounds which are thin and lacking in higher harmonics, as the results are frequently too subtle to make out. Worse still, if the phasing catches the right frequencies, such thin sounds can almost completely disappear. This is especially true of bass sounds, although some recent keyboards have special Phasing/Chorus algorithms which only work on the higher harmonics. This is done by separating the high harmonics with a filter first. If you don't have this capability, even short delays may cause phasing on the lower harmonics because the delay may be shorter than their wavelength. The upper harmonics will not be so markedly affected, as the delay is still likely to be longer than their wavelength.

At the opposite end of the delay range, flanging makes the most marked effect of the delay‑related DSP effects by using feedback and high depths of modulation with relatively slow modulation cycles to produce sweeps through the harmonic content of the source sound. The feedback serves to highlight the affected frequencies even more by putting them back in the loop again (although high damping is often needed and provided to prevent howl‑round). Flanging allegedly gets its name from the fact that the earliest examples of it were produced by engineers who sync'ed two reel‑to‑reel tape machines playing the same signal and then dragged their fingers on the flanges of one set of tape spools, then the other, to change the relative position of one machine's signal to another. Again, it works best on harmonically rich signals. Of all the effects you might apply, flanging is one of the most noticeable. The human ear is very drawn to changes and exaggerations of the harmonic series in a sound, so anything you flange will stand out in a track. For this reason it is perhaps best used to highlight something temporarily — maybe a synth lead line which fills in between verses or other major elements in your piece — rather than leave it going all the time, or you may get a reputation as something of a hippy. Of course, if your music is in a more experimental vein, then this kind of over‑the‑top effect may be just what you need to set your stuff apart from the common herd.
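Structurally, a flanger is the chorus sketch taken to extremes: the delay is very short, the modulation sweep covers almost the whole delay range, and crucially the delayed signal is fed back into the loop. The sketch below is again purely illustrative (invented name and values, not a real unit's algorithm), but it shows where the feedback sits:

```python
import math

def flanger(signal, sr=44100, rate_hz=0.25, max_delay_ms=5.0,
            feedback=0.6, mix=0.5):
    """Toy flanger: a very short delay swept slowly from near zero to
    max_delay_ms, with feedback to emphasise the swept comb notches."""
    out = []
    buf = [0.0] * len(signal)  # delay line holding input + feedback
    for n, x in enumerate(signal):
        # slow LFO sweeps the delay over its full range each cycle
        sweep = 0.5 * (1 + math.sin(2 * math.pi * rate_hz * n / sr))
        d = 1 + sweep * max_delay_ms * sr / 1000.0
        i0 = math.floor(n - d)
        frac = (n - d) - i0
        delayed = 0.0
        if 0 <= i0 and i0 + 1 < n:
            delayed = (1 - frac) * buf[i0] + frac * buf[i0 + 1]
        # the feedback puts the affected frequencies back in the loop,
        # deepening the notches (too much and you get howl-round)
        buf[n] = x + feedback * delayed
        out.append((1 - mix) * x + mix * delayed)
    return out
```

With `feedback` at zero this collapses back into a plain swept‑delay chorus; it is the feedback term that gives flanging its unmistakable resonant whoosh.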

My favourite application of flanging is on fretless bass sounds, and most modern PCM‑based synths offer, if not a fretless sound itself, then something which can be converted into one with judicious use of the pitch‑bend wheel and a flanger algorithm. In no time, you'll be sounding like Pino Palladino on Paul Young tracks or John Giblin on Kate Bush's third album.

Reverb

We will finish this instalment by looking at the most ubiquitous spatial effect of the last twenty years, reverberation. It can be thought of as an extremely complex version of echo/delay, where the various different surfaces in an enclosed space each send their separate echoes back to the listener at slightly differing times and volumes, depending on the angle and composition of the surface. The good thing about reverb is that it puts a sound into a sonic space, and our ears find this a natural way to hear sounds, as the majority of sound we hear is as much made up of reverberations from the environment as the source sound itself. Often it is the reverb which gives a sound character and interest (think how much better your singing sounds in the shower than anywhere else), but over‑use can result in drowning the transients in the sound in a bath of reflections. Signals become indistinct and woolly, especially drum or percussion sounds. This is why long reverb tails are used on snares in ballads where there is the space for them, but not in faster music, where the smearing drowns the impact of the following drum hits.
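The "many surfaces, many echoes" picture can be caricatured in code. The classic cheap digital approach (the Schroeder design, long predating onboard reverb) runs several feedback delay lines in parallel, each with a different, unrelated delay time standing in for a different reflecting surface, and sums them. The sketch below is a deliberately crude toy with invented names and values — real units use far more sophisticated networks — but it shows why reverb is just "extremely complex echo":

```python
def comb(signal, delay, feedback):
    """One feedback delay line: a single stream of evenly spaced,
    steadily decaying echoes (one 'surface')."""
    buf = [0.0] * len(signal)
    for n, x in enumerate(signal):
        echo = buf[n - delay] if n >= delay else 0.0
        buf[n] = x + feedback * echo
    return buf

def simple_reverb(signal, mix=0.3):
    """Toy Schroeder-style reverb: four parallel delay lines with
    mutually unrelated times stand in for the many surfaces of a room,
    so their echoes interleave into a dense wash rather than a flutter."""
    delays = [1557, 1617, 1491, 1422]  # in samples; deliberately unrelated
    wet = [0.0] * len(signal)
    for d in delays:
        for n, y in enumerate(comb(signal, d, 0.7)):
            wet[n] += y / len(delays)
    return [(1 - mix) * x + mix * w for x, w in zip(signal, wet)]
```

Because the four delay times never line up, their repeats arrive at ever‑shifting intervals — the beginnings of the smooth tail we hear as reverberation rather than discrete echoes.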

The complexity of the interplay between all the different 'echoes' in a reverb meant that before digital signal processing it was extremely difficult to produce an authentic recreation of the reverberations heard in real spaces. In the days when DSP chips cost a fortune, good‑quality digital reverb was still a very expensive, stand‑alone luxury. But eventually DSP prices tumbled, and it became so cheap to produce DSP chips that digital reverb was added first to synths (Roland's D50 and Korg's M1), and then to samplers, organs and even digital pianos. Nowadays, even the cheapest portable keyboards seem to be swimming in it, and any decent keyboard workstation will have a reverb of the sort of quality that would have cost a fortune even 10 years ago as a stand‑alone unit.

Perhaps as a consequence, current fashion seems to favour sparse in‑your‑face productions which get by on an absolute minimum of reverb. Me, I still like buckets of it, but only on certain instruments in the mix. It is great for placing something like synth strings in the distance, so that a lead instrument can stand out against it, or to give a tired snare sound a character and presence it would otherwise lack. It is also good for placing a sound back in the mix when dropping the level would simply make it inaudible and leaving it dry would cause it to intrude over the lead vocal or other primary sound. As modern workstations or multitimbral modules handle an ever‑greater part of the production process, so built‑in reverb becomes increasingly important for placing sounds in the mix. The ability to send different amounts of the various timbres on a keyboard to a single reverb used as a global or master effect means that the sounds' apparent nearness to the listener can be controlled whilst keeping a sonic coherence that is lost when using half‑a‑dozen different reverbs on individual instruments.

A common problem arising from using built‑in reverbs on keyboards is that the reverb your keyboard sounds are treated with frequently cannot be applied to vocals and other acoustic (or non‑keyboard) sounds in your track (unless you are lucky enough to have a synth with an audio input that is routed through the internal effects). As a result, you may end up with a reverb on (say) your lead vocals which has a completely different decay time to the effect that is bathing your keyboard timbres. If this is what you want, fine, but it can tend to confuse the ear and destroy the cohesion in a mix. If you don't have an external input on your keyboard for processing other sounds through the internal effects, you can always try to match the type of reverb (hall, room, plate, and so on) and the actual times and settings between the reverb on your synth and the stand‑alone effects unit you are using to process the vocals and other acoustic sources. This should prevent the sonic confusion which can occur when you set up different sources in your mix in apparently greatly differing sonic spaces! Needless to say, though, if that is the effect you want to achieve, go ahead.

As touched on earlier in this series, one area which definitely does not benefit from reverb is the bass end of the spectrum. Any more than a smidgeon of reverb will make the bass end muddy and indistinct, and if you push it too far, the entire low end of your mix will turn into an all‑obscuring rumble. So start by defeating the reverb send on any parts with a significant low‑end content, then reapply it sparingly. You will find your productions gain in clarity and punch, and you can still swamp your higher frequencies in reverb if you want to. I know I do!

Next month, we will move from the ambient and spatial effects which place a sound in a virtual position to those effects which turn a sound inside out, grunge it up and generally make a sonic thug out of the wimpiest input. I am talking about the likes of Overdrive, Distortion and Decimation (bit reduction), which are best used when you don't like your starting sound very much at all, or you just want to get attention.

The Name Game

Synths these days offer a wide variety of effects, and manufacturers are increasingly inventive in the names they create for them. Sometimes you find cases of mutton dressed as lamb, where an old favourite is given a new name to convince you of the machine's groundbreaking technology, when in fact that machine is just a darn good workhorse. If what the effect actually does is not apparent from its name, how can you find out what it's likely to do to your sound? Try taking a look at the factory patches which use that effect. These days, when ever fewer users are actually bothering to program synths themselves (ironically at a time when manufacturers spend more and more time making the things easy to program), a huge amount of time and expertise goes into the creation of the synth's factory soundset, as this is felt to be what does or does not 'sell' the synth. As a result, factory presets tend to give a fairly good picture of the keyboard's strengths (and weaknesses) and this applies as much to the effects as the other parts of the machine's architecture. Tracking down those patches which use (say) the 'Intensifier' effect will quickly let you determine whether it is a fancy name for a loudness maximiser, a necessary renaming for a legal reason (eg. an audio enhancing process which cannot be called an 'Exciter', as that is an Aphex trademark) or something genuinely unique.

Once you've found a program which uses the effect, the first thing to do is to try and ascertain exactly what it is doing to the original sound. Look for a Bypass parameter which should allow you to quickly defeat the effect(s) and switch them back on again, so that you can make an A‑B comparison between the effected and the dry sound. If the synth has multi‑effects, make sure you know whether the bypass disables just the effect you are looking at or the entire multi‑effects circuitry. Both types of bypass have their uses — the latter is particularly good for discovering whether the synth suffers from D50‑itis (ie. its source samples really need the built‑in effects to hide their brevity and low fidelity) or whether the effects capability is the icing on the cake of a great set of ROM sounds.

The other way of identifying at least the general type of effect (apart from listening to it) is to run through the parameter listing for an effect. You may find that the list of parameters and the range over which they can be varied gives you a clue as to what family of effects this fancy‑sounding new algorithm belongs to. For example, if Delays of 10‑50 milliseconds are on offer, you have probably selected a Chorus/Delay (any less would create phasing effects, whereas greater delays would bring you into the realm of true multi‑tap delay/ADT effects).

Once you have discovered exactly what sort of effect it is you are dealing with (of course, you may be lucky enough to have a machine where the effects have simple titles like 'Chorus' or '7‑band EQ'), and the sorts of sounds the factory programs use it for, you can get on with the business of learning how it works and making the best of it. Try simply moving one parameter at a time to listen to the effect it has. Then move a second parameter to a different value and move the first one over the same range and try to hear what has changed. In this way you will not only learn what each parameter does, but how they interact (or not). In this regard, learning how to program effects is very little different from general synth programming. Parameters which do interact tend to be grouped together just as they are in the synthesis sections (although the groupings may well be less defined on the machine — try breaking the habit of a lifetime as well and look at the manual, as this may show the parameter groupings or even the effects architecture).