Stereo Spread: Timbaland Style

  January 1, 2016

A few months ago, in one of our Reason production tips, we wrote about how to use mixing gear (or the Reason mixer) to transform mono sounds into captivating stereo images. Since then, many of our readers have asked privately how to replicate that effect in a standard software DAW such as Audacity or Pro Tools, without having to route through a mixer. The answer is quite easy: do it Timbaland style. Timbaland produces using the Ensoniq ASR-10, a sampler that lets a producer work with two copies of one sample: hard-pan one copy to the left and the other to the right, then create a stereo image by delaying one copy relative to the other. Beyond creating a stereo image, the technique we show below paves the way for further stereophonic experiments that can give your productions a tantalizing shimmer other producers will be hard-pressed to match. We're serious. Check out how it's done:

Duplicate Your Sample

In many of Timbaland's tracks you will hear samples that Timbaland recorded, then tweaked to a perfect stereo image in his ASR-10 or in a sequencer. Take, for example, some of the background tracks in "Bounce". This is a very easy technique to perform on an ASR-10, but it's even easier in a DAW like Ableton Live. At left, in Ableton Live, is a mono recording we made of a few unrehearsed notes on the guitar. In Ableton Live, you can copy a sample clip easily by holding "CTRL" and left-click-dragging the clip to the next track. If your DAW doesn't allow this action, it most likely allows you to copy and paste. So, copy and paste your mono sample clip into a new track, and synchronize it with the original clip.

Pan Your Tracks

Once you have two identical mono tracks, the next step is to pan them to opposite sides of the stereo field. At left, in Live's mixer view, you can see that we've panned track 1 all the way to the left, and track 2 all the way to the right. To get to Live's mixer view, all you need to do is hit the "tab" key, which toggles between mixer and sequencer views. By creating two identical mono tracks - one panned hard-left, and one panned hard-right - we've essentially created a stereo track out of two mono tracks. But, in contrast with recording a single stereo track, panning two mono tracks has the massive advantage that you can change one side of the stereo image independently of the other.

Create Your Stereo Image

Now, you're ready for the final step. At left, we've zoomed to an extreme close-up of the very beginning of the sample clips, which start at beat 1:3 on the timeline, or the third quarter note of the first bar. From here, the process is easy. Simply click on either clip and offset the sample's timing by dragging it forward (to the right, graphically) by anywhere from 15 to 60 milliseconds. As you shift the clip from smaller offsets to larger ones, listen closely to how the stereo image changes. The greater the distance between the two clips' start points, the wider your stereo image. It's as simple as that.
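For readers who like to see the signal math, the whole duplicate/pan/offset pipeline can be sketched in a few lines of Python with NumPy. This is our own illustration, not anything the ASR-10 or a DAW does internally; the 30 ms offset and 440 Hz test tone are arbitrary stand-ins:

```python
import numpy as np

def widen(mono, sample_rate, offset_ms=30):
    """Duplicate a mono signal, hard-pan the copies, and delay one of them.

    mono: 1-D float array. offset_ms: delay applied to the right channel,
    in the 15-60 ms range the article recommends.
    """
    offset = int(sample_rate * offset_ms / 1000)          # ms -> samples
    left = np.concatenate([mono, np.zeros(offset)])       # original copy
    right = np.concatenate([np.zeros(offset), mono])      # delayed copy
    return np.stack([left, right], axis=1)                # shape (n, 2)

# 100 ms of a 440 Hz test tone at 44.1 kHz stands in for the guitar sample
sr = 44100
t = np.arange(int(sr * 0.1)) / sr
tone = np.sin(2 * np.pi * 440 * t)
stereo = widen(tone, sr, offset_ms=30)
print(stereo.shape)   # (5733, 2): tone length plus the 30 ms offset
```

Writing `stereo` out as a WAV file and A/B-ing it against the mono original makes the widening obvious on headphones.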

Create Your Own Sound

This method of creating a stereo image out of two mono samples is particularly strong, yet flexible. Not only is it fast and easy, but it can be done with any mixing software: Cubase, Audacity, Logic, you name it. Another advantage of this widening method is the independence you create by turning one sample signal into two independent samples, left and right. It's this independence, between left and right, that makes this technique such fertile ground for experimentation. Here's why: routing a mono sample through a stereo delay or chorus restricts you to using the same effects for both left and right. The pre-mix, sample-copying method we explain above gives you the control to treat each channel with its own set of effects processing, dynamics processing, or synthesis as desired.
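To make that independence concrete, here's a hypothetical Python/NumPy sketch that treats the two copies differently: tanh waveshaping stands in for an overdrive on the left copy, while the right copy stays clean. The drive amount and test tone are our own arbitrary choices, not a recipe from any particular producer:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
sample = 0.8 * np.sin(2 * np.pi * 220 * t)   # stand-in for the mono sample

# Left copy: overdriven (tanh waveshaping adds odd harmonics and grit)
left = np.tanh(4.0 * sample)
# Right copy: untouched, so each ear hears a different timbre of the same part
right = sample.copy()

stereo = np.stack([left, right], axis=1)
print(stereo.shape)
```

In a DAW you'd get the same result by dropping an overdrive plugin on only one of the two hard-panned tracks.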

If you enjoy Timbaland's production style, and you want to push new frontiers just like he does, this technique is a sure bet. By processing each channel - i.e., each side of the stereo field - with unique effects, you can treat the listener's brain to sonic goodies it has never tasted before. For example, what happens when you process one side of the stereo image with an overdrive, and the other with a subtle phase effect? The same sample is playing in each ear, but it has a different timbre in the right ear than it does in the left, lending not only depth, but also shape to the stereo field. Overdrive/phase was our arbitrary suggestion, but by now, your imagination should be generating plenty of its own possibilities. For now, we'll leave you to experiment. Keep the beats rollin' and the sequencer scrollin'.*

*When experimenting with the stereo image like this, don't forget to check for frequency conflicts, interference, or cancellation by summing your song to mono. This is a prudent step in every production, but it becomes especially important with experimental stereo production, because the risk of conflicts and cancellation is greater.
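The mono-compatibility check in that footnote can also be prototyped outside a DAW. In this rough Python/NumPy sketch (our own illustration), an energy ratio near 1.0 means the mix survives a mono sum, while a ratio near 0.0 means severe cancellation:

```python
import numpy as np

def mono_compat_ratio(stereo):
    """Sum a stereo signal to mono and report how much energy survives."""
    left, right = stereo[:, 0], stereo[:, 1]
    mono = 0.5 * (left + right)                      # simple mono sum
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return rms(mono) / (0.5 * (rms(left) + rms(right)))

sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 220 * t)

in_phase = np.stack([sig, sig], axis=1)    # identical channels
inverted = np.stack([sig, -sig], axis=1)   # worst case: polarity flip

print(round(float(mono_compat_ratio(in_phase)), 2))   # 1.0: fully compatible
print(round(float(mono_compat_ratio(inverted)), 2))   # 0.0: cancels completely
```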


89 Responses to “Stereo Spread: Timbaland Style”

  1. Emmanuel on April 8th, 2009 10:38 pm

    I had a question about mixing. I use Reason 4 and Sonar Home Studio 6. Do you mix beats in a DAW like Sonar, or do you mix down in a program like Reason? I heard that the ModernBeats production team uses Reason rewired with Cubase SX 3.0. For example, do they mix with Cubase, or Reason rewired to Cubase, or do they bounce all the multitracks into Cubase, like the demo songs on the ModernBeats website?

  2. Emmanuel on April 8th, 2009 10:40 pm

    Is that why the ModernBeats production team's demo song mixes are clean and wide?

  3. Hit Talk Staff on April 8th, 2009 11:34 pm

    Reason and Home Studio should be a solid combination. Yes, we’d advise you to re-wire Reason through Home Studio 6… Try installing some mastering plugins as VSTs in Home Studio. (check KVR or download some mastering plugins from a site like Kjaerhus Audio). Then try routing sub-mixes from Reason to the busses at the top of Reason’s GUI, and re-wire those sub-mixes to channels in Home Studio to be processed by independent plugins. Reason 4 has a pretty solid mastering suite though.

    Reason 4 is pretty jam-packed with instruments and effects, so often producers will use it as a stand-alone, and get pretty spectacular results. The main advantage to using it in conjunction with Home Studio, then, would be Home Studio’s ability to handle loops and recordings.

    And yes, we often use the widening technique we explained above; makes a big difference.

  4. leisurebeats on April 9th, 2009 4:23 am

    You can achieve the “delay effect” more easily, if you choose a tool like SAMPLE DELAY in Logic 8….

  5. Rodney Mayfield aka, Noc Blaxx on April 9th, 2009 6:31 am

    How do I avoid frequency conflicts using the Roland Fantom X-6? Is there a way to give each track its own frequency channel, and if so, can you briefly explain please? I know a lot about the Fantom, but this particular one has eluded me, and I believe this is the difference in my tracks banging like they should be. Thanks.

    Noc Blaxx

  6. 7horns on April 9th, 2009 6:33 am

    I do this all the time, but never realised what I was doing haha

  7. Hit Talk Staff on April 9th, 2009 9:04 am

    hey leisurebeats,

    Yeah, there’s nothing stopping you from using Logic’s sample delay. In programs like Audacity, though, it’s more convenient to just offset the sample, we think.

  8. Hit Talk Staff on April 9th, 2009 9:10 am

    Yo Rodney,
    That's the purpose of sculpting the timbre of each instrument with a parametric or graphic EQ. It's just proper mixing. Well… there is a way to do what you suggested, by using a crossover, but an EQ is the right tool for avoiding frequency conflicts.

  9. Thomas Lichtenstein on April 9th, 2009 12:01 pm

    At the end of the article it says:

    *When experimenting with the stereo image like this, don’t forget to check for frequency conflicts, interference, or canceling by summing your song to mono. This is a prudent step in every production, but it becomes especially important with experimental stereo production because the risk of conflicts/cancellation is greater.

    Can you explain how to sum a song to mono, and the best way to check for frq conflicts, etc. and resolve them?

  10. Hit Talk Staff on April 9th, 2009 1:16 pm

    It's different for every software. In Ableton Live there's an option to "convert to mono" in the render dialogue that sums to mono digitally. It's just a precaution. The solution would depend on the problem - on what effects you were using… If a mono sum sounds good and you don't hear clipping or attenuation, don't worry.

  11. Thomas Lichtenstein on April 9th, 2009 3:03 pm

    I’d know it if I heard clipping, but I wouldn’t know attenuation if it hit me in the head…

  12. Hit Talk Staff on April 9th, 2009 3:17 pm

    if you A/B your summed mix with the stereo mix, you’ll hear it.

  13. Ralph Bermz on April 13th, 2009 10:50 am

    it’s very good

  14. Thomas Lichtenstein on April 14th, 2009 12:17 pm

    I looked up attenuation and this is what I got:

    “Weakening in force or intensity”

    So you’re saying if I export the audio as a mono audio file and listen I might hear parts that sound weak? Like the sound fading away suddenly?

  15. John on April 15th, 2009 6:06 pm

    Hi thomas,

    Attenuation happens a lot with compression/limiting. When the signal attempts to exceed the threshold, it gets pulled back (attenuated); too much of this will make your signal sound like it's "pumping", but not in a good way.

    Hope that helps man.

  16. Emmanuel on April 16th, 2009 10:48 am

    Hi, I tried to rewire my Reason 4 to Sonar Home Studio 6, but I can't render my session through Sonar. I did all of my MIDI in Reason and routed my audio through Sonar.

    Do I have to record all of my MIDI in Sonar in order to export my whole session?
    Do I have to route all of my devices in Reason into the hardware interface at the top of Reason, and route them all to separate channels in Cakewalk Sonar, so that I can mix with Sonar?

  17. Emmanuel on April 16th, 2009 10:51 am

    What do you mean by frequency clashing? Is it EQing every instrument, or what? And what is the frequency part of a mix about?

  18. Emmanuel on April 16th, 2009 10:56 am

    How come when I rewire my Reason to Sonar, all of my multi channels routed at the top of my Reason rack come out mono? Do I have to pan two channels, one hard left and the other hard right? And how would I mix when everything is panned hard left and right? What if I wanted to do automation with panning and the channels are panned hard left and right?

  19. Hit Talk Staff on April 16th, 2009 8:08 pm

    Hi Emmanuel,

    Yo, we'll help you with what we can. We've been working hard on a release just at the moment… Yes, routing sub-mixes means routing to the hardware interface (or hardware device) in Reason. If you want to mix your Reason tracks in Sonar, then yes, you have to route to separate stereo channels in the hardware interface, and make sure to arm the corresponding Home Studio channels to record the output of those busses on the hardware device. You shouldn't have to pan left and right on separate channels - in every DAW worth its salt, there's a way to record left/right on one channel.

    Let us know if that helps. On the frequency stuff, sign up for the 10 email tips.

  20. Hit Talk Staff on April 16th, 2009 8:20 pm


    John's operational example of attenuation is essentially correct; however, in this context, the attenuation might come from the peaks of the left channel waveform matching up with the troughs of the right channel waveform.

    There could be any number of risks… Basically we’re just offering the warning that if you’re using two different effects on a left and a right channel of the same instrument or track, there is a risk that when converted to mono, it might sound ugly compared to the stereo version… just sum and check…

    But it may not apply to you at all: maybe your beat will never be converted to mono - YouTube uses stereo now, and AM radio is used less and less - so it's a good bet that if you make a beat in stereo (unless it becomes a huge success and gets played everywhere) it'll stay in stereo… You guys should sign up for the 10 free email tips; we think there are a few tips relating to this discussion.
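To put numbers on that peaks-and-troughs picture: a delay of d seconds between otherwise identical channels fully cancels frequencies near f = 1/(2d) when the channels are summed to mono (the first notch of a comb filter). A small Python/NumPy illustration with our own example values, a 1 ms delay notching 500 Hz:

```python
import numpy as np

sr = 48000
delay_ms = 1.0                          # offset between the two channels
f_cancel = 1000.0 / (2 * delay_ms)     # first cancelled frequency: 500 Hz

d = int(sr * delay_ms / 1000)          # delay in samples
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * f_cancel * t)

left = tone[d:]                        # trimmed so both arrays line up
right = tone[:-d]                      # same tone, delayed by d samples
mono = 0.5 * (left + right)

# The delayed copy arrives exactly half a cycle late, so peaks meet troughs
print(round(float(np.max(np.abs(mono))), 3))   # 0.0: the tone vanishes in mono
```

Other frequencies in a real sample are attenuated by varying amounts rather than erased, which is why the mono sum can sound thin instead of silent.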

  21. Thomas on April 17th, 2009 10:19 am

    I signed up for the tips the first time.

    I get the point finally of summing into mono. I’ve never had my tracks played anywhere in mono (unfortunately). Even game consoles (I’ve worked on a bunch of game tracks) are stereo. AM radio is something I haven’t listened to or thought of for at least 15 years. Heh

  22. Emmanuel on April 17th, 2009 10:20 am

    Is it true that R&B came from jazz?

  23. Emmanuel on April 22nd, 2009 9:58 am

    Do you think it would be better to bounce my Reason tracks into Sonar and mix down there? And is that the professional way to do it, like producers in the music industry?

  24. Emmanuel on April 22nd, 2009 10:00 am

    And is there a sound quality difference if I mix in either Reason or Sonar?

  25. Emmanuel on April 22nd, 2009 10:38 am

    Hi again, I hope I'm not asking too many questions LOL. But I was wondering, do you mix the beat and vocals together, or do you mix down the beat, bounce it into a DAW, and record vocals or the beat? Which way do pros do it? I mix in Cakewalk.

  26. Emmanuel on April 22nd, 2009 10:41 am

    Oops, I misspelled - I meant record over the beat with vocals, where the beat is a single stereo mix file and the vocals are being recorded in my DAW.

  27. John on April 22nd, 2009 2:46 pm

    Hi Emmanuel,
    I'd say bounce down all the single tracks before you mix, rewiring from Reason to Sonar. The advantages of this are great: when you import the tracks back into Sonar you'll be able to see all the tracks as waveforms, allowing you to manipulate them more efficiently.
    For better sound quality I would say use effects plug-ins in Sonar or get other VSTs; the Reason ones are OK but not that advanced.
    It depends on the situation when mixing a beat with a vocal; I think there is a tip on here already describing different scenarios and good processes.
    Peace man

  28. Emmanuel on April 22nd, 2009 10:55 pm

    Thanks for the info John, good lookin'. I'm mixing a whole song with the lead vocals, background vocals, and hook vocals, so do I mix those vocals with my beat in the same project? The beat is bounced in Sonar as single multi-tracks and the vocals are recorded over the beat in the same project.

    By the way I’m actually producing for my artist that was a former member of P-Diddy’s MTV Making The Band 2 back in 2001 he’s an R&B Singer.

  29. Hit Talk Staff on April 22nd, 2009 11:41 pm

    Hey Emmanuel

    Thanks, btw, for that answer, John, good lookin'. ;) We'd agree that bouncing down as single tracks is preferable to first summing everything to a stereo file in Reason (is that the gist of the conversation?). The other reason for that, besides what John mentioned, is that the summing in Sonar might be better. Sonar (at least the Producer Edition) sums using a 64-bit engine.

  30. Emmanuel on April 23rd, 2009 11:22 am

    I downloaded the 10 Free Music Tips and was reading about frequency conflicts. They talked about using the equalizer, and I was confused about what they mean by working with frequency using the equalizer. For example, how would I use my Sonitus Equalizer in Sonar to correct frequency conflicts? And do I have to EQ everything in my mix?

  31. Hit Talk Staff on April 23rd, 2009 3:28 pm


    A good ear is key. Being able to hear the mix with a flat-response monitoring system (in an environment that doesn't resonate sympathetically with certain frequencies) is also key. You can also see if Sonar has a spectrum analyzer plugin. You might also be able to download one. Using a spectrum analyzer on your main mix will help you decide which frequencies you need to adjust using your Sonitus EQ…

  32. Hit Talk Staff on April 23rd, 2009 3:30 pm

    PS: You don't need to EQ every single channel, except where there are conflicts. You may notice, for example, that the upper harmonics of your vocals and synths add together and become piercing at a certain frequency. In that case you may decide to use a lowpass filter or a bandpass filter to reduce the gain of the upper harmonics on the synth.
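As a toy illustration of that last point, here's a first-order (one-pole) lowpass in plain Python. It's far cruder than a Sonitus EQ band, but it shows the principle of pulling down a piercing upper partial while leaving the fundamental mostly intact; the 1 kHz cutoff and test frequencies are our own arbitrary choices:

```python
import math

def one_pole_lowpass(x, cutoff_hz, sr):
    """First-order lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

sr = 44100
n = sr // 10                                                            # 100 ms
low = [math.sin(2 * math.pi * 200 * i / sr) for i in range(n)]          # keep
high = [0.5 * math.sin(2 * math.pi * 9000 * i / sr) for i in range(n)]  # piercing

kept = one_pole_lowpass(low, 1000.0, sr)
tamed = one_pole_lowpass(high, 1000.0, sr)

# The 200 Hz tone passes nearly unchanged; the 9 kHz partial drops sharply
print(round(max(kept[n // 2:]), 2), round(max(abs(v) for v in tamed[n // 2:]), 2))
```

A real EQ gives you steeper slopes and adjustable bands, but the idea is the same: reduce gain only where the conflict lives.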

  33. John on April 23rd, 2009 5:36 pm

    Hi Emmanuel and HT

    No problem on the comment.. Nice to hear about the project Emmanuel, If you’ve got all the separated tracks you’ll be able to make a much tighter mix.

    How you approach the mix really depends on what's going on in the song and what you're looking to put in there effect-wise.. A good mix could take a good few hours, even days, so don't have your volume that loud - just turn it up for reference; this will prevent ear fatigue… I'm sure there's a tip in ModernBeats' 30 Tips explaining some good techniques. Be disciplined and see it out to the end, and try listening to the mix on a few different speaker systems besides your own: cars, studio headphones, iPod headphones, a friend's stereo, a club system if possible.. The idea is to get a good overall balance.. But like HT says, a good ear is key; use flat-response monitors like the Yamaha HS80M to get a true sound and you're on the way.. Anyways, hope it comes out good for you.

  34. Quick on May 14th, 2009 5:04 pm

    I was wondering how the offsetting of the two mono tracks could be accomplished in FL Studio.

  35. Hit Talk Staff on May 19th, 2009 11:33 pm

    Yo Quick. Does the FL playlist (sequencer) allow you to zoom in enough to move clips by milliseconds? Also, most programs have a magnetic (snap-to-grid) cursor that, if disengaged, allows you to move clips and MIDI data by small increments that aren't locked to 16th or quarter notes.

    If moving the clips doesn’t work, you can always use a delay on the second channel with a small offset time.
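A quick back-of-the-envelope conversion of the article's millisecond offsets into samples and beat fractions; the 44.1 kHz rate and 140 BPM tempo here are our own example values:

```python
# Convert 15-60 ms offsets into audio samples and fractions of a beat.
sr = 44100        # sample rate (example)
bpm = 140         # tempo (example)

offsets = {}
for ms in (15, 30, 60):
    samples = round(sr * ms / 1000)   # offset in audio samples
    beats = ms / 1000 * bpm / 60      # offset as a fraction of a quarter note
    offsets[ms] = (samples, beats)
    print(f"{ms} ms = {samples} samples = {beats:.3f} beats")

# Even 60 ms is well under a 16th note (0.25 beats), which is why snap-to-grid
# has to be disengaged before nudging the clip.
```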

  36. hebron on May 28th, 2009 3:11 am

    How do I become a good producer?

  37. Mainman on June 13th, 2009 7:08 am

    How can I mix vocals in FL Studio and Reason? Moreover, how can I make use of Rewire in a professional way? 'Cos I'm a self-taught producer, so I need more lectures from you guys. Big up, men.

  38. Hit Talk Staff on June 17th, 2009 4:19 pm

    In FL Studio you can use the "Edison" recorder. In Reason it's more difficult: you would have to record a wave file (synched to a click track) and import it into Reason via the Dr. Rex player. Using Rewire "in a professional way" is like using anything in a professional way. Rewire allows you to integrate two separate production suites. We've dealt with Rewiring a bit in the comments section of this article on bouncing audio.

  39. Emmanuel on June 19th, 2009 2:46 am

    While mixing and editing, do I normalize every single vocal audio track that I recorded? Is that the professional way to do it? I use Cakewalk Sonar.

  40. Hit Talk Staff on June 19th, 2009 7:28 am

    Not necessarily. Some producers even record at a lower gain when tracking multiple vocal tracks. If you're recording an 8-part harmony, or if you're doubling several takes, it's not necessary to normalize everything, or even to record at the highest possible gain without clipping.

  41. Emmanuel on June 24th, 2009 9:45 am

    I was curious if that was a Motif on the piano part of RnBKlubLoops1-Demo1.mp3 - I can tell a Motif by its distinctive sound. Also, what did you use to produce that whole demo track? I'm sure the drums are triggered from Reason's NN-XT sampler and rewired to some DAW.

  42. Hit Talk Staff on June 24th, 2009 1:10 pm

    It really depends on which sound developers worked on the project. In fact it could have been any one of those things. ;)

  43. Jack on June 29th, 2009 4:54 pm

    Hey guys, the question I have is: what are the best settings (ratio, knee, gain, etc.) for compressing hip-hop beats? I know it's different from track to track, but what's generally the rule of thumb? Also, how do you compress (and EQ) hip-hop snares to give them that hard hit? Again, what's the rule of thumb? Thanks a lot guys!

  44. Hit Talk Staff on July 1st, 2009 5:32 pm

    Jack, have you had a look at the other tips? There is one that answers your question pretty directly. (production tips in the orange nav bar)

  45. on July 2nd, 2009 12:18 pm

    Wats up, someone recommended your site to me and I have to admit that you guys have it all down… I'm looking to get started in music production, and reading your tips has given me insight into what I need to do, but still I was wondering if you could help a little bit on how to get started on a beat - that's where I have the most problems… thanx

  46. Emmanuel on July 14th, 2009 4:23 pm

    Hi, I was wondering how I make headroom for vocals when I mix a beat by itself, to send to a songwriter and artist. I want to mix my track down in a way that I can add vocals to later. Do I mix the vocals together with the beat? (For example, I bounced the whole beat in Sonar track by track - snare, kick, hi-hat, etc. - but do I add the vocals within these same multi-tracks in Sonar, mixing the beat and vocals at the same time if my computer can handle the heavy load? Or do I mix the beat separately and the vocals separately and then blend them together?) I'm trying to get the levels right so that the beat doesn't sound louder than the vocals or the vocals louder than the beat. Thanks HT

  47. Hit Talk Staff on July 24th, 2009 8:07 pm

    Yo Emmanuel. Sorry for the wait. The truth is you can do it either way. Probably the easiest thing for you to do will be to sum the beat first, and then add the vocals in a separate channel.

    What we recommend, the ideal scenario, is to build the bass and drum mix, then add the vocals. Then add your other song elements. The embellishments, etc… because that way the song can mold itself around the vocal line, which should take priority.

    With regard to attaining the right level, it doesn't matter which method you use. But the advantage to inserting the vocals into the original multi-track session of your beat is that - say you decide the guitar is just way too loud in the mids - you can use an EQ to carve those mids out of the guitar track, thus making the vocals more audible. The advantage of having that multitrack session open while you're putting the vocals into it is that you're going to be able to fix whatever's holding the vocals back. And remember to be creative with your vocal arrangements.

    sorry you got ignored, dude.

  48. Emmanuel on July 25th, 2009 12:37 pm

    That's ok, no problem, thanks for the info. Does Timbaland mix with everything together or separately? And do the vocals have to be 2 dB higher than the rest of the mix, or does everything have to be at the same level?

  49. Hit Talk Staff on July 25th, 2009 1:10 pm

    Good question about Timbaland. We could answer, but we'd be speculating. There are some videos on YouTube that give you clues to some of his production steps. If the vocals are supposed to be up front in the mix, try mixing them a little louder. 2 dB? Don't worry so much about level as about mixing. The vocals should just be clear. If the vocals aren't loud enough, usually the best practice is to bring down the rest of the mix, or the relevant parts of the mix.

  50. Lonnie on August 2nd, 2009 3:41 pm

    I used Fruity Loops to make most of my tracks..and i was wonder wats is da best program or tools to mix my beatz???

    And what is the best way to mix my beatz??

  51. Hit Talk Staff on August 3rd, 2009 12:41 am

    Hi Lonnie,

    Try a sequencing program like Audacity… or if you're paying, Audition. Then try applying some of the mixing techniques we've explained in our Hit Reports and other online tips. You're going to have to use mastering effects, such as EQ and compression, and you'll need to use the sequencer and mixer sections of whatever DAW you end up using. Since you've used FL, you've at least got a solid start. Cheers,


  52. Emmanuel on August 11th, 2009 4:00 pm

    Hi Matthew, are you tracking out the beat from FL 7 and bouncing each track into Sonar, or are you importing the whole beat as one stereo file into Sonar? From my experience there's no sonic difference in sound quality between any of the major DAWs, such as Cubase SX, Logic Pro, Sonar, Pro Tools LE, Live, etc. Choosing a DAW is primarily based on user preference. Whichever you use, make sure you have flat-response reference monitors and a good audio interface. I personally own Sonar as my primary DAW, and I have no problems whatsoever. I urge you to upgrade if you want to stick with Sonar; the latest version is 8.3.1. From what I've heard, Cubase handles VSTs more effectively than Sonar and other DAWs because Steinberg is the original inventor of VST (Virtual Studio Technology), but every DAW has its own versatile capabilities while keeping the same principles as far as MIDI and audio recording go. Plus, read some of the comments on this page related to this subject. But bottom line, choosing a DAW is based on user preference. Hope this helps.

    Peace, Emmanuel

  53. Иван Милюков on August 20th, 2009 12:20 pm

    Thanks a lot for the information, I will put it to use. :)

  54. Mixwerk on September 7th, 2009 1:26 pm

    Making a stereo file out of a mono file like this is not very cool, because it is pseudo-stereo and not mono-compatible. In some parts of a room with two speakers you will not hear it as well as in other parts. It is better to do different filtering and to detune the two parts, because this has a much greater effect and works even when you hear the track in mono.

  55. Hit Talk Staff on September 7th, 2009 9:11 pm

    We hear you on the cancellation Mixwerk, that’s why it’s good to preview via summing… not that preview summing eliminates the possibility of acoustic cancelling, as you’ve just mentioned. So the readers know, the AAY Stereolizer does a good, fast job of mono-compatible widening. We’d like to hear more of what you have to say on filtering and detuning.

  56. Digitawy on September 20th, 2009 12:10 pm

    Can I make a professional, high-quality-sounding beat with only my PC and Reason 3? I don't have any proper gear for now, but I really want to market my beats.

  57. Hit Talk Staff on September 21st, 2009 1:23 pm


    It can be done. Reason 3 is a solid start. You’ll want to audition your beats on multiple stereo systems as often as possible. If you only have cheap headphones or computer speakers, don’t trust them to give you an accurate picture of your mix. If your beats are creative enough they might go a long way on their own merits. If you’re looking to take your beats to the marketplace, keep practicing them, and build your gear arsenal. Take a peek at the Hit Theory previews.


  58. Nlhanhla Ngema on October 8th, 2009 8:57 am

    I use Audacity to record my vocals, and I use FL Studio 9 to help me master the tracks and beats I produce. How can I make my tracks sound complete and mixed well using the programs I have? (I also have Cubase and Ableton Live 7.)

  59. Chabibanton on November 22nd, 2009 3:26 am

    I am about to do a recording of my own voice, which is very light and sounds childish. What am I to do? I have tried doubling and it is still the same; can I triple it? And what kind of FX can I add to it to make it thicker? I raised the lowpass, yet it sounded as if I am singing through my nose a bit, and it's a little lighter than Marc Anthony's voice. Please help me.

  60. Hit Talk Staff on November 23rd, 2009 10:15 am

    Hi Chabibanton,

    You can triple and quadruple and quintuple your voice. That is an excellent way to thicken it up. With regard to your voice sounding childish, that's not necessarily a bad thing. Instead of trying to thicken it up and deny the way it sounds, try to explore your own voice's uniqueness. Develop your lyrical delivery and content to embrace what makes your voice unique, and don't worry too much about adding reverb, chorus, or delay, though these effects can help depending on the production. A lead vocal line should be dry and up front in the mix. Backing tracks usually are more effect-laden, and quieter. Don't worry about how you sound; just develop a proficient lyrical style. Think of Q-Tip. He rules.

  61. Chabibanton on November 27th, 2009 5:41 pm

    So I should not put any FX on, so it would sound raw - that is, without reverb, autotune, etc.? I am collaborating with someone, but this person's voice pitch is higher than mine, and I found out that I was stressing to get into the instrumental. What if I voice it in my own convenient pitch, and when I'm through, the pitch is raised to that of the instrumental - would it be noticed?

  62. Hit Talk Staff on November 28th, 2009 9:49 am

    As a rule, the less correction you need on the vocal line, the better.

  63. Eric Floyd on February 10th, 2010 11:53 am

    How can I better enhance my ability in producing with Cubase software?

  64. william on March 30th, 2010 12:48 pm

    Man, I want to know how to make my recordings sound like there are many voices behind them. I'm using Reason. I believe it's a good reverb trick - please can you help?

  65. Hit Talk Staff on March 30th, 2010 1:59 pm

    In Reason, the Unison chorus, not the reverb, is what you need to use.

  66. Hit Talk Staff on March 30th, 2010 2:30 pm

    @Eric, Try some mastering plugins, check out some of the Hit Reports… It’s a big topic, and there’s about 1000 different ways we could answer that question.

  67. DavidG on March 31st, 2010 8:56 am

    Awesome report!

    Do you think you could do one for a RedOne track (he has plenty of hits lol)? Would love to see how you break down his sound.


  68. william on April 5th, 2010 1:29 pm

    I hear great vocal recordings from West Coast rappers like Raekwon and Kurupt - how can I get such recordings?

  69. BeatsForSale on April 6th, 2010 8:26 pm

    Very well done, learn from the best and you will get better results, this is true!

  70. Ryan on April 16th, 2010 8:57 am

    wow, this was very useful. Quick question - instead of duplicating a mono track, can you duplicate and offset a stereo track that is then hard panned left and right? You mentioned guitars, but is this technique also good for getting a wide vocal? thanks

  71. Ryan on April 17th, 2010 7:00 am

    update: am I imagining it, or when I hard pan a guitar track left and the copy right, and offset with something between 15 and 60 ms, does it sometimes seem like more volume is coming out of one of the speakers (or monitors or headphones)? Lastly, if I understand correctly, once you have done the "procedure", if it sounds good and nice and wide on the monitors, then you need to sum the two tracks (left and right) down to mono to make sure it still sounds fine? thanks

  72. Hit Talk Staff on April 19th, 2010 11:33 am

    You've got it, Ryan. Sometimes a hard pan is not necessary; you might consider 75% left and 75% right. And yes, you're only bouncing to a mono track to make sure none of the sample's tone has been yanked out by cancellation.

    With vocals, it's a bit of a different story. Normally you want vocals to be right up front and center. The exception, of course, is if you have multiple tracks for harmony and background parts, in which case, by all means, you can try widening them. But usually, keep a lead vocal mainly dry, with perhaps a bit of reverb, and keep it in the middle of the stereo field.

  73. Hit Talk Staff on April 22nd, 2010 7:47 pm

    @william re:recordings - Hey William, sorry we missed yours. Good recordings mean good equipment. Get set up with a solid recording mic and preamp. Plus a good recording environment. There’s lots about that in Hit Theory.

  74. william on April 28th, 2010 1:38 am

    I use FL Studio and am having problems finding instruments like the strings used in “All of the Above” by Maino and “Forever” by Drake. Can you guys please help me? Thanks

  75. Hiran on May 3rd, 2010 5:13 am

    I use Reason rewired with Cubase. When I get the L and R channels of a Reason track in the Cubase mixer (let’s say it’s Main Kick 1):
    a) What happens to the total gain and phase when both channels are kept in the centre, compared to when the two channels are kept hard L and hard R? (Provided the stereo pan law is -3 dB in Cubase, the sample used in Reason is a Modern Beats stereo kick sample, and we use the same effects on both channels.)
    b) What is more beneficial for the final gain of the mix?
    I prefer having the main kick and main snare in the centre, but this problem has worried me for a long time.

  76. Hit Talk Staff on May 3rd, 2010 4:20 pm

    Safe bet is to keep the kick in the middle. It’s a bit hard to decipher exactly what scenario you’re talking about. Do you mean you’ve got 2 channels with the same kick sample (or snare sample) and you’re trying to decide whether to keep them panned centre or panned hard left and right for the same sample?

  77. Hiran on May 4th, 2010 2:50 am

    Say I have recorded one MIDI track in Reason using a Modern Beats stereo kick sample, and have got L and R mono channels of that same kick sample in the Cubase mixer (rewired).
    What is the difference between keeping them panned centre and panned hard L & R?

  78. Hit Talk Staff on May 4th, 2010 5:06 pm

    Right, so the question is: what’s the difference between using one centre-panned mono sample, and using two mono samples panned hard left and right? There might be a slight difference in summed volume. But basically, having the mono sample panned centre is the same as having two identical sounds playing at the same volume in either channel. Unless you’re working with stereo effects, slightly tweaking one sample, or moving it forward or backward, it makes the most sense to use a centre-panned mono sample.

  79. Hit Talk Staff on May 4th, 2010 5:06 pm

    And when in doubt, you can always test it.
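
Hiran’s centre-versus-hard-L/R question can also be checked numerically. The sketch below assumes the -3 dB pan law stated in the question; the 440 Hz test signal is an arbitrary stand-in for the kick sample.

```python
import numpy as np

x = np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)  # stand-in for the kick sample

# Case A: one mono sample panned centre under a -3 dB pan law
gain_centre = 10 ** (-3 / 20)            # ~0.708 into each channel
a_left, a_right = x * gain_centre, x * gain_centre

# Case B: two identical mono copies panned hard left and hard right
b_left, b_right = x, x

# fold each case down to mono as (L + R) / 2
a_mono = (a_left + a_right) / 2
b_mono = (b_left + b_right) / 2

# Case B comes out exactly 3 dB louder; the phase is identical in both cases
ratio_db = 20 * np.log10(b_mono.max() / a_mono.max())
```

So the two setups differ only by a level offset set by the pan law, which matches the answer above: with no per-channel tweaking, a centre-panned mono sample is the simpler choice.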

  80. Fm Agape on August 3rd, 2010 5:05 am

    Hello, please, how do I get the best mix when all of my backup vocals are sent to a stereo group channel track? I use Reason 4 rewired with Cubase.

  81. Hit Talk Staff on August 3rd, 2010 5:04 pm

    @Fm Agape: That’s a good question.

    If you route all your backup vocals to a stereo group channel in Cubase, you’ll be able to solo that channel and make sure the levels of all channels in the group are set properly, AND you’ll be able to lower the volume of all the backup vocals at once without changing how they’re mixed. That’s the big advantage to group channels.

    As far as summing and stereo image goes, routing to a group channel is the same as routing to master, and all the same mixing lessons apply.

  82. amicable on September 9th, 2010 6:59 am

    What other music programs can you use in the studio without experiencing problems?

  83. sajo on September 20th, 2010 1:50 am

    I want to know if this technique works well in Cubase SX. And if I pan the samples, when mixing do I use the same plugins on each sample, or different plugins?

  84. Hit Talk Staff on September 26th, 2010 7:52 pm

    It should be just as easy in Cubase. We were suggesting using different effect settings on the left versus the right. Whether you use the same plugins is up to you.

  85. A.Montana on October 1st, 2010 11:19 am

    hi hittalk,

    I analyzed tracks with good stereo work and found out that most have a stereo image (Lissajous figure) which looks like a square between both axes (R/L and -R/-L; R/-R and L/-L). This represents the whole frequency range. My question would be how to arrange the low frequencies with the high ones to achieve that “square”. For example, in one track there is a bass, which should be a saw form, coming up, and it creates this square. It plays up to the high frequencies. In this track the square is spinning around the mono axis but doesn’t lose its form. I’ve been playing around with simple signals like sines in various octaves but can’t find out how to achieve this stereo image. I’ve noticed that the low frequencies are dominant for the form, but how do you lay high frequencies over so as not to interfere and just kind of “fill the square out”? So can you help me and tell me how to get such stereo images? What else can you suggest to look for while arranging stereo work (frequency-specific)?


  86. Hit Talk Staff on October 17th, 2010 9:17 pm

    Hi A.Montana, thanks for your question. Sorry for the delay in response. I think I follow the direction of your question. How exactly have you been playing with the sine waves? Have you been tweaking the phase?

  87. Brent FABR Labasan on December 26th, 2010 3:18 am

    Are there any sounds/instruments that don’t work well with this technique? Are there any disadvantages to it?

  88. Hit Talk Staff on December 30th, 2010 4:10 pm

    @Brent - Generally speaking, you wouldn’t use this on a kick or on bass instruments. Any instrument you get a lot of bassy tone from usually sits in the middle of the mix anyway.
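
The reason bass-heavy sounds are poor candidates can be shown directly: fold the delayed pair down to mono and any frequency sitting near a comb-filter null disappears. The 10 ms offset and 50 Hz tone below are illustrative numbers chosen so the cancellation is exact, not values from the thread.

```python
import numpy as np

SR = 44100
delay = 441                    # a 10 ms offset at 44.1 kHz
f_null = SR / (2 * delay)      # first comb-filter null lands at 50 Hz

t = np.arange(SR) / SR
bass = np.sin(2 * np.pi * f_null * t)  # a 50 Hz "bass" tone

# the widening trick: original copy hard left, delayed copy hard right
left = np.concatenate([bass, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), bass])

# in the mono fold, the delayed copy arrives exactly half a cycle out of phase
mono = (left + right) / 2
```

Everywhere the two copies overlap, the 50 Hz tone cancels to silence in the mono fold - which is why kicks and basses stay centred instead of being widened.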

  89. prince idenyi d victor on August 9th, 2011 6:05 am

    I would love to learn more about music production and mastering.

Submit your music production related questions or comments below...
