FL Studio - latency settings - buffer size ASIO

What is latency in audio recording?

Latency in digital audio recording is the delay between the moment a sound enters the audio interface (or a note is triggered in the software) and the moment you hear it back through your monitors or headphones. It usually amounts to several milliseconds, because the signal is not passed straight through as a continuous analog stream: it has to be converted, buffered and processed before it can be played back.

The delay builds up along the whole signal chain: analog-to-digital conversion at the input, the driver buffer, any processing inside the recording software, and digital-to-analog conversion at the output. The sum of all these small delays is what we call latency in audio.

Latency also matters when you capture audio and video at the same time, for example with a screen or video capture application. Video frames take far more processing power to encode than audio samples, so the two streams are handled at very different rates and the capture application has to keep them in sync.

If the audio sits in its buffer longer than the video takes to encode (or the other way around), the sound drifts away from the picture. A good capture application compensates for this delay so that the audio ends up lined up with the video; how to help it do that is covered in the video codec section at the end of this article.

How to avoid latency in audio?

Latency can be a serious problem in real-time recording and synchronization. For example, when the drum track is delayed relative to the guitar, it can sound as if the band is playing out of tempo, although this is only the result of the delay introduced by the digital buffer in the audio device.

In digital recording devices the acoustic waveform (a sine signal with its harmonics) is digitized at a certain sample rate and bit depth (the size of each sample). This process is called sampling and quantization.
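To make the idea concrete, here is a tiny Python sketch (not part of any recording software, just an illustration) that samples one millisecond of a 440 Hz sine wave at 44.1 kHz and quantizes it to 16-bit integer values:

```python
import math

# Sample one millisecond of a 440 Hz sine at 44.1 kHz and quantize to 16 bit.
sample_rate = 44100
bit_depth = 16
max_code = 2 ** (bit_depth - 1) - 1  # 32767 for 16-bit audio

samples = [round(max_code * math.sin(2 * math.pi * 440 * n / sample_rate))
           for n in range(sample_rate // 1000)]
print(samples[:8])  # the first few quantized sample values
```

Each printed number is one sample; a recording is just a very long stream of such values.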

This computation takes time, and the CPU performs it on tiny pieces of sound (time windows) stored in a buffer. The buffer size determines the latency: a smaller buffer gives lower latency but requires much more computation power. If you decrease the buffer size beyond what your CPU can handle, the output sound will be shredded and no longer continuous.
So to reduce latency you should use small buffers, but how small is limited by your hardware; in practice buffer sizes are given in samples (typically a few hundred to a couple of thousand), not in megabytes.
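The relationship between buffer size and latency is simple arithmetic: the buffer length in samples divided by the sample rate gives the delay that buffer adds. A small sketch:

```python
def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """Delay added by one buffer of the given size, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate

print(buffer_latency_ms(1024, 44100))  # ~23.2 ms
print(buffer_latency_ms(128, 44100))   # ~2.9 ms, but needs much more CPU headroom
```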

Lowering the buffer is just one technique. In practice it works best to keep one consistent, reasonably small buffer size for the whole project.
The best sample rate and time window you can use depends on your budget, that is, on how much computing power your system has.

For example, keeping a small buffer while recording at a high sample rate is very effective at reducing latency, because at a higher rate the same number of buffered samples covers a shorter stretch of time. The price is processing power: the CPU has to fill that buffer far more often, and if it cannot keep up you get dropouts instead of better sound quality.
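To illustrate the trade-off, the same 256-sample buffer gets shorter in time as the sample rate goes up, but the CPU also has to fill it more often (the numbers below are just the formula from the previous sketch applied to common rates):

```python
buffer_samples = 256
for rate in (44100, 48000, 96000):
    latency_ms = 1000.0 * buffer_samples / rate
    print(f"{rate} Hz: {buffer_samples} samples = {latency_ms:.1f} ms, "
          f"{rate // buffer_samples} buffers to fill per second")
```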

Pushing the settings too far in either direction has its own cost. A sample rate that is too low loses the highest notes, because anything above half the sample rate simply cannot be represented, while an overloaded CPU produces harsh clipping-like clicks and an overall loss of balance in the track.
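The "loss of high notes" part follows directly from the sampling theorem: a digital recording cannot represent anything above half its sample rate. A quick check of the usual rates (assuming hearing tops out around 20 kHz):

```python
# Nyquist limit: content above half the sample rate cannot be represented.
for rate in (22050, 44100, 48000, 96000):
    nyquist = rate / 2
    status = "covers" if nyquist >= 20000 else "cuts into"
    print(f"{rate} Hz sampling keeps content up to {nyquist:.0f} Hz "
          f"({status} the audible range)")
```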


Keep in mind that every extra instrument or effect you add to the track increases the processing load, and an overloaded buffer shows up as a bit of distortion.
Another way to keep latency under control is to balance the sample rate and the sample size (bit depth) against each other. Stepping down from 96 kHz / 24 bit to 44.1 kHz / 16 bit cuts the amount of data the CPU has to push through its buffers to roughly a third, which reduces the processing cost and CPU load while the output quality usually stays good enough.
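How much that balancing act actually buys you can be estimated from the raw PCM data rate (a rough sketch, ignoring any driver or plugin overhead):

```python
def pcm_bytes_per_second(sample_rate: int, bit_depth: int, channels: int) -> int:
    """Raw, uncompressed data rate the system has to move through its buffers."""
    return sample_rate * (bit_depth // 8) * channels

print(pcm_bytes_per_second(44100, 16, 2))  # 176,400 B/s
print(pcm_bytes_per_second(96000, 24, 2))  # 576,000 B/s, more than 3x the data
```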


A lot of people record at a very low sample rate and then try to compensate by recording more than two channels of the same source; it is usually very difficult to get a solid low end (bass) that way.
It is better to stay with two channels and one consistent sampling frequency.
This is a useful approach if your budget and time are limited and most of your hours go into editing, mixing and mastering. I recommend starting with a modest setting, for example 16 bit at 44.1 kHz, experimenting from there, and only moving to higher rates if you can actually hear the difference.


This will give you good audio quality without a noticeable latency problem.

If you keep the sample rate modest, a standard frequency in the 44.1 to 48 kHz range is enough for most work; higher rates such as 88.2 or 96 kHz capture a little more detail but cost more CPU and more data per buffer. The lower setting gives you a bit less resolution, but the result is usually more balanced (and accurate enough).


I record 16-bit audio on four channels. It gives me a lot of stability and the tracks are easy to edit and mix.


Recording at a modest sample rate is a very common technique. It also makes it easy to record a track and mix it on the same day with very little processing power.

There are several things you can do to decrease latency, but fully eliminating it is not possible: even the most expensive device (with the most powerful CPU) still needs a buffer to mix tracks or to apply an effect such as distortion, chorus, delay or a vocoder.

Latency below 30 milliseconds is generally acceptable and almost impossible for the listener to notice.

Of course, professional drummers can “feel” a delay even smaller than 30 ms, but an ordinary listener will easily hear that 200 ms is badly “out of the tempo”.
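To put those numbers in musical terms, here is a small calculation at an assumed tempo of 120 BPM (the tempo is just an example, not a value from this article):

```python
bpm = 120                      # assumed example tempo
beat_ms = 60000 / bpm          # 500 ms per beat at 120 BPM
for delay_ms in (10, 30, 200):
    print(f"{delay_ms} ms delay = {100 * delay_ms / beat_ms:.0f}% of a beat")
```

A 200 ms delay is almost half a beat late, which is exactly the "out of the tempo" feeling described above.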

In this article I will show you some solutions to decrease the latency in your audio system.

The first step you can take is to decrease the buffer size of the recording/mixing device. In FL Studio this is done in the audio settings: select an ASIO driver and reduce the buffer length until the sound starts to break up, then back off one step.
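Outside of FL Studio you can see the same effect from a few lines of code. The sketch below is only an illustration and assumes the third-party python-sounddevice package is installed; it opens a duplex stream with a small 128-sample buffer and prints the latency the driver actually reports:

```python
import sounddevice as sd

def passthrough(indata, outdata, frames, time, status):
    if status:
        print(status)          # report over/underruns caused by a too-small buffer
    outdata[:] = indata        # send the input straight to the output (monitoring)

stream = sd.Stream(samplerate=44100, blocksize=128, channels=2,
                   dtype="float32", latency="low", callback=passthrough)
with stream:
    print("input/output latency in seconds:", stream.latency)
    sd.sleep(2000)             # monitor for two seconds
```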

Low-latency audio recorders – how to find them

You can always check our products.

Latency and video codecs

Latency is not only a problem with audio. It can also be a serious problem when recording audio and video at the same time, because encoding a video stream can consume much more processing power than encoding audio.

The first step, which is a bit trickier than it sounds, is to record with a lower bitrate so the encoder can keep up with the incoming frames in real time.
The second step is to increase the bitrate again when you render the final file, once real-time performance no longer matters.
Even then the file size of an MP4 is not that large.
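To get a feeling for how bitrate translates into file size, here is a rough estimate (container overhead is ignored, and the bitrates below are just example values):

```python
def mp4_size_mb(video_kbps: int, audio_kbps: int, duration_s: int) -> float:
    """Approximate MP4 size from the combined stream bitrate."""
    return (video_kbps + audio_kbps) * 1000 / 8 * duration_s / 1_000_000

print(mp4_size_mb(2500, 128, 300))   # 5 minutes at 2.5 Mbit/s video -> ~99 MB
print(mp4_size_mb(1000, 128, 300))   # the same clip at 1 Mbit/s -> ~42 MB
```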
Now that the bitrate is taken care of, the next thing to check is the codec itself. Open your capture application or video player and look for the codec option; in the example video the capture is encoded with MPEG-2, and you should be able to hear the difference between a bitrate your machine can encode and play back smoothly and one it cannot. If the player only lists the file as "mp4", that is the container, and the actual codec (for example H.264) sits inside it.

If your video player cannot find a suitable codec for the file, the codec is most likely simply not installed on your computer. Check which codec the file actually uses, install that codec (or a codec pack), and the player will pick it up automatically; the same applies on the recording side, since the capture application can only use codecs that are present on the system.

Once the right codec is in place, the file should play back correctly in your video player.
