

Discussion of testing performance of FreeSWITCH™ with links to test scenario open source projects.




Measures of Performance

When people say "performance" it can mean a wide variety of things. In practice, performance typically comes down to two bottlenecks, SIP and RTP, which translate into calls per second and concurrent calls respectively. Additionally, high-volume systems might hit bottlenecks at the database server (running out of connections, or even bandwidth) when looking up account or configuration data.

Calls per Second (CPS)

Since calls per second is simply a measure of how many calls are set up and torn down per second, the limiting factor is the ability to process SIP messages. Depending on the type of traffic you have, this may or may not be a factor. A variety of components can contribute to this bottleneck; FreeSWITCH and its libraries are only some of them.

Concurrent Calls

On modern hardware, concurrent calls is less a limit of SIP than of RTP media streaming. This can be broken down further into available bandwidth and packets per second. The theoretical limit on concurrent calls through a gigabit Ethernet port is around 10,500 calls without RTCP, assuming the G.711 codec and counting link-level overheads. Theory is great and all, but in reality the kernel networking layer will be your limiting factor, due to the packets per second of the RTP media streams.
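The 10,500 figure can be reproduced with a quick back-of-the-envelope calculation (the byte counts below assume standard Ethernet framing with no VLAN tags):

```shell
# Theoretical concurrent-call ceiling for G.711 on gigabit Ethernet, no RTCP.
# Each 20 ms G.711 packet on the wire:
#   160 B payload + 12 B RTP + 8 B UDP + 20 B IP = 200 B at layer 3,
#   + 14 B Ethernet header + 4 B FCS + 8 B preamble + 12 B inter-frame gap = 238 B
bits_per_packet=$((238 * 8))                          # 1904 bits
pps_per_stream=50                                     # 20 ms packetization -> 50 pps
bps_per_stream=$((bits_per_packet * pps_per_stream))  # 95200 bps per direction
echo $((1000000000 / bps_per_stream))                 # ~10504 calls on full-duplex GigE
```

Since gigabit Ethernet is full duplex, one stream in each direction per call fits in that budget simultaneously.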


FreeSWITCH uses threading. In modern Linux kernels threading and forking are very similar. Normally the 'top' utility shows only one FreeSWITCH process, because top by default rolls all the threads up into one entry. The command 'top -H' shows individual threads.

'htop' by default shows you the individual threads.

A base idle FreeSWITCH process runs several threads. Each additional call leg adds at least one more thread, and depending on which applications are active on a call there can be more than one thread per call leg.
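To see the thread count for yourself, ps can report the number of light-weight processes per PID. The snippet below uses the current shell as a stand-in; the FreeSWITCH process name and the use of pidof are assumptions about your install:

```shell
# Count the light-weight processes (threads) of a PID with ps;
# here the current shell stands in for a FreeSWITCH PID.
ps -o nlwp= -p $$

# Against a running FreeSWITCH (assumes pidof can find it):
#   ps -o nlwp= -p "$(pidof freeswitch)"
#   top -H -p "$(pidof freeswitch)"    # per-thread view in top
```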


Recommended Configurations

A 64-bit CPU running a 64-bit operating system and a 64-bit build of FreeSWITCH is recommended. A bare-metal system provides consistent, predictable performance and, most importantly for a real-time application like this, a reliable kernel clock for RTP packet timing. With a virtual machine it is difficult to determine where any problems originate, and proper propagation of the hardware clock through the VM host to the guest operating system is not always available, which renders RTP tests meaningless.

Debian Linux is the recommended OS, since that is what the core developers use and therefore the best tested. FreeSWITCH does work on some other operating systems, though.

Recommended ULIMIT settings

The following ulimit settings are recommended when you want maximum performance from FreeSWITCH. You can add them to the init.d script, before do_start().
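A typical set, as circulated on the FreeSWITCH mailing lists; treat the exact values as starting points for your own tuning, not gospel:

```shell
# ulimit settings commonly placed in /etc/init.d/freeswitch before do_start()
ulimit -c unlimited   # core file size
ulimit -d unlimited   # data segment size
ulimit -f unlimited   # file size
ulimit -i unlimited   # pending signals
ulimit -n 999999      # open file descriptors
ulimit -q unlimited   # POSIX message queues
ulimit -u unlimited   # user processes
ulimit -v unlimited   # virtual memory
ulimit -x unlimited   # file locks
ulimit -s 240         # stack size in KiB; FreeSWITCH threads use small stacks
ulimit -l unlimited   # max locked-in-memory size
```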


Recommended SIP settings

  • Disable every module you don't need (and that FreeSWITCH itself doesn't depend on)
  • Turn presence off in the profiles
  • libsofia handles only one thread per profile, so if that is your bottleneck, use more profiles
  • mod_cdr_csv is slower than mod_xml_cdr
  • Some users report that running more than a single instance of FreeSWITCH has helped
  • Disable console logging when not needed (loglevel 0)
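As a sketch of the presence item above, presence handling is controlled per Sofia profile; this assumes the stock configuration layout under conf/sip_profiles/:

```xml
<!-- inside the profile's <settings> element, e.g. conf/sip_profiles/internal.xml -->
<param name="manage-presence" value="false"/>
```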

Ethernet Tuning in Linux

Beware buffer bloat

Prior to the bufferbloat folks coming in and talking to us, there was a note here saying one should "set the buffers to maximum." That advice is WRONG on so many levels. Long story short: when you're doing real-time media like VoIP, you absolutely do not want large buffers. On an unsaturated network link you won't notice anything, but on a saturated network the larger buffers will cause your RTP packets to be buffered instead of discarded.


So, what should your RX/TX queue lengths be? Only you can know for sure, but it's good to experiment. In Linux the txqueuelen normally defaults to 1000. If you are using a good traffic-shaping qdisc (pfifo_fast, SFB, or others) and prioritizing UDP/RTP traffic, you can leave it alone, but it is still a good idea to lower it significantly for VoIP applications, depending on your workload and connectivity.

Don't use the default pfifo qdisc, regardless: it outputs packets in strict FIFO order.
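A sketch of inspecting and experimenting with these settings; the interface name eth0 and the values shown are assumptions, and fq_codel is one qdisc that came out of the bufferbloat work:

```shell
# Inspect the current queue length and qdisc
ip link show eth0          # "qlen 1000" in the output is the txqueuelen
tc qdisc show dev eth0

# Experiment: shorten the TX queue for a VoIP-heavy box
ip link set dev eth0 txqueuelen 100

# Replace the qdisc with fq_codel, which manages queue delay
# instead of emitting packets in strict FIFO order
tc qdisc replace dev eth0 root fq_codel
```

These commands require root and take effect immediately; they do not persist across reboots.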

To see your current settings, use ethtool:

ethtool settings
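For the NIC ring buffers specifically, ethtool -g reports the current and maximum sizes (the interface name is an assumption):

```shell
# show current and maximum RX/TX ring buffer sizes for the NIC
ethtool -g eth0
```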



These were the defaults on my Lenny install. If you need to change them, you can do this:

ethtool suggested changes
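For example, to shrink both rings (illustrative values; check the maximums reported by ethtool -g first):

```shell
# set smaller RX/TX ring buffers; requires root
ethtool -G eth0 rx 64 tx 64
```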


There is no one correct answer to what you should set the ring buffers to; it all depends on your traffic. Dave Taht from the Bufferbloat project reports, based on his observations, experience, and papers such as , that at present in home systems it is better to have no more than 32 unmanaged TX buffers on a 100 Mbit network. On my Lenny they appear to be 32/64:

One man's buffer settings

You'll note that with this driver you can't reduce the TX buffer to a more optimal level! This means that, at maximum packet size and load on a 100 Mbit network, you will incur nearly 10 ms of delay in the driver alone.

(Similarly, a large txqueuelen translates to lots of delay too.)

On a gigabit network interface, the default TX rings and txqueuelen are closer to usable, but still too large.

Having larger RX buffers is OK, to some extent. You need to be able to absorb bursts without packet loss. Tuning and observation of actual packet loss on the receive channel is a good idea.

And lastly, the optimum for TX is much lower on a 3Mbit uplink than a 100Mbit uplink. The debloat-testing kernel contains some Ethernet and wireless drivers that allow reducing TX to 1.

TCP/IP Tuning

For a server used primarily for VoIP, TCP CUBIC (the default in Linux) can stress the network subsystem too much. TCP Vegas, which ramps up based on measured latency, is used by several FreeSWITCH users in production as a "kinder, gentler" TCP for command-and-control functions.

To enable Vegas rather than CUBIC, at boot:

modprobe tcp_vegas
echo vegas > /proc/sys/net/ipv4/tcp_congestion_control
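To make the change survive a reboot on a Debian-style system, one approach (the file paths are Debian conventions):

```shell
# load the module at boot and set the sysctl permanently
echo 'tcp_vegas' >> /etc/modules
echo 'net.ipv4.tcp_congestion_control = vegas' >> /etc/sysctl.conf
sysctl -p    # apply the sysctl now
```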


FreeSWITCH's core.db I/O Bottleneck

On a normal configuration, core.db is written to disk almost every second, generating hundreds of block writes per second. To avoid this, turn /usr/local/freeswitch/db into an in-memory filesystem. If you use SSDs, it is CRITICAL that you move core.db to a RAM disk to prolong the life of the SSD.

On current FreeSWITCH versions you should use the documented "core-db-name" parameter in switch.conf.xml (simply restart FreeSWITCH to apply the change):

   <param name="core-db-name" value="/dev/shm/core.db" />

Otherwise you may create a dedicated in-memory filesystem, for example by adding the following to the end of /etc/fstab:

fstab Example
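One possible entry, assuming 128 MB of tmpfs is enough for your call volume:

```
# /etc/fstab: mount a RAM-backed filesystem over the FreeSWITCH db directory
tmpfs  /usr/local/freeswitch/db  tmpfs  defaults,size=128M  0  0
```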

To use the new filesystem run the following commands (or the equivalent commands for your OS):

mount /usr/local/freeswitch/db
/etc/init.d/freeswitch restart

An alternative is to move the core DB into an ODBC database. This hands the processing to a DBMS that is far better at handling large numbers of requests, and can even move it onto another server. Consider using freeswitch.dbh to take advantage of connection pooling.
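A sketch of what that looks like in switch.conf.xml; the DSN name and credentials are placeholders:

```xml
<!-- switch.conf.xml: use an ODBC DSN instead of SQLite for the core database -->
<param name="core-db-dsn" value="odbc://freeswitch:dbuser:dbpass" />
```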

Stress Testing



Using SIPp is part dark art, part voodoo, part Santeria. YOU HAVE BEEN WARNED

When using SIPp's uas and uac scenarios to test FreeSWITCH, make sure there is media flowing in both directions. If you just send media from one SIPp to another without echoing the RTP back (-rtp_echo), FS will tear down the call with MEDIA_TIMEOUT. This behavior exists to avoid incorrect billing when one side has no media for more than a certain period of time.
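A minimal pair of SIPp invocations using the built-in scenarios; the addresses, ports, rate, and call counts are all assumptions for illustration:

```shell
# UAS side: answer calls and echo RTP back so FreeSWITCH sees two-way media
sipp -sn uas -rtp_echo -i 192.0.2.20 -p 5080

# UAC side: place 5 calls/second, 100 calls total, ~20 s of media each,
# through the FreeSWITCH under test at 192.0.2.10
sipp -sn uac -r 5 -m 100 -d 20000 192.0.2.10:5060
```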

See Also

FreeSWITCH performance test on PC Engines APU — Stanislav Sinyagin tests FreeSWITCH™ transcoding performance with only one test machine

Real-world observations — Post measurements of your experience at using FreeSWITCH™ in a real-world configuration, not a stress test.

SSD Tuning for Linux — special considerations for systems using Solid State Drives for storage

SIPp — Open source test tool and traffic generator for SIP

SIPpy Cup — Ben Langfeld contributes this scenario generator for SIPp to simplify the creation of test profiles and especially compatible media

check_voip_call — Henry Huang contributes this project to work with Nagios

Bandwidth calculator for different codecs and use cases

Paper on buffer sizing based on BitTorrent usage