{"id":1991,"date":"2016-11-16T11:06:28","date_gmt":"2016-11-16T09:06:28","guid":{"rendered":"https:\/\/www.systemcodegeeks.com\/?p=1991"},"modified":"2016-11-16T11:06:28","modified_gmt":"2016-11-16T09:06:28","slug":"performance-tuning-haproxy","status":"publish","type":"post","link":"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/","title":{"rendered":"Performance Tuning HAProxy"},"content":{"rendered":"<p>In a <a href=\"https:\/\/blog.codeship.com\/tuning-nginx\/\">recent article<\/a>, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those performance-tuning concepts and walk through some basic tuning options for HAProxy.<\/p>\n<h2>What is HAProxy<\/h2>\n<p><a href=\"http:\/\/www.haproxy.org\/\">HAProxy<\/a> is a software load balancer commonly used to distribute TCP-based traffic to multiple backend systems. It provides not only load balancing but also has the ability to detect unresponsive backend systems and reroute incoming traffic.<\/p>\n<p>In a traditional IT infrastructure, load balancing is often performed by expensive hardware devices. In cloud and highly distributed infrastructure environments, there is a need to provide this same type of service while maintaining the elastic nature of cloud infrastructure. This is the type of environment where HAProxy shines, and it does so while maintaining a reputation for being extremely efficient out of the box.<\/p>\n<p>Much like NGINX, HAProxy has quite a few parameters set for optimal performance out of the box. However, as with most things, we can still tune it for our specific environment to increase performance.<\/p>\n<p>In this article, we are going to install and configure HAProxy to act as a load balancer for two NGINX instances serving a basic static HTML site. 
Once set up, we are going to take that configuration and tune it to gain even more performance out of HAProxy.<\/p>\n<h2>Installing HAProxy<\/h2>\n<p>For our purposes, we will be installing HAProxy on an Ubuntu system, where installation is fairly simple. To accomplish this, we will use the Apt package manager; specifically, we will be using the <code>apt-get<\/code> command.<\/p>\n<pre class=\"brush:php\"># apt-get install haproxy\r\nReading package lists... Done\r\nBuilding dependency tree       \r\nReading state information... Done\r\nThe following additional packages will be installed:\r\n  liblua5.3-0\r\nSuggested packages:\r\n  vim-haproxy haproxy-doc\r\nThe following NEW packages will be installed:\r\n  haproxy liblua5.3-0\r\n0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.\r\nNeed to get 872 kB of archives.\r\nAfter this operation, 1,997 kB of additional disk space will be used.\r\nDo you want to continue? [Y\/n] y<\/pre>\n<p>With the above complete, we now have HAProxy installed. The next step is to configure it to load balance across our backend NGINX instances.<\/p>\n<h2>Basic HAProxy Config<\/h2>\n<p>In order to set up HAProxy to load balance HTTP traffic across two backend systems, we will first need to modify HAProxy\u2019s default configuration file <code>\/etc\/haproxy\/haproxy.cfg<\/code>.<\/p>\n<p>To get started, we will be setting up a basic <code>frontend<\/code> service within HAProxy. We will do this by appending the below configuration block.<\/p>\n<pre class=\"brush:php\">frontend www\r\n    bind               :80\r\n    mode               http\r\n    default_backend    bencane.com<\/pre>\n<p>Before going too far, let\u2019s break down this configuration a bit to understand what exactly we are telling HAProxy to do.<\/p>\n<p>In this section, we are defining a <code>frontend<\/code> service for HAProxy. This is essentially a frontend listener that will accept incoming traffic. 
The first parameter we define within this section is the <code>bind<\/code> parameter. This parameter is used to tell HAProxy what IP and Port to listen on; <code>0.0.0.0:80<\/code> in this case. This means our HAProxy instance will listen for traffic on port <code>80<\/code> and route it through this <code>frontend<\/code> service named <code>www<\/code>.<\/p>\n<p>Within this section, we are also defining the type of traffic with the <code>mode<\/code> parameter. This parameter accepts <code>tcp<\/code> or <code>http<\/code> options. Since we will be load balancing HTTP traffic, we will use the <code>http<\/code> value. The last parameter we are defining is <code>default_backend<\/code>, which is used to define the <code>backend<\/code> service HAProxy should load balance to. In this case, we will use a value of <code>bencane.com<\/code> which will route traffic through our NGINX instances.<\/p>\n<pre class=\"brush:php\">backend bencane.com\r\n    mode     http\r\n    balance  roundrobin\r\n    server   nyc2 nyc2.bencane.com:80 check\r\n    server   sfo1 sfo1.bencane.com:80 check<\/pre>\n<p>Like the <code>frontend<\/code> service, we will also need to define our <code>backend<\/code> service by appending the above configuration block to the same <code>\/etc\/haproxy\/haproxy.cfg<\/code> file.<\/p>\n<p>In this <code>backend<\/code> configuration block, we are defining the systems that HAProxy will load balance traffic to. Like the <code>frontend<\/code> section, this section also contains a <code>mode<\/code> parameter to define whether these are <code>tcp<\/code> or <code>http<\/code> backends. For this example, we will once again use <code>http<\/code> as our backend systems are a set of NGINX webservers.<\/p>\n<p>In addition to the <code>mode<\/code> parameter, this section also has a parameter called <code>balance<\/code>. 
The <code>balance<\/code> parameter is used to define the load-balancing algorithm that determines which backend node each request should be sent to. For this initial step, we can simply set this value to <code>roundrobin<\/code>, which distributes traffic evenly across the backend nodes as it comes in. This setting is pretty common and often the first load-balancing algorithm that users start with.<\/p>\n<p>The final parameter in the <code>backend<\/code> service is <code>server<\/code>, which is used to define the backend system to balance to. In our example, there are two lines that each define a different server. These two servers are the NGINX webservers that we will be load balancing traffic to in this example.<\/p>\n<p>The format of the <code>server<\/code> line is a bit different from the other parameters. This is because node-specific settings can be configured via the <code>server<\/code> parameter. In the example above, we are defining a <code>label<\/code>, <code>IP:Port<\/code>, and whether or not a health <code>check<\/code> should be used to monitor the backend node.<\/p>\n<p>By specifying <code>check<\/code> after the web-server\u2019s address, we are telling HAProxy to perform a health check to determine whether the backend system is responsive. If a backend system is not responsive, incoming traffic will not be routed to it.<\/p>\n<p>With the changes above, we now have a basic HAProxy instance configured to load balance an HTTP service. In order for these configurations to take effect, however, we will need to restart the HAProxy instance. 
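<\/p>\n<p>Before restarting, it is worth validating the new configuration. The <code>haproxy<\/code> binary supports a check mode via the <code>-c<\/code> flag, which parses the configuration file and reports any syntax errors without touching the running service. On a valid configuration, it should print something like the following.<\/p>\n<pre class=\"brush:php\"># haproxy -c -f \/etc\/haproxy\/haproxy.cfg\r\nConfiguration file is valid<\/pre>\n<p>With the configuration validated, we are ready to restart HAProxy. 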
We can do that with the <code>systemctl<\/code> command.<\/p>\n<pre class=\"brush:php\"># systemctl restart haproxy<\/pre>\n<p>Now that our configuration changes are in place, let\u2019s go ahead and get started with establishing our baseline performance of HAProxy.<\/p>\n<h2>Baselining Our Performance<\/h2>\n<p>In the <a href=\"https:\/\/blog.codeship.com\/tuning-nginx\/\">\u201cTuning NGINX for Performance\u201d<\/a> article, I discussed the importance of establishing a performance baseline before making any changes. By establishing a baseline performance before making any changes, we can identify whether or not the changes we make have a beneficial effect.<\/p>\n<p>As in the previous article, we will be using the <a href=\"http:\/\/httpd.apache.org\/docs\/2.4\/programs\/ab.html\">ApacheBench<\/a> tool to measure the performance of our HAProxy instance. In this example however, we will be using the flag <code>-c<\/code> to change the number of concurrent HTTP sessions and the flag <code>-n<\/code> to specify the number of HTTP requests to make.<\/p>\n<pre class=\"brush:php\"># ab -c 2500 -n 5000 -s 90 http:\/\/104.131.125.168\/\r\nRequests per second:    97.47 [#\/sec] (mean)\r\nTime per request:       25649.424 [ms] (mean)\r\nTime per request:       10.260 [ms] (mean, across all concurrent requests)<\/pre>\n<p>After running the <code>ab<\/code> (ApacheBench) tool, we can see that out of the box our HAProxy instance is servicing <code>97.47<\/code> HTTP requests per second. This metric will be our baseline measurement; we will be measuring any changes against this metric.<\/p>\n<h2>Setting the Maximum Number of Connections<\/h2>\n<p>One of the most common tunable parameters for HAProxy is the <code>maxconn<\/code> setting. 
This parameter defines the maximum number of connections the entire HAProxy instance will accept.<\/p>\n<p>When calling the <code>ab<\/code> command above, I used the <code>-c<\/code> flag to tell <code>ab<\/code> to open <code>2500<\/code> concurrent HTTP sessions. By default, the <code>maxconn<\/code> parameter is set to <code>2000<\/code>. This means that a default instance of HAProxy will start queuing HTTP sessions once it hits <code>2000<\/code> concurrent sessions. Since our test is launching <code>2500<\/code> sessions, this means that at any given time up to <code>500<\/code> HTTP sessions are being queued while <code>2000<\/code> are being serviced immediately. This queuing certainly has an effect on our throughput.<\/p>\n<p>Let\u2019s go ahead and raise this limit by once again editing the <code>\/etc\/haproxy\/haproxy.cfg<\/code> file.<\/p>\n<pre class=\"brush:php\">global\r\n        maxconn         5000<\/pre>\n<p>Within the <code>haproxy.cfg<\/code> file, there is a <code>global<\/code> section; this section is used to modify \u201cglobal\u201d parameters for the entire HAProxy instance. By adding the <code>maxconn<\/code> setting above, we are increasing the maximum number of connections for the entire HAProxy instance to <code>5000<\/code>, which should be plenty for our testing. In order for this change to take effect, we must once again restart the HAProxy instance using the <code>systemctl<\/code> command.<\/p>\n<pre class=\"brush:php\"># systemctl restart haproxy<\/pre>\n<p>With HAProxy restarted, let\u2019s run our test again.<\/p>\n<pre class=\"brush:php\"># ab -c 2500 -n 5000 -s 90 http:\/\/104.131.125.168\/\r\nRequests per second:    749.22 [#\/sec] (mean)\r\nTime per request:       3336.786 [ms] (mean)\r\nTime per request:       1.335 [ms] (mean, across all concurrent requests)<\/pre>\n<p>In our baseline test, the <code>Requests per second<\/code> value was <code>97.47<\/code>. 
After adjusting the <code>maxconn<\/code> parameter, the same test returned a <code>Requests per second<\/code> of <code>749.22<\/code>. This is a huge improvement over our baseline test and goes to show how important a parameter the <code>maxconn<\/code> setting is.<\/p>\n<p>When tuning HAProxy, it is very important to understand your target number of concurrent sessions per instance. By identifying and tuning this value upfront, you can save yourself a lot of troubleshooting with HAProxy performance during peak traffic load.<\/p>\n<p>In this article, we set the <code>maxconn<\/code> value to <code>5000<\/code>; however, this is still a fairly low number for a high-traffic environment. As such, I would highly recommend identifying your desired number of concurrent sessions and tuning the <code>maxconn<\/code> parameter before changing any other parameter when tuning HAProxy.<\/p>\n<h2>Multiprocessing and CPU Pinning<\/h2>\n<p>Another interesting tunable for HAProxy is the <code>nbproc<\/code> parameter. By default, HAProxy has a single worker process, which means that all of our HTTP sessions will be load balanced by a single process. With the <code>nbproc<\/code> parameter, it is possible to create multiple worker processes to help distribute the workload internally.<\/p>\n<p>While additional worker processes might sound good at first, they only tend to provide value when the server itself has more than one CPU. It is not uncommon for environments that create multiple worker processes on single-CPU systems to see HAProxy perform worse than it did as a single-process instance. This is because the overhead of managing multiple worker processes yields diminishing returns when the number of workers exceeds the number of CPUs available.<\/p>\n<p>With this in mind, it is recommended that the <code>nbproc<\/code> parameter be set to match the number of CPUs available to the system. 
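<\/p>\n<p>As a quick aside, the simplest way to count the CPUs visible to a Linux system is the <code>nproc<\/code> utility (part of GNU coreutils), or counting the processor entries in <code>\/proc\/cpuinfo<\/code>; on our example system, both report <code>2<\/code>.<\/p>\n<pre class=\"brush:php\"># nproc\r\n2\r\n# grep -c ^processor \/proc\/cpuinfo\r\n2<\/pre>\n<p>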
In order to tune this parameter for our environment, we first need to check how many CPUs are available. We can do this by executing the <code>lshw<\/code> command.<\/p>\n<pre class=\"brush:php\"># lshw -short -class cpu\r\nH\/W path      Device  Class      Description\r\n============================================\r\n\/0\/401                processor  Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz\r\n\/0\/402                processor  Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz<\/pre>\n<p>From the output above, it appears that we have <code>2<\/code> available CPUs on our HAProxy server. Let\u2019s go ahead and set the <code>nbproc<\/code> parameter to <code>2<\/code>, which will tell HAProxy to start a second worker process on restart. We can do this by once again editing the <code>global<\/code> section of the <code>\/etc\/haproxy\/haproxy.cfg<\/code> file.<\/p>\n<pre class=\"brush:php\">global\r\n        maxconn         5000\r\n        nbproc          2\r\n        cpu-map         1 0\r\n        cpu-map         2 1<\/pre>\n<p>In the above HAProxy config example, I included another parameter named <code>cpu-map<\/code>. This parameter is used to pin a specific worker process to a specific CPU using CPU affinity. This allows the processes to better distribute the workload across multiple CPUs.<\/p>\n<p>While this might not sound very critical at first, it becomes important when you consider how Linux determines which CPU a process should use when it requires CPU time.<\/p>\n<h3>Understanding CPU Affinity<\/h3>\n<p>The Linux kernel internally has a concept called CPU affinity, which is where a process is pinned to a specific CPU for its CPU time. If we use our system above as an example, we have two CPUs (<code>0<\/code> and <code>1<\/code>) and a single-threaded HAProxy instance. 
Without any changes, our single worker process will be pinned to either <code>0<\/code> or <code>1<\/code>.<\/p>\n<p>If we were to enable a second worker process without specifying which CPU that process should have an affinity to, that process would default to the same CPU that the first worker was bound to.<\/p>\n<p>This is due to how Linux handles the CPU affinity of child processes. Unless told otherwise, a child process is always bound to the same CPU as its parent process. This allows processes to leverage the L1 and L2 caches available on the physical CPU, which in most cases makes an application perform faster.<\/p>\n<p>The downside can be seen in our example. If we enabled two workers and both <strong>worker1<\/strong> and <strong>worker2<\/strong> were bound to CPU <code>0<\/code>, the workers would constantly be competing for the same CPU time. By pinning the worker processes to different CPUs, we are able to better utilize the CPU time available to our system and reduce the number of times our worker processes are waiting for CPU time.<\/p>\n<p>In the configuration above, we are using <code>cpu-map<\/code> to define CPU affinity by pinning <strong>worker1<\/strong> to CPU <code>0<\/code> and <strong>worker2<\/strong> to CPU <code>1<\/code>.<\/p>\n<p>After making these changes, we can restart the HAProxy instance again and retest with the <code>ab<\/code> tool.<\/p>\n<pre class=\"brush:php\"># systemctl restart haproxy<\/pre>\n<p>With HAProxy restarted, let\u2019s go ahead and rerun our test with the <code>ab<\/code> command.<\/p>\n<pre class=\"brush:php\"># ab -c 2500 -n 5000 -s 90 http:\/\/104.131.125.168\/\r\nRequests per second:    1185.97 [#\/sec] (mean)\r\nTime per request:       2302.093 [ms] (mean)\r\nTime per request:       0.921 [ms] (mean, across all concurrent requests)<\/pre>\n<p>In our previous test run, we were able to get a 
<code>Requests per second<\/code> of <code>749.22<\/code>. With this latest run, after increasing the number of worker processes, we were able to push the <code>Requests per second<\/code> to <code>1185.97<\/code>, a sizable improvement.<\/p>\n<h2>Adjusting the Load Balancing Algorithm<\/h2>\n<p>The final adjustment we will make is not a traditional tuning parameter, but it still affects the number of HTTP sessions our HAProxy instance can process. The adjustment is the load balancing algorithm we have specified.<\/p>\n<p>Earlier in this post, we specified the load balancing algorithm of <code>roundrobin<\/code> in our <code>backend<\/code> service. In this next step, we will be changing the <code>balance<\/code> parameter to <code>static-rr<\/code> by once again editing the <code>\/etc\/haproxy\/haproxy.cfg<\/code> file.<\/p>\n<pre class=\"brush:php\">backend bencane.com\r\n    mode    http\r\n    balance static-rr\r\n    server  nyc2 nyc2.bencane.com:80 check\r\n    server  sfo1 sfo1.bencane.com:80 check<\/pre>\n<p>The <code>static-rr<\/code> algorithm is a round-robin algorithm very similar to <code>roundrobin<\/code>, with the exception that it does not support dynamic weighting. This weighting mechanism allows HAProxy to select a preferred backend over others. 
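<\/p>\n<p>To illustrate weighting, the <code>roundrobin<\/code> algorithm honors a per-server <code>weight<\/code> option. The configuration below is a hypothetical variation of our backend shown purely for illustration (it is not part of our tuning changes); it would send roughly twice as many requests to <strong>nyc2<\/strong> as to <strong>sfo1<\/strong>.<\/p>\n<pre class=\"brush:php\">backend bencane.com\r\n    mode    http\r\n    balance roundrobin\r\n    server  nyc2 nyc2.bencane.com:80 check weight 2\r\n    server  sfo1 sfo1.bencane.com:80 check weight 1<\/pre>\n<p>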
Since <code>static-rr<\/code> doesn\u2019t worry about dynamic weighting, it is slightly more efficient than the <code>roundrobin<\/code> algorithm (approximately 1 percent more efficient).<\/p>\n<p>Let\u2019s go ahead and test the impact of this change by restarting the HAProxy instance again and executing another <code>ab<\/code> test run.<\/p>\n<pre class=\"brush:php\"># systemctl restart haproxy<\/pre>\n<p>With the service restarted, let\u2019s go ahead and rerun our test.<\/p>\n<pre class=\"brush:php\"># ab -c 2500 -n 5000 -s 90 http:\/\/104.131.125.168\/\r\nRequests per second:    1460.29 [#\/sec] (mean)\r\nTime per request:       1711.993 [ms] (mean)\r\nTime per request:       0.685 [ms] (mean, across all concurrent requests)<\/pre>\n<p>In this final test, we were able to increase our <code>Requests per second<\/code> metric to <code>1460.29<\/code>, a sizable improvement over the <code>1185.97<\/code> result from the previous run.<\/p>\n<h2>Summary<\/h2>\n<p>At the beginning of this article, our basic HAProxy instance was only able to service <code>97<\/code> HTTP requests per second. After increasing the maximum number of connections, increasing the number of worker processes, and changing our load balancing algorithm, we were able to push our HAProxy instance to <code>1460<\/code> HTTP requests per second, an improvement of <strong>1405 percent<\/strong>.<\/p>\n<p>Even with such an increase in performance, there are still more tuning parameters available within HAProxy. While this article covered a few basic and unconventional parameters, we have still only scratched the surface of tuning HAProxy. 
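<\/p>\n<p>For reference, pulling together the snippets from this article, the relevant portions of our final <code>\/etc\/haproxy\/haproxy.cfg<\/code> look like this.<\/p>\n<pre class=\"brush:php\">global\r\n        maxconn         5000\r\n        nbproc          2\r\n        cpu-map         1 0\r\n        cpu-map         2 1\r\n\r\nfrontend www\r\n    bind               :80\r\n    mode               http\r\n    default_backend    bencane.com\r\n\r\nbackend bencane.com\r\n    mode    http\r\n    balance static-rr\r\n    server  nyc2 nyc2.bencane.com:80 check\r\n    server  sfo1 sfo1.bencane.com:80 check<\/pre>\n<p>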
For more tuning options, you can checkout <a href=\"http:\/\/www.haproxy.org\/download\/1.7\/doc\/configuration.txt\">HAProxy\u2019s configuration guide<\/a>.<\/p>\n<div class=\"attribution\">\n<table>\n<tbody>\n<tr>\n<td><span class=\"reference\">Reference: <\/span><\/td>\n<td><a href=\"https:\/\/blog.codeship.com\/performance-tuning-haproxy\/\">Performance Tuning HAProxy<\/a> from our <a href=\"http:\/\/www.systemcodegeeks.com\/join-us\/scg\/\">SCG partner<\/a>\u00a0Ben Cane at the <a href=\"http:\/\/blog.codeship.com\/\">Codeship Blog<\/a> blog.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those performance-tuning concepts and walk through some basic tuning options for HAProxy. What is HAProxy HAProxy is a software load balancer commonly used to distribute TCP-based traffic &hellip;<\/p>\n","protected":false},"author":36,"featured_media":195,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[25],"tags":[73],"class_list":["post-1991","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-nginx","tag-haproxy"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Performance Tuning HAProxy - System Code Geeks - 2026<\/title>\n<meta name=\"description\" content=\"In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. 
In this article, we are going to once again explore those\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Performance Tuning HAProxy - System Code Geeks - 2026\" \/>\n<meta property=\"og:description\" content=\"In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/\" \/>\n<meta property=\"og:site_name\" content=\"System Code Geeks\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/systemcodegeeks\" \/>\n<meta property=\"article:published_time\" content=\"2016-11-16T09:06:28+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2016\/01\/nginx-logo.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"150\" \/>\n\t<meta property=\"og:image:height\" content=\"150\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Ben Cane\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@systemcodegeeks\" \/>\n<meta name=\"twitter:site\" content=\"@systemcodegeeks\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ben Cane\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"13 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/\"},\"author\":{\"name\":\"Ben Cane\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/#\/schema\/person\/86302616000bcfa1e56e85bf9e0fb377\"},\"headline\":\"Performance Tuning HAProxy\",\"datePublished\":\"2016-11-16T09:06:28+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/\"},\"wordCount\":2223,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2016\/01\/nginx-logo.jpg\",\"keywords\":[\"HAProxy\"],\"articleSection\":[\"NGINX\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/\",\"url\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/\",\"name\":\"Performance Tuning HAProxy - System Code Geeks - 
2026\",\"isPartOf\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2016\/01\/nginx-logo.jpg\",\"datePublished\":\"2016-11-16T09:06:28+00:00\",\"description\":\"In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those\",\"breadcrumb\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#primaryimage\",\"url\":\"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2016\/01\/nginx-logo.jpg\",\"contentUrl\":\"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2016\/01\/nginx-logo.jpg\",\"width\":150,\"height\":150},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.systemcodegeeks.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Web Servers\",\"item\":\"https:\/\/www.systemcodegeeks.com\/category\/web-servers\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"NGINX\",\"item\":\"https:\/\/www.systemcodegeeks.com\/category\/web-servers\/nginx\/\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"Performance Tuning 
HAProxy\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/#website\",\"url\":\"https:\/\/www.systemcodegeeks.com\/\",\"name\":\"System Code Geeks\",\"description\":\"Operating System Developers Resource Center\",\"publisher\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.systemcodegeeks.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/#organization\",\"name\":\"Exelixis Media P.C.\",\"url\":\"https:\/\/www.systemcodegeeks.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2022\/06\/exelixis-logo.png\",\"contentUrl\":\"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2022\/06\/exelixis-logo.png\",\"width\":864,\"height\":246,\"caption\":\"Exelixis Media P.C.\"},\"image\":{\"@id\":\"https:\/\/www.systemcodegeeks.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/systemcodegeeks\",\"https:\/\/x.com\/systemcodegeeks\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/#\/schema\/person\/86302616000bcfa1e56e85bf9e0fb377\",\"name\":\"Ben Cane\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.systemcodegeeks.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/58e760dbb7e91f10c242039f23e9c0f2d82f86e5d10798ad76f2a34820fa06d0?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/58e760dbb7e91f10c242039f23e9c0f2d82f86e5d10798ad76f2a34820fa06d0?s=96&d=mm&r=g\",\"caption\":\"Ben Cane\"},\"description\":\"Benjamin Cane is a systems 
architect in the financial services industry. He writes about Linux systems administration on his blog and has recently published his first book, Red Hat Enterprise Linux Troubleshooting Guide.\",\"url\":\"https:\/\/www.systemcodegeeks.com\/author\/ben-cane\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Performance Tuning HAProxy - System Code Geeks - 2026","description":"In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/","og_locale":"en_US","og_type":"article","og_title":"Performance Tuning HAProxy - System Code Geeks - 2026","og_description":"In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those","og_url":"https:\/\/www.systemcodegeeks.com\/web-servers\/nginx\/performance-tuning-haproxy\/","og_site_name":"System Code Geeks","article_publisher":"https:\/\/www.facebook.com\/systemcodegeeks","article_published_time":"2016-11-16T09:06:28+00:00","og_image":[{"width":150,"height":150,"url":"https:\/\/www.systemcodegeeks.com\/wp-content\/uploads\/2016\/01\/nginx-logo.jpg","type":"image\/jpeg"}],"author":"Ben Cane","twitter_card":"summary_large_image","twitter_creator":"@systemcodegeeks","twitter_site":"@systemcodegeeks","twitter_misc":{"Written by":"Ben Cane","Est. 
About the author: Ben Cane is a systems architect in the financial services industry. He writes about Linux systems administration on his blog and has recently published his first book, Red Hat Enterprise Linux Troubleshooting Guide.