{"id":15403,"date":"2016-12-14T12:15:53","date_gmt":"2016-12-14T10:15:53","guid":{"rendered":"https:\/\/www.webcodegeeks.com\/?p=15403"},"modified":"2016-12-09T19:00:32","modified_gmt":"2016-12-09T17:00:32","slug":"getting-every-microsecond-uwsgi","status":"publish","type":"post","link":"https:\/\/www.webcodegeeks.com\/python\/getting-every-microsecond-uwsgi\/","title":{"rendered":"Getting Every Microsecond Out of uWSGI"},"content":{"rendered":"<p>In recent articles, I covered performance tuning both <a href=\"https:\/\/blog.codeship.com\/performance-tuning-haproxy\/\">HAProxy<\/a> and <a href=\"https:\/\/blog.codeship.com\/tuning-nginx\/\">NGINX<\/a>. Today\u2019s article will be similar, however we\u2019re going to go further down the stack and explore tuning a <strong>Python<\/strong> application running via <strong>uWSGI<\/strong>.<\/p>\n<h2>What Is uWSGI<\/h2>\n<p>In order to deploy a web application written in Python, you would typically need two supporting components.<\/p>\n<p>The first is a traditional web server such as <strong>NGINX<\/strong> to perform basic web server tasks such as caching, serving static content, and handling inbound connections.<\/p>\n<p>The second is an application server such as <a href=\"https:\/\/uwsgi-docs.readthedocs.io\/en\/latest\/\"><strong>uWSGI<\/strong><\/a>.<\/p>\n<p>In this context, an application server is a service that acts as a middleware between the application and the traditional web server. The role of an application server typically includes starting the application, managing the application, as well as handling incoming connections to the application itself.<\/p>\n<p>With a web-based application, this means accepting HTTP requests from the web server and routing those requests to the underlying application.<\/p>\n<p>uWSGI is an application server commonly used for Python applications. 
However, <strong>uWSGI<\/strong> supports more than just Python; it supports many other types of applications, such as ones written in Ruby, Perl, PHP, or even Go. Even with all of these other options, uWSGI is mostly known for its use with Python applications, partly because Python was the first supported language for uWSGI.<\/p>\n<p>Another thing uWSGI is known for is being performant; today, we\u2019ll explore how to make it even more so by adjusting some of its many configuration options to increase throughput for a simple Python web application.<\/p>\n<h2>Our Simple REST API<\/h2>\n<p>In order to properly tune uWSGI, we first must understand the application we are tuning. In this article, that will be a simple REST API designed to return a Fibonacci sequence to those who perform an HTTP GET request.<\/p>\n<p>The application itself is written in Python using the <a href=\"http:\/\/flask.pocoo.org\/docs\/0.11\/deploying\/uwsgi\/\"><strong>Flask<\/strong> web framework<\/a>. This application is extremely small, and meant as a quick and dirty sample for our tuning exercise.<\/p>\n<p>Let\u2019s take a look at how it works before moving into tuning uWSGI.<\/p>\n<p><code>app.py<\/code>:<\/p>\n<pre class=\"brush:php\">''' Quick Fibonacci API '''\r\n\r\nfrom flask import Flask\r\nimport json\r\nimport fib\r\n\r\napp = Flask(__name__)\r\n\r\n@app.route(\"\/&lt;number&gt;\", methods=['GET'])\r\ndef get_fib(number):\r\n    ''' Return Fibonacci JSON '''\r\n    return json.dumps(fib.get(int(number))), 200\r\n\r\nif __name__ == '__main__':\r\n    app.run(host=\"0.0.0.0\", port=8080)<\/pre>\n<p>This application consists of two files. The first is <code>app.py<\/code>, which is the main web application that handles accepting HTTP GET requests and determines what to do with them.<\/p>\n<p>In the above code, we can see that <code>app.py<\/code> is calling the <code>fib<\/code> library to perform the actual Fibonacci calculations. This is the second file of our application. 
Let\u2019s take a look at this library to get a quick understanding of how it works.<\/p>\n<p><code>fib.py<\/code>:<\/p>\n<pre class=\"brush:php\">''' Fibonacci calculator '''\r\n\r\ndef get(number):\r\n    ''' Generate fib sequence until specified number is exceeded '''\r\n    # Seed the sequence with 0 and 1\r\n    sequence = [0, 1]\r\n    while sequence[-1] &lt; number:\r\n        sequence.append(sequence[-2] + sequence[-1])\r\n    return sequence<\/pre>\n<p>From the above code, we can see that this function simply takes an argument of <code>number<\/code> and generates a Fibonacci sequence until it reaches or exceeds the specified number. As previously mentioned, this application is a pretty simple REST API that does some basic calculations based on user input and returns the result.<\/p>\n<p>With the application now in mind, let\u2019s go ahead and start our tuning exercise.<\/p>\n<h2>Setting Up uWSGI<\/h2>\n<p>As with all performance-tuning exercises, it\u2019s best to first establish a baseline performance measurement. For this article, we will be using a bare-bones setup of uWSGI as our baseline. To get started, let\u2019s go ahead and set up that environment now.<\/p>\n<h3>Installing Python\u2019s package manager<\/h3>\n<p>Since we are starting from scratch, we\u2019ll need to install several packages. We\u2019ll do this with a combination of <a href=\"https:\/\/pypi.python.org\/pypi\/pip\"><code>pip<\/code><\/a>, the Python package manager, and <a href=\"https:\/\/help.ubuntu.com\/lts\/serverguide\/apt.html\"><strong>Apt<\/strong><\/a>, the system package manager for Ubuntu.<\/p>\n<p>In order to install <code>pip<\/code>, we will need to install the <code>python-pip<\/code> system package. 
We can do so with the <code>apt-get<\/code> command.<\/p>\n<pre class=\"brush:php\"># apt-get install python-pip<\/pre>\n<p>With <code>pip<\/code> installed, let\u2019s go ahead and start installing our other dependencies.<\/p>\n<h3>Installing Flask and uWSGI<\/h3>\n<p>To support our minimal application, we only need to install two packages with <code>pip<\/code>: <code>flask<\/code> (the web framework we use) and <code>uwsgi<\/code>. To install these packages, we can simply call the <code>pip<\/code> command with the <code>install<\/code> option.<\/p>\n<pre class=\"brush:php\"># pip install flask uwsgi<\/pre>\n<p>At this point, we have finished installing everything we need for a bare-bones application. Our next step is to configure uWSGI to launch our application.<\/p>\n<h3>Bare-bones uWSGI configuration<\/h3>\n<p>uWSGI has many configuration parameters. For our baseline tests, we will first set up a very basic uWSGI configuration. We\u2019ll do this by adding the following to a new <code>uwsgi.ini<\/code> file:<\/p>\n<pre class=\"brush:php\">[uwsgi]\r\nhttp = :80\r\nchdir = \/root\/fib\r\nwsgi-file = app.py\r\ncallable = app<\/pre>\n<p>The above is essentially just enough configuration to start our web application and nothing more. Before we move into performance testing, let\u2019s first take a second to understand what the above options mean and how they change uWSGI behaviors.<\/p>\n<h2><code>http<\/code> \u2013 HTTP bind address<\/h2>\n<p>The first parameter to explore is the <code>http<\/code> option. This option is used to tell uWSGI which IP and port to bind for incoming HTTP connections. In the example above, we gave the value of <code>:80<\/code>; this means listen on all IPs for connections to port <code>80<\/code>.<\/p>\n<p>The <code>http<\/code> option tells uWSGI one more thing: that this application is a web application and will be receiving requests via HTTP methods. 
uWSGI also supports non-HTTP-based applications by replacing the <code>http<\/code> option with options such as <code>socket<\/code>, <code>ssl-socket<\/code>, and <code>raw-socket<\/code>.<\/p>\n<h2><code>chdir<\/code> \u2013 Change running directory<\/h2>\n<p>The second parameter is the <code>chdir<\/code> option, which tells uWSGI to change its current directory to <code>\/root\/fib<\/code> before launching the application. This option may not be required for all applications but is extremely useful if your application must run from a specified directory.<\/p>\n<h2><code>wsgi-file<\/code> \u2013 Application executable<\/h2>\n<p>The <code>wsgi-file<\/code> option is used to specify the application executable to be called. In our case, this is the <code>app.py<\/code> file.<\/p>\n<h2><code>callable<\/code> \u2013 Internal application object<\/h2>\n<p>Flask-based applications have an internal application object used to start the running web application. For our application, it is the <code>app<\/code> object. When running a Flask application within uWSGI, it\u2019s necessary to provide this object name to the <code>callable<\/code> parameter, as uWSGI will use this object to start the application.<\/p>\n<p>With our basic configuration defined, let\u2019s test whether or not we are able to start our application.<\/p>\n<h3>Starting our web application<\/h3>\n<p>In order to start our application, we can simply execute the <code>uwsgi<\/code> command followed by the configuration file we just created, <code>uwsgi.ini<\/code>.<\/p>\n<pre class=\"brush:php\"># uwsgi .\/uwsgi.ini<\/pre>\n<p>With the above executing successfully, we should now have a running application. 
Let\u2019s go ahead and test making an HTTP request to the application using the following <code>curl<\/code> command:<\/p>\n<pre class=\"brush:php\">$ curl http:\/\/example.com\/9000\r\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946]<\/pre>\n<p>In the above example, we can see the output of the <code>curl<\/code> command is a JSON list of numbers in a Fibonacci sequence. From this result, we can see that our application is running and responding to HTTP requests appropriately.<\/p>\n<h2>Measuring Baseline Performance<\/h2>\n<p>With our application up and running, we can now go ahead and run our performance test case to measure the application\u2019s base performance.<\/p>\n<pre class=\"brush:php\"># ab -c 500 -n 5000 -s 90 http:\/\/example.com\/9000\r\nRequests per second:    347.28 [#\/sec] (mean)\r\nTime per request:       1439.748 [ms] (mean)\r\nTime per request:       2.879 [ms] (mean, across all concurrent requests)<\/pre>\n<p>In the above, we <a href=\"https:\/\/blog.codeship.com\/tuning-nginx\/\">once again<\/a> used the <code>ab<\/code> command to send multiple web requests to our web application. Specifically, the above command is sending <code>5000<\/code> HTTP GET requests to our web application in batches of <code>500<\/code>. The results of this test show that <code>ab<\/code> was able to send a little over <code>347<\/code> HTTP <strong>requests per second<\/strong>.<\/p>\n<p>For a basic out-of-the-box configuration, this level of performance is pretty decent. We can, however, achieve better with just a little bit of tweaking.<\/p>\n<h2>Multithreading<\/h2>\n<p>One of the first things we can adjust is the number of processes that uWSGI is running. Much like our earlier exercise with HAProxy, the default configuration of uWSGI starts only one instance of our web application.<\/p>\n<p>With our current application, this basically means each HTTP request must be handled by a single process. 
If we distribute this across multiple processes, we may see a performance gain.<\/p>\n<p>Luckily, we can do just that by adding the <code>processes<\/code> option to the <code>uwsgi.ini<\/code> file.<\/p>\n<pre class=\"brush:php\">processes = 4<\/pre>\n<p>The above line will tell uWSGI to start four instances of our web application, but this alone isn\u2019t the only thing we can do to increase our possible throughput.<\/p>\n<p>While tuning <a href=\"https:\/\/blog.codeship.com\/performance-tuning-haproxy\/\">HAProxy<\/a>, I talked a bit about CPU Affinity. By default, uWSGI processes have the same CPU Affinity as the master process. What this means is that even though we will now have four instances of our application, all four processes are using the same CPU.<\/p>\n<p>If our system has more than one CPU available, we are neglecting to leverage all of our processing capabilities. Once again, we can check the number of available CPUs by executing the <code>lshw<\/code> command as shown below:<\/p>\n<pre class=\"brush:php\"># lshw -short -class cpu\r\nH\/W path      Device  Class      Description\r\n============================================\r\n\/0\/401                processor  Intel(R) Xeon(R) CPU E5-2650L v3 @ 1.80GHz\r\n\/0\/402                processor  Intel(R) Xeon(R) CPU E5-2650L v3 @ 1.80GHz<\/pre>\n<p>From the output above, our test system has two CPUs available. This means even with four processes, we are only using about half of our processing capability. We can fix this by adding two more uWSGI options, <code>threads<\/code> and <code>enable-threads<\/code>, into the <code>uwsgi.ini<\/code> configuration file.<\/p>\n<pre class=\"brush:php\">processes = 4\r\nthreads = 2\r\nenable-threads = True<\/pre>\n<p>The <code>threads<\/code> option is used to tell uWSGI to start our application in <em>prethreaded mode<\/em>. 
That means it is launching the application across multiple threads, effectively turning our four processes into eight workers.<\/p>\n<p>This also has the effect of distributing the CPU Affinity across both of our available CPUs.<\/p>\n<p>The <code>enable-threads<\/code> option is used to enable threading within uWSGI. This option is required whether you use uWSGI to create threads or you use threading within the application itself. If you have a multithreaded application and performance is not what you expect, it\u2019s a good idea to make sure <code>enable-threads<\/code> is set to <code>True<\/code>.<\/p>\n<h3>Retesting for performance changes<\/h3>\n<p>With these three options now set, let\u2019s go ahead and restart our uWSGI processes and rerun the same <code>ab<\/code> test we ran earlier.<\/p>\n<pre class=\"brush:php\"># ab -c 500 -n 5000 -s 90 http:\/\/example.com\/9000\r\nRequests per second:    1068.63 [#\/sec] (mean)\r\nTime per request:       467.888 [ms] (mean)\r\nTime per request:       0.936 [ms] (mean, across all concurrent requests)<\/pre>\n<p>The results of this test are quite a bit different from the original baseline. In the above test, we can see that our <strong>Requests per second<\/strong> is now <code>1068<\/code>. This is a <code>207%<\/code> improvement by simply enabling multiple threads and processes.<\/p>\n<p>As we have seen in previous tuning exercises, adding multiple uWSGI workers seems to have drastic improvements in performance.<\/p>\n<h2>Disable Logging<\/h2>\n<p>Multithreading may be the most common performance-tuning option for uWSGI, but it is not the only one. Another trick we have available is to disable logging.<\/p>\n<p>While it might not be immediately obvious, logging levels often have a drastic effect on the overall performance of an application. 
Let\u2019s see how much of an impact this change has on our performance before we dig into why and how it improves performance.<\/p>\n<pre class=\"brush:php\">disable-logging = True<\/pre>\n<p>In order to disable logging within uWSGI, we can simply add the <code>disable-logging<\/code> option into the <code>uwsgi.ini<\/code> configuration file as shown above.<\/p>\n<p>While this option may sound like it disables all logging, in reality uWSGI will still provide some logging output. However, the number of log messages is drastically decreased, with only critical events being shown.<\/p>\n<p>Let\u2019s go ahead and see what the impact is by restarting uWSGI and rerunning our test.<\/p>\n<pre class=\"brush:php\"># ab -c 500 -n 5000 -s 90 http:\/\/example.com\/9000\r\nRequests per second:    1483.35 [#\/sec] (mean)\r\nTime per request:       337.076 [ms] (mean)\r\nTime per request:       0.674 [ms] (mean, across all concurrent requests)<\/pre>\n<p>From the above example, we can see that we are now able to send <code>1483<\/code> <strong>requests per second<\/strong>. This is an improvement of over <code>400<\/code> requests per second; quite an increase for such a small change.<\/p>\n<p>By default, uWSGI logs each and every HTTP request to the system console. Writing each message to the screen consumes resources, and so does the logic that formats and emits each log entry. By disabling this logging, we avoid that work and free those resources for our application\u2019s own tasks.<\/p>\n<p>The next option is an interesting one; on the surface, it does not seem like it should improve performance but rather degrade it. Our next option is <code>max-worker-lifetime<\/code>.<\/p>\n<h2>Max Worker Lifetime<\/h2>\n<p>The <code>max-worker-lifetime<\/code> option tells uWSGI to restart worker processes after the specified time (in seconds). 
Let\u2019s go ahead and add the following to our <code>uwsgi.ini<\/code> file:<\/p>\n<pre class=\"brush:php\">max-worker-lifetime = 30<\/pre>\n<p>This will tell uWSGI to restart worker processes every <code>30<\/code> seconds. Let\u2019s see what effect this has after restarting our uWSGI processes and rerunning the <code>ab<\/code> command.<\/p>\n<pre class=\"brush:php\"># ab -c 500 -n 5000 -s 90 http:\/\/example.com\/9000\r\nRequests per second:    1606.62 [#\/sec] (mean)\r\nTime per request:       311.212 [ms] (mean)\r\nTime per request:       0.622 [ms] (mean, across all concurrent requests)<\/pre>\n<p>What is interesting is that one would expect uWSGI to lose some capacity while restarting worker processes. The result of the test, however, shows our throughput increasing by another <code>100<\/code> <strong>Requests per second<\/strong>.<\/p>\n<p>This works because this web application does not need to maintain anything in memory across multiple requests. In fact, this specific application works faster the newer the process is.<\/p>\n<p>The reason for this is simple: a newer process has fewer memory-management tasks to perform. Each HTTP request creates objects in memory for the web application, and eventually the application has to clean up these objects.<\/p>\n<p>By restarting the processes periodically, we are able to forcefully create a clean instance for the next request.<\/p>\n<p>When leveraging a middleware component such as uWSGI, this process can be very effective. This option can also be a bit of a double-edged sword; a value too low may cause more overhead restarting processes than the benefit it brings. 
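Finding a good lifetime is largely empirical. One low-effort way to compare candidate values is to generate a configuration variant per value and benchmark each with the same `ab` run (a sketch; the base config below is abbreviated to the single option under test, and the actual uwsgi/ab invocations are left commented out):

```shell
# Abbreviated base config, for illustration only.
cat > uwsgi.ini <<'EOF'
[uwsgi]
max-worker-lifetime = 30
EOF

# Generate one config variant per candidate lifetime value.
for lifetime in 15 30 60 120; do
  sed "s/^max-worker-lifetime *=.*/max-worker-lifetime = ${lifetime}/" \
      uwsgi.ini > "uwsgi-${lifetime}.ini"
  # uwsgi "./uwsgi-${lifetime}.ini" &                  # start uWSGI with this variant
  # ab -c 500 -n 5000 -s 90 http://example.com/9000    # record Requests per second
done
```

Comparing the recorded requests-per-second numbers across variants shows whether the restart overhead outweighs the memory-cleanup benefit for your workload.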
As with anything, it\u2019s best to try multiple values and see which fits the application at hand.<\/p>\n<h2>Compiling Our Python Library to C<\/h2>\n<p>Now that we\u2019ve tuned uWSGI, we can start looking at other options for greater performance, such as modifying the application itself and how it works.<\/p>\n<p>If we look at the application above, all of the Fibonacci sequence generation is contained within the library <code>fib<\/code>. If we were able to speed up that library, we may see even more performance gains.<\/p>\n<p>A somewhat simple way of speeding up that library is to convert the Python code to C code and tell our application to use the C library instead of a Python library. While this might sound like a hefty task, it is actually fairly simple using Cython.<\/p>\n<p><a href=\"http:\/\/cython.org\/\">Cython<\/a> is a static compiler that is used for creating C extensions for Python. What this means is we can take our <code>fib.py<\/code> and convert it into a C extension.<\/p>\n<p>Let\u2019s go ahead and do just that.<\/p>\n<h3>Install Cython<\/h3>\n<p>Before we can use Cython, we are going to need to install it as well as another system package. The system package in question is the <code>python-dev<\/code> package. This package includes the header files and other development resources used when compiling Cython-generated C code.<\/p>\n<p>To install this system package, we will once again use the <strong>Apt<\/strong> package manager.<\/p>\n<pre class=\"brush:php\"># apt-get install python-dev<\/pre>\n<p>With the <code>python-dev<\/code> package installed, we can now install the <code>Cython<\/code> package using <code>pip<\/code>.<\/p>\n<pre class=\"brush:php\"># pip install Cython<\/pre>\n<p>Once complete, we can start to convert our <code>fib<\/code> library to a C extension.<\/p>\n<h3>Converting our library<\/h3>\n<p>In order to facilitate the conversion, we will go ahead and create a <code>setup.py<\/code> file. 
Within this file, we\u2019ll add the following Python code:<\/p>\n<pre class=\"brush:php\">from distutils.core import setup\r\nfrom Cython.Build import cythonize\r\n\r\nsetup(\r\n    ext_modules=cythonize(\"fib.py\"),\r\n)<\/pre>\n<p>When executed, the above code will \u201cCythonize\u201d the <code>fib.py<\/code> file, creating generated C code. Let\u2019s go ahead and execute <code>setup.py<\/code> to get started.<\/p>\n<pre class=\"brush:php\"># python setup.py build_ext --inplace<\/pre>\n<p>Once the above execution is completed, we should see a total of three files for the <code>fib<\/code> library.<\/p>\n<pre class=\"brush:php\">$ ls -la\r\ntotal 196\r\ndrwxr-xr-x 1 root root    272 Dec  5 21:52 .\r\ndrwxr-xr-x 1 root root    136 Dec  3 21:05 ..\r\n-rw-r--r-- 1 root root    317 Dec  4 03:22 app.py\r\ndrwxr-xr-x 1 root root    102 Dec  3 21:03 build\r\n-rw-r--r-- 1 root root 105135 Dec  5 21:52 fib.c\r\n-rw-r--r-- 1 root root    281 Dec  3 21:03 fib.py\r\n-rwxr-xr-x 1 root root  80844 Dec  5 21:52 fib.so\r\n-rw-r--r-- 1 root root    115 Dec  5 21:51 setup.py<\/pre>\n<p>The <code>fib.c<\/code> file is the C source file that was created by Cython, and the <code>fib.so<\/code> file is the compiled version of this file that our application can import at run time.<\/p>\n<p>Let\u2019s go ahead and restart our application and rerun our test again to see the results.<\/p>\n<pre class=\"brush:php\"># ab -c 500 -n 5000 -s 90 http:\/\/example.com\/9000\r\nRequests per second:    1744.61 [#\/sec] (mean)\r\nTime per request:       286.598 [ms] (mean)\r\nTime per request:       0.573 [ms] (mean, across all concurrent requests)<\/pre>\n<p>While the results do not show as large an increase, at <code>138<\/code> <strong>requests per second<\/strong>, there is an increase in throughput nonetheless. 
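As an aside, the reason the unchanged `import fib` in `app.py` now picks up `fib.so` instead of `fib.py` is Python's import machinery: within a directory, compiled extension modules are checked before source files. The suffix search order can be inspected directly (output is typical of CPython on Linux):

```python
import importlib.machinery as machinery

# The import system tries extension-module suffixes (.so on Linux)
# before source suffixes, so fib.so shadows fib.py sitting next to it.
print(machinery.EXTENSION_SUFFIXES)
print(machinery.SOURCE_SUFFIXES)  # ['.py']
```

This is why no change to `app.py` was needed after the build step; removing or renaming `fib.so` would make `import fib` fall back to the pure-Python version.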
As with most things, the results with Cython will vary from application to application.<\/p>\n<h2>Summary<\/h2>\n<p>In this article, with just a few tweaks to uWSGI and our application, we were not only able to increase performance, we were able to do so significantly.<\/p>\n<p>When we started, our app was only able to accept <code>347<\/code> requests per second. After changing simple parameters, such as the number of worker processes and disabling logging mechanisms, we were able to push this application to <code>1744<\/code> requests per second.<\/p>\n<p>The number of requests is not the only thing that increased. We were also able to reduce the time our application takes to respond to each request. If we go back to the beginning, the \u201cmean\u201d application request took 1.4 seconds to execute. After our changes, this same \u201cmean\u201d is <code>286<\/code> milliseconds. This means overall we were able to shave about 1.1 seconds per request; a respectable difference.<\/p>\n<p>While this article covered most of the available performance-tuning options within uWSGI, there are still quite a few that we haven\u2019t touched. If you have a parameter that you feel we should have explored, feel free to drop it into the comments section.<\/p>\n<div class=\"attribution\">\n<table>\n<tbody>\n<tr>\n<td><span class=\"reference\">Reference: <\/span><\/td>\n<td><a href=\"https:\/\/blog.codeship.com\/getting-every-microsecond-out-of-uwsgi\/\">Getting Every Microsecond Out of uWSGI<\/a> from our <a href=\"http:\/\/www.webcodegeeks.com\/join-us\/wcg\/\">WCG partner<\/a>\u00a0Ben Cane at the <a href=\"http:\/\/blog.codeship.com\/\">Codeship Blog<\/a> blog.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In recent articles, I covered performance tuning both HAProxy and NGINX. Today\u2019s article will be similar, however we\u2019re going to go further down the stack and explore tuning a Python application running via uWSGI. 
What Is uWSGI In order to deploy a web application written in Python, you would typically need two supporting components. The &hellip;<\/p>\n","protected":false},"author":158,"featured_media":1651,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[53],"tags":[410],"class_list":["post-15403","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-python","tag-uwsgi"]}
Today\u2019s article will be similar, however we\u2019re going to go further down the stack","breadcrumb":{"@id":"https:\/\/www.webcodegeeks.com\/python\/getting-every-microsecond-uwsgi\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.webcodegeeks.com\/python\/getting-every-microsecond-uwsgi\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.webcodegeeks.com\/python\/getting-every-microsecond-uwsgi\/#primaryimage","url":"https:\/\/www.webcodegeeks.com\/wp-content\/uploads\/2014\/11\/python-logo.jpg","contentUrl":"https:\/\/www.webcodegeeks.com\/wp-content\/uploads\/2014\/11\/python-logo.jpg","width":150,"height":150},{"@type":"BreadcrumbList","@id":"https:\/\/www.webcodegeeks.com\/python\/getting-every-microsecond-uwsgi\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.webcodegeeks.com\/"},{"@type":"ListItem","position":2,"name":"Python","item":"https:\/\/www.webcodegeeks.com\/category\/python\/"},{"@type":"ListItem","position":3,"name":"Getting Every Microsecond Out of uWSGI"}]},{"@type":"WebSite","@id":"https:\/\/www.webcodegeeks.com\/#website","url":"https:\/\/www.webcodegeeks.com\/","name":"Web Code Geeks","description":"Web Developers Resource Center","publisher":{"@id":"https:\/\/www.webcodegeeks.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.webcodegeeks.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.webcodegeeks.com\/#organization","name":"Exelixis Media 
P.C.","url":"https:\/\/www.webcodegeeks.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.webcodegeeks.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.webcodegeeks.com\/wp-content\/uploads\/2022\/06\/exelixis-logo.png","contentUrl":"https:\/\/www.webcodegeeks.com\/wp-content\/uploads\/2022\/06\/exelixis-logo.png","width":864,"height":246,"caption":"Exelixis Media P.C."},"image":{"@id":"https:\/\/www.webcodegeeks.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/webcodegeeks","https:\/\/x.com\/webcodegeeks"]},{"@type":"Person","@id":"https:\/\/www.webcodegeeks.com\/#\/schema\/person\/4f5d918df9c19fab91b5b205357ce0b8","name":"Benjamin Cane","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.webcodegeeks.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/09c6af2f1a7430456089189937094b817ef1b7c75ab9968bfd3ec35d938d914b?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/09c6af2f1a7430456089189937094b817ef1b7c75ab9968bfd3ec35d938d914b?s=96&d=mm&r=g","caption":"Benjamin Cane"},"description":"Benjamin Cane is a systems architect in the financial services industry. 
He writes about Linux systems administration on his blog and has recently published his first book, Red Hat Enterprise Linux Troubleshooting Guide.","url":"https:\/\/www.webcodegeeks.com\/author\/benjamin-cane\/"}]}},"_links":{"self":[{"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/posts\/15403","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/users\/158"}],"replies":[{"embeddable":true,"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/comments?post=15403"}],"version-history":[{"count":0,"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/posts\/15403\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/media\/1651"}],"wp:attachment":[{"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/media?parent=15403"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/categories?post=15403"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.webcodegeeks.com\/wp-json\/wp\/v2\/tags?post=15403"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}