Channel: Active questions tagged python - Webmasters Stack Exchange

Flask on gunicorn/cheroot scalability [closed]


I have developed an HTTP API endpoint using Flask which accepts JSON data on POST requests and sends back a JSON response.

I have tried multiple WSGI servers (gunicorn, cheroot, Bjoern) behind Nginx as a reverse proxy.

I noticed that no matter which WSGI server I use, the application cannot handle a sustained load of 500 requests per second. A sudden burst of 500 is handled fine, but not a sustained load: responses start getting delayed, and quite a few requests simply time out.

The Flask app is deployed on a 24-core physical server, so it has 48 logical cores. I am using a C++ application on another, similar 24-core server to fire the requests asynchronously: one request every 2 ms, i.e. 500 per second.
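For reference, the pacing the C++ client uses can be sketched in Python. This is an illustrative sketch, not the actual client: it computes absolute send deadlines from time.monotonic(), which avoids the drift you get when sleeping a fixed interval between sends.

```python
import time

def paced_schedule(rate_hz, duration_s):
    """Yield absolute send deadlines for a fixed-rate load test."""
    # Absolute deadlines (start + i * interval) avoid the drift that
    # accumulates when sleeping a fixed interval after each send.
    interval = 1.0 / rate_hz
    start = time.monotonic()
    for i in range(int(rate_hz * duration_s)):
        yield start + i * interval

# 500 requests per second for one second: 500 deadlines, 2 ms apart.
deadlines = list(paced_schedule(500, 1))
```

A real client would sleep until each deadline and then fire the request; the point is that the offered load stays at exactly 500/s even if individual sends run long.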

Consider the simple single-file Flask application below, running on the cheroot WSGI server, which I created to benchmark performance. It only logs the request JSON and sends back a response JSON. Even this cannot handle a sustained load of 500 requests per second on a powerful 24-core physical server, and CPU usage stays below 5% throughout the test.

import os
import json
import logging
from logging.handlers import TimedRotatingFileHandler
from flask import Flask
from flask import request, jsonify
from cheroot.wsgi import PathInfoDispatcher
from cheroot.wsgi import Server

app = Flask(__name__)

# Setup logger for the app
if not os.path.exists('logs'):
    os.mkdir('logs')
file_handler = TimedRotatingFileHandler('logs/simpleflaskapp.log', when='midnight', interval=1, backupCount=10)
file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]'))
file_handler.setLevel(logging.INFO)
app.logger.addHandler(file_handler)
app.logger.setLevel(logging.INFO)
app.logger.info("simpleflaskapp startup")
# end setup logger

@app.route('/test', methods=['POST'])
def test():
    app.logger.info(json.dumps(request.json))
    res = {
        "statusCode": 200,
        "message": "OK",
    }
    return jsonify(res)

d = PathInfoDispatcher({'/': app})
server = Server(('0.0.0.0', 8000), d, numthreads=os.cpu_count(), request_queue_size=int(os.cpu_count()/2))

if __name__ == '__main__':
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()
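As a sanity check on the numthreads setting, Little's law relates thread count to per-request latency: in-flight requests = arrival rate x average latency. So 48 blocking worker threads can sustain 500 req/s only if the average request, including its synchronous log write, stays under 48/500 = 96 ms. A quick sketch of that budget (the function name is my own, not from any library):

```python
def latency_budget_s(num_threads, rate_per_s):
    """Little's law rearranged: with num_threads blocking workers,
    sustaining rate_per_s requires average latency <= threads / rate."""
    return num_threads / rate_per_s

# 48 threads at 500 req/s: each request must average under 96 ms,
# or the pool saturates and new requests queue up and time out.
budget = latency_budget_s(48, 500)
```

If a single slow log flush or GIL stall pushes the average past that budget, the queue grows without bound, which matches the delayed-then-timed-out behaviour described above.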

The author of the blog post https://www.appdynamics.com/blog/engineering/a-performance-analysis-of-python-wsgi-servers-part-2/ is able to serve several thousand requests per second on a 2-core machine. What am I doing wrong?

Eliminating the disk I/O by commenting out the logging in the sample app above gets me to 666 requests per second, but no more. That is still low considering the hardware I am running it on.

I have already checked the Nginx configuration; it is set up to handle much higher loads. I have also tried firing requests directly at the WSGI server, skipping Nginx, and the results of that are worse.

