Set Up a Highly Available MySQL Cluster with ProxySQL

Abhishek Patel
6 min read · May 7, 2020


As defined in the official documentation, ProxySQL is a high-performance, high-availability, protocol-aware proxy for MySQL and its forks (like Percona Server and MariaDB). As the name suggests, ProxySQL proxies incoming traffic to backend MySQL clusters. But because ProxySQL is developed by DBAs for DBAs, it is not limited to simply passing traffic through. ProxySQL helps DBAs solve their daily issues easily with features like read/write split, multiplexing, query caching and many more, providing high availability and scalability for MySQL clusters.

Here we will see how ProxySQL features can help us make our MySQL clusters highly available. We will mainly focus on the following features of ProxySQL to design the architecture:

  • Multiplexing
  • Read/Write split
  • Load balancing with hostgroup weights


When it comes to scaling high-traffic applications, mainly those which require many connections to databases, one of the bottlenecks we hit is the maximum number of connections available on the MySQL servers. MySQL's max_connections limit is determined by the resources (memory, CPU) available on the server, so we can't tweak this configuration beyond a certain point. Increasing this limit can also result in poor performance: MySQL creates one thread per connection, and if we get too many connections the server will be busy scheduling all those threads rather than executing queries.
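For reference, the current limit and how close we are to it can be inspected from any MySQL client (the value 5000 below is purely illustrative, not a recommendation):

```sql
-- Check the configured connection limit and the high-water mark reached so far
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';

-- Raise the limit at runtime (illustrative value; requires the SUPER privilege)
SET GLOBAL max_connections = 5000;
```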

To solve this problem we often introduce proxies with connection pooling between the application servers and our MySQL servers. ProxySQL provides the same functionality with a more efficient implementation than most of the proxies available in the market. ProxySQL's multiplexing feature uses a connection pool to reach the backend servers. Whenever a request completes, ProxySQL's hostgroup manager checks whether the connection can be reused; if it can (no transaction is active on the connection, no temp table was created and no session variable was set) and the connection pool isn't full, the hostgroup manager puts the connection back into the pool to serve the next request. With multiplexing we can scale hundreds of thousands of incoming connection requests down to a few hundred backend connections. When we introduce ProxySQL into our architecture it should look like this:
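As a sketch of what this setup looks like in ProxySQL's admin interface (the hostnames, hostgroup IDs and connection limits below are illustrative), backend servers are registered in the mysql_servers table, and multiplexing is enabled by default via the mysql-multiplexing variable:

```sql
-- Register backends: hostgroup 10 for the writer, 20 for readers (illustrative IDs)
INSERT INTO mysql_servers (hostgroup_id, hostname, port, max_connections)
VALUES (10, 'mysql-master.internal', 3306, 200),
       (20, 'mysql-slave-1.internal', 3306, 200),
       (20, 'mysql-slave-2.internal', 3306, 200);

-- Multiplexing is on by default; verify the setting
SELECT * FROM global_variables WHERE variable_name = 'mysql-multiplexing';

-- Activate the configuration and persist it
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

The max_connections column here caps how many pooled backend connections ProxySQL will open per server, which is what keeps tens of thousands of client connections funneled into a few hundred backend ones.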

Issues with the Above Architecture:

Everything works fine with this design until you scale your application servers to serve more traffic, at which point you will eventually hit the resource limits on ProxySQL. This topology also adds latency to each database connection, since you need to reach ProxySQL over the network every time a new connection is required. It is considered good practice to keep the ProxySQL instances close to the application servers; in fact it is advised to run a ProxySQL instance locally on each application server. When you have your proxy instance running locally, your application can communicate with it over a Unix socket, which is much faster than a normal network connection. If you run ProxySQL on the application servers, your architecture should look similar to the one given below:
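ProxySQL's listening endpoints, both TCP and the Unix socket, are controlled by the mysql-interfaces variable (the socket path below is a common default; adjust it to your install):

```sql
-- Make ProxySQL listen on TCP port 6033 and on a local Unix socket
UPDATE global_variables
SET variable_value = '0.0.0.0:6033;/tmp/proxysql.sock'
WHERE variable_name = 'mysql-interfaces';
SAVE MYSQL VARIABLES TO DISK;
-- Note: changes to mysql-interfaces take effect only after a ProxySQL restart
```

The application then connects through the socket, e.g. `mysql -u appuser -p -S /tmp/proxysql.sock`, avoiding the TCP stack entirely for local traffic.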

Query Caching:

ProxySQL provides a query caching mechanism out of the box. You can add query rules in the proxy to tell it which types of queries to cache and for how long. ProxySQL exposes many statistics to help you decide which queries to cache. Executing the query below will list the query patterns executed through ProxySQL along with the number of times each was executed and the total time spent on it.

SELECT count_star,sum_time,hostgroup,digest,digest_text FROM stats_mysql_query_digest ORDER BY sum_time DESC;

Once the queries to cache are identified, we can add query rules with a cache TTL to enable caching for a given query pattern or query digest.

INSERT INTO mysql_query_rules (active,digest,cache_ttl,apply) VALUES (1,'0xE8930CB2CC9E68D7',2000,1);
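One detail worth noting: rows inserted into mysql_query_rules only take effect after being loaded to runtime (and saved to disk if they should survive a restart):

```sql
-- Activate the new query rules and persist them
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```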

With a proxy instance running locally and query caching enabled, no connection is made to the backend servers for cached queries; the result is returned from the in-memory cache. There are some known limitations with query caching in ProxySQL, which are covered in the official documentation.

Why do we need a centralized ProxySQL instance?

In the above architecture we have a ProxySQL instance running locally on each application server, and each instance connects to all configured backend servers. So, for example, if we have one MySQL master, two MySQL slaves and three application servers with ProxySQL instances running locally, there will be nine connection paths to the backend servers: each proxy instance connects to one master and two slaves. Every additional network connection widens the attack surface and increases the security risk. It is good practice to keep all our backend servers behind one ProxySQL instance configured with proper security, and have all the locally running ProxySQL instances connect to this centralized proxy server.

A centralized ProxySQL instance keeps all our MySQL backend servers behind the proxy and reduces the attack surface. It helps with metrics as well, since the centralized instance aggregates query statistics for all queries executed on our backend servers. A centralized proxy also caps the number of connections going to the backend servers, which helps us scale our applications better in the future.

Single Point of Failure!!

Wait a minute: with a centralized ProxySQL we have introduced a hop that is a single point of failure in our architecture!! If the centralized proxy goes down, our application won't be able to connect to the DB servers and the entire application will go down. A SPoF in an architecture is normally considered bad design. To solve this we can use the read/write split feature along with weighted traffic routing to backend servers. In ProxySQL each backend server is assigned to a hostgroup, and within each hostgroup we can give each server a weight for traffic routing. With this feature at our disposal we can run two centralized ProxySQL instances, as shown below, with failover configured.

As shown in the above architecture, we configure all ProxySQL instances running locally (AP1, AP2 and AP3) to route 99% of write traffic to the centralized ProxySQL instance CP1, by assigning it a weight of 99, and 1% of write traffic to the other centralized ProxySQL server CP2 (CP2 gets a weight of 1). If one of the centralized proxies goes down, it will be shunned and the entire traffic will be served by the other centralized ProxySQL instance. We configure the read traffic split the same way, but in reverse: all local ProxySQL instances route 99% of read traffic, by assigning the appropriate weight, through the CP2 server and the remaining 1% through CP1. In this architecture, if either centralized ProxySQL server goes down it will be shunned by all the ProxySQL instances running on the application servers, and the entire read and write traffic will be served by the remaining centralized instance.
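On each local ProxySQL instance, the weighted split described above could be configured roughly as follows. The hostnames, hostgroup IDs and the `^SELECT` routing rule are illustrative assumptions; the two centralized proxies are simply registered as ordinary weighted backends:

```sql
-- Writer hostgroup 1: 99% of writes to CP1, 1% to CP2
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
VALUES (1, 'cp1.internal', 6033, 99),
       (1, 'cp2.internal', 6033, 1);

-- Reader hostgroup 2: the reverse split, 99% of reads to CP2
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
VALUES (2, 'cp2.internal', 6033, 99),
       (2, 'cp1.internal', 6033, 1);

-- Route SELECTs to the reader hostgroup; everything else follows
-- the user's default hostgroup (the writer)
INSERT INTO mysql_query_rules (active, match_digest, destination_hostgroup, apply)
VALUES (1, '^SELECT', 2, 1);
UPDATE mysql_users SET default_hostgroup = 1;

-- Activate and persist everything
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;
SAVE MYSQL USERS TO DISK;
```

With weights 99 and 1, both centralized proxies stay warm and continuously health-checked, so when one is shunned the other already carries live traffic and failover is immediate.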


In this article I have outlined some best practices to follow with ProxySQL to achieve high availability and better scalability for MySQL clusters. For small applications this architecture is overkill, but you can adapt the design to the requirements of your application and its trade-offs. For applications serving very high load with strict SLAs, this looks like an ideal architecture to implement.

If you liked this article please clap and share it with your colleagues and friends.