Integrate TiDB with ProxySQL
This document describes how to integrate TiDB with ProxySQL, using CentOS 7 as an example. If you want to integrate on other systems, refer to the Try Out section, which introduces how to deploy a test integration environment using Docker and Docker Compose. For more information, refer to the TiDB documentation and the ProxySQL documentation.
You can refer to Build a TiDB cluster in TiDB Cloud (Developer Tier).
1. Download the TiDB source code, change to the `tidb-server` folder, and run the `go build` command:

    ```shell
    git clone git@github.com:pingcap/tidb.git
    cd tidb/tidb-server
    go build
    ```
2. Use the configuration file `tidb-config.toml` to start TiDB. The command is as follows:

    ```shell
    ${TIDB_SERVER_PATH} -config ./tidb-config.toml -store unistore -path "" -lease 0s > ${LOCAL_TIDB_LOG} 2>&1 &
    ```
    Note:

    - The preceding command uses `unistore` as the storage engine, which is a test storage engine in TiDB. Make sure that you use it in a test environment only.
    - `TIDB_SERVER_PATH`: the path of the binary compiled using `go build`. For example, if you execute the preceding command under `/usr/local`, `TIDB_SERVER_PATH` is `/usr/local/tidb/tidb-server/tidb-server`.
    - `LOCAL_TIDB_LOG`: the log file path of TiDB.
TiUP, as the TiDB package manager, makes it easier to manage different cluster components in the TiDB ecosystem, such as TiDB, PD, and TiKV.
1. Install TiUP:

    ```shell
    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
    ```
2. Start TiDB in a test environment:

    ```shell
    tiup playground
    ```
It is recommended to use TiDB Cloud directly when you need a hosted TiDB service (for example, when you cannot manage the cluster yourself or need a cloud-native environment). To build a TiDB cluster in a production environment, refer to Create a TiDB cluster.

Deploying a production environment requires more steps than a test environment. To deploy an on-premises production cluster, it is recommended to refer to Deploy a TiDB cluster using TiUP and then deploy the cluster based on your hardware conditions.
1. Add the ProxySQL repository:

    ```shell
    cat > /etc/yum.repos.d/proxysql.repo << EOF
    [proxysql]
    name=ProxySQL YUM repository
    baseurl=https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/centos/\$releasever
    gpgcheck=1
    gpgkey=https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/repo_pub_key
    EOF
    ```
2. Install ProxySQL:

    ```shell
    yum install proxysql
    ```
3. Start ProxySQL:

    ```shell
    systemctl start proxysql
    ```
To install ProxySQL in other ways, refer to the ProxySQL README or the ProxySQL installation documentation.
To use ProxySQL as a proxy for TiDB, you need to configure ProxySQL. The required configuration items are listed in the following sections. For more details about other configuration items, refer to the ProxySQL official documentation.
ProxySQL uses one port for managing configuration (the ProxySQL Admin interface) and another port for proxying traffic (the ProxySQL MySQL interface).

- ProxySQL Admin interface: To connect to the Admin interface, you can use an `admin` user to read and write configuration, or a `stats` user to read some statistics (but not read or write configuration). The default credentials are `admin:admin` and `stats:stats`. For security reasons, you can use the default credentials only to connect locally. To connect remotely, you need to configure a new user, which is often named `radmin`.
- ProxySQL MySQL interface: Used as a proxy to forward SQL statements to the configured backend services.
There are three layers in the ProxySQL configuration: `runtime`, `memory`, and `disk`. You can change the configuration of the `memory` layer only. After modifying the configuration, you can use `LOAD xxx TO RUNTIME` to make the configuration effective, and/or use `SAVE xxx TO DISK` to save the configuration to disk and prevent it from being lost.
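For example, after you change the server list in the `memory` layer, applying and persisting the change follows this pattern (a minimal illustration; which `LOAD`/`SAVE` statement you run depends on the configuration table you modified):

```sql
-- Copy the mysql_servers configuration from the memory layer to the runtime layer
LOAD mysql servers TO runtime;
-- Persist the mysql_servers configuration from the memory layer to disk
SAVE mysql servers TO DISK;
```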
You can add multiple TiDB servers in ProxySQL. To add TiDB servers, execute the following statements on the ProxySQL Admin interface:

```sql
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0, '127.0.0.1', 4000);
LOAD mysql servers TO runtime;
SAVE mysql servers TO DISK;
```
Field description:

- `hostgroup_id`: ProxySQL manages servers by hostgroup. To distribute SQL statements to several servers evenly, you can configure the servers that need load balancing in the same hostgroup. To distinguish servers, for example for read/write splitting, you can configure them in different hostgroups.
- `hostname`: The IP address or domain of the TiDB server.
- `port`: The port of the TiDB server.
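For example, to load-balance across several TiDB servers, you can put them all in the same hostgroup. The following sketch uses hypothetical addresses (`10.0.1.1` to `10.0.1.3`); replace them with the addresses of your own TiDB servers:

```sql
-- Three hypothetical TiDB servers in hostgroup 0, so ProxySQL balances traffic across them
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES
  (0, '10.0.1.1', 4000),
  (0, '10.0.1.2', 4000),
  (0, '10.0.1.3', 4000);
LOAD mysql servers TO runtime;
SAVE mysql servers TO DISK;
```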
After you add a TiDB server user to ProxySQL, ProxySQL allows this user to log in to the ProxySQL MySQL interface and create a connection with TiDB. Make sure that the user has appropriate permissions in TiDB. To add a TiDB server user, execute the following statements on the ProxySQL Admin interface:

```sql
INSERT INTO mysql_users(username, password, active, default_hostgroup, transaction_persistent) VALUES ('root', '', 1, 0, 1);
LOAD mysql users TO runtime;
SAVE mysql users TO DISK;
```
Field description:

- `username`: The user name.
- `password`: The password.
- `active`: Controls whether the user is active. `1` is active and `0` is inactive. A user can log in only when `active` is `1`.
- `default_hostgroup`: The default hostgroup used by the user, to which SQL statements are distributed unless a query rule routes the traffic to a specific hostgroup.
- `transaction_persistent`: `1` indicates persistent transactions. That is, when the user starts a transaction in a connection, all statements are routed to the same hostgroup until the transaction is committed or rolled back.
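To check what is currently configured, you can query the configuration tables on the ProxySQL Admin interface. The following is an optional check, not a required integration step:

```sql
-- Inspect the users configured in the memory layer
SELECT username, active, default_hostgroup, transaction_persistent FROM mysql_users;
-- Inspect the backend servers configured in the memory layer
SELECT hostgroup_id, hostname, port FROM mysql_servers;
```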
In addition to configuring ProxySQL through the ProxySQL Admin interface, you can also configure it using a configuration file. According to the Configuring ProxySQL through the config file document, the configuration file should only be considered a secondary way of initializing ProxySQL, not the primary way. The configuration file is only read when the SQLite database has not been created, and it is not used again after the SQLite database is created. Therefore, when you configure ProxySQL using the configuration file, you need to delete the SQLite database first with the following command. However, this loses any configuration changes made through the ProxySQL Admin interface.

```shell
rm /var/lib/proxysql/proxysql.db
```
Alternatively, you can execute the `LOAD xxx FROM CONFIG` command to overwrite the current configuration.
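For example, to re-read the server and user sections from the configuration file without deleting the SQLite database, you can run statements like the following on the ProxySQL Admin interface. This is a sketch of the `LOAD xxx FROM CONFIG` pattern; run the `TO RUNTIME`/`TO DISK` statements afterwards to apply and persist the result:

```sql
-- Load entries from /etc/proxysql.cnf into the memory layer
LOAD mysql servers FROM CONFIG;
LOAD mysql users FROM CONFIG;
-- Apply and persist the loaded configuration
LOAD mysql servers TO runtime;
LOAD mysql users TO runtime;
SAVE mysql servers TO DISK;
SAVE mysql users TO DISK;
```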
The path of the configuration file is `/etc/proxysql.cnf`. To configure the required items described in the preceding sections using the configuration file, the following takes `mysql_servers` and `mysql_users` as examples. To modify other items, refer to `/etc/proxysql.cnf`.
```
mysql_servers =
(
    {
        address="127.0.0.1"
        port=4000
        hostgroup=0
        max_connections=2000
    }
)

mysql_users:
(
    {
        username = "root"
        password = ""
        default_hostgroup = 0
        max_connections = 1000
        default_schema = "test"
        active = 1
        transaction_persistent = 1
    }
)
```
To make the preceding modifications effective, use the `systemctl restart proxysql` command to restart ProxySQL. After the restart, the SQLite database is created automatically from the configuration file, and later changes to the configuration file are ignored.
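To verify that the initialization took effect, you can connect to the ProxySQL Admin interface locally. This is an optional check, assuming the default `admin:admin` credentials and the default Admin interface port `6032`:

```shell
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "SELECT hostgroup_id, hostname, port FROM mysql_servers;"
```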
The preceding configuration items are required. For optional configuration items, refer to Global variables.
To quickly start the test environment, you can use Docker and Docker Compose. Make sure that ports `4000` and `6033` are not already in use.

```shell
git clone https://github.com/Icemap/tidb-proxysql-integration-test.git
cd tidb-proxysql-integration-test && docker-compose pull # Get the latest Docker images
sudo setenforce 0 # Only on Linux
docker-compose up -d
```
Warning:
DO NOT use the preceding commands to create an integration in production environments.
The preceding commands start an environment that integrates TiDB with ProxySQL and runs two containers. To log in to the ProxySQL `6033` port, you can use the `root` username with an empty password. For more information about the configuration of the containers, see `docker-compose.yaml`. For more details about the configuration of ProxySQL, see `proxysql-docker.cnf`.
To connect to TiDB, run the following command:

```shell
mysql -u root -h 127.0.0.1 -P 6033 -e "SELECT VERSION()"
```

An example result is as follows:

```
+--------------------+
| VERSION()          |
+--------------------+
| 5.7.25-TiDB-v6.1.0 |
+--------------------+
```
Dependencies:
- Docker
- Docker Compose
- MySQL Client
Clone the example code repository and change to the sample directory:

```shell
git clone https://github.com/Icemap/tidb-proxysql-integration-test.git
cd tidb-proxysql-integration-test
```
The following sections use `tidb-proxysql-integration-test` as the root directory.
Change to the sample directory:

```shell
cd example/load-balance-admin-interface
```
To configure load balancing using the ProxySQL Admin interface, you can run the `test-load-balance.sh` script with the following command:

```shell
./test-load-balance.sh
```

The preceding `test-load-balance.sh` script can be run step by step as follows:
1. Start three TiDB containers and a ProxySQL instance:

    ```shell
    docker-compose up -d
    ```

    - This starts three TiDB containers using `docker-compose`. All ports inside the containers are `4000`, and the host ports are `4001`, `4002`, and `4003`.
    - After the TiDB containers are started, the ProxySQL instance starts. The port of the ProxySQL MySQL interface inside the container is `6033`, and the host port is `6034`.
    - The port of the ProxySQL Admin interface is not exposed because it can only be accessed inside the container.
    - For more details about the process, refer to `docker-compose.yaml`.
2. In the three TiDB containers, create the same table schema with different data (`'tidb-0'`, `'tidb-1'`, and `'tidb-2'`) to distinguish the TiDB instances:

    ```shell
    mysql -u root -h 127.0.0.1 -P 4001 << EOF
    DROP TABLE IF EXISTS test.test;
    CREATE TABLE test.test (db VARCHAR(255));
    INSERT INTO test.test (db) VALUES ('tidb-0');
    EOF

    mysql -u root -h 127.0.0.1 -P 4002 << EOF
    DROP TABLE IF EXISTS test.test;
    CREATE TABLE test.test (db VARCHAR(255));
    INSERT INTO test.test (db) VALUES ('tidb-1');
    EOF

    mysql -u root -h 127.0.0.1 -P 4003 << EOF
    DROP TABLE IF EXISTS test.test;
    CREATE TABLE test.test (db VARCHAR(255));
    INSERT INTO test.test (db) VALUES ('tidb-2');
    EOF
    ```
3. To execute `proxysql-prepare.sql` on the ProxySQL Admin interface, run the following `docker-compose exec` command:

    ```shell
    docker-compose exec proxysql sh -c "mysql -uadmin -padmin -h127.0.0.1 -P6032 < ./proxysql-prepare.sql"
    ```

    The preceding SQL file runs and triggers the following operations:

    - Adds the hosts of the three TiDB servers and sets all `hostgroup_id` values to `0`.
    - Makes the configuration of the TiDB servers effective and saves it to disk.
    - Adds a `root` user with an empty password and sets `default_hostgroup` to `0`, corresponding to the preceding `hostgroup_id` of the TiDB servers.
    - Makes the configuration of the user effective and saves it to disk.
4. Log in to the ProxySQL MySQL interface with the `root` user and query 5 times using the following statements. The expected output contains three different values: `'tidb-0'`, `'tidb-1'`, and `'tidb-2'`.

    ```shell
    mysql -u root -h 127.0.0.1 -P 6034 -t << EOF
    SELECT * FROM test.test;
    SELECT * FROM test.test;
    SELECT * FROM test.test;
    SELECT * FROM test.test;
    SELECT * FROM test.test;
    EOF
    ```
5. To stop and remove containers and networks, you can use the following command:

    ```shell
    docker-compose down
    ```
The expected output contains three different results (`'tidb-0'`, `'tidb-1'`, and `'tidb-2'`), but the exact order cannot be predicted. The following is one of the expected outputs:
```
# ./test-load-balance.sh
Creating network "load-balance-admin-interface_default" with the default driver
Creating load-balance-admin-interface_tidb-1_1 ... done
Creating load-balance-admin-interface_tidb-2_1 ... done
Creating load-balance-admin-interface_tidb-0_1 ... done
Creating load-balance-admin-interface_proxysql_1 ... done
+--------+
| db     |
+--------+
| tidb-2 |
+--------+
+--------+
| db     |
+--------+
| tidb-0 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
Stopping load-balance-admin-interface_proxysql_1 ... done
Stopping load-balance-admin-interface_tidb-0_1 ... done
Stopping load-balance-admin-interface_tidb-2_1 ... done
Stopping load-balance-admin-interface_tidb-1_1 ... done
Removing load-balance-admin-interface_proxysql_1 ... done
Removing load-balance-admin-interface_tidb-0_1 ... done
Removing load-balance-admin-interface_tidb-2_1 ... done
Removing load-balance-admin-interface_tidb-1_1 ... done
Removing network load-balance-admin-interface_default
```
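If you want to see how ProxySQL distributed the queries across the three TiDB backends, you can additionally query ProxySQL's connection pool statistics from inside the container before running `docker-compose down`. This check is not part of the `test-load-balance.sh` script; `stats_mysql_connection_pool` is a standard ProxySQL statistics table:

```shell
# Optional check: show how many queries each backend in hostgroup 0 has served
docker-compose exec proxysql sh -c "mysql -uadmin -padmin -h127.0.0.1 -P6032 -e 'SELECT hostgroup, srv_host, srv_port, Queries FROM stats_mysql_connection_pool;'"
```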
Change to the sample directory:

```shell
cd example/user-split-admin-interface
```
To configure user-based traffic splitting using the ProxySQL Admin interface, you can run the `test-user-split.sh` script with the following command:

```shell
./test-user-split.sh
```

The preceding `test-user-split.sh` script can be run step by step as follows:
1. Start two TiDB containers and a ProxySQL instance:

    ```shell
    docker-compose up -d
    ```

    - This starts two TiDB containers using `docker-compose`. All ports inside the containers are `4000`, and the host ports are `4001` and `4002`.
    - After the TiDB containers are started, the ProxySQL instance starts. The port of the ProxySQL MySQL interface inside the container is `6033`, and the host port is `6034`.
    - The port of the ProxySQL Admin interface is not exposed because it can only be accessed inside the container.
    - For more details about the process, refer to `docker-compose.yaml`.
2. In the two TiDB containers, create the same table schema with different data (`'tidb-0'` and `'tidb-1'`) to distinguish the TiDB instances:

    ```shell
    mysql -u root -h 127.0.0.1 -P 4001 << EOF
    DROP TABLE IF EXISTS test.test;
    CREATE TABLE test.test (db VARCHAR(255));
    INSERT INTO test.test (db) VALUES ('tidb-0');
    EOF

    mysql -u root -h 127.0.0.1 -P 4002 << EOF
    DROP TABLE IF EXISTS test.test;
    CREATE TABLE test.test (db VARCHAR(255));
    INSERT INTO test.test (db) VALUES ('tidb-1');
    EOF
    ```
3. Create a new user for ProxySQL in the `tidb-1` instance:

    ```shell
    mysql -u root -h 127.0.0.1 -P 4002 << EOF
    CREATE USER 'root1' IDENTIFIED BY '';
    GRANT ALL PRIVILEGES ON *.* TO 'root1'@'%';
    FLUSH PRIVILEGES;
    EOF
    ```
4. To execute `proxysql-prepare.sql` on the ProxySQL Admin interface, run the following `docker-compose exec` command:

    ```shell
    docker-compose exec proxysql sh -c "mysql -uadmin -padmin -h127.0.0.1 -P6032 < ./proxysql-prepare.sql"
    ```

    The preceding SQL file runs and triggers the following operations (a sketch of the user configuration is shown after these steps):

    - Adds the hosts of the two TiDB servers. The `hostgroup_id` of `tidb-0` is `0`, and the `hostgroup_id` of `tidb-1` is `1`.
    - Makes the configuration of the TiDB servers effective and saves it to disk.
    - Adds a `root` user with an empty password and sets `default_hostgroup` to `0`, which means that SQL statements from this user are routed to `tidb-0` by default.
    - Adds a `root1` user with an empty password and sets `default_hostgroup` to `1`, which means that SQL statements from this user are routed to `tidb-1` by default.
    - Makes the configuration of the users effective and saves it to disk.
5. Log in to the ProxySQL MySQL interface with the `root` user and the `root1` user. The expected output contains two different values: `'tidb-0'` and `'tidb-1'`.

    ```shell
    mysql -u root -h 127.0.0.1 -P 6034 -e "SELECT * FROM test.test;"
    mysql -u root1 -h 127.0.0.1 -P 6034 -e "SELECT * FROM test.test;"
    ```
6. To stop and remove containers and networks, you can use the following command:

    ```shell
    docker-compose down
    ```
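For reference, the user configuration described in step 4 corresponds roughly to the following statements on the ProxySQL Admin interface. This is a hypothetical sketch based on the described behavior; the actual `proxysql-prepare.sql` file in the repository may differ in details:

```sql
-- Two users with empty passwords but different default hostgroups:
-- 'root' is routed to hostgroup 0 (tidb-0), 'root1' to hostgroup 1 (tidb-1)
INSERT INTO mysql_users(username, password, active, default_hostgroup, transaction_persistent)
VALUES ('root', '', 1, 0, 1), ('root1', '', 1, 1, 1);
LOAD mysql users TO runtime;
SAVE mysql users TO DISK;
```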
The following is one of the expected outputs:
```
# ./test-user-split.sh
Creating network "user-split-admin-interface_default" with the default driver
Creating user-split-admin-interface_tidb-1_1 ... done
Creating user-split-admin-interface_tidb-0_1 ... done
Creating user-split-admin-interface_proxysql_1 ... done
+--------+
| db     |
+--------+
| tidb-0 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
Stopping user-split-admin-interface_proxysql_1 ... done
Stopping user-split-admin-interface_tidb-0_1 ... done
Stopping user-split-admin-interface_tidb-1_1 ... done
Removing user-split-admin-interface_proxysql_1 ... done
Removing user-split-admin-interface_tidb-0_1 ... done
Removing user-split-admin-interface_tidb-1_1 ... done
Removing network user-split-admin-interface_default
```
Change to the sample directory:

```shell
cd example/proxy-rule-admin-interface
```
To configure proxy rules so that read and write SQL statements are routed to different TiDB servers (statements that match no rule are routed to the user's `default_hostgroup`), you can run the `proxy-rule-split.sh` script with the following command:

```shell
./proxy-rule-split.sh
```

The preceding `proxy-rule-split.sh` script can be run step by step as follows:
1. Start two TiDB containers and a ProxySQL instance:

    ```shell
    docker-compose up -d
    ```

    - This starts two TiDB containers using `docker-compose`. All ports inside the containers are `4000`, and the host ports are `4001` and `4002`.
    - After the TiDB containers are started, the ProxySQL instance starts. The port of the ProxySQL MySQL interface inside the container is `6033`, and the host port is `6034`.
    - The port of the ProxySQL Admin interface is not exposed because it can only be accessed inside the container.
    - For more details about the process, refer to `docker-compose.yaml`.
2. In the two TiDB containers, create the same table schema with different data (`'tidb-0'` and `'tidb-1'`) to distinguish the TiDB instances:

    ```shell
    mysql -u root -h 127.0.0.1 -P 4001 << EOF
    DROP TABLE IF EXISTS test.test;
    CREATE TABLE test.test (db VARCHAR(255));
    INSERT INTO test.test (db) VALUES ('tidb-0');
    EOF

    mysql -u root -h 127.0.0.1 -P 4002 << EOF
    DROP TABLE IF EXISTS test.test;
    CREATE TABLE test.test (db VARCHAR(255));
    INSERT INTO test.test (db) VALUES ('tidb-1');
    EOF
    ```
3. To execute `proxysql-prepare.sql` on the ProxySQL Admin interface, run the following `docker-compose exec` command:

    ```shell
    docker-compose exec proxysql sh -c "mysql -uadmin -padmin -h127.0.0.1 -P6032 < ./proxysql-prepare.sql"
    ```

    The preceding SQL file runs and triggers the following operations (a sketch of these rules is shown after the steps below):

    - Adds the hosts of the two TiDB servers. The `hostgroup_id` of `tidb-0` is `0`, and the `hostgroup_id` of `tidb-1` is `1`.
    - Makes the configuration of the TiDB servers effective and saves it to disk.
    - Adds a `root` user with an empty password and sets `default_hostgroup` to `0`, which means that SQL statements from this user are routed to `tidb-0` by default.
    - Makes the configuration of the user effective and saves it to disk.
    - Adds the rule `^SELECT.*FOR UPDATE$` with `rule_id` of `1` and `destination_hostgroup` of `0`. If a SQL statement matches this rule, it is forwarded to the TiDB server whose `hostgroup` is `0` (this rule forwards `SELECT ... FOR UPDATE` statements to the write database).
    - Adds the rule `^SELECT` with `rule_id` of `2` and `destination_hostgroup` of `1`. If a SQL statement matches this rule, it is forwarded to the TiDB server whose `hostgroup` is `1`.
    - Makes the configuration of the rules effective and saves it to disk.

    Note:

    More details about the matching rules:

    - ProxySQL tries to match the rules one by one in ascending order of `rule_id`.
    - `^` matches the beginning of a SQL statement and `$` matches the end.
    - `match_digest` matches the parameterized SQL statement. For more details, see query_processor_regex.
    - Important parameters:
        - `digest`: match the parameterized hash value.
        - `match_pattern`: match the raw SQL statement.
        - `negate_match_pattern`: if you set the value to `1`, inverse the match for `match_digest` or `match_pattern`.
        - `log`: whether to log the query.
        - `replace_pattern`: if it is not empty, this is the pattern with which to replace the matched pattern.
    - For full parameters, see mysql_query_rules.
4. Log in to the ProxySQL MySQL interface with the `root` user:

    ```shell
    mysql -u root -h 127.0.0.1 -P 6034
    ```

    You can then run the following statements:

    - `SELECT` statement:

        ```sql
        SELECT * FROM test.test;
        ```

        The statement is expected to match the rule with `rule_id` of `2` and to be forwarded to the TiDB server `tidb-1` with `hostgroup` of `1`.

    - `SELECT ... FOR UPDATE` statement:

        ```sql
        SELECT * FROM test.test for UPDATE;
        ```

        The statement is expected to match the rule with `rule_id` of `1` and to be forwarded to the TiDB server `tidb-0` with `hostgroup` of `0`.

    - Transaction:

        ```sql
        BEGIN;
        INSERT INTO test.test (db) VALUES ('insert this and rollback later');
        SELECT * FROM test.test;
        ROLLBACK;
        ```

        The `BEGIN` statement is expected to match no rules, so it uses the user's `default_hostgroup` (which is `0`) and is forwarded to the TiDB server `tidb-0` (`hostgroup` `0`). Because ProxySQL enables `transaction_persistent` for users by default, all statements within the same transaction run in the same hostgroup. Therefore, the `INSERT` statement and `SELECT * FROM test.test;` are also forwarded to the TiDB server `tidb-0` (`hostgroup` `0`).
5. To stop and remove containers and networks, you can use the following command:

    ```shell
    docker-compose down
    ```
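For reference, the query rules described in step 3 correspond roughly to the following statements on the ProxySQL Admin interface. This is a hypothetical sketch based on the described behavior; the actual `proxysql-prepare.sql` file in the repository may differ in details:

```sql
-- Rule 1: route SELECT ... FOR UPDATE statements to hostgroup 0 (the write database)
-- Rule 2: route all other SELECT statements to hostgroup 1 (the read database)
INSERT INTO mysql_query_rules(rule_id, active, match_digest, destination_hostgroup, apply)
VALUES
  (1, 1, '^SELECT.*FOR UPDATE$', 0, 1),
  (2, 1, '^SELECT', 1, 1);
LOAD mysql query rules TO runtime;
SAVE mysql query rules TO DISK;
```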
The following is one of the expected outputs:

```
# ./proxy-rule-split.sh
Creating network "proxy-rule-admin-interface_default" with the default driver
Creating proxy-rule-admin-interface_tidb-1_1 ... done
Creating proxy-rule-admin-interface_tidb-0_1 ... done
Creating proxy-rule-admin-interface_proxysql_1 ... done
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
+--------+
| db     |
+--------+
| tidb-0 |
+--------+
+--------------------------------+
| db                             |
+--------------------------------+
| tidb-0                         |
| insert this and rollback later |
+--------------------------------+
Stopping proxy-rule-admin-interface_proxysql_1 ... done
Stopping proxy-rule-admin-interface_tidb-0_1 ... done
Stopping proxy-rule-admin-interface_tidb-1_1 ... done
Removing proxy-rule-admin-interface_proxysql_1 ... done
Removing proxy-rule-admin-interface_tidb-0_1 ... done
Removing proxy-rule-admin-interface_tidb-1_1 ... done
Removing network proxy-rule-admin-interface_default
```
To configure load balancing using the configuration file, you can run the `test-load-balance.sh` script with the following command:

```shell
cd example/load-balance-config-file
./test-load-balance.sh
```
The expected output is the same as that of Use Admin Interface to configure load balancing. The only difference is that the ProxySQL configuration is initialized using the configuration file.
Note:

The configuration of ProxySQL is stored in SQLite. The configuration file is only used when the SQLite database has not been created.
It is recommended that you use the configuration file only for initialization and not for modifying configuration items, because configuration through the ProxySQL Admin interface supports the following features:

- Input validation.
- Remote configuration by any MySQL client.
- Runtime configuration for maximum uptime (no need to restart).
- Propagating the configuration to other ProxySQL nodes if a ProxySQL cluster is configured.