Docker Compose Setup for InnoDB Cluster
MySQL 16-Jan-2019


In the following we show how an InnoDB cluster can be deployed in a container context. In the official documentation (Introducing InnoDB Cluster), InnoDB Cluster is described as:

MySQL InnoDB cluster provides a complete high availability solution for MySQL. MySQL Shell includes AdminAPI which enables you to easily configure and administer a group of at least three MySQL server instances to function as an InnoDB cluster. Each MySQL server instance runs MySQL Group Replication, which provides the mechanism to replicate data within InnoDB clusters, with built-in failover.

In this blog post we show how to set up InnoDB cluster using the official MySQL Docker containers and run them with docker-compose. We want to show a full example, including how to connect to the cluster through MySQL Router using a simple example application, and we end up with the following components:

  • three mysql-server containers
  • one temporary mysql-shell container (to set up the InnoDB cluster)
  • one mysql-router container (to access the cluster)
  • one simple db application using the router to access the cluster

To run the example you need docker as well as docker-compose. The full example is available here (and works out of the box on Linux):

Docker compose files on Github

A short overview of the containers and their dependencies is given in the following sections.

The files in this example are organised around a docker-compose file:

innodb-cluster/
|-- docker-compose.yml
|-- dbwebapp.env
|-- mysql-router.env
|-- mysql-server.env
|-- mysql-shell.env
`-- scripts
    |-- db.sql
    `-- setupCluster.js

The docker-compose.yml file describes the individual containers, the .env files contain the configuration for the individual parts, and the scripts folder contains the JavaScript and SQL scripts used to set up the cluster and databases.

MySQL Server Images as a Basis

In our docker-compose.yml file we first define three mysql-server containers (mysql-server-1, mysql-server-2, mysql-server-3). All three use the following startup command to satisfy the InnoDB cluster requirements (the only difference is the unique --server_id parameter):

  mysql-server-1:
    env_file:
      - mysql-server.env
    image: mysql/mysql-server:5.7
    ports:
      - "3301:3306"
    command: ["mysqld",
        "--server_id=1",
        "--binlog_checksum=NONE",
        "--gtid_mode=ON",
        "--enforce_gtid_consistency=ON",
        "--log_bin",
        "--log_slave_updates=ON",
        "--master_info_repository=TABLE",
        "--relay_log_info_repository=TABLE",
        "--transaction_write_set_extraction=XXHASH64",
        "--user=mysql",
        "--skip-host-cache",
        "--skip-name-resolve"]

This is based on Production Deployment of InnoDB Cluster and more details can be found there. In addition, we pass $MYSQL_ROOT_PASSWORD and $MYSQL_ROOT_HOST which we will use later on to provision the cluster. NOTE: this is not recommended in a production setting; sound security practices would involve creating less privileged users in this context, but we omit that here for the sake of simplicity and clarity.
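Since the three service definitions are identical apart from --server_id, the pattern can be illustrated with a small JavaScript helper that generates the command list for each server (hypothetical, not part of the example repo; the compose file simply spells the list out per service):

```javascript
// Illustrative helper: build the docker-compose "command" list for
// mysql-server-N, where only the --server_id flag differs.
const COMMON_FLAGS = [
  "--binlog_checksum=NONE",
  "--gtid_mode=ON",
  "--enforce_gtid_consistency=ON",
  "--log_bin",
  "--log_slave_updates=ON",
  "--master_info_repository=TABLE",
  "--relay_log_info_repository=TABLE",
  "--transaction_write_set_extraction=XXHASH64",
  "--user=mysql",
  "--skip-host-cache",
  "--skip-name-resolve",
];

function mysqldCommand(serverId) {
  // "mysqld" plus the unique server id, followed by the shared flags.
  return ["mysqld", `--server_id=${serverId}`, ...COMMON_FLAGS];
}
```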

MySQL Shell to Provision the Cluster

Then we start a fourth container, neumayer/mysql-shell-batch, to set up the cluster via MySQL Shell. This is not an official MySQL image; it simply waits until the given MySQL server is up and then runs the given scripts against it. We use it to keep our example self-contained.

The image is available here: MySQL Shell batch image.

mysql-shell:
  env_file:
    - mysql-server.env
  image: neumayer/mysql-shell-batch
  volumes:
    - ./scripts/:/scripts/
  depends_on:
    - mysql-server-1
    - mysql-server-2
    - mysql-server-3

Internally it runs the following JavaScript (via the mounted scripts directory):

// Credentials and cluster name used for provisioning.
var dbPass = "mysql";
var clusterName = "devCluster";

try {
  print('Setting up InnoDB cluster...\n');
  // Connect to the seed instance and create the cluster on it.
  shell.connect('root@mysql-server-1:3306', dbPass);
  var cluster = dba.createCluster(clusterName);
  print('Adding instances to the cluster.');
  // Add the two remaining instances to the group.
  cluster.addInstance({user: "root", host: "mysql-server-2", password: dbPass});
  print('.');
  cluster.addInstance({user: "root", host: "mysql-server-3", password: dbPass});
  print('.\nInstances successfully added to the cluster.');
  print('\nInnoDB cluster deployed successfully.\n');
} catch(e) {
  print('\nThe InnoDB cluster could not be created.\n\nError: ' + e.message + '\n');
}

And the following SQL to set up a database and user for the example app:

CREATE DATABASE dbwebappdb;
CREATE USER 'dbwebapp'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON dbwebappdb.* TO 'dbwebapp'@'%';

If all goes according to plan, the cluster is ready for use, the user for our example app is created, and the temporary image exits.

MySQL Router

Next, we set up a mysql-router container, bootstrapping it against one of the existing mysql-server instances (this is the official MySQL Router image on Docker Hub):

mysql-router:
  env_file:
    - mysql-shell.env
  image: mysql/mysql-router
  ports:
    - "6446:6446"
  depends_on:
    - mysql-server-1
    - mysql-server-2
    - mysql-server-3
    - mysql-shell

Internally it makes the following calls:

mysqlrouter --bootstrap $MYSQL_USER@$MYSQL_HOST:$MYSQL_PORT --user=mysqlrouter <<< "$MYSQL_PASSWORD"
mysqlrouter

The first call contacts one of the mysql-server instances and acquires information about the other servers from it. A configuration file is written and then used by the second call for the normal startup of the router.
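To give an idea of the result, a simplified, illustrative excerpt of such a generated mysqlrouter.conf is shown below; section names, the generated metadata account, and defaults vary between router versions, so treat this as a sketch rather than exact output:

```ini
[metadata_cache:devCluster]
router_id=1
bootstrap_server_addresses=mysql://mysql-server-1:3306,mysql://mysql-server-2:3306,mysql://mysql-server-3:3306
# the account name below is auto-generated during bootstrap (illustrative here)
user=mysql_router1_example
metadata_cluster=devCluster
ttl=300

[routing:devCluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://devCluster/default?role=PRIMARY
mode=read-write
protocol=classic
```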

Example App

Finally, we start an application container that uses the mysql-router container as its database endpoint. This application is described in more detail in Docker Compose and App Deployment with MySQL.

dbwebapp:
  env_file:
    - dbwebapp.env
  image: neumayer/dbwebapp
  ports:
    - "8057:8080"
  depends_on:
    - mysql-router

The dbwebapp.env file contains the parameters needed to connect to the router container on the right host and port (DBHOST and DBPORT).
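A minimal dbwebapp.env could accordingly look like the following; only the DBHOST and DBPORT variable names are given in the text, and the values assume the compose service name and the router's default read-write port:

```ini
DBHOST=mysql-router
DBPORT=6446
```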

Putting it Together

To run this example, first check out the example repo. Running docker-compose up should pull all needed images and spin up your test cluster. If successful, the following output is displayed by the MySQL Shell container:

mysql-shell_1     | Adding instances to the cluster...
mysql-shell_1     | Instances successfully added to the cluster.
mysql-shell_1     | InnoDB cluster deployed successfully.

The MySQL Router will report that it successfully contacted the cluster and that it is ready to accept incoming connections:

mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Connected with metadata server running on mysql-server-1:3306
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Connected to replicaset 'default' through mysql-server-1:3306
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Changes detected in cluster 'devCluster' after metadata refresh
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] Metadata for cluster 'devCluster' has 1 replicasets:
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700] 'default' (3 members, single-master)
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700]     mysql-server-1:3306 / 33060 - role=HA mode=RW
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700]     mysql-server-2:3306 / 33060 - role=HA mode=RO
mysql-router_1    | 2018-03-05 12:34:17 metadata_cache INFO [7fbd7b7fe700]     mysql-server-3:3306 / 33060 - role=HA mode=RO

And our example app:

dbwebapp_1        | 2018/03/05 12:34:19 Pinging db mysql-router.
dbwebapp_1        | 2018/03/05 12:34:19 Connected to db.
dbwebapp_1        | 2018/03/05 12:34:19 Starting dbwebapp server.

Outlook

We showed how to provision an InnoDB cluster locally with docker-compose using the official MySQL Server and MySQL Router Docker images. We also showed how to configure the cluster and how to access it from an example app. Real-world deployment requirements may vary, but this approach can be adjusted to any dockerised environment.

Further, we want to be clear that our examples are not suitable for a production setting without adjustments. We did not focus on the security of the MySQL instances themselves, on the distribution of secrets to the temporary provisioning image or our application, or on general network-level security. Most of these questions should be addressed by the design of your cloud environment or production setting. Also note that stopping docker-compose will effectively kill your test cluster: a cluster cannot survive a full outage, which this would amount to. To take down the cluster and start from scratch, run docker-compose down.

