Wednesday, June 17, 2020

How can I stop writing SQL statements into the MySQL Audit Log at run time, without a MySQL restart?


How can I disable the MySQL Audit Log while the server is running?



You can turn off MySQL Enterprise Audit logging without restarting MySQL; sometimes we want to turn off auditing to test some functionality in the database.
If you want a refresher on the 5 Ws (what, why, where, when, who) of MySQL Enterprise Audit, please have a look at my previous blog.
First, check whether the audit plugin is running:
mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'audit%';
+-------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------+---------------+
| audit_log   | ACTIVE        |
+-------------+---------------+
1 row in set (0.00 sec)

mysql> show variables like '%audit%';
+--------------------------------------+--------------+
| Variable_name                        | Value        |
+--------------------------------------+--------------+
| audit_log_buffer_size                | 1048576      |
| audit_log_compression                | NONE         |
| audit_log_connection_policy          | ALL          |
| audit_log_current_session            | ON           |
| audit_log_encryption                 | NONE         |
| audit_log_exclude_accounts           |              |
| audit_log_file                       | audit.log    |
| audit_log_filter_id                  | 0            |
| audit_log_flush                      | OFF          |
| audit_log_format                     | NEW          |
| audit_log_include_accounts           |              |
| audit_log_password_history_keep_days | 0            |
| audit_log_policy                     | ALL          |
| audit_log_read_buffer_size           | 32768        |
| audit_log_rotate_on_size             | 0            |
| audit_log_statement_policy           | ALL          |
| audit_log_strategy                   | ASYNCHRONOUS |
+--------------------------------------+--------------+
17 rows in set (0.01 sec)

To disable it, there are two ways:
1. Edit the my.cnf file, comment out these two lines, and restart the MySQL Server.
>> Open the my.cnf file (my.ini on Windows)
      #plugin-load=audit_log.so     ## audit_log.dll on Windows
      #audit-log=FORCE_PLUS_PERMANENT
>> Restart the MySQL Server

2. Without a server restart.
There are audit system variables (audit_log_statement_policy, audit_log_connection_policy) that control how the audit log plugin writes statement and connection events to its log file.
By default these are set to ALL, which means everything is logged.



mysql> set global audit_log_statement_policy=NONE;
Query OK, 0 rows affected (0.00 sec)

mysql> set global audit_log_connection_policy=NONE;
Query OK, 0 rows affected (0.00 sec)

Note:- The plugin remains active, but the audit file will receive no further entries.
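The two SET GLOBAL statements above do not survive a restart. On MySQL 8.0, SET PERSIST can be used instead to keep the setting across restarts; a sketch, using the same two variables shown earlier:

```sql
-- Stop writing statement and connection events, and persist across restarts (MySQL 8.0+)
SET PERSIST audit_log_statement_policy = 'NONE';
SET PERSIST audit_log_connection_policy = 'NONE';

-- To resume full auditing later, restore the defaults
SET PERSIST audit_log_statement_policy = 'ALL';
SET PERSIST audit_log_connection_policy = 'ALL';
```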


Thank you for using MySQL!
Feel free to suggest, correct, and share your ideas/pains so that I can write more blogs on MySQL.



Monday, June 15, 2020

Migrate a Single MySQL Enterprise Database to a Three-Node MySQL Enterprise InnoDB Cluster (latest version 8.0.20)

How do you migrate a single MySQL Enterprise Server to a three-node MySQL Enterprise InnoDB Cluster on the latest version, 8.0.20?


Assume your current production workload resembles multiple app servers, all pointing to a single MySQL Server. Have you thought about the questions below?

1. What if this database goes down? Isn't it a single point of failure?
2. What about data loss?
3. Scalability?
4. Data inconsistencies?
5. DR?

BTW, I am using MySQL Enterprise Edition.

To address these basic questions on SLA and RPO/RTO, MySQL has a high-availability solution developed by the MySQL Engineering team; it has no noisy neighbors, and all components are tested together by the MySQL Engineering team.

MySQL InnoDB Cluster= MySQL Server + MySQL Shell + MySQL Router

In this blog, I will talk about the topics below:

1. Introduction to MySQL InnoDB Cluster
2. New features in MySQL 8.0.20 related to InnoDB Cluster
3. High-level migration from one server to a 03-node MySQL InnoDB Cluster
4. Step-by-step guide
5. Conclusion

Highlights of MySQL InnoDB Cluster can be found here:-

Guidance on which cluster solution suits your application workload can be found here:-

Pain to Solutions Architecture

Let's deploy a single production MySQL Server into a 03-node MySQL InnoDB Cluster.

Prerequisites:-
* About the software
  1. MySQL Server
     - MySQL Server has to be running on all three machines, with the same version:
       mysql> SELECT @@version;
  2. MySQL Shell
     - MySQL Shell has to be installed and running on all three machines (installation steps below).
  3. MySQL Router
     - This software has to be installed where the app is running.
* Qualification of the database: is the production DB ready to deploy?
  - Each table must use the InnoDB storage engine.
  - Each table must have a primary key.



Assume:- You have one production machine and 02 new machines.
In this demo:- 192.168.0.10 --> production database;
               192.168.0.9 and 192.168.0.11 --> new machines.

+--------------+-----------+
| Machine      | Role      |
+--------------+-----------+
| 192.168.0.10 | Primary   |
| 192.168.0.9  | Secondary |
| 192.168.0.11 | Secondary |
+--------------+-----------+

The primary machine has two databases running.
mysql> SELECT table_schema AS "DB Name",
              ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS "DB Size in MB"
       FROM information_schema.tables
       WHERE table_schema NOT IN ('mysql','information_schema','performance_schema','sys')
       GROUP BY table_schema;
+------------+---------------+
| DB Name    | DB Size in MB |
+------------+---------------+
| customerDB |        1899.5 |
| sales      |           0.0 |
+------------+---------------+
2 rows in set (0.07 sec)

You can see I have one machine with a 1.89 GB MySQL database. The idea is to form a new InnoDB Cluster using 02 additional servers on which nothing is installed yet!

The secondary servers are fresh machines with nothing installed, so let's install MySQL Server and MySQL Shell on both of them.

MySQL Router does not need to be installed on a database machine; instead, install it on the application server (or, failing that, on the primary server).

Install MySQL Server
https://dev.mysql.com/doc/refman/8.0/en/linux-installation-yum-repo.html
Install MySQL Shell
https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-install-linux-quick.html
 
 
 
 
Once we are done with the cluster setup, the output below should appear; then we can say our InnoDB Cluster is successfully deployed.
 MySQL JS> cluster.status()

{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "192.168.0.10:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "192.168.0.10:3306": {
                "address": "192.168.0.10:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.20"
            },
            "192.168.0.11:3306": {
                "address": "192.168.0.11:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.20"
            },
            "192.168.0.9:3306": {
                "address": "192.168.0.9:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.20"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "192.168.0.10:3306"
}
 
Let's get started (step-by-step guide)

MySQL Configuration Check

[1.]# Configure the report_host variable in the MySQL configuration of each instance to a suitable, externally reachable address:
mysql> show variables like '%report_host%';
+---------------+--------------+
| Variable_name | Value        |
+---------------+--------------+
| report_host   | 192.168.0.10 |
+---------------+--------------+
1 row in set (0.00 sec)
 
mysql>
 
Make sure the result is not EMPTY. Otherwise, open the my.cnf file, add report_host='<ip addr>', save, and restart mysqld.

Do this on all MySQL Server machines.
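For reference, a minimal my.cnf fragment (file location varies by platform; the address shown is this demo's primary and must be replaced with each machine's own reachable address):

```ini
# /etc/my.cnf -- set on EACH machine, with that machine's own address
[mysqld]
report_host = 192.168.0.10
```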
 
[2.]#Ensure that none of the MySQL instances have any executed GTIDs:
mysql> SELECT @@global.gtid_executed;
+------------------------+
| @@global.gtid_executed |
+------------------------+
|                        |
+------------------------+
1 row in set (0.00 sec)
 
If the query returns a GTID set like the one below, reset the master:
mysql> SELECT @@global.gtid_executed;
+----------------------------------------+
| @@global.gtid_executed                 |
+----------------------------------------+
| 88a2732s-2373-11e7-7ce3-080027d1eed8:2 |
+----------------------------------------+
1 row in set (0.00 sec)
 
mysql> RESET MASTER;
Query OK, 0 rows affected (0.06 sec)
 
mysql> SELECT @@global.gtid_executed;
+------------------------+
| @@global.gtid_executed |
+------------------------+
|                        |
+------------------------+
1 row in set (0.00 sec)
 
Important: The RESET MASTER statement deletes all binary logs. This example assumes there are no binary logs that must be kept and that all instances have just been set up from scratch.
 
#SQL_mode (for Demo Purposes)
set global sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';
 
[3.]# Make sure every table in the database has a primary key:
 
SELECT a.table_name
FROM INFORMATION_SCHEMA.TABLES AS a
LEFT JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS b
  ON (a.TABLE_NAME = b.TABLE_NAME
      AND b.CONSTRAINT_SCHEMA = a.TABLE_SCHEMA
      AND b.CONSTRAINT_NAME = 'PRIMARY')
WHERE a.TABLE_SCHEMA NOT IN ('information_schema','performance_schema','mysql','sys')
  AND b.CONSTRAINT_NAME IS NULL;
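The prerequisites also require InnoDB on every table. A similar sketch against the same information_schema catalog lists any offenders (run it before migrating; none of the demo tables are assumed to match):

```sql
-- List user tables that are not using the InnoDB storage engine
SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql','information_schema','performance_schema','sys')
  AND engine <> 'InnoDB';
```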
 
Let's get started.

Log in to MySQL Shell as root:
#mysqlsh root@192.168.0.10:3306
 
Creating a session to 'root@192.168.0.10:3306'
Please provide the password for 'root@192.168.0.10:3306': ********
Save password for 'root@192.168.0.10:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
.
.
.
 MySQL  192.168.0.10:3306 ssl  JS >
 
MySQL JS > dba.checkInstanceConfiguration('root@192.168.0.10:3306')
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.10:3306
 
Checking whether existing tables comply with Group Replication requirements...
WARNING: The following tables do not have a Primary Key or equivalent column:
customerDB.salesaccount, customerDB.salesdetails, customerDB.usermgmt
 
Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.
 
Checking instance configuration...
 
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
 
Some variables need to be changed, but cannot be done dynamically on the server.
NOTE: Please use the dba.configureInstance() command to repair these issues.
 
{
    "config_errors": [
        {
            "action": "server_update",
            "current": "CRC32",
            "option": "binlog_checksum",
            "required": "NONE"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "1",
            "option": "server_id",
            "required": "<unique ID>"
        }
    ],
    "status": "error"
}
 
#Our first task is to ALTER TABLE and add primary keys to the required tables.
 
mysql>
ALTER TABLE usermgmt  ADD PRIMARY KEY(userid);
ALTER TABLE salesaccount  ADD PRIMARY KEY(salesid);
ALTER TABLE salesdetails  ADD PRIMARY KEY(salesid);
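When many tables are missing keys, the ALTER statements can be generated instead of typed by hand. A hedged sketch over the same primary-key check used above; the key column placeholder must be filled in per table, since only you know which column qualifies:

```sql
-- Generate ALTER TABLE statements for tables without a PRIMARY KEY
-- (review and complete each generated statement before running it)
SELECT CONCAT('ALTER TABLE `', a.table_schema, '`.`', a.table_name,
              '` ADD PRIMARY KEY (<key column>);') AS ddl
FROM information_schema.tables AS a
LEFT JOIN information_schema.key_column_usage AS b
  ON (a.table_name = b.table_name
      AND b.constraint_schema = a.table_schema
      AND b.constraint_name = 'PRIMARY')
WHERE a.table_schema NOT IN ('information_schema','performance_schema','mysql','sys')
  AND b.constraint_name IS NULL;
```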
Let's re-check the instance:
MySQL  192.168.0.10:3306 ssl  JS > dba.checkInstanceConfiguration('root@192.168.0.10:3306')
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.10:3306
 
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
 
Checking instance configuration...
 
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
 
Some variables need to be changed, but cannot be done dynamically on the server.
NOTE: Please use the dba.configureInstance() command to repair these issues.
 
{
    "config_errors": [
        {
            "action": "server_update",
            "current": "CRC32",
            "option": "binlog_checksum",
            "required": "NONE"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "1",
            "option": "server_id",
            "required": "<unique ID>"
        }
    ],
    "status": "error"
}
 MySQL  192.168.0.10:3306 ssl  JS >
 
The output above shows there is no longer a primary-key issue, but "Some configuration options need to be fixed".
Now, next...
 
 
 MySQL  192.168.0.10:3306 ssl  JS > dba.configureInstance('root@192.168.0.10:3306')
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.10:3306
 
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
 
Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Configuring instance...
The instance '192.168.0.10:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at 192.168.0.10:3306 was restarted.
 
Repeat the same process on Machine 02 and Machine 03.

On Machine 02:
MySQL  192.168.0.10:3306 ssl  JS > dba.checkInstanceConfiguration('root@192.168.0.11:3306')
Please provide the password for 'root@192.168.0.11:3306': ********
Save password for 'root@192.168.0.11:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating MySQL instance at 192.168.0.11:3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.11:3306
 
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
 
Checking instance configuration...
 
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
 
Some variables need to be changed, but cannot be done dynamically on the server.
NOTE: Please use the dba.configureInstance() command to repair these issues.
 
{
    "config_errors": [
        {
            "action": "server_update",
            "current": "CRC32",
            "option": "binlog_checksum",
            "required": "NONE"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "1",
            "option": "server_id",
            "required": "<unique ID>"
        }
    ],
    "status": "error"
}
 MySQL  192.168.0.10:3306 ssl  JS >
After running dba.configureInstance('root@192.168.0.11:3306') and letting the server restart, re-check:

MySQL  192.168.0.10:3306 ssl  JS > dba.checkInstanceConfiguration('root@192.168.0.11:3306')
Validating MySQL instance at 192.168.0.11:3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.11:3306
 
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
 
Checking instance configuration...
Instance configuration is compatible with InnoDB cluster
 
The instance '192.168.0.11:3306' is valid to be used in an InnoDB cluster.
 
{
    "status": "ok"
}
 MySQL  192.168.0.10:3306 ssl  JS >
On Machine 03
 
MySQL  192.168.0.10:3306 ssl  JS > dba.checkInstanceConfiguration('root@192.168.0.9:3306')
Please provide the password for 'root@192.168.0.9:3306': ********
Save password for 'root@192.168.0.9:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating MySQL instance at 192.168.0.9:3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.9:3306
 
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
 
Checking instance configuration...
 
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
 
Some variables need to be changed, but cannot be done dynamically on the server.
NOTE: Please use the dba.configureInstance() command to repair these issues.
 
{
    "config_errors": [
        {
            "action": "server_update",
            "current": "CRC32",
            "option": "binlog_checksum",
            "required": "NONE"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "server_update+restart",
            "current": "1",
            "option": "server_id",
            "required": "<unique ID>"
        }
    ],
    "status": "error"
}
 
 
 MySQL  192.168.0.10:3306 ssl  JS > dba.configureInstance('root@192.168.0.9:3306')
 
Configuring MySQL instance at 192.168.0.9:3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.9:3306
 
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
 
Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Configuring instance...
The instance '192.168.0.9:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at 192.168.0.9:3306 was restarted.
 MySQL  192.168.0.10:3306 ssl  JS >
 
 
MySQL  192.168.0.10:3306 ssl  JS > dba.checkInstanceConfiguration('root@192.168.0.9:3306')
Validating MySQL instance at 192.168.0.9:3306 for use in an InnoDB cluster...
 
This instance reports its own address as 192.168.0.9:3306
 
Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected
 
Checking instance configuration...
Instance configuration is compatible with InnoDB cluster
 
The instance '192.168.0.9:3306' is valid to be used in an InnoDB cluster.
 
{
    "status": "ok"
}
 MySQL  192.168.0.10:3306 ssl  JS >
 
 
#Let’s Create Cluster
#Connect to Primary Node
##Uninstall validate_password
mysql>UNINSTALL COMPONENT 'file://component_validate_password';
MySQL JS> var clusterr= dba.createCluster("myCluster")
A new InnoDB cluster will be created on instance '192.168.0.10:3306'.
 
Validating instance configuration at 192.168.0.10:3306...
 
This instance reports its own address as 192.168.0.10:3306
 
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '192.168.0.10:33061'. Use the localAddress option to override.
 
Creating InnoDB cluster 'myCluster' on '192.168.0.10:3306'...
 
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
 
 MySQL  192.168.0.10:3306 ssl  JS >
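The "at least 3 instances" note above comes from Group Replication's majority-quorum rule: a group of N members stays available only while a majority of them are up, so it tolerates floor((N-1)/2) member failures. A quick shell sketch of that arithmetic:

```shell
# Majority quorum: N members tolerate floor((N-1)/2) member failures
for n in 1 3 5 7; do
  echo "$n member(s) -> tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

This is why the demo targets exactly three nodes: it is the smallest group that survives the loss of one server.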
 
MySQL  192.168.0.10:3306 ssl  JS > clusterr.setupAdminAccount("ICAdmin")
 
# Re-connect MySQL Shell with the new user "ICAdmin" and do the cluster management/setup
 
 
MySQL  JS > \c ICAdmin@192.168.0.10:3306
 
 
MySQL  192.168.0.10:3306 ssl  JS > var clusterr = dba.getCluster()
 
MySQL  192.168.0.10:3306 ssl  JS > clusterr.addInstance("root@192.168.0.9:3306");
NOTE: A GTID set check of the MySQL instance at '192.168.0.9:3306' determined that it is missing transactions that were purged from all cluster members.
 

* Waiting for clone to finish...
NOTE: 192.168.0.9:3306 is being cloned from 192.168.0.10:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
** Stage RECOVERY: \
NOTE: 192.168.0.9:3306 is shutting down...
 
* Waiting for server restart... ready
* 192.168.0.9:3306 has restarted, waiting for clone to finish...
* Clone process has finished: 2.69 GB transferred in 1 min 2 sec (43.35 MB/s)
 
State recovery already finished for '192.168.0.9:3306'
 
The instance '192.168.0.9:3306' was successfully added to the cluster.
 
 MySQL  192.168.0.10:3306 ssl  JS >
 
#Add another instance
Make sure report_host is not empty when you execute this command:
show variables like '%report_host%';

MySQL  192.168.0.10:3306 ssl  JS > clusterr.addInstance("root@192.168.0.11:3306");
NOTE: A GTID set check of the MySQL instance at '192.168.0.11:3306' determined that it is missing transactions that were purged from all cluster members.
 
Please select a recovery method [C]lone/[A]bort (default Abort): C
NOTE: Group Replication will communicate with other members using '192.168.0.11:33061'. Use the localAddress option to override.
 
Validating instance configuration at 192.168.0.11:3306...
  
* Waiting for clone to finish...
NOTE: 192.168.0.11:3306 is being cloned from 192.168.0.10:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
 
NOTE: 192.168.0.11:3306 is shutting down...
 
* Waiting for server restart... ready
* 192.168.0.11:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 2.69 GB transferred in 23 sec (116.86 MB/s)
 
State recovery already finished for '192.168.0.11:3306'
The instance '192.168.0.11:3306' was successfully added to the cluster.
 
 
 MySQL  192.168.0.10:3306 ssl  JS > clusterr.status()
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "192.168.0.10:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "192.168.0.10:3306": {
                "address": "192.168.0.10:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.20"
            },
            "192.168.0.11:3306": {
                "address": "192.168.0.11:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.20"
            },
            "192.168.0.9:3306": {
                "address": "192.168.0.9:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.20"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "192.168.0.10:3306"
}
 MySQL  192.168.0.10:3306 ssl  JS >
 
 
Congratulations!!! Our cluster is finally ready.
 
 
###In case if you want to Install  validate_password.
INSTALL COMPONENT 'file://component_validate_password';
  
Create a dedicated account for MySQL Router
 MySQL  192.168.0.10:3306 ssl  JS > clusterr.setupRouterAccount("RouterAdmin");
Missing the password for new account RouterAdmin@%. Please provide one.
Password for new account: ********
Confirm password: ********
Creating user RouterAdmin@%.
Setting user password.
Account RouterAdmin@% was successfully created.
 MySQL  192.168.0.10:3306 ssl  JS >
// Exit mysqlsh and bootstrap the Router using the newly created Router admin account
sudo mkdir -p /opt/mysql/router
#chown -R mysql:mysql /opt/mysql/router
[root@192.168.0.10  ~]# mysqlrouter --force --bootstrap ICAdmin@192.168.0.10:3306 --directory=/opt/mysql/router --user=root
Please enter MySQL password for ICAdmin:
# Bootstrapping MySQL Router instance at '/opt/mysql/router'...
- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /opt/mysql/router/mysqlrouter.conf
# MySQL Router configured for the InnoDB Cluster 'myCluster'
After this MySQL Router has been started with the generated configuration
    $ mysqlrouter -c /opt/mysql/router/mysqlrouter.conf
the cluster 'myCluster' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447
## MySQL X protocol
- Read/Write Connections: localhost:64460
- Read/Only Connections:  localhost:64470
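The bootstrap writes one routing section per listed port into /opt/mysql/router/mysqlrouter.conf. As an illustration (your generated file may differ), the classic-protocol read/write section looks roughly like this:

```
[routing:myCluster_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://myCluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic
```

The metadata-cache:// destination is what lets the Router follow the cluster topology instead of a static host list: it reroutes to whichever member is currently PRIMARY.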
//Start Router
/opt/mysql/router/start.sh
[root@192.168.0.10  ~]# mysql -uroot -pMySQL8.0 -P6446 -h192.168.0.10 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+-------------+
| @@hostname  |
+-------------+
| Workshop-10 |
+-------------+
[root@192.168.0.10  ~]# service mysqld stop
Redirecting to /bin/systemctl stop mysqld.service
[root@192.168.0.10  ~]# mysql -uroot -pMySQL8.0 -P6446 -h192.168.0.10 -e "select @@hostname"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------+
| @@hostname |
+------------+
| Workshop-9 |
+------------+
[root@192.168.0.10  ~]#
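Notice that the same Router port (6446) now answers from Workshop-9: the failed primary was replaced and the Router transparently routes new connections to the promoted member. Existing connections are dropped at failover, so the application only needs to reconnect. A minimal, hedged sketch of such a reconnect loop (connect_fn is a stand-in for your driver's connect call, e.g. mysql.connector.connect(host=..., port=6446), not a real MySQL API):

```python
import time

def connect_with_retry(connect_fn, attempts=3, delay=0.0):
    """Call connect_fn until it succeeds or the attempts run out.

    connect_fn is a placeholder for a driver connect call pointed at the
    Router's read/write port; during a failover the first attempts may
    fail while the Router switches to the new primary.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except ConnectionError as err:   # driver-specific error in practice
            last_err = err
            time.sleep(delay)            # brief pause before retrying
    raise last_err
```

Because the Router always points at the current primary, the application never needs to know which physical host is in that role.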
//Using InnoDB Cluster
mysqlsh
MySQL  JS > shell.connect("root@192.168.0.10:6446");
\sql
MySQL  192.168.0.10:6446 ssl  SQL > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 52e4d23f-ae41-11ea-b284-080027764fed | 192.168.0.9  |        3306 | ONLINE       | PRIMARY     | 8.0.20         |
| group_replication_applier | aca28337-ae41-11ea-8970-0800275d8765 | 192.168.0.11 |        3306 | ONLINE       | SECONDARY   | 8.0.20         |
| group_replication_applier | ccffb1c5-ae40-11ea-87b5-080027528934 | 192.168.0.10 |        3306 | RECOVERING   | SECONDARY   | 8.0.20         |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
3 rows in set (0.0008 sec)
CREATE DATABASE sales;
USE sales;
CREATE TABLE IF NOT EXISTS sales.employee (empid INT PRIMARY KEY, empname VARCHAR(100), salary INT, deptid INT);
INSERT INTO sales.employee VALUES (100,'Ram',1000,10);
INSERT INTO sales.employee VALUES (200,'Raja',2000,10);
INSERT INTO sales.employee VALUES (300,'Sita',3000,20);
SELECT * FROM sales.employee;

Conclusion:

MySQL InnoDB Cluster provides a complete high availability solution for MySQL. MySQL Shell includes AdminAPI, which lets you easily configure and administer a group of at least three MySQL server instances to function as an InnoDB Cluster. Each server instance runs MySQL Group Replication, which provides the mechanism to replicate data within the cluster, with built-in failover.

In the event of an unexpected failure of a server instance, the cluster reconfigures itself automatically. In the default single-primary mode, an InnoDB Cluster has a single read-write instance, the primary; the remaining instances are read-only secondaries that replicate from it. If the primary fails, a secondary is automatically promoted to the primary role.

MySQL Router detects this and redirects client applications to the new primary. Advanced users can also configure a cluster to run in multi-primary mode.


===================Thank You for Using MySQL!!!==================
Feel free to share your feedback & recommendations.