Friday, July 22, 2016

CentOS-6: Active/Passive High Availability with Corosync-Pacemaker + DRBD (III)

Cluster Configuration





    Notes


  • We disable the firewall and SELinux on both nodes.
  • The cluster administration user will be hacluster.
  • We will disable quorum; it makes no sense with only two nodes.
  • The chosen name for the cluster is BP.
  • We will give node-1 preference to act as master.
  • We will create the following resources for DRBD:
    • For the data: BP_data.
    • For the synchronization: BP_data_sync.
    • For the filesystem: BP_fs.


Install the packages on both nodes:

yum install corosync pcs pacemaker cman



Set the password for the hacluster user:

[node-1]# passwd hacluster
[node-2]# passwd hacluster



Configure the services:

[node-1]# service pcsd start
[node-2]# service pcsd start


[node-1]# chkconfig pcsd on
[node-1]# chkconfig pacemaker on

[node-2]# chkconfig pcsd on
[node-2]# chkconfig pacemaker on



Authenticate the nodes:

[node-1]# pcs cluster auth node-1 node-2

Username: hacluster
Password: 
node-1: Authorized
node-2: Authorized



Create the cluster:

[node-1]# mkdir /etc/cluster
[node-2]# mkdir /etc/cluster

[node-1]# pcs cluster setup --name BP node-1 node-2

node-1: Updated cluster.conf...
node-2: Updated cluster.conf...

Synchronizing pcsd certificates on nodes node-1, node-2...
node-1: Success
node-2: Success

Restarting pcsd on the nodes in order to reload the certificates...
node-1: Success
node-2: Success



Start the cluster:

[node-1]# pcs cluster start --all
node-1: Starting Cluster...
node-2: Starting Cluster...



Check the cluster status:

[node-1]# pcs status cluster

Cluster Status:
 Last updated: Tue Mar  1 04:58:26 2016
 Last change: Tue Mar  1 04:57:52 2016
 Stack: cman
 Current DC: node-2 - partition with quorum
 Version: 1.1.11-97629de
 2 Nodes configured
 0 Resources configured

PCSD Status:
  node-1: Online
  node-2: Online



[node-2]# pcs status cluster

Cluster Status:
 Last updated: Tue Mar  1 05:00:33 2016
 Last change: Tue Mar  1 04:57:52 2016
 Stack: cman
 Current DC: node-2 - partition with quorum
 Version: 1.1.11-97629de
 2 Nodes configured
 0 Resources configured

PCSD Status:
  node-1: Online
  node-2: Online




Check the status of the nodes:

[node-1]# pcs status nodes

Pacemaker Nodes:
 Online: node-1 node-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:


[node-2]# pcs status nodes

Pacemaker Nodes:
 Online: node-1 node-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:


[node-1]# pcs status corosync

Nodeid     Name
   1   node-1
   2   node-2


[node-2]# pcs status corosync

Nodeid     Name
   1   node-1
   2   node-2




Check the overall status:

[node-1]# pcs status

Cluster name: BP
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Thu Jul  7 13:16:43 2016  Last change: Thu Jul  7 13:14:47 2016 by root via crmd on node-2
Stack: cman
Current DC: node-2 (version 1.1.14-8.el6-70404b0) - partition with quorum
2 nodes and 0 resources configured

Online: [ node-1 node-2 ]

Full list of resources:


PCSD Status:
  node-1: Online
  node-2: Online



Disable fencing (STONITH):

[node-1]# pcs property set stonith-enabled=false



Disable quorum:

[node-1]# pcs property set no-quorum-policy=ignore



Check the cluster properties:

[node-1]# pcs property

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.14-8.el6-70404b0
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false



Add the virtual IP:

[node-1]# pcs resource create IpVirt ocf:heartbeat:IPaddr2 ip=192.168.0.4 cidr_netmask=32 op monitor interval=30s



Add the MySQL and Tomcat services:

[node-1]# pcs resource create MySQL ocf:heartbeat:mysql op monitor interval="20" timeout="60"

[node-1]# pcs resource create Tomcat lsb:tomcat



We colocate Tomcat with MySQL so that the services always run together:

[node-1]# pcs constraint colocation add Tomcat MySQL INFINITY



We force the virtual IP to start first, followed by the services:

[node-1]# pcs constraint order IpVirt then Tomcat  

Adding IpVirt Tomcat (kind: Mandatory) (Options: first-action=start then-action=start)

[node-1]# pcs constraint order Tomcat then MySQL

Adding Tomcat MySQL (kind: Mandatory) (Options: first-action=start then-action=start)



Give preference to the node that will act as master:

[node-1]# pcs constraint location MySQL prefers node-1=50
[node-1]# pcs constraint location Tomcat prefers node-1=50



Check the constraints:

[node-1]# pcs constraint

Location Constraints:
  Resource: MySQL
    Enabled on: node-1 (score:50)
  Resource: Tomcat
    Enabled on: node-1 (score:50)
Ordering Constraints:
  start IpVirt then start Tomcat (kind:Mandatory)
  start Tomcat then start MySQL (kind:Mandatory)
Colocation Constraints:
  Tomcat with MySQL (score:INFINITY)



DRBD Management



Create a new CIB (Cluster Information Base):

[node-1]# pcs cluster cib add_drbd
[node-1]# ls -al add_drbd

-rw-r--r-- 1 root root 10665  7 jul 13:28 add_drbd

[node-1]# pcs -f add_drbd resource create BP_data ocf:linbit:drbd drbd_resource=data op monitor interval=60
[node-1]# pcs -f add_drbd resource master BP_data_sync BP_data master-max=1 master-node-max=1 clone-max=2\
 clone-node-max=1 notify=true
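
The drbd_resource=data parameter refers to a DRBD resource named "data" that was configured in an earlier part of this series. For reference only, a minimal sketch of what that definition might look like; the backing disk and node IP addresses below are placeholders, not the actual values used:

```
# /etc/drbd.d/data.res -- illustrative sketch, not the real file
resource data {
  device    /dev/drbd1;       # matches the device used by BP_fs below
  disk      /dev/sdb1;        # backing block device (placeholder)
  meta-disk internal;
  on node-1 {
    address 192.168.0.2:7789; # placeholder IP
  }
  on node-2 {
    address 192.168.0.3:7789; # placeholder IP
  }
}
```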



Review the CIB configuration:

[node-1]# pcs -f add_drbd resource show

 IpVirt (ocf::heartbeat:IPaddr2): Started node-1
 MySQL (ocf::heartbeat:mysql): Started node-1
 Tomcat (lsb:tomcat): Started node-1
 Master/Slave Set: BP_data_sync [BP_data]
     Stopped: [ node-1 node-2 ]



Push the CIB configuration to the live cluster:

[node-1]# pcs cluster cib-push add_drbd

CIB updated

[node-1]# pcs cluster cib add_fs
[node-1]# pcs -f add_fs resource create BP_fs Filesystem device="/dev/drbd1" directory="/data" fstype="ext3"



The filesystem must be available on the master:

[node-1]# pcs -f add_fs constraint colocation add BP_fs BP_data_sync INFINITY with-rsc-role=Master



DRBD must be promoted first so that the filesystem is available:

[node-1]# pcs -f add_fs constraint order promote BP_data_sync then start BP_fs
Adding BP_data_sync BP_fs (kind: Mandatory) (Options: first-action=promote then-action=start)



The filesystem must be mounted before the services start:

[node-1]# pcs -f add_fs constraint order BP_fs then MySQL
Adding BP_fs MySQL (kind: Mandatory) (Options: first-action=start then-action=start)

[node-1]# pcs -f add_fs constraint order BP_fs then Tomcat
Adding BP_fs Tomcat (kind: Mandatory) (Options: first-action=start then-action=start)




Force all the resources to work together:

[node-1]# pcs -f add_fs constraint colocation add BP_fs Tomcat IpVirt MySQL INFINITY with-rsc-role=Master 



Apply the changes:

[node-1]# pcs cluster cib-push add_fs
CIB updated
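
After pushing the configuration, the replication state can be checked from either node. On CentOS 6 with DRBD 8.x the kernel module exposes it through /proc/drbd (a quick sanity check, not part of the original walkthrough; exact output varies with the DRBD version):

```
[node-1]# cat /proc/drbd    # the node running the resources should show ro:Primary/Secondary
[node-1]# pcs status        # BP_data_sync should report node-1 as Master
```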



Check the constraints:

[node-1]# pcs constraint

Location Constraints:
  Resource: MySQL
    Enabled on: node-1 (score:50)
  Resource: Tomcat
    Enabled on: node-1 (score:50)
Ordering Constraints:
  start IpVirt then start Tomcat (kind:Mandatory)
  start Tomcat then start MySQL (kind:Mandatory)
  promote BP_data_sync then start BP_fs (kind:Mandatory)
  start BP_fs then start MySQL (kind:Mandatory)
  start BP_fs then start Tomcat (kind:Mandatory)
Colocation Constraints:
  Tomcat with MySQL (score:INFINITY)
  BP_fs with BP_data_sync (score:INFINITY) (with-rsc-role:Master)
  BP_fs with MySQL (score:INFINITY) (rsc-role:Started) (with-rsc-role:Master)
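
With all the constraints in place, a simple way to exercise the failover (illustrative commands, assuming a live two-node cluster) is to put the master node in standby and watch the resources move:

```
[node-1]# pcs cluster standby node-1     # take node-1 out of service
[node-1]# pcs status                     # IpVirt, MySQL, Tomcat and BP_fs should move to node-2
[node-1]# pcs cluster unstandby node-1   # bring node-1 back into the cluster
```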



