Pacemaker

From jagfloriano.com
Revision as of 15:59, 2 January 2026 by Escleiron (talk | contribs) (Escleiron moved page Cluster - Pacemaker to Pacemaker)

Creating a Pacemaker cluster

Introduction

In this lab, a Pacemaker cluster is configured using shared storage over iSCSI. A minimum of three machines is used:

  • Two nodes that will form the Pacemaker cluster
  • One additional machine acting as the SAN storage server

The goal is to simulate the shared disks of a storage array, as found in a real infrastructure.

---

Lab topology

Machines

  • SAN server
    • Icecube — 192.168.0.80
  • Pacemaker nodes
    • Nodo1 — 192.168.0.81
    • Nodo2 — 192.168.0.82

---

SAN storage configuration

To simulate a storage array, a disk is shared over iSCSI with both nodes of the Pacemaker cluster.

---

Installing the required packages

On the SAN server:

yum --enablerepo=epel install -y \
scsi-target-utils \
targetcli \
policycoreutils-python
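
A preparation step is implied by the targetcli session below: the fileio backstore will create /var/lib/iscsi_disks/disk01.img, but the parent directory must exist first. A minimal sketch:

```shell
# Create the directory that will hold the fileio backing image
# (targetcli does not create it for you).
mkdir -p /var/lib/iscsi_disks
ls -ld /var/lib/iscsi_disks
```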

---

iSCSI target configuration (targetcli)

Open the configuration console:

[root@icecube ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.


Create the backstore and the iSCSI target:

/> cd backstores/fileio
/backstores/fileio> create disk01 /var/lib/iscsi_disks/disk01.img 8G
Created fileio disk01 with size 8589934592

/backstores/fileio> cd /iscsi
/iscsi> create iqn.2027-04.icecube:storage.target00
Created target iqn.2027-04.icecube:storage.target00.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
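
As a quick sanity check, the byte count targetcli reports for the 8G backstore is exactly 8 GiB:

```shell
# 8 GiB expressed in bytes matches targetcli's "size 8589934592".
echo $((8 * 1024 * 1024 * 1024))
```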

Create the LUN and attach it to the target:

/iscsi> cd iqn.2027-04.icecube:storage.target00/tpg1/luns
/iscsi/iqn.20...t00/tpg1/luns> create /backstores/fileio/disk01
Created LUN 0.

Configure the ACLs and CHAP authentication:

/iscsi/iqn.20...t00/tpg1/luns> cd ../acls

/iscsi/iqn.20...t00/tpg1/acls> create iqn.2027-04.icecube:www.icecube
Created Node ACL for iqn.2027-04.icecube:www.icecube
Created mapped LUN 0.

/iscsi/iqn.20...t00/tpg1/acls> cd iqn.2027-04.icecube:www.icecube

/iscsi/iqn.20...e:www.icecube> set auth userid=bonzo
Parameter userid is now 'bonzo'.
/iscsi/iqn.20...e:www.icecube> set auth password=PASSWORD2020
Parameter password is now 'PASSWORD2020'.

Exit and save the configuration:

/iscsi/iqn.20...e:www.icecube> exit
Global pref auto_save_on_exit=true
Configuration saved to /etc/target/saveconfig.json
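
The interactive session above can also be scripted using targetcli's one-shot command syntax, which is convenient for repeatable lab setups. This is a hedged sketch (same names and values as the session, not part of the original lab), and it requires the LIO target stack to be available:

```shell
# Non-interactive equivalent of the targetcli session above.
targetcli /backstores/fileio create disk01 /var/lib/iscsi_disks/disk01.img 8G
targetcli /iscsi create iqn.2027-04.icecube:storage.target00
targetcli /iscsi/iqn.2027-04.icecube:storage.target00/tpg1/luns create /backstores/fileio/disk01
targetcli /iscsi/iqn.2027-04.icecube:storage.target00/tpg1/acls create iqn.2027-04.icecube:www.icecube
targetcli /iscsi/iqn.2027-04.icecube:storage.target00/tpg1/acls/iqn.2027-04.icecube:www.icecube set auth userid=bonzo password=PASSWORD2020
# Persist the configuration (same as auto_save_on_exit).
targetcli saveconfig
```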


---

Checking the iSCSI service

Verify that the iSCSI port is listening, and open it in the firewall:

[root@icecube ~]# ss -napt | grep 3260
LISTEN     0      256          *:3260                     *:*
[root@icecube ~]#
[root@icecube ~]# firewall-cmd --add-service=iscsi-target --permanent
success
[root@icecube ~]# firewall-cmd --reload
success


---

Alternative target configuration (tgt)

Adjust the SELinux context of the backing-store directory:

[root@icecube ~]# semanage fcontext -a -t tgtd_var_lib_t "/var/lib/iscsi_disks(/.*)?"
[root@icecube ~]# restorecon -R -v /var/lib/iscsi_disks

Configure /etc/tgt/targets.conf:

[root@icecube ~]# cat  /etc/tgt/targets.conf |grep -v ^#
default-driver iscsi

<target iqn.2027-04.icecube:storage.target00>
    backing-store /var/lib/iscsi_disks/disk01.img
    initiator-address 192.168.0.81
    initiator-address 192.168.0.82
    incominguser bonzo PASSWORD2020
</target>
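
The same file can be generated non-interactively, which is handy when rebuilding the lab. A sketch (the CONF variable is mine; it defaults to the real path):

```shell
# Write the tgt target definition shown above in one step.
CONF=${CONF:-/etc/tgt/targets.conf}
mkdir -p "$(dirname "$CONF")"
cat > "$CONF" <<'EOF'
default-driver iscsi

<target iqn.2027-04.icecube:storage.target00>
    backing-store /var/lib/iscsi_disks/disk01.img
    initiator-address 192.168.0.81
    initiator-address 192.168.0.82
    incominguser bonzo PASSWORD2020
</target>
EOF
grep backing-store "$CONF"
```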

Start and enable the service:

[root@icecube ~]#  systemctl start tgtd
[root@icecube ~]# systemctl enable tgtd
Created symlink from /etc/systemd/system/multi-user.target.wants/tgtd.service to /usr/lib/systemd/system/tgtd.service.
[root@icecube ~]#

Verify the target:

[root@icecube ~]# tgtadm --mode target --op show
Target 1: iqn.2027-04.icecube:storage.target00
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /var/lib/iscsi_disks/disk01.img
            Backing store flags:
    Account information:
        bonzo
    ACL information:
        192.168.0.81
        192.168.0.82

---

iSCSI initiator configuration on the nodes

The following steps must be performed on every node of the cluster.

---

Installing the packages

yum --enablerepo=epel install -y iscsi-initiator-utils

---

Configuring the initiator

Set the initiator IQN in /etc/iscsi/initiatorname.iscsi so that it matches the ACL created on the target:

[root@nodo1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2027-04.icecube:www.icecube

Edit the CHAP settings in the configuration file:

[root@nodo1 ~]# vim  /etc/iscsi/iscsid.conf

Verify the authentication parameters:

[root@nodo1 ~]# grep -i node.session.auth /etc/iscsi/iscsid.conf|grep -v ^#
node.session.auth.authmethod = CHAP
node.session.auth.username = bonzo
node.session.auth.password = PASSWORD2020
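
These three parameters can also be set without opening an editor. The loop and sed pattern below are my sketch (values are the ones used in this lab); each pass removes any existing, possibly commented-out, definition of the key and appends the desired one:

```shell
# Non-interactive CHAP configuration for the open-iscsi initiator.
FILE=${FILE:-/etc/iscsi/iscsid.conf}
mkdir -p "$(dirname "$FILE")" && touch "$FILE"
for kv in \
    'node.session.auth.authmethod = CHAP' \
    'node.session.auth.username = bonzo' \
    'node.session.auth.password = PASSWORD2020'
do
    key=${kv%% = *}
    # Delete any existing (commented or not) line for this key, then append ours.
    sed -i "\|^[#[:space:]]*${key} *=|d" "$FILE"
    echo "$kv" >> "$FILE"
done
grep '^node.session.auth' "$FILE"
```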

---

Discovering and connecting to the target

Discover the available targets:

[root@nodo1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.0.80
192.168.0.80:3260,1 iqn.2027-04.icecube:storage.target00

[root@nodo1 ~]# iscsiadm -m node -o show
# BEGIN RECORD 6.2.0.874-19
node.name = iqn.2027-04.icecube:storage.target00
node.tpgt = 1
node.startup = automatic
node.leading_login = No
iface.hwaddress = <empty>
iface.ipaddress = <empty>
iface.iscsi_ifacename = default
iface.net_ifacename = <empty>
iface.gateway = <empty>
iface.subnet_mask = <empty>
iface.transport_name = tcp
iface.initiatorname = <empty>
iface.state = <empty>
iface.vlan_id = 0
iface.vlan_priority = 0
iface.vlan_state = <empty>
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
iface.bootproto = <empty>
iface.dhcp_alt_client_id_state = <empty>
iface.dhcp_alt_client_id = <empty>
iface.dhcp_dns = <empty>
iface.dhcp_learn_iqn = <empty>
iface.dhcp_req_vendor_id_state = <empty>
iface.dhcp_vendor_id_state = <empty>
iface.dhcp_vendor_id = <empty>
iface.dhcp_slp_da = <empty>
iface.fragmentation = <empty>
iface.gratuitous_arp = <empty>
iface.incoming_forwarding = <empty>
iface.tos_state = <empty>
iface.tos = 0
iface.ttl = 0
iface.delayed_ack = <empty>
iface.tcp_nagle = <empty>
iface.tcp_wsf_state = <empty>
iface.tcp_wsf = 0
iface.tcp_timer_scale = 0
iface.tcp_timestamp = <empty>
iface.redirect = <empty>
iface.def_task_mgmt_timeout = 0
iface.header_digest = <empty>
iface.data_digest = <empty>
iface.immediate_data = <empty>
iface.initial_r2t = <empty>
iface.data_seq_inorder = <empty>
iface.data_pdu_inorder = <empty>
iface.erl = 0
iface.max_receive_data_len = 0
iface.first_burst_len = 0
iface.max_outstanding_r2t = 0
iface.max_burst_len = 0
iface.chap_auth = <empty>
iface.bidi_chap = <empty>
iface.strict_login_compliance = <empty>
iface.discovery_auth = <empty>
iface.discovery_logout = <empty>
node.discovery_address = 192.168.0.80
node.discovery_port = 3260
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.nr_sessions = 1
node.session.auth.authmethod = CHAP
node.session.auth.username = bonzo
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
node.session.auth.chap_algs = MD5
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.session.scan = auto
node.conn[0].address = 192.168.0.80
node.conn[0].port = 3260
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD

Log in to the target:

[root@nodo1 ~]#  iscsiadm -m node --login

Re-run discovery to confirm the target is still visible:

[root@nodo1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.0.80
192.168.0.80:3260,1 iqn.2027-04.icecube:storage.target00

---

Verifying the shared disk

Check the active session and that the new disk appears on the system:

[root@nodo1 ~]# iscsiadm -m session -o show
tcp: [1] 192.168.0.80:3260,1 iqn.2027-04.icecube:storage.target00 (non-flash)

[root@nodo1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0 10.5G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  9.5G  0 part
  ├─centos-root 253:0    0  8.4G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk
sr0              11:0    1 1024M  0 rom

Initialize the disk as an LVM physical volume:

[root@nodo1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

---

Final notes

  • The iSCSI disk is now available to be used as a shared resource in Pacemaker
  • All nodes must see the same device
  • From this point on, the cluster configuration itself can begin
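
A quick way to confirm that all nodes really see the same LUN is to compare its stable identifiers on each node; they must match. A sketch (device name sdb taken from the lsblk output above; requires the iSCSI session to be logged in):

```shell
# Run on every node and compare: the wwn-/scsi- symlinks and the serial
# must be identical across nodes for the same shared LUN.
ls -l /dev/disk/by-id/ | grep sdb
udevadm info --query=property --name=/dev/sdb | grep ID_SERIAL
```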