== Creating a Pacemaker cluster ==


=== Introduction ===
This lab builds a Pacemaker cluster that uses shared storage served over iSCSI. It requires a minimum of three machines:

* Two nodes that will form the Pacemaker cluster
* One additional machine acting as the SAN storage server


The goal is to simulate the use of shared disks from a storage array, as found in a real infrastructure.


=== Lab topology ===


* '''SAN server'''
** Icecube — <code>192.168.1.80</code>
* '''Pacemaker nodes'''
** Nodo1 — <code>192.168.1.81</code>
** Nodo2 — <code>192.168.1.82</code>
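
The cluster commands used later (for example <code>pcs host auth nodo1 nodo2</code>) reference the machines by hostname. A minimal sketch, assuming no DNS is available and that static <code>/etc/hosts</code> entries are acceptable (the addresses are the lab IPs above):

<syntaxhighlight lang="bash">
# Hypothetical /etc/hosts entries for the lab; skip this if DNS already resolves the node names.
# Run on every machine (icecube, nodo1, nodo2).
cat >> /etc/hosts <<'EOF'
192.168.1.80  icecube
192.168.1.81  nodo1
192.168.1.82  nodo2
EOF
</syntaxhighlight>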


== SAN storage configuration ==
Before proceeding with the cluster deployment, the shared storage must already be configured. That process is covered in the dedicated [[iSCSI|iSCSI]] section.


==== Verifying the shared disk ====

Check that the new disk is visible on both nodes:

:Nodo1:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0 10.5G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  9.5G  0 part
  ├─centos-root 253:0    0  8.4G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk
sr0              11:0    1 1024M  0 rom
</syntaxhighlight>
:Nodo2:
<syntaxhighlight lang="bash">
[root@nodo2 ~]#  iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
</syntaxhighlight>
<syntaxhighlight lang="bash">
[root@nodo2 ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                   8:0    0   20G  0 disk
├─sda1                8:1    0    1G  0 part /boot
└─sda2                8:2    0   19G  0 part
  ├─rhel-root       253:0    0   17G  0 lvm  /
  └─rhel-swap       253:1    0    2G  0 lvm  [SWAP]
sdb                   8:16   0   40G  0 disk
sr0                  11:0    1 1024M  0 rom
</syntaxhighlight>
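
As an optional sanity check (a sketch, not part of the original procedure), the WWN/serial reported for <code>/dev/sdb</code> should match on both nodes if they are really seeing the same iSCSI LUN:

<syntaxhighlight lang="bash">
# Run on both nodes and compare; matching WWN/serial values indicate the same LUN.
lsblk --nodeps -o NAME,SIZE,WWN,SERIAL /dev/sdb
</syntaxhighlight>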

== Pacemaker ==
=== Installation ===

Enable the RHEL 9 High Availability repository on both nodes:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# subscription-manager repos \
   --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.

[root@nodo2 ~]# subscription-manager repos \
  --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.
</syntaxhighlight>

Install the required packages:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2
[root@nodo2 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2
</syntaxhighlight>

=== Cluster configuration ===

Enable and start the <code>pcsd</code> service on both nodes:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@nodo1 ~]#

[root@nodo2 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@nodo2 ~]#
</syntaxhighlight>


Set the password for the <code>hacluster</code> user on both nodes:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo1 ~]#
 
[root@nodo2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo2 ~]#
</syntaxhighlight>
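
If you prefer to set the password non-interactively (for example from a provisioning script), a sketch using <code>passwd --stdin</code>, which is available on RHEL; the password below is a placeholder:

<syntaxhighlight lang="bash">
# Hypothetical non-interactive variant; replace 'CHANGE_ME' with a real password.
echo 'CHANGE_ME' | passwd --stdin hacluster
</syntaxhighlight>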


Allow the high-availability service through the firewall on both nodes:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo1 ~]# firewall-cmd --reload
success
[root@nodo1 ~]#
 
 
[root@nodo2 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo2 ~]# firewall-cmd --reload
success
</syntaxhighlight>
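
Optionally, confirm on each node that the service is now part of the active zone (a small check, not in the original steps):

<syntaxhighlight lang="bash">
# Should list "high-availability" among the allowed services on both nodes.
firewall-cmd --list-services
</syntaxhighlight>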
Authenticate the nodes with <code>pcs</code> using the <code>hacluster</code> account:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo1 ~]#


[root@nodo2 ~]# pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo2 ~]#
</syntaxhighlight>


Create the cluster:

<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs cluster setup iscsi-cluster nodo1 nodo2
No addresses specified for host 'nodo1', using 'nodo1'
No addresses specified for host 'nodo2', using 'nodo2'
Destroying cluster on hosts: 'nodo1', 'nodo2'...
nodo1: Successfully destroyed cluster
nodo2: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'nodo1', 'nodo2'
nodo2: successful removal of the file 'pcsd settings'
nodo1: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync authkey'
nodo2: successful distribution of the file 'pacemaker authkey'
nodo1: successful distribution of the file 'corosync authkey'
nodo1: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync.conf'
nodo1: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@nodo2 ~]#
</syntaxhighlight>


Check the cluster status (the cluster services have not been started yet):

<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Connection to cluster failed: Connection refused
</syntaxhighlight>


Start the cluster on all nodes:

<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs cluster start --all
nodo1: Starting Cluster...
nodo2: Starting Cluster...
</syntaxhighlight>


Enable the cluster services to start at boot:

<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs cluster enable --all
nodo1: Cluster Enabled
nodo2: Cluster Enabled
</syntaxhighlight>

Check the status again:

<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster

WARNINGS:
No stonith devices and stonith-enabled is not false
error: Resource start-up disabled since no STONITH resources have been defined
error: Either configure some or disable STONITH with the stonith-enabled option
error: NOTE: Clusters with shared data need STONITH to ensure data integrity
warning: Node nodo1 is unclean but cannot be fenced
warning: Node nodo2 is unclean but cannot be fenced
error: CIB did not pass schema validation
Errors found during check: config not valid

Cluster Summary:
  * Stack: unknown (Pacemaker is running)
  * Current DC: NONE
  * Last updated: Sat Jan  3 00:28:04 2026 on nodo2
  * Last change:  Sat Jan  3 00:27:58 2026 by hacluster via hacluster on nodo2
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Node nodo1: UNCLEAN (offline)
  * Node nodo2: UNCLEAN (offline)

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
</syntaxhighlight>


The lab has no fence devices, so STONITH is disabled here (not acceptable for production; see the sketch further down):

<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs property set stonith-enabled=false
</syntaxhighlight>
<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 00:34:35 2026 on nodo2
  * Last change:  Sat Jan  3 00:34:28 2026 by root via root on nodo2
  * 2 nodes configured
  * 0 resource instances configured


Node List:
  * Online: [ nodo1 nodo2 ]


Full List of Resources:
  * No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo2 ~]#
</syntaxhighlight>


=== LVM configuration ===
==== Edit /etc/lvm/lvm.conf ====


Apply the same <code>/etc/lvm/lvm.conf</code> configuration on every cluster node, then run <code>dracut -f</code> and <code>reboot</code>.


<syntaxhighlight lang="ini">
[root@nodo1 ~]# grep -vE '^\s*#|^\s*$' /etc/lvm/lvm.conf
config {
}
devices {
}
allocation {
}
log {
}
backup {
}
shell {
}
global {
        system_id_source = "uname"
}
activation {
        auto_activation_volume_list = [ ]
}
report {
}
dmeventd {
}
</syntaxhighlight>
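
To confirm that the two relevant settings are active without grepping the whole file, <code>lvmconfig</code> (shipped with lvm2) can print them directly; a small optional check:

<syntaxhighlight lang="bash">
# Should report system_id_source="uname" and an empty auto_activation_volume_list on every node.
lvmconfig global/system_id_source activation/auto_activation_volume_list
</syntaxhighlight>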


<syntaxhighlight lang="bash">
[root@nodo1 ~]# dracut -f
</syntaxhighlight>

<syntaxhighlight lang="bash">
[root@nodo1 ~]# reboot
</syntaxhighlight>


Post-reboot checks:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# uname -n
nodo1


[root@nodo1 ~]# lvm systemid
  system ID: nodo1
</syntaxhighlight>
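
Since the same change has to be verified on every node, a small convenience loop (assuming password-less root SSH between the machines, which is not part of the original setup) can run both checks from one shell:

<syntaxhighlight lang="bash">
# Hypothetical helper: requires root SSH access to both nodes.
for n in nodo1 nodo2; do
  ssh root@"$n" 'uname -n; lvm systemid'
done
</syntaxhighlight>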


==== Create PV|VG|LV on the shared LUN ====
The VG must be created on a single cluster node.

Create the Physical Volume:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
</syntaxhighlight>
Create the Volume Group:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# vgcreate --setautoactivation n vg_shared /dev/sdb
  Volume group "vg_shared" successfully created with system ID nodo1
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# vgs -o+systemid
  VG        #PV #LV #SN Attr   VSize   VFree  System ID
  rhel        1   2   0 wz--n- <19.00g     0
  vg_shared   1   0   0 wz--n-  39.96g 39.96g nodo1
[root@nodo1 ~]#
</syntaxhighlight>


Create the Logical Volume:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvcreate -n lv_data -l 100%FREE vg_shared
  Wiping xfs signature on /dev/vg_shared/lv_data.
  Logical volume "lv_data" created.
</syntaxhighlight>
Format it as XFS:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# mkfs.xfs /dev/vg_shared/lv_data
meta-data=/dev/vg_shared/lv_data isize=512    agcount=4, agsize=2618880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@nodo1 ~]#
</syntaxhighlight>
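
Before handing the filesystem over to the cluster, it can be worth mounting it once by hand as a sanity check (optional, not part of the original procedure):

<syntaxhighlight lang="bash">
# Temporary test mount on /mnt; the cluster will later mount the LV on /srv/shared.
mount /dev/vg_shared/lv_data /mnt && df -hT /mnt && umount /mnt
</syntaxhighlight>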


=== Resource configuration ===
==== Prerequisites ====

Deactivate the VG so that the cluster can manage it, and create the directory where the filesystem will be mounted:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# vgchange -an vg_shared
  0 logical volume(s) in volume group "vg_shared" now active
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  inactive            '/dev/vg_shared/lv_data' [39.96 GiB] inherit
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# mkdir -p /srv/shared
</syntaxhighlight>


==== Create the resources ====
The creation order matters, because it determines the start order within the group:


Create the resource that activates the VG:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create vg_shared ocf:heartbeat:LVM-activate vgname=vg_shared vg_access_mode=system_id --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>
 
Create the resource that mounts the LV:
 
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create fs_shared ocf:heartbeat:Filesystem device="/dev/vg_shared/lv_data" directory="/srv/shared"  fstype="xfs" --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>
Create the resource that brings up the service IP:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create ip_shared  ocf:heartbeat:IPaddr2  ip=192.168.1.83  cidr_netmask=24  nic=enp0s3  --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>


==== Verification ====
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:02:49 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured


Node List:
  * Online: [ nodo1 nodo2 ]


Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):        Started nodo1


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
</syntaxhighlight>


==== Resource group move tests ====

Check that the resources are running on <code>nodo1</code> and that the VG is active, the LV mounted, and the IP up:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:24:14 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured


Node List:
  * Online: [ nodo1 nodo2 ]


Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):        Started nodo1


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit

[root@nodo1 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M   40G   1% /srv/shared


[root@nodo1 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.81/24 brd 192.168.1.255 scope global noprefixroute enp0s3
      valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
      valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef3:3c51/64 scope link tentative noprefixroute
      valid_lft forever preferred_lft forever
</syntaxhighlight>


Now move the resource group to <code>nodo2</code>:


<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource move SHARED
Location constraint to move resource 'SHARED' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'SHARED' has been removed
Waiting for the cluster to apply configuration changes...
resource 'SHARED' is running on node 'nodo2'
[root@nodo1 ~]#
</syntaxhighlight>


Check that the resources are now running on <code>nodo2</code> and that the VG is active, the LV mounted, and the IP up:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:25:34 2026 on nodo1
  * Last change:  Sat Jan  3 11:24:31 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured
 
Node List:
  * Online: [ nodo1 nodo2 ]
 
Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo2
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo2
    * ip_shared (ocf:heartbeat:IPaddr2):        Started nodo2


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#


</syntaxhighlight>
<syntaxhighlight lang="bash">
[root@nodo2 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit

[root@nodo2 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M   40G   1% /srv/shared

[root@nodo2 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
     link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.82/24 brd 192.168.1.255 scope global noprefixroute enp0s3
      valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
      valid_lft forever preferred_lft forever
</syntaxhighlight>


== Final notes ==
* The iSCSI disk is now available as a shared resource in Pacemaker.
* All nodes must see the same block device.
* On RHEL 8/9, <code>pcs resource move</code> performs a '''temporary''' move: the location constraint it creates is removed automatically once the move completes.
* To pin a resource to a node permanently, an explicit location constraint has to be created (see the sketch below).
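
A minimal sketch of such a permanent affinity, assuming you want the <code>SHARED</code> group to prefer <code>nodo1</code>; the score value of 100 is an arbitrary example:

<syntaxhighlight lang="bash">
# Prefer nodo1 for the SHARED group; use a score of INFINITY to pin it strictly.
pcs constraint location SHARED prefers nodo1=100
# List the configured constraints to verify:
pcs constraint
</syntaxhighlight>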




== References ==


* [[iSCSI]] – Shared storage configuration
* [[LVM]] – Logical volume management


* [https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_and_managing_high_availability_clusters/index#con_HA-lvm-shared-volumes-overview-of-high-availability RHEL 9 – Shared LVM volumes in High Availability clusters]
* [https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-active-passive-http-server-in-a-cluster-configuring-and-managing-high-availability-clusters RHEL 9 – Configuring an active/passive service in a cluster]
* [https://clusterlabs.org/pacemaker/doc/ Pacemaker Documentation]

Revisión actual - 11:02 3 ene 2026

Creación de un clúster Pacemaker

Introducción

En este laboratorio se configura un clúster Pacemaker utilizando almacenamiento compartido vía iSCSI. Para ello se emplean un mínimo de tres máquinas:

  • Dos nodos que formarán el clúster Pacemaker
  • Una máquina adicional que actuará como servidor de almacenamiento SAN

El objetivo es simular el uso de discos compartidos de una cabina de almacenamiento en una infraestructura real.

Topología del laboratorio

  • Servidor SAN
    • Icecube — 192.168.1.80
  • Nodos Pacemaker
    • Nodo1 — 192.168.1.81
    • Nodo2 — 192.168.1.82

Configuración del almacenamiento SAN

Antes de proceder con la implementación del clúster, es necesario haber configurado el almacenamiento compartido. Este proceso se detalla en la sección dedicada a iSCSI

Verificación del disco compartido

Comprobar que el nuevo disco aparece en ambos nodos:

Nodo1:
[root@nodo1 ~]# iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
[root@nodo1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0 10.5G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  9.5G  0 part
  ├─centos-root 253:0    0  8.4G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk
sr0              11:0    1 1024M  0 rom
Nodo2:
[root@nodo2 ~]#  iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
[root@nodo2 ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                   8:0    0   20G  0 disk
├─sda1                8:1    0    1G  0 part /boot
└─sda2                8:2    0   19G  0 part
  ├─rhel-root       253:0    0   17G  0 lvm  /
  └─rhel-swap       253:1    0    2G  0 lvm  [SWAP]
sdb                   8:16   0   40G  0 disk
sr0                  11:0    1 1024M  0 rom

Pacemaker

Instalación

PACEMAKER RHEL9

[root@nodo1 ~]# subscription-manager repos \
  --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.

[root@nodo2 ~]# subscription-manager repos \
  --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.
[root@nodo1 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2
[root@nodo2 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2

Configuración Cluster

[root@nodo1 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service  /usr/lib/systemd/system/pcsd.service.
[root@nodo1 ~]#

[root@nodo2 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service  /usr/lib/systemd/system/pcsd.service.
[root@nodo2 ~]#
[root@nodo1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo1 ~]#

[root@nodo2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo2 ~]#


root@nodo1 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo1 ~]# firewall-cmd --reload
success
[root@nodo1 ~]#


[root@nodo2 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo2 ~]# firewall-cmd --reload
success
[root@nodo1 ~]# pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo1 ~]#

[root@nodo2 ~]#  pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo2 ~]#
[root@nodo2 ~]# pcs cluster setup iscsi-cluster nodo1 nodo2
No addresses specified for host 'nodo1', using 'nodo1'
No addresses specified for host 'nodo2', using 'nodo2'
Destroying cluster on hosts: 'nodo1', 'nodo2'...
nodo1: Successfully destroyed cluster
nodo2: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'nodo1', 'nodo2'
nodo2: successful removal of the file 'pcsd settings'
nodo1: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync authkey'
nodo2: successful distribution of the file 'pacemaker authkey'
nodo1: successful distribution of the file 'corosync authkey'
nodo1: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync.conf'
nodo1: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@nodo2 ~]#
[root@nodo2 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Connection to cluster failed: Connection refused
[root@nodo2 ~]# pcs cluster start --all
nodo1: Starting Cluster...
nodo2: Starting Cluster...
[root@nodo2 ~]# pcs cluster enable --all
nodo1: Cluster Enabled
nodo2: Cluster Enabled
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster

WARNINGS:
No stonith devices and stonith-enabled is not false
error: Resource start-up disabled since no STONITH resources have been defined
error: Either configure some or disable STONITH with the stonith-enabled option
error: NOTE: Clusters with shared data need STONITH to ensure data integrity
warning: Node nodo1 is unclean but cannot be fenced
warning: Node nodo2 is unclean but cannot be fenced
error: CIB did not pass schema validation
Errors found during check: config not valid

Cluster Summary:
  * Stack: unknown (Pacemaker is running)
  * Current DC: NONE
  * Last updated: Sat Jan  3 00:28:04 2026 on nodo2
  * Last change:  Sat Jan  3 00:27:58 2026 by hacluster via hacluster on nodo2
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Node nodo1: UNCLEAN (offline)
  * Node nodo2: UNCLEAN (offline)

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo2 ~]# pcs property set stonith-enabled=false
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 00:34:35 2026 on nodo2
  * Last change:  Sat Jan  3 00:34:28 2026 by root via root on nodo2
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo2 ~]#

Configuración LVM

Editar /etc/lvm/lvm.conf

Establecer la misma configuración de /etc/lvm/lvm.conf en todos los nodos del clúster y ejecutar un dracut -f y reboot.

[root@nodo1 ~]# grep -vE '^\s*#|^\s*$' /etc/lvm/lvm.conf
config {
}
devices {
}
allocation {
}
log {
}
backup {
}
shell {
}
global {
        system_id_source = "uname"
}
activation {
        auto_activation_volume_list = [ ]
}
report {
}
dmeventd {
}
[root@nodo1 ~]# dracut -f
[root@nodo1 ~]# reboot

Verificaciones post reinicio:

[root@nodo1 ~]# uname -n
nodo1

[root@nodo1 ~]# lvm systemid
  system ID: nodo1

Crear PV|VG|LV con LUN compartida

La creación del VG debe ejecutarse en un solo nodo del clúster: Crear Phisycal Volume:

[root@nodo1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

Crear Volume Group:

[root@nodo1 ~]# vgcreate --setautoactivation n vg_shared /dev/sdb
  Volume group "vg_shared" successfully created with system ID nodo1
[root@nodo1 ~]# vgs -o+systemid
  VG        #PV #LV #SN Attr   VSize   VFree  System ID
  rhel        1   2   0 wz--n- <19.00g     0
  vg_shared   1   0   0 wz--n-  39.96g 39.96g nodo1
[root@nodo1 ~]#
</syntaxhighlight>
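
Optionally, the system ID protection can be verified from the other node: because <code>vg_shared</code> now belongs to nodo1, it is treated as a foreign VG on nodo2 and is hidden from normal LVM commands unless <code>--foreign</code> is requested (a quick check, run on nodo2):

<syntaxhighlight lang="bash">
# On nodo2: the shared VG does not show up by default...
vgs -o+systemid
# ...but it becomes visible when foreign VGs are explicitly requested
vgs --foreign -o+systemid
</syntaxhighlight>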

Create the Logical Volume using all the free space in the VG:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvcreate -n lv_data -l 100%FREE vg_shared
  Wiping xfs signature on /dev/vg_shared/lv_data.
  Logical volume "lv_data" created.
</syntaxhighlight>

Format it as XFS:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# mkfs.xfs /dev/vg_shared/lv_data
meta-data=/dev/vg_shared/lv_data isize=512    agcount=4, agsize=2618880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@nodo1 ~]#
</syntaxhighlight>
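
Optionally, the new filesystem can be mounted by hand once on nodo1 as a sanity check before handing it over to the cluster (a sketch using <code>/mnt</code> as a temporary mount point):

<syntaxhighlight lang="bash">
# Temporary test mount on nodo1 — unmount again before creating the cluster resources
mount /dev/vg_shared/lv_data /mnt
df -hT /mnt
touch /mnt/write-test && rm /mnt/write-test
umount /mnt
</syntaxhighlight>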

== Resource configuration ==

=== Prerequisites ===

The VG must be deactivated so that the cluster manages it from now on, and the directory where the filesystem will be mounted has to be created:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# vgchange -an vg_shared
  0 logical volume(s) in volume group "vg_shared" now active
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  inactive            '/dev/vg_shared/lv_data' [39.96 GiB] inherit
[root@nodo1 ~]# mkdir -p /srv/shared
</syntaxhighlight>
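
The mount point should also exist on nodo2 so that the Filesystem resource can start there after a failover (some versions of the resource agent create it automatically, but creating it by hand avoids surprises):

<syntaxhighlight lang="bash">
# Run once on the other node as well
[root@nodo2 ~]# mkdir -p /srv/shared
</syntaxhighlight>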

=== Create the resources ===

The creation order matters: within a group, resources start in the order they were added (and stop in reverse), so the VG must be activated before the filesystem is mounted, and the IP comes last.

Create the resource that activates the VG:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create vg_shared ocf:heartbeat:LVM-activate vgname=vg_shared vg_access_mode=system_id --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>
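
If in doubt about the parameters the agent accepts (for example <code>vg_access_mode</code>), its metadata can be consulted first:

<syntaxhighlight lang="bash">
# List the parameters and defaults of the LVM-activate resource agent
pcs resource describe ocf:heartbeat:LVM-activate
</syntaxhighlight>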

Create the resource that mounts the LV:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create fs_shared ocf:heartbeat:Filesystem device="/dev/vg_shared/lv_data" directory="/srv/shared"  fstype="xfs" --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>

Create the resource that brings up the service (floating) IP:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create ip_shared   ocf:heartbeat:IPaddr2   ip=192.168.1.83   cidr_netmask=24   nic=enp0s3   --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>
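
The resulting group definition can be reviewed at any time, as a quick check that the three resources were created with the intended parameters:

<syntaxhighlight lang="bash">
# Show the configuration of the SHARED group and its three resources
pcs resource config SHARED
</syntaxhighlight>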

=== Verification ===

<syntaxhighlight lang="bash">
[root@nodo1 ~]#  pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:02:49 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
</syntaxhighlight>
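
The floating IP can also be tested from outside the cluster; for example, from the SAN server (or any other host on the 192.168.1.0/24 network) it should answer to ping:

<syntaxhighlight lang="bash">
# Run from another host on the LAN, e.g. the SAN server
ping -c 3 192.168.1.83
</syntaxhighlight>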

=== Failover test: moving the resource group ===

First check that the resources are started on nodo1, and confirm that the VG is active, the LV is mounted and the IP is up:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:24:14 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit

[root@nodo1 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M   40G   1% /srv/shared

[root@nodo1 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.81/24 brd 192.168.1.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef3:3c51/64 scope link tentative noprefixroute
       valid_lft forever preferred_lft forever
</syntaxhighlight>

Now move the resource group to nodo2:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource move SHARED
Location constraint to move resource 'SHARED' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'SHARED' has been removed
Waiting for the cluster to apply configuration changes...
resource 'SHARED' is running on node 'nodo2'
[root@nodo1 ~]#
</syntaxhighlight>
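
As the output shows, this pcs release removes the temporary location constraint on its own once the group has moved. It does not hurt to confirm that nothing was left behind (on older pcs releases the move constraint persists and must be cleared manually):

<syntaxhighlight lang="bash">
# List all constraints — no location constraint for SHARED should remain
pcs constraint
# Only needed on older pcs releases, where the move constraint persists:
# pcs resource clear SHARED
</syntaxhighlight>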

Check that the resources are now started on nodo2, and confirm again that the VG is active, the LV is mounted and the IP is up:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:25:34 2026 on nodo1
  * Last change:  Sat Jan  3 11:24:31 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo2
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo2
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
[root@nodo2 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit
[root@nodo2 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M   40G   1% /srv/shared
[root@nodo2 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.82/24 brd 192.168.1.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
       valid_lft forever preferred_lft forever
</syntaxhighlight>
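
Another way to exercise the failover, as a sketch, is to put nodo2 into standby and watch the group move back to nodo1, then return nodo2 to service:

<syntaxhighlight lang="bash">
# Put nodo2 in standby: Pacemaker relocates the SHARED group to nodo1
pcs node standby nodo2
pcs status
# Bring nodo2 back once the test is finished
pcs node unstandby nodo2
</syntaxhighlight>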

== Final notes ==

* The iSCSI disk is available to be used as a shared resource in Pacemaker.
* All nodes must see the same block device.
* With recent <code>pcs</code> releases (as seen in the output above), <code>pcs resource move</code> produces '''temporary''' movements: the location constraint is created and then removed automatically once the resource has moved. On older releases the constraint persists until it is cleared with <code>pcs resource clear</code>.
* To pin a resource permanently to a preferred node, an explicit location constraint must be created, as shown in the sketch below.
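
For example (a sketch; the node name and score are illustrative), a preference for nodo1 could be expressed like this:

<syntaxhighlight lang="bash">
# Prefer nodo1 with a score of 100; the group can still run on nodo2 if nodo1 fails
pcs constraint location SHARED prefers nodo1=100
</syntaxhighlight>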


== References ==

* [[iSCSI|iSCSI]] – shared storage configuration
* LVM – logical volume management