== Creating a Pacemaker cluster ==


=== Introduction ===


This lab sets up a Pacemaker cluster that uses shared storage over iSCSI. A minimum of three machines is used:

* Two nodes that will form the Pacemaker cluster
* An additional machine that acts as the SAN storage server

The goal is to simulate the shared disks of a storage array, as found in a real infrastructure.


=== Lab topology ===
==== Machines ====


* '''SAN server'''
** Icecube — <code>192.168.1.80</code>
* '''Pacemaker nodes'''
** Nodo1 — <code>192.168.1.81</code>
** Nodo2 — <code>192.168.1.82</code>


== SAN storage configuration ==
Before proceeding with the cluster deployment, the shared storage must already be configured. That process is detailed in the dedicated [[iSCSI|iSCSI]] section.


To simulate a storage array, a disk is shared over iSCSI with both nodes of the Pacemaker cluster.
==== Verifying the shared disk ====


Check that the new disk is visible on both nodes:


:Nodo1:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0 10.5G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  9.5G  0 part
  ├─centos-root 253:0    0  8.4G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk
sr0              11:0    1 1024M  0 rom
</syntaxhighlight>
:Nodo2:
<syntaxhighlight lang="bash">
[root@nodo2 ~]#  iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
</syntaxhighlight>
<syntaxhighlight lang="bash">
[root@nodo2 ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0  20G  0 disk
├─sda1                8:1    0    1G  0 part /boot
└─sda2                8:2    0  19G  0 part
  ├─rhel-root      253:0    0  17G  0 lvm  /
  └─rhel-swap      253:1    0    2G  0 lvm  [SWAP]
sdb                  8:16  0  40G  0 disk
sr0                  11:0    1 1024M  0 rom
</syntaxhighlight>


== Pacemaker ==
=== Installation ===
Enable the High Availability repository on both nodes (RHEL 9):
<syntaxhighlight lang="bash">
[root@nodo1 ~]# subscription-manager repos \
   --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.


[root@nodo2 ~]# subscription-manager repos \
   --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.
</syntaxhighlight>


Install the cluster packages on both nodes:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2
[root@nodo2 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2
</syntaxhighlight>


=== Cluster configuration ===


<syntaxhighlight lang="bash">
[root@nodo1 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@nodo1 ~]#


 
[root@nodo2 ~]# systemctl enable --now pcsd
 
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@nodo2 ~]#
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo1 ~]#


[root@nodo2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo2 ~]#
</syntaxhighlight>
<syntaxhighlight lang="bash">
[root@nodo1 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo1 ~]# firewall-cmd --reload
success
[root@nodo1 ~]#




[root@nodo2 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo2 ~]# firewall-cmd --reload
success
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo1 ~]#


[root@nodo2 ~]# pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo2 ~]#
</syntaxhighlight>
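The same authentication can also be done non-interactively. A minimal sketch, not part of the original run, assuming the <code>hacluster</code> password is passed on the command line (it ends up in the shell history, so this is only suitable for a lab):
<syntaxhighlight lang="bash">
# Hypothetical example: authenticate both nodes from nodo1 without the interactive prompt.
# Replace 'CLUSTERPASS' with the password set for the hacluster user.
pcs host auth nodo1 nodo2 -u hacluster -p 'CLUSTERPASS'
</syntaxhighlight>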


<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs cluster setup iscsi-cluster nodo1 nodo2
No addresses specified for host 'nodo1', using 'nodo1'
No addresses specified for host 'nodo2', using 'nodo2'
Destroying cluster on hosts: 'nodo1', 'nodo2'...
nodo1: Successfully destroyed cluster
nodo2: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'nodo1', 'nodo2'
nodo2: successful removal of the file 'pcsd settings'
nodo1: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync authkey'
nodo2: successful distribution of the file 'pacemaker authkey'
nodo1: successful distribution of the file 'corosync authkey'
nodo1: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync.conf'
nodo1: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@nodo2 ~]#
</syntaxhighlight>
<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Connection to cluster failed: Connection refused
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs cluster start --all
nodo1: Starting Cluster...
nodo2: Starting Cluster...
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs cluster enable --all
nodo1: Cluster Enabled
nodo2: Cluster Enabled
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster


WARNINGS:
No stonith devices and stonith-enabled is not false
error: Resource start-up disabled since no STONITH resources have been defined
error: Either configure some or disable STONITH with the stonith-enabled option
error: NOTE: Clusters with shared data need STONITH to ensure data integrity
warning: Node nodo1 is unclean but cannot be fenced
warning: Node nodo2 is unclean but cannot be fenced
error: CIB did not pass schema validation
Errors found during check: config not valid


Cluster Summary:
  * Stack: unknown (Pacemaker is running)
  * Current DC: NONE
  * Last updated: Sat Jan  3 00:28:04 2026 on nodo2
  * Last change:  Sat Jan  3 00:27:58 2026 by hacluster via hacluster on nodo2
  * 2 nodes configured
  * 0 resource instances configured


Node List:
  * Node nodo1: UNCLEAN (offline)
  * Node nodo2: UNCLEAN (offline)


Full List of Resources:
  * No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
</syntaxhighlight>


Since this lab has no fencing devices, STONITH is disabled so that resources can start (see the note after the block):
<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs property set stonith-enabled=false
</syntaxhighlight>
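In a production cluster with shared storage, STONITH should be configured rather than disabled. A minimal sketch, not part of this lab, assuming each node has an IPMI-capable BMC (the addresses and credentials below are hypothetical):
<syntaxhighlight lang="bash">
# Hypothetical fencing devices using the fence_ipmilan agent (fence-agents-all is already installed).
pcs stonith create fence_nodo1 fence_ipmilan pcmk_host_list="nodo1" ip="192.168.1.91" username="admin" password="secret" lanplus=1
pcs stonith create fence_nodo2 fence_ipmilan pcmk_host_list="nodo2" ip="192.168.1.92" username="admin" password="secret" lanplus=1
pcs property set stonith-enabled=true
</syntaxhighlight>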
<syntaxhighlight lang="bash">
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 00:34:35 2026 on nodo2
  * Last change:  Sat Jan  3 00:34:28 2026 by root via root on nodo2
  * 2 nodes configured
  * 0 resource instances configured


Node List:
  * Online: [ nodo1 nodo2 ]


Full List of Resources:
  * No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo2 ~]#
</syntaxhighlight>


=== LVM configuration ===
==== Editing /etc/lvm/lvm.conf ====
Apply the same <code>/etc/lvm/lvm.conf</code> settings on every node of the cluster, then run <code>dracut -f</code> and <code>reboot</code>.


<syntaxhighlight lang="ini">
[root@nodo1 ~]# grep -vE '^\s*#|^\s*$' /etc/lvm/lvm.conf
config {
}
devices {
}
allocation {
}
log {
}
backup {
}
shell {
}
global {
        system_id_source = "uname"
}
activation {
        auto_activation_volume_list = [ ]
}
report {
}
dmeventd {
}
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# dracut -f
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# reboot
</syntaxhighlight>
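The same two settings (<code>system_id_source</code> and <code>auto_activation_volume_list</code>) must also be present on <code>nodo2</code> before running its own <code>dracut -f</code> and <code>reboot</code>. A minimal sketch, not part of the original transcript, assuming root SSH access between the nodes and that both nodes otherwise use a default <code>lvm.conf</code>:
<syntaxhighlight lang="bash">
# Hypothetical propagation of the edited lvm.conf from nodo1 to nodo2.
scp /etc/lvm/lvm.conf nodo2:/etc/lvm/lvm.conf
ssh nodo2 'dracut -f && reboot'
</syntaxhighlight>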


Post-reboot checks:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# uname -n
nodo1


[root@nodo1 ~]# lvm systemid
  system ID: nodo1
</syntaxhighlight>


==== Creating the PV, VG, and LV on the shared LUN ====
The VG must be created on only one node of the cluster.

Create the physical volume:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pvcreate /dev/sdb
   Physical volume "/dev/sdb" successfully created.
      "alua_tpgs": [
        {
          "alua_access_state": 0,
          "alua_access_status": 0,
          "alua_access_type": 3,
          "alua_support_active_nonoptimized": 1,
          "alua_support_active_optimized": 1,
          "alua_support_offline": 1,
          "alua_support_standby": 1,
          "alua_support_transitioning": 1,
          "alua_support_unavailable": 1,
          "alua_write_metadata": 0,
          "implicit_trans_secs": 0,
          "name": "default_tg_pt_gp",
          "nonop_delay_msecs": 100,
          "preferred": 0,
          "tg_pt_gp_id": 0,
          "trans_delay_msecs": 0
        }
      ],
      "attributes": {
        "alua_support": 1,
        "block_size": 512,
        "emulate_3pc": 1,
        "emulate_caw": 1,
        "emulate_dpo": 1,
        "emulate_fua_read": 1,
        "emulate_fua_write": 1,
        "emulate_model_alias": 1,
        "emulate_pr": 1,
        "emulate_rest_reord": 0,
        "emulate_rsoc": 1,
        "emulate_tas": 1,
        "emulate_tpu": 0,
        "emulate_tpws": 0,
        "emulate_ua_intlck_ctrl": 0,
        "emulate_write_cache": 0,
        "enforce_pr_isids": 1,
        "force_pr_aptpl": 0,
        "is_nonrot": 0,
        "max_unmap_block_desc_count": 0,
        "max_unmap_lba_count": 0,
        "max_write_same_len": 65535,
        "optimal_sectors": 65528,
        "pgr_support": 1,
        "pi_prot_format": 0,
        "pi_prot_type": 0,
        "pi_prot_verify": 0,
        "queue_depth": 128,
        "submit_type": 0,
        "unmap_granularity": 0,
        "unmap_granularity_alignment": 0,
        "unmap_zeroes_data": 0
      },
      "dev": "/dev/vg_iscsi/lv_storage",
      "name": "lun01",
      "plugin": "block",
      "readonly": false,
      "write_back": false,
      "wwn": "b1e53820-2d0e-4605-a66e-96ed4d4738a9"
    }
  ],
  "targets": [
    {
      "fabric": "iscsi",
      "parameters": {
        "cmd_completion_affinity": "-1"
      },
      "tpgs": [
        {
          "attributes": {
            "authentication": 1,
            "cache_dynamic_acls": 0,
            "default_cmdsn_depth": 64,
            "default_erl": 0,
            "demo_mode_discovery": 1,
            "demo_mode_write_protect": 1,
            "fabric_prot_type": 0,
            "generate_node_acls": 0,
            "login_keys_workaround": 1,
            "login_timeout": 15,
            "prod_mode_write_protect": 0,
            "t10_pi": 0,
            "tpg_enabled_sendtargets": 1
          },
          "enable": true,
          "luns": [
            {
              "alias": "4937399dfd",
              "alua_tg_pt_gp_name": "default_tg_pt_gp",
              "index": 0,
              "storage_object": "/backstores/block/lun01"
            }
          ],
          "node_acls": [
            {
              "attributes": {
                "authentication": -1,
                "dataout_timeout": 3,
                "dataout_timeout_retries": 5,
                "default_erl": 0,
                "nopin_response_timeout": 30,
                "nopin_timeout": 15,
                "random_datain_pdu_offsets": 0,
                "random_datain_seq_offsets": 0,
                "random_r2t_offsets": 0
              },
              "chap_password": "PASSWORD2020",
              "chap_userid": "bonzo",
              "mapped_luns": [
                {
                  "alias": "612d32f2e4",
                  "index": 0,
                  "tpg_lun": 0,
                  "write_protect": false
                }
              ],
              "node_wwn": "iqn.2026-01.icecube:node01.initiator01"
            }
          ],
          "parameters": {
            "AuthMethod": "CHAP",
            "DataDigest": "CRC32C,None",
            "DataPDUInOrder": "Yes",
            "DataSequenceInOrder": "Yes",
            "DefaultTime2Retain": "20",
            "DefaultTime2Wait": "2",
            "ErrorRecoveryLevel": "0",
            "FirstBurstLength": "65536",
            "HeaderDigest": "CRC32C,None",
            "IFMarkInt": "Reject",
            "IFMarker": "No",
            "ImmediateData": "Yes",
            "InitialR2T": "Yes",
            "MaxBurstLength": "262144",
            "MaxConnections": "1",
            "MaxOutstandingR2T": "1",
            "MaxRecvDataSegmentLength": "8192",
            "MaxXmitDataSegmentLength": "262144",
            "OFMarkInt": "Reject",
            "OFMarker": "No",
            "TargetAlias": "LIO Target"
          },
          "portals": [
            {
              "ip_address": "0.0.0.0",
              "iser": false,
              "offload": false,
              "port": 3260
            }
          ],
          "tag": 1
        }
      ],
      "wwn": "iqn.2026-01.icecube:storage.target01"
    }
  ]
}
</syntaxhighlight>
</syntaxhighlight>
 
Create the volume group (<code>--setautoactivation n</code> keeps the VG from being activated automatically outside the cluster):
<syntaxhighlight lang="bash">
 
[root@nodo1 ~]# vgcreate --setautoactivation n vg_shared /dev/sdb
   Volume group "vg_shared" successfully created with system ID nodo1
 
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# vgs -o+systemid
  VG        #PV #LV #SN Attr   VSize   VFree  System ID
  rhel        1   2   0 wz--n- <19.00g      0
  vg_shared   1   0   0 wz--n-  39.96g 39.96g nodo1
[root@nodo1 ~]#
</syntaxhighlight>


Create the logical volume:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvcreate -n lv_data -l 100%FREE vg_shared
  Wiping xfs signature on /dev/vg_shared/lv_data.
  Logical volume "lv_data" created.
</syntaxhighlight>
Format it as XFS:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# mkfs.xfs /dev/vg_shared/lv_data
meta-data=/dev/vg_shared/lv_data isize=512    agcount=4, agsize=2618880 blks
        =                      sectsz=512  attr=2, projid32bit=1
        =                      crc=1        finobt=1, sparse=1, rmapbt=0
        =                      reflink=1    bigtime=1 inobtcount=1 nrext64=0
data    =                      bsize=4096  blocks=10475520, imaxpct=25
        =                      sunit=0      swidth=0 blks
naming  =version 2              bsize=4096  ascii-ci=0, ftype=1
log      =internal log          bsize=4096  blocks=16384, version=2
        =                      sectsz=512  sunit=0 blks, lazy-count=1
realtime =none                  extsz=4096  blocks=0, rtextents=0
[root@nodo1 ~]#
</syntaxhighlight>


=== Resource configuration ===
==== Prerequisites ====


The VG must be deactivated so that the cluster can manage it, and the directory where the filesystem will be mounted must be created:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# vgchange -an vg_shared
  0 logical volume(s) in volume group "vg_shared" now active
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  inactive            '/dev/vg_shared/lv_data' [39.96 GiB] inherit
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# mkdir -p /srv/shared
</syntaxhighlight>


==== Creating the resources ====
The creation order matters, since it determines the start order of the resources within the group (see the sketch below):


Create the resource that activates the VG:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create vg_shared ocf:heartbeat:LVM-activate vgname=vg_shared vg_access_mode=system_id --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>
Create the resource that mounts the LV:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create fs_shared ocf:heartbeat:Filesystem device="/dev/vg_shared/lv_data" directory="/srv/shared"  fstype="xfs" --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>
Create the resource that brings up the service IP:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource create ip_shared  ocf:heartbeat:IPaddr2  ip=192.168.1.83  cidr_netmask=24  nic=enp0s3  --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
</syntaxhighlight>
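Resources inside a group start in the order they were added and stop in reverse order, so no extra ordering constraints are needed here. A quick way to review the resulting group (a sketch, output not shown):
<syntaxhighlight lang="bash">
# List the SHARED group and the order of its members as stored in the CIB.
pcs resource config SHARED
# Explicit "pcs constraint order" / "pcs constraint colocation" rules could express the same
# behaviour, but the group already covers it for this lab.
</syntaxhighlight>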


==== Verification ====
<syntaxhighlight lang="bash">
[root@nodo1 ~]#  pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:02:49 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured


Node List:
  * Online: [ nodo1 nodo2 ]


Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo1


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
</syntaxhighlight>

==== Resource group failover tests ====
We verify that the resources are running on <code>nodo1</code> and check that the VG is active, the LV is mounted, and the IP is up:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:24:14 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured


Node List:
  * Online: [ nodo1 nodo2 ]


Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo1


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit


[root@nodo1 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M  40G  1% /srv/shared


[root@nodo1 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.81/24 brd 192.168.1.255 scope global noprefixroute enp0s3
      valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
      valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef3:3c51/64 scope link tentative noprefixroute
      valid_lft forever preferred_lft forever
</syntaxhighlight>


Now we move the resource group to <code>nodo2</code>:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs resource move SHARED
Location constraint to move resource 'SHARED' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'SHARED' has been removed
Waiting for the cluster to apply configuration changes...
resource 'SHARED' is running on node 'nodo2'
[root@nodo1 ~]#  
</syntaxhighlight>


We verify that the resources are now running on <code>nodo2</code> and check that the VG is active, the LV is mounted, and the IP is up:

<syntaxhighlight lang="bash">
[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:25:34 2026 on nodo1
  * Last change:  Sat Jan  3 11:24:31 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured


Node List:
  * Online: [ nodo1 nodo2 ]


Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo2
    * fs_shared (ocf:heartbeat:Filesystem):     Started nodo2
    * ip_shared (ocf:heartbeat:IPaddr2):        Started nodo2


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#


</syntaxhighlight>


<syntaxhighlight lang="bash">
[root@nodo2 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit
[root@nodo2 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M   40G   1% /srv/shared
[root@nodo2 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.82/24 brd 192.168.1.255 scope global noprefixroute enp0s3
      valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
      valid_lft forever preferred_lft forever
</syntaxhighlight>
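To fail the group back to <code>nodo1</code>, the same move command can be used with an explicit destination. A brief sketch, not part of the original test run:
<syntaxhighlight lang="bash">
# Move the SHARED group to nodo1; on RHEL 9 the temporary location constraint is removed automatically.
pcs resource move SHARED nodo1
# Review any remaining constraints.
pcs constraint
</syntaxhighlight>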


== Final notes ==


* The iSCSI disk is now available as a shared resource managed by Pacemaker.
* All nodes must see the same block device.
* On RHEL 8/9, <code>pcs resource move</code> performs a '''temporary''' move; the location constraint it creates is removed automatically once the move completes.
* To pin a resource to a node permanently, an explicit location constraint must be created (see the sketch below).
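A minimal sketch of such a permanent preference (the score 100 is an arbitrary example):
<syntaxhighlight lang="bash">
# Prefer nodo1 for the SHARED group; the group can still fail over if nodo1 becomes unavailable.
pcs constraint location SHARED prefers nodo1=100
</syntaxhighlight>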


== References ==


* [[iSCSI]] – Shared storage configuration
* [[LVM]] – Logical volume management
* [https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_and_managing_high_availability_clusters/index#con_HA-lvm-shared-volumes-overview-of-high-availability RHEL 9 – Shared LVM volumes in High Availability clusters]
* [https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-active-passive-http-server-in-a-cluster-configuring-and-managing-high-availability-clusters RHEL 9 – Configuring an active/passive service in a cluster]
* [https://clusterlabs.org/pacemaker/doc/ Pacemaker Documentation]

Revisión actual - 11:02 3 ene 2026

Creación de un clúster Pacemaker

Introducción

En este laboratorio se configura un clúster Pacemaker utilizando almacenamiento compartido vía iSCSI. Para ello se emplean un mínimo de tres máquinas:

  • Dos nodos que formarán el clúster Pacemaker
  • Una máquina adicional que actuará como servidor de almacenamiento SAN

El objetivo es simular el uso de discos compartidos de una cabina de almacenamiento en una infraestructura real.

Topología del laboratorio

  • Servidor SAN
    • Icecube — 192.168.1.80
  • Nodos Pacemaker
    • Nodo1 — 192.168.1.81
    • Nodo2 — 192.168.1.82

Configuración del almacenamiento SAN

Antes de proceder con la implementación del clúster, es necesario haber configurado el almacenamiento compartido. Este proceso se detalla en la sección dedicada a iSCSI

Verificación del disco compartido

Comprobar que el nuevo disco aparece en ambos nodos:

Nodo1:
[root@nodo1 ~]# iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
[root@nodo1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0 10.5G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  9.5G  0 part
  ├─centos-root 253:0    0  8.4G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk
sr0              11:0    1 1024M  0 rom
Nodo2:
[root@nodo2 ~]#  iscsiadm -m session -o show
tcp: [1] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
[root@nodo2 ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                   8:0    0   20G  0 disk
├─sda1                8:1    0    1G  0 part /boot
└─sda2                8:2    0   19G  0 part
  ├─rhel-root       253:0    0   17G  0 lvm  /
  └─rhel-swap       253:1    0    2G  0 lvm  [SWAP]
sdb                   8:16   0   40G  0 disk
sr0                  11:0    1 1024M  0 rom

Pacemaker

Instalación

PACEMAKER RHEL9

[root@nodo1 ~]# subscription-manager repos \
  --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.

[root@nodo2 ~]# subscription-manager repos \
  --enable=rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.
[root@nodo1 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2
[root@nodo2 ~]# dnf install -y pacemaker pcs fence-agents-all lvm2
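
A quick optional check to confirm that the High Availability repository is enabled and that the packages landed on both nodes (the grep pattern is just an illustration):

<syntaxhighlight lang="bash">
# Confirm the HA repository is enabled and the packages are installed
dnf repolist --enabled | grep -i highavailability
rpm -q pacemaker pcs fence-agents-all lvm2
</syntaxhighlight>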

=== Cluster configuration ===

[root@nodo1 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@nodo1 ~]#

[root@nodo2 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@nodo2 ~]#
[root@nodo1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo1 ~]#

[root@nodo2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@nodo2 ~]#
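
If the password should be set without the interactive prompt (for example when scripting the lab), chpasswd can do it in one line on each node; the password below is only a placeholder:

<syntaxhighlight lang="bash">
# Non-interactive alternative (placeholder password, replace with your own)
echo 'hacluster:ChangeMe_HA' | chpasswd
</syntaxhighlight>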


[root@nodo1 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo1 ~]# firewall-cmd --reload
success
[root@nodo1 ~]#


[root@nodo2 ~]# firewall-cmd --add-service=high-availability --permanent
success
[root@nodo2 ~]# firewall-cmd --reload
success
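
To see exactly which ports the high-availability firewalld service opens, the service definition can be inspected on either node:

<syntaxhighlight lang="bash">
# Show the ports included in the high-availability service and the active services
firewall-cmd --info-service=high-availability
firewall-cmd --list-services
</syntaxhighlight>

With the firewall open on both nodes, authenticate them against pcsd: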
[root@nodo1 ~]# pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo1 ~]#

[root@nodo2 ~]#  pcs host auth nodo1 nodo2
Username: hacluster
Password:
nodo1: Authorized
nodo2: Authorized
[root@nodo2 ~]#
[root@nodo2 ~]# pcs cluster setup iscsi-cluster nodo1 nodo2
No addresses specified for host 'nodo1', using 'nodo1'
No addresses specified for host 'nodo2', using 'nodo2'
Destroying cluster on hosts: 'nodo1', 'nodo2'...
nodo1: Successfully destroyed cluster
nodo2: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'nodo1', 'nodo2'
nodo2: successful removal of the file 'pcsd settings'
nodo1: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync authkey'
nodo2: successful distribution of the file 'pacemaker authkey'
nodo1: successful distribution of the file 'corosync authkey'
nodo1: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'nodo1', 'nodo2'
nodo2: successful distribution of the file 'corosync.conf'
nodo1: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@nodo2 ~]#
[root@nodo2 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Connection to cluster failed: Connection refused
[root@nodo2 ~]# pcs cluster start --all
nodo1: Starting Cluster...
nodo2: Starting Cluster...
[root@nodo2 ~]# pcs cluster enable --all
nodo1: Cluster Enabled
nodo2: Cluster Enabled
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster

WARNINGS:
No stonith devices and stonith-enabled is not false
error: Resource start-up disabled since no STONITH resources have been defined
error: Either configure some or disable STONITH with the stonith-enabled option
error: NOTE: Clusters with shared data need STONITH to ensure data integrity
warning: Node nodo1 is unclean but cannot be fenced
warning: Node nodo2 is unclean but cannot be fenced
error: CIB did not pass schema validation
Errors found during check: config not valid

Cluster Summary:
  * Stack: unknown (Pacemaker is running)
  * Current DC: NONE
  * Last updated: Sat Jan  3 00:28:04 2026 on nodo2
  * Last change:  Sat Jan  3 00:27:58 2026 by hacluster via hacluster on nodo2
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Node nodo1: UNCLEAN (offline)
  * Node nodo2: UNCLEAN (offline)

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo2 ~]# pcs property set stonith-enabled=false
[root@nodo2 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 00:34:35 2026 on nodo2
  * Last change:  Sat Jan  3 00:34:28 2026 by root via root on nodo2
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo2 ~]#
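
Disabling STONITH is acceptable only in a throw-away lab. In a production cluster with shared storage a fence device should be configured instead of setting stonith-enabled=false; the sketch below assumes each node has a BMC reachable over IPMI and uses made-up addresses and credentials purely as an illustration:

<syntaxhighlight lang="bash">
# Sketch only: fence each node through its BMC (example IPs and credentials)
pcs stonith create fence_nodo1 fence_ipmilan pcmk_host_list=nodo1 \
    ip=192.168.1.91 username=admin password=changeme lanplus=1
pcs stonith create fence_nodo2 fence_ipmilan pcmk_host_list=nodo2 \
    ip=192.168.1.92 username=admin password=changeme lanplus=1
pcs property set stonith-enabled=true
</syntaxhighlight>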

=== LVM configuration ===

==== Editing /etc/lvm/lvm.conf ====

Apply the same /etc/lvm/lvm.conf settings on every node of the cluster, then run dracut -f and reboot each node: the initramfs keeps its own copy of lvm.conf, so it has to be rebuilt for the change to take effect during early boot.

[root@nodo1 ~]# grep -vE '^\s*#|^\s*$' /etc/lvm/lvm.conf
config {
}
devices {
}
allocation {
}
log {
}
backup {
}
shell {
}
global {
        system_id_source = "uname"
}
activation {
        auto_activation_volume_list = [ ]
}
report {
}
dmeventd {
}
[root@nodo1 ~]# dracut -f
[root@nodo1 ~]# reboot
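
Assuming direct root SSH access between the nodes, the same configuration can be replicated to nodo2 in one step (otherwise edit the file by hand there and run the same commands):

<syntaxhighlight lang="bash">
# Copy the LVM configuration to the second node and rebuild its initramfs
scp /etc/lvm/lvm.conf nodo2:/etc/lvm/lvm.conf
ssh nodo2 'dracut -f && reboot'
</syntaxhighlight>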

Checks after the reboot:

[root@nodo1 ~]# uname -n
nodo1

[root@nodo1 ~]# lvm systemid
  system ID: nodo1
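
Optionally, confirm where the system ID comes from and repeat the check on the second node (assuming SSH access; lvmconfig simply prints the requested setting):

<syntaxhighlight lang="bash">
# Show the configured source of the system ID and the resulting ID on nodo2
lvmconfig global/system_id_source
ssh nodo2 'lvm systemid'
</syntaxhighlight>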

==== Creating the PV, VG and LV on the shared LUN ====

The VG must be created from a single node of the cluster only. Create the Physical Volume:

[root@nodo1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

Create the Volume Group. The --setautoactivation n flag keeps the VG from being activated automatically outside the cluster's control:

[root@nodo1 ~]# vgcreate --setautoactivation n vg_shared /dev/sdb
  Volume group "vg_shared" successfully created with system ID nodo1
[root@nodo1 ~]# vgs -o+systemid
  VG        #PV #LV #SN Attr   VSize   VFree  System ID
  rhel        1   2   0 wz--n- <19.00g     0
  vg_shared   1   0   0 wz--n-  39.96g 39.96g nodo1
[root@nodo1 ~]#

Create the Logical Volume:

[root@nodo1 ~]# lvcreate -n lv_data -l 100%FREE vg_shared
  Wiping xfs signature on /dev/vg_shared/lv_data.
  Logical volume "lv_data" created.

Format it as XFS:

[root@nodo1 ~]# mkfs.xfs /dev/vg_shared/lv_data
meta-data=/dev/vg_shared/lv_data isize=512    agcount=4, agsize=2618880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@nodo1 ~]#
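
An optional sanity check before handing the filesystem over to the cluster is to mount it manually, write a test file and release it again:

<syntaxhighlight lang="bash">
# Temporary manual mount to verify the new filesystem, then unmount it
mount /dev/vg_shared/lv_data /mnt
touch /mnt/test_file && ls -l /mnt
umount /mnt
</syntaxhighlight>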

=== Resource configuration ===

==== Prerequisites ====

Deactivate the VG so that the cluster can manage it, and create the directory where the filesystem will be mounted (it must exist on both nodes, as noted after the commands below):

[root@nodo1 ~]# vgchange -an vg_shared
  0 logical volume(s) in volume group "vg_shared" now active
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  inactive            '/dev/vg_shared/lv_data' [39.96 GiB] inherit
[root@nodo1 ~]# mkdir -p /srv/shared
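
The mount point must also exist on the other node, otherwise the Filesystem resource will not be able to start there after a failover. Assuming root SSH access it can be created remotely:

<syntaxhighlight lang="bash">
# Create the same mount point on nodo2
ssh nodo2 'mkdir -p /srv/shared'
</syntaxhighlight>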

==== Creating the resources ====

The creation order matters, because it determines the start order of the resources within the group:

Create the resource that activates the VG:

[root@nodo1 ~]# pcs resource create vg_shared ocf:heartbeat:LVM-activate vgname=vg_shared vg_access_mode=system_id --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.

Create the resource that mounts the LV:

[root@nodo1 ~]# pcs resource create fs_shared ocf:heartbeat:Filesystem device="/dev/vg_shared/lv_data" directory="/srv/shared"  fstype="xfs" --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.

Create the resource that brings up the service IP:

[root@nodo1 ~]# pcs resource create ip_shared   ocf:heartbeat:IPaddr2   ip=192.168.1.83   cidr_netmask=24   nic=enp0s3   --group SHARED
Deprecation Warning: Using '--group' is deprecated and will be replaced with 'group' in a future release. Specify --future to switch to the future behavior.
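
The resulting group definition and the order of its members can be reviewed before testing:

<syntaxhighlight lang="bash">
# Show the SHARED group and the attributes of each of its resources
pcs resource config SHARED
</syntaxhighlight>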

==== Verification ====

[root@nodo1 ~]#  pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:02:49 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#

=== Failover test: moving the resource group ===

Check that the resources are started on nodo1 and verify that the VG is active, the LV is mounted and the IP is up:

[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:24:14 2026 on nodo1
  * Last change:  Sat Jan  3 10:52:08 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo1
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo1
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
[root@nodo1 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit

[root@nodo1 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M   40G   1% /srv/shared

[root@nodo1 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.81/24 brd 192.168.1.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef3:3c51/64 scope link tentative noprefixroute
       valid_lft forever preferred_lft forever

Now move the resource group to nodo2:

[root@nodo1 ~]# pcs resource move SHARED
Location constraint to move resource 'SHARED' has been created
Waiting for the cluster to apply configuration changes...
Location constraint created to move resource 'SHARED' has been removed
Waiting for the cluster to apply configuration changes...
resource 'SHARED' is running on node 'nodo2'
[root@nodo1 ~]#
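
As mentioned in the final notes, this move is temporary. If the group should permanently prefer one node, an explicit location constraint can be added; the score below is only an example:

<syntaxhighlight lang="bash">
# Make SHARED prefer nodo1 with an example score of 100
pcs constraint location SHARED prefers nodo1=100
# Review the configured constraints (remove one with: pcs constraint remove <id>)
pcs constraint config
</syntaxhighlight>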

Check that the resources are now started on nodo2 and verify that the VG is active, the LV is mounted and the IP is up:

[root@nodo1 ~]# pcs status
Cluster name: iscsi-cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: nodo2 (version 2.1.10-1.el9-5693eaeee) - partition with quorum
  * Last updated: Sat Jan  3 11:25:34 2026 on nodo1
  * Last change:  Sat Jan  3 11:24:31 2026 by root via root on nodo1
  * 2 nodes configured
  * 3 resource instances configured

Node List:
  * Online: [ nodo1 nodo2 ]

Full List of Resources:
  * Resource Group: SHARED:
    * vg_shared (ocf:heartbeat:LVM-activate):    Started nodo2
    * fs_shared (ocf:heartbeat:Filesystem):      Started nodo2
    * ip_shared (ocf:heartbeat:IPaddr2):         Started nodo2

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nodo1 ~]#
[root@nodo2 ~]# lvscan
  ACTIVE            '/dev/rhel/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/rhel/root' [<17.00 GiB] inherit
  ACTIVE            '/dev/vg_shared/lv_data' [39.96 GiB] inherit
[root@nodo2 ~]# df -hT /srv/shared
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/vg_shared-lv_data xfs    40G  318M   40G   1% /srv/shared
[root@nodo2 ~]# ip a show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.82/24 brd 192.168.1.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.1.83/24 brd 192.168.1.255 scope global secondary enp0s3
       valid_lft forever preferred_lft forever
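
Another simple failover test, instead of an explicit move, is to put the active node into standby and bring it back afterwards:

<syntaxhighlight lang="bash">
# Force the group away from nodo2, check where it lands, then allow nodo2 to host resources again
pcs node standby nodo2
pcs status
pcs node unstandby nodo2
</syntaxhighlight>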
