Current revision as of 13:42, 3 January 2026
= Configuring Multipath =
== Introduction ==
Continuing from the previous labs, this section implements Multipath on top of iSCSI, starting from the environment configured in [[iSCSI]] and the cluster described in [[Pacemaker]].
Although the recommended order for deploying this kind of infrastructure is usually:
*iSCSI
*multipath
*LVM
*filesystem
*cluster (if applicable)
this lab introduces Multipath at this point in order to strengthen the high-availability (HA) layer of the existing storage.
== Prerequisites ==
=== Network configuration ===
For Multipath to work correctly, the same LUN must be reachable through multiple paths.
With iSCSI, this means exporting the target over several IP addresses.
In this lab, the iSCSI server uses two IP addresses on the same network interface, which is sufficient for lab purposes and for validating Multipath.
Initial state of the network interface:
<syntaxhighlight lang="bash">
[root@icecube network-scripts]# nmcli
enp0s3: connected to enp0s3
        "Intel 82540EM"
        ethernet (e1000), 08:00:27:F3:3C:51, hw, mtu 1500
        ip4 default
        inet4 192.168.1.80/24
        route4 192.168.1.0/24 metric 100
        route4 default via 192.168.1.1 metric 100
        inet6 fe80::a00:27ff:fef3:3c51/64
        route6 fe80::/64 metric 1024

lo: connected (externally) to lo
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536
        inet4 127.0.0.1/8
        inet6 ::1/128
        route6 ::1/128 metric 256

DNS configuration:
        servers: 8.8.8.8
        interface: enp0s3

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.
</syntaxhighlight>
Check the active connections:
<syntaxhighlight lang="bash">
[root@icecube network-scripts]# nmcli -f NAME,DEVICE con show --active
NAME    DEVICE
enp0s3  enp0s3
lo      lo
</syntaxhighlight>
Add a second IP address to the existing connection:
<syntaxhighlight lang="bash">
[root@icecube network-scripts]# nmcli con mod enp0s3 +ipv4.addresses 192.168.1.79/24
</syntaxhighlight>
Apply the changes:
<syntaxhighlight lang="bash">
[root@icecube network-scripts]# nmcli device reapply enp0s3
Connection successfully reapplied to device 'enp0s3'.
</syntaxhighlight>
Verify that the interface now has two IP addresses:
<syntaxhighlight lang="bash">
[root@icecube network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.80/24 brd 192.168.1.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.1.79/24 brd 192.168.1.255 scope global secondary noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef3:3c51/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@icecube network-scripts]#
</syntaxhighlight>
=== Storage ===
A new 10 GB virtual disk has been added to the iSCSI server; it will be used to create a new LUN dedicated to Multipath.
List the available disks:
<syntaxhighlight lang="bash">
[root@icecube ~]# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                       8:0    0   20G  0 disk
├─sda1                    8:1    0    1G  0 part /boot
└─sda2                    8:2    0   19G  0 part
  ├─rhel-root           253:0    0   17G  0 lvm  /
  └─rhel-swap           253:1    0    2G  0 lvm  [SWAP]
sdb                       8:16   0   20G  0 disk
└─vg_iscsi-lv_storage   253:2    0   40G  0 lvm
sdc                       8:32   0   20G  0 disk
└─vg_iscsi-lv_storage   253:2    0   40G  0 lvm
sdd                       8:48   0   10G  0 disk
sr0                      11:0    1 1024M  0 rom
</syntaxhighlight>
Create the physical volume and the volume group:
<syntaxhighlight lang="bash">
[root@icecube ~]# vgcreate vg_multipath /dev/sdd
  Physical volume "/dev/sdd" successfully created.
  Volume group "vg_multipath" successfully created
</syntaxhighlight>
Create the logical volume that will be exported over iSCSI:
<syntaxhighlight lang="bash">
[root@icecube ~]# lvcreate -n lv_mp_data -l 100%FREE vg_multipath
  Logical volume "lv_mp_data" created.
</syntaxhighlight>
Verify the result:
<syntaxhighlight lang="bash">
[root@icecube ~]# lvs
  LV         VG           Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       rhel         -wi-ao---- <17.00g
  swap       rhel         -wi-ao----   2.00g
  lv_storage vg_iscsi     -wi-ao----  39.99g
  lv_mp_data vg_multipath -wi-a----- <10.00g
</syntaxhighlight>
== Configuring the LUNs ==
The configuration below shows the state after completing the [[iSCSI]] and [[Pacemaker]] labs:
<syntaxhighlight lang="bash">
[root@icecube ~]# targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 1]
  | | o- lun01 ........................................................... [/dev/vg_iscsi/lv_storage (40.0GiB) write-thru activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2026-01.icecube:storage.target01 .............................................................................. [TPGs: 1]
  |   o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- iqn.2026-01.icecube:node01.initiator01 ................................................... [1-way auth, Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................................................................................. [lun0 block/lun01 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 1]
  |     | o- lun0 ...................................................... [block/lun01 (/dev/vg_iscsi/lv_storage) (default_tg_pt_gp)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
</syntaxhighlight>
Enter the targetcli shell and create the new backstore:
<syntaxhighlight lang="bash">
[root@icecube ~]# targetcli
targetcli shell version 2.1.57
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/iscsi/iqn.20...target01/tpg1> /backstores/block create lun_mp /dev/vg_multipath/lv_mp_data
Created block storage object lun_mp using /dev/vg_multipath/lv_mp_data.
</syntaxhighlight>
Verify that the new device appears correctly:
<syntaxhighlight lang="bash">
/iscsi/iqn.20...target01/tpg1> ls /backstores/block
o- block ...................................................................................................... [Storage Objects: 2]
  o- lun01 ............................................................... [/dev/vg_iscsi/lv_storage (40.0GiB) write-thru activated]
  | o- alua ....................................................................................................... [ALUA Groups: 1]
  |   o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
  o- lun_mp ........................................................ [/dev/vg_multipath/lv_mp_data (10.0GiB) write-thru deactivated]
    o- alua ....................................................................................................... [ALUA Groups: 1]
      o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
</syntaxhighlight>
Create the new LUN backed by the new backstore:
<syntaxhighlight lang="bash">
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/luns create /backstores/block/lun_mp
Created LUN 1.
Created LUN 1->1 mapping in node ACL iqn.2026-01.icecube:node01.initiator01
</syntaxhighlight>
Check the configured LUNs and the portal currently defined:
<syntaxhighlight lang="bash">
/iscsi/iqn.20...target01/tpg1> ls /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/luns
o- luns .................................................................................................................. [LUNs: 2]
  o- lun0 .............................................................. [block/lun01 (/dev/vg_iscsi/lv_storage) (default_tg_pt_gp)]
  o- lun1 ......................................................... [block/lun_mp (/dev/vg_multipath/lv_mp_data) (default_tg_pt_gp)]
/iscsi/iqn.20...target01/tpg1> ls /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals
o- portals ............................................................................................................ [Portals: 1]
  o- 0.0.0.0:3260 ............................................................................................................. [OK]
</syntaxhighlight>
== Configuring iSCSI portals ==
By default the target listens on the catch-all portal 0.0.0.0:3260, which interferes with Multipath logic because paths cannot be told apart by IP address.
For this reason, the generic portal is removed:
<syntaxhighlight lang="bash">
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals delete 0.0.0.0 3260
Deleted network portal 0.0.0.0:3260
</syntaxhighlight>
Two explicit portals are then created, one for each IP address configured on the target:
<syntaxhighlight lang="bash">
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals create 192.168.1.80 3260
Using default IP port 3260
Created network portal 192.168.1.80:3260.
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals create 192.168.1.79 3260
Using default IP port 3260
Created network portal 192.168.1.79:3260.
</syntaxhighlight>
Verify the active portals:
<syntaxhighlight lang="bash">
/iscsi/iqn.20...target01/tpg1> ls /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals
o- portals ............................................................................................................ [Portals: 2]
  o- 192.168.1.79:3260 ........................................................................................................ [OK]
  o- 192.168.1.80:3260 ........................................................................................................ [OK]
/iscsi/iqn.20...target01/tpg1>
</syntaxhighlight>
Save the configuration and exit:
<syntaxhighlight lang="bash">
/iscsi/iqn.20...target01/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
[root@icecube network-scripts]#
</syntaxhighlight>
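Before moving to the clients it is worth confirming, on the server, that the target now listens on the two explicit portals and no longer on the wildcard address. A minimal sketch (the `ss` output is embedded as a sample so the snippet is self-contained; on the server you would simply run `ss -ltn | grep 3260`):

```bash
# Sketch: check the listeners on port 3260. The sample mimics `ss -ltn`
# output after the portal change; run the real command on the server.
ss_output=$(cat <<'EOF'
LISTEN 0 256 192.168.1.80:3260 0.0.0.0:*
LISTEN 0 256 192.168.1.79:3260 0.0.0.0:*
EOF
)

listeners=$(printf '%s\n' "$ss_output" | grep -c ':3260 ')
echo "explicit portals listening: $listeners"

# The wildcard portal must be gone, or the two paths cannot be told apart.
if printf '%s\n' "$ss_output" | grep -q '0\.0\.0\.0:3260 '; then
  echo "WARNING: wildcard portal still active"
fi
```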
== Configuring the Clients ==
=== Requirements ===
The following packages must be installed on every client node:
<syntaxhighlight lang="bash">
[root@nodo2 ~]# dnf install -y iscsi-initiator-utils device-mapper-multipath
Updating Subscription Management repositories.
Last metadata expiration check: 4:06:37 ago on Sat 03 Jan 2026 09:42:58 AM CET.
Package iscsi-initiator-utils-6.2.1.11-0.git4b3e853.el9.x86_64 is already installed.
Package device-mapper-multipath-0.8.7-35.el9.x86_64 is already installed.
Dependencies resolved.
================================================================================================================================================================================================
 Package                                          Architecture              Version                         Repository                                                                     Size
================================================================================================================================================================================================
Upgrading:
 device-mapper-multipath                          x86_64                    0.8.7-39.el9                    rhel-9-for-x86_64-baseos-rpms                                                 151 k
 device-mapper-multipath-libs                     x86_64                    0.8.7-39.el9                    rhel-9-for-x86_64-baseos-rpms                                                 284 k
 kpartx                                           x86_64                    0.8.7-39.el9                    rhel-9-for-x86_64-baseos-rpms                                                  47 k

Transaction Summary
================================================================================================================================================================================================
Upgrade  3 Packages

Total download size: 481 k
Downloading Packages:
(1/3): device-mapper-multipath-0.8.7-39.el9.x86_64.rpm                                                                                                          529 kB/s | 151 kB     00:00
(2/3): device-mapper-multipath-libs-0.8.7-39.el9.x86_64.rpm                                                                                                     867 kB/s | 284 kB     00:00
(3/3): kpartx-0.8.7-39.el9.x86_64.rpm                                                                                                                           140 kB/s |  47 kB     00:00
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                           1.4 MB/s | 481 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                        1/1
  Upgrading        : kpartx-0.8.7-39.el9.x86_64                                                                                                                                             1/6
  Upgrading        : device-mapper-multipath-libs-0.8.7-39.el9.x86_64                                                                                                                       2/6
  Upgrading        : device-mapper-multipath-0.8.7-39.el9.x86_64                                                                                                                            3/6
  Running scriptlet: device-mapper-multipath-0.8.7-39.el9.x86_64                                                                                                                            3/6
  Running scriptlet: device-mapper-multipath-0.8.7-35.el9.x86_64                                                                                                                            4/6
  Cleanup          : device-mapper-multipath-0.8.7-35.el9.x86_64                                                                                                                            4/6
  Running scriptlet: device-mapper-multipath-0.8.7-35.el9.x86_64                                                                                                                            4/6
  Cleanup          : device-mapper-multipath-libs-0.8.7-35.el9.x86_64                                                                                                                       5/6
  Cleanup          : kpartx-0.8.7-35.el9.x86_64                                                                                                                                             6/6
  Running scriptlet: kpartx-0.8.7-35.el9.x86_64                                                                                                                                             6/6
  Verifying        : device-mapper-multipath-0.8.7-39.el9.x86_64                                                                                                                            1/6
  Verifying        : device-mapper-multipath-0.8.7-35.el9.x86_64                                                                                                                            2/6
  Verifying        : device-mapper-multipath-libs-0.8.7-39.el9.x86_64                                                                                                                       3/6
  Verifying        : device-mapper-multipath-libs-0.8.7-35.el9.x86_64                                                                                                                       4/6
  Verifying        : kpartx-0.8.7-39.el9.x86_64                                                                                                                                             5/6
  Verifying        : kpartx-0.8.7-35.el9.x86_64                                                                                                                                             6/6
Installed products updated.

Upgraded:
  device-mapper-multipath-0.8.7-39.el9.x86_64          device-mapper-multipath-libs-0.8.7-39.el9.x86_64          kpartx-0.8.7-39.el9.x86_64

Complete!
</syntaxhighlight>
Generate the default Multipath configuration and enable the services:
<syntaxhighlight lang="bash">
[root@nodo2 ~]# mpathconf --enable --with_multipathd y
[root@nodo2 ~]# systemctl enable --now iscsid multipathd
</syntaxhighlight>
=== Connecting the disks ===
Discover the target from both IP addresses:
<syntaxhighlight lang="bash">
[root@nodo2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.79
192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01
192.168.1.79:3260,1 iqn.2026-01.icecube:storage.target01
[root@nodo2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.80
192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01
192.168.1.79:3260,1 iqn.2026-01.icecube:storage.target01
</syntaxhighlight>
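With both portals published, discovery against either IP returns the same target once per portal. As a sanity check, the distinct portal addresses can be extracted from the discovery output. This is a sketch with the sample output embedded so it is self-contained; on a real node you would pipe `iscsiadm -m discovery -t sendtargets -p <ip>` directly:

```bash
# Sketch: extract the unique portal IPs from sendtargets output.
discovery_output='192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01
192.168.1.79:3260,1 iqn.2026-01.icecube:storage.target01'

# The first field is "IP:port,tpgt"; keep only the IP and de-duplicate.
portals=$(printf '%s\n' "$discovery_output" | awk -F'[:,]' '{print $1}' | sort -u)
echo "$portals"

# Multipath only makes sense with at least two distinct portals.
count=$(printf '%s\n' "$portals" | wc -l)
[ "$count" -ge 2 ] && echo "OK: $count distinct paths to the target"
```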
Log in to the discovered portals:
<syntaxhighlight lang="bash">
[root@nodo2 ~]# iscsiadm -m node --login
Login to [iface: default, target: iqn.2026-01.icecube:storage.target01, portal: 192.168.1.79,3260] successful.
</syntaxhighlight>
Verify the Multipath status:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# multipath -ll
mpatha (3600140502ad9c55f91b4682bccd2049c) dm-3 LIO-ORG,lun_mp
size=10.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 4:0:0:1 sde 8:64 active i/o pending running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 3:0:0:1 sdc 8:32 active ready running
mpathb (36001405b1e538202d0e4605a66e96ed4) dm-4 LIO-ORG,lun01
size=40G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 4:0:0:0 sdd 8:48 active i/o pending running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 3:0:0:0 sdb 8:16 active ready running
</syntaxhighlight>
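Each map above should list two path lines, one per portal. A quick way to count them is to match the H:C:T:L SCSI address on each path line. This is a sketch against a trimmed sample of the output above (real `multipath -ll` output can vary slightly between versions):

```bash
# Sketch: count the paths behind a multipath map by matching the
# H:C:T:L SCSI address (e.g. 4:0:0:1) on each path line.
mp_output=$(cat <<'EOF'
mpatha (3600140502ad9c55f91b4682bccd2049c) dm-3 LIO-ORG,lun_mp
size=10.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 4:0:0:1 sde 8:64 active i/o pending running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 3:0:0:1 sdc 8:32 active ready running
EOF
)

# Path lines carry "H:C:T:L sdX"; count the lines that contain one.
paths=$(printf '%s\n' "$mp_output" | grep -cE '[0-9]+:[0-9]+:[0-9]+:[0-9]+ sd[a-z]')
echo "mpatha: $paths paths"
```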
Check the device topology:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda                     8:0    0   20G  0 disk
├─sda1                  8:1    0    1G  0 part  /boot
└─sda2                  8:2    0   19G  0 part
  ├─rhel-root         253:0    0   17G  0 lvm   /
  └─rhel-swap         253:1    0    2G  0 lvm   [SWAP]
sdb                     8:16   0   40G  0 disk
├─vg_shared-lv_data   253:2    0   40G  0 lvm   /srv/shared
└─mpathb              253:4    0   40G  0 mpath
sdc                     8:32   0   10G  0 disk
└─mpatha              253:3    0   10G  0 mpath
sdd                     8:48   0   40G  0 disk
└─mpathb              253:4    0   40G  0 mpath
sde                     8:64   0   10G  0 disk
└─mpatha              253:3    0   10G  0 mpath
sr0                    11:0    1 1024M  0 rom
</syntaxhighlight>
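The names mpatha and mpathb are assigned on a first-come-first-served basis and are not guaranteed to match across cluster nodes. If stable names are wanted, WWID-based aliases can be declared in /etc/multipath.conf. This is an optional sketch using the WWIDs reported by multipath -ll above; the alias names mp_data and mp_shared are arbitrary choices for this lab:

```text
# /etc/multipath.conf — optional: stable, descriptive names per WWID
multipaths {
    multipath {
        wwid  3600140502ad9c55f91b4682bccd2049c
        alias mp_data
    }
    multipath {
        wwid  36001405b1e538202d0e4605a66e96ed4
        alias mp_shared
    }
}
```

After editing the file, apply the change with `multipathd reconfigure` (or by restarting multipathd) on each node.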
== Fixes ==
After Multipath is activated, LVM may still reference the individual component disks instead of the multipath device:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pvs
  WARNING: devices file is missing /dev/mapper/mpathb (253:4) using multipath component /dev/sdb.
  See lvmdevices --update for devices file update.
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <19.00g    0
</syntaxhighlight>
This is corrected by updating the LVM devices file:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# lvmdevices --update
  WARNING: devices file is missing /dev/mapper/mpathb (253:4) using multipath component /dev/sdb.
  IDTYPE=sys_wwid IDNAME=naa.6001405b1e538202d0e4605a66e96ed4 DEVNAME=/dev/sdb PVID=dExzbuXMlMyc5lyceBXm7crwQeCP6Qgd: remove multipath component
  Adding multipath device /dev/mapper/mpathb for multipath component /dev/sdb.
  IDTYPE=mpath_uuid (old sys_wwid) IDNAME=mpath-36001405b1e538202d0e4605a66e96ed4 (old naa.6001405b1e538202d0e4605a66e96ed4) DEVNAME=/dev/mapper/mpathb (old /dev/sdb) PVID=dExzbuXMlMyc5lyceBXm7crwQeCP6Qgd: update
  Updated devices file to version 1.1.38
</syntaxhighlight>
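The effect of the update can be inspected in /etc/lvm/devices/system.devices, where the entry for the shared PV should now use IDTYPE=mpath_uuid and point at the mapper device. A sketch with the resulting entry embedded as a sample (the fields match what `lvmdevices --update` printed above):

```bash
# Sketch: verify that a devices-file entry references the multipath
# device rather than a component disk like /dev/sdb.
entry='IDTYPE=mpath_uuid IDNAME=mpath-36001405b1e538202d0e4605a66e96ed4 DEVNAME=/dev/mapper/mpathb PVID=dExzbuXMlMyc5lyceBXm7crwQeCP6Qgd'

status=component
case "$entry" in
  *IDTYPE=mpath_uuid*DEVNAME=/dev/mapper/*) status=mpath ;;
esac
echo "PV tracked via: $status device"
```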
After the fix, the physical volume is correctly associated with the multipath device. The remaining device-mismatch warning refers to the running logical volume, which still holds /dev/sdb open; it typically clears once the volume is deactivated and reactivated, or after a reboot:
<syntaxhighlight lang="bash">
[root@nodo1 ~]# pvs
  WARNING: Device mismatch detected for vg_shared/lv_data which is accessing /dev/sdb instead of /dev/mapper/mpathb.
  PV                 VG        Fmt  Attr PSize   PFree
  /dev/mapper/mpathb vg_shared lvm2 a--   39.96g    0
  /dev/sda2          rhel      lvm2 a--  <19.00g    0
[root@nodo1 ~]#
</syntaxhighlight>