Multipath
Configure Multipath
Introduction
Continuing from the previous LABs, this section implements Multipath on top of iSCSI, building on the environment previously configured in the iSCSI LAB and the cluster described here.
Although the recommended order for deploying this kind of infrastructure is usually the following:
- iSCSI
- multipath
- LVM
- filesystem
- cluster (if applicable)
In this laboratory, Multipath is introduced at this point in order to strengthen the high-availability (HA) layer of the existing storage.
Prerequisites
Network configuration
For Multipath to work correctly, the same LUN must be reachable through multiple paths.
With iSCSI, this means exporting the target through several IP addresses.
In this LAB the iSCSI server will use two IP addresses on the same network interface, which is sufficient for laboratory purposes and for validating Multipath.
Initial state of the network interface:
[root@icecube network-scripts]# nmcli
enp0s3: connected to enp0s3
"Intel 82540EM"
ethernet (e1000), 08:00:27:F3:3C:51, hw, mtu 1500
ip4 default
inet4 192.168.1.80/24
route4 192.168.1.0/24 metric 100
route4 default via 192.168.1.1 metric 100
inet6 fe80::a00:27ff:fef3:3c51/64
route6 fe80::/64 metric 1024
lo: connected (externally) to lo
"lo"
loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536
inet4 127.0.0.1/8
inet6 ::1/128
route6 ::1/128 metric 256
DNS configuration:
servers: 8.8.8.8
interface: enp0s3
Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.
Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.
Check the active connections:
[root@icecube network-scripts]# nmcli -f NAME,DEVICE con show --active
NAME DEVICE
enp0s3 enp0s3
lo lo
Add a second IP address to the existing connection:
[root@icecube network-scripts]# nmcli con mod enp0s3 +ipv4.addresses 192.168.1.79/24
Apply the changes:
[root@icecube network-scripts]# nmcli device reapply enp0s3
Connection successfully reapplied to device 'enp0s3'.
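`nmcli con mod` persists the new address in the connection profile, while `reapply` pushes it to the live device without bouncing the link. For reference, the profile keyfile should now contain something like the excerpt below (path and exact contents vary by NetworkManager version; the gateway shown is the one from the routing table above):

```ini
# /etc/NetworkManager/system-connections/enp0s3.nmconnection (excerpt)
[ipv4]
address1=192.168.1.80/24,192.168.1.1
address2=192.168.1.79/24
```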
Verify that the interface now has two IP addresses:
[root@icecube network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:f3:3c:51 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.80/24 brd 192.168.1.255 scope global noprefixroute enp0s3
valid_lft forever preferred_lft forever
inet 192.168.1.79/24 brd 192.168.1.255 scope global secondary noprefixroute enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fef3:3c51/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@icecube network-scripts]#
Storage
A 10G virtual disk has been added to the server in order to create an LV that will be exported through iSCSI.
[root@icecube ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk
└─vg_iscsi-lv_storage 253:2 0 40G 0 lvm
sdc 8:32 0 20G 0 disk
└─vg_iscsi-lv_storage 253:2 0 40G 0 lvm
sdd 8:48 0 10G 0 disk
sr0 11:0 1 1024M 0 rom
[root@icecube ~]# vgcreate vg_multipath /dev/sdd
Physical volume "/dev/sdd" successfully created.
Volume group "vg_multipath" successfully created
[root@icecube ~]# lvcreate -n lv_mp_data -l 100%FREE vg_multipath
Logical volume "lv_mp_data" created.
[root@icecube ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel -wi-ao---- <17.00g
swap rhel -wi-ao---- 2.00g
lv_storage vg_iscsi -wi-ao---- 39.99g
lv_mp_data vg_multipath -wi-a----- <10.00g
Configure the LUNs
Current configuration after the iSCSI and Pacemaker LABs:
[root@icecube ~]# targetcli ls
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 1]
| | o- lun01 ........................................................... [/dev/vg_iscsi/lv_storage (40.0GiB) write-thru activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 1]
| o- iqn.2026-01.icecube:storage.target01 .............................................................................. [TPGs: 1]
| o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
| o- acls .......................................................................................................... [ACLs: 1]
| | o- iqn.2026-01.icecube:node01.initiator01 ................................................... [1-way auth, Mapped LUNs: 1]
| | o- mapped_lun0 ................................................................................. [lun0 block/lun01 (rw)]
| o- luns .......................................................................................................... [LUNs: 1]
| | o- lun0 ...................................................... [block/lun01 (/dev/vg_iscsi/lv_storage) (default_tg_pt_gp)]
| o- portals .................................................................................................... [Portals: 1]
| o- 0.0.0.0:3260 ..................................................................................................... [OK]
o- loopback ......................................................................................................... [Targets: 0]
- Connect to the targetcli console and create the new backstore device:
[root@icecube ~]# targetcli
targetcli shell version 2.1.57
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/iscsi/iqn.20...target01/tpg1> /backstores/block create lun_mp /dev/vg_multipath/lv_mp_data
Created block storage object lun_mp using /dev/vg_multipath/lv_mp_data.
/iscsi/iqn.20...target01/tpg1> ls /backstores/block
o- block ...................................................................................................... [Storage Objects: 2]
o- lun01 ............................................................... [/dev/vg_iscsi/lv_storage (40.0GiB) write-thru activated]
| o- alua ....................................................................................................... [ALUA Groups: 1]
| o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
o- lun_mp ........................................................ [/dev/vg_multipath/lv_mp_data (10.0GiB) write-thru deactivated]
o- alua ....................................................................................................... [ALUA Groups: 1]
o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
- Create the new LUN:
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/luns create /backstores/block/lun_mp
Created LUN 1.
Created LUN 1->1 mapping in node ACL iqn.2026-01.icecube:node01.initiator01
/iscsi/iqn.20...target01/tpg1> ls /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/luns
o- luns .................................................................................................................. [LUNs: 2]
o- lun0 .............................................................. [block/lun01 (/dev/vg_iscsi/lv_storage) (default_tg_pt_gp)]
o- lun1 ......................................................... [block/lun_mp (/dev/vg_multipath/lv_mp_data) (default_tg_pt_gp)]
/iscsi/iqn.20...target01/tpg1> ls /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals
o- portals ............................................................................................................ [Portals: 1]
o- 0.0.0.0:3260 ............................................................................................................. [OK]
- Because of the previous configuration, the 0.0.0.0 portal has to be deleted: it interferes with the multipath logic, since the target must be exported through each specific IP address.
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals delete 0.0.0.0 3260
Deleted network portal 0.0.0.0:3260
- Create one portal for each IP address assigned to the target:
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals create 192.168.1.80 3260
Using default IP port 3260
Created network portal 192.168.1.80:3260.
/iscsi/iqn.20...target01/tpg1> /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals create 192.168.1.79 3260
Using default IP port 3260
Created network portal 192.168.1.79:3260.
/iscsi/iqn.20...target01/tpg1> ls /iscsi/iqn.2026-01.icecube:storage.target01/tpg1/portals
o- portals ............................................................................................................ [Portals: 2]
o- 192.168.1.79:3260 ........................................................................................................ [OK]
o- 192.168.1.80:3260 ........................................................................................................ [OK]
/iscsi/iqn.20...target01/tpg1>
/iscsi/iqn.20...target01/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
[root@icecube network-scripts]#
Configure the Clients
Requirements
The following packages must be installed:
[root@nodo2 ~]# dnf install -y iscsi-initiator-utils device-mapper-multipath
Updating Subscription Management repositories.
Last metadata expiration check: 4:06:37 ago on Sat 03 Jan 2026 09:42:58 AM CET.
Package iscsi-initiator-utils-6.2.1.11-0.git4b3e853.el9.x86_64 is already installed.
Package device-mapper-multipath-0.8.7-35.el9.x86_64 is already installed.
Dependencies resolved.
================================================================================================================================================================================================
Package Architecture Version Repository Size
================================================================================================================================================================================================
Upgrading:
device-mapper-multipath x86_64 0.8.7-39.el9 rhel-9-for-x86_64-baseos-rpms 151 k
device-mapper-multipath-libs x86_64 0.8.7-39.el9 rhel-9-for-x86_64-baseos-rpms 284 k
kpartx x86_64 0.8.7-39.el9 rhel-9-for-x86_64-baseos-rpms 47 k
Transaction Summary
================================================================================================================================================================================================
Upgrade 3 Packages
Total download size: 481 k
Downloading Packages:
(1/3): device-mapper-multipath-0.8.7-39.el9.x86_64.rpm 529 kB/s | 151 kB 00:00
(2/3): device-mapper-multipath-libs-0.8.7-39.el9.x86_64.rpm 867 kB/s | 284 kB 00:00
(3/3): kpartx-0.8.7-39.el9.x86_64.rpm 140 kB/s | 47 kB 00:00
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 1.4 MB/s | 481 kB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Upgrading : kpartx-0.8.7-39.el9.x86_64 1/6
Upgrading : device-mapper-multipath-libs-0.8.7-39.el9.x86_64 2/6
Upgrading : device-mapper-multipath-0.8.7-39.el9.x86_64 3/6
Running scriptlet: device-mapper-multipath-0.8.7-39.el9.x86_64 3/6
Running scriptlet: device-mapper-multipath-0.8.7-35.el9.x86_64 4/6
Cleanup : device-mapper-multipath-0.8.7-35.el9.x86_64 4/6
Running scriptlet: device-mapper-multipath-0.8.7-35.el9.x86_64 4/6
Cleanup : device-mapper-multipath-libs-0.8.7-35.el9.x86_64 5/6
Cleanup : kpartx-0.8.7-35.el9.x86_64 6/6
Running scriptlet: kpartx-0.8.7-35.el9.x86_64 6/6
Verifying : device-mapper-multipath-0.8.7-39.el9.x86_64 1/6
Verifying : device-mapper-multipath-0.8.7-35.el9.x86_64 2/6
Verifying : device-mapper-multipath-libs-0.8.7-39.el9.x86_64 3/6
Verifying : device-mapper-multipath-libs-0.8.7-35.el9.x86_64 4/6
Verifying : kpartx-0.8.7-39.el9.x86_64 5/6
Verifying : kpartx-0.8.7-35.el9.x86_64 6/6
Installed products updated.
Upgraded:
device-mapper-multipath-0.8.7-39.el9.x86_64 device-mapper-multipath-libs-0.8.7-39.el9.x86_64 kpartx-0.8.7-39.el9.x86_64
Complete!
Generate the default multipath configuration and enable the services:
[root@nodo2 ~]# mpathconf --enable --with_multipathd y
[root@nodo2 ~]# systemctl enable --now iscsid multipathd
Connect the disks
[root@nodo2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.79
192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01
192.168.1.79:3260,1 iqn.2026-01.icecube:storage.target01
[root@nodo2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.80
192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01
192.168.1.79:3260,1 iqn.2026-01.icecube:storage.target01
[root@nodo2 ~]# iscsiadm -m node --login
Login to [iface: default, target: iqn.2026-01.icecube:storage.target01, portal: 192.168.1.79,3260] successful.
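The login creates one iSCSI session per portal, and each session contributes one path per LUN to multipath. A quick sanity check is to count the sessions; the block below embeds sample `iscsiadm -m session`-style lines for this LAB's two portals (illustrative only, the real output format may differ slightly; run the actual command on the node):

```shell
# Sample lines mimicking `iscsiadm -m session` for the two portals of this LAB
sample='tcp: [1] 192.168.1.79:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)
tcp: [2] 192.168.1.80:3260,1 iqn.2026-01.icecube:storage.target01 (non-flash)'

# One line per session; two portals should yield two sessions
count=$(printf '%s\n' "$sample" | grep -c 'iqn.2026-01.icecube:storage.target01')
echo "sessions to target01: $count"
```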
[root@nodo1 ~]# multipath -ll
mpatha (3600140502ad9c55f91b4682bccd2049c) dm-3 LIO-ORG,lun_mp
size=10.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 4:0:0:1 sde 8:64 active i/o pending running
`-+- policy='service-time 0' prio=50 status=enabled
`- 3:0:0:1 sdc 8:32 active ready running
mpathb (36001405b1e538202d0e4605a66e96ed4) dm-4 LIO-ORG,lun01
size=40G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 4:0:0:0 sdd 8:48 active i/o pending running
`-+- policy='service-time 0' prio=50 status=enabled
`- 3:0:0:0 sdb 8:16 active ready running
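When scripting health checks, the `multipath -ll` topology can be parsed to confirm that every map still has its full set of paths. A minimal sketch (the sample topology is taken verbatim from this LAB; a map listing fewer than two paths would mean multipath is not adding any redundancy):

```shell
# Count individual path lines per multipath map in `multipath -ll` output
count_paths() {
  awk '
    /^mpath/                    { map = $1 }       # map header, e.g. "mpatha (3600...)"
    /^[| ]*`- / || /^[| ]*\|- / { paths[map]++ }   # one line per path device
    END { for (m in paths) print m, paths[m] }
  '
}

result=$(count_paths <<'EOF'
mpatha (3600140502ad9c55f91b4682bccd2049c) dm-3 LIO-ORG,lun_mp
size=10.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 4:0:0:1 sde 8:64 active i/o pending running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 3:0:0:1 sdc 8:32 active ready running
EOF
)
echo "$result"
```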
[root@nodo1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 40G 0 disk
├─vg_shared-lv_data 253:2 0 40G 0 lvm /srv/shared
└─mpathb 253:4 0 40G 0 mpath
sdc 8:32 0 10G 0 disk
└─mpatha 253:3 0 10G 0 mpath
sdd 8:48 0 40G 0 disk
└─mpathb 253:4 0 40G 0 mpath
sde 8:64 0 10G 0 disk
└─mpatha 253:3 0 10G 0 mpath
sr0 11:0 1 1024M 0 rom
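Optionally, `/etc/multipath.conf` can pin stable, friendly aliases to each map instead of the automatically assigned `mpatha`/`mpathb` names, which are not guaranteed to be identical across nodes. A hypothetical fragment using the WWIDs reported by `multipath -ll` above (the alias names are made up for this example; reload the daemon with `systemctl reload multipathd` after editing):

```
# /etc/multipath.conf -- optional, hypothetical aliases
multipaths {
    # lun_mp (10G)
    multipath {
        wwid  3600140502ad9c55f91b4682bccd2049c
        alias mp_data
    }
    # lun01 (40G)
    multipath {
        wwid  36001405b1e538202d0e4605a66e96ed4
        alias mp_storage
    }
}
```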
Fixes
After multipath takes over the iSCSI devices, LVM warns that its devices file still references the component device /dev/sdb instead of the multipath map; update it so LVM addresses the storage through /dev/mapper/mpathb:
[root@nodo1 ~]# pvs
WARNING: devices file is missing /dev/mapper/mpathb (253:4) using multipath component /dev/sdb.
See lvmdevices --update for devices file update.
PV VG Fmt Attr PSize PFree
/dev/sda2 rhel lvm2 a-- <19.00g 0
[root@nodo1 ~]# lvmdevices --update
WARNING: devices file is missing /dev/mapper/mpathb (253:4) using multipath component /dev/sdb.
IDTYPE=sys_wwid IDNAME=naa.6001405b1e538202d0e4605a66e96ed4 DEVNAME=/dev/sdb PVID=dExzbuXMlMyc5lyceBXm7crwQeCP6Qgd: remove multipath component
Adding multipath device /dev/mapper/mpathb for multipath component /dev/sdb.
IDTYPE=mpath_uuid (old sys_wwid) IDNAME=mpath-36001405b1e538202d0e4605a66e96ed4 (old naa.6001405b1e538202d0e4605a66e96ed4) DEVNAME=/dev/mapper/mpathb (old /dev/sdb) PVID=dExzbuXMlMyc5lyceBXm7crwQeCP6Qgd: update
Updated devices file to version 1.1.38
[root@nodo1 ~]# pvs
WARNING: Device mismatch detected for vg_shared/lv_data which is accessing /dev/sdb instead of /dev/mapper/mpathb.
PV VG Fmt Attr PSize PFree
/dev/mapper/mpathb vg_shared lvm2 a-- 39.96g 0
/dev/sda2 rhel lvm2 a-- <19.00g 0
[root@nodo1 ~]#
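The identifiers in the `lvmdevices` output above all name the same LUN: the kernel's `sys_wwid` (`naa.` prefix), the multipath map WWID (the NAA descriptor prefixed with its designator type digit, 3), and the `mpath-` IDNAME recorded in the devices file. A minimal sketch of the relationship, using the values from this LAB:

```shell
# sys_wwid as exposed by the kernel (value from the lvmdevices output above)
naa_id="naa.6001405b1e538202d0e4605a66e96ed4"

# multipath WWID: the NAA identifier prefixed with its designator type digit, 3
wwid="3${naa_id#naa.}"

# IDNAME written to /etc/lvm/devices/system.devices by `lvmdevices --update`
idname="mpath-${wwid}"

echo "$wwid"     # matches mpathb's WWID in `multipath -ll`
echo "$idname"
```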