
Commit 024484a

Merge pull request #366 from xcp-ng/dtt-xostor-new-disks
xostor.md: add separate section for adding disks
2 parents f75f7d7 + 8b6b993 commit 024484a

docs/xostor/xostor.md

Lines changed: 23 additions & 9 deletions
@@ -659,14 +659,23 @@ xe host-call-plugin host-uuid=<HOST_UUID> plugin=linstor-manager fn=addHost args
```
For a short description, this command (re)creates a PBD, opens DRBD/LINSTOR ports, starts specific services and adds the node to the LINSTOR database.

-If you have storage devices to use on the host, a LINSTOR storage layer is not directly added to the corresponding node. You can verify the storage state like this:
+If you have storage devices to use on the host, a LINSTOR storage layer is not directly added to the corresponding node.
+You can follow the [section](#how-to-add-storage-on-a-new-host) below to add storage to this new node.
+
+### How to add storage on a new host?
+
+There are two simple steps:
+1. Create a VG (and LV for thin) with all the host disks
+2. Create an SP for the host pointing to this new VG
+
+You can verify the storage state like this:
```
linstor sp list
```

Small example:

-A `LVM_THIN` entry is missing for `hpmc17` in this context:
+A `LVM_THIN` entry is missing for `hpmc17` in this context, meaning it has no local storage:
```
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
@@ -679,13 +688,15 @@ A `LVM_THIN` entry is missing for `hpmc17` in this context:
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
```

-So if your host has disks that need to be added to the linstor SR, you will need to create VG/LV.
+1. Creating an LVM volume group and/or thin device
+
+To add disks to the linstor SR, you will need to create an LVM volume group.
Connect to the machine to modify and use `vgcreate` with the wanted disks to create a VG group on the host:
```
vgcreate <GROUP_NAME> <DEVICES>
```

-In this example where we want to use /dev/nvme0n1 with the group `linstor_group`:
+In our example where we want to use `/dev/nvme0n1` with the group `linstor_group`:
```
vgcreate linstor_group /dev/nvme0n1
```
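As an optional sanity check (not part of the documented procedure), you can confirm the volume group exists with the standard LVM tooling, assuming the example group `linstor_group`:
```
# Optional: list the new volume group with its size and free space
vgs linstor_group
```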
@@ -696,20 +707,23 @@ lvcreate -l 100%FREE -T <GROUP_NAME>/<LV_THIN_VOLUME>
lvchange -ay <GROUP_NAME>/<LV_THIN_VOLUME>
```
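Similarly, once the thin device has been created and activated, an optional check (assuming the example names `linstor_group` and `thin_device`) is:
```
# Optional: the thin pool LV should be listed and marked active
lvs linstor_group/thin_device
```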

-Always regarding this example, we have:
-- `<HOSTNAME>`: `linstor_group`.
+Most of the time and in our example, we will have:
+- `<GROUP_NAME>`: `linstor_group`.
- `<LV_THIN_VOLUME>`: `thin_device`.
+- `<SP_NAME>`: `xcp-sr-linstor_group_thin_device`.
+
+2. Create a new storage pool attached to the node

-Run the correct command where the controller is running to add the volume group in the LINSTOR database:
+Run the corresponding command on the host where the controller is running to add the volume group in the LINSTOR database:
```
# For thin:
-linstor storage-pool create lvmthin <NODE_NAME> <SP_NAME> <VG_NAME>
+linstor storage-pool create lvmthin <NODE_NAME> <SP_NAME> <VG_NAME>/<LV_THIN_VOLUME>

# For thick:
linstor storage-pool create lvm <NODE_NAME> <SP_NAME> <VG_NAME>
```

-In this example:
+In our example:
```
linstor storage-pool create lvmthin hpmc17 xcp-sr-linstor_group_thin_device linstor_group/thin_device
```
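To recap the whole procedure for this example (a sketch only, assuming the thin setup and the example names above; run the LVM commands on the new host and the `linstor` command on the host where the controller is running):
```
# On the new host: create the VG and the thin device
vgcreate linstor_group /dev/nvme0n1
lvcreate -l 100%FREE -T linstor_group/thin_device
lvchange -ay linstor_group/thin_device

# Where the controller runs: register the storage pool, then verify
linstor storage-pool create lvmthin hpmc17 xcp-sr-linstor_group_thin_device linstor_group/thin_device
linstor sp list
```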
