Device       Sectors (Version8: SupportRaid)
 /dev/nvme0n11      4980480 (2431 MB)
 /dev/nvme0n12      4194304 (2048 MB)
Reserved size:       260352 ( 127 MB)
Primary data partition will be created.
WARNING: This action will erase all data on '/dev/nvme0n1' and repart it, are you sure to continue? [y/N]y
Cleaning all partitions...
Creating sys partitions...
Creating primary data partition...
Please remember to mdadm and mkfs new partitions.
It will create partitions that follow the layout DSM requires.
Type
fdisk -l /dev/nvme0n1
You will see that the partition layout has been created:
Disk /dev/nvme0n1: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xef61a3e4

Device         Boot   Start       End   Sectors  Size Id Type
/dev/nvme0n1p1         2048   4982527   4980480  2.4G fd Linux raid autodetect
/dev/nvme0n1p2      4982528   9176831   4194304    2G fd Linux raid autodetect
/dev/nvme0n1p3      9437184 500103449 490666266  234G fd Linux raid autodetect
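As a quick sanity check (this snippet is not part of the original script), the sector counts above convert to the sizes both tools report, assuming 512-byte sectors:

```shell
# Sector counts taken from the fdisk output above; 512-byte sectors.
for s in 4980480 4194304 490666266; do
  echo "$s sectors = $(( s * 512 / 1024 / 1024 )) MiB"
done
```

The first two match the 2431 MB / 2048 MB system partitions the script printed, and the third (~234 GiB) is the primary data partition.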
Create Basic Disk
I have only tried creating a Basic Disk volume, as I have only one SSD. I have not tested the other volume/storage pool types (RAID 0, RAID 1, SHR).
For a Basic Disk, you need to create a single-partition RAID1 device in order for DSM to recognize it (this is what DSM Storage Manager does when it creates a Basic Disk volume).
Type
cat /proc/mdstat
To see your current RAID setup
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sda3[0] sdb3[1]
      5855700544 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sdc3[0] sdd3[1]
      9761614848 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      2097088 blocks [4/4] [UUUU]
md0 : active raid1 sda1[0] sdb1[3] sdc1[1] sdd1[2]
      2489920 blocks [4/4] [UUUU]
AFAIK, md0 is the system partition and md1 is the system swap. Your current volumes/storage pools start at md2.
(If md4 already exists, you should use the next free md number.)
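Picking the next free md number can be sketched as follows (not from the original post; it parses the sample mdstat output above, on which md0–md3 are in use — the post itself goes on to use /dev/md5, so adjust to whatever is free on your box):

```shell
# Sample of the device lines from /proc/mdstat shown above.
mdstat='md2 : active raid1 sda3[0] sdb3[1]
md3 : active raid1 sdc3[0] sdd3[1]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
md0 : active raid1 sda1[0] sdb1[3] sdc1[1] sdd1[2]'

# Highest md number in use, then the next one up.
last=$(printf '%s\n' "$mdstat" | grep -o '^md[0-9]*' | tr -d md | sort -n | tail -1)
echo "next free array: /dev/md$(( last + 1 ))"
```

On a live system you would read /proc/mdstat directly instead of the sample string.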
Type

mdadm --create /dev/md5 --level=1 --raid-devices=1 --force /dev/nvme0n1p3

(/dev/md5 and /dev/nvme0n1p3 match the examples here; substitute your own md number and data partition) and answer y when prompted
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that your
    boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
Create Filesystem
Type
mkfs.ext4 -F /dev/md5
as I use ext4
mke2fs 1.42.6 (21-Sep-2012)
Filesystem label=1.42.6-23824
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
15335424 inodes, 61333024 blocks
25600 blocks (0.04%) reserved for the super user
First data block=0
Maximum filesystem blocks=2210398208
1872 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
	2654208, 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
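To confirm the filesystem really spans the whole data partition, you can compare the block count mke2fs reports with the sector count from fdisk (a small sketch, not part of the original post; the numbers are taken from the outputs above):

```shell
# ext4 size: block count reported by mke2fs times the 4 KiB block size.
fs_bytes=$(( 61333024 * 4096 ))
# Partition size: sector count of /dev/nvme0n1p3 from fdisk, 512-byte sectors.
part_bytes=$(( 490666266 * 512 ))
echo "filesystem: $(( fs_bytes / 1024 / 1024 / 1024 )) GiB"
echo "partition:  $(( part_bytes / 1024 / 1024 / 1024 )) GiB"
```

Both come out to roughly the same ~234G the fdisk listing shows; the small difference is filesystem overhead.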
If you need btrfs instead, type
mkfs.btrfs -f /dev/md5
which can be used for VM storage.
After the format completes, type
reboot
and after the machine boots up, you will see the volume in DSM Storage Manager.
I did the update a day after this was first posted, since I had an NVMe drive sitting idle in the DS918. I copied all my Docker containers over and launched them so my spindle drives could go idle.
Everything worked fine for a week, but then all my directories and files on /volume4 mysteriously disappeared, and all the Docker data was lost, without any notice. This was before the DSM update, so clearly something else happened that nuked the partition. The DSM update installed last night and /volume4 is still there, so the changes persist across updates, but all the volume data is lost.