Persistent Memory


These pages contain instructions, links and other information related to persistent memory enabling in Linux.

Industry specifications

Quick Setup Guide


One interesting use of the PMEM driver is to allow users to begin developing software using DAX, which was upstreamed in v4.0. On a non-NFIT system this can be done by using the “memmap” kernel command-line parameter to manually create a type 12 (legacy persistent memory) e820 region.

Here are the additions I made for my system with 32 GiB of RAM:

1) Reserve 16 GiB of memory via the “memmap” kernel parameter in grub's menu.lst, using PMEM's new “!” specifier:

memmap=16G!16G

The documentation for this parameter can be found here: https://www.kernel.org/doc/Documentation/kernel-parameters.txt

Also see: How to choose the correct memmap kernel parameter for PMEM on your system.
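The arithmetic behind the value can be sketched in shell. This assumes the simplest possible layout, a contiguous 32 GiB of RAM with the top 16 GiB reserved (the figures are this example's; real systems have memory holes, which is why the linked page matters):

```shell
# Assumed example layout: 32 GiB of contiguous RAM, reserve the top 16 GiB.
total_gib=32
reserve_gib=16
start_gib=$((total_gib - reserve_gib))   # region starts where usable RAM is cut off
echo "memmap=${reserve_gib}G!${start_gib}G"
```

Running this prints `memmap=16G!16G`, the value used above: size before the `!`, start offset after it.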

2) Set up the correct kernel configuration options for PMEM and DAX in .config.

Options in make menuconfig:

  • Device Drivers - NVDIMM (Non-Volatile Memory Device) Support
    • PMEM: Persistent memory block device support
    • BLK: Block data window (aperture) device support
    • BTT: Block Translation Table (atomic sector updates)
  • Enable the block layer
    • Block device DAX support <not available in kernel-4.5 due to page cache issues>
  • File systems
    • Direct Access (DAX) support
  • Processor type and features
    • Support non-standard NVDIMMs and ADR protected memory <if using the memmap kernel parameter>

The corresponding options in .config:

CONFIG_BLK_DEV_RAM_DAX=y
CONFIG_FS_DAX=y
CONFIG_X86_PMEM_LEGACY=y
CONFIG_LIBNVDIMM=y
CONFIG_BLK_DEV_PMEM=m
CONFIG_ARCH_HAS_PMEM_API=y
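A quick way to confirm a built kernel has these options is to grep its config file. A minimal sketch, using an inline sample fragment as a stand-in for the real file (on a live system you would point the loop at /boot/config-$(uname -r) or a decompressed /proc/config.gz instead):

```shell
# Sample .config fragment -- stand-in for the real kernel config file
cat > /tmp/pmem-sample.config <<'EOF'
CONFIG_LIBNVDIMM=y
CONFIG_BLK_DEV_PMEM=m
CONFIG_FS_DAX=y
CONFIG_X86_PMEM_LEGACY=y
EOF

# Each required option must be built in (=y) or modular (=m)
for opt in LIBNVDIMM BLK_DEV_PMEM FS_DAX X86_PMEM_LEGACY; do
    if grep -q "^CONFIG_${opt}=[ym]" /tmp/pmem-sample.config; then
        echo "CONFIG_${opt}: present"
    else
        echo "CONFIG_${opt}: MISSING"
    fi
done
```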

This configuration gave me one pmem device with 16 GiB of space:

# fdisk -l /dev/pmem0

Disk /dev/pmem0: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
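The numbers fdisk reports are self-consistent, and easy to verify with shell arithmetic:

```shell
# 16 GiB expressed in bytes, and the equivalent count of 512-byte sectors
bytes=$((16 * 1024 * 1024 * 1024))
sectors=$((bytes / 512))
echo "$bytes bytes, $sectors sectors"   # 17179869184 bytes, 33554432 sectors
```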

lsblk shows the block devices, including pmem devices. Examples:

$ lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
pmem0                  259:0    0    16G  0 disk
├─pmem0p1              259:6    0     4G  0 part /mnt/ext4-pmem0
└─pmem0p2              259:7    0  11.9G  0 part /mnt/btrfs-pmem0
pmem1                  259:1    0    16G  0 disk /mnt/xfs-pmem1
pmem2                  259:2    0    16G  0 disk /mnt/xfs-pmem2
pmem3                  259:3    0    16G  0 disk /mnt/xfs-pmem3
$ lsblk -t
NAME                   ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE  RA WSAME
pmem0                          0   4096      0    4096     512    0           128 128    0B
pmem1                          0   4096      0    4096     512    0           128 128    0B
pmem2                          0   4096      0    4096     512    0           128 128    0B
pmem3                          0   4096      0    4096     512    0           128 128    0B

Partitions


You can divide pmem devices into partitions. In parted, the mkpart subcommand has this syntax:

mkpart [part-type fs-type name] start end

Example carving a 16 GiB /dev/pmem0 into 4 GiB, 8 GiB, and 4 GiB partitions, with 1 MiB of alignment reserved at the beginning and end. Note that parted displays sizes in SI decimal units, while lsblk uses IEC binary units:

$ parted -s -a optimal /dev/pmem0 \
        mklabel gpt -- \
        mkpart primary ext4 1MiB 4GiB \
        mkpart primary xfs 4GiB 12GiB \
        mkpart primary btrfs 12GiB -1MiB \
        print

Model: Unknown (unknown)
Disk /dev/pmem0: 17.2GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4295MB  4294MB  ext4         primary
 2      4295MB  12.9GB  8590MB  xfs          primary
 3      12.9GB  17.2GB  4294MB  btrfs        primary
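The unit mismatch between the two tools is easy to reproduce. Partition 1 above spans 1 MiB to 4 GiB; in bytes (which is what parted divides by powers of 10 for display, while lsblk divides by powers of 2):

```shell
# Partition 1 spans 1 MiB .. 4 GiB; parted prints SI units (1 MB = 10^6 bytes)
start=$((1024 * 1024))               # 1 MiB  -> parted rounds to "1049kB"
end=$((4 * 1024 * 1024 * 1024))      # 4 GiB  -> parted rounds to "4295MB"
size=$((end - start))
echo "start=${start} end=${end} size=${size}"   # size=4293918720 -> "4294MB" / "4G"
```

So the same partition shows as 4294MB in parted's print output and 4G in lsblk.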

$ lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
pmem0                  259:0    0    16G  0 disk
├─pmem0p1              259:4    0     4G  0 part
├─pmem0p2              259:5    0     8G  0 part
└─pmem0p3              259:8    0     4G  0 part

$ fdisk -l /dev/pmem0
Disk /dev/pmem0: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B334CBC6-1C56-47DF-8981-770C866CEABE

Device          Start      End  Sectors Size Type
/dev/pmem0p1     2048  8388607  8386560   4G Linux filesystem
/dev/pmem0p2  8388608 25165823 16777216   8G Linux filesystem
/dev/pmem0p3 25165824 33552383  8386560   4G Linux filesystem

Filesystems


You can place any filesystem (e.g., ext4, xfs, btrfs) on a pmem device (e.g., /dev/pmem0), a partition on a pmem device (e.g. /dev/pmem0p1), a btt device (e.g., /dev/pmem0s), or a partition on a btt device (e.g., /dev/pmem0sp1).

ext4 and xfs support DAX, which allows applications to perform direct access to persistent memory with mmap().

Example creating ext4, xfs, and btrfs filesystems on three partitions and mounting ext4 and xfs with DAX (note: df -h displays sizes in IEC binary units; df -H uses SI decimal units):

$ mkfs.ext4 -F /dev/pmem0p1
$ mkfs.xfs -f /dev/pmem0p2
$ mkfs.btrfs -f /dev/pmem0p3
$ mount -o dax /dev/pmem0p1 /mnt/ext4-pmem0
$ mount -o dax /dev/pmem0p2 /mnt/xfs-pmem0
$ mount /dev/pmem0p3 /mnt/btrfs-pmem0

$ lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
pmem0                  259:0    0    16G  0 disk
├─pmem0p1              259:4    0     4G  0 part /mnt/ext4-pmem0
├─pmem0p2              259:5    0     8G  0 part /mnt/xfs-pmem0
└─pmem0p3              259:8    0     4G  0 part /mnt/btrfs-pmem0

$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/pmem0p1                    3.9G  8.0M  3.7G   1% /mnt/ext4-pmem0
/dev/pmem0p2                    8.0G   33M  8.0G   1% /mnt/xfs-pmem0
/dev/pmem0p3                    4.0G   17M  3.8G   1% /mnt/btrfs-pmem0

$ df -H
Filesystem                      Size  Used Avail Use% Mounted on
/dev/pmem0p1                    4.2G  8.4M  4.0G   1% /mnt/ext4-pmem0
/dev/pmem0p2                    8.6G   34M  8.6G   1% /mnt/xfs-pmem0
/dev/pmem0p3                    4.3G   17M  4.1G   1% /mnt/btrfs-pmem0

iostats


iostats are disabled by default due to performance overhead (e.g., 12M IOPS dropping 25% to 9M IOPS). However, they can be enabled in sysfs if desired.

As of kernel 4.5, iostats are only collected for the base pmem device, not per-partition. Also, I/Os that go through DAX paths (rw_page, rw_bytes, and direct_access functions) are not counted, so nothing is collected for:

  • I/O to files in filesystems mounted with -o dax
  • I/O to raw block devices if CONFIG_BLOCK_DAX is enabled
$ echo 1 > /sys/block/pmem0/queue/iostats
$ echo 1 > /sys/block/pmem1/queue/iostats
$ echo 1 > /sys/block/pmem2/queue/iostats
$ echo 1 > /sys/block/pmem3/queue/iostats

$ iostat -mxy 1
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          21.53    0.00   78.47    0.00    0.00    0.00

Device:         rrqm/s   wrqm/s        r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
pmem0             0.00     0.00 4706551.00    0.00 18384.95     0.00     8.00     6.00    0.00    0.00    0.00   0.00 113.90
pmem1             0.00     0.00 4701492.00    0.00 18365.20     0.00     8.00     6.01    0.00    0.00    0.00   0.00 119.30
pmem2             0.00     0.00 4701851.00    0.00 18366.60     0.00     8.00     6.37    0.00    0.00    0.00   0.00 108.90
pmem3             0.00     0.00 4688767.00    0.00 18315.50     0.00     8.00     6.43    0.00    0.00    0.00   0.00 117.40
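The throughput columns are consistent with the request counts: avgrq-sz is reported in 512-byte sectors, so 8.00 means 4 KiB per request, and r/s times 4 KiB reproduces the rMB/s figure:

```shell
# pmem0 above: 4706551 reads/s at 8 sectors (8 * 512 = 4096 bytes) per request
reads_per_sec=4706551
bytes_per_req=$((8 * 512))
mb_per_sec=$((reads_per_sec * bytes_per_req / 1048576))   # integer MiB/s
echo "$mb_per_sec"   # 18384, matching the ~18384.95 rMB/s reported by iostat
```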

fio


Example fio script to perform 4 KiB random reads to four pmem devices. Where a parameter appears twice (bs, numjobs, gtod_reduce, rw), fio takes the last value; reorder or comment out the alternatives to switch between the bandwidth, IOPS, and latency configurations:

[global]
direct=1
ioengine=libaio
norandommap
randrepeat=0
bs=256k         # for bandwidth
bs=4k           # for IOPS and latency
iodepth=1
runtime=30
time_based=1
group_reporting
thread
gtod_reduce=0   # for latency
gtod_reduce=1   # IOPS and bandwidth
zero_buffers

## local CPU
numjobs=9       # for bandwidth
numjobs=1       # for latency
numjobs=18      # for IOPS
cpus_allowed_policy=split

rw=randwrite
rw=randread

# CPU affinity based on two 18-core CPUs with QPI snoop configuration of cluster-on-die

[drive_0]
filename=/dev/pmem0
cpus_allowed=0-8,36-44

[drive_1]
filename=/dev/pmem1
cpus_allowed=9-17,45-53

[drive_2]
filename=/dev/pmem2
cpus_allowed=18-26,54-62

[drive_3]
filename=/dev/pmem3
cpus_allowed=27-35,63-71
start.txt · Last modified: 2016/05/25 20:46 by Vishal Verma