Pi5 NVMe Base not working reliably

Hi all,

Got a Pi5 8GB with an official active cooler. Was very keen to get NVMe boot working and chose the NVMe Base over the Pineberry one.

Had it for a few days now and it’s causing me no end of stress. I have tried two different NVMe drives. Both are Samsung 970 EVO Plus drives. One a 1TB and the other a 2TB.

I was initially trying to use the 1TB one, but as I was having issues I am now on the 2TB one.

The problem is the device does not always show up under lsblk. I can’t pinpoint the exact issue. When it does show up I’m able to mount it as normal etc. But then sometimes on a heavy data transfer, it crashes the Pi and upon reboot the NVMe doesn’t show up anymore.

I wondered if it was a power supply issue, so I have tried different power supplies. I am currently using my UGREEN 200W USB power station that I use to power a 16-inch MacBook Pro when out and about, connected to one of its 100W USB-C ports. I can’t imagine that this device would not be able to supply enough juice to the Pi.

I am running on the fully supported PCIe version using dtparam=pciex1 in my config.txt. I have tried gen3 but I’m having the same issues.

I have tried re-seating the ribbon cable, blowing out the connector with an air duster, etc. The cable is seated 100% correctly.

Has anyone else faced similar issues?

Thanks,
FS

I don’t have one of these, but I would take a wild guess that this is a power issue.

I know your PSU can provide 100W, but USB PD is designed to provide voltage and current in specific combinations. The Pi 5 will only take 5V, and the usual current for this is 3A, giving 5V x 3A = 15W. The UGREEN website doesn’t list the exact combinations the 200W PSU can do, but 5V3A is standard.

The Samsung 970 EVO specs page says that it can draw up to 9W at peak burst, which with 15W would leave 6W (~1.2A) for the Pi 5, which may not really be enough (info from the Pi foundation suggests budgeting ~12W to cover the Pi 5’s peak usage). In that case I’d guess that during the heavy data transfer you describe, the NVMe drive pulls a bit too much current, browns out, and gets stuck in some kind of crash state.
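As a back-of-envelope version of that budget, here’s a tiny sketch anyone can plug their own drive’s figures into. The 15W/25W supply figures, the ~9W SSD peak and the ~12W Pi 5 budget are the assumptions above, not measurements:

```python
# Rough power-budget sketch. All figures are assumptions from the spec
# pages discussed above: 5V/3A = 15W (standard USB-PD), 5V/5A = 25W
# (official Pi 5 PSU), ~9W SSD peak burst, ~12W Pi 5 peak budget.
def headroom_w(supply_w: float, ssd_peak_w: float = 9.0, pi_budget_w: float = 12.0) -> float:
    """Watts to spare; a negative result suggests brown-out risk under load."""
    return supply_w - (ssd_peak_w + pi_budget_w)

print(headroom_w(15.0))  # standard 5V/3A supply: 6W short
print(headroom_w(25.0))  # official 5V/5A Pi 5 supply: some headroom
```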

The Pimoroni product page for the NVMe Base says that it does work with the Samsung 980 and 980 Pro SSDs, but the spec pages for them say they consume 5.3W and 7.4W respectively, so they’re easier on power than the 970’s 9W.

These power calculations are all approximate, but one possible solution is to get the official power supply, which can give a slightly non-standard 5V 5A or 25W. That might be more capable of keeping up with both the Pi 5 and the SSD. That’s almost certainly what Pimoroni were using for the tests.

This might actually be one of those cases where the NVMe Base needs the extra 5V supply from GPIO. The flat flex cable is technically limited to providing 5V@1A or around 5W. The 3V3 supply on the NVMe Base can do up to 3A continuous if needed when providing power through the extra 5V header.

In this setup drives run at Gen 2 or Gen 3 x1; this has been enough for the drives we’ve tested, as it’s e-z-mode for the drive compared to a normal Gen 3/Gen 4 x4 workload.
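For anyone wondering what “e-z-mode” means in numbers, here’s a rough sketch using the standard PCIe line rates and encoding overheads (approximate; it ignores protocol overhead beyond line encoding):

```python
# Approximate usable PCIe bandwidth per link, from the well-known
# line rates (GT/s) and encoding efficiencies for each generation.
def pcie_mb_s(gen: int, lanes: int) -> float:
    rates = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}  # (GT/s, encoding efficiency)
    gt_s, eff = rates[gen]
    return gt_s * eff * lanes * 1000 / 8  # convert Gbit/s to MB/s

print(round(pcie_mb_s(2, 1)))  # Gen 2 x1, the NVMe Base default
print(round(pcie_mb_s(3, 1)))  # Gen 3 x1, with the experimental setting
print(round(pcie_mb_s(3, 4)))  # Gen 3 x4, what these drives were built for
```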

I’ll see if I can get my hands on a drive to confirm this is the case for the 970.

@fsociety3765: If you have the bits you need to add the extra 5V, try that. If you don’t, drop your details to support@pimoroni.com saying “@guru sent me from this thread on the forums” and I’ll sort out a little care package so we can test if this solves the problem :-)

@Guru Out of curiosity, what does the extra 5V supply entail? Is that what the two unpopulated through-hole pads on the board are for?

@Shoe The process would be soldering a header/socket to those pads and then connecting that to 5V (Pin 2 or 4) and GND (Pin 6) on the GPIO (https://pinout.xyz/). I’d recommend socket/socket jumper-jerky-junior or equivalent.
  • Break Away Headers - Right-Angle
  • Dual pin jumper wire
  • Jumper Jerky Junior - Socket to Socket

Interesting, thanks!

Hi all,

Thanks for the great info.

I have ordered an official power supply for the Pi5 to see if that makes any difference but sounds like either a more power-efficient drive, or adding the extra 5V, or even both is required.

I just happened to have a bunch of the 970s around so naturally just tried to use them.


I’ll get one on order to replicate this and avoid you having to solder. Alternatively, I’m happy to swap one of your 970s for one of our drives :-)

Hi
I am running two PI5’s with official PSU and 1TB Samsung 970 Evo plus with no issues so far.
Regards


Hi all,

I am now using the official PSU for the Pi5 and unfortunately, I am having the same issue. With both 970 Evo Plus drives.

I think there must be a setup/environment difference.

Raspberry Pi5 8GB
Official Raspberry Pi5 Power Supply
dtparam=pciex1 in /boot/config.txt
PCIE_PROBE=1 added to EEPROM config
Not overclocked
ArchLinux ARM OS

The two drives I have tried are:
Samsung 970 EVO Plus 1TB
Samsung 970 EVO Plus 2TB

Sometimes after a reboot, /dev/nvme0n1 will show up. But on the next reboot, it will disappear. And while it is detected, it seems to function OK, until any kind of heavy work like cloning the OS from the SD card using rsync or creating swapfiles using dd. At that point, it fails and goes read-only. Then upon reboot it is not detected again.

I suppose I could try using the standard Raspberry Pi OS as a test to see if perhaps Arch is causing the issue. I guess that’s a possibility. I will also check out the ArchLinux ARM forums.

I think I may shelve the NVMe functionality for now. I’ve seen that Argon has an upcoming case with an optional NVMe base and heatsink. Looking at the pictures it also seems to have some pins on the board to get the extra 5V from the GPIO. So I may just hold out for that and use the SD card in the meantime.

Thanks for everyone’s time and advice thus far.
FS

@fsociety3765
That’s a shame. I was hoping that as the dust settles on NVMe we might identify some drives as universally reliable (the 970 being a favorite of mine).
I have been using an 8GB PI5 and 1TB Samsung 970 Evo Plus (@gen3) as my everyday desktop for a couple of weeks (just about does it 80-90% of the time). Bookworm64, Official PSU, 2x HDMI, IQDAC and separate USB 3 powered hub. Checking sudo journalctl -b daily for NVMe errors.


@fsociety3765 I’d definitely check which EEPROM/Firmware version is on your RPi 5 and use the RPI OS to update it (and try your drive with the RPI OS) to see if that makes a difference.

When you run ‘sudo rpi-eeprom-update’ you should see a date after Dec 6th 2023 and ideally Jan 5th.
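If the date is older than that, something like this should bring it up to date (a sketch assuming Raspberry Pi OS with the standard rpi-eeprom tools installed):

```shell
# Show the installed vs latest bootloader, then stage an update if available.
sudo rpi-eeprom-update      # compare the CURRENT and LATEST dates
sudo rpi-eeprom-update -a   # stage the latest release for install
sudo reboot                 # the update is applied during the next boot
```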

I have the same problem but with a Kioxia kxg6aznv256g.

The drive showed up fine to start with, but after copying some data it errored out and disappeared! Rebooting doesn’t bring it back, and neither does unplugging the power. I’m using an official rpi5 power supply.

I’m running firmware

root@rpi5:~# rpi-eeprom-update 
BOOTLOADER: up to date
   CURRENT: fre  5 jan 2024 15:57:40 UTC (1704470260)
    LATEST: fre  5 jan 2024 15:57:40 UTC (1704470260)
   RELEASE: default (/lib/firmware/raspberrypi/bootloader-2712/default)
            Use raspi-config to change the release.

Below are parts of the terminal session where the drive disappears.

@guru I’m up for some soldering if you think it would help.

root@rpi5:~# dmesg
...
[    1.882783] nvme nvme0: pci function 0000:01:00.0                                                             
[    1.887705] brcm-pcie 1000110000.pcie: clkreq control enabled                                                 
[    1.887713] nvme 0000:01:00.0: enabling device (0000 -> 0002)                                                 
[    1.888036] brcm-pcie 1000120000.pcie: host bridge /axi/pcie@120000 ranges:                                                                                                                                                     
[    1.906780] brcm-pcie 1000120000.pcie:   No bus range found for /axi/pcie@120000, using [bus 00-ff]                                                                                                                             
[    1.916242] brcm-pcie 1000120000.pcie:      MEM 0x1f00000000..0x1ffffffffb -> 0x0000000000                                                                                                                                      
[    1.924899] brcm-pcie 1000120000.pcie:      MEM 0x1c00000000..0x1effffffff -> 0x0400000000                                                                                                                                      
[    1.933571] brcm-pcie 1000120000.pcie:   IB MEM 0x1f00000000..0x1f003fffff -> 0x0000000000                                                                                                                                      
[    1.942231] brcm-pcie 1000120000.pcie:   IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000                    
[    1.951803] nvme nvme0: 4/0/0 default/read/poll queues                                                        
[    1.951969] brcm-pcie 1000120000.pcie: setting SCB_ACCESS_EN, READ_UR_MODE, MAX_BURST_SIZE                    
[    1.957920]  nvme0n1: p1 p2 p3 p4
...

root@rpi5:~# lsblk                                                                                               
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS                                                                
mmcblk0     179:0    0  29,8G  0 disk                                                                                                                                                                                              
├─mmcblk0p1 179:1    0   512M  0 part /boot/firmware                                                             
└─mmcblk0p2 179:2    0  29,3G  0 part /                                                                          
nvme0n1     259:0    0 238,5G  0 disk                                                                                                                                                                                              
├─nvme0n1p1 259:1    0   260M  0 part                                                                                                                                                                                              
├─nvme0n1p2 259:2    0    16M  0 part                                                                                                                                                                                              
├─nvme0n1p3 259:3    0 237,2G  0 part 
└─nvme0n1p4 259:4    0  1000M  0 part 

# Trying to copy the existing sd-card to the nvme
$ time dd bs=4M if=/dev/mmcblk0 of=/dev/nvme0n1 conv=fdatasync status=progress
1480589312 bytes (1,5 GB, 1,4 GiB) copied, 163 s, 9,1 MB/s                                                       
dd: error writing '/dev/nvme0n1': No space left on device                                                                                                                                                                          
dd: fdatasync failed for '/dev/nvme0n1': Input/output error                                                      
dd: fsync failed for '/dev/nvme0n1': Input/output error                                                                                                                                                                            
354+0 records in                                                                                                                                                                                                                   
353+0 records out                                                                                                
1480589312 bytes (1,5 GB, 1,4 GiB) copied, 162,805 s, 9,1 MB/s                                       
                                                                                                                 
real    2m42,810s                                                                                                
user    0m0,000s                                                                                                 
sys     0m3,131s                                                                                                 

# The nvme disappeared!
root@rpi5:~# lsblk                                                                                               
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS                                                                 
mmcblk0     179:0    0 29,8G  0 disk                                                                                                                                                                                               
├─mmcblk0p1 179:1    0  512M  0 part /boot/firmware                                                                                                                                                                                
└─mmcblk0p2 179:2    0 29,3G  0 part /                                                                           

root@rpi5:~# dmesg
...
[  812.330528] nvme nvme0: I/O 0 (I/O Cmd) QID 1 timeout, aborting
[  812.330547] nvme nvme0: I/O 1 (I/O Cmd) QID 1 timeout, aborting
[  812.330556] nvme nvme0: I/O 2 (I/O Cmd) QID 1 timeout, aborting
[  812.330564] nvme nvme0: I/O 3 (I/O Cmd) QID 1 timeout, aborting
[  843.050590] nvme nvme0: I/O 0 QID 1 timeout, reset controller
[  873.770660] nvme nvme0: I/O 24 QID 0 timeout, reset controller
[  934.979005] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
[  935.002180] nvme nvme0: Abort status: 0x371
[  935.002188] nvme nvme0: Abort status: 0x371
[  935.002191] nvme nvme0: Abort status: 0x371
[  935.002195] nvme nvme0: Abort status: 0x371
[  938.519945] nvme nvme0: Removing after probe failure status: -19l
[  938.535362] nvme0n1: detected capacity change from 500118192 to 0
[  938.535366] Buffer I/O error on dev nvme0n1, logical block 129722, lost async page write
[  938.535377] Buffer I/O error on dev nvme0n1, logical block 129723, lost async page write
[  938.535381] Buffer I/O error on dev nvme0n1, logical block 129724, lost async page write
[  938.535385] Buffer I/O error on dev nvme0n1, logical block 129725, lost async page write
[  938.535388] Buffer I/O error on dev nvme0n1, logical block 129726, lost async page write
[  938.535392] Buffer I/O error on dev nvme0n1, logical block 129727, lost async page write
[  938.535396] Buffer I/O error on dev nvme0n1, logical block 129728, lost async page write
[  938.535399] Buffer I/O error on dev nvme0n1, logical block 129729, lost async page write
[  938.535403] Buffer I/O error on dev nvme0n1, logical block 129730, lost async page write
[  938.535407] Buffer I/O error on dev nvme0n1, logical block 129731, lost async page write


Hi,

Just joined to say I have the exact same issue - using a Samsung 970 Evo Plus 500GB (also tried a 1TB same model). Can intermittently see it with lsblk but any read/write causes it to crash.

Using the official Raspberry Pi 27W USB-C Power Supply.
Debian GNU/Linux 12 (bookworm)
Bootloader Fri 5 Jan 15:57:40 UTC 2024 (1704470260)

Might be worth adding something to the NVMe Base for Raspberry Pi 5 product page just to say that people have had issues with these M.2 drives?

Using a 980 has zero issues, awesome board btw :)

I have the official power supply, and I bought the board with the supplied Kioxia 500 GB.
It has never been recognised. I’ve reseated the ribbon cable multiple times without success. Any guidance welcome.

Can you post some of these errors? You can run sudo journalctl -b -g pcie to show PCIe related messages since last boot.
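A few other commands that can help narrow things down (assuming the drive enumerates at least briefly; the PCIe address 0000:01:00.0 matches the dmesg output earlier in the thread, but may differ on your setup):

```shell
# PCIe-related kernel messages since last boot
sudo journalctl -b -g pcie
# NVMe-specific messages (timeouts, resets, probe failures)
sudo dmesg | grep -i nvme
# Negotiated link speed/width, if the drive is currently visible
sudo lspci -vv -s 0000:01:00.0 | grep LnkSta
```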

I think one of the big problems with all these “NVMe-does-not-work-for-me” posts is a misunderstanding of the hardware: PCIe was not designed to run from a host via an FPC connector, over a flat ribbon cable, through a second FPC connector to a PCIe device. This is something that works most of the time, but not always. It is a real pity that neither the RPi Foundation nor sellers like Pimoroni are clear about this. Nevertheless, big kudos to Pimoroni and others for making this work most of the time. But there is no guarantee that it just works, even with seemingly identical components. You might have to swap the Pi5, the NVMe Base, the cable or the SSD to make it work. And nobody can tell what the real problem was.

jan 18 22:40:05 rpi5 kernel: Kernel command line: coherent_pool=1M 8250.nr_uarts=1 pci=pcie_bus_safe snd_bcm2835.enable_compat_alsa=0 snd_bcm2835.enable_hdmi=1  smsc95xx.macaddr=D8:3A:DD:9C:93:99 vc_mem.mem_base=0x3fc00000 vc_mem.mem_size=0x40000000  console=ttyAMA10,115200 console=tty1 root=PARTUUID=6fdfc8cb-02 rootfstype=ext4 fsck.repair=yes rootwait
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie: host bridge /axi/pcie@110000 ranges:
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie:   No bus range found for /axi/pcie@110000, using [bus 00-ff]
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie:      MEM 0x1b00000000..0x1bfffffffb -> 0x0000000000
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie:      MEM 0x1800000000..0x1affffffff -> 0x0400000000
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie:   IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie: setting SCB_ACCESS_EN, READ_UR_MODE, MAX_BURST_SIZE
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie: Forcing gen 3
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie: PCI host bridge to bus 0000:00
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000110000.pcie: link down
jan 18 22:40:05 rpi5 kernel: pcieport 0000:00:00.0: PME: Signaling with IRQ 39
jan 18 22:40:05 rpi5 kernel: pcieport 0000:00:00.0: AER: enabled with IRQ 39
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie: host bridge /axi/pcie@120000 ranges:
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie:   No bus range found for /axi/pcie@120000, using [bus 00-ff]
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie:      MEM 0x1f00000000..0x1ffffffffb -> 0x0000000000
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie:      MEM 0x1c00000000..0x1effffffff -> 0x0400000000
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie:   IB MEM 0x1f00000000..0x1f003fffff -> 0x0000000000
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie:   IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie: setting SCB_ACCESS_EN, READ_UR_MODE, MAX_BURST_SIZE
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie: Forcing gen 2
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie: PCI host bridge to bus 0001:00
jan 18 22:40:05 rpi5 kernel: brcm-pcie 1000120000.pcie: link up, 5.0 GT/s PCIe x4 (!SSC)
jan 18 22:40:05 rpi5 kernel: pcieport 0001:00:00.0: enabling device (0000 -> 0002)
jan 18 22:40:05 rpi5 kernel: pcieport 0001:00:00.0: PME: Signaling with IRQ 40
jan 18 22:40:05 rpi5 kernel: pcieport 0001:00:00.0: AER: enabled with IRQ 40
jan 18 22:40:09 rpi5 ModemManager[795]: <info>  [base-manager] couldn't check support for device '/sys/devices/platform/axi/1000120000.pcie/1f00100000.ethernet': not supported by any plugin

Anybody that has enabled PCIe 3, may want to undo that, and see what happens.

PCIe 3 Mode

To enable the experimental and not-officially-supported PCIe 3 mode, add the following line to the [all] section at the end of your Raspberry Pi /boot/config.txt file like this:

[all]
dtparam=pciex1_gen=3
Save and reboot - your drive is ready to use!
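To undo it, comment the line back out and reboot — a sketch assuming the line was added to /boot/config.txt exactly as above:

```shell
# Comment out the experimental Gen 3 setting, then verify and reboot.
sudo sed -i 's/^dtparam=pciex1_gen=3/#&/' /boot/config.txt
grep pciex1_gen /boot/config.txt   # the line should now start with '#'
sudo reboot
```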

I’m having problems detecting my disk as well when using lsblk; see below.

I tried re-seating the SSD in its slot and re-attaching the flat flex cable, but without any luck!

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
mmcblk0     179:0    0 14.8G  0 disk
|-mmcblk0p1 179:1    0  512M  0 part /boot/firmware
`-mmcblk0p2 179:2    0 14.3G  0 part /

My completely new hardware stack:

  • RPi 5 8GB
  • RPi Active Cooler
  • RPi Power supply (27W)
  • Pimoroni NVMe base
  • Samsung 980 PRO PCIe 4.0 NVMe M.2 250GB disk (completely new, never formatted or used)

Software:

  • fresh installation of RPi OS (bookworm, 64 bit) on SD card (16GB)
  • booted RPi with SD card, with NVMe Base and disk installed
  • updated OS and firmware (Fri Jan 5 15:57:40 UTC 2024) to latest version
  • did not enable PCIe 3 Mode in /boot/config.txt

Issue seems to be related to the disk Samsung 980 PRO PCIe 4.0 NVMe M.2 250GB. I went back to the store and replaced it with Samsung 970 EVO Plus PCIe 3.0 NVMe M.2 SSD 500GB and the disk is now detected in RPi.

Not sure why the Samsung 980 PRO 250GB did not work; it is included in the list of tested and compatible disks for the Pimoroni NVMe Base. Maybe because it is 250GB and not 1TB? Or because it is PCIe 4.0?