NVMe Base is not working

Hi,

I recently received the NVMe Base and it doesn’t work. I first tried a 256GB Samsung PM991a SSD, which didn’t work. Then I bought a Samsung 980 500GB and had the same result. No drive is listed in lsblk and the board is not shown in lspci. The drives work just fine when connected to an external adapter.

Am I missing something, or is the board I received faulty?

I have a Raspberry Pi 5 (8GB) with Raspberry Pi OS Lite 64-bit, powered by the official 27W USB-C power supply. The system is fully up to date, with the latest bootloader.

BOOTLOADER: up to date
   CURRENT: Mon  5 Feb 14:38:34 UTC 2024 (1707143914)
    LATEST: Mon  5 Feb 14:38:34 UTC 2024 (1707143914)
   RELEASE: latest (/lib/firmware/raspberrypi/bootloader-2712/latest)
            Use raspi-config to change the release.
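For reference, that output comes from the standard bootloader update tool shipped with Raspberry Pi OS, so anyone wanting to compare can run:

```shell
# Check the installed bootloader EEPROM version against the latest release
sudo rpi-eeprom-update
```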

I edited the /boot/firmware/config.txt as follows:

[all]
dtparam=pciex1_gen=3
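As a sanity check after rebooting (these are standard Linux diagnostic commands, not specific to the NVMe Base), you can confirm whether the setting took effect and whether the link came up:

```shell
# Confirm the PCIe generation setting and link status from the kernel log
dmesg | grep -i pcie

# List PCIe devices; a detected NVMe drive should appear as a
# "Non-Volatile memory controller" entry
lspci

# List block devices; a detected NVMe drive shows up as nvme0n1
lsblk
```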

I also tried reconnecting the “PCIe Pipe” (flat flex cable) a couple of times.

The dmesg response:

[    1.477415] brcm-pcie 1000110000.pcie: host bridge /axi/pcie@110000 ranges:
[    1.484421] brcm-pcie 1000110000.pcie:   No bus range found for /axi/pcie@110000, using [bus 00-ff]
[    1.493515] brcm-pcie 1000110000.pcie:      MEM 0x1b00000000..0x1bfffffffb -> 0x0000000000
[    1.501817] brcm-pcie 1000110000.pcie:      MEM 0x1800000000..0x1affffffff -> 0x0400000000
[    1.510120] brcm-pcie 1000110000.pcie:   IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
[    1.519592] brcm-pcie 1000110000.pcie: setting SCB_ACCESS_EN, READ_UR_MODE, MAX_BURST_SIZE
[    1.527898] brcm-pcie 1000110000.pcie: Forcing gen 3
[    1.532915] brcm-pcie 1000110000.pcie: PCI host bridge to bus 0000:00
[    1.539382] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.544888] pci_bus 0000:00: root bus resource [mem 0x1b00000000-0x1bfffffffb] (bus address [0x00000000-0xfffffffb])
[    1.555458] pci_bus 0000:00: root bus resource [mem 0x1800000000-0x1affffffff pref] (bus address [0x400000000-0x6ffffffff])
[    1.566646] pci 0000:00:00.0: [14e4:2712] type 01 class 0x060400
[    1.572699] pci 0000:00:00.0: PME# supported from D0 D3hot
[    1.579095] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    2.014280] brcm-pcie 1000110000.pcie: link down
[    2.018949] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    2.025596] pci 0000:00:00.0: PCI bridge to [bus 01]
[    2.030582] pci 0000:00:00.0: Max Payload Size set to  512/ 512 (was  128), Max Read Rq  512
[    2.039154] pcieport 0000:00:00.0: PME: Signaling with IRQ 38
[    2.044985] pcieport 0000:00:00.0: AER: enabled with IRQ 38
[    2.050668] pci_bus 0000:01: busn_res: [bus 01] is released
[    2.056312] pci_bus 0000:00: busn_res: [bus 00-ff] is released
[    2.062291] brcm-pcie 1000120000.pcie: host bridge /axi/pcie@120000 ranges:
[    2.069284] brcm-pcie 1000120000.pcie:   No bus range found for /axi/pcie@120000, using [bus 00-ff]
[    2.078375] brcm-pcie 1000120000.pcie:      MEM 0x1f00000000..0x1ffffffffb -> 0x0000000000
[    2.086677] brcm-pcie 1000120000.pcie:      MEM 0x1c00000000..0x1effffffff -> 0x0400000000
[    2.094982] brcm-pcie 1000120000.pcie:   IB MEM 0x1f00000000..0x1f003fffff -> 0x0000000000
[    2.103283] brcm-pcie 1000120000.pcie:   IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
[    2.112746] brcm-pcie 1000120000.pcie: setting SCB_ACCESS_EN, READ_UR_MODE, MAX_BURST_SIZE
[    2.121054] brcm-pcie 1000120000.pcie: Forcing gen 2
[    2.126064] brcm-pcie 1000120000.pcie: PCI host bridge to bus 0001:00
[    2.132538] pci_bus 0001:00: root bus resource [bus 00-ff]
[    2.138045] pci_bus 0001:00: root bus resource [mem 0x1f00000000-0x1ffffffffb] (bus address [0x00000000-0xfffffffb])
[    2.148617] pci_bus 0001:00: root bus resource [mem 0x1c00000000-0x1effffffff pref] (bus address [0x400000000-0x6ffffffff])
[    2.159805] pci 0001:00:00.0: [14e4:2712] type 01 class 0x060400
[    2.165856] pci 0001:00:00.0: PME# supported from D0 D3hot
[    2.172146] pci 0001:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    2.286285] brcm-pcie 1000120000.pcie: link up, 5.0 GT/s PCIe x4 (!SSC)
[    2.292944] pci 0001:01:00.0: [1de4:0001] type 00 class 0x020000
[    2.298986] pci 0001:01:00.0: reg 0x10: [mem 0xffffc000-0xffffffff]
[    2.305283] pci 0001:01:00.0: reg 0x14: [mem 0xffc00000-0xffffffff]
[    2.311579] pci 0001:01:00.0: reg 0x18: [mem 0xffff0000-0xffffffff]
[    2.317937] pci 0001:01:00.0: supports D1
[    2.321958] pci 0001:01:00.0: PME# supported from D0 D1 D3hot D3cold
[    2.338289] pci_bus 0001:01: busn_res: [bus 01-ff] end is updated to 01
[    2.344936] pci 0001:00:00.0: BAR 8: assigned [mem 0x1f00000000-0x1f005fffff]
[    2.352102] pci 0001:01:00.0: BAR 1: assigned [mem 0x1f00000000-0x1f003fffff]
[    2.359268] pci 0001:01:00.0: BAR 2: assigned [mem 0x1f00400000-0x1f0040ffff]
[    2.366434] pci 0001:01:00.0: BAR 0: assigned [mem 0x1f00410000-0x1f00413fff]
[    2.373601] pci 0001:00:00.0: PCI bridge to [bus 01]
[    2.378583] pci 0001:00:00.0:   bridge window [mem 0x1f00000000-0x1f005fffff]
[    2.385750] pci 0001:00:00.0: Max Payload Size set to  256/ 512 (was  128), Max Read Rq  512
[    2.394230] pci 0001:01:00.0: Max Payload Size set to  256/ 256 (was  128), Max Read Rq  512
[    2.402756] pcieport 0001:00:00.0: enabling device (0000 -> 0002)
[    2.408908] pcieport 0001:00:00.0: PME: Signaling with IRQ 39
[    2.414725] pcieport 0001:00:00.0: AER: enabled with IRQ 39

Take out the “gen=3” part and see if that works, so the line should just be:

[all]
dtparam=pciex1

I’m using the same drive, but I haven’t tried it at gen 3.

Also, don’t forget this line under eeprom-config:

PCIE_PROBE=1
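For anyone following along, that EEPROM setting can be added with the editor built into the rpi-eeprom tools (a sketch, assuming the standard Raspberry Pi OS package):

```shell
# Open the current bootloader config in an editor; add PCIE_PROBE=1,
# save, then reboot to apply
sudo rpi-eeprom-config --edit
```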

I tried using pciex1 without gen3; it doesn’t make any difference.
I also tried editing the EEPROM config with rpi-eeprom-config and adding PCIE_PROBE=1, with the same result.

Thanks anyway for your reply.

Well, I’ve checked the forum for similar issues and tried the suggested solutions; unfortunately, nothing helps. The NVMe Base is not listed in lspci, nor are any of my drives in lsblk.

[    1.477415] brcm-pcie 1000110000.pcie: host bridge /axi/pcie@110000 ranges:
[    1.484421] brcm-pcie 1000110000.pcie:   No bus range found for /axi/pcie@110000, using [bus 00-ff]
[    1.493515] brcm-pcie 1000110000.pcie:      MEM 0x1b00000000..0x1bfffffffb -> 0x0000000000
[    1.501817] brcm-pcie 1000110000.pcie:      MEM 0x1800000000..0x1affffffff -> 0x0400000000
[    1.510120] brcm-pcie 1000110000.pcie:   IB MEM 0x0000000000..0x0fffffffff -> 0x1000000000
[    1.519592] brcm-pcie 1000110000.pcie: setting SCB_ACCESS_EN, READ_UR_MODE, MAX_BURST_SIZE
[    1.527898] brcm-pcie 1000110000.pcie: Forcing gen 3
[    1.532915] brcm-pcie 1000110000.pcie: PCI host bridge to bus 0000:00
[    1.539382] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.544888] pci_bus 0000:00: root bus resource [mem 0x1b00000000-0x1bfffffffb] (bus address [0x00000000-0xfffffffb])
[    1.555458] pci_bus 0000:00: root bus resource [mem 0x1800000000-0x1affffffff pref] (bus address [0x400000000-0x6ffffffff])
[    1.566646] pci 0000:00:00.0: [14e4:2712] type 01 class 0x060400
[    1.572699] pci 0000:00:00.0: PME# supported from D0 D3hot
[    1.579095] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[    2.014280] brcm-pcie 1000110000.pcie: link down

It looks like the NVMe Base is not connected at all.

That said, the only scenarios left are that either the NVMe Base or the flex cable is faulty, or my Raspberry Pi has a problem with its PCIe port, although it is listed as working:

0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0001:01:00.0 Ethernet controller: Device 1de4:0001

I’ve been having issues as well. I tried two different SSDs, and the second one is on the list of supported drives. I’m using Ubuntu 23.10, however, although I don’t think that should be the source of the problem. Unfortunately, I can’t think of any way to debug where the problem comes from.

If no SSD is connected, should we still be able to detect the board?

Yes, that’s what I expected. At least to see the board detected and listed in lspci.

...
[ 2.014280] brcm-pcie 1000110000.pcie: link down
...

That means there is no connection to the RPi’s PCIe bus.
I’ve opened a support ticket with Pimoroni to see what they think about this.


I was able to get the Raspberry Pi to identify two different SSDs. I had to ditch Ubuntu, install Raspberry Pi OS, and redo all the recommended steps. Additionally, I reconnected the NVMe Base very carefully and retried several times until it worked. Unfortunately, I switched back to the microSD card that had Ubuntu installed, and now I can’t see the SSD anymore.

This is the system that worked:

Linux  6.1.0-rpi8-rpi-2712 #1 SMP PREEMPT Debian 1:6.1.73-1+rpt1 (2024-01-25) aarch64 GNU/Linux

The one that doesn’t work so far:

Linux 6.5.0-1010-raspi #13-Ubuntu SMP PREEMPT_DYNAMIC Thu Jan 18 09:08:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

My use case requires Ubuntu, so I’ll keep trying to make it work.


That’s great: at least you know the board works (intermittently, but it does). In my case I’m already using the latest OS Lite:

Linux rpi 6.1.0-rpi8-rpi-2712 #1 SMP PREEMPT Debian 1:6.1.73-1+rpt1 (2024-01-25) aarch64

As for the FPC connection between the NVMe Base and the RPi, I tried reconnecting it several times, but I never had any luck seeing the drives, not once.

I’ll give it another try, with a fresh OS install and a careful FPC reconnection.

I am just curious: can anyone here with a working NVMe Base please tell me whether the board gets detected in lspci even if there’s no drive attached? Thanks.

It won’t get detected on its own. Until an NVMe (or other PCIe device) is connected to the M.2 slot, there’s nothing for the PCIe bus to detect on that connection.

Also, the PCIe support is very new, so using RPi OS, which has a supported kernel, is recommended. I think Ubuntu is using a newer kernel version with a change that breaks PCIe compatibility. It’ll be fixed in time, but for now it makes things confusing when you can’t see your drive.

For my use case I have RPi OS Lite installed (I’ve also tried several fresh installs) and it still isn’t detected, either with or without an NVMe connected.

I have placed an order for another NVMe Base. Hopefully with better luck this time.

Update:

I received the second NVMe Base and everything works just fine.
It seems that the previous board was faulty.

The first order was refunded thanks to @hel (from Pimoroni Support).
