4-Port mPCIe SATA Adapter
I ordered one of these from Amazon: Cablecc Mini PCI-E PCI Express to 6Gbps Four Ports SATA 3.0 Adapter Converter Hard Drive Extension Card for SSD (https://a.co/d/2y9RUdi). This is a full-length, 4-port SATA mPCIe card based on the ASM1064 chip. We know that half-length, 2-port SATA cards based on the ASM1061 chip work fine on the TP2. The ASM1061 supports one PCIe Gen 2 lane with a maximum throughput of 500MB/s; the ASM1064 supports one PCIe Gen 3 lane with a maximum throughput of 1GB/s. The ASM1064 won't yield scalable performance for CM4s, but it will take full advantage of the PCIe Gen 3 lane available to some Jetson SoMs and the RK1.
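For reference, the quoted per-lane numbers fall out of the PCIe line rate and encoding overhead. A back-of-the-envelope sketch (real-world throughput will be somewhat lower after protocol overhead):

```shell
# Per-lane PCIe throughput in MB/s (integer math, decimal MB)
# Gen 2: 5 GT/s with 8b/10b encoding -> 5000 Mb/s * 8/10, then /8 bits per byte
echo "Gen 2 x1: $((5000 * 8 / 10 / 8)) MB/s"     # 500 MB/s
# Gen 3: 8 GT/s with 128b/130b encoding
echo "Gen 3 x1: $((8000 * 128 / 130 / 8)) MB/s"  # ~984 MB/s
```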
The adapter is arriving today. I'll do basic functionality tests on Jetson Orin NX 16GB and RPi-CM4108000. I have two unused, circa-2017 planar-NAND SanDisk Extreme Pro 960GB SATA SSDs. Performance is representative, with good enough endurance (80TBW) for a SATA device. If anyone would like fio raw sequential and random I/O performance numbers at various block sizes for the ASM1064 versus the built-in ASM1061 controller on node3, let me know.
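For anyone who wants to reproduce the numbers, this is the style of fio invocation I have in mind (the device path `/dev/sda` is a placeholder; `--rw=read` is non-destructive, but double-check the target before pointing fio at a raw device):

```shell
# Raw sequential read: 128k blocksize, queue depth 32, 60-second time-based run
sudo fio --name=seqread --filename=/dev/sda --direct=1 --rw=read \
    --bs=128k --ioengine=libaio --iodepth=32 \
    --runtime=60 --time_based --group_reporting
```

Swapping `--rw=read`/`--bs=128k` for `--rw=randread`/`--bs=4k` gives the random-read counterpart.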
The "package". Note no SATA cables, so expect to order them separately.
The card is recognized by Ubuntu on Orin NX 16GB in node1.
The card is recognized by Ubuntu (64-bit, 20.04 LTS) on RPi-CM4 in node2.
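On both boards, a quick way to confirm the controller enumerates is to look for it on the PCI bus and check that the ahci driver claimed it (illustrative commands; the exact description string depends on your local PCI IDs database):

```shell
# List PCI devices and filter for the SATA/ASMedia controller
lspci -nn | grep -i -e sata -e asmedia
# Confirm the ahci driver bound and the SATA links came up
sudo dmesg | grep -i -e ahci -e 'link up'
```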
I'll download the fio source, build it, and do some performance tests if there is interest.
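Building fio from source is straightforward on both boards (standard configure-and-make build from the upstream repo; assumes git and a C toolchain are already installed):

```shell
git clone https://github.com/axboe/fio.git
cd fio
./configure
make
sudo make install
fio --version
```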
Ubuntu on the CM4 will also need the extra kernel modules package, just in case you did not install it yet (or someone seeing this will wonder why they do not see the drives):

```shell
sudo apt install linux-modules-extra-raspi
```
I mistakenly ordered a pair of SATA cables with 90° plugs on both ends, so I ordered a different set that is 90° on one end and straight on the other. These cables are necessary because of the way the 2.5" drives mount in the Thermaltake Tower 100 (connector end down). It looks cleaner because all SATA cables are hidden.
Got everything cabled and installed tonight. I don't like the SATA cable routing here: these 18" cables are not long enough to route out of sight, but this setup is better for switching between controllers and nodes. I've ordered 1-meter cables for permanent cable routing.
Finished cabling with the long SATA cables yesterday. I ran raw I/O tests on one SSD attached to the embedded ASM1061 (node3) to confirm functionality. I don't expect anyone to use raw I/O in this setting, so I'll make one big ext4 file system on each device and find the most appropriate fio.ini test examples for SATA SSDs.
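Something along these lines is the kind of fio job file I mean: a file-based sequential and random test against the ext4 mount rather than the raw device (a sketch only; the directory, size, and runtime are placeholders to adjust):

```ini
; sata-ssd.fio  -- run as: fio sata-ssd.fio
[global]
directory=/mnt/ssd1      ; ext4 mount point (placeholder)
size=4g
direct=1
ioengine=libaio
runtime=60
time_based
group_reporting

[seq-read]
rw=read
bs=128k
iodepth=32

[rand-read]
stonewall                ; wait for the previous job to finish first
rw=randread
bs=4k
iodepth=32
```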
The ASM1064 mPCIe card has solder points on the bottom. This means you must either insulate the area or desolder and remove a standoff to prevent an electrical short.
I'm still researching compatible female-female threaded M3 barrel standoffs to replace all of the factory M2 SMD standoffs.
@Terarex (Dan Donovan) Is there any kind of bottleneck on CM4 or would you recommend it anyway?
The CM4 has one PCIe Gen 2 lane with a maximum bandwidth of 500MB/s. The ASM1064 mPCIe card supports one PCIe Gen 3 lane (maximum bandwidth 1GB/s). If you intend to use SATA HDDs, particularly if you don't have a purely sequential read workload, the CM4 probably won't exceed that bandwidth. If you want to use SATA SSDs with a sequential read workload, the CM4 may limit total performance.
I'm planning to use SSDs with Longhorn to make mirrors for a k8s cluster. I shouldn't have problems if I use this to add more disks to the cluster (I don't expect super fast I/O).
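For that use case, Longhorn replicas just live on a filesystem path on each node, so the usual pattern is to format and mount each SSD somewhere Longhorn can manage, then add that path as a disk via the Longhorn UI or node spec. A hedged sketch (device name and mount point are placeholders; mounting by UUID is more robust than by device name):

```shell
# Format the SSD and mount it at a path dedicated to Longhorn (placeholders)
sudo mkfs.ext4 /dev/sda
sudo mkdir -p /var/lib/longhorn-disk1
sudo mount /dev/sda /var/lib/longhorn-disk1
# Persist the mount across reboots
echo '/dev/sda /var/lib/longhorn-disk1 ext4 defaults 0 2' | sudo tee -a /etc/fstab
```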