Turing-Pi 2 Evolution (feature request)
# │forum
p
Given that we should be starting to see the TPi2 in our hands soon, what would people like to see changed in the original design IF WE EVER got a TPi v2.1? (Please, no comments about the RTC issue; that goes without saying.) Please be positive, in the community spirit. Some ideas I would like to see:
1. Move the header ports (power connector, SATA, USB 3 header) away from the front of the board, preferably to the side, to aid airflow through the installed cards
2. A dedicated USB header for the CM4 mode port (I could see this being of more use on the front of a case)
3. Move the reset/BMC buttons onto the back plate, at a right angle to the board, for easier access from the rear of the case
4. Maybe an ATX version with 6 or 8 nodes?
5. A PWM system fan (maybe two instead of one)
6. Maybe also add PWM for the node fans (forgot that, thanks @segabor)
7. Upgraded 2.5Gb Ethernet; maybe useful depending on the use case. I would expect that would almost allow max throughput from all connected boards at once
8. Maybe mPCIe that supports SPI for LoRaWAN gateways (though not sure how this might be achieved; perhaps some small jumper switch)
9. PoE class 4
w
Agree with a lot of these. Multi-gig uplink on the switch. It's become common enough that they should be able to find a well-priced switch chip. More nodes. The TPi 1 had seven and is the same form factor with DIMM connectors. More Pis please. A custom heatsink. They know the dimensions very exactly; I'd like to see a custom heatsink that uses all the space, similar to what the Compute Blade Kickstarter is doing.
s
Well, I love the current hardware, but what I feel can be improved a lot is the BIOS. Every time I want to shut the whole thing down I need to make sure each OS is halted before pressing the power button. Thermal management: is it working at all? My Thermaltake case has a big front fan and it has never spun once. Also, the UART pins could have an attachable PCI cover with a female USB port. More nodes would be a big win!
s
Ability to update the routing of the on-board switch, for example to route Nodes 1 & 2 only to NIC1 and Nodes 3 & 4 only to NIC2. Plus a dedicated BMC management port.
t
The current switch chip supports tagged and protocol-based VLANs. That might be a way to get what you want. Turing Machines hasn't disclosed the feature set they might implement in the new firmware.
u
Maybe remove the fixed mounting nuts (make them removable?) at the M.2 2260 position to allow double-sided 2280 NVMe drives without desoldering said nuts.
t
+1 for making the mPCIe and M.2 anchor points removable/relocatable.
I'd like Turing Machines to fully debug and fix Node 1 USB flashing and HDMI. I'd also like to see the BMC chip updated to the T113-S4 (256MB vs 128MB) and the amount of flash available to the BMC increased, to host dual, more full-featured boot images.
j
HDMI duplexing of all 4 slots would be really handy for troubleshooting and maintenance.
w
KVM switch?
d
It kind of has one - the USB port. But do you mean for video output?
w
well, many people in the chat here ask about video and keyboard/mouse. We keep saying: video only on Node 1, no keyboard with a Jetson, and so on. If we had this KVM switch we could just configure which nodes get the inputs/outputs. But maybe this would impact the lanes used by specific nodes; for example, Node 3 has SATA and the others don't. Maybe new versions of compute modules would also help on this matter
d
Ok, I understand what you mean now, but I can also see the challenge here 🙂
t
I'd be happy with a switchable 5-channel TTL to USB-A male adapter. Basically a serial KVM. Cabling would be a PITA; having 5 USB-to-TTL/UART cables is overkill.
d
The question is if you can use the BMC for this, which already has UART connections to all of the nodes
p
Having the BMC as a node would be useful for upgrading/different use cases.
t
I guess that works. I want a console connection if the BMC uses the tty ports. That way maybe they don't step on each other.
t
moar nodes!
d
Oh, you never have too many 😄
a
I'd remove one SATA port, seeing that each node can have an NVMe drive. Two SATA ports seem like overkill to me.
d
There's never too many 😄
Also, CM4 cannot use M.2, so SATA will be useful 🙂
a
Well Daniel, you're special. 😂 What I meant to say is that removing the one SATA port could open space for something else.
d
Hehehe 😉
When I think of it, there's still a lot of space. If anything, the ports could be stacked the other way, which would free some room if the team needed it
People also use 2 ports for data safety (a drive mirror, aka RAID 1)
I only mean that 2 SATA ports are useful 🙂
a
I see your point. In my use case I'll have one Pi installed and hopefully two RK1s and an AI accelerator. All the nodes except for the Pi will have nvme drives. The Pi will be attached to the SATA port. I don't need data redundancy. I am using this as a learning platform.
d
Yeah, I understand. But then, if you put the board to actual work and you have actual data you care about, disk redundancy is rather a must-have. In my PC I've kept files with redundancy for many, many years now (previously RAID 1 across 2 disks, now RAID 5 across 3 disks). This is why I would say the second SATA port is actually useful 🙂
a
True, it depends on how important your data is to you. Well, this is the first iteration of the retail TPi2, and the team might release more SKUs tailored to specific use cases. The issue is that the more features are added to a board this size, the more complicated its construction gets. Not saying it will definitely lead to more issues, but chances are it will. At the end of the day I want something reliable, which is an intrinsic feature we all want and one that is easy to forget.
s
There’s much to learn from the TP2, but given the time it takes to make major design changes and the sheer cost, I’d expect the next revision will be TP3/RPi CM5 (… on the assumption that the RKx boards don’t become the primary use-case)
c
My 2 cents: if the earlier mentioned KVM is unavailable, move the USB3 controller to node 1 so that at least one node has HDMI + USB access 🙂
d
While this is a valid idea, if you want USB right now you can get a Mini PCIe USB controller. Having a Mini PCIe slot on Node 1 gives you options for what you want connected to Node 1
c
I'm aware, the idea I mentioned was purely to prevent the need of such controller if you only need it occasionally, like installing a Jetson module 😉
d
Sure, just making sure you are aware of this possibility 🙂
d
Yes please. If HDMI Duplexing is not an option, having connected USB ports on the same node as the HDMI interface is a must-have. Otherwise, we all have to source and then swap in/out mini PCIe USB adapters.
t
if you've populated Node 4 then you could use usbip to give Node 1 access to the USB ports
s
An Aspeed or other server-class BMC chip, please! It will make it much easier to develop code, implement useful features, etc.
g
Whilst the UART connectors and the SD card connections are really important, it would be cool if there were a header block with a cable you could attach to expose these interfaces on the outside of a case, so you don't have to crack the case open every time you need to interface with it
s
I've been trying to find a PCI bracket with Female-Female pass-through USB connectors, so that a UART/USB cable could be used internally and a USB-A/A cable externally... although I've not found anything suitable yet. I have found UART/DB-9 PCI brackets, which could be used for an old-skool serial setup? (... I guess the other option would be a UART/RJ-45 face-plate?) I've not looked for PCI brackets with an SD slot, but I have an SD extension ribbon-cable installed in my system, meaning I have to crack open the case but not unmount the mainboard to get to the (BMC) SD card, which feels good enough for now.
c
This one's a low-hanging-fruit idea for a future minor revision: if the node control GPIOs ({EN, RST, USB_VBUS, RPIBOOT} * 4) could be moved out to a GPIO expander (e.g. a PCA9555 on I²C), that would free up 16 pins for other purposes... ...including running RGMII to a GbE PHY in place of the RTL8201F, so we get 1Gbps to the BMC instead of 100Mbps. 😀
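As a rough sketch of how little software that expander approach would need, here's what driving those 16 lines through a PCA9555 could look like from Python. The I²C address, bus number, and bit layout are illustrative assumptions, not the actual TP2 wiring:

```python
# Sketch: node control lines ({EN, RST, USB_VBUS, RPIBOOT} x 4 nodes) behind a
# PCA9555 I2C GPIO expander. Address, bus, and bit layout are assumptions.
from smbus2 import SMBus

PCA9555_ADDR = 0x20              # A2..A0 strapped low (assumption)
REG_OUTPUT0, REG_OUTPUT1 = 0x02, 0x03
REG_CONFIG0, REG_CONFIG1 = 0x06, 0x07

EN, RST, USB_VBUS, RPIBOOT = 0, 1, 2, 3   # hypothetical per-node bit offsets

class NodeControl:
    def __init__(self, bus_num=0):
        self.bus = SMBus(bus_num)
        self.state = 0x0000
        # Configure all 16 pins as outputs (0 = output on the PCA9555).
        self.bus.write_byte_data(PCA9555_ADDR, REG_CONFIG0, 0x00)
        self.bus.write_byte_data(PCA9555_ADDR, REG_CONFIG1, 0x00)

    def set_line(self, node, line, value):
        bit = (node - 1) * 4 + line          # 4 control bits per node
        if value:
            self.state |= 1 << bit
        else:
            self.state &= ~(1 << bit)
        self.bus.write_byte_data(PCA9555_ADDR, REG_OUTPUT0, self.state & 0xFF)
        self.bus.write_byte_data(PCA9555_ADDR, REG_OUTPUT1, self.state >> 8)

ctl = NodeControl()
ctl.set_line(2, EN, True)   # e.g. assert power-enable for node 2
```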
s
For future IO shields: don't put the cutouts in the normal mounting position, but raise them by 20 mm. They would still fit in the slot cover, and it would allow mounting the TP2 board raised in any case, to make space for NVMe SSDs with coolers.
m
The BMC should be able to send a "soft shutdown via power button" command to the nodes, instead of only the hard node power-off via the power control. Please
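Until something like that ships, the effect can be approximated from outside. A sketch, assuming the nodes are reachable over SSH with key auth and that the BMC exposes an HTTP power API; the endpoint shape below is an assumption to verify against the BMC docs for your firmware:

```python
# "Soft then hard" shutdown helper, run from a management host.
import subprocess, time
import urllib.request

NODES = ["node1", "node2", "node3", "node4"]   # hypothetical hostnames
# Assumed BMC endpoint shape; check your firmware's API documentation.
BMC_POWER_OFF = "http://turingpi.local/api/bmc?opt=set&type=power&node{n}=0"

for host in NODES:
    # Ask each OS to halt cleanly first.
    subprocess.run(["ssh", host, "sudo", "poweroff"], check=False)

time.sleep(60)   # give the OSes time to halt; tune for your workloads

for n in range(1, len(NODES) + 1):
    # Then cut node power at the BMC.
    urllib.request.urlopen(BMC_POWER_OFF.format(n=n), timeout=5)
```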
r
Give a lot more focus to the ability to flash nodes. There should be no requirement to swap cards in and out just to do flashing or direct node management.
1. The BMC should have access to the NVMe cards, so one could flash all four NVMe drives from the BMC.
2. A KVM switch to move keyboard/mouse/monitor between nodes, again controlled via the BMC.
3. Put the display-type switch in the BMC config file, so one can switch it in software.
a
Move all USB connectors to USB type C! Death to microUSB! TypeA should be EoL
s
It'd be great, on a per-node basis, to expose at least a few GPIO pins (5V, GND, and something like GPIO18?) to drive a 5V PWM fan under software/OS control. Due to the positioning of my case fan, I've got passive heatsinks on Nodes 1-3, but a PWM fan on Node 4 which is controlled via software and Node 1's GPIO PWM output (based on a daemon process running on Node 1 and a simple REST interface on Node 4 which reports the current temperature!). This does mean that Node 4 only receives temperature-dependent cooling when Node 1 is running, of course...
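The Node 1 side of a setup like that can be tiny. A sketch with made-up names (the Node 4 temperature endpoint and the temperature-to-duty mapping are assumptions; pigpio's hardware PWM is used because GPIO 18 supports it and PWM fans expect roughly 25 kHz):

```python
# Fan daemon on Node 1: poll Node 4's temperature, drive the PWM line.
import time
import requests
import pigpio

FAN_GPIO = 18                                # hardware-PWM-capable pin
TEMP_URL = "http://node4.local:8080/temp"    # hypothetical endpoint, returns °C as text

pi = pigpio.pi()   # needs the pigpiod daemon running

while True:
    try:
        temp = float(requests.get(TEMP_URL, timeout=2).text)
        # Map 40-70 °C onto 20-100% duty, clamped at both ends.
        duty = max(0.2, min(1.0, 0.2 + (temp - 40) / 30 * 0.8))
    except (requests.RequestException, ValueError):
        duty = 1.0   # fail safe: full speed if Node 4 is unreachable
    pi.hardware_PWM(FAN_GPIO, 25000, int(duty * 1_000_000))  # duty in millionths
    time.sleep(5)
```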
t
perhaps an I²C PWM fan controller?
I think someone mentioned the possibility of a new CM4 adapter design that also included such an option, but I have no idea if it might ever become reality
s
In addition to the excellent suggestions by @rexter0 above, I think that making the existing 2-pin main-board fan connector a 4-pin one, with the speed signal controlled by a temperature sensor on the switch heatsink, would make sense, since that is the hottest part of the current board.
d
I agree with this… don’t take away our sata, more sata please! Or at least figure out a way to get the RPi to work with one of these https://a.co/d/7b78WgM
t
This appears to be the same device: https://a.co/d/93wWl58. It's based on the ASM1166 controller. The best case, if it works on a generic CM4 carrier, would be to use it for HDD connectivity; performance over the RPi 4's single PCIe Gen 2 lane wouldn't take proper advantage of SSDs. If you need SATA ports for Node 1 and Node 2, look for this mPCIe card. It's based on the ASM1064 and works with CM4 and Jetson modules. I assume it will work with the RK1. Cablecc Mini PCI-E PCI Express to... https://www.amazon.com/dp/B0C3CMH6B8?ref=ppx_pop_mob_ap_share
d
Thanks for the reply! For my project I don't need full SSD speed; I'm looking to use one of the CM4s to run a NAS. That adapter looks promising, but I need more than four SATA ports… I could always use a port multiplier on the existing two SATA ports on Node 3, but I'm looking for something more elegant… another option I've considered is a mini PCIe to PCIe riser, then a proper PCIe SATA card. Mini PCIe riser: https://a.co/d/iqGD7Le PCIe SATA: https://a.co/d/ikGQ2aI
d
A port multiplier is going to work about as well as the module you initially mentioned
The reason is the speed limit of the PCIe lane that the CM4 can use
This is exactly what I'm using. I've yet to test it thoroughly, but initial tests show I can easily saturate the 1 Gb/s of network speed: https://discord.com/channels/754950670175436841/1030129990211219486/1091312860304511107
c
did you get this to work... just got 2 boards as well 😄
u
Hi there, I'm considering connecting an old 12V fan (Molex 2510 4-pin) I have to the TP2 FAN header (see the image). Plugging the fan in using the 2 middle pins turns it on at full blast. The thing is, I want to be able to control the fan using the PWM wire. Does anybody know if it's possible to connect the blue wire (PWM, I suppose) to GPIO 18 on Node 1 to control the fan from there? I don't mind relying only on the Node 1 CPU temp. I was considering using the Python script I saw in this blog post (https://blog.driftking.tw/en/2019/11/Using-Raspberry-Pi-to-Control-a-PWM-Fan-and-Monitor-its-Speed/#Use-PWM-to-Control-Fan-Speed) as a starting point. Has anybody tried something like that? I'm mostly a software guy and I don't know much about the hardware intricacies 😆 Is it a stupid thing to do?

https://cdn.discordapp.com/attachments/1090193720294522900/1130148310825586801/Screenshot_2023-07-16_at_16.20.47.png

https://cdn.discordapp.com/attachments/1090193720294522900/1130148311232413736/IMG_8318.jpg

https://cdn.discordapp.com/attachments/1090193720294522900/1130148311651848244/IMG_8319.jpg

well, I managed to not fry anything, it seems 😂 it's working as intended. Next step is to buy a proper jumper wire to replace this workaround
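For anyone following along, the software half of this really is only a few lines. A sketch in the spirit of the linked blog post, assuming pigpio (with pigpiod running) and the fan's PWM wire on GPIO 18; the thresholds are arbitrary, and the 12V power side of the wiring still deserves care:

```python
# Minimal local fan control on Node 1: CPU temp in, PWM duty out.
import time
import pigpio

FAN_GPIO = 18   # the blue PWM wire from the fan connector, per the question above

def cpu_temp_c() -> float:
    # Standard Raspberry Pi thermal zone, reported in millidegrees Celsius.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0

pi = pigpio.pi()   # needs the pigpiod daemon running

while True:
    t = cpu_temp_c()
    duty = 0.3 if t < 50 else 0.6 if t < 65 else 1.0   # simple 3-step curve
    pi.hardware_PWM(FAN_GPIO, 25000, int(duty * 1_000_000))
    time.sleep(10)
```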
t
If a revised TP2 is made available, I have a couple of suggestions regarding physical characteristics:
- The mPCIe and M.2 standoffs should not be surface-mount/soldered; they should be removable. The current board design has sufficient keep-out space around the soldered standoffs to accommodate wafer-head screw attachment (with Loctite blue threadlocker). To save on manufacturing cost, the standoffs and screws do not need to be factory-installed; including them in a separate bag, like the SoM JST fan cables, is satisfactory.
- Increase the height of the M.2 connectors on the rear. The current connectors appear to be 2mm. Why? High-capacity M.2 NVMe SSDs are double-sided, and PCIe Gen 4/Gen 5 SSDs require heatsinks with thermal pads. The current M.2 connectors and standoffs do not adequately support the underside SSD thickness or allow for airflow. I'm going to install 4mm (5mm would be better) removable standoffs, which should provide sufficient clearance, but the SSDs will sit at an angle. I don't think I want to tackle the job of reworking a board to desolder the existing M.2 sockets and resolder taller ones.
d
My dream TP#:
Node 1 (main): HDMI, mini PCIe, 2x USB, ability to flash the other 3 nodes
Node 2 (router): mini PCIe w/ SIM, 4 external ports (1 WAN / 3 LAN… 2.5G or 10G would be nice too!)
Node 3 (NAS): 8 SATA ports, 2x USB
Node 4 (AI/ML/robotics): 2x USB, 40-pin GPIO, 2x MIPI CSI-2
Edit: all with NVMe
s
Indeed, and I'd rather have one SATA per module than none!
m
I would want to replace the 1 Gbps switch part with a 2.5Gbps or 10Gbps capable part.
d
Each available module has a network link speed of 1 Gb/s, so a "faster" switch is not going to change the max speed at which you can access a single module (but it would let you utilize the max link speed of multiple modules at the same time, of course).
m
I thought that the RK part had a 2.5Gbps capable output?
(as an *GMII thing, but still)
Or is this one of the compromises that the *S part makes?
d
The RK3588 supports up to 1Gb/s with RGMII
We're not using the S version of the RK3588
m
I suppose you could theoretically hook up both GMACs to the same chip, as long as it supports LAGs?
I mean, I think there's an RTL chip that would theoretically support that and give you a 10 Gbps uplink. I don't know how much more expensive that part would be, but I imagine the traces would be hard to fit?
d
Possibly, but the way LACP works, it won't utilize both links for a single connection, just one of them. And the way the TPi2 is designed, it utilizes only a single LAN connection. I'm not sure if the RK1 exposes both
Where did you get the 10Gb/s from?
m
LAGs allow you to balance flows across them
RTL8396L is one part that would theoretically work
d
Meaning connections. You'd need multiple (at least 2 in this case) and hope they both get routed through different interfaces
They will still be 1Gb/s links (so a total of a possible 2Gb/s)
m
Yeah, which is twice the existing capability 😉
c
A fun theoretical problem I want to solve is what happens when there are 2 flows, 1Gbps each, but the switch hashes them onto the same LAG member
m
From https://svanheule.net/switches/rtl93xx, I see that the 9302A might work.
Yeah, it's an imperfect solution.
d
Yeah, this is the "hope" part I mentioned 😄
LACP is not "intelligent enough" to make it the way you'd like all the time
m
With large numbers of flows this imbalance should average out, but only in the limiting case.
In practice, one link is always more utilised than the other, and elephant flows are the common case.
(Unless you're a hyperscaler 🙂 )
But it's definitely better than 1 link. I wonder why I thought it had a better ethernet link?
c
The RTL8370MB has a 16-way hash-to-link table, so the fun problem to tackle would be a daemon that automatically balances that table, which should solve this problem nicely.
At least for outgoing flows 🤔
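The balancing step itself is a classic greedy partition over the 16 buckets. A sketch of the idea; reading traffic counters and programming the RTL8370MB's table are left as hypothetical stubs, since that register-level interface isn't covered here:

```python
# Greedy rebalancing of a 16-way hash-to-link table across 2 LAG members.

def read_bucket_bytes() -> list[int]:
    """Bytes seen per hash bucket since the last poll (hypothetical counter source)."""
    raise NotImplementedError

def program_table(assignment: list[int]) -> None:
    """Write bucket-to-link assignments to the switch (hypothetical register I/O)."""
    raise NotImplementedError

def balance(bucket_bytes: list[int], n_links: int = 2) -> list[int]:
    loads = [0] * n_links
    assignment = [0] * len(bucket_bytes)
    # Biggest buckets first, each onto the currently lighter link.
    for idx in sorted(range(len(bucket_bytes)), key=lambda i: -bucket_bytes[i]):
        link = loads.index(min(loads))
        assignment[idx] = link
        loads[link] += bucket_bytes[idx]
    return assignment

# Two elephant flows in different buckets end up on different links:
print(balance([900, 850] + [10] * 14))   # -> [0, 1, ...]
```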
m
elephant flows in practice make balancing it impossible.
You can only ever hope to minimise the imbalance 😉
c
Well, yes, that. It doesn't have to be perfectly 50:50; you just don't want high congestion
If you want true 50:50, you have to go round-robin, and either use only protocols that tolerate reordering, or do what Brocade does and balance the link latency in hardware.
m
Huh, well, there is a promising way to add capability, and you only have to use one of the PCIe 2.1 lanes.
PCIe 2.1 runs at 5 GT/s, and after 8b/10b encoding overhead (20%) that's about 4 Gbps usable, so if you used a chip on the board that did PCIe to Ethernet, you could connect them internally at that rate.
From the "right compromise" perspective, I think the Intel i225-v is available in quantity for very cheap, so it might make sense to use it instead.
c
So the idea is 2.5GBASE-T from the module to the board?
m
No, it would be PCIe2.1 from the module to the board, and the board hosts the 2.5Gbe controller
c
mm, in the M.2 or mPCIe slot?
m
Or more accurately, 4 2.5GBe controllers
Potentially.
I don't know where we currently bring the SATA controllers out; apparently the pins are multiplexed with those.
c
Multiplexed or switched?
m
Multiplexed. It looks like you configure it somehow (it doesn't specify, but I'm guessing there'll be a magic fuse or something?)
We already use this mechanism for exposing USB3.1 for one of the things.
And mPCIe slot for another node
But there are 3 things to configure, so it feels like we might not have used one.
If so, we can use one of these as a victim for a 2.5GBe controller.
c
But if the controller is on the TP2, not the module, which Jetson pins are the victims?
m
Oh, I see.
Yeah, that makes sense - so I suppose it would have to be on the module.
Do we export it as a *MII interface for the Jetson right now?
c
1000BASE-T
m
So we'd export it as a 2.5GBASE-T
Not the most pleasant of things, but I suppose it would do.
c
Which I like since it means RK1s for people who only want 1G don't pay the extra for 2.5G
m
Yeah, that's true.
c
The only thing on the TP2 that has to be upgraded is the switch
m
And it's compatible both ways
It just autonegotiates the best capability.
How do we export the second interface right now, by the way?
Is the second just unconnected?
c
Second interface of which interconnect technology?
m
RK1
c
I mean fill in the blank: second _______ interface? PCIe?
m
Ethernet
c
Didn't know there was a second on the RK3588, but I'd guess there's no PHY on it, so the pins only go as far as the RK1 PCB.
m
I figured that was probably the answer 😉
There was one moderately surprising choice: most of the RTL "managed switch" chips have a MIPS core that I think would be sufficiently capable of being a BMC. Was it just that it was MIPS, or was there another problem?
Like, they certainly have more than enough GPIO pins 😉
c
The BMC was selected relatively late in the TP2 design process, after the Ethernet switch was already chosen. I don't know what criteria went into BMC selection though. I'm surprised that there's a separate Ethernet PHY when there's extension interfaces on the RTL8370MB though.
It might have been motivated in part by UART quantity and availability of a SD/MMC interface, which the switching SoCs might not have offered.
m
That's true, generally flash parts are chosen for switch SoC instead.
Oh my, the 8370MB is cheap - even in tiny quantities, it's just over $10.
The equivalent RTL9301 part is US$25 in 200+ quantities, and $22 for 1200+
c
I'm also suspecting a lot of answers to design trade-off questions will be "it was the first thing we found that solved the problem"
Realtek's demo schematic for the RTL8370MB appears almost verbatim in the finished TP2, including an unused SPI-NOR out beyond node4.
m
Wait, why include an unused part?
Oh, I think I know
The default memory map for the processor will map the SPI-NOR straight into its address space.
Anything else would require config.
c
Because it's in the demo schematic and they didn't delete it.
m
Yeah, but I would think that the designers would go to some trouble to save the 50c or so per SPI-NOR, right?
c
I'd think so too. Yet it's there.
m
You know, from a forward compatibility perspective, I like the idea of a 2.5GBaseT interface more and more - a future designer wanting a 10G or higher part, assuming that future hardware were good enough, would be able to do just that.
c
Or 5GBASE-T-minus-overhead if it's not much more expensive 🫰
m
Yeah, indeed
I mentioned the Intel part because it's $2 😉
But it is only 2.5GBe.
c
The only SATA is a PCIe adapter on the TP2; no SATA goes through the socket.
m
Oh! We don't use that bit of the RK's stuff at all 🙂
d
I'm trying to catch up and you got me lost here. What do you mean by this?
m
Oh, just that we'd use one of the PCIe IFs of the RK3588 for an i225v or similar, and replace the exported 1gbase-t interface with a 2.5gbase-t interface.
d
With TPi2 the thing is all modules are treated more or less the same, I guess. So this is why there is ASM1061 (SATA controller) connected to Node 3 and some chip I forgot the model of connected to the Node 4 for USB 3.0
You can put 10GBASE M.2 modules on the back, if you wish 🙂
And have 10Gb/s network between the modules 😛
m
The idea is what substitutions could be made to make it possible in a TPI2.1
d
But yes, the only way to get more than 1Gb/s would be to connect something to the PCIe
m
So the idea would be to replace the RTL8370MB with RTL9301 or something like that
m
and then add an i225v, or perhaps a marvell thing to the RK1.1, and then send that over the pins used for 1GBASE-T interface.
Nah, nothing here calls for a PCIe switch.
d
This would mean no Mini PCIe /SATA / USB 3.0 unless you also add a PCIe switch
Well, do you need the ethernet overhead instead of direct host-to-host communication? 🙂
m
Forgive me, I thought there were 4 PCIe interfaces on the RK3588?
d
There are 2 - 1x and 4x
m
I thought there were 3 PCIe2.1 and 1 PCIe3?
c
Time to check the datasheet
d

https://cdn.discordapp.com/attachments/1090193720294522900/1137491087670390804/image.png

c

https://cdn.discordapp.com/attachments/1090193720294522900/1137491165332131880/Screenshot_20230805-150328.png

So 1x2 + 2x1 might allow the i225v upgrade
m
What about the Combo PIPE PHY interface?
d
I'm reading about it right now
m
I was guessing that we were using one of these lanes for the M.2 feature 😉
d
M.2 uses PCIe3 x4
m
Oh, so we're using these lanes for the other features, then
d
I guess there may indeed be 2 more PCIe 2.1 lanes that I was unaware of
m
Well, maybe just 1 of them for each, because it doesn't look like they know about bonding these.
d
Just one of the PCIe 2.1 would be in use then, unless there's some RK1 design choice I'm not yet aware of
m
And so there'd be room to use another on the board. You'd just have to find room on it 😉
d
looks like there is indeed 1x PCIe 3.0 x4 and 3x PCIe 2.1 x1
m
It has a lot of connectivity for a tiny SoC 🙂
d
The room is going to be a problem; as far as I know it was a challenge to fit everything on a module of this size
I would not be surprised if the additional PCIe lanes are exposed on the connector for future use
c
What are the i225v's cooling requirements?
m
It's got 1.8W TDP, I think?
1.3, sorry
i226v is the norm these days
c
Might be able to squeeze it on the back of the module somehow, and just go bald (no heatsink), but still that might be hard to fit.
m
7x7mm
And expected discontinuance in 2032.
c
Integrated PHY too? It goes straight from PCIe to 2.5GBASE-T?
m
Yep
It says "NBASE-T" as the iface supported, so yeah, direct to PHY.
I'll just check the BOM of a card that uses it, but I'm pretty sure there's no external PHY
There's also the RTL8125BG, which seems to have fewer reported issues.
Looking at the card, it definitely has an integrated PHY, but it's harder to tell what the TDP is.
Yeah, it seems most of the bugs in the Intel thing are related to "EEE", that is, the energy saving part of the spec, so I'd say that the realtek chip is actually the better of the two, probably, assuming that the TDP is reasonable.
Oh, found it - https://datasheet.lcsc.com/lcsc/2205121200_Realtek-Semicon-RTL8125BG-CG_C3013605.pdf - p44, maximum operating supply current. That gives a TDP of about 1W, probably less under the specific conditions that we're running.
b
For a future TPi I would like the node LEDs not to be on the board, or at least to have some front-IO header for the node LEDs, so they can be shown externally on a case. Instead of having LEDs on the board, it would be more efficient to simply have an 8-pin, 4x2 connector (STATUS+GND for each node) where you connect your own LEDs, maybe even saving on some production costs.
s
Blinkenlights!

https://cdn.discordapp.com/attachments/1090193720294522900/1146499562148724757/image.png

t
😀
b
Support for a PWM fan connector with a temp sensor, instead of the current 2-pin 12V connector, so you can hook a Noctua fan controller directly to the motherboard
d
What he said. I ordered a Noctua fan for my case while waiting for the board and now trying to figure out how I'm going to make that work.
t
Got the whole "Thinking Machines" thing going.
c
https://github.com/turing-machines/BMC-Firmware/pull/125 may be of interest to you and @blackphoenixx85
b
@CFSworks thanks. Will look into it. Right now I have a noctua FC1 controller so I can control it manually
d
Thanks for that update. I don't know where that leaves me, since I don't understand the gestalt of it, but I'll take it that I'm SOL and should look at the FC1 controller. Thank you!
c
If you're comfortable soldering, you can order an EMC2301 chip and a Molex 4-pin fan connector and get PWM fan control from the BMC with your current TP2 board, if you want to go that route (I did; it works well for me).
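Once the chip is on the BMC's I²C bus, driving it takes very little code. A sketch based on my reading of the EMC230x datasheet; treat the address and register values as assumptions to verify (EMC2301 at 0x2F, Fan Setting register 0x30):

```python
# Set the EMC2301's PWM duty directly from the BMC over I2C.
from smbus2 import SMBus

EMC2301_ADDR = 0x2F      # fixed 7-bit address per the datasheet (verify)
REG_FAN_SETTING = 0x30   # direct fan drive register, 0x00..0xFF

def set_fan_duty(percent: float, bus_num: int = 0) -> None:
    raw = int(max(0.0, min(100.0, percent)) / 100 * 255)
    with SMBus(bus_num) as bus:
        bus.write_byte_data(EMC2301_ADDR, REG_FAN_SETTING, raw)

set_fan_duty(40)   # e.g. 40% duty
```

The chip can also run a closed-loop RPM mode against the fan's tach signal, which is the nicer long-term setup.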
d
I've never soldered anything before, so, not excited with that idea. In Geerling's "Racking" video from last year, he made it sound like the four pin connector he shows would have pwm control when the board went into production. "I know they're working on it." He was using a Noctua fan.
b
I went with Noctua Chromax 140mm Fan with a Noctua FC1 Fan Controller
j
Can you elaborate more on this setup?
d
I ordered the Noctua FC1. I have two Noctua 120x25 fans, one of which I'm going to return to get a thinner one (25mm > 15mm) to put at the top of the case after I order a fan bracket. I'd like to put a fan over the motherboard, but would need standoffs and a larger fan. Maybe with the top and bottom fans and the heat sink fans the thermals will be fine.
p
Awesome. What expansion card are you using?
t
For Nodes 1 and 2, install a Mini PCIe card containing either an ASM1061 (2 SATA ports) or an ASM1064 (4 SATA ports). Stay away from the ASM RAID cards; they don't work. ASM1061-based mPCIe cards are 2230; ASM1064-based mPCIe cards are 2242. With the latter you'll need to insulate or remove the standoff at the 2230 position to prevent it from shorting solder points on the bottom of the card. Node 3 has access to 2 SATA ports from the on-TPi2 ASM1061. Note: you may need to install the AHCI/SATA driver package. The symptom is Linux not seeing the SATA drives while "lspci" shows the presence of the ASMedia controller
p
Thanks Dan! When I get the RKs I want to set this up in a similar way. I want the ability to backup/RAID my drives too.
s
I need to finalise the software and push it to GitHub, but it's basically the same as using the GPIO pins to drive a directly-connected fan, just that the fan's actually connected to the furthest node...
p
Would love to see the SATA / PCIe / high-speed USB 3 etc. shareable/assignable to any of the 4 slots through firmware, just like we assign the USB 2.0 port to any of the slots. Dunno how feasible this is, but right now having the SATA stuck on 1 node is limiting....
t
I'd like to see a method to connect from the BMC serial port to those of the nodes. I have an OrangePi One sitting on my M-iTX case, connected to a USB serial cable. I can SSH into my OrangePi (Debian) and use screen to connect to the BMC so it's acting as a serial terminal server. It would be nice to see a similar method to connect to the node serial ports while attached to the BMC serial port.
s
You mean like a built-in KVM switch? I don't think that's within the hardware capabilities of the current TPi board, though I could be wrong. (Don't think so, though.)
t
Not quite a KVM; a serial switch connects multiple serial ports to Ethernet. I used to use them to connect to VAX servers. You can still purchase a dedicated one, but they're very $$$$! Perle still makes them. They are also used today for out-of-band communication with physical network equipment in some datacentres: https://www.perle.com/products/iolan-stg-terminal-server.shtml I could theoretically connect a device to all 4 of the other UART ports on the board via GPIO, and as long as that device was functional I could SSH into it and use screen to connect to the other devices' debug ports. StarTech does a 4-port-to-USB one which I suppose I could connect to another Raspberry Pi: https://www.startech.com/en-ie/cards-adapters/icusb2324 Or I could just get 4 UART-to-USB cables and connect them directly to a Raspberry Pi... It would be nice if that functionality could go into the next revision, TPi 3 or whatever
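The software side of such a serial terminal server is small. A sketch of one UART-to-TCP bridge (run one instance per node UART), assuming pyserial and hypothetical device/port names; in practice a tool like ser2net already does this job:

```python
# Expose one debug UART on a TCP port, telnet/nc-style.
import socket
import time
import serial  # pyserial

UART_DEV = "/dev/ttyUSB0"   # hypothetical: one node's debug UART
TCP_PORT = 7001             # e.g. 7001..7004 for nodes 1..4

uart = serial.Serial(UART_DEV, 115200, timeout=0)   # non-blocking reads
srv = socket.create_server(("0.0.0.0", TCP_PORT))

while True:
    conn, _ = srv.accept()      # one client at a time
    conn.setblocking(False)
    with conn:
        while True:
            try:
                data = conn.recv(4096)
                if not data:
                    break       # client hung up
                uart.write(data)
            except BlockingIOError:
                pass            # nothing from the client right now
            pending = uart.read(4096)
            if pending:
                conn.sendall(pending)
            time.sleep(0.01)    # avoid spinning a core
```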
t
I'm hoping Turing Machines retains and populates the 4-pin, PWM fan header and the EMC2301 (at U109) fan controller on the v2.5 board. Retaining the existing 2-pin fan header would also be useful. Adding default PWM support would eliminate the need to acquire a separate and extra cost variable speed fan controller.
d
TPi2 v2.5 is going to contain a PWM-controllable fan 🙂
t
Excellent! Thanks for the info.
d
If it hasn't been requested already, please put the board's SD card on top of the board. It is a real pain to access once the board is installed in a case.
p
+1 on this, particularly if we plan to do firmware installs via SD card; it's going to be accessed more frequently, unless of course the official case makes it a breeze to access via the back plane, along with the M.2 slots
s
I used an SD extension cable so that I've got a tethered SD card socket elsewhere in my case!
g
@DhanOS (Daniel Kukiela) Is there any plan for a 2.6 version? I don't know, to include more bandwidth for Web UI flashing and improve times, or more capacity for the BMC, or to fix any known issues that exist right now?... I want to decide if I should buy now or if it's better to wait a little longer
t
Not that we have much data, but Turing Machines seems to have a roughly 14-month cadence for new TPi v2 board revs. The v2.5 version will double off-chip BMC NAND storage from 128MB to 256MB. It will also address the known "issues" with the v2.4 board: the RTC CR2032 battery; USB generally; system PWM fan control; and the Node 3 SATA controller #CLKREQ connection. At one time, doubling BMC (Allwinner T113-S3) RAM from 128MB to 256MB seemed possible with the T113-S4 chip. Unfortunately, that product is either in very short supply and/or has been sold to a third party.
s
Would be good to see replaceable modules for the eMMC / NAND storage: should they fail for any reason, it's not a simple job for someone to replace them at home when they're soldered on. Something like that would IMO work great for the RK1 (or I guess a "Lite" version with an SD card). Not sure what is out there for NAND though
k
I'd love to see an alternative adapter board release for the CM4 that connected to the M.2 slots. Having to deal with Mini PCIe to SATA connectors to connect with storage for all nodes is a pain and M.2 would be MUCH nicer.
l
Please remove the M.2 2260 standoffs. I want to mount some NVMe storage with heatsinks for my Orin NX modules, but to do so I need to remove the standoffs. I've seen others mention they cannot use double-sided NVMe for the same reason. Removing them is currently very difficult, requiring some very hot desoldering work. In their place you could add through-holes with removable nut and bolt-style standoffs, like on this PCIe adapter. https://www.amazon.com/dp/B09VGTMX7W
d
We may make such an adapter in the future, but for now, there are no plans to do so.
This is one of the changes made with Turing Pi v2.5 board: https://docs.turingpi.com/changelog/turing-pi2-v25-list-of-improvements
r
So when is v2.5 coming out? I still haven't received mine yet. Would I get a 2.5 version?
d
The current estimate is February. And yes - you'll get the updated board revision.
s
... any news yet on offering a (cost-price?) replacement (or at least a solid discount?) to existing backers/owners who would like to upgrade to the v2.5 product?
s
As a customer with a 2.4 board, I'm not sure how that would work economically. The new board would still have to be built at the same cost, and there aren't any good mechanisms to recycle or reuse components between the boards that I can think of. I don't know the extent of the changes, but undoubtedly they involve alterations to the internal traces of the board, and if upgrading functionality in place on the earlier boards were possible, the community would no doubt have tried it already. So apart from the desire for a good-will discount for some reason, I don't think there is a good path for the project to offer such an upgrade. Am I wrong? Are there any parts worth recapturing from a returned previous-generation board that would justify the effort of removing them?
s
Yeah, I was thinking of a good-will discount - I doubt that remanufacturing v2.4 boards would be at all economic, if even feasible...
r
Is it possible in a future revision to duplicate the functionality of Nodes 3 and 4, i.e. both get 2 SATA and 4 USB ports? Or each node gets 2 USB ports and 1 SATA?
I am sure hoping the CM5 will fix the M.2 NVMe port issue.
d
There is only a single PCIe lane available per node. Node 3 connects a SATA controller to it (which exposes 2 SATA ports) and Node 4 connects a USB controller (with 4 USB ports). To add multiple devices, a PCIe switch and more PCIe controllers would be necessary. This is, of course, possible, but currently there are no plans for such features.
t
Just a random idea for some future TPi v(?): replace the built-in SATA and USB 3.0 ports on Nodes 3 and 4 with mini PCIe slots. I don't know whether there is enough board real estate to pull it off, but with the v2.5 board improvements, this change would make for extremely flexible I/O configurations. There are tested mPCIe boards offering 4x USB 3.0, 4x SATA or, hopefully soon, 2x 2.5Gb Ethernet. I got this tonight: https://a.co/d/fERX2jw @DhanOS (Daniel Kukiela) we in the US can order it from Amazon; apparently Amazon EU is out of stock. If a contribution for QA/support would be of value, DM me.
d
Interesting idea, and I've thought of this too. We'll keep it in mind, but I have also thought of using PCIe switches. As for this network card: yes, I might need to buy it off AliExpress. I would like to test one myself too, but yes, I'm open to seeing your test results as well.
I've also tested multiple USB 3.0 and SATA Mini PCIe cards myself, so indeed they could be used
u
considering the board real estate, would this require a significant redesign?
d
What do you mean by "this" exactly?
u
I mean the requirements to fit an mPCIe slot for each of Nodes 3 and 4
d
There's really no space for them currently
u
As far as I can tell from the current board layout, trying to add 2 mPCIe slots for Nodes 3 and 4 would result in a completely new board that doesn't fit the Mini-ITX form factor
At that point you might as well call it Turing Pi 3
d
It still may be possible to fit them in the Mini ITX form factor, I have some ideas and the engineers will have some too
Other than that it's hard to say anything else. I personally like the idea
u
If the current form factor doesn't have space, a new form factor like microATX (244 × 244 mm) wouldn't be so bad
The LicheeCluster board is still using the mini-ITX form factor though
d
People have been asking about a board that can carry more modules.
I, of course, don't have any information about the possibility of such a board from Turing Machines existing in the future, but if it happened, it most likely wouldn't be Mini-ITX, if we want more functionality than just packing 7 modules onto a Mini-ITX board
u
I think so as well; if one is willing to use a form factor as massive as ATX, you might as well pack in some additional features beyond increasing the number of compute modules
d
Yes, I mean storage, PCIe/NVMe, SATA, USB, MiniPCIe, like on the TPi2 right now. And the BMC chip, of course
u
With how the current compute modules are set up (perpendicular to the board), I wonder if it's possible to reach the number of expansion slots the Compute Blade has when using the ATX form factor?
Especially the Zymbit in the expansion module slot
d
Compute Blades, if nothing has changed, are boards for CM4s, and there's really not that much expansion possible because of the single PCIe lane
u
Yeah, the single PCIe lane of the RPi really limits its full potential, but with different compute modules in the Turing Pi there's still room for improvement
d
Of course, there's a lot that can be done still
u
If it's possible to add different PCIe cards like NICs, DPUs and GPUs, this ATX super-board would definitely get attention
d
You can do that with both Mini PCIe and M.2: the former exposes a single PCIe 2.0 lane and the latter up to 4 PCIe 3.0 lanes (depending on the module)
u
But it would need additional accessories like a breakout board & cable. The ATX super-board might as well add a PCIe slot, even if it means one less compute module, right? Who knows 🤔
d
Well, if you pack multiple modules on a board, you won't have space for full ATX cards anyway, IMO. Mini PCIe is more practical and can be adapted to full PCIe if you need that
u
I agree. I wonder, if this board ever gets released, whether it will change how the compute module ecosystem works. I mean, will it encourage manufacturers like Radxa to try to match the specification to fit into the Turing Pi ecosystem?
As far as I know, the current compute module ecosystem is very fractured, with each manufacturer having their own carrier board and different OS
d
The OS is a module thing. And the boards will obviously differ, since the goals might differ
The above are my own thoughts; I obviously don't have any info I can share publicly about future TPi boards yet
But a brainstorm like this can yield interesting ideas and creates many things to think on later
u
But ultimately it would depend on whether someone is willing to fund this ATX super-cluster board
@DhanOS (Daniel Kukiela) Btw, do you guys have any plans to experiment with RISC-V? Rockchip has some impressive CPUs, but on the RISC-V side there's SophGo as well.
Although they don't have any impressive SoCs for the international market yet
d
No information on that. And I haven't seen many impressive RISC-V chips yet, though there are some really nice ones, like those Jeff covered in his videos (the massively multicore ones)
u
As far as I know, SophGo is about to release their 2.5 GHz 16-core RISC-V CPU, the SG2380
Whether it will have an LTS Linux OS is another story; they're still designing the SG2380 architecture
Most RISC-V CPUs use cores from SiFive, who have just announced their P870/P870A core (expected clock speed up to 3GHz); I haven't seen any CPUs that use this core yet
t
I did see that forum thread. For my purposes, routing through a node adds complexity and creates a single point of failure; I'd rather use a separate device, but YMMV. Regardless, I now have a CM4 node running the latest Raspberry Pi-distributed Ubuntu Server 23.10 (64-bit) image. The IO Crest 2x 2.5Gb mPCIe card seems to be working fine. I'll try the latest Raspberry Pi OS (64-bit, Bookworm) as well. I also have NVIDIA Jetson Xavier NX and Orin NX modules that I'll try; there, I'm more limited by what NVIDIA supports. Any RK1 testing will have to wait until Turing Machines ships the 32GB modules I have on order (April-ish, I hope). Since I only have 1Gb switches on hand, someone else will need to do performance and stability-under-load testing. Update: I've installed and compatibility-checked Raspberry Pi OS Full (64-bit, Bookworm) with the IO Crest dual 2.5Gb mPCIe card. A required driver was initially not available, evidenced by "lspci" returning nothing. However, after doing "sudo apt upgrade" and rebooting, the appropriate drivers, including r8169, were present. The two interfaces appear in "ifconfig -a" as eth1 and eth2.
t
When I get my RK1’s I will have two spare CM4 so that’s also an option for me.
s
Perhaps add some/all of the mPCIe sockets to the rear of the board (which would also help to keep the traces short?) - potentially in place of the M.2 slots (since IIRC mPCIe to NVMe adapters are passive components)?
u
you mean on the other side of the board?
I think there might be issues with mPCIe cards that have antenna connectors though; it might add unwanted annoyance when installing those cards
I made a mock-up of a microATX equivalent of the Turing Pi 2, based on the image in the store and @Terarex (Dan Donovan)'s idea, capable of 6 nodes, where each node has its own mPCIe port. What do you guys think? https://cdn.discordapp.com/attachments/1090193720294522900/1202625636070330450/mATX-scheme.png?ex=65ce2367&is=65bbae67&hm=dcc1c84aa145f06106fb3c4b2365f5ab7a9c42b306624dafb7d5046a3f03e4a8&
It's still missing a lot of stuff, but I'm planning to add more
t
Glad to see it's possible. Considering the form factor change, I believe it would be helpful to put all USB ports and the microSD slot on the rear to ease access. The M.2 slots are fine on the bottom because they should only need to be accessed during installation or replacement. If Turing Machines is considering this type of product, the board and enclosure(s) should be designed and prototyped at the same time.
u
I'm thinking of moving the last 2 mPCIe slots to the same area as the other 4; I wonder if this form factor has enough real estate to pull it off though?
I'm also designing a mock-up for the ATX equivalent as well; any idea contributions are welcome
I also have that idea in mind, but I haven't added it to the mock-up yet, in case someone has additional features/ideas
Similar to the microATX, this form factor can have up to 8 nodes, in theory
@Terarex (Dan Donovan) If you have any ideas, just tell me more and I'll see if I can add them to the mock-up
But this is just a mock-up; I think there would be significant changes to the layout if Turing Pi decided to work on this board
t
Oh yeah. 😉 Definitely not a 2024 product.
u
Any other ideas for the mock-up besides moving the USB ports to the rear?
The side where the SSDs are installed has a lot of free space; I wonder if anything can be added there?
t
Yes, free surface space, but interior trace space may be a different matter. I don't know how many layers there are in the TPiv2 board. More board layers increases manufacturing cost and complexity.
n
can I request that the resistor values for the power/link activity LEDs (LEDs in general, tbh) be bumped up to more like 1K from the ~470R I assume they currently are? they're eye-searingly bright :/ gonna have to swap them out for higher-resistance ones on my board
t
make sure the mounting holes are in the correct, standard locations.
u
Do you have any idea where the Ethernet switch could be moved on the mock-up?
I'm thinking the mock-up should have a 2.5Gbps switch chip, and that requires space and mounting holes for a proper heatsink
n
skip mPCIe imo and just go right to Key-E or Key-B M.2 slots for WiFi/5G modules; that gives you room for one per module on the right. Put the switch chip where you currently have the BMC, with its over-the-top mPCIe
t
In theory, that would be nice. Do currently available SoMs support 2.5Gb Ethernet without the addition of a separate controller? More interesting to me would be a PCIe switch.
n
do you mean a pcie switch or a pcie matrix multiplexer (ie route different lanes to different modules)
and no, current modules have 1G PHYs on board, and while Rockchip originally expected the RK3588 to handle 2.5G on one of its MACs, that didn't pan out in reality
Orin Nano/NX are the same
t
A PCIe switch is what @DhanOS (Daniel Kukiela) was thinking. I don't think any available SoM has RDMA capability, but having a switch could make NUMA an option.
n
no currently available PCIe switch chip that's remotely close to cheap enough to put in a board like this has multi-host/NTB support. You could potentially use EP mode on three RK1s and host mode on one, or daisy-chain them with 2 lanes up and 2 lanes down like Mixtile do, but a generic switch doesn't help you here (they only support having one host)
and afaik the best you can do with the RK3588 PCIe endpoint driver right now is a pretend-o-ethernet
t
Yeah. It's complex. In a past life, I worked with both SGI UV and Cray XC systems, AKA supercomputers.
n
the problem is that pcie is a host-device link, host-to-host links require Shenanigans™️ like non-transparent bridges
and chips which can handle four x4 links to hosts are few and far between
RK3588 is capable of being a PCIe endpoint (device) on the 3.0x4/x2x2/x1x1x1x1 controller so you can build a sort of ring bus
that's what mixtile are doing here https://www.mixtile.com/blade-3/ - two module link works by just putting one into device mode, the 4-module stack does two counter-rotating 2-lane rings basically
t
Yeah. I'm thinking RoCE. Infiniband involves yet another fabric.
n
technically if you're using PCIe EP mode you already have RDMA
but yeah one of the most straightforward and common ways to use PCIe endpoint mode, or a PCIe NTB chip, is to present a virtual ethernet interface with RoCEv2 support
but its all down to what you do with those links
and in any case what mixtile did is not the sort of thing you could put on a general-purpose cluster board because it's highly specific to the SoC used on the module 😦
you could look at doing something like a scaled-down version of https://www.dolphinics.com/products/PXH810.html and an external pcie fabric switch, but small chips for that role just don't exist and the pricing is extremely prohibitive
looks like of currently available chips the smallest is the Microchip PM8561
which is US$100 in 1ku quantities
ah, nope, sorry: that one only has two NTBs; you'd need the PM8531B/PM8571, which is $110-130, which is probably about the whole BOM cost of a TPi
t
Yup. NVMe-oF could be more interesting from a storage perspective. Before I retired, I worked in the storage systems group at Western Digital. I was hoping they would make major advances with the Kazan Networks technology, but WDC has OEM HDD DNA. (sigh)
n
yeah WDC are making some interesting choices lately... anyway this is a bit off topic for the thread
t
Dan, I can hook you up with some SGI gear if you need/want a "fix". Challenge XL, Desksides, Octanes, Indigo 2s, Indigos, Indys, O2s or how about a Fuel.
t
Thank you, no. While rewarding monetarily at the time, I have other challenges to expend my time on. Not the least of which is this tiny, very loosely coupled, four node cluster. 😉
u
as much as I like the M.2 form factor for its higher speed and more PCIe lanes, the fact that you can plug any mPCIe card in without worrying about which socket is compatible with which type of card makes replacing mPCIe with M.2 strange, IMO
I wonder if B-key or E-key has enough types of expansion cards to warrant this replacement
I'm thinking of adding a B+M M.2 slot on the left, for SSDs and full-size (80mm) AI accelerators
I wonder if it's possible to add, since there are physical buttons and UART pins in that area on the current board
s
Do you know anyone who has done this? How does the cabled setup compare with the Mixtile Cluster Box and its custom backplane with respect to PCIe connectivity?
n
B-key is every M.2 LTE or 5G card and most 2-lane NVMe drives; E-key is WiFi
that is the mixtile cluster box
it's using the 4-lane PCIe PHY in 2x2 mode with one pair of lanes in PCIe endpoint mode and the other pair in host mode
two lanes up, two lanes down
if you want it to only support NVME drives you do key-M since all 2-lane M.2 NVME drives are key-B+M or key-M
also keep in mind that on these boards you have one 3.0x4 PHY and two 2.0x1 PHYs to play with
less if you want SATA or more USB3
i had a better one somewhere
u
Yeah, but since this is just a mock-up for a board that might exist in the future if Turing Pi decided to work on it
Hopefully by that point the SoMs and compute modules will have improved even further
n
they kinda already have but if you want better you have to spend $400-600/module on Orins
and since Rockchip still haven't even announced the RK3899 or whatever they end up calling it it'll be a While™️
u
The only reason I was thinking about it when I added this slot was the potential use of AI accelerators like the Hailo-8 M.2 AI Acceleration Module or the Axelera M.2 AI Edge Accelerator Module; the SSDs are just an afterthought https://www.axelera.ai/wp-content/uploads/2023/09/M.2-AI-Edge-accelerator-Module.pdf https://hailo.ai/products/ai-accelerators/hailo-8-m2-ai-acceleration-module/
As much as I like Orins, I'd like to wait for more potential releases from Rockchip or RISC-V manufacturers
n
fwiw these also won't work with the RK3588, and its internal NPU is about as powerful in reality (Hailo has memory bandwidth issues)
u
Where did you read about Hailo's memory bw issue? Can you send me a link?
n
I'm not aware of anything public but it's a limitation of any AI accelerator without local high-speed memory; Hailo-8 only has a small amount of local on-die SRAM to work with
anyway -> #754950670645329920
u
An additional thing I want to see in the future is a Type-C PD Pico PSU
t
I was looking for such a beast earlier…
I don’t know if it’s an idea that’s been vetted by EE best practices, but I did think about it myself some time ago
u
On SBCs, it's quite common to use USB-C to power the device, what's the difference in using this for the Pico PSU?
As far as I know, the board doesn't draw that much power either?
Is it possible for each node to have 2 SATA ports in mATX/ATX form factor?
I got this idea from Ambedded's Mars 400PRO
Each node in this server has eMMC, an SSD, and a SATA HDD or SSD as an OSD disk
t
That depends upon the CPU/architecture of the node in question; usually the limitation is how many PCIe interfaces there are
This can be alleviated by a PCIe switch chip per node… but I'm not experienced enough to say how complicated that would be or how much overhead there is in using such a switch
The board doesn't consume much, nor do the CM4 nodes. You are talking about using four nodes in the current TP2, however. If you use NVIDIA nodes or RK1s along with all NVMe slots filled, mPCIe slots used, and SATA storage devices attached, the combined current draw could be a problem. Newer USB PD specs cover the larger wattage ranges, but are more expensive. I still think it's a good idea!
tl;dr higher current/wattage draw is the reason I've read for USB PD pico-PSUs not being a thing for now.
u
I think it's almost the same as Node 3? Node 3 already has access to an M.2 and 2 SATA ports (depending on the compute module)
The main issue here is whether it's possible to fit additional mPCIe expansion cards for the other nodes or not
Or the simple solution is to just use an mPCIe-to-SATA card, with each node having its own mPCIe slot, like @Terarex (Dan Donovan)'s idea
But doing that without adding SATA would be wasting the storage interface on the RK3588
Trying to cram 8 nodes x 2 SATA ports/node = 16 SATA ports into an ATX form factor would end up taking all the space for additional I/O for any other purpose
t
Using a compute module with more PCIe interfaces or PCIe switches would work
u
There's also the problem of available space on the board to fit the mPCIe slots
From what I've seen so far, longer mPCIe cards might not fit if the rear of the board has I/O ports like SATA etc.
j
could there be a way to slot mpcie vertically, like full size pci?
u
I highly doubt it, and it introduces a bunch of problems with card stability and connection reliability
t
some sort of riser card perhaps.... but overall I think it's asking for trouble to mess with all of this 😉
a lot of redesign work would be involved
Hypothetically, you could connect something similar to this to a picoPSU DC input, but this example is limited to 100W; You’d need an IC that supports the full USB PD 3.1 spec including 36/48V to support a higher wattage. (Turing’s Pico-PSU is 160W rated.) https://learn.adafruit.com/adafruit-husb238-usb-type-c-power-delivery-breakout/overview
You’d likely need a high rated USB-PD power supply and a high current rated USB-C cable
u
I know the DIY solution will definitely work if I have the skills to pull it off
But I would like to see such product in the future as well, it might be able to use in other motherboards as well
That's definitely a requirement if you don't want to burn your house down
t
there are folks in the SFF PC community that have been asking for such an animal for years now… myself included
I have a Flex ATX or similar inside my 2U enclosure for my TP2 atm
u
Any plan to upgrade to redundant PSUs?
t
I’m a hobbyist so not at the moment, my Dell R530 only has one PSU…
u
PowerEdge R530?
t
Yup
u
Do you use all the HDD slot of the case with TPi?
t
None atm
u
And also the possibility of cramming 2 SATA ports for each node while retaining the expansion card slot for each node
t
My original suggestion was intended to preserve compatibility and improve I/O expansion flexibility, while cost-reducing and simplifying the board by removing built-in SATA and USB from nodes 3 and 4. Let's try to KISS.
u
I'll agree on the KISS aspect for future versions of the board
According to the specification, the RK3588 has three combined PCIe 2.0/SATA 3.0 ports
Unless Turing Pi decides to utilize the PCIe mode of the SoCs' combined SATA/PCIe interfaces (if the SoCs have them)
Removing the SATA ports on the ATX version would be wasting the high-speed interfaces of the SoC
t
@_dr.smart After doing a little research, I found that the Jetson Xavier NX pin mux supports 1x + 4x PCIe lanes. While the RK3588 and the Orin NX support more PCIe lanes, the SoM edge connector does not.
t
How much overhead is there when using a PCIe switch anyway?
(Besides the usual “it depends” 😉 )
u
In short, the number of available PCIe lanes is limited by the SoM's physical connector?
t
Yes. I suppose signals not used on the TPiv2 board could be reassigned, but that would break compatibility. Turing Machines needs to maintain compatibility with the Jetson Xavier NX pin mux in order to sell RK1s beyond their captive motherboard community. I don't work for and am not compensated in any way by Turing Machines.
u
Is there any way to overcome the 260-pin edge connector limitation?
t
I wouldn't call it a limitation; it is as-designed, to be compatible with NVIDIA Jetson SoMs. The Mixtile Core 3588E SoM claims to be Jetson TX2 NX plug-compatible, so it probably uses the same or a similar pin mux. The Firefly Core-3588J SoM uses a different edge connector format (https://en.t-firefly.com/product/core/core3588j), which supports all the RK3588 features; it is essentially proprietary at this point in time. Frankly, the RPi CM3/CM4 and the NVIDIA Jetson NX are the only SoMs with enough market presence to be considered de facto standards. Turing Machines chose the Jetson NX connector because they could build a CM4 adapter for it. There are various other CM4-style modules that can (somewhat or fully) work on the TPi v2 CM4 adapter.
u
Both RK1 and Jetson use 260-pin SO-DIMM connector
But the Firefly Core-3588J uses MXM3.0 314 pin connector though?
t
Yes, a completely different interface. Mixtile is doing their own 4-node cluster employing a PCIe switch and U.2 form-factor NVMe SSDs. NVIDIA preserved some degree of compatibility with pre-Orin carrier boards because there is a large installed base that they wanted to sell into. The post-Orin NVIDIA SoM may break that compatibility.
u
@User But do more pins on the physical connector, on both SoMs and carrier boards, mean more SoC features can be utilized?
@User Since, as far as I know and from what I understand from your comments, the PCIe lanes were limited on the RK1 and Jetson Orin for two reasons: 1. backwards compatibility with older Jetson products in Orin's case, and compatibility with the Jetson ecosystem in the RK1's case; 2. the number of physical pins is limited, so some high-speed interfaces have to be sacrificed?
w
With my RK1 delivery I finally got to setting up my TPis, and there are a few things I'd like to see changed (they may be listed in this thread; I didn't read all the posts):
1. The BMC SD card should be accessible on the top of the board or via the rear panel. I only realized I'd need to remove and reinstall the SD card a couple of times to update the BMC after installing in the rackmount chassis. Being rear-accessible would help quite a bit.
2. Support dual-sided NVMe devices. The glued-in screw terminal for the shorter size effectively prevents installation of a long, dual-sided NVMe device. Use flush-mount screw receptacles with a movable standoff placed at the correct size.
3. Support 110mm M.2 drives. There are few consumer drives but quite a few enterprise M.2 drives in this size. Lean into edge/SMB use cases and support the enterprise variants (the bigger size is frequently required for power-protection capacitors).
4. It'd be interesting to have a PoE++ input option. I expect it'd be difficult to fit in the available space and may require a "hat", similar to how Pis themselves get PoE power. PoE++ supports up to 60 or 100 watts. Definitely not a need-to-have.
t
#1) agreed. #2) I don't know whether the v2.5 board utilizes fixed or removable standoffs at the 2280 locations. Hoping every standoff, including the mini PCIe ones, is removable. That way, alternate lengths can be deployed to easily support dual-sided SSDs with heat sinks. I desoldered all the standoffs on my v2.4 board because I have dual-sided M.2 NVMe SSDs with 3rd-party active heat sinks. 4TB TLC M.2 SSD variants seem to be dual-sided. QLC would fit on a single surface, but QLC has relatively low endurance and limited sustained-write capability. #3) Probably not in the v2.X board series. #4) Good idea.
u
Did you use this board without a case? I think that if you're using TPi + SSDs + heat sinks, they wouldn't fit in the case.
t
No. I'm using a Mini-ITX case that has been modified to use 25mm board standoffs. That is just about as long as you can go and still use a custom I/O shield.
u
PoE++ would be a good idea for a board with 4 nodes (6 at best), but beyond that, like an 8-node ATX board, I don't think it's the best idea
How resilient are 110mm M.2 SSDs compared to shorter ones?
t
The discussion was related to a next generation TPiv2. Probably beyond v2.5, except for the mini PCIe and M.2 NVMe standoffs. A microSD extender can be used to address BMC microSD card accessibility.
At one time (pre-3D NAND), 22110 M.2 NVMe SSDs were envisioned as an enterprise product. They never took off and generally have been replaced by other, denser, form factors for enterprise storage applications.
u
With all the news about consumer motherboards using Intel's ATX12VO connector, I'm wondering whether this would be a good idea for a board like the TPi.
Additional space would be required for SATA power connectors, since the motherboard would do the power conversion instead of the PSU, thus increasing manufacturing cost
w
I work for Micron. 110mm M.2 is still used in enterprise for boot drives and is expected to be used for quite a while. E1.S isn't expected to replace boot drives for 3+ years
You can get 80mm enterprise drives (by enterprise I mean drives that have capacitors to ensure the cache is written in the event of a power failure). 110 is still "prevalent".
u
The only drawback of this form factor is that it's bigger than M.2, which isn't ideal for surface mounting on motherboards with limited space. But is it possible for E1.S drives to replace M.2 drives in the consumer market, or is this just a thing in the enterprise/server market? As far as I know, there are already PCIe adapters for 5.9mm- and 8mm-thick E1.S drives
w
My point is really that 110mm M.2 will be around for at least a few more years due to the enterprise use cases. E1.S is unlikely to replace M.2 consumer drives, and consumer drives are unlikely to use the 110mm form factor. I think the only "consumer" use cases for E1.S will be in things like Icy Dock carriers. That actually raises a point: it might be nice to have PCIe ports on the top of the board, similar to the new PCIe connector on the Pi 5, to support backplanes or other add-in-card use cases.
u
@Terarex (Dan Donovan) is also suggesting each node have its own mPCIe slot for expansion cards
Re #3: one minor issue with the current SSD layout is that on future boards with a larger form factor like mATX/ATX, 110mm drives would block some mounting holes or cause issues with standoffs
m
not sure if someone mentioned it yet, but you should upgrade the switch's internal bandwidth to 2.5Gbps or 10Gbps
t
That's unlikely. Getting the documented changes for v2.5 to work reliably has already required one board respin (https://discord.com/channels/754950670175436841/754950670175436848/1213465607492734977). Hoping everything on the second version (v2.5.2?) works properly. Otherwise, there will be another ~2 month delay.
f
I'd love it if the BMC got more than 100Mbit outside connectivity. It's a pretty big bottleneck when flashing. And yeah, either [LACP](https://github.com/turing-machines/BMC-Firmware/issues/173) on the switch port, or maybe some of these sweet 2.5Gbit ports 😆
t
The onboard switch chip, RTL8370MB-CG+, supports 1Gbit connections. The external connection speed is automatically negotiated with an external, upstream switch. If you're only getting 100Mbit, check the cable and switch.
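For example, you can read what the interface actually negotiated straight from sysfs. A minimal sketch, assuming the standard Linux net sysfs layout and that the switch-facing interface is named eth0 (adjust if your firmware names it differently):

```python
# Read the negotiated Ethernet link speed and duplex from sysfs.
# Note: reading "speed" raises OSError if the link is down.
from pathlib import Path

iface = Path("/sys/class/net/eth0")
speed = (iface / "speed").read_text().strip()    # in Mbps, e.g. "100" or "1000"
duplex = (iface / "duplex").read_text().strip()  # "full" or "half"
print(f"eth0 negotiated {speed} Mbps, {duplex} duplex")
```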
r
You know one thing that would be helpful would be to include a Qwiic (I²C) and a default SPI connector (maybe a 5-pin JST) for each slot. There is no way to get to minor peripheral parts of these boards that can serve very useful purposes. Maybe a CSI/DSI connector also.
You know, I have an even better idea... make another slot with the same pinout that provides specialization: PCIe, NVMe (add-ons), USB, or SATA (onboard connectors). Maybe even on the same card. This would allow configuring it as all PCIe, or as a massive RAID / video processor.
f
The board is negotiating 1Gbit currently, yes, but the BMC only does 100Mbit
t
Okay. That's surprising since the Allwinner T113-S3 datasheet (https://bbs.aw-ol.com/assets/uploads/files/1648883311844-t113-s3_datasheet_v1.2.pdf) states the chip supports 1Gbe operation.
f
> [ 13.289919] sunxi-gmac 4500000.eth eth0: Link is Up - 100Mbps/Full - flow control off
t
Yup. As stated previously, the T113-S3 datasheet indicates the chip is capable of 1Gbps. However, the TPiv2 BMC firmware buildroot includes support for the older A20 GMAC. There is some discussion of the A20 GMAC implementation here: https://linux-sunxi.org/Ethernet. Since the T113-S3 includes the sun8i EMAC, the BMC firmware could use the Allwinner sun8i EMAC driver (DWMAC_SUN8I). Not certain I have the skills to integrate it, but maybe someone here could help with pointers. I also found this: https://lore.kernel.org/all/20230721134606.4505-9-andre.przywara@arm.com/T/. @User could you comment?
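In the meantime, anyone curious can confirm which driver their BMC kernel actually bound. A minimal sketch, assuming the standard sysfs layout and an interface named eth0; per the log above, current firmware reports something like sunxi-gmac:

```python
# Print the kernel driver bound to the BMC's Ethernet device, to check
# whether the A20-era GMAC driver or DWMAC_SUN8I is in use.
import os

driver_link = "/sys/class/net/eth0/device/driver"
print("driver:", os.path.basename(os.path.realpath(driver_link)))
```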
c
Sure, so, here's the situation as it exists today: There's the RTL8370MB-CG Ethernet switch, which has 8 gigabit UTP ports (with integrated PHYs, thus suitable for connecting directly to the two physical jacks, the Ethernet lines of each node, and the BMC's PHY). It also has 2 "extension" ports, which implement a *MII interface, which is useful for PCB designers who want to provide their own PHY or want a direct PHY↔CPU connection. The TPi2 doesn't use these extension ports.

Connected to port 4 of the switch is an RTL8201F 100Mbps PHY. It's the little chip between slot 1 and the BMC with the crab on it. This PHY only does 100Mbps operation, and implements the RMII interface. This interface requires 11 pins, which are connected to pins PE0-10 of the T113-s3.

To get a gigabit BMC↔switch link going, it is necessary to use a gigabit-capable MII, most likely RGMII directly to one of the extension ports rather than to a PHY (which saves a chip). The caveat is that RGMII requires 4 additional pins (the tx+rx lanes are widened from 2-bit to 4-bit each), and the PE11+ pins are in use for I²C, so some stuff would have to be shuffled around.

The other wild thing is that, if you look at the Allwinner D1 (same die, different package/blocks enabled) datasheet, the RXD3 signal is supposed to be on pin PE14, but the T113-s3 package only goes up to PE13. So to get RGMII going, alternate pins PG6-9 would most likely have to be claimed for that function... which means more moving stuff around.

I'm unsure if I can say more without violating NDA, however all of the above is public knowledge (pulled from the relevant part documentation and easily learned from studying the board itself). Still, if you have any more questions, ask away and I'll answer if I can. 🙂
t
@CFSworks thanks for that info. So, nothing we can do with the existing board or, likely, the v2.5 board.
r
blah blah blah... can only do 100Mbps just because. Wait till the next version
t
I've been thinking that some future Turing Pi cluster board should implement a fifth module socket for the BMC, with the same form factor and pinmux as the other modules. Conceptually similar to what Sipeed is doing with the RISC-V Lichee Pi 4A Cluster, where they're using an Allwinner D1 for the BMC. Turing Machines already has the IP for the CM4 adapter.
r
I was thinking the same thing for expansion slots next to the compute modules, with either M.2 or PCIe connectors (or both) and support hardware. All the components would be on the opposite side of the PCB, so the compute module sits in one direction and the expansion board faces the opposite direction.
a
Interesting ideas here, and I am just starting a build so I'm not really in a position to comment in an informed way. However, my point of view is a bit contrarian: for me it would be amazing to have a simpler board. Why not just have, for each node, M.2, USB3 x n, and Ethernet? The BMC should have USB and perhaps Ethernet. And, optionally, a fancy programmable USB hub to connect a node's USB to an external physical USB port? But my real point is that with each node having M.2, Ethernet and USB3, we can connect any "weird" devices - like a screen and keyboard 🙂 - to any node we like, make the design easier (fewer options), and perhaps make the board itself easier to optimise(?). Just my 2 pence (cents)!
r
Another idea to expand on that... built in USB KVM (selectable node) ...1 keyboard and screen to rule them all
m
This would be awesome, using something like the rtl8372 - but as said, the tolerances are tighter for 2.5g. The modules would need to be respun to include an rtl8125 (or rtl8126 - providing 5gb support for the future!), but that is doable if you choose to sacrifice either the usb2 or sata (those are multiplexed on the same pins). The other big downside is cost - rtl8370 is about $10-$15 in quantity, whereas the rtl8372 is about $50-$60, I think, and has a bigger power budget. If you went for full 10gbe support, due to backward compatibility, you'd need to include something like the Marvell AQC design, which is about $25-$40 in quantity, I think, and use a 10gbase-T transceiver - which all have a large power budget. The alternative would be using some kind of direct attach cable like thing embedded onto each daughter board with a corresponding place to attach on the carrier board. This is well beyond my knowledge of the theory, and I'm sure it would almost certainly require a fairly large amount of engineering effort, and would also cost considerably more (I'd guess about $100 more for the carrier board's BOM, and of course, the engineering cost needs to be defrayed as well, plus the per daughter board cost). So I'd say the best engineering compromise would probably be the rtl8126 on the daughter boards, since it's available in quantity for relatively little? But of course, 2.5gbe for the main board, because the rtl8372 is so much cheaper than an rtl9303 plus 5 10gbase-T PHYs.
u
I wonder, when the CM5 arrives, will the carrier board work with it, and will NVMe be available for those modules? Maybe too early to answer since the CM5 isn't even available!
s
Radxa has announced their CM5 and stated that it should work with CM4 carrier boards. I've ordered a few and will try them out.
m
Really, all I could ask for is more future-proofing. Faster Ethernet, and exposing more PCIe lanes even if the Pis don't support them. I'd trade versatility for having the same ports exposed on each node (NVMe per node). I can turn NVMe into SATA if I wish.
r
I agree that all nodes should have the same ports exposed. But PCIe, SATA, USB? That's why I suggested a riser board (slot) that sits next to the node slot and exposes certain port types depending on the riser board. Then they could all be the same, or you could mix and match. This would save space on the main board for things like USB console ports for all nodes, a multi-port switch, etc.
u
You mean an additional slot per node for a pluggable dedicated I/O board, or an I/O board "sandwiched" between the node slot and the SoM?
r
An additional slot per node for dedicated IO
Another IDEA.... build a full wireless router into the BMC, with its own DHCP server.
m
A really nice, small feature for us K8s users would be an HTTP or even TCP load balancer in the BMC, so requests can be forwarded to the nodes
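Roughly this kind of thing - a minimal sketch of a round-robin TCP forwarder such as the BMC could host (the node IPs and ports are made-up examples, not real TPi defaults):

```python
# Tiny round-robin TCP load balancer: accept on one port, forward each
# connection to the next node in the list. Node IPs below are hypothetical.
import itertools
import socket
import threading

NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]  # hypothetical
LISTEN_ADDR, NODE_PORT = ("0.0.0.0", 8080), 80
backends = itertools.cycle(NODES)

def pump(src, dst):
    # Copy bytes one way until either side closes.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    upstream = socket.create_connection((next(backends), NODE_PORT), timeout=5)
    upstream.settimeout(None)  # the timeout was only for the connect itself
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen()
while True:
    client, _ = server.accept()
    try:
        handle(client)
    except OSError:
        client.close()  # chosen node unreachable; drop this connection
```

(Proper HTTP-aware balancing would need header parsing or a real proxy, but plain TCP forwarding already gets you surprisingly far for NodePort-style services.)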
p
Is there anything you can do about finding a good solution for the TP 1 and 2 for PWM???? (We can't update every time TP ships a fix for a problem.) Thanks!
With the BMC - find a way to flash and finish the Ubuntu install with a wireless keyboard. We do this with the RPi Imager, why not with the TP?
t
If you're really handy with soldering tiny SMD components, you can add a 4-pin PWM fan header and EMC2301 fan controller to a TPiv2.3/2.4. The v2.0.5 BMC firmware contains the appropriate driver. The Digi-Key part numbers are:
- EMC2301-1-ACZL-CT-ND (controller)
- WM4330-ND (header)
The parts are inexpensive. Order extras just in case you ruin one or more.
p
Thanks! I'll try this. 🙂
t
Note there is no user interface for setting the fan speed. I use a small init.d script to set the fan speed to "5".
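Something along these lines - a minimal sketch assuming the EMC2301 registers as a standard hwmon device with the usual 0-255 pwm1 attribute; the TPi firmware's actual node name and value scale may differ (my "5" suggests a different scale), so inspect /sys/class/hwmon/*/ on your board first:

```python
# Find the EMC2301's hwmon entry and set its PWM duty (0-255 on the
# standard interface). Attribute names here are assumptions - verify them
# against your BMC firmware's sysfs tree before relying on this.
from pathlib import Path

DUTY = 128  # placeholder: roughly 50% on a 0-255 scale

for hw in Path("/sys/class/hwmon").iterdir():
    if "emc230" in (hw / "name").read_text().strip():
        (hw / "pwm1").write_text(str(DUTY))
        print(f"wrote {DUTY} to {hw}/pwm1")
        break
else:
    print("no EMC2301-style hwmon device found")
```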