Orin NX - general info and flashing
d
Orin NX in Turing Pi 2 - cooling, flashing, testing, USB keyboard and mouse

The improved and updated article can now be found here: https://help.turingpi.com/hc/en-us/articles/9767002356381-Orin-NX-flashing-cooling-configuring-and-testing
COOLING
To successfully flash and run the Orin NX module, you must ensure it's properly cooled. Once the dedicated cooling solutions are available, they will be the best option. For now, a Xavier NX heatsink + a thermal pad is a good, working option. I bought the Waveshare NX-FAN-PWM heatsink: https://www.amazon.com/Waveshare-Official-Compatible-Speed-Adjustable-Height-Limited/dp/B09TF5T12B
While the mounting hole layout and the bracket are the same, the height of the components on the boards differs slightly between the Xavier NX and Orin NX, so the Orin NX CPU/GPU core does not touch the heatsink - you can see the core reflected on the heatsink surface and the gap is clearly visible:
My solution was to remove the thermal paste and use a 0.5 mm thick thermal pad. To make the heat conductivity decent I used a `Thermopad Thermal Grizzly Minus Pad 8 (30 × 30 × 0,5 mm)`:
The thing to remember is that the screws are not going to screw all the way in. Put the pad between the CPU/GPU core and the heatsink and screw in all of the screws so the 4 square coils touch the heatsink.
I tested this solution under heavy CPU and GPU loads and the results were very good.
FLASHING
I tested flashing with bare-metal Ubuntu 20.04 LTS. As far as I know, using WSL on Windows or VMware Workstation Player (free) with Ubuntu 20.04 LTS should also work. If you tested such a solution, let us know! The installation process also assumes flashing in Node 2, since some users experienced difficulties flashing in Node 1.

Installation steps:
Install Ubuntu 20.04 LTS (this exact version, not 22.04 LTS or any other) on a PC.
Currently, SDK Manager does not support Orin devices, so we have to flash them "by hand". In the future, the whole process should be simpler, but for now, this is what we have to do.
Install the required packages:
```
sudo apt install -y wget qemu-user-static nano
```
Navigate to the Jetson Linux page at https://developer.nvidia.com/linux-tegra, click the green button for the latest Jetson Linux version and scroll down to the download table. From the Drivers section copy the links for both the `Driver Package (BSP)` and the `Sample Root Filesystem`. For example, for Jetson Linux 35.2.1 the links are:
- `Driver Package (BSP)`: https://developer.nvidia.com/downloads/jetson-linux-r3521-aarch64tbz2
- `Sample Root Filesystem`: https://developer.nvidia.com/downloads/linux-sample-root-filesystem-r3521aarch64tbz2
Download both files to, for example, the home directory - with the above URL example:
```
wget https://developer.nvidia.com/downloads/jetson-linux-r3521-aarch64tbz2
wget https://developer.nvidia.com/downloads/linux-sample-root-filesystem-r3521aarch64tbz2
```
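These archives plus the unpacked tree take a lot of space (later in this thread roughly 60 GB or more of free space is suggested), so it may be worth checking before unpacking:
```
# Check free space on the filesystem holding the current directory
df -h .
```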
Unpack the Driver Package (BSP) (again, using names from the example URLs above):
```
tar xpf jetson-linux-r3521-aarch64tbz2
```
Unpack the Sample Root Filesystem into the Driver Package (BSP) (`sudo` is important here):
```
sudo tar xpf linux-sample-root-filesystem-r3521aarch64tbz2 -C Linux_for_Tegra/rootfs/
```
Turing Pi 2 (similar to some other custom carrier boards) does not have the onboard EEPROM that the module or the flasher can access. The flasher, however, expects the EEPROM to exist as it does on the official Xavier NX carrier boards. We need to modify one file to set the EEPROM size to `0`.
```
sudo nano Linux_for_Tegra/bootloader/t186ref/BCT/tegra234-mb2-bct-misc-p3767-0000.dts
```
The last EEPROM configuration line says:
```
cvb_eeprom_read_size = <0x100>;
```
Replace the value of `0x100` with `0x0` (make sure not to modify `cvm_eeprom_read_size` instead - the name is similar but starts with `cvm`; modify the one whose name starts with `cvb` - `b` as in board):
```
cvb_eeprom_read_size = <0x0>;
```
Press F3 and F2 to save and exit
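If you prefer a non-interactive edit, a `sed` one-liner like the following should achieve the same change - a sketch assuming the exact line format shown above; double-check the file afterwards:
```
# Changes only the cvb_ line; cvm_eeprom_read_size is left untouched
sudo sed -i 's/cvb_eeprom_read_size = <0x100>;/cvb_eeprom_read_size = <0x0>;/' \
  Linux_for_Tegra/bootloader/t186ref/BCT/tegra234-mb2-bct-misc-p3767-0000.dts
```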
Prepare the firmware:
```
cd Linux_for_Tegra/
sudo ./apply_binaries.sh
sudo ./tools/l4t_flash_prerequisites.sh
```
Insert the Orin NX into Node 2 and install the NVMe drive for Node 2 (I don't yet know if that'll work with a USB drive on Turing Pi 2 - you could, in theory, use a Mini PCIe to SATA controller, but the bootloader would have to support it and I have not tested this possibility yet - if you happen to test this configuration, please let us know!).
Now, let's put the Orin NX device into Forced Recovery Mode:
- turn the Node 2 power off
- set Node 2 into device mode
- turn the Node 2 power on

I used the web interface, but you can also use the command-line equivalents:
- `tpi -p off` (turns off all nodes)
- `tpi -u device -n 2`
- `tpi -p on` (turns on all nodes)
Connect the USB A-A cable to the PC and verify that the Orin NX device has been detected by invoking `lsusb`. It should pop up as the `Nvidia Corp. APX` device on the list.
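For example, filtering the output (the exact bus/device numbers will differ; the ID reported later in this thread was 0955:7323):
```
# Should print a line like: Bus 003 Device 002: ID 0955:7323 NVIDIA Corp. APX
lsusb | grep -i nvidia
```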
Assuming you're still in the `Linux_for_Tegra` directory, flash the Orin NX with the NVMe drive using:
```
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
  -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" \
  --showlogs --network usb0 p3509-a02+p3767-0000 internal
```
If you want to use a USB drive (untested by me):
```
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device sda1 \
  -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" \
  --showlogs --network usb0 p3509-a02+p3767-0000 internal
```
Flashing will take a while and the flasher will exit once it's done. The Orin NX fan will not spin for the first part of the flashing process, but this is not a problem - the Orin NX will work just fine cooled passively by the heatsink during this part.
After flashing is done, it's time to move the module into Node 1. Sadly we have no way to turn it off gracefully since we have no input device to shut the operating system down, but this should not be a problem.
To finish up the configuration I used Node 1 with an additional Mini PCIe USB3 controller based on the Renesas D720201 chip - it works out of the box. I used the controller to connect a keyboard and mouse.
Turn off the Node 1 power via the web interface or via the command (`tpi -p off`). Disconnect the USB A-A cable. Move the Orin NX and NVMe to Node 1 and connect the Mini PCIe USB controller to have a way to connect a mouse and keyboard. There might be a way to configure the Orin NX without keyboard/mouse and monitor using one of the scripts in the `tools` folder (`l4t_create_default_user.sh`), but I haven't attempted that. Turn on the Node 1 power via the web interface or via the command (`tpi -p on`).
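For reference, a hedged sketch of that headless route: `l4t_create_default_user.sh` is meant to be run against the rootfs before flashing, so the first boot already has a user and skips the on-screen wizard. I haven't tested it, and the flag names are assumptions that may differ between L4T releases - check the script's help first:
```
# Untested sketch - run from Linux_for_Tegra/ BEFORE flashing.
# Flag names are assumptions; verify with:
#   sudo ./tools/l4t_create_default_user.sh --help
sudo ./tools/l4t_create_default_user.sh -u myuser -p mypassword -n orin-nx --accept-license
```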
Using the keyboard and mouse, go through the initial setup steps and wait for the setup to finish - until you get a desktop environment. At this stage we have a bare operating system that does not even contain JetPack - there are a few more required and suggested steps. These steps can be done over SSH - one of the setup steps asked for a hostname; if your PC/Mac has mDNS running, you can use that name directly to connect via SSH, otherwise you need to use the IP address.
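For example, assuming you named the device `orin-nx` during setup and mDNS works on your network:
```
# Hostname and user are whatever you chose during the initial setup
ssh myuser@orin-nx.local
```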
The Orin NX currently has 2 fan profiles - `quiet` (default) and `cool`. `Quiet` is really quiet, but if we want to put some load on the device, I'd suggest changing it to `cool`, which is still pretty quiet.
First, we need `nano` or any other editor of your choice:
```
sudo apt install -y nano
```
Then, to change the FAN profile:
```
sudo systemctl stop nvfancontrol
sudo nano /etc/nvfancontrol.conf
```
Find the line containing `FAN_DEFAULT_PROFILE` - near the bottom of the file:
```
FAN_DEFAULT_PROFILE quiet
```
And replace `quiet` with `cool`:
```
FAN_DEFAULT_PROFILE cool
```
Press F3 and F2 to save and exit, then run:
```
sudo rm /var/lib/nvfancontrol/status
sudo systemctl start nvfancontrol
```
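After the restart, `nvfancontrol` recreates its status file, which should now report the `cool` profile (path taken from the commands above):
```
# Verify the active fan profile after the service restart
cat /var/lib/nvfancontrol/status
```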
By default, the device is in the `15W` power mode - change it to `MAXN` using the setting in the top-right part of the screen.
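This can also be done from the command line with `nvpmodel`; the MAXN mode index below is an assumption (it is often 0, but check `/etc/nvpmodel.conf` on your module first):
```
sudo nvpmodel -q     # show the current power mode
sudo nvpmodel -m 0   # switch to MAXN (mode index assumed - verify in /etc/nvpmodel.conf)
```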
Now, let's update the OS:
```
sudo apt update
sudo apt -y upgrade
sudo apt -y dist-upgrade
sudo reboot
```
Flashing this way (currently the only way) does not install JetPack, so it's time to install it:
```
sudo apt -y install nvidia-jetpack
```
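A quick way to confirm JetPack actually landed (package names are the standard L4T ones; output will vary by release):
```
# The meta-package pulls in CUDA, cuDNN, TensorRT, etc.
apt show nvidia-jetpack
dpkg -l | grep nvidia-l4t
```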
TESTING
Now we can install TensorFlow (and/or PyTorch). I'm using both, but for now we'll need TensorFlow to put some load on the GPU. Install TensorFlow:
```
sudo apt -y install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt -y install python3-pip
sudo pip3 install -U pip testresources setuptools
sudo pip3 install -U numpy==1.21.1 future==0.18.2 mock==3.0.5 keras_preprocessing==1.1.2 keras_applications==1.0.8 gast==0.4.0 protobuf pybind11 cython pkgconfig packaging h5py==3.6.0
sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v51 tensorflow==2.11.0+nv23.01
```
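Before moving on, it's worth checking that this TensorFlow build can actually see the GPU (standard TensorFlow API, nothing Jetson-specific):
```
# Should list one GPU device; an empty list means the CPU-only path will be used
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```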
And we'll install `stress` - a utility to put load on the CPU:
```
sudo apt -y install stress
```
Reboot:
```
sudo reboot
```
Now we can open the Jetson Power GUI - it can be found in the upper-right corner when we click on the power profile. Additionally, open 2 terminal windows.
Save this Python code into the test.py file - this is a neural network that does nothing useful, but puts load on the GPU:
```python
import os
import time
import subprocess
from threading import Thread
import tensorflow as tf
from tensorflow.keras import optimizers, layers, models
import numpy as np


BATCH_SIZE = 4
HIDDEN_LAYERS = 2
HIDDEN_LAYER_KERNELS = 4
DATASET_SIZE = 2048
DATA_SHAPE = (256, 256, 3)

model = models.Sequential()
model.add(layers.Conv2D(HIDDEN_LAYER_KERNELS, (3, 3), activation='relu', input_shape=DATA_SHAPE, strides=(1, 1), padding="same"))
model.add(layers.MaxPooling2D((2, 2), strides=(1, 1), padding="same"))
for _ in range(HIDDEN_LAYERS):
    model.add(layers.Conv2D(HIDDEN_LAYER_KERNELS, (5, 5), activation='relu', strides=(1, 1), padding="same"))
    model.add(layers.MaxPooling2D((5, 5), strides=(1, 1), padding="same"))

model.add(layers.Conv2D(2, (DATA_SHAPE[0] // 8, DATA_SHAPE[1] // 8), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

model.summary()

X = np.ones((DATASET_SIZE, *DATA_SHAPE))
y = np.ones((DATASET_SIZE, 10))
data = tf.data.Dataset.from_tensor_slices((X, y))
data = data.batch(BATCH_SIZE)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy())

model.fit(data, epochs=1000)
```
In one terminal window run:
```
stress -c 8
```
to stress the CPU, and in the other run:
```
python3.8 test.py
```
to stress the GPU at the same time. At this stage the Orin NX might start showing over-current messages, which means we are stressing it more than it can handle. During these tests I haven't noticed anything wrong with the Turing Pi 2 board, but I haven't measured voltages yet either.
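While both loads are running, a third terminal with `tegrastats` (it ships with L4T) is a convenient way to watch utilization, temperatures and power rails; the interval is in milliseconds:
```
# Print a stats line every second; stop with Ctrl+C
sudo tegrastats --interval 1000
```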
THE END 😄
u
I foresee some Fusion 360 & CNC work in my near future to model up the heatsink and increase the pocket depth to fit the Orin better
d
If you have a CNC capable of milling aluminium, then sure 🙂 My only option is to wait for the official heatsinks 🙂
u
I do 🙂
l
Thanks for breaking ground on this. Do you have any first impressions on power draw and capacity? Do you think it will be possible to fill the TP2 with these and run them all at full clip?
d
For power draw - I have to re-check it. With a probably-not-super-accurate meter I've seen something around 30-35W of power draw from the wall with the Orin NX fully loaded on CPU and GPU in the MAXN power configuration. This could mean the TPi2 should handle 4 of them, but I can't say for sure since I cannot test it.
s
Thanks DhanOS for this write-up! I have a Jetson Orin NX 16GB module with the same Xavier heatsink I found to fit after digging through Jetson docs. Now seeing your documented work, I'll leverage some of your steps and provide any additional findings as I dip into testing over the coming weeks. I'm especially curious if 4 of these modules will run OK from a power consumption standpoint. What power profiles did you test on @DhanOS (Daniel Kukiela) ?
d
Are you going to test 4 of them? I only checked MAXN since this is what I am interested in 🙂
s
😂 I wish, I only spent on 1 so far. Once I document the setup and verify the functionality that I want will work for one, I will be considering additional Orin NX 16GB modules
d
Well, then I know for a fact that you can easily run a single one in MAXN power profile 🙂
s
That's excellent, I had been considering 3 Orin NXs otherwise with a different compute CPU focused module for async web I/O
d
I'm waiting for RK1 for CPU and RAM (8 cores, about 3x faster than CM4 iirc, 16 GB RAM)
s
Yeah same. I'd like at least one RK1 module per Turing Pi 2, Jetsons for the remaining slots.
d
Dream setup 😄
u
Confirmed the USB3 port on the TX2 devboard is fried today 😭. Orin NX module is in hand along with 3 CM4s -- just waiting for TPi2 shipping to resume!
I'm hoping the RK1 is fairly capable. I'd like to do RK1 x 3 + Orin NX x 1 on the TPi2 chassis. Then either NFS or iSCSI from the TrueNAS Scale box for k8s/cluster storage.
d
But you don't need the devboard once you get your TPi2 😄
This'd be my setup as well I think 🙂
h
Any - even rough - idea how the Orin NX and RK1 will compare in real life? The benchmark data on the former is a bit scarce still.
d
What kind of comparison do you mean? Orin NX is for ML while RK1 will be more a general purpose computing node
h
Yeah, sure thing. What I meant is the general compute capability. I.e. whether it could be sensible to get 1-2 Orin NX and request my TP2 now to have a starting point. Since the CM4s are a bit low-powered, have stock issues, etc., I didn't order my TP2 so far. Since I have no idea when this RK1 module will come to life, I was thinking of using the Orin NX as a more sensible intermediate - with future use options for the GPU plus reasonable general compute - than a bunch of random CM4s I am able to get hold of.
d
Hmm, I haven't seen any CPU benchmarks. People do not have a way to run Orin NXes, at least not yet. You can use a Xavier NX dev board or the TPi2, but this is not common enough yet, plus there's no cooling solution for the Orin NX yet
If you think of the Orin lineup, the Orin Nano will be released within a few days
I never thought of using the Orin NX for general computing, mostly because of their price
h
Oh I see, let's wait for their price tag then.
d
> with future use options for the GPU
I'm not sure what you mean here, but it's not like a standard GPU - not everything is going to work on it
More info at GTC, probably at the keynote on Tuesday
h
Yeah. I meant having the option to get into GPU-accelerated stuff later on, because of the reasonable GPU. No intention to replace my desktop 😅 I never tried GPU scheduling via Kubernetes or so, which would be one thing to play with having an Nvidia device in the TP2. That's what I thought of.
Which is certainly reasonable. I mainly came to that idea after reading your how-to and figuring I'd pay almost the same for the Orin NX as I do for 4 CM4s plus 4 adapter boards. The latter also leaves no room to extend the cluster with RK1 modules later on.
d
Yeah, so since I have the Orin NX and CM4s, I could run some CPU benchmarks, but because of time constraints it'd be nice if you could find something that runs on both. Then you'll have to wait a bit 🙂
h
d
The former is a pure Python implementation, which means its speed can vary between Python versions. But I think I can ensure the same version on both the Orin NX and CM4. The latter depends on Python modules - similar story. But yeah, I can run them
h
Yeah I know. Comparable benchmarks are a bit tight. Maybe a community project for the TP2 community. (I could offer C, C++, Go and Python based things)
t
While doing initial Orin NX testing, after installing the official heatsink, I looked at its boot manager options. If present, booting off USB and NVMe is supported. Some of the other options were interesting. iSCSI support is present, as are (what appear to be) HTTP/S and another network option. I should have a normal HDMI-attached monitor tomorrow to explore the options. It looks like booting from a disk image on the BMC's μSD over HTTP is possible. I might also explore using OpenFiler with iSCSI as a more scalable storage backend. That would open up the TPi2's M.2 sockets for something like the Innodisk EGPL-T101. It's a 10Gb M.2 NVMe Ethernet adapter.
u
@DhanOS (Daniel Kukiela) I did everything according to the guide, but something went wrong and I can't wrap my head around it https://cdn.discordapp.com/attachments/1080546031298678864/1164263689990443088/image.png?ex=6542940e&is=65301f0e&hm=113416a0fe39afc0b47a289dd421afa7290c8c1e2444d6e70d69714a8d8cdc73&
d
I've never seen this one. Are you using sudo where necessary? Did you possibly run out of disk space?
Is this Ubuntu 20.04 (this exact version)?
u
"Yes" for everything, except I need to recheck on the free space my VM has available. What's the minimum requirement btw?
If I don't figure out the issue on my own, I'll record a screencast with the whole walkthrough
d
It should not need more than 60GB of space, but it may be a bit more
d
Could you possibly try the same version that's in the docs currently? I don't know this issue, I haven't seen it before. If you successfully flash the older version, this will tell me where to search for the issue and I'll take a look and try to find a solution. If you happen to have a similar issue with the version mentioned in the docs, we'll have to search for why you have this issue.
f
I've flashed the most recent one last week. Had disk space issues so bumped it up to 100Gi. Not sure it's the same error. Can you post the whole command you are using for flashing, carina?
u
I will record the whole process once I get back to this, if I encounter any issues once again
But thanks, I'll bump the available space up to 100G beforehand
f
I think it's the command you chose from this section, https://docs.turingpi.com/docs/installing-os-orin-nxnano#flashing, that tells the most (or where there is the most chance to get something wrong)
and I suppose whether you're doing NVMe/SD/USB
> Could you possibly try the same version that's in the docs currently
I'm trying that version already btw
> gzip: /home/carina/Linux_for_Tegra/kernel/Image: not in gzip format
the first command from here returns the same error
upd: I think this isn't actually needed because the root cause is almost clear
Alright, so it feels like something's wrong with the flash script, because of course `Image` is not a gzip file, `Image.gz` is!
But I can't understand in which script the error occurs yet
`strings: command not found` is probably because of a missing package (maybe something in `sudo ./tools/l4t_flash_prerequisites.sh`?) - it's hard to say from your cropped screenshots. Could you try pasting terminal output next time so we can see the full commands you are running? 🙂
Maybe post the contents of `history` if you can 🙂
u
```
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
  -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" \
  --showlogs --network usb0 jetson-orin-nano-devkit internal
```
I managed to move further once I renamed `kernel/Image.gz` to `kernel/Image`
And btw, I'd suggest adding `binutils` and `build-essential` installation to `l4t_flash_prerequisites.sh` (still the same command)
d
Were you following the instructions step by step starting from scratch?
u
Yes
d
I will check if something has changed, but only in a few days. We have the BMC firmware release and RK1 firmware release coming and this is my focus for now
u
Okay!
I'll be just reporting on my progress here
d
Well, that's odd then, since these instructions let many people flash their Orin modules, hmmm
u
Maybe nobody just reported the same issues x)
Thinking that installing binutils and build-essential manually is obvious
d
I mean this is a set of simple commands that always work the same
u
Unfortunately not, there are several points where determinism breaks: `sudo apt -y upgrade`, `sudo apt -y dist-upgrade`, BSP and root filesystem versions will differ if you always choose the most recent version over time, plus some of the commands assume a case-insensitive filesystem
d
I know, this is why I asked you to use the exact version that I mentioned in the docs and see if this works for you
u
Okay so this
and this recommendation
Should make it better
Seems like I successfully got through the first flashing step
d
We do not and should not need to modify the scripts provided by Nvidia. They are (or at least were) working as expected before. Is this for the version I mentioned in the docs, or for the newest firmware?
u
for both of them
d
I know for a fact that this instruction worked before and nothing changed that would require some new package. I will definitely check, though. But my guess is it may be something in your OS?
This usually means the Orin module is not visible to the script (and/or not set into the flashing mode)
u
Maybe 🤔 ubuntu-20.04.5-live-server-arm64
How can I set it into the flashing mode?
d
Live won't work. You need to install it. You may find a way around it, but I guess this is the source of your troubles
u
Oh, apologies, I copypasted the filename of the ISO I used for the installation
d
Turn the module power off, set the USB mode (either via the web interface or using the `tpi` tool) to device for the node that you put the Orin module in, then turn the node power on
u
So it's Ubuntu Server 20.04.5 on arm64
d
It's in the docs too, btw 🙂
u
And I did it too
lsusb says: `Bus 003 Device 002: ID 0955:7323 NVIDIA Corp. APX`
d
Doubt ARM will work. Never tried this one. I'm pretty sure x86/x86_64 is required. But I don't know for sure
u
Then I'm in a huge trouble
d
So that looks ok
u
Because I have no non-arm devices in my house x)
d
I'd suggest you try the Nvidia forums to find a solution (and an answer to whether you can use live Ubuntu on ARM to flash the Orin module)
If you happen to do so, please let us know
You can also link your topic here if you need any assistance
u
Okay
```
carina@arm:~/Linux_for_Tegra/bootloader$ sudo bash flashcmd.txt
Welcome to Tegra Flash
version 1.0.0
Type ? or help for help and q or quit to exit
Use ! to execute system commands


 Entering RCM boot

[   0.0222 ] mb1_t234_prod_aligned_sigheader.bin.encrypt filename is from --mb1_bin
[   0.0223 ] psc_bl1_t234_prod_aligned_sigheader.bin.encrypt filename is from --psc_bl1_bin
[   0.0223 ] rcm boot with presigned binaries
[   0.0237 ] tegrarcm_v2 --new_session --chip 0x23 0 --uid --download bct_br br_bct_BR.bct --download mb1 mb1_t234_prod_aligned_sigheader.bin.encrypt --download psc_bl1 psc_bl1_t234_prod_aligned_sigheader.bin.encrypt --download bct_mb1 mb1_bct_MB1_sigheader.bct.encrypt
Error: Return value 8
Command tegrarcm_v2 --new_session --chip 0x23 0 --uid --download bct_br br_bct_BR.bct --download mb1 mb1_t234_prod_aligned_sigheader.bin.encrypt --download psc_bl1 psc_bl1_t234_prod_aligned_sigheader.bin.encrypt --download bct_mb1 mb1_bct_MB1_sigheader.bct.encrypt
```
```
carina@arm:~/Linux_for_Tegra/bootloader$ cat flashcmd.txt
./tegraflash.py --bl uefi_jetson_with_dtb_sigheader.bin.encrypt --bct br_bct_BR.bct --securedev  --bldtb tegra234-p3767-0000-p3768-0000-a0.dtb --applet rcm_2_encrypt.rcm --applet_softfuse rcm_1_encrypt.rcm --cmd "rcmboot"  --cfg secureflash.xml --chip 0x23 --mb1_bct mb1_bct_MB1_sigheader.bct.encrypt --mem_bct mem_rcm_sigheader.bct.encrypt --mb1_cold_boot_bct mb1_cold_boot_bct_MB1_sigheader.bct.encrypt --mb1_bin mb1_t234_prod_aligned_sigheader.bin.encrypt --psc_bl1_bin psc_bl1_t234_prod_aligned_sigheader.bin.encrypt --mem_bct_cold_boot mem_coldboot_sigheader.bct.encrypt  --bins "psc_fw pscfw_t234_prod_sigheader.bin.encrypt; mts_mce mce_flash_o10_cr_prod_sigheader.bin.encrypt; mb2_applet applet_t234_sigheader.bin.encrypt; mb2_bootloader mb2_t234_with_mb2_cold_boot_bct_MB2_sigheader.bin.encrypt; xusb_fw xusb_t234_prod_sigheader.bin.encrypt; dce_fw display-t234-dce_sigheader.bin.encrypt; nvdec nvdec_t234_prod_sigheader.fw.encrypt; bpmp_fw bpmp_t234-TE980M-A1_prod_sigheader.bin.encrypt; bpmp_fw_dtb tegra234-bpmp-3767-0000-a02-3509-a02_with_odm_sigheader.dtb.encrypt; sce_fw camera-rtcpu-sce_sigheader.img.encrypt; rce_fw camera-rtcpu-t234-rce_sigheader.img.encrypt; ape_fw adsp-fw_sigheader.bin.encrypt; spe_fw spe_t234_sigheader.bin.encrypt; tos tos-optee_t234_sigheader.img.encrypt; eks eks_t234_sigheader.img.encrypt; kernel boot.img; kernel_dtb tegra234-p3767-0000-p3768-0000-a0.dtb"    --secondary_gpt_backup  --bct_backup
```
f
I installed 35.4.1 within the last two weeks and nothing was missing
but used an amd64 box
I had a really bad time trying to run it emulated on an ARM (M2) too, so I just used some spare hardware I had lying around, thankfully
u
You mean an ARM virtual machine on the M2, right?
I can use an x86 VM, but it's just REALLY slow
d
People tried to use x86 emulation and I don't think it worked for anyone (mostly due to the timeouts during flashing)
u
Alright, so the only thing I can't do at this point is flash the bootloader
I assume it might be happening because I'm using a USB A to C adapter
No other clues whatsoever
I'll try to ask friends for an x86 machine with a type-A port
And I really hope flashing from ARM devices will be an option in the future...
Hi @DhanOS (Daniel Kukiela), is it possible to flash Orin NX via UI with the most recent BMC?
p
I ran into the same exact issue. Your suggestions worked perfectly. Tnx!
u
You're welcome! Have you managed to finish the flashing process successfully?