Turing-PI v2 Orin NX (v36.2, Ubuntu 22.04, Kernel ...
b
Copy code
.-/+oossssoo+/-.               ubuntu@node4 
        `:+ssssssssssssssssss+:`           ------------ 
      -+ssssssssssssssssssyyssss+-         OS: Ubuntu 22.04.3 LTS aarch64 
    .ossssssssssssssssssdMMMNysssso.       Model: NVIDIA Jetson Orin NX
   /ssssssssssshdmmNNmmyNMMMMhssssss/      Mainboard: NVIDIA Jetson Orin NX 
  +ssssssssshmydMMMMMMMNddddyssssssss+     Bios: 2023-11-30 EDK II Ver:36.2.0-gcid-34956989 
 /sssssssshNMMMyhhyyyyhmNMMMNhssssssss/    Kernel: 5.15.122-tegra 
.ssssssssdMMMNhsssssssssshNMMMdssssssss.   Uptime: 5 mins 
+sssshhhyNMMNyssssssssssssyNMMMysssssss+   Packages: 2108 (dpkg) 
ossyNMMMNyMMhsssssssssssssshmmmhssssssso   Shell: bash 5.1.16 
ossyNMMMNyMMhsssssssssssssshmmmhssssssso   Terminal: /dev/pts/0 
+sssshhhyNMMNyssssssssssssyNMMMysssssss+   CPU: (4) @ 1.420GHz 
.ssssssssdMMMNhsssssssssshNMMMdssssssss.   Memory: 1175MiB / 15656MiB 
 /sssssssshNMMMyhhyyyyhdNMMMNhssssssss/    CPU Usage: 3% 
  +sssssssssdmydMMMMMMMMddddyssssssss+     Disk (/): 6.7G / 915G (1%) 
   /ssssssssssshdmNNNNmyNMMMMhssssss/
    .ossssssssssssssssssdMMMNysssso.                               
      -+sssssssssssssssssyyyssss+-                                 
        `:+ssssssssssssssssss+:`
            .-/+oossssoo+/-.
In December 2023, NVIDIA released Jetson Linux 36.2. This post provides information on how to install it. @DhanOS (Daniel Kukiela) Please turn this post into documentation.
Nvidia Release Page: https://developer.nvidia.com/embedded/jetson-linux-r362
This release provides an Ubuntu 22.04 LTS OS with JetPack 6. There are some differences between flashing 35.4.x and 36.2.
Requirements:
* Ubuntu 22.04 host machine (bare metal)
* A USB-A to USB-A 2.0 cable
* Connect the host to the Turing Pi on the USB_OTG port (on Turing Pi v2.4, this is the vertical USB port next to the HDMI port)
Prepare Host OS: Update System:
Copy code
sudo apt update
sudo apt upgrade -y
sudo apt dist-upgrade -y
sudo apt install -y qemu-user-static nano openssh-server openssh-client bzip2
sudo reboot
Prepare Flash:
Copy code
mkdir ~/nvidia
cd ~/nvidia
Download Nvidia BSP Drivers & RootFS:
Copy code
wget -O bsp.tbz2 -L https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v2.0/release/jetson_linux_r36.2.0_aarch64.tbz2
wget -O rootfs.tbz2 -L https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v2.0/release/tegra_linux_sample-root-filesystem_r36.2.0_aarch64.tbz2
Extract BSP & RootFS:
Copy code
tar -xvpf bsp.tbz2
sudo tar -xvpf rootfs.tbz2 -C Linux_for_Tegra/rootfs/
Patch Firmware; Disable EEPROM: Please note that the file location has changed relating to the 35.4.1 release
Copy code
sed -i 's/cvb_eeprom_read_size = <0x100>/cvb_eeprom_read_size = <0x0>/g' Linux_for_Tegra/bootloader/generic/BCT/tegra234-mb2-bct-misc-p3767-0000.dts
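To confirm the patch actually applied, you can grep the file for the new value. Below is a minimal, self-contained sketch that runs the same `sed` on a throwaway copy; against the real tree, grep `Linux_for_Tegra/bootloader/generic/BCT/tegra234-mb2-bct-misc-p3767-0000.dts` instead.

```shell
# Self-contained sketch: apply the EEPROM patch to a throwaway file and
# verify the result. The real target is the tegra234-mb2-bct dts above.
tmp="$(mktemp)"
echo 'cvb_eeprom_read_size = <0x100>;' > "$tmp"
sed -i 's/cvb_eeprom_read_size = <0x100>/cvb_eeprom_read_size = <0x0>/g' "$tmp"
patched_line="$(cat "$tmp")"
rm -f "$tmp"
echo "$patched_line"
```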
Prepare Firmware:
Copy code
cd Linux_for_Tegra
sudo ./apply_binaries.sh  
sudo ./tools/l4t_flash_prerequisites.sh
Prepare Username, Password, Hostname:
Copy code
sudo ./tools/l4t_create_default_user.sh --accept-license -u <username> -p <password> -a -n <hostname>
Example:
Copy code
sudo ./tools/l4t_create_default_user.sh --accept-license -u ubuntu -p turing -a -n node4
Disable NFSv4 on Host: !Important!: This must be done AFTER the apply_binaries and l4t_flash_prerequisites.sh scripts. Edit `/etc/default/nfs-kernel-server`:
Copy code
sudo nano /etc/default/nfs-kernel-server
Add `--no-nfs-version 4` to `RPCMOUNTDOPTS`. Result:
Copy code
RPCMOUNTDOPTS="--manage-gids --no-nfs-version 4"
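The nano edit above can also be scripted. This is a minimal sketch on a scratch copy of the file; set `cfg=/etc/default/nfs-kernel-server` (and run with sudo) to edit the real one. As noted later in this thread, the `RPCMOUNTDOPTS` change only takes effect after the host is rebooted.

```shell
# Sketch on a scratch copy; point cfg at /etc/default/nfs-kernel-server
# for the real file. The sed only fires if the option is still missing.
cfg="$(mktemp)"
echo 'RPCMOUNTDOPTS="--manage-gids"' > "$cfg"
grep -q 'no-nfs-version 4' "$cfg" || \
  sed -i 's/^RPCMOUNTDOPTS="\(.*\)"$/RPCMOUNTDOPTS="\1 --no-nfs-version 4"/' "$cfg"
nfs_opts="$(cat "$cfg")"
rm -f "$cfg"
echo "$nfs_opts"
```

Reboot the host afterwards so the NFS server picks up the new options.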
Patch `nv_enable_remote.sh`: This script must be patched; otherwise you will get a `Waiting for target to boot-up` message and the flashing of the module will never succeed.
Copy code
nano tools/kernel_flash/initrd_flash/nv_enable_remote.sh
Press CTRL + SHIFT + _ to open nano's go-to-line menu, and enter 222. The cursor will now be on an empty line between the following two lines; the `|` simulates your cursor.
Copy code
echo "${cfg_str:1}" > "${cfg}/strings/0x409/configuration"
|
        echo "${udc_dev}" > UDC
Now, on the empty line, insert the following:
Copy code
echo on > /sys/bus/usb/devices/usb2/power/control
Result:
Copy code
echo "${cfg_str:1}" > "${cfg}/strings/0x409/configuration"
        echo on > /sys/bus/usb/devices/usb2/power/control
        echo "${udc_dev}" > UDC
Save the file: press CTRL + X, then Y, then Enter.
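If you prefer not to edit by hand, the same insertion can be scripted. Below is a sketch against a scratch file containing the two relevant lines; run the same `awk` against `tools/kernel_flash/initrd_flash/nv_enable_remote.sh`. It assumes only one line in that script ends with `> UDC`.

```shell
# Sketch: insert the USB power line immediately before the UDC write.
f="$(mktemp)"
cat > "$f" <<'EOF'
echo "${cfg_str:1}" > "${cfg}/strings/0x409/configuration"
        echo "${udc_dev}" > UDC
EOF
# Print the new line before any line ending in "> UDC", then the line itself:
awk '/> UDC$/ { print "        echo on > /sys/bus/usb/devices/usb2/power/control" } { print }' \
  "$f" > "$f.new" && mv "$f.new" "$f"
patched="$(cat "$f")"
rm -f "$f"
echo "$patched"
```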
Prepare Turing PI for flashing: Login to the BMC and run the following commands:
Copy code
tpi power off
tpi usb -n {NODE} flash
tpi power -n {NODE} on
Replace `{NODE}` with the node number where your NVIDIA module is installed. Check that the device is in recovery mode for flashing:
Copy code
lsusb
You should now see a device called `NVIDIA Corp. APX`. This means that you successfully put the device into flash mode. Now flash the node. The command below assumes that you are flashing to an NVMe device. You are free to add the `--erase-all` option BEFORE the `jetson-*` text; it erases the NVMe before flashing the new version, which is very handy if you had an old version running (the example below already includes it).
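A small sketch of what to look for in the `lsusb` output. The `0955:7323 NVIDIA Corp. APX` ID is the one reported in this thread; other Jetson modules may enumerate with a different product ID. The `check_recovery` helper is a hypothetical convenience, not part of the NVIDIA tooling.

```shell
# Sketch: detect the recovery-mode device in lsusb output.
check_recovery() {
  grep -q 'NVIDIA Corp\. APX'   # reads lsusb output on stdin
}
# With real hardware, run:  lsusb | check_recovery
sample='Bus 001 Device 004: ID 0955:7323 NVIDIA Corp. APX'
if printf '%s\n' "$sample" | check_recovery; then
  status="recovery"
else
  status="not-in-recovery"
fi
echo "$status"
```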
Flash Firmware:
Copy code
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
  -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" \
  --showlogs --network usb0 --erase-all jetson-orin-nano-devkit internal
Wait and sit back until flashing is done. You will see a `Waiting for target to boot-up` message for about 10 to 15 seconds; this is normal, give the module some time to boot during the flash process.
Post Flashing Firmware: !IMPORTANT!: Do not forget these steps. We need to turn off the module and put it back into normal mode. Log in to the node with SSH:
Copy code
sudo shutdown -h now
Login into the BMC and set the module to normal, and boot it up
Copy code
tpi power -n {NODE} off
tpi usb -n {NODE} device
tpi power -n {NODE} on
It took me an entire day to figure this out, enjoy your Orin NX with Ubuntu 22.04 and latest Jetpack with all its goodies. Happy HomeLabbing 😋
d
@blackphoenixx85 Thank you for posting this. I'll give it a try soon on my Orin Nano. Is there anything that can be added to the flash process to pre-set the static IP address?
b
I could not find anything in the docs or in the options of the NVIDIA scripts. What I have done: I already had a working 35.4.1 on my Orin NX, and I set a permanent lease for its MAC address in my router. This is possible even on consumer routers; just go into your router and look around for DHCP. I have not set a static IP for anything in my home network; I work with permanent leases, so regardless of reinstalls or reflashes, devices all get the same IP. Hope this helps.
If your Nano has an IP address, your router knows its MAC address. You can get the MAC address from your Nano, if it's up and running, with the `ip addr` command; it should be in `link/ether`. Go to your router and set a fixed IP for that MAC address.
d
I'll be updating the docs in a week or so. I'll take a look if you can set a static IP. I guess there should be a way
Actually, I'm almost sure you can set a static IP
I'll make sure to add this to the docs
b
This way you never have to set a fixed IP with config again, only once when you add a new device to the network. Also, I work in IT; a suggestion: just like a professional organization, make an IP plan for your home. Which subnet is for guests, IP ranges for media devices, IP ranges for phones, servers, etc. Anyone in IT will tell you writing documentation s****, however, you will benefit from it in the long run.
d
The benefit of this solution is you know the IP and can SSH easily
I would also set a static DHCP lease for myself because this is easier for me, but I understand many people want just a static IP
b
BTW, just realized, yes you can set a static ip, just edit the right files in the rootfs directory before you flash.
d
TPi2 boards also work in the networks without a DHCP server
Many people I've talked with use the boards standalone and really need static IPs 🙂
Yes
This is what I am going to add to the docs soon
b
https://forums.developer.nvidia.com/t/set-static-ip-on-jetson-nano/107774/9 this should be the right one, direct link to the correct post
@DhanOS (Daniel Kukiela) @dethtungue hope this helps
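Building on the "edit the rootfs before you flash" idea above, one hedged approach is to drop a netplan file into `Linux_for_Tegra/rootfs` before flashing. This is a sketch only: the interface name (`eth0`), the addresses, and the file name `01-static.yaml` are example values, and I have not confirmed which interface name the Orin uses on first boot, so adjust for your network.

```shell
# Example values only: adjust interface name, addresses, and file name.
rootfs="$(mktemp -d)"             # stand-in for Linux_for_Tegra/rootfs
mkdir -p "$rootfs/etc/netplan"
netplan_file="$rootfs/etc/netplan/01-static.yaml"
cat > "$netplan_file" <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.44/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
echo "wrote $netplan_file"
```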
d
Yes, this is how you do this on Ubuntu
b
Should work for 22.04/Nvidia-Tegra-36.2
d
Yes, because this is how you set IP on Ubuntu
(and some other linux-based OS-es)
b
I work with Ubuntu since they started, but never needed to set static ip 😂
d
I've been doing a lot of different things with networks 🙂
b
@dethtungue please keep us posted in this thread if you were able to upgrade to 36.2 with this manual
d
Will do. It may be a couple weeks before I get a chance. But I will provide feedback on your very detailed guide.
d
Then there will be updated docs by then as well
a
This procedure worked for me on my Orin Nano 8GB, now on 22.04. Working on the NX now. Flashing still didn't work in slot 1 (thought I'd try), but it seems to be rolling along in slot 2.
Ok, both Nano and NX successfully flashed with procedure above. Thanks @blackphoenixx85 !
u
I repeated every step exactly as on this guide and got this:
Copy code
bucarina@katana:~/nvidia/Linux_for_Tegra$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
  -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" \
  --showlogs --network usb0 --erase-all jetson-orin-nano-devkit internal
[sudo] password for bucarina:
/home/bucarina/nvidia/Linux_for_Tegra/tools/kernel_flash/l4t_initrd_flash_internal.sh --no-flash --external-device nvme0n1p1 -c tools/kernel_flash/flash_l4t_external.xml -p -c bootloader/generic/cfg/flash_t234_qspi.xml --showlogs --network usb0 --erase-all jetson-orin-nano-devkit internal
************************************
*                                  *
*  Step 1: Generate flash packages *
*                                  *
************************************
Create folder to store images to flash
Generate image for internal storage devices
Generate images to be flashed
ADDITIONAL_DTB_OVERLAY=""  /home/bucarina/nvidia/Linux_for_Tegra/flash.sh --no-flash --sign  -c bootloader/generic/cfg/flash_t234_qspi.xml jetson-orin-nano-devkit internal

###############################################################################
# L4T BSP Information:
# R36 , REVISION: 2.0
# User release: 0.0
###############################################################################
ECID is
Board ID() version() sku() revision()
Chip SKU(00:00:00:D3) ramcode() fuselevel(fuselevel_production) board_FAB()
emc_opt_disable_fuse:(0)
Error: Unrecognized module SKU
Error: /home/bucarina/nvidia/Linux_for_Tegra/bootloader/signed/flash.idx is not found
Error: failed to relocate images to /home/bucarina/nvidia/Linux_for_Tegra/tools/kernel_flash/images
Cleaning up...
Ubuntu 22.04, bare metal x86_64 machine
Got absolutely the same error on my Ubuntu ARM VM on my Macbook Air M1 earlier
@DhanOS (Daniel Kukiela) can this indicate something is wrong with the baseboard? Or maybe my A-A cable?
b
@🦊 Carina.Akaia.io Your host needs to be x86_64; a VM might be a problem. Furthermore, I had this too; I solved it by powering the node off and on again. However, from what I have read on the NVIDIA Developer forums, this happens quite a lot when a virtual machine is involved, due to the USB not being passed correctly from host to guest. Could you try bare metal? An old machine with Ubuntu?
u
I mean, I tried this on a VM before I got an x86_64 machine on my hands. But I haven't tried this off/on thing because I thought that everything was supposed to be OK if I could list the device via `lsusb`, thanks 🤔 I'll make one last try with this in mind and get back with the outcome. P.S. Also, I wonder if buying the official carrier board for flashing might solve the issue...
b
@🦊 Carina.Akaia.io I do not know about any official carrier board; however, when I was flashing version 35.4.x I had to Power off/on multiple times, and it worked. When I switched to bare metal, I didn't need to, maybe linked to the fact that the USB readout does not work properly through a VM.
r
@blackphoenixx85 I executed each one of the steps you provided up to "Login into the node with SSH" because my board did not show up on my DHCP list. I am trying to figure this out and will post here if I find the issue, this post is just in case someone here has solved this.
b
If your board didn’t show up did you power off/on ? Did it show afterwards?
r
I power cycled it shows up in the host linux machine after lsusb as ID 0955:7323 NVIDIA Corp. APX
b
So turn it off, then you must take it out of flash mode and turn it on again.
If you leave the tpi USB mode set to flash, it will always reboot into flash mode.
(quoting the "Post Flashing Firmware" steps from the guide above)
Skip the shutdown: turn it off, turn off flash mode, and turn it on. Then it should appear in your network; if it does not, then the flashing did not go correctly or your DHCP is not giving it a number.
Hope this helps
r
I put it in device mode and did power cycled again now. I did not do the "sudo shutdown -h now" because I do not have SSH access without an IP address.
I will start again with clean files.
b
Not sure if you can cycle without turning it off beforehand. To be safe: turn it off, put it in device mode so the BMC has time to switch, then turn it on. You do not want it attached.
If the flash did go successful it should get an ip directly
Are you using a vm or bare metal for flashing?
Vm can cause flashing problems
r
bare metal with ubuntu
b
Sorry, which exact version of Ubuntu?
r
Ubuntu 23.10
I found a little typo in your instruction above:
Copy code
mkdir ~/nvidia
cd ~/.nvidia.  <===
b
Ahh, that might be the issue. NVIDIA documentation says it must be 22.04 (see first post). I tried with the latest Ubuntu too on my first try and it failed.
r
Thank you, I will try again and if not will install 22.04
b
Not a problem; success or not, let us know. I would go with 22.04, because many people on the NVIDIA Developer forums had the same issue. I installed 22.04, everything went perfectly on the first try, and then I wrote this manual.
I will fix the typo tomorrow
r
I made two clean Ubuntu installs one with Version 23.10 and one with Version 22.04.3. In both cases I needed to install "bzip2" for tar to work. I suggest that the initial "apt install" should include it.
Copy code
sudo apt install -y qemu-user-static nano openssh-server openssh-client bzip2
@User , thank you for the instructions!
@blackphoenixx85 I have followed the instructions three times, the last one with Ubuntu Linux Version 22.04.3. During the process I monitored the USB as seen by the host (bare metal). The device shows as "0955:7323 NVIDIA Corp. APX" in the beginning and changes to "0955:7035 NVIDIA Corp. Linux for Tegra" after the "Waiting for target to boot-up". The process ends up with "Flash is successful, Reboot device, Cleaning up...". After that the USB shows "0955:7323 NVIDIA Corp. APX". I believe the 7323 is telling us that the device is in recovery mode. It stays that way even after I change the USB to device.
b
OK, so this means that the flash was successful, and the device gets stuck in flash mode instead of returning to a normal boot. Can you go to the Turing Pi docs, turn it off, set it to device mode, and then check the boot through picocom as described in the docs? PS: in which node slot are you flashing? The node slot can actually matter; I did it in slot 4. Possibly the node slot is stuck in flash mode. Turn it off, put the node and McKenzie in a different slot, and boot.
What can help is to put it in device mode and give a reboot command from the BMC SSH.
More like a BMC issue than an issue with the flash.
If the device is not in slot 4, I advise switching it to node 4 and rebooting the BMC.
r
I do not have a cable to do USB-to-TTL; I guess I would need that to monitor the BMC boot. I flashed the Orin NX (or at least tried to) using slot 4. I rebooted the BMC and also did a hard reboot by power cycling it. To address the possibility that the slot is stuck in flash mode, I moved the Orin NX to slot 2 and monitored the leased DHCP addresses; the board did not show up, and the fan did not run either. @blackphoenixx85 I do not know what "McKensie" is...
I tried to flash once more. This time, while the USB was reporting "0955:7035 NVIDIA Corp. Linux for Tegra" (from after the "Waiting for target to boot-up" until the "reboot device" at the end of the script), I executed lsblk on the host machine. It showed that Linux was running, as sda, sdb, sdc, and sdd are all from the Orin NX:
Copy code
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0  63.4M  1 loop /snap/core20/1974
loop1                       7:1    0 111.9M  1 loop /snap/lxd/24322
loop2                       7:2    0  53.3M  1 loop /snap/snapd/19457
sda                         8:0    1     0B  0 disk
sdb                         8:16   1     0B  0 disk
sdc                         8:32   1     0B  0 disk
sdd                         8:48   1 476.9G  0 disk
├─sdd1                      8:49   1 475.5G  0 part
├─sdd2                      8:50   1   128M  0 part
├─sdd3                      8:51   1   768K  0 part
├─sdd4                      8:52   1  31.6M  0 part
├─sdd5                      8:53   1   128M  0 part
├─sdd6                      8:54   1   768K  0 part
├─sdd7                      8:55   1  31.6M  0 part
├─sdd8                      8:56   1    80M  0 part
├─sdd9                      8:57   1   512K  0 part
├─sdd10                     8:58   1    64M  0 part
├─sdd11                     8:59   1    80M  0 part
├─sdd12                     8:60   1   512K  0 part
├─sdd13                     8:61   1    64M  0 part
├─sdd14                     8:62   1   400M  0 part
└─sdd15                     8:63   1 479.5M  0 part
nvme0n1                   259:0    0 476.9G  0 disk
├─nvme0n1p1               259:1    0     1G  0 part /boot/efi
├─nvme0n1p2               259:2    0     2G  0 part /boot
└─nvme0n1p3               259:3    0 473.9G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   100G  0 lvm  /
I am uploading the log file that is saved at the end of the flashing just in case anyone here can help. https://cdn.discordapp.com/attachments/1189964405715243160/1197268258580091030/flash_1-1_0_20240117-192225.log?ex=65baa5f5&is=65a830f5&hm=7fbe53a6e6a21fd980ef6d491e70143655c0cce24d2e0e3e0f6f7414c5811d91&
b
That looks like the flash indeed went successfully.
Check the docs: you can use the BMC over SSH to check the boot with picocom.
Search for picocom in the docs.
Maybe something is wrong with your BMC network; try putting it in a slot of a node which you know already has network.
r
@blackphoenixx85 I figured out how to get picocom to work. I executed `picocom /dev/ttyS5 -b 115200` and left the terminal open. 1) I turned slot 4 power on using the BMC and verified the red LED on the board; nothing showed up. 2) I turned slot 4 power off, replaced the Orin NX with a CM4, and turned slot 4 power on; the log showed the boot-up of the CM4 (Raspberry Pi 4), just to be sure... 3) I turned slot 4 power off, installed the Orin NX back, and turned slot 4 power on; the log did not show anything. Because I did not change the board configuration and did not even restart picocom during steps 1-3, the Turing Pi 2 board seems OK. Right?
b
The Turing PI board seems ok
@DhanOS (Daniel Kukiela) could you give some insight?
d
There are multiple messages above that I haven't seen. What's the problem or what I should look at?
b
@User: So @rpontual seems to have been able to flash successfully; however, his board does not show up with a boot log and does not get an IP. He swapped the board in the slot with a CM4, which did show up. Any idea why an Orin does not get an IP after flashing, or any boot log? It seems that the board is stuck in flash mode even when he switches it into device mode.
d
The first thing to check would be if the NVMe drive has been moved with this Orin module and put in the right M.2 slot under the board - directly below the node where the module is inserted
Hmm, other than that it should boot if the CM4 booted
I would also try to boot it in Node 1 with a screen connected through HDMI - this might give more insight about what's going on
r
@blackphoenixx85 @DhanOS (Daniel Kukiela) I confirm that I moved the NVMe together with the Orin NX module to the slots where I installed it. I tried to obtain information from the HDMI as suggested; after moving the Orin, moving the NVMe, configuring the HDMI switch, powering the Turing board, and powering Node 1, the HDMI did not output anything that my monitor could see. Please also note that the Turing board seems to ignore several commands. For example, sometimes when I command a node to power ON (via BMC or tpi), the status changes on the UI but the node does not come on. Generally, to fix it I need to send a generic all-nodes off and back on. It is interesting that when I send a command to power a node ON via tpi, if I reload the BMC web page the respective node shows ON even when the actual node is not ON. I am using firmware 2.0.5. I have been assuming this is a bug that you are aware of, but given the issue with the Orin NX I am bringing this up now. Thank you for all the help.
@blackphoenixx85 @DhanOS (Daniel Kukiela) I filtered the log file with
Copy code
grep -E -i 'error|warning' initrdlog/flash_1-1_0_20240119-210507.log
and got the lines below. Please let me know if you see a clue to the issue. If not please run a similar filter on the log you got when flashing OrinNX for a comparison. Thank you!
Copy code
[   0.2654 ] BL: version 1.4.0.1-t234-54845784-08e631ca last_boot_error: 0
Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used.
Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can fix the GPT to use all of the space (an extra 880677552 blocks) or continue with the current setting?
Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can fix the GPT to use all of the space (an extra 880677552 blocks) or continue with the current setting?
[ 10]: l4t_flash_from_kernel: Warning: skip writing A_smm-fw partition as no image is specified
(continues...)
Second and final part
Copy code
[ 10]: l4t_flash_from_kernel: Warning: skip writing A_smm-fw partition as no image is specified
[ 12]: l4t_flash_from_kernel: Warning: skip writing A_reserved_on_boot partition as no image is specified
[ 20]: l4t_flash_from_kernel: Warning: skip writing B_smm-fw partition as no image is specified
[ 22]: l4t_flash_from_kernel: Warning: skip writing B_reserved_on_boot partition as no image is specified
[ 22]: l4t_flash_from_kernel: Warning: skip writing uefi_variables partition as no image is specified
[ 22]: l4t_flash_from_kernel: Warning: skip writing uefi_ftw partition as no image is specified
[ 22]: l4t_flash_from_kernel: Warning: skip writing reserved partition as no image is specified
[ 22]: l4t_flash_from_kernel: Warning: skip writing worm partition as no image is specified
[ 22]: l4t_flash_from_kernel: Warning: skip writing reserved_partition partition as no image is specified
[ 25]: l4t_flash_from_kernel: Warning: skip writing A_reserved_on_user partition as no image is specified
[ 26]: l4t_flash_from_kernel: Warning: skip writing B_reserved_on_user partition as no image is specified
[ 32]: l4t_flash_from_kernel: Warning: skip writing recovery_alt partition as no image is specified
[ 32]: l4t_flash_from_kernel: Warning: skip writing recovery-dtb_alt partition as no image is specified
[ 32]: l4t_flash_from_kernel: Warning: skip writing esp_alt partition as no image is specified
[ 32]: l4t_flash_from_kernel: Warning: skip writing UDA partition as no image is specified
[ 32]: l4t_flash_from_kernel: Warning: skip writing reserved partition as no image is specified
tar -xpf /mnt/external/system.img  --checkpoint=10000 --warning=no-timestamp --numeric-owner --xattrs --xattrs-include=*  -C  /tmp/ci-G3bDOQzt8U
b
To me this looks like the flash is not ok, and there is a problem with the name
r
I do not know what I am missing to address the "problem with the name". Did you set environment variables or something like that? For example, do I need to set "BOARD" to something specific?
b
No, I just installed a fresh copy of Ubuntu, installed the basic stuff like qemu, and ran the tutorial
Everything went automatic
r
I installed the server version of Ubuntu. I wonder if you have the Desktop version.
After investing three days on this, I decided to install Version 35.3 using the instructions from "docs.turingpi.com" to figure out if I had a hardware problem. The installation worked as it should. This time I installed the desktop version of Ubuntu 20.04. Anyway, thank you for your help with 36.2; I will play with 35.3 until my head stops spinning. 😉
d
I was able to follow the instructions above to upgrade my Orin Nano 8GB to Ubuntu 22.04 JetPack 6. Overall it went well. There is a step at the end to SSH into my Orin and perform a `sudo shutdown -h now`, but I'm not sure how to do that while the board is still in flash mode; the power off/on appears to do what was needed. Also, I wanted to set the static IP. The instructions listed above don't appear to work (the folder structure doesn't match), but instruction #3 on this page worked for me; I skipped the last step to disable IPv6. https://www.itzgeek.com/how-tos/linux/ubuntu-how-tos/set-a-static-ip-address-on-ubuntu-22-04.html https://cdn.discordapp.com/attachments/1189964405715243160/1198488792764530738/image.png?ex=65bf16ab&is=65aca1ab&hm=8623b23f4358f77f32996cad7186f4be37e57547f8b81d127034916b287ea498&
r
@blackphoenixx85 As you know, I wasn't able to successfully flash my Jetson Orin NX 16GB with 36.2 yet. But in the spirit of helping you end up with a flawless procedure, I believe you need to add a request for the user to reboot the host to the "Disable NFSv4 on Host" block above; according to my research, the RPCMOUNTDOPTS change will only take effect after a reboot. Thank you.
b
Thank you
r
@blackphoenixx85 I finally succeeded in installing v36.2. I am not sure why it worked now, but here are the differences from my original attempt:
- Host:
  - Ubuntu 20.04.6 LTS (desktop); previously I was using Ubuntu 22.04 server
  - I did not disable NFSv4 on the host; previously I did disable it
  - Before starting, this time I executed "export BOARD=jetson-orin-nano-devkit"; previously I did not set BOARD
- Jetson:
  - Had 35.4.1 running before flashing; previously it was "virgin" as received from NVIDIA
b
@rpontual. Happy to hear it worked, your changes are interesting
d
This is great, @blackphoenixx85 ! Thanks so much for the effort here. Just got the RK1s with 22.04, and it saddened me when I logged back onto the Jetson Orin NX with 20.04. I'll give this a go this evening. I noticed the docs don't make mention of this yet, @DhanOS (Daniel Kukiela) . Still working through that, or did you run into a snag?
This worked perfectly for me! Abridged instructions here https://gist.github.com/dudo/4093b5d14f2b003cad507e4f4ac1aa83
@ashram56 you might be interested in this
a
I am also trying to install the latest Jetson SDK, so feel free to ignore this, as I know I'm in unsupported territory 🙂 @Dudo I tried following along with your gist (with a different USB device power control, of course), but I still end up here on the flash step.
I am using an NVMe for node 2 with an Orin NX 8. Another avenue I found might be an incorrect APP size, though the docs very clearly call out that error, so it seems that's not related. I have of course tried rebooting node 2 via the BMC UI as well as depowering the entire TPI2 board.
USB A-A to the Proxmox host; the device is detected fine through lsusb.
a
What is the issue with flashing the module from slot 1?
Specifically, what are the symptoms?
I mean, if I want HDMI I have to use slot 1, but if I want to flash I need to use another slot, move the NVMe drive to the corresponding slot, then back to slot 1 when flashed?
Moved the module to slot 3, and successfully flashed it using the above procedure (running in a VM). I suspect, by the way, that it would work as well using WSL2 (I had managed to flash a Jetson AGX devkit in the past).
Now two questions though: 1) In the procedure outlined in the gist, there is an `apt upgrade` command. I tend to err on the side of caution with this; why would you actually upgrade all SW components? In the past, it had a tendency to break the filesystem when used on Jetson. 2) How do you force the module into recovery mode if I want to install another BSP? On the devkit, there are two buttons, reset and recovery, which can be used to force the module into recovery, but I don't see that on the Turing Pi 2 carrier.
Well, now docker is not installed... Even after using "apt install nvidia-jetpack" or "apt install nvidia-container-toolkit"
Found it, you need to refer to L4T documentation
a
🤦‍♂️ That's what I get for skimming. I switched out to a real shitbox (thanks dudo for the term!) and @blackphoenixx85 your instructions worked great; got the Orin NX 8 up with the latest Jetson SDK! So Proxmox (despite reporting reliably through lsusb) caused some issue with the USB write timeouts.
d
I tried something like 4 VMs with no success, originally. I have to blame it on Apple silicon. Once I got a real shitbox (little tiny x86_64 Beelink that can dual boot windows or ubuntu), everything worked on the "first" try.
To take this a step further, to modify the kernel modules (such as enabling CONFIG_NET_CLS_BPF for cilium):
Copy code
sudo apt install build-essential bc libssl-dev libncurses-dev pkg-config

wget -O drivers.tbz2 https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v2.0/release/jetson_linux_r36.2.0_aarch64.tbz2
tar -xvpf drivers.tbz2

cd Linux_for_Tegra/source
./source_sync.sh -k -t jetson_36.2

cd kernel/kernel-jammy-src
make nconfig # Enable CONFIG_NET_CLS_BPF kernel feature as module
make modules
sudo make modules_install

# I can't seem to get the -tegra suffix to take. Any ideas?
sudo cp -r /lib/modules/5.15.122/* /lib/modules/5.15.122-tegra/
sudo depmod
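A note on the `-tegra` suffix question in the comment above: modules install to `/lib/modules/$KERNELRELEASE`, and the release string is the base kernel version plus the local version suffix, so setting `LOCALVERSION=-tegra` (in the kernel config or on the make command line, similar to the `KERNELVERSION=` tip mentioned later in this thread) should avoid the manual `cp`. This is a sketch of the string logic only, not a verified build recipe.

```shell
# String-logic sketch: the module directory name is the base version plus
# the local version suffix; -tegra mirrors the stock 5.15.122-tegra name.
base="5.15.122"        # VERSION.PATCHLEVEL.SUBLEVEL from the kernel Makefile
localversion="-tegra"  # e.g. make modules modules_install LOCALVERSION=-tegra
release="${base}${localversion}"
echo "modules install to /lib/modules/${release}"
```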
d
I will update the docs, just hit a queue of things to do. Should be updated soon, though
If you flash using USB A-A cable, the flashing might fail due to the USB signal integrity issues in Node 1 (sometimes also happens in Node 2, but it's rare). You can try flashing in Node 4
This would help you, yes
With BMC firmware v1.x, set the USB mode to `device`; with BMC firmware v2.x, set it to `flashing` mode. This works the same as the jumper on the Nvidia carrier boards.
a
OK, after spelunking across the internet (and running across @Dudo's posts from last year too!) I think I need to recompile the kernel or go with Cilium. Continuing my investigation from here: https://discord.com/channels/754950670175436841/1212910456688349264/1213367097480974386
Fact 1) It seems that the 36.2 kernel is missing vxlan (and maybe more I haven't discovered yet), which is required by the default k3s install for flannel. @Dudo are you saying the only missing component for Cilium is the above kernel module, or was that an addition after you had already compiled a custom kernel with other features?
Trying a simple vxlan module build results in "operation not permitted" plus unknown symbols in dmesg, which suggests either secure boot or a kernel+modules mismatch (maybe nconfig activated other =y configs). (BTW, maybe you figured it out by now @Dudo, but the suffix comes from specifying "make modules KERNELVERSION=" <- this removes the need for the cp at the end.)
I'm doing all this on the Orin NX (on NVMe, so plenty of storage) to avoid the cross-arch compilation, so for now I am ruling out a bad module compilation and assuming I need to compile a full custom kernel to enable vxlan (and hence enable flannel + k3s). Did my reasoning get to crazytown? Did anyone else successfully get a working k3s agent? The flash + node join went fine, but the agent becomes unreachable, hence why I'm asking how far people got.
d
Yeah, cilium only needed the 1 module.
After some more setup, k8s needs quite a few more kernel flags to operate…
Apparently we’re going to be able to modify the Image/modules pre-flash with 36.3.
s
Has anyone solved the infinite "waiting for target to boot-up" then timeout? I've followed all the suggestions above and modified nv_enable_remote.sh, but still get a timeout on "waiting for target to boot-up".
t
I've seen this myself. You'll need a UART/TTL-to-USB cable to help diagnose. Is the NVIDIA installer waiting for the module to boot? On a regular NVIDIA Jetson carrier board, the forced recovery jumper is removed once installation has begun. This allows the module to boot so that the installer can complete. I'm not certain whether the TPiv2 does this step automatically or whether it would require manually changing the TPiv2's switchable USB port from flash to device mode. I can try it. Haven't had much success with my Orin NX.
s
I worked with it for a few more hours and now get a timeout after "sending blob". It seems like the USB connection times out in step 2. I do have it installed in node 2, so I may try again in node 4 to rule out the Orin node 1/2 flashing issues. I am also using VMware to emulate Ubuntu, so it may be an issue with it not being bare metal.
I did the ole module slot shuffle and eventually got it to work
Figured this out when I started using the most recent Linux for Tegra build and JetPack push; I had to go off-script a bit from the OP instructions, but got it up and running after trial and error.
i
I am also attempting this, and due to some challenges I am wondering: can I flash the NVMe/Jetson on an official NVIDIA devkit carrier board and then swap them into the Turing Pi after?
m
yup, this is what I had to do ^
t
Same here. I'm hoping things improve with the TPiv2.5.x.
k
I just finished flashing my Orin Nano with the documentation from Garret; however, I ran into an issue: after flashing the board on node 3 and switching the node back to device, then swapping my module back to node 1 for the HDMI output, the chip gets stuck on "finished Record Runlevel Change in UTMP." Did anyone run into this issue? Edit: I flashed it again in node 3 instead of node 4 and it worked fine now.