Upstream Kernel Panics (#forum)

soxrok2212 (03/28/2024, 11:46 PM)
Hi, I'm aware this isn't an officially supported build, but I created an RK1 Debian build based on https://gitlab.collabora.com/hardware-enablement/rockchip-3588 on both kernels 6.8 and 6.9-rc1. They install and run, and I'm able to run an Ubuntu 22 LXC container from an lvm-thin volume on my SSD. The bug seems to be triggered by disk writes: I was building an OpenWrt 23.05.3 image utilizing all 8 cores with 16GB RAM, and about 20 minutes in I was hit with a panic. It's reproducible, happens every time, and happened on multiple different chips, so I don't think it's hardware related. I've attached the crash log. Pinging @spooky8086 and @cfsworks for visibility https://cdn.discordapp.com/attachments/1223055516104659026/1223055516666957955/message.txt?ex=66187636&is=66060136&hm=84f378bf34b28ef383e5f19a68ceba69dfe42bacc8e7adca829ac398ca92959e&
I can provide the exact image if needed as well

cfsworks (03/29/2024, 12:13 AM)
This really looks like some kind of memory corruption in the inode cache.

soxrok2212 (03/29/2024, 12:16 AM)
i could try kernel 6.7
could it be a clock thing?

cfsworks (03/29/2024, 12:19 AM)
It happens way too consistently in the inode cache for me to suspect clocks or other memory/hardware management. A memory test is never a bad idea, but this feels like the kernel is doing some kind of memory unsafety thing.

soxrok2212 (03/29/2024, 12:19 AM)
wouldn't it be not unique to a particular device then?
e.g. if i build the same image for x86 i should see it there
i feel like someone wouldve reported it already if so

cfsworks (03/29/2024, 12:22 AM)
Hard to say. I remember hearing about an issue like this that was caused by invoking UB and the compiler for only a particular architecture was producing errant code. But you're right that this should be reported elsewhere, because it's not like AArch64 is a "niche" platform.
I'd see how consistent the
[ 3392.667905] Unable to handle kernel paging request at virtual address 0000018001ffff78
is. What's interesting about that to me is:
```
@ rasm2 -d -b64 -aarm -e 'f94002d3 b4000173 d1036273 b4000133 f9402263'
ldr x19, [x22]
cbz x19, 0x30
sub x19, x19, 0xd8
cbz x19, 0x30
ldr x3, [x19, 0x40]
@ hex(0x18000000010-0xd8+0x40)
'0x17fffffff78'
```
i.e. what's being loaded from x22 is
0x18000000010
which looks more like a flags field to me, and then pointer arithmetic is being done on that
Oh wait, messed up my arithmetic, sec
```
@ hex(0x18002000010-0xd8+0x40)
'0x18001ffff78'
```
But still, an integer value with only 4 bits sparsely set screams "bitfield" to me.
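That arithmetic can be sanity-checked in any shell with 64-bit arithmetic expansion (bash shown; the constants are the ones from the disassembly above):

```shell
# value loaded from x22, minus the sub #0xd8, plus the ldr offset #0x40
printf '0x%x\n' $(( 0x18002000010 - 0xd8 + 0x40 ))
# prints 0x18001ffff78, the faulting virtual address from the oops
```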

soxrok2212 (03/29/2024, 12:31 AM)
hmmmm

cfsworks (03/29/2024, 12:31 AM)
well that makes it hard to see what's going on with the struct management. Maybe you just got very (un)lucky (depending on whether you consider being the first to discover a memory error "lucky") with the layout randomization. Do you happen to have
CONFIG_RANDSTRUCT_*
set?

soxrok2212 (03/29/2024, 12:32 AM)
lemme look
```
$ zcat /proc/config.gz | grep CONFIG_RANDSTRUCT
CONFIG_RANDSTRUCT_NONE=y
```
yep

cfsworks (03/29/2024, 12:39 AM)
"None" would mean the randomization is disabled, so that's good for debugging (but there goes my previous guess)

soxrok2212 (03/29/2024, 12:39 AM)
oh
wouldve helped if i read it lmao
wonder if i scuffed something with my kernel config
theres the whole kernel config

cfsworks (03/29/2024, 12:42 AM)
Ah, that you're on such an "atypical" config that the bug is happening?

soxrok2212
this defconfig
added
CONFIG_DM_THIN_PROVISIONING=m
to get lvm-thin
then
```
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make olddefconfig
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make bindeb-pkg -j $(nproc)
```

cfsworks (03/29/2024, 12:49 AM)
I've never used it and I'm taking a shot in the dark, but try enabling
CONFIG_KASAN
and rebuilding the kernel. There'll likely be a performance cost to this, but with any luck it'll identify exactly where the corruption is happening.
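A sketch of what that rebuild could look like, assuming the same kernel tree and `bindeb-pkg` flow as the defconfig steps above (`scripts/config` is the helper shipped in the kernel source; this is a build fragment, not runnable outside a kernel tree):

```shell
# in the kernel source tree: enable generic KASAN, then rebuild the debs
./scripts/config --enable KASAN --enable KASAN_GENERIC
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make olddefconfig
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make bindeb-pkg -j "$(nproc)"
```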

soxrok2212 (03/29/2024, 12:51 AM)
ok, gimme an hour or so to build and install
built, installing and triggering bug

cfsworks (03/29/2024, 1:12 AM)
Here's hoping it isn't a heisenbug

soxrok2212 (03/29/2024, 1:12 AM)
ok apparently i bricked it

cfsworks (03/29/2024, 1:13 AM)
Won't boot at all?

soxrok2212 (03/29/2024, 1:13 AM)
stuck in initramfs

cfsworks (03/29/2024, 1:14 AM)
Unable to load kernel modules due to a version skew? Or is there a KASAN problem flagged as early as initramfs?

soxrok2212 (03/29/2024, 1:15 AM)
not sure, lemme reinstall
almost done
alright, building openwrt...
i lied, just starting the build now
mkay it wasnt even started and it crashed
does this help @cfsworks

cfsworks (03/29/2024, 3:53 AM)
It's going to take some studying to understand this. It doesn't look like KASAN was triggered but rather that KASAN's memory-tracking code caused a crash earlier in the execution.

soxrok2212 (03/29/2024, 3:54 AM)
🙂
thats not epic

cfsworks (03/29/2024, 3:55 AM)
I do think this crash is related to the same memory corruption, and earlier is always better since it's closer to the culprit.

soxrok2212 (03/29/2024, 3:56 AM)
anything i can provide or attempt or change?
this is a bit outside of my forte

cfsworks (03/29/2024, 3:59 AM)
I always feel like I'm flying by the seat of my pants with this kind of debugging too. I've never seen the same kind of problem twice.

soxrok2212 (03/29/2024, 3:59 AM)
i sent a message to the collabora guys too, but im doubtful they will reply

cfsworks (03/29/2024, 4:02 AM)
Here's an odd idea, but what about disabling cores 1-7 and only running core 0?
Adding
maxcpus=1
to the command line on boot ought to achieve that.
If the issue goes away entirely, then we know it's a data race. If it doesn't go away, it should get easier to debug. I'm wondering if this is corruption in "shared" caches and another core happens to trip over the corruption before the culprit thread is caught.

soxrok2212 (03/29/2024, 4:05 AM)
maxcpus=1 in uboot?

cfsworks (03/29/2024, 4:05 AM)
In the kernel
bootargs
, but yes set from U-Boot

soxrok2212 (03/29/2024, 4:05 AM)
i can limit it in lxc too

cfsworks (03/29/2024, 4:05 AM)
As long as it appears in
cat /proc/cmdline
, it should be good.

soxrok2212 (03/29/2024, 4:06 AM)
and pin it to one core
prob preferred in the kernel tho

cfsworks (03/29/2024, 4:06 AM)
The cmdline option should prevent the other cores from being powered up at all. But a good second test might be to use core affinity to achieve the same thing. 🤔
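For the affinity variant, a sketch using `taskset` from util-linux (assumed to be installed):

```shell
# run a command pinned to core 0 only
taskset -c 0 echo ok
# or repin an already-running process by PID:
#   taskset -cp 0 <pid>
```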

soxrok2212 (03/29/2024, 4:07 AM)
k hang on
KASAN makes the kernel take 5ever to start lol

cfsworks (03/29/2024, 4:12 AM)
Yeah, and it's gonna be a lot worse when on a single core.

soxrok2212 (03/29/2024, 4:13 AM)
this uboot is weird
setenv bootargs 'maxcpus=1'
?

cfsworks (03/29/2024, 4:14 AM)
That should be it, but does your /boot have a script that overrides the bootargs?

soxrok2212 (03/29/2024, 4:14 AM)
probably
cause it didnt work lol
either that or something in the bootcmd overwrites it
actually, theres the initrd, sysmap, kernel and the extboot config

cfsworks (03/29/2024, 4:15 AM)
extboot config might be interesting to look at. The others aren't part of that part of the boot chain.
But yes, also studying the bootcmd sounds good.

soxrok2212 (03/29/2024, 4:16 AM)
extlinux* sorry
bootcmd=bootflow scan -lb
bootflow is new to me
oh hang on theres a boot script
```
scriptaddr=0x00c00000
```

cfsworks (03/29/2024, 4:16 AM)
extlinux/extlinux.conf would be the next file that bootflow looks at

soxrok2212 (03/29/2024, 4:17 AM)
it just looks like a grub config
bootmenu
```
## /boot/extlinux/extlinux.conf
##
## IMPORTANT WARNING
##
## The configuration of this file is generated automatically.
## Do not edit this file manually, use: u-boot-update

default l0
menu title U-Boot menu
prompt 0
timeout 50


label l0
    menu label Debian GNU/Linux 12 (bookworm) 6.8.0-g235e32bb9813-dirty
    linux /boot/vmlinuz-6.8.0-g235e32bb9813-dirty
    initrd /boot/initrd.img-6.8.0-g235e32bb9813-dirty
    fdtdir /usr/lib/linux-image-6.8.0-g235e32bb9813-dirty/

    append root=UUID=afb0e1eb-b0ad-4b79-b9a2-8354818b3b63 rootwait

label l0r
    menu label Debian GNU/Linux 12 (bookworm) 6.8.0-g235e32bb9813-dirty (rescue target)
    linux /boot/vmlinuz-6.8.0-g235e32bb9813-dirty
    initrd /boot/initrd.img-6.8.0-g235e32bb9813-dirty
    fdtdir /usr/lib/linux-image-6.8.0-g235e32bb9813-dirty/
    append root=UUID=afb0e1eb-b0ad-4b79-b9a2-8354818b3b63 rootwait single
```
(6.8.0 on my other node)
prob the append line

cfsworks (03/29/2024, 4:19 AM)
I'd disregard the warning and add
maxcpus=1
to the
append
for now
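One possible way to script that edit, demonstrated on a scratch copy since the real file is `/boot/extlinux/extlinux.conf` and `u-boot-update` regenerates it (the UUID below is a placeholder):

```shell
conf=$(mktemp)
printf '    append root=UUID=xxxx rootwait\n' > "$conf"
# tack maxcpus=1 onto every "append" line
sed -i '/^[[:space:]]*append /s/$/ maxcpus=1/' "$conf"
cat "$conf"   # -> "    append root=UUID=xxxx rootwait maxcpus=1"
```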

soxrok2212 (03/29/2024, 4:19 AM)
yep
hang on
alright were online with one core
core 0

cfsworks (03/29/2024, 4:26 AM)
Have you also double-checked with
/proc/cpuinfo
?

soxrok2212 (03/29/2024, 4:26 AM)
```
$ cat /proc/cpuinfo
processor    : 0
BogoMIPS    : 48.00
Features    : fp asimd evtstrm crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer    : 0x41
CPU architecture: 8
CPU variant    : 0x2
CPU part    : 0xd05
CPU revision    : 0
```
yep

cfsworks (03/29/2024, 4:26 AM)
Cool. If nothing else I hope this makes the oops/panic output more legible

soxrok2212 (03/29/2024, 4:28 AM)
this may take a while, so im going to monitor the chip's uart from a tmux session
might be asleep by the time it crashes (if it does)

cfsworks (03/29/2024, 4:28 AM)
👍

soxrok2212 (03/29/2024, 4:28 AM)
appreciate the debugging help 🙂

cfsworks (03/29/2024, 4:28 AM)
I'm in a kernel debugging mood this week anyway
I'm multitasking this and also tracking down a crash in the experimental NVK open-source NVIDIA Vulkan driver at the same time

soxrok2212 (03/29/2024, 4:30 AM)
oh ive heard of that driver
my buddy was telling me about it

cfsworks (03/29/2024, 4:31 AM)
It's definitely not ready to be a "daily driver" but I'm impressed with how well it works already.

soxrok2212 (03/29/2024, 4:38 AM)
yea thats what im hearing
i heard the intel gpu drivers are actually decent too

cfsworks (03/29/2024, 4:38 AM)
Those very much are, they're my primary driver.

soxrok2212 (03/29/2024, 4:39 AM)
transcode capabilities are awesome too
other node just crashed doing a tarball extract

cfsworks (03/29/2024, 5:08 AM)
Other node meaning one with all 8 cores enabled?

soxrok2212 (03/29/2024, 5:10 AM)
yea
single core one is still churning

cfsworks (03/29/2024, 5:13 AM)
If the single core one doesn't die, I might start to suspect a cache coherency issue. The one big controversy with the RK3588 is it doesn't implement cache snooping on its interconnect (no idea if that includes caches within a single core cluster, or not). If this doesn't end in a crash, I'm wondering if there's some cache management issue that's masked by all of the platforms that do implement snooping.
(And the cache management issue is apparently unique to the ext4 code.)

soxrok2212 (03/29/2024, 5:14 AM)
sounds very suspect the way you describe it
theres often cma warnings too
unsure if theyre related
I may have just reset the wrong machine… let me re-run everything 😰
so i think it still crashed but i didnt see anything in the logs
im going to try again
a day later, i am positive it still crashes even on one core
@cfsworks 🙂

cfsworks (03/30/2024, 2:25 AM)
Definitely a kernel bug then. What's killing it, I have no idea. Seems like the first fault was in filesystem code though... the consistency of this happening in the filesystem doesn't seem like a coincidence.

soxrok2212 (03/30/2024, 2:26 AM)
Agreed
Maybe I should try 6.7

cfsworks (03/30/2024, 2:28 AM)
If you can find a version where it doesn't happen, you could do a git-bisect
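The bisect workflow, shown on a throwaway repo standing in for the kernel tree (the `[ $(cat n) -lt 7 ]` check plays the role of "does this build crash?"):

```shell
d=$(mktemp -d) && cd "$d" && git init -q
git config user.email you@example.com && git config user.name you
for i in $(seq 1 10); do echo "$i" > n; git add n; git commit -qm "commit $i"; done
# HEAD is known bad, the very first commit is known good
git bisect start HEAD "$(git rev-list HEAD | tail -n 1)" >/dev/null
git bisect run sh -c '[ "$(cat n)" -lt 7 ]' >/dev/null 2>&1
git show -s --format=%s "$(git rev-parse bisect/bad)"   # -> commit 7
```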

soxrok2212 (03/30/2024, 2:28 AM)
Problem is I can only go back so far for this board
6.6 was LTS yeah?

cfsworks (03/30/2024, 2:30 AM)
It was; you'll want to keep the .dtb from your latest build though, since the RK1 .dts landed in 6.7.

soxrok2212 (03/30/2024, 7:35 PM)
it might be related to lvm actually

cfsworks (03/30/2024, 8:11 PM)
Changing to a different fs but keeping lvm sounds like a good test

soxrok2212 (03/30/2024, 9:21 PM)
First I’m going to try in vanilla Debian w/o proxmox
so far so good on 8c with vanilla debian (no proxmox).
ok, crashed on vanilla debian, so it is indeed a kernel issue
alright ill build 6.7 now and try that
alright, openwrt compiling on 6.7. lets see if it crashes
crashed on 6.7 too
there has to be something else wrong, theres no way a basic ext4 system would be crashing for 3 minor kernel versions
@cfsworks sure it’s nothing in the device tree?
It really almost seems like a clock now

cfsworks (04/01/2024, 1:00 AM)
I haven't encountered anything on my end, but that doesn't mean it's definitely error-free. What makes it seem like a clock?

soxrok2212 (04/01/2024, 1:01 AM)
I read something on the collabora git about certain clocks being unstable
Let me see if I can find it
It wouldn’t be a missing kernel module right? Since it technically “works”

cfsworks (04/01/2024, 1:05 AM)
It doesn't make sense for it to be a missing kernel module, no.

soxrok2212 (04/01/2024, 1:09 AM)
I can’t find that clk reference
Maybe I should try building the vanilla Linux kernel?
And not from collabora
Maybe they introduced a buggy patch
Oh here’s one
That’s for rock-5b though
Ok lemme try vanilla kernel
Else, I’m out of ideas

cfsworks (04/01/2024, 1:21 AM)
Also run a test with a non-ext4 filesystem. If the error still happens with, I dunno, xfs, then we know it's not ext4-related.

soxrok2212 (04/01/2024, 3:21 AM)
ok still crashed on pure vanilla kernel 6.9-rc2 tag
guess i need to try btrfs or xfs
sounds like a tomorrow problem
i think i need to re-build u-boot with btrfs support
interesting data point
i tried to dd an image from /tmp on my nvme to mmcblk0 and it panicked
```
[ 1620.511576] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[ 1620.520153] CPU: 6 PID: 1 Comm: systemd Not tainted 6.9.0-rc2 #1
[ 1620.526868] Hardware name: Turing Machines RK1 (DT)
[ 1620.532315] Call trace:
[ 1620.535042]  dump_backtrace+0x94/0xec
[ 1620.539144]  show_stack+0x18/0x24
[ 1620.542848]  dump_stack_lvl+0x38/0x90
[ 1620.546944]  dump_stack+0x18/0x24
[ 1620.550648]  panic+0x39c/0x3d0
[ 1620.554054]  do_exit+0x834/0x92c
[ 1620.557659]  do_group_exit+0x34/0x90
[ 1620.561650]  copy_siginfo_to_user+0x0/0xc8
[ 1620.566227]  do_signal+0x118/0x1378
[ 1620.570126]  do_notify_resume+0xc8/0x140
[ 1620.574508]  el0_undef+0x84/0x98
[ 1620.578113]  el0t_64_sync_handler+0xa0/0x12c
[ 1620.582884]  el0t_64_sync+0x190/0x194
[ 1620.586974] SMP: stopping secondary CPUs
[ 1620.591452] Kernel Offset: disabled
[ 1620.595344] CPU features: 0x4,00000003,80140528,4200720b
[ 1620.601280] Memory Limit: none
[ 1620.604690] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---
```
alright, were on btrfs. lets see
@cfsworks btrfs crashed
this was on eMMC and not the nvme (ext crashes were all on nvme) so its not the drives/storage either
as proof
/dev/mmcblk0p3 on / type btrfs (rw,relatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/)

cfsworks (04/01/2024, 6:50 PM)
A btrfs crash means it's more likely to be hardware related, yeah. The only patch of mine that I can think related is already upstream, though. This is a tricky problem.

soxrok2212 (04/01/2024, 8:00 PM)
but its crashed on 3/4 of my devices
seems less likely hardware, no?
actually no, its crashed on all 4
one or two also on emmc

cfsworks (04/01/2024, 8:29 PM)
Oh oops I guess I meant "driver related"

soxrok2212 (04/01/2024, 8:40 PM)
what driver(s) are you thinking?
probably not related to pcie
not mmc

cfsworks (04/01/2024, 8:59 PM)
Could be eMMC, but I have no idea. Does this happen consistently whether you have NVMes installed or not?

soxrok2212 (04/01/2024, 9:29 PM)
I haven't tried removing nvme drives yet, but I have left 2 nodes on (running from nvme) with low IO and they've been up for 2-3 days straight now
I sent an email to collabora requesting a little assistance

spooky8086 (04/03/2024, 12:16 AM)
Sorry for delay in chiming in, ubuntu has been busy with beta launch coming up. Do we have any theory on the issue?

soxrok2212 (04/03/2024, 12:17 AM)
Nope
I’ve tested just about everything I can test
Same behavior on all my nodes so it’s not hardware
Tried with one single core enabled
Tried on ext4 and btrfs
Tried collabora kernels 6.7, 6.8 and 6.9 as well as upstream 6.9-rc2

spooky8086 (04/03/2024, 12:18 AM)
Hmm i have a 6.7 build, would you be able to see if it also crashes?

soxrok2212 (04/03/2024, 12:18 AM)
I did try on 6.7

spooky8086 (04/03/2024, 12:18 AM)
Its not Collabora tho

soxrok2212 (04/03/2024, 12:18 AM)
I’m willing to give it a shot though

spooky8086 (04/03/2024, 12:18 AM)
Its more of an Armbian + my stuff kernel

soxrok2212 (04/03/2024, 12:18 AM)
Yeah drop it here I’ll try it

spooky8086 (04/03/2024, 12:19 AM)
Oh geez its 3 months old
I keep getting pulled into the BSP kernel trap

cfsworks (04/03/2024, 12:19 AM)
What are the common elements of all of the crashes so far?
- On an RK1
- Compiling OpenWrt / heavy filesystem access, to eMMC / to any external (network/block) target
- ext4 filesystem
- LVM
- Proxmox, not bare metal
- Multiple CPU cores enabled

soxrok2212 (04/03/2024, 12:20 AM)
Compiling openwrt is just one way to crash it. I’ve been able to crash when copying a large file from an NFS share to another
As well as dd’ing a file from /tmp to mmcblk0

spooky8086 (04/03/2024, 12:20 AM)
I have about 20 different rk3588 boards btw, if i can get the exact reproducible steps i can rule out that its RK1 specific

soxrok2212 (04/03/2024, 12:20 AM)
Screw the BSP kernel lol

spooky8086 (04/03/2024, 12:21 AM)
6.1 is not that bad, but 5.10 was CURSED

cfsworks (04/03/2024, 12:21 AM)
Can we eliminate the eMMC as the culprit, by doing heavy filesystem access to the NVMe instead?

soxrok2212 (04/03/2024, 12:21 AM)
@spooky8086 try to mount an NFS share or two and copy a large file between (like 20g+)
I’ve tested on both eMMC and NVME. Same result

cfsworks (04/03/2024, 12:21 AM)
Does NFS->NFS trigger it?

soxrok2212 (04/03/2024, 12:21 AM)
Yes
Here’s my build

cfsworks (04/03/2024, 12:22 AM)
How about mounting tmpfs and repeatedly doing
dd if=/dev/urandom of=/tmp/test.bin bs=4096 count=1024
in a tight loop?
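For instance, bounded rather than truly infinite (each pass writes 4 MiB; assumes `/tmp` is the tmpfs in question):

```shell
# repeatedly rewrite a 4 MiB file from /dev/urandom
for i in $(seq 1 10); do
  dd if=/dev/urandom of=/tmp/test.bin bs=4096 count=1024 2>/dev/null
done
stat -c %s /tmp/test.bin   # -> 4194304 (4096 * 1024 bytes)
```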

spooky8086 (04/03/2024, 12:23 AM)
Hmmm i need to power on my nas, it has an NFS server with a bunch of ubuntu build artifacts i can try transferring

cfsworks (04/03/2024, 12:24 AM)
Can you trigger the crash by running
iperf3
or other non-filesystem I/O?

soxrok2212 (04/03/2024, 12:24 AM)
let me try iperf

spooky8086 (04/03/2024, 12:26 AM)
Off topic: but with Panthor being merged in 6.10 we will finally have gpu support, ill likely send a patch for HDMI, but the edid quirk may take some time to figure out a proper patch.

soxrok2212 (04/03/2024, 12:26 AM)
im stoked for this
will try this after

spooky8086 (04/03/2024, 12:28 AM)
Ive been talking to some of the GPU / VPU devs, they are insanely smart. I believe AV1 support should be coming in soon.

soxrok2212 (04/03/2024, 12:28 AM)
also stoked for that, my whole media library is in av1
what exactly do you mean "mounting tmpfs" ?

spooky8086 (04/03/2024, 12:30 AM)
Personally im waiting for 6.10 before doing any rebase on mainline. But im flashing a fresh 6.7 image now to test.

cfsworks (04/03/2024, 12:30 AM)
tmpfs is just an in-RAM filesystem.
/tmp
is typically one.
mount | grep /tmp
to check its type
(Sometimes it's not tmpfs, but a directory in
/
that gets cleared on boot)
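Two quick ways to check which case applies (`findmnt` is from util-linux; output varies by system, so none is shown):

```shell
# filesystem type backing /tmp: "tmpfs" means RAM-backed,
# ext4/btrfs/etc. mean it lives on disk
stat -f -c %T /tmp
findmnt -n -o FSTYPE --target /tmp
```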

soxrok2212 (04/03/2024, 12:32 AM)
```
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16245064k,nr_inodes=4061266,mode=755)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3254680k,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
tmpfs on /run/user/1001 type tmpfs (rw,nosuid,nodev,relatime,size=3254676k,nr_inodes=813669,mode=700,uid=1001,gid=1001)
```
i actually dont understand this one
```
$ df /tmp
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/nvme0n1p3  32937936 3895364  27348492  13% /
```

cfsworks (04/03/2024, 12:33 AM)
So /run and /dev/shm are tmpfs, but it looks like /tmp is on the NVMe

soxrok2212 (04/03/2024, 12:33 AM)
it is
it shouldn't be

cfsworks (04/03/2024, 12:34 AM)
Sometimes that's done just to give /tmp more space than just RAM can handle

soxrok2212 (04/03/2024, 12:34 AM)
i think 32g is enough lol

cfsworks (04/03/2024, 12:35 AM)
Well,
mount -t tmpfs /tmp /tmp
if you'd like to use RAM instead

soxrok2212 (04/03/2024, 12:36 AM)
iperf for 5 mins at 1gbit didnt crash it
lemme write to /tmp (in ram)
for i in $(seq 1 1024); do dd if=/dev/urandom of=/tmp/test.bin bs=4096 count=20480; done
seems to be okay so far
aha!
seems to be reading from disk
or reading in general
not writing
dd if=/dev/urandom of=/tmp/test.bin bs=4096 count=2048000 status=progress
then copy this somewhere else, e.g. to an nfs share
cp /tmp/test.bin /mnt/share/test.bin
crashed instantly
logs from this one

cfsworks (04/03/2024, 12:59 AM)
I wonder if you should test a lower patchlevel of each kernel minor you're testing
It occurs to me that there might be a bad fix commit that got cherrypicked onto each of the 6.6-8 branches
Maybe try out 6.7.1 first and go from there?

soxrok2212 (04/03/2024, 1:00 AM)
yep that is 100% it, just crashed again immediately
lemme check out 6.7.1
6.7.1 crashes
im trying @spooky8086 's 6.7 ubuntu build now...