AGX Xavier dev kit: faulty USB-C, cannot flash

Good afternoon. I have an AGX Xavier dev kit and the USB-C port seems to deactivate itself during flashing, so I cannot flash or use the device with either flash.sh or SDK Manager (attempting to install JetPack 5.1). As a workaround I have attempted to flash an NVMe on another identical device, where it boots successfully, but when inserted into this one I get a kernel panic error. Any advice would be appreciated.

Hello,

Thanks for visiting the NVIDIA Developer Forums.
To ensure better visibility and support, I’ve moved your post to the Jetson category where it’s more appropriate.

Cheers,
Tom

Hi,

We recommend putting the AGX Xavier Developer Kit into Recovery Mode first.
Please refer to the following documentation to verify whether the device is in Recovery Mode:
Quick Start — Jetson Linux Developer Guide

Thank you.

Please… Obviously I did this.

The device is in recovery mode, otherwise the flashing would not initiate. It initiates, then aborts because the USB-C port turns itself off.

During a normal flash the USB will disconnect and reconnect, which is not an error. The problem is usually the host not picking up the reconnect. If any kind of VM is involved, then this is almost always the cause. Something to note is that USB itself is “hot plug”, and if you run into a disconnect and it isn’t on a VM, then try to unplug and re-plug the USB to see if it picks up again.
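A quick way to see this from the host side (just a sketch; 0955 is NVIDIA’s USB vendor ID, and the exact product ID depends on the module) is to check whether the recovery-mode device is currently enumerated and to watch for disconnect/reconnect events while flashing:

  • lsusb -d 0955:
    (the recovery-mode Jetson should show up here before you start the flash)
  • dmesg --follow
    (run in a second terminal; USB disconnect and reconnect events will appear here during the flash)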

Also, when flash actually completes the Jetson will automatically reboot for the stage where it wants to install optional software (which is over USB to the account added during first boot setup).

Thank you for your message. I assure you this is not it.

I am not running from a VM, and the same pipeline successfully flashed another identical unit.

The flashing gets to 6% and then it aborts with an error. I include images.

(attachments)



Can someone advise what the issue really is here? Is it the eMMC, the firmware, or the USB-C connection?

eMMC is very rarely at fault. Usually something is going wrong with USB such that this is neither a hardware nor a software failure so much as a signal quality issue, which is very hard to pin down. Sometimes there is a mismatch between the configuration of what is being flashed and what the software thinks it is flashing. In this case though, look closely at the screenshot, specifically this excerpt:

target: Depends on failed component

The error starts with:
...Error: None of the bootloaders are running on device...

In recovery mode the Jetson is a custom USB device understood only by the custom USB driver. That driver is literally called the “driver package” if you manually download the flash software. That package is what creates the “Linux_for_Tegra/” content on the host PC when unpacked (the other major part is the sample root filesystem, which fills in “Linux_for_Tegra/rootfs/”). Nothing after that first error really has a chance to complete, because the flash itself did not complete and so it cannot install components.
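For reference, a rough sketch of that manual setup (the archive names below are placeholders; the real names carry the exact L4T release number and come from the L4T download page):

  • tar xf Jetson_Linux_R35.x.x_aarch64.tbz2
    (unpacking the driver package creates “Linux_for_Tegra/”)
  • cd Linux_for_Tegra/rootfs && sudo tar xpf ../../Tegra_Linux_Sample-Root-Filesystem_R35.x.x_aarch64.tbz2
    (the sample root filesystem fills in “Linux_for_Tegra/rootfs/”)
  • cd .. && sudo ./apply_binaries.sh
    (adds the NVIDIA drivers to the rootfs before any flash)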

We don’t have much detail as is. However, in the last screenshot, where the big red “INSTALLATION FAILED” is, note that the lower right corner has a way to “EXPORT LOGS”. That could add those details. Without it, everything else is guessing.

In terms of just guessing, USB is a serial protocol without a separate clock signal (the simplest is a D+ and D- wire). That signal has to have fairly tight timings and is the primary reason for restrictions on USB cable length (USB cable length restrictions have very little to do with signal strength loss at distance). Anything which messes up that timing can cause failure even though the hardware itself and the software running on that hardware are otherwise perfect. Something as simple as a different cable, different port, adding/removing a HUB or changing HUB, can change timings and make something previously failing work, or previously working fail. This could cause reading board information to fail, although odds are that it is something else. If USB corrupts data while reading this though, it might be that all hardware is valid and there is no real fault other than little timing issues breaking it.

Note: It is easier to get correct timings on slower data, e.g., a mouse or keyboard. USB2 tends to be reliable, but things get more complicated at its much faster data rate. USB3 can get insanely tight on timing requirements and is more likely to revert back to USB2 speeds.
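One hedged way to see what speed the host actually negotiated is “lsusb -t”, which prints the USB tree along with each device’s link speed (480M for USB2, 5000M or more for USB3):

  • lsusb -t
    (check the speed listed for the port the Jetson or the HUB is on)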

Attach the zip file from the “EXPORT LOG” button, and more might be visible.

SDKM_logs_JetPack_5.1.5_Linux_for_Jetson_AGX_Xavier_2025-10-24_20-31-37.zip (2.8 MB)

Thank you very much for this detailed and illuminating reply. I attach the logs from the most recent failed attempt. In the meantime I am trying all of the ports and USB-C cables that I can find… I really hope that it is something this simple.

Playing around with different cables and passing it through a powered USB hub made it go as far as 20% flashed (up from 6%), so I am beginning to believe in the signal quality theory. But I don’t have infinite cables to try - is there a way to maximise the synchronisation somehow, if this is the issue?

Incidentally, when I insert a correctly flashed NVMe from another identical and functioning device into the faulty one, it does show the NVIDIA splash screen and gets some distance into booting before reporting a kernel panic. Since I can read the NVMe on another machine, I could make changes to it if this is feasible.

I’m evidently trying to bypass the eMMC flash since nothing gets it to finish successfully.

To start with, which release are you installing? I see logs indicating both JetPack 4.6.6 and 5.1.5. I’m assuming 5.1.5, which is valid (the 4.x release would technically work, but it had a lot more bugs for Xavier and is further out of date).

For reference, JetPack/SDK Manager is just the GUI front end to flashing. What actually gets flashed is L4T (which in turn is what Ubuntu gets called after NVIDIA drivers are added). Normally a given JetPack release is associated with a particular L4T release, but JetPack has the ability to show earlier releases. Here is the URL listing L4T releases versus what is valid on given hardware, assuming it is a developer kit and not a third party carrier board (the L4T page also lists the JetPack release):
https://developer.nvidia.com/linux-tegra

This is the list of JetPack releases, which also leads to L4T releases:
https://developer.nvidia.com/embedded/jetpack-archive

L4T R35.6.2 is the recommended most recent release for Xavier, and ties to JetPack 5.1.5. The L4T release URL helps because it has documentation with it and other downloads, e.g., for manual flash software install.

Some JetsonHacks.com content:
https://jetsonhacks.com/?s=jetson+agx+xavier

This URL specifically shows the developer kit version of the Jetson AGX Xavier:
https://jetsonhacks.com/2018/09/28/nvidia-jetson-agx-xavier-developer-kit/


I’ll go back to USB in a moment…

Looking at the actual flash and screenshot information, the screenshot seems to be failing with audio drivers (there was a kernel OOPS). Assuming this is the right release for your hardware, I then have to ask: is this an actual developer kit (see the JetsonHacks URL above)? A developer kit is from NVIDIA, but if you purchased an AGX Xavier from a third party which produces its own carrier boards, then the firmware will differ. Incorrect firmware can lead to this type of error, so first verify whether there is any kind of third party hardware (a pure developer kit won’t have that issue when used with NVIDIA’s flash software; third party hardware changes the flash).

Next, one normally flashes differently for using an NVMe versus the internal eMMC (Jetsons don’t have an actual hardware BIOS; that role is filled by software which is flashed along with the OS, and this software goes into partitions of the eMMC even when using NVMe). Even though you want to use the NVMe, it is useful to flash just the eMMC for testing purposes. This would validate the Jetson itself (or show a failure). One normally uses an initrd flash for external devices.

Can you try flashing again without the NVMe, using just the eMMC? You said you used flash.sh before, but I’m assuming the NVMe was attached, and ordinary flash.sh won’t handle the NVMe case correctly. Flashing on the command line simplifies things for eMMC (the flash logs and results will lead to much better USB information; if there is a disconnect during flash, try to unplug and replug the cable). Typically, from the “Linux_for_Tegra/” directory, and with the Jetson connected in recovery mode, this would be a command line flash with a log created:

  • sudo ./flash.sh jetson-agx-xavier-devkit mmcblk0p1 2>&1 | tee log_flash.txt
    (you could then attach log_flash.txt to the forum)

Note that I’m suggesting this as a way to test without the NVMe. Some of the failures can result from NVMe issues too, not just from USB. If this flashes and works on the command line, then we can go specifically to the required initrd flash of the NVMe. An example of perfectly good hardware failing with NVMe when it should work: USB power to a Jetson has a limit, and if the NVMe draws too much power during a data write, it can cause USB to shut down as it removes power to avoid going over its maximum limit (which reminds me, is anything here powered by USB, or is it using the barrel connector? The barrel connector won’t have those limits).


So far as USB goes, we still have not verified that USB is the issue (certainly this is the case quite often when flash does not detect the Jetson or loses it during heavy data transfer; you’ve already seen how changing cables and ports changes how far flash completes…hopefully the non-NVMe flash log above gives us more information).

You got to a point where the audio failed (presumably during boot). This was when the Linux kernel was running, so it got past the boot stages to reach that point. At least part of flash had to have succeeded. What can you tell me about the flash which got the Jetson to the point that it booted to the audio failure? Is this with the NVMe?

What I saw in the logs does indicate failure for USB to find a recovery mode Jetson (USB was there, but then disconnected).

Is there anything unusual about your host PC itself? Is it just an ordinary tower type PC?


This URL should have valid documentation of flash for L4T R35.6.2 despite being R35.4.1:
https://docs.nvidia.com/jetson/archives/r35.4.1/DeveloperGuide/text/SD/FlashingSupport.html
(then search for l4t_initrd_flash.sh; from here forward I do not have an NVMe to test with so I’m just using what I know and might be wrong)


Once you have that, you can move on to initrd flash via command line with the NVMe. I’ll ask @DavidDDD to suggest a correct initrd flash command since I don’t have an NVMe to test with. Any comment on the use of l4t_initrd_flash.sh for the Xavier AGX with NVMe and L4T R35.x would be appreciated.
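As a placeholder until someone confirms it, here is a sketch of the general external-device pattern from the R35 documentation; treat every flag as an assumption to be verified against the docs above (in particular whether the “-p” internal config and the final “internal” target are right for this setup):

  • sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
      -c tools/kernel_flash/flash_l4t_external.xml \
      -p "-c bootloader/t186ref/cfg/flash_t194_sdmmc.xml" \
      --showlogs --network usb0 jetson-agx-xavier-devkit internal 2>&1 | tee log_initrd_flash.txt
    (run from “Linux_for_Tegra/” with the Jetson in recovery mode)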

Hello linuxdev,

Let me begin by saying that I really appreciate the time and effort that you have made to help me. I will address each point one by one.

  1. I downloaded JetPack 4.6.6 at some point to see if the device could be flashed with it, but it could not. I am only attempting to flash 5.1.5 now.

  2. The device was purchased on eBay and it looks like a genuine NVIDIA device to me. I attach a photo of the undercarriage where the NVIDIA logo is visible, as well as the serial numbers. It looks very much like the other unit I have.

  3. The device is powered by the barrel connector using what appears to be a genuine adaptor.

  4. I removed the NVMe (a Samsung 980 Pro, same as in my other device) and attempted to flash just to the eMMC as suggested. I noticed that the USB disconnects itself when flashing begins and does not automatically reconnect as it should, which is what causes the process to fail. However, if I put the device into recovery mode again quickly, it reconnects and flashing apparently continues. It reaches 99%, with each new percentage point coming slower and slower, and never actually gets to 100% even after a few hours. I get several ‘installation of ‘flash jetson linux’ is taking longer than expected’ prompts. Eventually I give up. This behaviour is encountered both when flashing to the eMMC and to an NVMe.

  5. I attach the log file generated by “sudo ./flash.sh jetson-agx-xavier-devkit mmcblk0p1 2>&1 | tee log_flash.txt”, using the script from the repository for L4T R35.6.2. Really all it says is that it cannot communicate with the device via USB.

  6. The boot sequence you saw in the image was with an NVMe flashed on another AGX Xavier Dev Kit and then inserted into the ‘faulty’ one. I was hoping it could bypass the flashing somehow, but evidently it cannot, because of the kernel panic which you say is caused by a failed audio driver.

  7. When booted without an NVMe, I actually reach a login screen with what appears to be the previous owner’s username. This leads me to believe that my 99% ‘flash’ did nothing. Is it possible that they could have ‘bricked’ the eMMC somehow?

  8. The host PC is a Linux laptop running Ubuntu 24.04.3 LTS with the /etc/os-release file tweaked to pretend to be 20.04. I’m not proud of doing that, but this worked on every other AGX Xavier Dev Kit that I have flashed and I really, really don’t think this is the issue.

  9. Regarding running l4t_initrd_flash.sh, I understand that I need to have the NVMe connected to the host. I must purchase an adaptor first.

    log_flash.txt (12.6 KB)

I can’t say for certain, but it does look like a genuine AGX. I won’t guarantee it, but it is likely valid. On the other hand, I don’t see the plastic base. Maybe you’ve removed it? Here is a picture; look closely at the plastic at the bottom:

I’d say it would be about 1.5 cm or so tall. Is that present? If not, then it is missing something, or it is a module on a third party carrier board.

The barrel connector won’t have any problem with power; it does not have the limitations of USB-C power. I suppose the power adapter itself could have a limit, but that is very unlikely to be the issue.

You are correct that it is not detecting the internal bootloader for flash. Normally there is a disconnect and reconnect, but if the internal software/hardware is not running, then this too can mean the reconnect isn’t possible. If you can repeat this command line flash, but this time monitor serial console on the AGX Xavier itself, then we can see what the Jetson itself is saying. If you see the 40-pin header, the USB connector to the left of that connector is for serial console. Here are some instructions if you’ve not set this up before:
https://forums.developer.nvidia.com/t/setting-up-a-serial-console-for-debug/176716/2

The serial console hardware inside of the Jetson will survive almost all failures and likely this will tell us some more definitive answer. Serial console applications like gtkterm (or others, e.g., minicom) can log the entire session. Basically you just start the serial console program after the Jetson is in recovery mode, then tell it to start logging the session. You would then try that same command line flash, and even if the flash fails, we should see some output on the serial console.
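As a sketch, assuming the debug UART enumerates as /dev/ttyUSB0 on the host (the actual device node may differ), minicom can log the whole session to a file:

  • sudo minicom -D /dev/ttyUSB0 -b 115200 -C serial_session.log
    (115200 8N1, no flow control, is the usual setting for the Jetson debug console)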

You might also want to experiment to see if anything shows up on serial console when trying to boot without recovery mode. Pretty much anything we see on that console will be useful, especially seeing what comes out as a result of being in recovery mode and flashing. Keep going without the NVMe for this.

I did not see the reconnect where it continues after unplug/replug of USB. Was the log without that, or was the log with the unplug/replug? It would be useful to have another flash log with the USB unplug/replug, and the serial console log from that same flash session. If flash continues and proceeds somewhat after an unplug/replug while stalled and not seeing USB, then something inside the Jetson is at least trying. I wouldn’t let it sit more than an hour. With the eMMC only and no NVMe I’m just going to guess that you’d see progress and completion sooner than that (the NVMe implies a bigger partition and more data transfer plus time for that transfer). I really want to see the combination of flash log and serial console log from the same session, especially if an unplug/replug helped when it said USB was not responding (USB is hot plug, and although you don’t want to pull it when transferring anything, any complaint that the host can’t find the USB is game for an unplug/replug to trigger the hot plug event).

I don’t know if the audio failure was just a sign of something else, or directly a problem. It seems promising that it got to the point of having a loaded Linux kernel. The audio could have been for a lot of different reasons. Mostly it says the boot stages completed and probably an initrd loaded or the kernel directly loaded. It’s one of the reasons for not giving up yet and suspecting software rather than hardware.

Being able to reach the previous owner’s login is very promising. If security fuses were burned, then you’d need his private key to flash, but so far I don’t see that issue. What it does do is guarantee that a large part of the hardware is functional. It also means the previous owner did not flash to use the NVMe, or at least a non-NVMe flash existed and perhaps it reverted to eMMC when no NVMe was found (it couldn’t be found since you physically removed it).

Jetsons are very very hard to “brick”. Bricking means it can’t be recovered without unsoldering components and working with those components in a programmer. Since a Jetson has no BIOS and is merely a custom USB device when in recovery mode, it is incredibly difficult to brick one. In some rare cases with i2c commands it might be possible, but that’s really really difficult to cause. Sometimes someone will burn the security fuses though to stop flash, and this would indeed be the equivalent of “bricked” if you don’t have the key. However, loss of USB during flash is not what you would see; there would instead be a full flash log and statements in the log that keys are required. What many people do not realize is that all of the eMMC models of Jetsons have signed partitions for every partition except the rootfs, and that no such partition will be accepted for boot if the key is wrong…but the default is a NULL signature. When a NULL signature is used you can see a note about missing a key, but that is how it is supposed to work if nobody burned the security fuses (a NULL key is just a missing key). I see no indication yet that security fuses are burned, nor do we yet see any indication of being bricked.

The 20.04 tweak could in theory be an issue, but in reality it rarely is a problem. If it flashes with the correct content it should work. This shouldn’t change the USB at all. It would still be interesting to see if a 20.04 actual host PC changes this. I’m going to guess that you don’t have enough disk space to add a 20.04 boot, but if you could add another hard drive for booting 20.04 and leave all your other content alone it would be a useful test. Fake 20.04 might be a problem, but I doubt such a problem would show up in the USB. Also, command line flash won’t have that problem, it is the GUI front end to the flash software which tends to have issues like that (which is another reason we are flashing with command line flash.sh).

Someone else can comment on l4t_initrd_flash.sh when we get there, but you don’t need this for eMMC flash (that’s the other reason we are doing this without the NVMe).

I am so far through the looking glass here. I attach three log files from the serial output with gtkterm.

Log1 was from the attempt at running flash.sh to eMMC via the command line. Once the connection dropped and I did the fast reconnect trick, nothing ever showed up again. Interestingly, SDK Manager exhibited the very same behaviour, and the ‘flashing to 99%’ that it continues to do is clearly nonsense. This explains why it never had any effect, and the trick is worthless.

log1.txt (1.4 KB)

Log2 was from the eMMC boot, namely the previous owner’s install.

log2.txt (43.8 KB)

Log3 was from an NVMe boot (from an NVMe flashed on another AGX dev kit).

log3.txt (102.3 KB)

You are right that the plastic stand is missing; it came like that from eBay. But everything else looks legit on the outside. I’ve grown quite attached to this crippled device and I would love to resurrect it.

I forgot to add, there is no further flash console log beyond the ‘disconnect - reconnect’ moment, which reflects what is (not) visible in the serial log. There’s simply no further activity.

(If I can test this on a genuine 20.04 build I will).

The first log is interesting. It shows that the 3p server is working (which is basically what is in control of recovery mode; the unit did reach recovery mode). This particular serial console is on a different USB connector from the actual flash connector, so it can’t tell us whether the flash connector is connected or not. It does show the internals are working and ready for flash.

The fact that you know it is in the 3p server, but it doesn’t show any sign of a command being issued and responded to, makes me think this might be an actual USB issue (though not necessarily the cable or host; it isn’t possible to know). If you monitor this as the unit is put into recovery mode, then the 3p server line should appear. At that point, if you watch “dmesg --follow” on the host PC, does the host PC or the serial console of the recovery mode Jetson react to plugging the Jetson into the host PC?
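Alongside “dmesg --follow”, another way to watch for that reaction on the host is to log udev events while plugging the flash port in and out:

  • udevadm monitor --udev --subsystem-match=usb
    (each plug or unplug of the recovery-mode Jetson should print add/remove events here)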

The other logs, with and without the NVMe, do show something had been flashed. They also show a lot of i2c errors, but i2c errors might not really be errors. There is perhaps i2c probing going on for optional hardware which isn’t there (the message would not distinguish between an actual failure and simply not detecting hardware which isn’t present). However, if the device tree is set up for a different carrier board, or modified in some way, then it may be because there was a different carrier board and this carrier board was added later (which would have the incorrect device tree for i2c).

I’m convinced that you have to get this working without the NVMe first. If you can get a reaction from the 3p server on serial console for a USB plug-in, or a reaction on the host PC (or better yet a reaction from both), then there is still likely something which can be done to flash this unit. The serial console talks directly to the 3p server, so monitor this to see the startup of recovery mode, and then experiment with plugging in to different host PC ports and flash attempts (any sign of reaction is hopeful). Can you get the 3p server to respond to anything on the flash USB port?

Thank you very much for this. I will do as you suggest at the earliest opportunity, I may have to be away from this for a few days to deal with a personal matter. Please do not close the thread.

If this really is an issue with the j512 USB port, what can be done to fix it? Is this something an electronics shop would know how to do?

If it is just a broken solder joint, then someone with surface mount rework equipment might be able to solve it. Basically some no-clean solder flux (or something to clean up with, if using regular flux) and a hot air gun. It isn’t something you could work on with just a soldering iron; the traces are too small.

It could be something other than this though, in which case the module is probably still good, but the carrier board is bad. One can buy carrier boards for Jetsons from third party vendors, but I don’t know what the cost would be. If you decide to go that route, then you’d remove the module first and inspect the pins; if pins look good, then it is likely just a case of getting a compatible carrier board. You might also want to test the NVMe on another computer, but try to avoid using the NVMe until the eMMC flash works.

Ok, thank you, this is helpful. Looking on eBay, they appear to be on sale for well over 1000 GBP, which doesn’t seem worth it considering you can get a complete unit for half that.