
Update gomod and re-enable tests #2495

Merged
hallyn merged 4 commits into lxc:main from stgraber:main
Sep 24, 2025

Conversation

@stgraber
Member

No description provided.

@stgraber
Member Author

@bensmrs @luissimas looks like Linstor is out of rc and on stable now.

Sadly tests still fail and appear to fail consistently as I've had the same errors 3 tries in a row. Any idea what's going on?

@bensmrs
Contributor

bensmrs commented Sep 22, 2025

I’m throwing you under the bus, @luissimas, as I have no work time scheduled on Incus development this month, and my personal dev time is fully taken by other projects…
Best I can do is have a look in 2 weeks.

@luissimas
Contributor

I can take a look at it this week. I'm currently getting my development environment up and running again, and fighting Fedora packages in the process :').

I haven't looked into it yet, but from the information we have so far I think we either:

  1. Have a real regression in the cloning operation in these new Linstor versions. I think this is unlikely, and if it really is the case then maybe we are encountering some weird edge case that only Incus can reproduce.
  2. Had some sort of small breaking change in the API (perhaps the cloning operation status enum) that may be making us incorrectly report the status of the operation as failed.

Anyway, these are just initial guesses. Once I get my environment up and running again I'll try to reproduce the issue and really see what's going on.

I'm also a bit short on time, so I'll be working on this a little bit every day. I should have an update by the end of the week, probably earlier :).

@luissimas
Contributor

Actually, I think this might be a different error altogether. The clustered tests are green, and the failure from the standalone tests seems different from the one we were seeing with the RC version of Linstor:

Error: Could not restore volume to snapshot: Message: 'Snapshot 'incus-volume-bfbd7c29d3c74ccaabc10cb114e86742' of resource 'incus-volume-cdca20f280384bc986dce1674fe4f192' marked down for rollback.' next error: Message: '(local) Resource 'incus-volume-cdca20f280384bc986dce1674fe4f192' [DRBD] adjusted.' next error: Message: 'Deactivated resource 'incus-volume-cdca20f280384bc986dce1674fe4f192' on 'local' for rollback' next error: Message: '(local) Resource 'incus-volume-cdca20f280384bc986dce1674fe4f192' [DRBD] adjusted.' next error: Message: 'Rsc 'incus-volume-cdca20f280384bc986dce1674fe4f192' on 'local' updated' next error: Message: '(local) Failed to query symlinks of device /dev/drbd1002'; Details: 'Command 'udevadm info -q symlink /dev/drbd1002' returned with exitcode 1. 

Standard out: 


Error message: 
Unknown device "/dev/drbd1002": No such device

'; Reports: '[68D0D77B-0426A-000000]'

I'll investigate this anyway, but it seems like at least the stable Linstor version gives us a behavior somewhat closer to our expectations.

@stgraber
Member Author

Yeah, the error definitely is different from the RC, we seem to be hitting some issues with snapshot restoration now.

@bensmrs
Contributor

bensmrs commented Sep 22, 2025

Oh that’s “good” news, as that one feels easier to troubleshoot. Now we need to know why the volume is not made available.
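A few triage commands could narrow down why the device never comes back after the rollback. This is a hypothetical sketch, not something from the PR itself: it only probes for the standard DRBD/LINSTOR CLIs, since those tools may not be installed on a given host.

```shell
# Hypothetical triage sketch: check which DRBD/LINSTOR tools are present.
# With them available, `drbdadm status` and `linstor resource list` would
# show whether the resource was ever re-activated after the rollback.
missing=0
for tool in drbdadm drbdsetup linstor; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: available"
  else
    echo "$tool: not installed"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```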

@luissimas
Contributor

I fixed my setup and was able to reproduce the issue locally (and consistently).

It seems to only happen with the ZFS backend: I ran some tests with the LVM backend and couldn't reproduce the issue there. I'll dig deeper in the next few days, but there are several mentions of "zfs rollback" in the latest release notes, so I think this might be related.

root@test ~# incus launch images:debian/12 c1 --storage linstor-lvm
Launching c1
root@test ~# incus snapshot create c1
root@test ~# incus snapshot rename c1 snap0 foo
root@test ~# incus snapshot restore c1 foo
root@test ~# incus snapshot delete c1 foo
root@test ~# incus delete c1 --force
root@test ~# incus launch images:debian/12 c1 --storage linstor-zfs
Launching c1
root@test ~# incus snapshot create c1
root@test ~# incus snapshot rename c1 snap0 foo
root@test ~# linstor resource-definition list --show-props Aux/Incus/name Aux/Incus/type
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName                                  ┊ ResourceGroup ┊ Layers       ┊ State ┊ Aux/Incus/name                                                                ┊ Aux/Incus/type ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ incus-volume-4e46c5df4fa94eca897f2fbab5f19cf0 ┊ linstor-lvm   ┊ DRBD         ┊ ok    ┊ incus-volume-6cfef15ee02c8d5dc4f91c1df26a9336164dd37c40d9295d830694898b4351f6 ┊ images         ┊
┊ incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10 ┊ linstor-zfs   ┊ DRBD,STORAGE ┊ ok    ┊ incus-volume-c1                                                               ┊ containers     ┊
┊ incus-volume-c93f57597a8149ea82f366b272bb3c84 ┊ linstor-zfs   ┊ DRBD,STORAGE ┊ ok    ┊ incus-volume-6cfef15ee02c8d5dc4f91c1df26a9336164dd37c40d9295d830694898b4351f6 ┊ images         ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
root@test ~# linstor snapshot list
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName                                  ┊ SnapshotName                                  ┊ NodeNames ┊ Volumes   ┊ CreatedOn           ┊ State      ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10 ┊ incus-volume-0d735bfb790c4646a343207f923f1ff7 ┊ test      ┊ 0: 10 GiB ┊ 2025-09-24 11:55:03 ┊ Successful ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
root@test ~# linstor volume-definition list
╭─────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName                                  ┊ VolumeNr ┊ VolumeMinor ┊ Size   ┊ Gross ┊ State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ incus-volume-4e46c5df4fa94eca897f2fbab5f19cf0 ┊ 0        ┊ 1000        ┊ 10 GiB ┊       ┊ ok    ┊
┊ incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10 ┊ 0        ┊ 1002        ┊ 10 GiB ┊       ┊ ok    ┊
┊ incus-volume-c93f57597a8149ea82f366b272bb3c84 ┊ 0        ┊ 1001        ┊ 10 GiB ┊       ┊ ok    ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
root@test ~# incus snapshot restore c1 foo
Error: Could not restore volume to snapshot: Message: 'Snapshot 'incus-volume-0d735bfb790c4646a343207f923f1ff7' of resource 'incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10' marked down for rollback.' next error: Message: '(test) Resource 'incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10' [DRBD] adjusted.' next error: Message: 'Deactivated resource 'incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10' on 'test' for rollback' next error: Message: '(test) Resource 'incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10' [DRBD] adjusted.' next error: Message: 'Rsc 'incus-volume-bf8e0a1f499f47fea7bcef2f85a76f10' on 'test' updated' next error: Message: '(test) Failed to query symlinks of device /dev/drbd1002'; Details: 'Command 'udevadm info -q symlink /dev/drbd1002' returned with exitcode 1.

Standard out:


Error message:
Unknown device "/dev/drbd1002": No such device

'; Reports: '[68D3DBAC-114C8-000000]'
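The tail of the error shows LINSTOR shelling out to `udevadm info -q symlink` on a device node that no longer exists after the rollback. A minimal sketch of that failing step (the device path is taken from the log above; the guard logic is my own, not LINSTOR's):

```shell
# Re-run the step LINSTOR fails on: query udev symlinks for the DRBD device.
# When the node is absent, `udevadm info -q symlink /dev/drbd1002` exits 1
# with 'Unknown device "/dev/drbd1002": No such device', matching the log.
dev=/dev/drbd1002
if [ -b "$dev" ]; then
  udevadm info -q symlink "$dev"
else
  echo "device $dev missing (udevadm would exit 1 here)"
fi
```

If that reading is right, the rollback path deactivates the resource (as the "Deactivated resource ... for rollback" message says) but the device node is not recreated before LINSTOR queries udev.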

@stgraber
Member Author

Ah, I'll run a test to see if LVM behaves in the tests now; maybe we can switch to that until we track down the ZFS issue.

@stgraber force-pushed the main branch 2 times, most recently from c673b0e to 78752c9 on September 24, 2025 16:39
@stgraber
Member Author

No such luck with LVM: it's also failing, though with an error we've seen before around LV name length. So it's probably best to sort out the ZFS failure, which seems easier :)

For now I'm going to have this PR only handle swagger so we can get it merged before the 6.17 release.

@hallyn merged commit 1e88aa5 into lxc:main Sep 24, 2025
36 checks passed
@luissimas mentioned this pull request Sep 26, 2025

4 participants