QA Run #69920
openaclamk-testing-phoebe-2025-02-12-0909
Description
These PRs were included:
https://github.com/ceph/ceph/pull/60791 - blk/kernel: Make bdev stop immediately
https://github.com/ceph/ceph/pull/61455 - blk/KernelDevice: Introduce a cap on the number of pending discards
https://github.com/ceph/ceph/pull/61646 - qa/rados: Augmented bluestore testing
https://github.com/ceph/ceph/pull/61679 - os/bluestore: Fix default base size for histogram
https://github.com/ceph/ceph/pull/61693 - include: interval_set: Re-introduce the original behaviour strict interval set
Updated by Adam Kupczyk about 1 year ago
[8134338]
[8134380]
[8134442]
[8134456]
rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{hybrid_btree2} base compr$/{yes$/{lz4}} mem$/{normal-1} write$/{write_v2}}} rados supported-random-distro$/{centos_latest}}
hybrid_btree2 produces smaller granularity than the required alloc_size
NEW TRACKER https://tracker.ceph.com/issues/70143
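The complaint boils down to an alignment check: every extent the allocator hands back should be a multiple of the configured alloc_size. A minimal illustration of that invariant (not Ceph code; `check_granularity` and the sample extents are made up):

```python
# Hypothetical illustration (not Ceph code): flag extents that are finer
# than the allocator's configured alloc_size. The tracker above reports
# hybrid_btree2 returning such sub-alloc_size extents.
def check_granularity(extents, alloc_size):
    """Return (offset, length) pairs not aligned to alloc_size."""
    return [(off, length) for off, length in extents
            if off % alloc_size or length % alloc_size]

# With alloc_size=4096, a 2048-byte extent violates the granularity:
print(check_granularity([(0, 4096), (4096, 2048)], 4096))  # [(4096, 2048)]
```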
[8134348]
rados/verify/{centos_latest ceph clusters/{fixed-2 fixed-4 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{hybrid} base compr$/{no$/{no}} mem$/{normal-1} write$/{write_v2}}} rados read-affinity/balance tasks/mon_recovery validater/lockdep}
Failed to deploy.
[8134351]
rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e}
Command failed on smithi002 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2'
[8134388]
rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/squid backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/radosbench}
"2025-02-17T10:40:00.000193+0000 mon.a (mon.0) 1209 : cluster [WRN] [WRN] PG_BACKFILL_FULL: Low space hindering backfill (add storage if this doesn't resolve itself): 2 pgs backfill_toofull" in cluster log
[8134393]
rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore/{alloc$/{avl} base compr$/{no$/{no}} mem$/{low} write$/{write_v1}} rados supported-random-distro$/{ubuntu_latest}}
Command failed (workunit test cephtool/test.sh) on smithi165 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c59d437e2b406e5d628fe144208418bb3dffc25 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
2025-02-17T10:37:39.363 INFO:tasks.workunit.client.0.smithi165.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:42: expect_true: diff -au keyring1 keyring2
2025-02-17T10:37:39.364 INFO:tasks.workunit.client.0.smithi165.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:42: expect_true: return 0
2025-02-17T10:37:39.364 INFO:tasks.workunit.client.0.smithi165.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:619: test_auth: env CEPH_KEYRING=keyring1 ceph -n client.admin2 auth rotate client.admin2
2025-02-17T10:52:39.486 INFO:tasks.workunit.client.0.smithi165.stderr:2025-02-17T10:52:39.486+0000 7f9625a78640 0 --1- 172.21.15.165:0/1067412820 >> v1:172.21.15.165:6812/3612510196 conn(0x7f9620059a10 0x7f95f00c1420 :-1 s=CONNECTING_SEND_CONNECT_MSG pgs=0 cs=0 l=1).handle_connect_reply_2 connect got BADAUTHORIZER
2025-02-17T10:52:40.486 INFO:tasks.workunit.client.0.smithi165.stderr:2025-02-17T10:52:40.486+0000 7f9625a78640 0 --1- 172.21.15.165:0/1067412820 >> v1:172.21.15.165:6812/3612510196 conn(0x7f9620059a10 0x7f95ec023640 :-1 s=CONNECTING_SEND_CONNECT_MSG pgs=0 cs=0 l=1).handle_connect_reply_2 connect got BADAUTHORIZER
.....
2025-02-17T13:34:40.999 INFO:tasks.workunit.client.0.smithi165.stderr:2025-02-17T13:34:40.995+0000 7f9625a78640 0 --1- 172.21.15.165:0/1067412820 >> v1:172.21.15.165:6812/3612510196 conn(0x7f9620059a10 0x7f95ec023620 :-1 s=CONNECTING_SEND_CONNECT_MSG pgs=0 cs=0 l=1).handle_connect_reply_2 connect got BADAUTHORIZER
2025-02-17T13:34:41.459 INFO:tasks.workunit.client.0.smithi165.stderr://home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh:1: test_auth: rm -fr /tmp/cephtool.Q6y
2025-02-17T13:34:41.461 DEBUG:teuthology.orchestra.run:got remote process result: 124
NEW TRACKER: https://tracker.ceph.com/issues/70142
[8134397]
rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub}
Command failed (workunit test scrub/osd-scrub-test.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c59d437e2b406e5d628fe144208418bb3dffc25 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh'
[8134423]
rados/verify/{centos_latest ceph clusters/{fixed-2 fixed-4 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/{bluestore/{alloc$/{avl} base compr$/{yes$/{zstd}} mem$/{normal-1} write$/{write_v1}}} rados read-affinity/default tasks/rados_api_tests validater/valgrind}
Failed to deploy.
[8134425]
rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/dashboard}
Test failure: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)
[8134426]
rados/encoder/{0-start 1-tasks supported-random-distro$/{ubuntu_latest}}
[8134453]
rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
2025-02-17T11:20:10.150 DEBUG:teuthology.orchestra.run:got remote process result: None
2025-02-17T11:20:10.150 INFO:tasks.cephadm.osd.2:Stopped osd.2
2025-02-17T11:20:10.150 DEBUG:teuthology.orchestra.run.smithi172:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 15e86b74-ed20-11ef-bb82-bd4984dce30f --force --keep-logs
2025-02-17T19:04:14.135 DEBUG:teuthology.exit:Got signal 15; running 1 handler...
[8134490]
rados/standalone/{supported-random-distro$/{centos_latest} workloads/erasure-code}
Command failed (workunit test erasure-code/test-erasure-code.sh) on smithi071 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0c59d437e2b406e5d628fe144208418bb3dffc25 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'
2025-02-17T11:58:49.763 INFO:tasks.workunit.client.0.smithi071.stderr:Error ENOENT: technique= is not a valid coding technique. Choose one of the following: reed_sol_van, reed_sol_r6_op, cauchy_orig, cauchy_good, liberation, blaum_roth, liber8tion
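The ENOENT above comes from an empty `technique=` being passed to the jerasure plugin. A minimal sketch of that validation (not Ceph code; `validate_technique` is a made-up name, but the accepted values are copied verbatim from the error message):

```python
# Hypothetical sketch (not Ceph code) of the check behind the ENOENT above.
# The technique list is taken verbatim from the error message in the log.
VALID_TECHNIQUES = {
    "reed_sol_van", "reed_sol_r6_op", "cauchy_orig", "cauchy_good",
    "liberation", "blaum_roth", "liber8tion",
}

def validate_technique(technique):
    """Reject an empty or unknown technique, as the plugin does."""
    if technique not in VALID_TECHNIQUES:
        raise ValueError(
            f"technique={technique} is not a valid coding technique")
    return technique

print(validate_technique("reed_sol_van"))  # reed_sol_van
```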
[8134503]
Failed to deploy.
[8134505]
rados/singleton/{all/osd-recovery-incomplete mon_election/connectivity msgr-failures/none msgr/async objectstore/{bluestore/{alloc$/{avl} base compr$/{yes$/{zlib}} mem$/{low} write$/{write_v2}}} rados supported-random-distro$/{ubuntu_latest}}
No module named 'tasks.ceph'
[8134511]
No module named 'tasks.ceph'
[8134512]
No module named 'tasks.workunit'
[8134514]
No module named 'tasks.ceph'
[8134516]
rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{ubuntu_latest} mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore/{alloc$/{bitmap} base compr$/{no$/{no}} mem$/{normal-2} write$/{write_v1}}} tasks/{1-install 2-ceph 3-mgrmodules 4-units/module_selftest}}
Test failure: test_selftest_command_spam (tasks.mgr.test_module_selftest.TestModuleSelftest)
[8134521]
[8134522]
[8134523]
No module named 'tasks.ceph'
Updated by Adam Kupczyk about 1 year ago
- Status changed from QA Testing to QA Needs Rerun/Rebuilt
https://github.com/ceph/ceph/pull/61646 - qa/rados: Augmented bluestore testing
^ do not merge; it opened new testing paths with new failures
Tracked with https://tracker.ceph.com/issues/70143, but the ultimate resolution may be different.