Even when https://status.maven.org/ says that it is up and all systems are operational, we frequently encounter problems with search.maven.org.
The latest issue seems to be around retrieving POMs:
```
Unable to download pom.xml for hmpps-sqs-spring-boot-starter-5.4.11.jar from Central; this could result in undetected CPE/CVEs.
... repeated 20 times in total
Unable to download pom.xml for hmpps-kotlin-spring-boot-starter-1.7.0.jar from Central; this could result in undetected CPE/CVEs.
... repeated 8 times
```
```
org.owasp.dependencycheck.analyzer.exception.UnexpectedAnalysisException: java.lang.InterruptedException: sleep interrupted
    at org.owasp.dependencycheck.analyzer.CentralAnalyzer.analyzeDependency(CentralAnalyzer.java:268)
    at org.owasp.dependencycheck.analyzer.AbstractAnalyzer.analyze(AbstractAnalyzer.java:131)
    at org.owasp.dependencycheck.AnalysisTask.call(AnalysisTask.java:88)
    at org.owasp.dependencycheck.AnalysisTask.call(AnalysisTask.java:37)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.lang.InterruptedException: sleep interrupted
    at java.base/java.lang.Thread.sleep0(Native Method)
    at java.base/java.lang.Thread.sleep(Thread.java:509)
    at org.owasp.dependencycheck.analyzer.CentralAnalyzer.analyzeDependency(CentralAnalyzer.java:265)
    ... 7 more
```
In debug mode I can see the failure:
```
2025-09-30T07:51:17.724+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 >> GET /remotecontent?filepath=org/reactivestreams/reactive-streams/1.0.4/reactive-streams-1.0.4.pom HTTP/1.1
2025-09-30T07:51:17.724+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 >> Accept-Encoding: gzip, x-gzip, deflate
2025-09-30T07:51:17.725+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 >> Host: search.maven.org
2025-09-30T07:51:17.725+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 >> Connection: keep-alive
2025-09-30T07:51:17.725+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 >> User-Agent: Apache-HttpClient/5.5 (Java/21)
2025-09-30T07:51:17.818+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << HTTP/1.1 502 Bad Gateway
2025-09-30T07:51:17.818+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << Content-Type: text/html
2025-09-30T07:51:17.818+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << Content-Length: 150
2025-09-30T07:51:17.818+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << Connection: keep-alive
2025-09-30T07:51:17.818+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << Date: Tue, 30 Sep 2025 06:51:17 GMT
2025-09-30T07:51:17.818+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << Server: nginx
2025-09-30T07:51:17.818+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << X-Cache: Error from cloudfront
2025-09-30T07:51:17.819+0100 [DEBUG] [org.apache.hc.client5.http.headers] http-outgoing-7 << Via: 1.1 965dae290e5ccc4a515861ea79a81932.cloudfront.net (CloudFront)
```
Although I think the issue will be fixed by dependency-check/DependencyCheck#5827, is there any way we could configure the Gradle plugin to fail earlier/quicker in the meantime?
Our latest GitHub Actions run failed after 3h 4m 18s, which means our runners are all tied up retrying for that time and can't be used for anything else. I've seen the `ossIndex.warnOnlyOnRemoteErrors` property, but I'm quite happy for the build to fail if the OSS Index is down. I've also seen that there are `nvd.maxRetryCount` and `nvd.delay` properties, so I wondered if there are any equivalent OSS Index ones so that the build fails after a few minutes instead?
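For reference, since the stack trace above points at `CentralAnalyzer` rather than the OSS Index analyzer, a stopgap we've considered is simply disabling the Central POM lookups (accepting the "undetected CPE/CVEs" caveat from the warnings), alongside the NVD retry knobs mentioned above. This is a sketch of what that configuration might look like in `build.gradle.kts` — property names are as we understand them from the plugin docs and may differ between plugin versions:

```kotlin
// build.gradle.kts — hedged sketch, not verified against every plugin version
dependencyCheck {
    analyzers {
        // Skip the Central POM retrieval that is failing with 502s;
        // avoids the long retry loop at the cost of the POM-based evidence.
        centralEnabled = false
    }
    nvd {
        // The NVD equivalents we'd like for Central/OSS Index:
        maxRetryCount = 3 // give up after a few attempts
        delay = 4000      // milliseconds between attempts
    }
}
```

Even so, a plugin-level equivalent of `nvd.maxRetryCount` for the Central analyzer would still be preferable, since disabling it reduces detection coverage.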