Address IntelliJ inspection findings #10583
Force-pushed from 5aef101 to 2ab97a5
```diff
   }
   Preconditions.checkArgument(
-      name != null && !name.isEmpty(), "Cannot use empty or null partition name: %s", name);
+      !name.isEmpty(), "Cannot use empty or null partition name: %s", name);
```
We can also remove the "or null" text from the message.
+1 on updating the error message here.
This still needs to be addressed.
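To illustrate the suggestion above, here is a minimal, self-contained sketch of the message change (the class name and the local `checkArgument` helper are stand-ins for Guava's `Preconditions.checkArgument`, added only so the snippet compiles on its own): once the null check is gone, the message can drop the "or null" wording.

```java
public class PartitionNameCheck {

  // Stand-in for Guava's Preconditions.checkArgument, so the sketch is self-contained.
  static void checkArgument(boolean expression, String template, Object arg) {
    if (!expression) {
      throw new IllegalArgumentException(String.format(template, arg));
    }
  }

  // The null check was removed, so the message no longer mentions null.
  static String validate(String name) {
    checkArgument(!name.isEmpty(), "Cannot use empty partition name: %s", name);
    return name;
  }

  public static void main(String[] args) {
    System.out.println(validate("year=2024"));
    try {
      validate("");
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```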
```java
final int numPartitions = 10;
final int partitionToFail = new Random().nextInt(numPartitions);
MapPartitionsFunction<Row, Row> failOnFirstPartitionFunc =
    (MapPartitionsFunction<Row, Row>)
```
We can fix this in the other Spark version folders as well.
Similar comment for all Spark- and Flink-module related changes. Only the default modules are handled currently. A follow-up PR is also fine.
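The test pattern under discussion can be sketched without Spark (the `MapPartitionsFunction`/`Row` types are replaced with a plain `IntFunction` here purely so the snippet stands alone): pick one partition index at random up front, and have the function throw only when invoked for that partition.

```java
import java.util.Random;
import java.util.function.IntFunction;

public class FailOnePartition {

  // Simplified stand-in for the Spark test: a function over partition indexes
  // that succeeds everywhere except one randomly chosen partition.
  static IntFunction<String> failOnPartition(int partitionToFail) {
    return partition -> {
      if (partition == partitionToFail) {
        throw new RuntimeException("Injected failure in partition " + partitionToFail);
      }
      return "partition " + partition + " ok";
    };
  }

  public static void main(String[] args) {
    final int numPartitions = 10;
    final int partitionToFail = new Random().nextInt(numPartitions);
    IntFunction<String> func = failOnPartition(partitionToFail);

    int failures = 0;
    for (int p = 0; p < numPartitions; p++) {
      try {
        func.apply(p);
      } catch (RuntimeException e) {
        failures++;
      }
    }
    // Exactly one of the ten partitions was chosen to fail.
    System.out.println("failures: " + failures);
  }
}
```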
```java
 * ByteBuffer)}
 */
public static ByteBuffer shortToOrderedBytes(short val, ByteBuffer reuse) {
  ByteBuffer bytes = ByteBuffers.reuse(reuse, PRIMITIVE_BUFFER_SIZE);
```
a) Should we replace the contents of intToOrderedBytes with a call to longToOrderedBytes?
b) Replace all other intToOrderedBytes callers with longToOrderedBytes to avoid double casting (type -> int -> long).
Let me rephrase this comment.
a) Why didn't we handle the same for intToOrderedBytes? It is a duplicate of longToOrderedBytes.
b) Why are we not calling longToOrderedBytes in the new places? Calling intToOrderedBytes leads to double casting.
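To illustrate the delegation being asked for, a self-contained sketch (the real methods live in Iceberg's ZOrderByteUtils and write into a reusable ByteBuffer; the sign-bit flip below is the standard trick for order-preserving big-endian encodings, simplified here to return a fresh array): the narrower overloads can simply widen once and delegate to longToOrderedBytes, avoiding duplicated logic and double casts.

```java
import java.nio.ByteBuffer;

public class OrderedBytes {

  // Flipping the sign bit makes the unsigned lexicographic order of the
  // big-endian bytes match the signed numeric order of the values.
  static byte[] longToOrderedBytes(long val) {
    return ByteBuffer.allocate(Long.BYTES).putLong(val ^ Long.MIN_VALUE).array();
  }

  // Widening short -> long is a single implicit cast, so delegating avoids
  // duplicating the encoding (and any type -> int -> long double cast).
  static byte[] shortToOrderedBytes(short val) {
    return longToOrderedBytes(val);
  }

  static byte[] intToOrderedBytes(int val) {
    return longToOrderedBytes(val);
  }

  public static void main(String[] args) {
    byte[] a = shortToOrderedBytes((short) -1);
    byte[] b = shortToOrderedBytes((short) 1);
    // -1 < 1, so its ordered encoding compares lower byte-by-byte.
    System.out.println(java.util.Arrays.compareUnsigned(a, b) < 0);
  }
}
```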
ajantha-bhat left a comment:
Thanks for this awesome cleanup work. It was easy to review commit by commit.
A few minor comments; otherwise it is good to go.
The non-default modules (of Flink and Spark) can be handled in a follow-up PR.
I'd really prefer to keep non-IJ autofixes out of this PR.

The PR is already quite large, and given that we need to fix those things across all Flink/Spark versions, I'd suggest extracting the Flink/Spark-related changes into a separate PR.

I think the current changes are easy to review if we review commit by commit. The backport to Flink/Spark can be handled in the follow-up.
Force-pushed from 2ab97a5 to 4012df7
Should be good to go once https://github.com/apache/iceberg/pull/10583/files#r1663651011 has been addressed.

Sorry, it's really out of scope of this PR ("Address IntelliJ inspection findings").

We're improving the code, and the null check is being removed since the value wasn't null in the first place, so what prevents us from updating the error message as well?

"Improving code" is very broad, and the null check was never hit. Really, this PR is meant to address the IntelliJ findings and nothing more.
I let IntelliJ run all inspections on the Iceberg source tree, and it reported quite a number of issues. A number of them are flagged as problems that require attention.
This PR addresses many of these inspection findings. None of the individual changes seem controversial, so I opted to push one PR with a couple of commits.