Commit c013050
kgo: avoid rare panic
The scenario is (a minimal Go sketch follows the list):
* A metadata update is actively running and has stopped an active session,
returning all topicPartitions that were being list/epoch loaded. These
list/epoch loads are stored in reloadOffsets. Metadata grabs the
session change mutex.
* Client.Close is now called and stores true into the
client.consumer.kill atomic. Close is briefly blocked because it calls
assignPartitions, which tries to grab the lock to stop the session.
Close is now paused; importantly, however, the consumer.kill atomic is
already set to true.
* Metadata tries to start a new session. startNewSession returns
noConsumerSession because consumer.kill is now true.
* Metadata calls reloadOffsets.loadWithSession, which panics once
the session tries to access the client variable c.
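To make the failure concrete, below is a minimal, runnable Go sketch of
the race. The names (client, consumer.kill, noConsumerSession,
startNewSession, listOrEpoch) are borrowed from the commit message, but
the types are simplified stand-ins, and the assumption that the panic
is a nil pointer dereference through the sentinel session's client
field is mine, not stated in the source.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Simplified stand-ins for franz-go's internals; the real types are
// far more involved.
type client struct{ id string }

// load dereferences the client, which panics when cl is nil.
func (cl *client) load() { fmt.Println("loading via client", cl.id) }

// consumerSession points back at the client; the sentinel below
// deliberately leaves c nil.
type consumerSession struct{ c *client }

// noConsumerSession is the sentinel returned once the consumer is
// being killed.
var noConsumerSession = &consumerSession{}

type consumer struct {
	kill atomic.Bool // stored true when Client.Close begins
}

// startNewSession refuses to create a real session once kill is set.
func (co *consumer) startNewSession(cl *client) *consumerSession {
	if co.kill.Load() {
		return noConsumerSession
	}
	return &consumerSession{c: cl}
}

// listOrEpoch as it behaved before the fix: it assumes s.c is valid.
func (s *consumerSession) listOrEpoch() {
	s.c.load() // s.c is nil for noConsumerSession: nil dereference
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("panic reproduced:", r)
		}
	}()

	var co consumer
	cl := &client{id: "example"}

	co.kill.Store(true)         // Client.Close has started
	s := co.startNewSession(cl) // metadata gets the sentinel back
	s.listOrEpoch()             // pre-fix behavior: panic
}
```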
This panic can only happen if all of the following are true:
* Client.Close is being called
* Metadata is updating
* Metadata response is moving a partition from one broker to another
* The timing is perfect
The fix is to check in listOrEpoch whether the consumerSession is
noConsumerSession and, if so, to return early.
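Continuing the sketch above, the early return could look like this; it
illustrates the shape of the fix, not the exact franz-go code.

```go
// listOrEpoch with the guard applied: bail out on the sentinel so
// the nil client pointer is never touched.
func (s *consumerSession) listOrEpoch() {
	if s == noConsumerSession {
		return // Close is in progress; nothing to load
	}
	s.c.load()
}
```

With this guard, the Close-vs-metadata race above degrades to a
harmless early return instead of a crash.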
Note that doOnMetadataUpdate, incWorker, and decWorker already check
noConsumerSession. The other methods do not need the check:
* mapLoadsToBrokers is called in listOrEpoch on a valid session
* handleListOrEpochResults is likewise called only on a valid session
* desireFetch is only called in source after noConsumerSession is
checked, and manageFetchConcurrency is called only in desireFetch
Closes redpanda-data/redpanda#13791.
1 file changed: +9 −0 (lines 1670–1678 added).