Finished reading information for table: positionamxgpx_REALTIME
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412576
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android--2147483648
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412582
Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-
info,id=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
at
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithR
epository(DefaultMBeanServerInterceptor.java:1865) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynam
icMBean(DefaultMBeanServerInterceptor.java:960) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObjec
t(DefaultMBeanServerInterceptor.java:895) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean
(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer
.java:523) ~[?:?]
at
org.apache.pinot.shaded.org.apache.kafka.common.utils.AppInfoParser.registerAppInfo
(AppInfoParser.java:64) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:814) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:665) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:646) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:626) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.createC
onsumer(KafkaPartitionLevelConnectionHandler.java:83) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.<init>(
KafkaPartitionLevelConnectionHandler.java:70) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.<init>(KafkaStre
amMetadataProvider.java:59) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory.createPartitionMetadata
Provider(KafkaConsumerFactory.java:38) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(St
reamMetadataProvider.java:91) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:70) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:31) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGrou
pMetadataList(PinotTableIdealStateBuilder.java:93) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.realtime.MissingConsumingSegmentFinder.<init
>(MissingConsumingSegmentFinder.java:79) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.updateSegmentMetrics(Segment
StatusChecker.java:375) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:124) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:66) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.processT
ables(ControllerPeriodicTask.java:116) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.runTask(
ControllerPeriodicTask.java:79) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:150)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:135)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.PeriodicTaskScheduler.lambda$start$0(PeriodicTas
kScheduler.java:87) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
ScheduledThreadPoolExecutor.java:305) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java
:1136) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
a:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-0
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412589
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-1
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412598
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-2
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412606
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-3
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412626
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-4
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412633
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-5
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412644
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-6
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412652
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-7
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412659
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-8
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412668
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-9
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412676
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-10
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412684
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-11
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412693
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-12
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412700
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-13
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412707
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-14
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412716
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-15
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412723
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-16
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412730
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-17
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412738
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-18
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412747
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-19
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412753
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-20
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412762
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-21
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412769
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-22
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412776
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-23
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412784
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-24
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412791
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-25
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412799
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-26
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412809
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-27
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412817
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-28
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412826
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-29
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412835
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-30
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412843
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-31
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412852
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-32
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412861
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-33
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412869
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-34
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110412877
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-35
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Reading segment sizes from 7 servers for table: clickstreamd_REALTIME with timeout:
30000ms
Finished reading information for table: clickstreamd_REALTIME
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110418645
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2--
2147483648
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110418701
Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-
info,id=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-position-data-v2
at
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithR
epository(DefaultMBeanServerInterceptor.java:1865) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynam
icMBean(DefaultMBeanServerInterceptor.java:960) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObjec
t(DefaultMBeanServerInterceptor.java:895) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean
(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer
.java:523) ~[?:?]
at
org.apache.pinot.shaded.org.apache.kafka.common.utils.AppInfoParser.registerAppInfo
(AppInfoParser.java:64) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:814) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:665) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:646) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:626) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.createC
onsumer(KafkaPartitionLevelConnectionHandler.java:83) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.<init>(
KafkaPartitionLevelConnectionHandler.java:70) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.<init>(KafkaStre
amMetadataProvider.java:59) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory.createPartitionMetadata
Provider(KafkaConsumerFactory.java:38) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(St
reamMetadataProvider.java:91) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:70) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:31) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGrou
pMetadataList(PinotTableIdealStateBuilder.java:93) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.realtime.MissingConsumingSegmentFinder.<init
>(MissingConsumingSegmentFinder.java:79) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.updateSegmentMetrics(Segment
StatusChecker.java:375) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:124) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:66) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.processT
ables(ControllerPeriodicTask.java:116) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.runTask(
ControllerPeriodicTask.java:79) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:150)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:135)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.PeriodicTaskScheduler.lambda$start$0(PeriodicTas
kScheduler.java:87) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
ScheduledThreadPoolExecutor.java:305) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java
:1136) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
a:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-0
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110418811
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-1
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110418920
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-2
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110419030
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-3
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110419141
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-4
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110419251
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-5
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110419361
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-6
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110419471
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-7
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110419580
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-8
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Reading segment sizes from 4 servers for table: positionamxntt_REALTIME with
timeout: 30000ms
Finished reading information for table: positionamxntt_REALTIME
Finish processing 5/5 tables in task: SegmentStatusChecker
[TaskRequestId: auto] Finish running task: SegmentStatusChecker in 22536ms
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Getting Helix leader: pinot-controller-1.pinot-
controller.pinot.svc.cluster.local_9000, Helix version: 1.3.1, mtime: 1729238439629
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
url string passed is : http://10.5.234.128:8099/query/sql
Query: select * from amxordersgpx limit 10 Time: 19
Handled request from 10.5.229.104 POST http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/sql, content-type
application/json; charset=UTF-8 status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tasks/tasktypes, content-type
null status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tables?type=realtime,
content-type null status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tenants/DefaultTenant?
type=server, content-type null status code 200 OK
Tenant 'AMX' not found.
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/brokers/tenants/AMX, content-
type null status code 404 Not Found
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tenants/CLICKSTREAM?
type=server, content-type null status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Controller_pinot-
controller-0.pinot-controller.pinot.svc.cluster.local_9000, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Controller_pinot-
controller-1.pinot-controller.pinot.svc.cluster.local_9000, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-10.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-12.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-0.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-2.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-7.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-5.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/
Minion_10.5.225.15_9514, content-type null status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Getting Helix leader: pinot-controller-1.pinot-
controller.pinot.svc.cluster.local_9000, Helix version: 1.3.1, mtime: 1729238439629
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tenants/DefaultTenant/tables,
content-type null status code 200 OK
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tables/
positionamxntt_REALTIME/idealstate, content-type null status code 200 OK
Reading segment sizes from 4 servers for table: positionamxgpx_REALTIME with
timeout: 30000ms
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tables/
positionamxgpx_REALTIME/externalview, content-type null status code 200 OK
Finished reading information for table: positionamxgpx_REALTIME
Handled request from 10.5.229.104 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tables/
positionamxgpx_REALTIME/size, content-type null status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Starting TaskMetricsEmitter with running frequency of 300 seconds.
[TaskRequestId: auto] Start running task: TaskMetricsEmitter
[TaskRequestId: auto] Finish running task: TaskMetricsEmitter in 6ms
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Getting Helix leader: pinot-controller-1.pinot-
controller.pinot.svc.cluster.local_9000, Helix version: 1.3.1, mtime: 1729238439629
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Getting Helix leader: pinot-controller-1.pinot-
controller.pinot.svc.cluster.local_9000, Helix version: 1.3.1, mtime: 1729238439629
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Starting RebalanceChecker with running frequency of 300 seconds.
[TaskRequestId: auto] Start running task: RebalanceChecker
Processing 5 tables in task: RebalanceChecker
Finish processing 0/5 tables in task: RebalanceChecker
[TaskRequestId: auto] Finish running task: RebalanceChecker in 3ms
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Getting Helix leader: pinot-controller-1.pinot-
controller.pinot.svc.cluster.local_9000, Helix version: 1.3.1, mtime: 1729238439629
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Starting SegmentStatusChecker with running frequency of 300 seconds.
[TaskRequestId: auto] Start running task: SegmentStatusChecker
Processing 5 tables in task: SegmentStatusChecker
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720201
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2--2147483648
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720257
Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-
info,id=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-v2
at
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithR
epository(DefaultMBeanServerInterceptor.java:1865) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynam
icMBean(DefaultMBeanServerInterceptor.java:960) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObjec
t(DefaultMBeanServerInterceptor.java:895) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean
(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer
.java:523) ~[?:?]
at
org.apache.pinot.shaded.org.apache.kafka.common.utils.AppInfoParser.registerAppInfo
(AppInfoParser.java:64) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:814) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:665) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:646) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:626) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.createC
onsumer(KafkaPartitionLevelConnectionHandler.java:83) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.<init>(
KafkaPartitionLevelConnectionHandler.java:70) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.<init>(KafkaStre
amMetadataProvider.java:59) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory.createPartitionMetadata
Provider(KafkaConsumerFactory.java:38) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(St
reamMetadataProvider.java:91) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:70) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:31) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGrou
pMetadataList(PinotTableIdealStateBuilder.java:93) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.realtime.MissingConsumingSegmentFinder.<init
>(MissingConsumingSegmentFinder.java:79) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.updateSegmentMetrics(Segment
StatusChecker.java:375) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:124) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:66) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.processT
ables(ControllerPeriodicTask.java:116) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.runTask(
ControllerPeriodicTask.java:79) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:150)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:135)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.PeriodicTaskScheduler.lambda$start$0(PeriodicTas
kScheduler.java:87) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
ScheduledThreadPoolExecutor.java:305) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java
:1136) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
a:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-0
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720365
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-1
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720476
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-2
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720585
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-3
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720696
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-4
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720806
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-5
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110720915
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-6
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721026
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-7
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'stream.kafka.consumer.prop.metadata.max.age.ms' was supplied but
isn't a known config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721135
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Subscribed to partition(s): amx-trade-data-v2-8
[Consumer clientId=PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-trade-data-
v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-tradesntt_REALTIME-amx-
trade-data-v2 unregistered
Reading segment sizes from 6 servers for table: tradesntt_REALTIME with timeout:
30000ms
Finished reading information for table: tradesntt_REALTIME
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721463
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Subscribed to partition(s): rtab--2147483648
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721468
Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-
info,id=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
at
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithR
epository(DefaultMBeanServerInterceptor.java:1865) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynam
icMBean(DefaultMBeanServerInterceptor.java:960) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObjec
t(DefaultMBeanServerInterceptor.java:895) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean
(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer
.java:523) ~[?:?]
at
org.apache.pinot.shaded.org.apache.kafka.common.utils.AppInfoParser.registerAppInfo
(AppInfoParser.java:64) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:814) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:665) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:646) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:626) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.createC
onsumer(KafkaPartitionLevelConnectionHandler.java:83) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.<init>(
KafkaPartitionLevelConnectionHandler.java:70) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.<init>(KafkaStre
amMetadataProvider.java:59) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory.createPartitionMetadata
Provider(KafkaConsumerFactory.java:38) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(St
reamMetadataProvider.java:91) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:70) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:31) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGrou
pMetadataList(PinotTableIdealStateBuilder.java:93) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.realtime.MissingConsumingSegmentFinder.<init
>(MissingConsumingSegmentFinder.java:79) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.updateSegmentMetrics(Segment
StatusChecker.java:375) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:124) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:66) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.processT
ables(ControllerPeriodicTask.java:116) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.runTask(
ControllerPeriodicTask.java:79) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:150)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:135)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.PeriodicTaskScheduler.lambda$start$0(PeriodicTas
kScheduler.java:87) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
ScheduledThreadPoolExecutor.java:305) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java
:1136) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
a:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Subscribed to partition(s): rtab-0
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-
rtab unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721476
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Subscribed to partition(s): rtab-1
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-
rtab unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721484
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Subscribed to partition(s): rtab-2
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-
rtab unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721492
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Subscribed to partition(s): rtab-3
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-
rtab unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721499
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Subscribed to partition(s): rtab-4
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-
rtab unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110721506
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Subscribed to partition(s): rtab-5
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-rtab,
groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-
rtab unregistered
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamweb_REALTIME-
rtab unregistered
Reading segment sizes from 6 servers for table: clickstreamweb_REALTIME with
timeout: 30000ms
Finished reading information for table: clickstreamweb_REALTIME
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727897
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2--
2147483648
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727904
Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-
info,id=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-position-data-v2
at
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithR
epository(DefaultMBeanServerInterceptor.java:1865) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynam
icMBean(DefaultMBeanServerInterceptor.java:960) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObjec
t(DefaultMBeanServerInterceptor.java:895) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean
(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer
.java:523) ~[?:?]
at
org.apache.pinot.shaded.org.apache.kafka.common.utils.AppInfoParser.registerAppInfo
(AppInfoParser.java:64) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:814) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:665) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:646) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:626) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.createC
onsumer(KafkaPartitionLevelConnectionHandler.java:83) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.<init>(
KafkaPartitionLevelConnectionHandler.java:70) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.<init>(KafkaStre
amMetadataProvider.java:59) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory.createPartitionMetadata
Provider(KafkaConsumerFactory.java:38) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(St
reamMetadataProvider.java:91) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:70) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:31) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGrou
pMetadataList(PinotTableIdealStateBuilder.java:93) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.realtime.MissingConsumingSegmentFinder.<init
>(MissingConsumingSegmentFinder.java:79) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.updateSegmentMetrics(Segment
StatusChecker.java:375) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:124) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:66) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.processT
ables(ControllerPeriodicTask.java:116) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.runTask(
ControllerPeriodicTask.java:79) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:150)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:135)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.PeriodicTaskScheduler.lambda$start$0(PeriodicTas
kScheduler.java:87) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
ScheduledThreadPoolExecutor.java:305) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java
:1136) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
a:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-0
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727917
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-1
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727929
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-2
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727941
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-3
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727954
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-4
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727967
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-5
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727981
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-6
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110727993
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-7
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.gpx.prod.angelone.in:9094, amx-
cnf-kafka-broker1.gpx.prod.angelone.in:9094, amx-cnf-kafka-
broker0.gpx.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110728008
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-8
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: 4sWan3AeScuz2SdBvbzSRA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxgpx_REALTIME-
amx-position-data-v2 unregistered
Reading segment sizes from 4 servers for table: positionamxgpx_REALTIME with
timeout: 30000ms
Finished reading information for table: positionamxgpx_REALTIME
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734886
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android--2147483648
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734892
Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-
info,id=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
at
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithR
epository(DefaultMBeanServerInterceptor.java:1865) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynam
icMBean(DefaultMBeanServerInterceptor.java:960) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObjec
t(DefaultMBeanServerInterceptor.java:895) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean
(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer
.java:523) ~[?:?]
at
org.apache.pinot.shaded.org.apache.kafka.common.utils.AppInfoParser.registerAppInfo
(AppInfoParser.java:64) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:814) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:665) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:646) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:626) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.createC
onsumer(KafkaPartitionLevelConnectionHandler.java:83) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.<init>(
KafkaPartitionLevelConnectionHandler.java:70) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.<init>(KafkaStre
amMetadataProvider.java:59) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory.createPartitionMetadata
Provider(KafkaConsumerFactory.java:38) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(St
reamMetadataProvider.java:91) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:70) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:31) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGrou
pMetadataList(PinotTableIdealStateBuilder.java:93) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.realtime.MissingConsumingSegmentFinder.<init
>(MissingConsumingSegmentFinder.java:79) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.updateSegmentMetrics(Segment
StatusChecker.java:375) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:124) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:66) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.processT
ables(ControllerPeriodicTask.java:116) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.runTask(
ControllerPeriodicTask.java:79) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:150)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:135)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.PeriodicTaskScheduler.lambda$start$0(PeriodicTas
kScheduler.java:87) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
ScheduledThreadPoolExecutor.java:305) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java
:1136) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
a:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-0
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734900
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-1
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734907
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-2
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734917
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-3
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734924
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-4
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734931
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-5
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734941
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-6
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734947
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-7
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734955
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-8
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734963
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-9
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734970
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-10
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734976
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-11
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734987
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-12
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110734994
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-13
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735002
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-14
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735011
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-15
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735019
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-16
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735026
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-17
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735036
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-18
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735048
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-19
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735055
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-20
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735064
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-21
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735071
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-22
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735078
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-23
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735087
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-24
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735096
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-25
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735104
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-26
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735113
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-27
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735121
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-28
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735130
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-29
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735139
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-30
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735148
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-31
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735155
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-32
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735163
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-33
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735170
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-34
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-1.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092, b-3.kafkacnfprod2.sbtdrb.c2.kafka.ap-south-
1.amazonaws.com:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-clickstreamd_REALTIME-spark_android
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110735178
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Subscribed to partition(s): spark_android-35
[Consumer clientId=PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android, groupId=null] Cluster ID: Bj1atX7tR-WJwTk-7ArZPQ
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-clickstreamd_REALTIME-
spark_android unregistered
Reading segment sizes from 7 servers for table: clickstreamd_REALTIME with timeout:
30000ms
Finished reading information for table: clickstreamd_REALTIME
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110740794
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2--
2147483648
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110740849
Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-
info,id=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-position-data-v2
at
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithR
epository(DefaultMBeanServerInterceptor.java:1865) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynam
icMBean(DefaultMBeanServerInterceptor.java:960) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObjec
t(DefaultMBeanServerInterceptor.java:895) ~[?:?]
at
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean
(DefaultMBeanServerInterceptor.java:320) ~[?:?]
at
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer
.java:523) ~[?:?]
at
org.apache.pinot.shaded.org.apache.kafka.common.utils.AppInfoParser.registerAppInfo
(AppInfoParser.java:64) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:814) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:665) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:646) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.<init>(Kafk
aConsumer.java:626) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.createC
onsumer(KafkaPartitionLevelConnectionHandler.java:83) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaPartitionLevelConnectionHandler.<init>(
KafkaPartitionLevelConnectionHandler.java:70) ~[pinot-kafka-2.0-1.2.0-
shaded.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaStreamMetadataProvider.<init>(KafkaStre
amMetadataProvider.java:59) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory.createPartitionMetadata
Provider(KafkaConsumerFactory.java:38) ~[pinot-kafka-2.0-1.2.0-shaded.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.StreamMetadataProvider.computePartitionGroupMetadata(St
reamMetadataProvider.java:91) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:70) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.stream.PartitionGroupMetadataFetcher.call(PartitionGroupMetada
taFetcher.java:31) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.PinotTableIdealStateBuilder.getPartitionGrou
pMetadataList(PinotTableIdealStateBuilder.java:93) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.realtime.MissingConsumingSegmentFinder.<init
>(MissingConsumingSegmentFinder.java:79) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.updateSegmentMetrics(Segment
StatusChecker.java:375) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:124) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.SegmentStatusChecker.processTable(SegmentStatusCh
ecker.java:66) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.processT
ables(ControllerPeriodicTask.java:116) ~[pinot-all-1.2.0-jar-with-
dependencies.jar:1.2.0-cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.controller.helix.core.periodictask.ControllerPeriodicTask.runTask(
ControllerPeriodicTask.java:79) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:150)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.BasePeriodicTask.run(BasePeriodicTask.java:135)
~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
org.apache.pinot.core.periodictask.PeriodicTaskScheduler.lambda$start$0(PeriodicTas
kScheduler.java:87) ~[pinot-all-1.2.0-jar-with-dependencies.jar:1.2.0-
cc33ac502a02e2fe830fe21e556234ee99351a7a]
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
[?:?]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(
ScheduledThreadPoolExecutor.java:305) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java
:1136) [?:?]
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.jav
a:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-0
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110740960
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-1
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110741068
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-2
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110741179
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-3
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110741289
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-4
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110741399
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-5
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110741508
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-6
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110741620
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-7
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Skipping auto SSL server validation since it's not configured.
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [amx-cnf-kafka-broker2.ntt.prod.angelone.in:9094, amx-
cnf-kafka-broker1.ntt.prod.angelone.in:9094, amx-cnf-kafka-
broker0.ntt.prod.angelone.in:9094]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class
org.apache.pinot.shaded.org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class
org.apache.pinot.shaded.org.apache.kafka.common.serialization.BytesDeserializer
The configuration 'realtime.segment.flush.threshold.rows' was supplied but isn't a
known config.
The configuration 'realtime.segment.flush.threshold.size' was supplied but isn't a
known config.
The configuration 'stream.kafka.decoder.class.name' was supplied but isn't a known
config.
The configuration 'streamType' was supplied but isn't a known config.
The configuration 'stream.kafka.consumer.prop.request.timeout.ms' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.prop.auto.commit.enable' was supplied but
isn't a known config.
The configuration 'stream.kafka.consumer.type' was supplied but isn't a known
config.
The configuration 'stream.kafka.broker.list' was supplied but isn't a known config.
The configuration 'realtime.segment.flush.threshold.time' was supplied but isn't a
known config.
The configuration 'stream.kafka.consumer.prop.auto.offset.reset' was supplied but
isn't a known config.
The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.
Kafka version: 2.8.1
Kafka commitId: 839b886f9b732b15
Kafka startTimeMs: 1730110741730
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Subscribed to partition(s): amx-position-data-v2-8
[Consumer clientId=PartitionGroupMetadataFetcher-positionamxntt_REALTIME-amx-
position-data-v2, groupId=null] Cluster ID: r6ejb632T2O-TE80nW2iAA
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Metrics scheduler closed
Closing reporter
org.apache.pinot.shaded.org.apache.kafka.common.metrics.JmxReporter
Metrics reporters closed
App info kafka.consumer for PartitionGroupMetadataFetcher-positionamxntt_REALTIME-
amx-position-data-v2 unregistered
Reading segment sizes from 4 servers for table: positionamxntt_REALTIME with
timeout: 30000ms
Finished reading information for table: positionamxntt_REALTIME
Finish processing 5/5 tables in task: SegmentStatusChecker
[TaskRequestId: auto] Finish running task: SegmentStatusChecker in 22158ms
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Getting Helix leader: pinot-controller-1.pinot-
controller.pinot.svc.cluster.local_9000, Helix version: 1.3.1, mtime: 1729238439629
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Instance pinot-controller-2.pinot-controller.pinot.svc.cluster.local_9000 is not
leader of cluster PinotCluster due to current session 2000a8125260038 does not
match leader session 3000a812cfc0036
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tenants, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tasks/tasktypes, content-type
null status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tables?type=offline, content-
type null status code 200 OK
Tenant 'AMX' not found.
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/brokers/tenants/AMX, content-
type null status code 404 Not Found
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tenants/CLICKSTREAM?
type=server, content-type null status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/tenants/CLICKSTREAM/tables,
content-type null status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Controller_pinot-
controller-1.pinot-controller.pinot.svc.cluster.local_9000, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/zk/ls?path=%2FPinotCluster
%2FLIVEINSTANCES, content-type null status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/
Broker_10.5.226.104_8099, content-type null status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/zk/ls?path=%2FPinotCluster
%2FLIVEINSTANCES, content-type null status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-9.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-15.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-0.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-2.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-4.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Server_pinot-
server-5.pinot-server.pinot.svc.cluster.local_8098, content-type null status code
200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/
Minion_10.5.234.190_9514, content-type null status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Controller_pinot-
controller-1.pinot-controller.pinot.svc.cluster.local_9000, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://internal-k8s-pinot-pinotcon-
1114a22f57-1757926636.ap-south-1.elb.amazonaws.com:80/instances/Controller_pinot-
controller-0.pinot-controller.pinot.svc.cluster.local_9000, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.232.26 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.229.104 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Handled request from 10.5.225.229 GET http://10.5.233.80:9000/, content-type null
status code 200 OK
Processing segmentConsumed:Offset: -1,Segment name:
positionamxntt__3__546__20241028T0919Z,Instance Id: Server_pinot-server-14.pinot-
server.pinot.svc.cluster.local_8098,Reason: timeLimit,NumRows:
99306,BuildTimeMillis: -1,WaitTimeMillis: -1,ExtraTimeSec: -1,SegmentLocation:
null,MemoryUsedBytes: 32375257,SegmentSizeBytes: -1,StreamPartitionMsgOffset:
266608835
Created FSM
{positionamxntt__3__546__20241028T0919Z,HOLDING,1730110795962,null,null,http://
pinot-controller-2.pinot-controller.pinot.svc.cluster.local:9000}
Processing segmentConsumed(Server_pinot-server-14.pinot-
server.pinot.svc.cluster.local_8098, 266608835)
HOLDING:Picking winner time=3 size=1
HOLDING:Committer notified winner instance=Server_pinot-server-14.pinot-
server.pinot.svc.cluster.local_8098 offset=266608835
HOLDING:COMMIT for instance=Server_pinot-server-14.pinot-
server.pinot.svc.cluster.local_8098 offset=266608835 buldTimeSec=126
Response to segmentConsumed for segment:positionamxntt__3__546__20241028T0919Z is :
{"status":"COMMIT","controllerVipUrl":"http://pinot-controller-2.pinot-
controller.pinot.svc.cluster.local:9000","streamPartitionMsgOffset":"266608835","bu
ildTimeSec":126,"isSplitCommitType":true}
Handled request from 10.5.231.65 GET http://pinot-controller-2.pinot-
controller.pinot.svc.cluster.local:9000/segmentConsumed?
reason=timeLimit&streamPartitionMsgOffset=266608835&instance=Server_pinot-server-
14.pinot-server.pinot.svc.cluster.local_8098&offset=-
1&name=positionamxntt__3__546__20241028T0919Z&rowCount=99306&memoryUsedBytes=323752
57, content-type null status code 200 OK