
Flink Caused by:org.apache.flink.streaming.connectors.kafka.internal.Handover$ClosedException

Date: 2019-03-28 17:16:28



My Flink job reads data from Kafka for computation. The moment the job started it threw the errors below, and I was completely baffled. I worked overtime until 9 p.m. without solving it, then came in half an hour early the next day and went through the error messages again. The details:

Error message 1:

20/12/17 09:31:07 WARN NetworkClient: [Consumer clientId=consumer-14, groupId=qa_topic_flink_group] Error connecting to node 172.16.40.233:9092 (id: -3 rack: null)
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:659)
    at org.apache.kafka.common.network.Selector.doConnect(Selector.java:278)
    at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:920)
    at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:67)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1090)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:976)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:533)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
    at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:292)
    at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1803)
    at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1771)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:77)
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:131)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:508)
    at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
    at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)

Error message 2 (the same warning was logged twice in a row):

20/12/17 09:31:27 WARN StreamTask: Error while canceling task.
org.apache.flink.streaming.connectors.kafka.internal.Handover$ClosedException
    at org.apache.flink.streaming.connectors.kafka.internal.Handover.close(Handover.java:182)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.cancel(KafkaFetcher.java:175)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.cancel(FlinkKafkaConsumerBase.java:818)
    at org.apache.flink.streaming.api.operators.StreamSource.cancel(StreamSource.java:147)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.cancelTask(SourceStreamTask.java:136)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.cancel(StreamTask.java:602)
    at org.apache.flink.runtime.taskmanager.Task$TaskCanceler.run(Task.java:1355)
    at java.lang.Thread.run(Thread.java:748)

Solution: our Kafka topic has 9 partitions, so I set the parallelism of the Flink job to 9 as well.
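The intuition behind matching parallelism to the partition count can be sketched in plain Java. This is a simplified round-robin model, not Flink's actual assignment code, but it shows why a mismatch leaves source subtasks idle or overloaded:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionAssignmentSketch {

    // Simplified round-robin assignment of Kafka partitions to Flink source subtasks.
    // (Flink's real assignment logic differs in detail but has the same shape.)
    static Map<Integer, List<Integer>> assign(int numPartitions, int parallelism) {
        Map<Integer, List<Integer>> subtasks = new HashMap<>();
        for (int s = 0; s < parallelism; s++) {
            subtasks.put(s, new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            subtasks.get(p % parallelism).add(p);
        }
        return subtasks;
    }

    public static void main(String[] args) {
        // 9 partitions, parallelism 9: exactly one partition per subtask.
        System.out.println(assign(9, 9));
        // 9 partitions, parallelism 12: subtasks 9-11 get no partitions and sit idle.
        System.out.println(assign(9, 12));
        // 9 partitions, parallelism 3: each subtask reads three partitions.
        System.out.println(assign(9, 3));
    }
}
```

With parallelism above the partition count, the extra subtasks never receive data; with parallelism below it, each subtask multiplexes several partitions, which is fine as long as the machine can keep up.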

After that, the job ran normally.

But then the problem came back. With parallelism set to 9, my data had to be split into many side output streams, and as soon as I wrote to the last side output stream, the same error blew up again. So I started experimenting: I commented out all of the side outputs and re-enabled them one by one, and every single time the last one was enabled, the job crashed.

Very frustrating! I was starting to suspect black magic.

Then inspiration struck: check the resource monitor. I suspected insufficient memory, but the job failed right at startup, before anything had accumulated in memory. At the very moment I launched the job, though, I noticed something (screenshot of the CPU usage chart, not included here): CPU utilization spiked to 100% the instant the program started.

Now the CPU became the suspect.

Sure enough, lowering the parallelism to 3 fixed it:

env.setParallelism(3);
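Since the crash correlated with the CPU pinning at 100%, a reasonable heuristic is to cap parallelism at both the partition count and the machine's core count. A minimal sketch (the partition count of 9 comes from this article; the `Math.min` capping rule is my own assumption, not something Flink applies for you):

```java
public class ParallelismChoice {

    // Pick a parallelism no larger than the topic's partition count
    // and no larger than the number of cores actually available.
    static int chooseParallelism(int numPartitions, int availableCores) {
        return Math.min(numPartitions, availableCores);
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int parallelism = chooseParallelism(9, cores); // our topic has 9 partitions
        System.out.println("parallelism = " + parallelism);
        // The chosen value would then be passed to env.setParallelism(parallelism).
    }
}
```

On the machine in this story, 3 of the available cores were enough to keep up with 9 partitions, while 9 source subtasks plus the side-output operators saturated the CPU at startup.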

After restarting the program, CPU usage looked normal (screenshot not included): no 100% spike at launch, and the job started cleanly.

Happy days.
