In this post I'd like to share how to collect logs from a specified network port and write them to both console output and HDFS. I hope you get something useful out of it; let's dig in!
Requirement 1:
Collect logs from a specified network port and write them to both the console and HDFS.
Load algorithms for sink groups
Failover: you can assign each sink a priority; the higher the number, the higher the priority.
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000
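With this configuration, events go to k2 first (priority 10 beats priority 5); if k2 fails, the processor fails over to k1 and temporarily penalizes k2, backing off for at most maxpenalty milliseconds before retrying it.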
Round-robin across all sinks:
a1.sinkgroups.g1.processor.type = load_balance
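Besides the processor type, the load-balancing processor also accepts a selection strategy and an optional backoff for failed sinks. A minimal sketch of a full sink group using these properties (round_robin is the default selector; random is the other built-in choice):

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
# pick sinks in rotation; set to random for random selection
a1.sinkgroups.g1.processor.selector = round_robin
# temporarily blacklist a failed sink instead of retrying it immediately
a1.sinkgroups.g1.processor.backoff = true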
# Collect logs from a specified network port to console output and HDFS
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444

# Describe the sinks
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinks.k1.type = logger
a1.sinks.k2.type = hdfs
a1.sinks.k2.hdfs.path = hdfs://192.168.0.129:9000/user/hadoop/flume
a1.sinks.k2.hdfs.batchSize = 10
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.writeFormat = Text

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sinks to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
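To run this end to end, save the configuration to a file and start an agent against it, then send the netcat source a few test lines. A minimal sketch, assuming the file was saved as $FLUME_HOME/conf/load-balance.conf (the file name is an assumption; adjust the paths for your installation):

# start the agent named a1 with the configuration above
flume-ng agent \
  --name a1 \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/load-balance.conf \
  -Dflume.root.logger=INFO,console

# in another terminal, type test lines into the netcat source on port 44444
telnet localhost 44444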
Check the logger output:
2018-08-10 18:58:39,659 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:169)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/0:0:0:0:0:0:0:0:44444]
2018-08-10 18:59:17,723 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 7A 6F 75 72 63 20 6F 6B 0D zourc ok. }
2018-08-10 19:00:35,744 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 61 73 64 66 0D asdf. }
2018-08-10 19:00:35,774 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
2018-08-10 19:00:36,086 (SinkRunner-PollingRunner-LoadBalancingSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://192.168.0.129:9000/user/hadoop/flume/FlumeData.1533942035775.tmp
Check the HDFS output:
[hadoop@hadoop001 flume]$ hdfs dfs -text hdfs://192.168.0.129:9000/user/hadoop/flume/*
18/08/10 19:14:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
zourc
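Note that the load-balancing processor delivers each event to exactly one sink in the group, so a given line may show up in the console output or in the HDFS files, but not in both; don't be surprised if the two outputs differ.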
Having read this far, you should now have a basic understanding of how to collect logs from a specified network port and write them to both the console and HDFS. If you'd like to learn more, feel free to follow the 编程网 featured channel. Thanks for reading!