尚硅谷 Big Data Technology: Kafka, Chapter 7 Extensions
7.1 Comparing Kafka and Flume
In production it is important to be clear about the respective roles of the streaming data collection frameworks Flume and Kafka:
Flume (developed by Cloudera):
suited to scenarios with many producers;
suited to scenarios with few downstream consumers;
suited to workloads with lower data-reliability requirements;
well suited to integration with the Hadoop ecosystem.
Kafka (developed by LinkedIn):
suited to scenarios with many downstream consumers;
suited to workloads with higher data-reliability requirements; supports replication.
A commonly used model is therefore:
online data --> Flume --> Kafka --> Flume (add or drop this stage as the scenario requires) --> HDFS
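As an illustration of the optional downstream Flume stage in this model, the following is a minimal sketch of a Flume agent that reads from a Kafka topic and writes to HDFS. The agent name a2, the topic first, the consumer group id flume-hdfs, the broker list, and the HDFS path are assumptions for illustration only and should be adapted to your environment.
# downstream agent (sketch): Kafka source --> memory channel --> HDFS sink
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Kafka source: consumes the topic that the upstream agent produced to
a2.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a2.sources.r1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a2.sources.r1.kafka.topics = first
a2.sources.r1.kafka.consumer.group.id = flume-hdfs
# HDFS sink: the path below is a placeholder
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = /origin_data/kafka/first/%Y-%m-%d
a2.sinks.k1.hdfs.useLocalTimeStamp = true
a2.sinks.k1.hdfs.fileType = DataStream
# channel and bindings
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1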
7.2 Integrating Flume with Kafka
1) Configure Flume (flume-kafka.conf):
# define
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F -c +0 /opt/module/datas/flume.log
a1.sources.r1.shell = /bin/bash -c
# sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
2) Start the Kafka consumer in IDEA (a minimal stand-alone consumer sketch is shown after this list).
3) From the Flume root directory, start Flume:
$ bin/flume-ng agent -c conf/ -n a1 -f jobs/flume-kafka.conf
4) Append data to /opt/module/datas/flume.log and check what the Kafka consumer receives:
$ echo hello >> /opt/module/datas/flume.log
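The IDEA consumer referred to in step 2) is the consumer program from an earlier chapter and is not reproduced here. As a stand-in, the sketch below is a minimal Java consumer that subscribes to the first topic written by the Flume sink; the class name FlumeKafkaConsumer and the group id flume-test are arbitrary choices for this example, and it assumes a kafka-clients 2.x or later dependency on the classpath.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FlumeKafkaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // brokers match the flume-kafka.conf above
        props.put("bootstrap.servers", "hadoop102:9092,hadoop103:9092,hadoop104:9092");
        // "flume-test" is an arbitrary group id chosen for this example
        props.put("group.id", "flume-test");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe to the topic the Flume Kafka sink writes to
            consumer.subscribe(Collections.singletonList("first"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d, offset=%d, value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}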
7.3 Kafka Configuration Reference
7.3.1 Broker Configuration
Property | Default | Description
broker.id | | Required; the unique identifier of the broker.
log.dirs | /tmp/kafka-logs | Directory (or directories) in which Kafka stores its data. Multiple directories may be listed, separated by commas; a newly created partition is placed in the directory that currently holds the fewest partitions.
port | 9092 | Port on which the broker accepts client connections.
zookeeper.connect | null | ZooKeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; for reliability, listing all of them is recommended. This setting may also include a ZooKeeper chroot path under which all data of this Kafka cluster is stored; to keep the cluster separate from other applications, specifying such a path is recommended, in the form hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Note that consumers must use the same value for this parameter.
message.max.bytes | 1000000 | Maximum message size the server will accept. Make sure the consumer's maximum fetch size is at least this large, otherwise messages larger than the consumer's limit cannot be consumed.
num.io.threads | 8 | Number of I/O threads the server uses to process read and write requests; this should be at least the number of disks on the server.
queued.max.requests | 500 | Size of the request queue for the I/O threads; once the queue is full, the network threads stop accepting new requests.
socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUFF buffer the server prefers for socket connections.
socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUFF buffer the server prefers for socket connections.
socket.request.max.bytes | 100 * 1024 * 1024 | Maximum request size the server will accept, used to protect against out-of-memory errors; it should be smaller than the Java heap size.
num.partitions | 1 | Default number of partitions for a topic created without an explicit partition count; a value such as 5 is recommended.
log.segment.bytes | 1024 * 1024 * 1024 | Maximum size of a segment file; once exceeded, a new segment is created automatically. Can be overridden per topic.
log.roll.{ms,hours} | 24 * 7 hours | Maximum time before a new segment file is rolled. Can be overridden per topic.
log.retention.{ms,minutes,hours} | 7 days | Retention period of Kafka segment logs; logs older than this are deleted. Can be overridden per topic. With large data volumes, lowering this value is recommended.
log.retention.bytes | -1 | Maximum size of each partition; once exceeded, old partition data is deleted. Note that this limit applies per partition, not per topic. Can be overridden at the topic level.
log.retention.check.interval.ms | 5 minutes | Interval at which the retention (deletion) policy is checked.
auto.create.topics.enable | true | Whether topics are created automatically; setting this to false is recommended so that topics are managed strictly and producers cannot create topics by writing to a wrong topic name.
default.replication.factor | 1 | Default replication factor; a value of 2 is recommended.
replica.lag.time.max.ms | 10000 | If the leader receives no fetch request from a follower within this window, it removes the follower from the ISR (in-sync replicas).
replica.lag.max.messages | 4000 | If a replica falls behind the leader by more than this many messages, the leader removes it from the ISR.
replica.socket.timeout.ms | 30 * 1000 | Socket timeout for requests sent by replicas to the leader.
replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data.
replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.
replica.fetch.wait.max.ms | 500 | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader.
num.replica.fetchers | 1 | Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker.
fetch.purgatory.purge.interval.requests | 1000 | The purge interval (in number of requests) of the fetch request purgatory.
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the broker fails to send a heartbeat to ZooKeeper within this time, ZooKeeper considers the node dead. Too low a value makes brokers easy to mark as dead; too high a value delays the detection of real failures.
zookeeper.connection.timeout.ms | 6000 | Timeout for clients connecting to ZooKeeper.
zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower may lag behind the ZooKeeper leader.
controlled.shutdown.enable | true | Enables controlled broker shutdown. If enabled, the broker moves all of its partition leaders to other brokers before shutting itself down; enabling it is recommended for cluster stability.
auto.leader.rebalance.enable | true | If this is enabled the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the "preferred" replica for each partition if it is available.
leader.imbalance.per.broker.percentage | 10 | The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value per broker.
leader.imbalance.check.interval.seconds | 300 | The frequency with which to check for leader imbalance.
offset.metadata.max.bytes | 4096 | The maximum amount of metadata to allow clients to save with their offsets.
connections.max.idle.ms | 600000 | Idle connection timeout: the server socket processor threads close connections that have been idle longer than this.
num.recovery.threads.per.data.dir | 1 | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
unclean.leader.election.enable | true | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.
delete.topic.enable | false | Whether topic deletion is enabled; setting this to true is recommended.
offsets.topic.num.partitions | 50 | The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, a higher setting is recommended for production (e.g., 100-200).
offsets.retention.minutes | 1440 | Offsets older than this age are marked for deletion. The actual purge occurs when the log cleaner compacts the offsets topic.
offsets.retention.check.interval.ms | 600000 | The frequency at which the offset manager checks for stale offsets.
offsets.topic.replication.factor | 3 | The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended in order to ensure higher availability. If the offsets topic is created while fewer brokers than the replication factor are available, it will be created with fewer replicas.
offsets.topic.segment.bytes | 104857600 | Segment size for the offsets topic. Since it uses a compacted topic, this should be kept relatively low in order to facilitate faster log compaction and loads.
offsets.load.buffer.size | 5242880 | An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes the leader for an offsets topic partition). This setting is the batch size (in bytes) used when reading from the offsets segments while loading offsets into the offset manager's cache.
offsets.commit.required.acks | -1 | The number of acknowledgements required before an offset commit is accepted. This is similar to the producer's acknowledgement setting. In general, the default should not be overridden.
offsets.commit.timeout.ms | 5000 | The offset commit is delayed until this timeout expires or the required number of replicas have received the offset commit. This is similar to the producer request timeout.
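Putting several of the recommendations from the table together, a server.properties fragment for one broker might look like the sketch below. The broker id, hostnames, log directory, chroot path, and retention value are assumptions chosen for illustration and should be adapted to your cluster.
# unique id of this broker (every broker in the cluster needs a different value)
broker.id=0
# where Kafka stores its log segments; adjust to your environment
log.dirs=/opt/module/kafka/datas
# ZooKeeper connection string with a chroot path, shared by all brokers and consumers
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
# recommendations taken from the table above
num.partitions=5
default.replication.factor=2
auto.create.topics.enable=false
delete.topic.enable=true
# example of lowering the default 7-day retention for large data volumes
log.retention.hours=72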