First, import the dependencies.
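For a Maven project, the only dependency needed is the kafka-clients artifact; a minimal sketch (the version below is an assumption, use the release that matches your cluster):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <!-- assumed version; pick the one matching your broker -->
    <version>0.11.0.0</version>
</dependency>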
Producer code:
package com.test;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;

public class MyProducer {
    public static void main(String[] args) {
        // Kafka producer configuration
        Properties properties = new Properties();
        // Kafka broker to connect to
        properties.put("bootstrap.servers", "spark-local:9092");
        // ack acknowledgement level
        properties.put("acks", "all");
        // Number of retries
        properties.put("retries", 3);
        // Batch size in bytes
        properties.put("batch.size", 16384);
        // How long to wait before sending a batch
        properties.put("linger.ms", 10);
        // RecordAccumulator buffer size in bytes
        properties.put("buffer.memory", 33554432);
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        // The original snippet was cut off here; a minimal completion that sends
        // ten messages to the "word" topic (the payload text is assumed)
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("word", "message-" + i));
        }
        producer.close();
    }
}
(All of these configuration settings have default values and can be omitted, except for the bootstrap servers address and the key/value serializers, which are required.)
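As an aside, the string keys above can be replaced with the constants defined in org.apache.kafka.clients.producer.ProducerConfig, mirroring the ConsumerConfig style used in the consumer code below, which avoids typos in the key names. A minimal sketch:

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "spark-local:9092");
properties.put(ProducerConfig.ACKS_CONFIG, "all");
properties.put(ProducerConfig.RETRIES_CONFIG, 3);
properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
properties.put(ProducerConfig.LINGER_MS_CONFIG, 10);
properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");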
Before running the code, start ZooKeeper and Kafka:
zkServer.sh start
bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties &
Then start a console consumer:
bin/kafka-console-consumer.sh --zookeeper spark-local:2181 --topic word
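Note: the --zookeeper option only works with older Kafka releases; from Kafka 2.0 on, the console consumer connects to the broker directly, e.g. bin/kafka-console-consumer.sh --bootstrap-server spark-local:9092 --topic word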
Run the producer, and the ten messages appear in the console consumer started above.
Here we can use a callback to print each record's partition, offset, topic, and other metadata to the console.
for (int i = 0; i < 10; i++) {
    producer.send(new ProducerRecord<>("word", "message-" + i), new Callback() {
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception == null) {
                // Print where the record landed: topic, partition, offset
                System.out.println(metadata.topic() + " - " + metadata.partition() + " - " + metadata.offset());
            } else {
                exception.printStackTrace();
            }
        }
    });
}
Replace the for loop in the producer with the version above and run it again; this time the topic, partition, and offset of each record are printed to the console.
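Since Callback is a single-method interface, the same loop can also be written with a Java 8 lambda; a minimal equivalent sketch:

for (int i = 0; i < 10; i++) {
    producer.send(new ProducerRecord<>("word", "message-" + i), (metadata, exception) -> {
        if (exception == null) {
            System.out.println(metadata.topic() + " - " + metadata.partition() + " - " + metadata.offset());
        } else {
            exception.printStackTrace();
        }
    });
}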
Consumer code:
package com.test.comsumer;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class MyConsumer {
    public static void main(String[] args) {
        Properties properties = new Properties();
        // Cluster address
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "spark-local:9092");
        // Enable auto commit
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        // Auto commit interval
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        // Key/value deserializers
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Consumer group
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "bigdata");

        // Create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        // The original snippet was cut off here; a minimal completion that
        // subscribes to the "word" topic and polls in a loop
        consumer.subscribe(Arrays.asList("word"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.key() + " - " + record.value());
            }
        }
    }
}
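If ENABLE_AUTO_COMMIT_CONFIG is set to false instead, offsets have to be committed by hand; a minimal sketch of the poll loop with a synchronous commit (everything else in the class stays the same):

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.key() + " - " + record.value());
    }
    // Block until the offsets returned by this poll are committed
    consumer.commitSync();
}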
Run the producer process at the same time, and the consumer prints each record it receives.