
The Kafka API

Date: 2023-04-30

First, add the dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.3.1</version>
</dependency>

Producer

Code:

package com.test;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;

public class MyProducer {
    public static void main(String[] args) {
        // Configuration for the Kafka producer
        Properties properties = new Properties();
        // Kafka broker to connect to
        properties.put("bootstrap.servers", "spark-local:9092");
        // Acknowledgement level
        properties.put("acks", "all");
        // Number of retries
        properties.put("retries", 3);
        // Batch size
        properties.put("batch.size", 16384);
        // Linger time
        properties.put("linger.ms", 10);
        // RecordAccumulator buffer size
        properties.put("buffer.memory", 33554432);
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("word", "up" + i));
        }
        producer.close();
    }
}

(Most of these settings have default values and can be omitted; the connection address and the key/value serializers are required and cannot be omitted.)
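As a side note, the raw string keys above can be replaced by the constants defined in ProducerConfig, matching the ConsumerConfig style used in the consumer example later. A minimal sketch of the same configuration:

import org.apache.kafka.clients.producer.ProducerConfig;

// Same settings as above, using ProducerConfig constants instead of raw strings
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "spark-local:9092");
properties.put(ProducerConfig.ACKS_CONFIG, "all");
properties.put(ProducerConfig.RETRIES_CONFIG, 3);
properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
properties.put(ProducerConfig.LINGER_MS_CONFIG, 10);
properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

Using the constants avoids typos in the config keys, since a misspelled constant fails at compile time while a misspelled string only produces a warning at runtime.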

Before running the code, ZooKeeper and Kafka need to be started:
zkServer.sh start
bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties &
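If automatic topic creation is disabled on the broker, the word topic can also be created from Java using the AdminClient that ships in the same kafka-clients artifact. A minimal sketch (the class name and the partition/replication settings are assumptions for a single-broker test setup):

package com.test;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "spark-local:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // One partition, replication factor 1 -- enough for a single-broker test
            NewTopic topic = new NewTopic("word", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}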

Then start a console consumer:
bin/kafka-console-consumer.sh --bootstrap-server spark-local:9092 --topic word

Run the producer, and the console consumer prints the ten messages (up0 through up9).

Here we can use a callback to print each record's partition, offset, topic, and other metadata to the console.

for (int i = 0; i < 10; i++) {
    producer.send(new ProducerRecord<>("word", "up" + i), new Callback() {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e == null) {
                // Metadata is only available when the send succeeded
                System.out.println(recordMetadata.offset() + "--" + recordMetadata.partition());
            } else {
                e.printStackTrace();
            }
        }
    });
}

Replace the original for loop with the code above and run it again; each send now logs its offset and partition to the console.
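Since Callback is a functional interface with a single onCompletion method, the same logic can also be written more compactly as a lambda. A sketch of the loop above in that style:

for (int i = 0; i < 10; i++) {
    producer.send(new ProducerRecord<>("word", "up" + i), (metadata, exception) -> {
        if (exception == null) {
            System.out.println(metadata.offset() + "--" + metadata.partition());
        } else {
            exception.printStackTrace();
        }
    });
}

send() also returns a Future<RecordMetadata>, so chaining .get() onto it turns the send into a blocking, synchronous one when that is needed.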

Consumer

Code:

package com.test.comsumer;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class MyConsumer {
    public static void main(String[] args) {
        Properties properties = new Properties();
        // Cluster address
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "spark-local:9092");
        // Enable automatic offset commits
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        // Auto-commit interval
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        // Key/value deserializers
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Consumer group
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "bigdata");

        // Create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        // Subscribe to topics
        consumer.subscribe(Arrays.asList("word", "MyTopic"));

        while (true) {
            // Fetch a batch of records
            ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(100));
            // Parse and print each record
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println(consumerRecord.key() + "--" + consumerRecord.value());
            }
        }
        // consumer.close();  // never reached while the loop above runs forever
    }
}

Run the producer at the same time, and the consumer prints the keys and values of the records it receives.
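The consumer above relies on auto-commit (enable.auto.commit = true), so offsets are committed in the background every second regardless of whether processing finished. If offsets should only be committed after records are actually processed, a common variant (a minimal sketch, not from the original example) disables auto-commit and calls commitSync() per batch:

// Disable auto-commit so offsets are committed explicitly
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
consumer.subscribe(Arrays.asList("word"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record.key() + "--" + record.value());
    }
    // Commit only after the whole batch has been processed, so a crash
    // before this line leads to re-delivery rather than lost records
    consumer.commitSync();
}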

 

