Why is Kafka JDBC Connect inserting data as BLOB and not varchar

Date: 2023-08-22

This article looks at why Kafka JDBC Connect inserts data as BLOB instead of varchar and how to approach the problem. It should be a useful reference for anyone hitting the same issue.

Problem description

I am using a Java producer to insert data into my Kafka topic. Then I use the Kafka JDBC sink connector to insert the data into my Oracle table. Below is my producer code.

package producer.serialized.avro;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Sender4 {

    public static void main(String[] args) {

        // Avro schema: a Flight record with three string fields.
        String flightSchema = "{\"type\":\"record\",\"name\":\"Flight\","
                + "\"fields\":[{\"name\":\"flight_id\",\"type\":\"string\"},"
                + "{\"name\":\"flight_to\",\"type\":\"string\"},"
                + "{\"name\":\"flight_from\",\"type\":\"string\"}]}";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put("schema.registry.url", "http://192.168.0.1:8081");

        KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props);

        Schema schema = new Schema.Parser().parse(flightSchema);

        GenericRecord avroRecord = new GenericData.Record(schema);
        avroRecord.put("flight_id", "myflight");
        avroRecord.put("flight_to", "QWE");
        avroRecord.put("flight_from", "RTY");

        ProducerRecord<String, GenericRecord> record = new ProducerRecord<>("topic9", avroRecord);
        producer.send(record);

        // Close the producer so buffered records are flushed before the JVM exits.
        producer.close();
    }
}

Below are my Kafka Connect properties:

                name=test-sink-6
                connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
                tasks.max=1
                topics=topic9
                connection.url=jdbc:oracle:thin:@192.168.0.1:1521:usera
                connection.user=usera
                connection.password=usera
                auto.create=true
                table.name.format=FLIGHTS4
                key.converter=io.confluent.connect.avro.AvroConverter
                key.converter.schema.registry.url=http://192.168.0.1:8081
                value.converter=io.confluent.connect.avro.AvroConverter
                value.converter.schema.registry.url=http://192.168.0.1:8081
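
For context: with auto.create=true the sink connector derives a CREATE TABLE statement from the record's Avro schema using its database dialect's type mapping. For the Flight schema above, the auto-created Oracle table therefore ends up roughly like this (an illustrative sketch only; the exact DDL, quoting, and CLOB vs. NCLOB choice depend on the connector version):

    -- Illustrative sketch of the auto-generated Oracle DDL: every Avro
    -- string field goes through the dialect's STRING mapping, hence CLOB.
    CREATE TABLE FLIGHTS4 (
      flight_id   CLOB NOT NULL,
      flight_to   CLOB NOT NULL,
      flight_from CLOB NOT NULL
    );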
                

From my schema, I am expecting the values inserted into my Oracle table to be varchar2. I created a table with 3 varchar2 columns, but when I started my connector, nothing was inserted. I then deleted the table and ran the connector with table auto-create mode on. This time the table was created and values were inserted, but the column data type is CLOB. I want it to be varchar2, since that uses less storage.

Why is this happening, and how can I fix it? Thank you.

Recommended answer

It looks like Kafka Connect's STRING schema type is mapped to Oracle's NCLOB:

                <table border="1">
                <tr>
                <th>Schema Type</th><th>MySQL</th><th>Oracle</th><th>PostgreSQL</th><th>SQLite</th>
                </tr>
                <tr>
                <td>INT8</td><td>TINYINT</td><td>NUMBER(3,0)</td><td>SMALLINT</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>INT16</td><td>SMALLINT</td><td>NUMBER(5,0)</td><td>SMALLINT</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>INT32</td><td>INT</td><td>NUMBER(10,0)</td><td>INT</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>INT64</td><td>BIGINT</td><td>NUMBER(19,0)</td><td>BIGINT</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>FLOAT32</td><td>FLOAT</td><td>BINARY_FLOAT</td><td>REAL</td><td>REAL</td>
                </tr>
                <tr>
                <td>FLOAT64</td><td>DOUBLE</td><td>BINARY_DOUBLE</td><td>DOUBLE PRECISION</td><td>REAL</td>
                </tr>
                <tr>
                <td>BOOLEAN</td><td>TINYINT</td><td>NUMBER(1,0)</td><td>BOOLEAN</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>STRING</td><td>VARCHAR(256)</td><td>NCLOB</td><td>TEXT</td><td>TEXT</td>
                </tr>
                <tr>
                <td>BYTES</td><td>VARBINARY(1024)</td><td>BLOB</td><td>BYTEA</td><td>BLOB</td>
                </tr>
                <tr>
                <td>'Decimal'</td><td>DECIMAL(65,s)</td><td>NUMBER(*,s)</td><td>DECIMAL</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>'Date'</td><td>DATE</td><td>DATE</td><td>DATE</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>'Time'</td><td>TIME(3)</td><td>DATE</td><td>TIME</td><td>NUMERIC</td>
                </tr>
                <tr>
                <td>'Timestamp'</td><td>TIMESTAMP(3)</td><td>TIMESTAMP</td><td>TIMESTAMP</td><td>NUMERIC</td>
                </tr>
                </table>

Source: https://www.ibm.com/support/knowledgecenter/en/SSPT3X_4.2.5/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_kafka_jdbc_sink.html

                https://docs.confluent.io/current/connect/connect-jdbc/docs/sink_connector.html

Update

The OracleDialect class (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/dialect/OracleDialect.java) has the CLOB value hardcoded, and simply extending it with your own class to change that mapping will not help, because the dialect type is resolved in a static method in JdbcSinkTask (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/JdbcSinkTask.java):

                final DbDialect dbDialect = DbDialect.fromConnectionString(config.connectionUrl);
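
To see why subclassing does not help, here is a rough paraphrase of the dialect resolution in the old kafka-connect-jdbc sink (a sketch of the linked source, not an exact copy; class and method names are taken from the repository above):

    // Paraphrased sketch (not an exact copy) of the old
    // DbDialect.fromConnectionString logic: the dialect is chosen purely
    // from the JDBC URL prefix, so a "jdbc:oracle:..." connection always
    // gets the stock OracleDialect, whose type mapping hardcodes
    // STRING -> CLOB. A custom subclass is never consulted.
    public static DbDialect fromConnectionString(final String url) {
        if (url.startsWith("jdbc:oracle:")) {
            return new OracleDialect();
        }
        if (url.startsWith("jdbc:mysql:")) {
            return new MySqlDialect();
        }
        // ... other vendor prefixes elided ...
        throw new ConnectException("Not a valid JDBC URL: " + url);
    }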
                

This concludes the article on why Kafka JDBC Connect inserts data as BLOB instead of varchar; hopefully the recommended answer is helpful.
