
Confluent Avro

This article covers usage examples, configuration options, and data type mappings for the Confluent Avro (avro-confluent) format.

Background information

The Confluent Avro format allows reading and writing Avro data based on an Avro schema. Currently, the Avro schema is derived from the table schema.

Example of use

Example of a table using raw UTF-8 string as Kafka key and Avro records registered in the Schema Registry as Kafka values:

    CREATE TABLE user_created (
      -- one column mapped to the Kafka raw UTF-8 key
      the_kafka_key STRING,
      -- a few columns mapped to the Avro fields of the Kafka value
      id STRING,
      name STRING,
      email STRING
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'user_events_example1',
      'properties.bootstrap.servers' = 'localhost:9092',
      -- UTF-8 string as Kafka keys, using the 'the_kafka_key' table column
      'key.format' = 'raw',
      'key.fields' = 'the_kafka_key',
      'value.format' = 'avro-confluent',
      'value.avro-confluent.url' = 'http://localhost:8082',
      'value.fields-include' = 'EXCEPT_KEY'
    )

We can write data into the Kafka table as follows:

    INSERT INTO user_created
    SELECT
      -- replicating the user id into a column mapped to the Kafka key
      id AS the_kafka_key,
      -- all values
      id, name, email
    FROM some_table
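
To check what was written, the same table definition can be queried directly; for example:

    SELECT the_kafka_key, id, name, email
    FROM user_created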

Example of a table with both the Kafka key and value registered as Avro records in the Schema Registry:

    CREATE TABLE user_created (
      -- one column mapped to the 'id' Avro field of the Kafka key
      kafka_key_id STRING,
      -- a few columns mapped to the Avro fields of the Kafka value
      id STRING,
      name STRING,
      email STRING
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'user_events_example2',
      'properties.bootstrap.servers' = 'localhost:9092',
      -- Watch out: schema evolution in the context of a Kafka key is almost never backward nor
      -- forward compatible due to hash partitioning.
      'key.format' = 'avro-confluent',
      'key.avro-confluent.url' = 'http://localhost:8082',
      'key.fields' = 'kafka_key_id',
      -- In this example, we want the Avro types of both the Kafka key and value to contain the field 'id'
      -- => adding a prefix to the table column associated to the Kafka key field avoids clashes
      'key.fields-prefix' = 'kafka_key_',
      'value.format' = 'avro-confluent',
      'value.avro-confluent.url' = 'http://localhost:8082',
      'value.fields-include' = 'EXCEPT_KEY',
      -- subjects have a default value since Flink 1.13, though can be overridden:
      'key.avro-confluent.subject' = 'user_events_example2-key2',
      'value.avro-confluent.subject' = 'user_events_example2-value2'
    )
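
Writing into this table works the same way as in the first example, except that the prefixed key column must also be populated; the 'kafka_key_' prefix exists only on the table column, so the Avro key record still carries the field 'id'. A minimal sketch, again assuming a hypothetical some_table source:

    INSERT INTO user_created
    SELECT
      -- this column feeds the 'id' field of the Avro Kafka key
      id AS kafka_key_id,
      -- columns mapped to the Avro fields of the Kafka value
      id, name, email
    FROM some_table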

Example of a table using the upsert-kafka connector with the Kafka value registered as an Avro record in the Schema Registry:

    CREATE TABLE user_created (
      -- one column mapped to the Kafka raw UTF-8 key
      kafka_key_id STRING,
      -- a few columns mapped to the Avro fields of the Kafka value
      id STRING,
      name STRING,
      email STRING,
      -- upsert-kafka connector requires a primary key to define the upsert behavior
      PRIMARY KEY (kafka_key_id) NOT ENFORCED
    ) WITH (
      'connector' = 'upsert-kafka',
      'topic' = 'user_events_example3',
      'properties.bootstrap.servers' = 'localhost:9092',
      -- UTF-8 string as Kafka keys
      -- We don't specify 'key.fields' in this case since it's dictated by the primary key of the table
      'key.format' = 'raw',
      -- In this example, we want the Avro types of both the Kafka key and value to contain the field 'id'
      -- => adding a prefix to the table column associated to the Kafka key field avoids clashes
      'key.fields-prefix' = 'kafka_key_',
      'value.format' = 'avro-confluent',
      'value.avro-confluent.url' = 'http://localhost:8082',
      'value.fields-include' = 'EXCEPT_KEY'
    )
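
Because the primary key drives the upsert semantics, every row written with the same kafka_key_id is interpreted downstream as an update of the previous one. A minimal write sketch, again assuming a hypothetical some_table source:

    INSERT INTO user_created
    SELECT
      -- the primary key column becomes the raw UTF-8 Kafka key
      id AS kafka_key_id,
      -- columns mapped to the Avro fields of the Kafka value
      id, name, email
    FROM some_table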

Configuration options

| Parameter | Description | Required | Forwarded | Default | Type |
| --- | --- | --- | --- | --- | --- |
| format | Specify what format to use, here should be ‘avro-confluent’. | yes | no | none | String |
| avro-confluent.basic-auth.credentials-source | Basic auth credentials source for Schema Registry | no | yes | none | String |
| avro-confluent.basic-auth.user-info | Basic auth user info for Schema Registry | no | yes | none | String |
| avro-confluent.bearer-auth.credentials-source | Bearer auth credentials source for Schema Registry | no | yes | none | String |
| avro-confluent.bearer-auth.token | Bearer auth token for Schema Registry | no | yes | none | String |
| avro-confluent.properties | Properties map that is forwarded to the underlying Schema Registry. This is useful for options that are not officially exposed via Flink config options. However, note that Flink options have higher precedence. | no | no | none | Map |
| avro-confluent.ssl.keystore.location | Location / File of SSL keystore | no | yes | none | String |
| avro-confluent.ssl.keystore.password | Password for SSL keystore | no | yes | none | String |
| avro-confluent.ssl.truststore.password | Password for SSL truststore | no | yes | none | String |
| avro-confluent.schema | The schema registered or to be registered in the Confluent Schema Registry. If no schema is provided, Flink converts the table schema to an Avro schema. The schema provided must match the table schema. | no | no | none | String |
| avro-confluent.subject | The Confluent Schema Registry subject under which to register the schema used by this format during serialization. By default, kafka and upsert-kafka connectors use <topic_name>-value or <topic_name>-key as the default subject name if this format is used as the value or key format. But for other connectors (e.g. filesystem), the subject option is required when used as sink. | no | yes | none | String |
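
For example, if the Schema Registry requires basic authentication, the credential options above can be set per key or value format in the WITH clause. The following is a minimal sketch with a hypothetical topic name and placeholder credentials:

    CREATE TABLE user_created_secured (
      id STRING,
      name STRING,
      email STRING
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'user_events_secured',
      'properties.bootstrap.servers' = 'localhost:9092',
      'value.format' = 'avro-confluent',
      'value.avro-confluent.url' = 'http://localhost:8082',
      -- basic auth against the Schema Registry (placeholder credentials)
      'value.avro-confluent.basic-auth.credentials-source' = 'USER_INFO',
      'value.avro-confluent.basic-auth.user-info' = 'user:password',
      -- override the default '<topic_name>-value' subject if needed
      'value.avro-confluent.subject' = 'user_events_secured-value'
    )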

Data type mapping

Apache Flink uses the table schema to derive the Avro reader schema during deserialization and the Avro writer schema during serialization, unless an explicit schema is provided via the avro-confluent.schema option (in which case it must match the table schema). See [Apache Avro Format](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/avro/#data-type-mapping) for the mapping between Avro and Flink DataTypes.

In addition to the types listed there, Flink supports reading and writing nullable types. Flink maps a nullable type to the Avro union(something, null), where something is the Avro type converted from the Flink type.
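
As a small illustration of that mapping, the following sketch (hypothetical table and topic names) declares one NOT NULL column and one nullable column; the derived Avro value schema would use a plain string for the former and a union of string and null for the latter:

    CREATE TABLE nullability_example (
      -- NOT NULL column: derived as a plain Avro "string"
      id STRING NOT NULL,
      -- nullable column: derived as an Avro union of "string" and "null"
      email STRING
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'nullability_example',
      'properties.bootstrap.servers' = 'localhost:9092',
      'value.format' = 'avro-confluent',
      'value.avro-confluent.url' = 'http://localhost:8082'
    )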

You can refer to the Avro Specification for more information about Avro types.

note

This page is derived from the official Apache Flink® documentation.

Refer to the Credits page for more information.