Avro

This article describes the usage examples, configuration options, and type mappings of the Avro format.

Background information

The Avro format allows reading and writing Avro data based on an Avro schema. Currently, the Avro schema is derived from the table schema.

Usage example

    CREATE TABLE user_behavior (
      user_id BIGINT,
      item_id BIGINT,
      category_id BIGINT,
      behavior STRING,
      ts TIMESTAMP(3)
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'user_behavior',
      'properties.bootstrap.servers' = 'localhost:9092',
      'properties.group.id' = 'testGroup',
      'format' = 'avro'
    );

Configuration options

| Parameter | Description | Required | Default | Type |
| --- | --- | --- | --- | --- |
| format | The format to use for the declaration. To use the Avro format, set the value to avro. | Yes | none | String |
| avro.codec | The codec used for Avro compression. This parameter takes effect only when the connector is Filesystem. Valid values: snappy (default), null, deflate, bzip2, and xz. | No | none | String |
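For example, avro.codec can be set on a Filesystem sink to compress the Avro files it writes. The following sketch illustrates this; the table name, columns, and output path are hypothetical, and only the 'format' and 'avro.codec' options come from the table above:

    CREATE TABLE compressed_sink (   -- hypothetical sink table
      user_id BIGINT,
      behavior STRING
    ) WITH (
      'connector' = 'filesystem',
      'path' = 'file:///tmp/avro-output',  -- hypothetical output path
      'format' = 'avro',
      'avro.codec' = 'snappy'              -- Avro compression codec
    );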

Type mapping

| Flink SQL type | Avro type |
| --- | --- |
| CHAR / VARCHAR / STRING | string |
| BOOLEAN | boolean |
| BINARY / VARBINARY | bytes |
| DECIMAL | fixed (a decimal number with precision) |
| TINYINT | int |
| SMALLINT | int |
| INT | int |
| BIGINT | long |
| FLOAT | float |
| DOUBLE | double |
| DATE | int (date) |
| TIME | int (the time in milliseconds) |
| TIMESTAMP | long (the time in milliseconds) |
| ARRAY | array |
| MAP (The key must be of type STRING, CHAR, or VARCHAR.) | map |
| MULTISET (The element must be of type STRING, CHAR, or VARCHAR.) | map |
| ROW | record |

In addition to the above types, Flink supports reading and writing nullable types. Flink maps a nullable type to Avro union(something, null), where something is the Avro type converted from the corresponding Flink type.
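The union mapping can be illustrated with a small Avro schema sketch. Assuming a table with a single nullable column declared as behavior STRING, the derived Avro field would resemble the following (the record and field names here are illustrative, not produced by any specific Flink version):

    {
      "type": "record",
      "name": "record",
      "fields": [
        {"name": "behavior", "type": ["string", "null"]}
      ]
    }

A non-null column (for example, behavior STRING NOT NULL) would map to the plain Avro type string without the union.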

note

For information about Avro types, see the Avro specification.

note

This page is derived from the official Apache Flink® documentation.

Refer to the Credits page for more information.