DDL script file
Learn how to create the Data Definition Language (DDL) file corresponding to the workflow tasks of a Table to Table data operation.
🗺️ Overview
A SQL workflow is a sequence of tasks that populate tables, either in parallel or sequentially. The DDL file provides the instructions to create a destination table.
⎨⎬ Creation task
Once the SQL queries are ready, you need to use one or several DDL files to create the destination BigQuery tables that will contain the output data.
Parameters
Parameter | Description |
---|---|
`bq_table_description`<br>type: string<br>mandatory | Description of the BigQuery table. |
`bq_table_schema`<br>type: array<br>mandatory | BigQuery table schema, i.e. a list of fields corresponding to the columns the table will contain. Each field is described with three attributes: `name`, `type` and `mode`. |
`bq_table_clustering_fields`<br>type: array<br>optional | List of fields used when clustering is enabled. The table data will be automatically organized based on the contents of the fields you specify, and their order determines the sort order of the data. If this parameter is set, time partitioning will be automatically enabled on the table. If you don't set the partitioning parameters, default values will be used. |
`bq_table_timepartitioning_field`<br>type: string<br>optional | If this parameter is set, the table will be partitioned by this field. If not, the table will be partitioned by the pseudo column `_PARTITIONTIME`. The field must be a top-level `TIMESTAMP` or `DATE` field, and its mode must be `NULLABLE` or `REQUIRED`. (Refer to the BigQuery documentation for more information.) Note: you can set this parameter to a field that equals `DATE('')`. Then, if you relaunch an execution with a partition, and if `default_write_disposition` is set to "WRITE_APPEND" in the JSON configuration file, Tailer will check whether the corresponding partition already exists in the table before writing the data. |
`bq_table_timepartitioning_expiration_ms`<br>type: integer<br>optional | Number of milliseconds for which to keep the storage for a partition. (Refer to the BigQuery documentation for more information.) |
`bq_table_timepartitioning_require_partition_filter`<br>type: boolean<br>optional | If set to `true`, queries over the partitioned table must specify a partition filter that can be used for partition elimination. (Refer to the BigQuery documentation for more information.) |
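The parameters above combine into a single JSON DDL file. The sketch below is illustrative only: the table description, field names and values are invented, and your Tailer configuration may require additional keys.

```json
{
  "bq_table_description": "Daily sales aggregated per store",
  "bq_table_schema": [
    { "name": "sale_date", "type": "DATE", "mode": "REQUIRED" },
    { "name": "store_id", "type": "STRING", "mode": "REQUIRED" },
    { "name": "total_amount", "type": "NUMERIC", "mode": "NULLABLE" }
  ],
  "bq_table_clustering_fields": ["store_id"],
  "bq_table_timepartitioning_field": "sale_date",
  "bq_table_timepartitioning_expiration_ms": 31536000000,
  "bq_table_timepartitioning_require_partition_filter": true
}
```

Here `sale_date` serves both as the partitioning field (a top-level `DATE` field, as required) and as the implicit first sort key once clustering on `store_id` is applied within each partition.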
Data types
Tailer Platform supports the following data types.
Numeric types
Name | Description |
---|---|
`integer` | Integers are numeric values that do not have fractional components. They range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. |
`float` | Floating-point values are approximate numeric values with fractional components. |
`numeric` | This data type represents decimal values with 38 decimal digits of precision and 9 decimal digits of scale. (Precision is the number of digits that the number contains. Scale is how many of these digits appear after the decimal point.) It is particularly useful for financial calculations. |
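To illustrate why exact decimal types suit financial calculations, floating-point arithmetic accumulates binary rounding error that exact decimals avoid. A small illustrative BigQuery query (the column aliases are invented):

```sql
SELECT
  -- FLOAT64 is binary floating point: 0.1 + 0.2 is not exactly 0.3.
  CAST(0.1 AS FLOAT64) + CAST(0.2 AS FLOAT64) = CAST(0.3 AS FLOAT64) AS float_equal,
  -- NUMERIC is exact decimal: the comparison holds.
  CAST(0.1 AS NUMERIC) + CAST(0.2 AS NUMERIC) = CAST(0.3 AS NUMERIC) AS numeric_equal;
-- float_equal: false, numeric_equal: true
```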
Boolean type
Name | Description |
---|---|
`boolean` | This data type supports the `true` and `false` values. |
String type
Name | Description |
---|---|
`string` | Variable-length character data. When converting data from string to a different data type, make sure to use `CAST` or `SAFE_CAST`. |
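One way to guard string conversions in BigQuery is `SAFE_CAST`, which returns `NULL` instead of raising a query error when a value cannot be converted. A short illustrative query:

```sql
SELECT
  SAFE_CAST('42' AS INT64)  AS ok_value,   -- 42
  SAFE_CAST('abc' AS INT64) AS bad_value;  -- NULL instead of a query error
```

This is especially useful in Table to Table tasks fed by loosely typed source data, where a single malformed value would otherwise fail the whole query.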
Bytes type
Name | Description |
---|---|
`bytes` | Variable-length binary data. This data type is rarely used but can be useful for characters with unusual encoding. |
Time types
Only the `date`, `datetime` and `timestamp` data types (not `time`) allow table partitioning.
As time zone management is difficult with BigQuery, prefer the UTC format.
Name | Description |
---|---|
`date` | This data type represents a calendar date. It includes the year, month, and day. |
`time` | This data type represents a time, as might be displayed on a watch, independent of a specific date. It includes the hour, minute, second, and subsecond. |
`datetime` | This data type represents a date and time, as they might be displayed on a calendar or clock. It includes the year, month, day, hour, minute, second, and subsecond. |
`timestamp` | This data type represents an absolute point in time, with microsecond precision. |
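To illustrate the UTC recommendation above: a `timestamp` stores an absolute instant, and BigQuery displays it in UTC regardless of the offset used in the input literal. An illustrative query:

```sql
-- The same instant written with a +02:00 offset is stored and shown in UTC.
SELECT TIMESTAMP('2024-06-01 12:00:00+02') AS ts_utc;
-- ts_utc: 2024-06-01 10:00:00 UTC
```

Writing all source timestamps in UTC avoids ambiguity when partitioned tables are filtered by day boundaries.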