[V3] Table to Storage configuration file

This is the description of the JSON configuration file of a Table to Storage data operation.

The configuration file is in JSON format. It contains the following sections:

  • Global parameters: General information about the data operation. Here you can specify default values for parameters that will apply to all tasks, unless the parameter is overridden in the task description.

  • Task parameters: The list of export tasks to execute. Optionally, each task can include a copy step that creates a BigQuery table containing the result of the extraction.

👁️‍🗨️ Example

Here is an example of a TTS configuration file:

{
    "$schema": "http://jsonschema.tailer.ai/schema/table-to-storage-v3editor",
    "configuration_type": "table-to-storage",
    "configuration_id": "tts-some-id-example",
    "short_description": "Short description of the job",
    "environment": "DEV",
    "account": "000099",    
    "version": "3",
    "activated": true,
    "archived": false,
    "doc_md": "readme.md",
    "start_date" : "2023, 2, 10",
    "schedule_interval" : "5 1 * * *",
    "print_header": true,
    "destination_format": "CSV",
    "gcs_dest_bucket": "fd-io-test-bucket-out",
    "gcs_dest_prefix": "tts_exemple/",
    "gcp_project_id": "fd-tailer-demo",
    "field_delimiter": ",",
    "compression": "None",
    "sql_query_template": "TEMPLATE_CURRENT_DATE",
    "bq_data_location": "EU",
    "generate_top_file": false,
    "delete_dest_bucket_content": false,
    "tasks": [
        {
            "task_id": "Export_with_default_values",
            "sql_file" : "the_tts_SQL_file.sql",
            "output_filename" : "THE_FILE_NAME_{{FD_DATE}}.csv",
            "copy_table": true,
            "dest_gcp_project_id": "fd-tailer-demo",
            "dest_gbq_dataset": "dlk_exemple_tts",
            "dest_gbq_table": "to_exemple_tts",
            "dest_gbq_table_suffix": "dag_execution_date",
            "bq_data_location": "US"
        },
        {
            "task_id": "Export_with_specific_values",
            "gcs_dest_bucket": "different-bucket-out",
            "gcs_dest_prefix": "tts_number_2/",
            "gcp_project_id": "fd-tailer-destination",
            "field_delimiter": "|",
            "compression": "GZIP",
            "sql_file": "my_other_SQL_file.sql",
            "output_filename": "A_DIFFERENT_FILE_NAME_{{FD_DATE}}.csv",
            "destination_format": "NEWLINE_DELIMITED_JSON",
            "sql_query_template": "TEMPLATE_CURRENT_DATE",
            "generate_top_file": true,
            "delete_dest_bucket_content": false,
            "copy_table": true,
            "dest_gcp_project_id": "fd-tailer-demo-destination",
            "dest_gbq_dataset": "my_destination_dataset",
            "dest_gbq_table": "my_other_extraction",
            "dest_gbq_table_suffix": "dag_execution_date"
        }
    ]
}

🌐 Global parameters

General information about the data operation.

Here you can specify default values for parameters that will apply to all tasks, unless the parameter is overridden in the task description.


$schema

type: string

optional

The URL of the JSON schema that defines the properties your configuration must satisfy. Most code editors can use it to validate your configuration, display help boxes and highlight issues.

configuration_type

type: string

mandatory

Type of data operation.

For a TTS data operation, the value is always "table-to-storage".

configuration_id

type: string

mandatory

ID of the data operation.

You can pick any name you want, but it has to be unique for this data operation type.

Note that in case of conflict, the newly deployed data operation will overwrite the previous one. To guarantee its uniqueness, the best practice is to name your data operation by concatenating (as shown in the example after this list):

  • your account ID,

  • the word "extract",

  • and a description of the data to extract.
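
For example, following this convention, a configuration ID for the example account above extracting sales data might look like this (the description part is a placeholder):

"configuration_id": "000099_extract_sales_history"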

short_description

type: string

optional

Short description of the table to storage data operation.

environment

type: string

mandatory

Deployment context.

Values: PROD, PREPROD, STAGING, DEV.

account

type: string

mandatory

Your account ID is a 6-digit number assigned to you by your Tailer Platform administrator.

version

type: string

mandatory

Version of the configuration. Must be "3" in order to use the latest features.

Default: "1" for backward compatibility purposes. If the parameter is omitted, the configuration refers to the deprecated V1; only version "3" supports the latest features.

activated

type: boolean

optional

Flag used to enable/disable the execution of the data operation.

Default value: true

archived

type: boolean

optional

Flag used to enable/disable the visibility of the data operation's configuration and runs in Tailer Studio.

Default value: false

doc_md

type: string

optional

Path to a file containing a detailed description of the data operation. The file must be in Markdown format.

start_date

type: string

optional

Start date of the data operation.

The format must be:

"YYYY, MM, DD"

Where:

  • YYYY >= 1970

  • MM = [1, 12]

  • DD = [1, 31]
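
For example, the value used in the configuration above sets the start date to February 10, 2023:

"start_date": "2023, 2, 10"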

schedule_interval

type: string

optional

A Table to Storage data operation can be launched in two different ways:

  • If schedule_interval is set to "None", the data operation will need to be started with a Workflow, when a given condition is met. (This solution is recommended.)

  • If you want the data operation to start at regular intervals, you can define this in the schedule_interval parameter with a Cron expression.

Example

For the data operation to start every day at 7:00, set the parameter as follows:

"schedule_interval": "0 7 * * *",

You can find online tools to help you edit your Cron expression (for example, crontab.guru).
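
Conversely, for a data operation that is meant to be triggered by a Workflow rather than by a schedule, you would set:

"schedule_interval": "None",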

print_header

type: boolean

optional

Print a header row in the exported data.

Default value: true

destination_format

type: string

optional

Defines the format of the output file.

Possible values: "CSV", "NEWLINE_DELIMITED_JSON" (JSON file), "AVRO", "PARQUET".

Note that if you specify "NEWLINE_DELIMITED_JSON", the field_delimiter parameter is not taken into account.

Default value: "CSV"

gcs_dest_bucket

type: string

mandatory

Google Cloud Storage destination bucket.

This is the bucket where the data is going to be extracted.

gcs_dest_prefix

type: string

mandatory

Path in the GCS bucket where the files will be extracted, e.g. "some/sub/dir". Note that you can use {{FD_DATE}} inside the path to include the current ISO date, e.g. "some/sub/dir/{{FD_DATE}}".
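
For instance, with the illustrative values below, each day's files are written in a date-specific subfolder of the example bucket:

"gcs_dest_bucket": "fd-io-test-bucket-out",
"gcs_dest_prefix": "tts_exemple/{{FD_DATE}}/",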

gcp_project_id

type: string

mandatory

ID of the Google Cloud project containing the BigQuery instance.

field_delimiter

type: string

optional

Separator for fields in the CSV output file, e.g. ";".

Note: For Tab separator, set to "\t".

Default value: "

compression

type: string

optional

Compression mode for the output file.

Possible values: "None", "GZIP", "SNAPPY".

Note that if you specify "GZIP", a ".gz" extension will be added at the end of the filename. Default value: "None"

sql_query_template

type: string

optional

If you want to use variables in your SQL query or script, you need to set this parameter to "TEMPLATE_CURRENT_DATE" (only supported value). This variable will be set to the execution date of the data operation (and not today's date).

For example, if you want to retrieve data corresponding to the execution date, you can use the following instruction:

WHERE sale_date = DATE('{{TEMPLATE_CURRENT_DATE}}')
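
For instance, the SQL file referenced by sql_file could look like the following sketch (the dataset and table names are hypothetical):

SELECT
    sale_id,
    sale_date,
    amount
FROM `my_project.my_dataset.sales`
WHERE sale_date = DATE('{{TEMPLATE_CURRENT_DATE}}')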

bq_data_location

type: string

optional

BigQuery location used by default in all tasks. If not specified, the value "EU" will be used. The list of available values can be found here: https://cloud.google.com/bigquery/docs/locations

generate_top_file

type: boolean

optional

If true, generates an empty file when the data export is complete.

The name of this file is derived from the output filename template. For example, if the template is "{{FD_DATE}}-my_data_extraction.csv", then the top file generated on 2022-01-01 will be named: 20220101-my_data_extraction.csv.top

Default value: false

delete_dest_bucket_content

type: boolean

optional

If set to true, this parameter will trigger the preliminary deletion of any items present in the destination directory.

This can prevent an issue when a new run of the same operation is needed after a fix. If the first run generated file-0.csv and file-1.csv, and the second run only produces and overwrites file-0.csv, you need to delete the destination directory content at the beginning of the second run; otherwise you will end up with file-0.csv from the second run and file-1.csv from the first.

Default value: false

tasks

type: array of maps

mandatory

List of tasks the data operation will execute.

Check the section below for detailed information on their parameters.

📩 Task parameters

With the latest version, it is now possible to export data to several different locations in a single configuration, thanks to the "tasks" parameter.

Each task specifies an export. By default, tasks use the parameters defined in the global configuration. If a parameter is specified both in a task and in the global parameters, the value in the task overrides the global default.


task_id

type: string

mandatory

The unique ID of your task.

sql_file

type: string

mandatory

Path to the file containing the extraction query.

output_filename

type: string

mandatory

Template for the output filename.

You can use the following placeholders inside the name (see the example after the list):

  • {{FD_DATE}}: The date format will be YYYYMMDD

  • {{FD_TIME}}: The time format will be hhmmss
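
For example, with the illustrative template below, a file exported on 2022-01-01 at 06:30:00 would be named EXPORT_20220101_063000.csv:

"output_filename": "EXPORT_{{FD_DATE}}_{{FD_TIME}}.csv",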

copy_table

type: boolean

optional

Parameter used to enable a copy of the output data in a BigQuery table.

Default value: false

dest_gcp_project_id

type: string

mandatory if copy_table is set to "true"

ID of the GCP project that will contain the table copy.

dest_gbq_dataset

type: string

mandatory if copy_table is set to "true"

Name of the BigQuery dataset that will contain the table copy.

dest_gbq_table

type: string

mandatory if copy_table is set to "true"

Name of the BigQuery table that will contain the table copy.

dest_gbq_table_suffix

type: string

optional, to use only if copy_table is set to "true"

The only supported value for this parameter is "dag_execution_date".

This will add "_yyyymmdd" at the end of the table name to enable ingestion time partitioning.

Default value: None

bq_data_location

type: string

optional

BigQuery location used in this specific task. If not specified, the value used will be the global "bq_data_location" set at the configuration root. The list of available values can be found here: https://cloud.google.com/bigquery/docs/locations

All the global parameters can also be overridden at the task level: if a parameter is specified both in a task and in the global parameters, the value in the task overrides the global default.
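
Putting the task parameters together, a minimal task that also copies its output to a BigQuery table could look like the following sketch (the project, dataset, table and file names are placeholders):

{
    "task_id": "export_with_table_copy",
    "sql_file": "my_query.sql",
    "output_filename": "MY_EXPORT_{{FD_DATE}}.csv",
    "copy_table": true,
    "dest_gcp_project_id": "my-destination-project",
    "dest_gbq_dataset": "my_destination_dataset",
    "dest_gbq_table": "my_extraction",
    "dest_gbq_table_suffix": "dag_execution_date"
}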
