Transform data with Tables to Tables
Learn how to extract, transform and load Google BigQuery data using the Tables to Tables operation.
A Tables to Tables (TTT) data pipeline operation allows you to automate the execution of one or several BigQuery tasks to extract, transform and load data from source tables into destination tables.
A table can be both the source and the destination of the same task.
Supported source and target:
- Google BigQuery
When a Tables to Tables Tailer data operation is triggered by an event (for example, the success of a Storage to Tables data operation) or starts as scheduled:
- A number of workflow tasks (SQL queries and JSON table creation/copy tasks) are run in the order set in the task_dependencies parameter of the data operation configuration file.
- You obtain one or several BigQuery tables containing the reorganized data.
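To make the file structure concrete, here is a minimal sketch of what such a configuration could look like. Only the schedule_interval and task_dependencies parameters are named on this page; every other key and value below (configuration_type, workflow, task_type, the file names, etc.) is an illustrative assumption, so check the configuration reference before reusing any of it.

```json
{
  "configuration_type": "table-to-table",
  "configuration_id": "000001-example-transform",
  "short_description": "Build a reporting table from a source table",
  "account": "000001",
  "environment": "DEV",
  "activated": true,
  "archived": false,
  "default_gcp_project_id": "my-gcp-project",
  "default_bq_dataset": "my_dataset",
  "schedule_interval": "None",
  "task_dependencies": ["create_reporting_table >> load_reporting_table"],
  "workflow": [
    {
      "id": "create_reporting_table",
      "task_type": "create_gbq_table",
      "bq_table": "reporting_table",
      "ddl_file": "reporting_table_ddl.json"
    },
    {
      "id": "load_reporting_table",
      "task_type": "sql",
      "sql_file": "load_reporting_table.sql",
      "table_name": "reporting_table",
      "write_disposition": "WRITE_TRUNCATE"
    }
  ]
}
```

The two tasks mirror the bullet above: a JSON table creation task followed by a SQL query task, with task_dependencies fixing the order in which they run.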
📋 How to deploy a Tables to Tables data operation
1. Access your tailer folder (created during installation).
2. Create a working folder of your choice.
3. Prepare the SQL query files and JSON table creation/copy files corresponding to your workflow tasks.
4. Prepare your JSON configuration file to gather all this information. Refer to the configuration reference page to learn about all its parameters.
5. Determine how to launch your Tables to Tables data operation: either use the schedule_interval parameter in the JSON configuration file, and/or create a Workflow configuration file that defines how to trigger it (see the sketch after this list).
6. Access your working folder by running the following command: `cd "[path to your working folder]"`
7. To deploy the data operation, run the following command: `tailer deploy your-file.json`
8. Log in to Tailer Studio to check the status and details of your data operation.
9. For your workflow to be executed, either run the data operation corresponding to the previous step of your data pipeline (per your Workflow configuration file), or launch it manually from Tailer Studio.
10. Access your output table(s) in BigQuery to check the result of the data operation.
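To illustrate steps 5 and 9, here is a hedged sketch of a Workflow configuration file that would trigger the data operation above once an upstream data operation succeeds. The key names shown here (authorized_job_ids, target_dag, and the job ID format) are assumptions about the general shape of such a file, not documented parameters; refer to the Workflow configuration documentation for the real ones.

```json
{
  "configuration_type": "workflow",
  "configuration_id": "000001-example-transform-workflow",
  "account": "000001",
  "environment": "DEV",
  "activated": true,
  "archived": false,
  "authorized_job_ids": ["storage-to-tables|000001-previous-step"],
  "target_dag": "000001-example-transform"
}
```

With such a file deployed, the Tables to Tables operation would run automatically each time the listed upstream job completes successfully, instead of (or in addition to) running on a schedule.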