Prepare data
The third data operation of this tutorial will consist in preparing data within BigQuery tables.
The objective of this step is to create new BigQuery tables into which we will load and reorganize the contents of the tables created in the previous step. As in most cases, this will happen within one BigQuery dataset. For this, we will need:
a JSON file to configure the data operation,
a JSON file to trigger the workflow,
a JSON file for each table creation,
and a SQL file for each transfer of data into our new tables.
Access your tailer-demo folder.
Inside, create a folder named 3-Prepare-data for this new step.
In this folder, create a JSON file named 000099-tailer-demo-prepare-data.json for your data operation.
The data operation will load a temporary table and then, if the query runs correctly, copy that temporary table into the target table. Copy the following contents into your file:
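The exact file contents are provided in the original tutorial. Purely as an illustrative sketch of the general shape of a Table-to-Table configuration (apart from default_gcp_project_id, source_gcp_project_id and configuration_id, which are discussed below, the field names and values here are assumptions and may differ from the actual Tailer schema), it could look something like this:

```json
{
  "configuration_type": "table-to-table",
  "configuration_id": "000099_tailer-demo_prepare-data_my-gcp-project_my_gbq_dataset",
  "environment": "DEV",
  "account": "000099",
  "activated": true,
  "archived": false,
  "short_description": "Tailer demo: prepare the sales data",
  "default_gcp_project_id": "my-gcp-project",
  "default_bq_dataset": "my_gbq_dataset",
  "task_dependencies": ["load_sales >> copy_sales"],
  "workflow": [
    {
      "id": "load_sales",
      "task_type": "sql",
      "short_description": "Load the temporary sales table",
      "sql_file": "load_sales.sql",
      "table_name": "sales_tmp",
      "write_disposition": "WRITE_TRUNCATE"
    },
    {
      "id": "copy_sales",
      "task_type": "copy_gbq_table",
      "short_description": "Copy the temporary table into the target table",
      "source_gcp_project_id": "my-gcp-project",
      "source_bq_dataset": "my_gbq_dataset",
      "source_bq_table": "sales_tmp",
      "destination_bq_table": "sales"
    }
  ]
}
```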
Edit the following values:
◾ Replace my-gcp-project with the ID of the GCP project containing your BigQuery dataset in the default_gcp_project_id and source_gcp_project_id parameters.
◾ Replace my_gbq_dataset with the name of your working dataset.
◾ Also replace the project and dataset in the configuration_id.
Create a SQL file in the same directory and name it load_sales.sql. It must contain a query that will load the sales table.
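The exact query is supplied in the original tutorial and depends on the schema of the table ingested at the previous step. As a purely illustrative sketch (every table and column name below is a placeholder), it could look like this:

```sql
-- Illustrative placeholder query: adapt table and column names
-- to the table created by your Storage-to-Tables operation.
SELECT
  PARSE_DATE('%Y-%m-%d', sale_date)                      AS sale_date,
  product_id,
  CAST(quantity AS INT64)                                AS quantity,
  CAST(unit_price AS NUMERIC)                            AS unit_price,
  CAST(quantity AS INT64) * CAST(unit_price AS NUMERIC)  AS total_amount
FROM `my-gcp-project.my_gbq_dataset.sales_raw`
```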
Inside the 3-Prepare-data folder, create a file named workflow.json.
Copy the following contents into your file:
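Again, the exact contents are given in the original tutorial; the sketch below only illustrates the general shape of a workflow configuration. Apart from authorized_job_ids, target_dag and configuration_id, which the next steps refer to, the field names and values are assumptions and may differ from the actual Tailer schema:

```json
{
  "configuration_type": "workflow",
  "configuration_id": "000099-tailer-demo-prepare-data-workflow",
  "environment": "DEV",
  "account": "000099",
  "activated": true,
  "archived": false,
  "short_description": "Trigger the prepare-data operation after each sales ingestion",
  "authorized_job_ids": [
    "<job_id of your Storage-to-Tables sales run>"
  ],
  "target_dag": {
    "configuration_type": "table-to-table",
    "configuration_id": "000099_tailer-demo_prepare-data_my-gcp-project_my_gbq_dataset_DEV"
  }
}
```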
This workflow will trigger our Table-to-Table operation that loads the sales table each time a sales file is ingested with our Storage-to-Tables operation.
The authorized_job_ids parameter defines the job that triggers the target job. We need to insert here the job_id of the Storage-to-Tables operation. You can find it in Tailer Studio: navigate to the Storage-to-Tables runs section, find the last run for a sales file and go to the Run Details tab. Search for the job_id and copy it into the authorized_job_ids section.
Replace the configuration_id in the target_dag section with your configuration's configuration_id, concatenated with "_DEV" (since its environment is DEV).
Once your files are ready, you can deploy the data operation:
Access your working folder by running the following command:
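For example, if the tailer-demo folder sits in your current directory:

```bash
cd tailer-demo/3-Prepare-data
```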
To deploy the data operation, run the following command:
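The exact command is given in the original tutorial; with the Tailer SDK it should take a form along these lines (refer to the SDK documentation if your version differs):

```bash
tailer deploy configuration 000099-tailer-demo-prepare-data.json
```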
To trigger the workflow, run the following command:
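Deploying the workflow configuration works the same way, for instance:

```bash
tailer deploy configuration workflow.json
```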
You may be asked to select a context (see this page for more information). If you haven't deployed any context, then choose "no context". You can also use the flag --context to specify the context of your choice, or NO_CONTEXT if that's what you want:
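For example, to deploy while explicitly selecting no context:

```bash
tailer deploy configuration 000099-tailer-demo-prepare-data.json --context NO_CONTEXT
```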
Your data operation is now deployed, which means the new tables will shortly be created and loaded with data, and your data operation status is now visible in Tailer Studio.
Access Tailer Studio again.
In the left navigation menu, select Tables-to-tables.
In the Configurations tab, search for your data operation. You can see that its status is Activated and, in the top right corner, that it has been deployed by you a few seconds ago.
Check your workflow configuration as well: it should also be activated and deployed by you a few seconds ago.
Try copying a sales file into your Storage-to-Tables source bucket. It should automatically trigger a Storage-to-Tables run. If the run is successful, it should trigger the workflow and create a Table-to-Table run.
Once it has completed successfully, you can check in BigQuery that your target table is loaded.
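For instance, a quick way to confirm the load (the project, dataset and table names below are the placeholders used earlier in this tutorial):

```sql
-- Quick sanity check on the target table (placeholder names).
SELECT COUNT(*) AS row_count
FROM `my-gcp-project.my_gbq_dataset.sales`;
```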
We've seen here a very basic example of a Table-to-Table data operation that loads a temporary table and then copies it to the target destination when no error occurs.
We could go further by adding different steps and using other Tailer features, for instance:
create the temporary table using a task of type create_gbq_table. This way, you can specify a DDL for this table, add column descriptions and column types, and define partitioning and clustering fields (see the sketch after this list).
add a task that performs tests using expectations or custom asserts
add several SQL tasks
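As an illustration of the kind of information a create_gbq_table task lets you define, here is a plain BigQuery DDL equivalent with placeholder names (this is standard BigQuery SQL, not the Tailer task syntax itself):

```sql
-- Placeholder BigQuery DDL illustrating column types, descriptions,
-- partitioning and clustering for the temporary sales table.
CREATE TABLE `my-gcp-project.my_gbq_dataset.sales_tmp`
(
  sale_date   DATE    OPTIONS(description = "Date of the sale"),
  product_id  STRING  OPTIONS(description = "Product identifier"),
  quantity    INT64   OPTIONS(description = "Number of units sold"),
  unit_price  NUMERIC OPTIONS(description = "Unit price")
)
PARTITION BY sale_date
CLUSTER BY product_id;
```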