Export data
The fifth and final data operation of this tutorial consists of exporting our data back to a bucket.
🗺️ Overview
During this step, we will take our aggregated store data, located in a BigQuery table, and export it to a CSV file in a Google Cloud Storage bucket so it can later be used with other tools, such as a warehouse management system.
🗄️ Create a bucket and a folder
For the detailed procedure on how to create GCS buckets (manually or using gsutil), refer to this page.
Create a bucket in the project of your choice. As bucket names must be globally unique, you can pick any name you want, and select whatever settings suit you.
In this bucket, create a folder named store_clustering_export that will contain our future export file.
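If you prefer the command line, a rough sketch of the equivalent gsutil commands is shown below. The project and bucket names are placeholders, and the location is an example value — substitute your own:

```bash
# Create the bucket; names are global, so pick one that is still available.
# Adjust the project (-p) and location (-l) to your needs.
gsutil mb -p my-gcp-project -l EU gs://my-gcs-bucket

# GCS "folders" are just object prefixes: uploading any placeholder object
# under the prefix makes store_clustering_export appear as a folder in the console.
printf '' | gsutil cp - gs://my-gcs-bucket/store_clustering_export/.keep
```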
📄 Create your configuration files
Create the JSON file that configures the data pipeline operation
Access your tailer-demo folder.
Inside, create a folder named 5-Export-data for this new step.
In this folder, create a JSON file named 000099-tailer-demo-export-data.json for your data operation.
Copy the following contents into your file:
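The exact contents to copy are provided with the tutorial resources. For orientation only, a Tables to Storage configuration has roughly the shape sketched below — the field names and values are assumptions based on the Tables to Storage documentation, not the authoritative file:

```json
{
  "configuration_type": "table-to-storage",
  "configuration_id": "000099-tailer-demo-export-data_YOUR_NAME",
  "environment": "DEV",
  "account": "000099",
  "activated": true,
  "gcp_project_id": "my-gcp-project",
  "gbq_dataset": "my-gbq-dataset",
  "gcs_dest_bucket": "my-gcs-bucket",
  "gcs_dest_prefix": "store_clustering_export",
  "sql_file": "export_data.sql",
  "copy_table": true,
  "dest_format": "csv",
  "field_delimiter": ",",
  "print_header": true
}
```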
Edit the following values:
◾ Replace my-gcp-project with the ID of the GCP project containing the source table. This is where the SQL query will be run.
◾ Replace my-gbq-dataset with the name of the dataset where you want to create a copy of the table generated by the SQL query.
◾ Replace my-gcs-bucket with the name of the bucket you've just created, where the export file will be generated.
◾ If you share the project with others, don't forget to personalize your outputs so you don't overwrite your teammates' work: search for "_YOUR_NAME" and replace all occurrences.
Save your file.
Create a SQL file
Inside the 5-Export-data folder, create a file named export_data.sql.
Copy the following contents into the export_data.sql file:
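The query simply selects the data to export — something like the sketch below, where store_clustering stands in for the aggregated table produced in the previous step (adjust the dataset and table names to match yours):

```sql
-- Select the aggregated store data to export to GCS.
-- Replace the dataset (and table name, if different) with your own.
SELECT
  *
FROM
  `my-gbq-dataset.store_clustering`
```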
Replace my-gbq-dataset with the name of the working dataset you used in the previous step.
Save your file.
Create the JSON file that will trigger the workflow
Inside the 5-Export-data folder, create a file named workflow.json.
Copy the following contents into your file:
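As with the previous configuration, use the contents supplied with the tutorial. Purely as a hedged sketch — the field names below are assumptions based on the Workflow documentation and may not match the real schema — a workflow configuration looks roughly like this:

```json
{
  "configuration_type": "workflow",
  "configuration_id": "000099-tailer-demo-export-data-workflow_YOUR_NAME",
  "environment": "DEV",
  "account": "000099",
  "activated": true,
  "authorized_job_ids": [
    "<job ID of the previous step (build predictions)>"
  ],
  "target_dag": "000099-tailer-demo-export-data_YOUR_NAME"
}
```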
Save your file.
▶️ Deploy the data operation
Once your files are ready, you can deploy the data operation:
Access your working folder by running the following command:
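For example, assuming you are in the parent directory of the tailer-demo folder created earlier:

```bash
cd tailer-demo/5-Export-data
```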
To deploy the data operation, run the following command:
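With the Tailer SDK installed, the deployment command should look something like this (refer to the SDK documentation for the exact syntax):

```bash
tailer deploy configuration 000099-tailer-demo-export-data.json
```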
🏃 Run your workflow manually
Deploying the workflow does not launch it: the workflow is only triggered by the execution of the previous step (building predictions). For now, we will run it manually so we can see the result. Once the whole pipeline is set up, the workflow will run automatically from its first step (copying files) whenever files are added to the source bucket.
Access Tailer Studio.
In the left navigation menu, select Table-to-storage.
In the Configurations tab, search for your data operation, 000099-tailer-demo-export-data.
Click the data operation ID to display its details.
In the upper right corner, click on Launch.
🗳️ Check the result in GCS
Access the GCS folder in the bucket you've just created. Your data should now appear in the form of a CSV file that you can export to a different system.
You can now add more files into the input folder from the first step of this tutorial to see the whole pipeline play out!
🚀 Further steps
You can check the full Tables to Storage documentation and try other parameters:
Add some tasks to perform different extractions
Create a JSON extract or compress the output using GZIP
Send the data file to a partner using a Storage to Storage operation, or ingest it into Firestore using a specific VM Launcher operation.
Insert the run date in your query using the "sql_query_template" parameter
Insert environment variables in your SQL using a Context configuration.