Load data with Storage to Tables

Learn how to transfer data from files to database tables using the Storage to Tables operation.

What is Storage to Tables?

A Storage to Tables (STT) data pipeline operation allows you to load data files from a Google Cloud Storage (GCS) bucket into one or several BigQuery databases.

Note that configuration uniqueness is checked against the combination of GCS bucket name and directory: only one configuration can exist per bucket/directory pair, and deploying a new configuration for the same pair overwrites the previous one.
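To make this concrete, here is a minimal configuration skeleton for a Storage to Tables operation. This is a hypothetical sketch: the key names (`configuration_type`, `source`, `gcs_source_bucket`, `gcs_source_prefix`, `destinations`, and so on) and all values are assumptions made for illustration; refer to the Storage to Tables configuration file page for the authoritative list of parameters.

    {
      "configuration_type": "storage-to-tables",
      "configuration_id": "000099-load-sales-files",
      "environment": "PROD",
      "source": {
        "type": "gcs",
        "gcs_source_bucket": "my-company-inbox",
        "gcs_source_prefix": "sales/input",
        "gcs_archive_prefix": "sales/archive"
      },
      "destinations": [
        {
          "type": "bigquery",
          "gcp_project_id": "my-gcp-project",
          "gbq_dataset": "sales_dataset",
          "tables": []
        }
      ]
    }

In this sketch, the my-company-inbox bucket and the sales/input directory form the uniqueness key: deploying another configuration pointing at the same pair would overwrite this one.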

✅ Supported file types

Source data files

  • CSV and any delimited flat files

  • New line delimited JSON files

  • These two file types can be compressed using gzip

Databases

  • Google BigQuery

⚙️ How it works

Every time a new file matching the specified rules appears in a given directory of a Google Cloud Storage bucket:

  • the file is removed from the source directory,

  • if the corresponding options are set, the file is copied to an archive directory located in the same bucket, inside a folder named after the date contained in the filename,

  • the file's data is loaded into the BigQuery table matching its filename template, for each database specified (see the sketch after this list).
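As an illustration of the filename-matching step, each destination table in the configuration is typically associated with a filename template. The fragment below is a hypothetical sketch: the key names and the `{{FD_DATE}}` placeholder are assumptions made for illustration; see the Storage to Tables configuration file page for the real parameters.

    {
      "table_name": "stores",
      "filename_template": "stores_{{FD_DATE}}.csv",
      "ddl_file": "ddl/stores.json"
    }

Under this sketch, an incoming file named stores_20230115.csv would be loaded into the stores table of every destination database and, if archiving is enabled, copied to an archive folder named 20230115.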

🤖 Automated metadata

The automatic metadata feature adds specific columns related to the input source during the ingestion process.

The added columns are:

  • tlr_ingestion_timestamp_utc (TIMESTAMP)

  • tlr_input_file_source_type (STRING)

  • tlr_input_file_name (STRING)

  • tlr_input_file_full_resource_name (STRING)
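For example, assuming a source file stores_20230115.csv picked up from a GCS bucket, a loaded row could look as follows once the metadata columns have been appended (the store_id and city columns and all values are purely illustrative):

    {
      "store_id": "42",
      "city": "Paris",
      "tlr_ingestion_timestamp_utc": "2023-01-15 06:12:03 UTC",
      "tlr_input_file_source_type": "gcs",
      "tlr_input_file_name": "stores_20230115.csv",
      "tlr_input_file_full_resource_name": "gs://my-company-inbox/sales/input/stores_20230115.csv"
    }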

📋 How to deploy a Storage to Tables data operation

  1. Access your tailer folder (created during installation).

  2. Create a working folder wherever you want, and create a JSON file for your data operation inside it.

  3. Prepare your JSON configuration file. Refer to the Storage to Tables configuration file page to learn about all the parameters.

  4. Prepare a DDL file for each database table. Refer to the Storage to Tables DDL files page to learn about all the parameters.

  5. Access your working folder by running the following command:

    cd "[path to your working folder]"
  6. To deploy the data operation, run the following command:

    tailer deploy your-file.json
  7. Log in to Tailer Studio to check the status and details of your data operation.

  8. Access your output table(s), and archive folder, if any, to check the result of the data operation.