In this example, we will process and analyze data from a fictional retailer called "Iowa Liquor". The data come as CSV files, named according to a pattern that includes a timestamp, and are streamed at regular intervals into a Google Cloud Storage bucket. We will first transfer the files to a bucket located in a different Google Cloud project, load them into a BigQuery table, and then prepare the data for analysis with an AI model.
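Because each file name embeds a timestamp, a pipeline can tell which batch a file belongs to just by parsing its name. As a minimal sketch (the tutorial does not specify the exact naming convention, so the `<dataset>_<YYYYMMDD-HHMMSS>.csv` pattern below is an assumption):

```python
import re
from datetime import datetime

# Hypothetical filename pattern: the exact convention is not given in the
# tutorial, so we assume names like "iowa_liquor_20240315-120000.csv".
FILENAME_RE = re.compile(r"^(?P<dataset>.+)_(?P<ts>\d{8}-\d{6})\.csv$")

def parse_filename(name: str):
    """Return (dataset, timestamp) parsed from a CSV file name, or None."""
    match = FILENAME_RE.match(name)
    if not match:
        return None
    ts = datetime.strptime(match.group("ts"), "%Y%m%d-%H%M%S")
    return match.group("dataset"), ts

print(parse_filename("iowa_liquor_20240315-120000.csv"))
```

Extracting the timestamp this way lets downstream steps (the transfer, the BigQuery load) group or deduplicate incoming batches.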
In this tutorial, you will learn about:

- How data flows through a Tailer Platform data pipeline
- Creating JSON configuration files for data operations
- Deploying data operations with the Tailer SDK
- Checking information in Tailer Studio
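The configuration step above revolves around JSON files that describe each data operation. As a purely illustrative sketch of that idea (every key and value below is an assumption for the sake of example, not Tailer's actual schema, which is defined in the Tailer Platform documentation):

```python
import json

# Illustrative configuration for a bucket-to-bucket transfer.
# All field names and values are hypothetical examples; the real
# schema is defined by the Tailer Platform documentation.
transfer_config = {
    "configuration_type": "storage-to-storage",       # assumed operation type
    "configuration_id": "iowa-liquor-transfer",       # assumed identifier
    "source_bucket": "source-project-bucket",         # hypothetical bucket names
    "destination_bucket": "destination-project-bucket",
    "file_pattern": "iowa_liquor_*.csv",              # assumed naming pattern
}

# Such configurations are saved as JSON files, then deployed with the SDK.
with open("transfer_config.json", "w") as f:
    json.dump(transfer_config, f, indent=2)
```

Keeping each operation in its own JSON file is what makes deployment with the SDK and inspection in Tailer Studio possible later in the tutorial.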
To complete this tutorial, you will need:

- Access to two Google Cloud projects on which you have the appropriate permissions
- A terminal to run commands
- The CSV files to process (provided in the next step)