BigQuery setup


Recurve's BigQuery connector enables you to integrate your BigQuery data warehouse as a destination for your analytics data. This guide walks you through establishing a secure connection using a Google Cloud service account, from initial setup to your first successful connection.

You'll learn how to:

  • Prepare a sample dataset in your Google Cloud Platform (GCP) project.

  • Create and configure a service account with sufficient permissions.

  • Establish a connection between Recurve and BigQuery.

Prerequisites

  • You've already joined a Recurve organization.

  • You've already created a GCP project.

Prepare sample dataset

To demonstrate the connection process, we'll be using the jaffle_shop dataset, a fictional e-commerce store's data provided by the dbt community. This dataset offers a practical example of typical e-commerce data structures that mirror real-world scenarios.

You can use the jafgen CLI tool to generate synthetic data for any specified year range.

The dataset includes these tables:

  • Customers (who place Orders)

  • Orders (from those Customers)

  • Products (the food and beverages the Orders contain)

  • Order Items (of those Products)

  • Supplies (needed for making those Products)

  • Stores (where the Orders are placed and fulfilled)

  • Tweets (Customers sometimes issue Tweets after placing an Order)

Or simply download the pre-generated data: the jaffle-data.zip archive (79MB).

Ingest sample data

Follow these steps to ingest the data through the BigQuery console (a scripted alternative is sketched after this list):

  1. Create a new dataset:

    1. Go to the GCP project console and select your project.

    2. Expand the Actions option and select Create dataset.

    3. Enter a name for the dataset in Dataset ID. You can leave the other fields as default.

    4. Click Create dataset.

  2. Create tables for the dataset.

    1. Click on the Actions option of the dataset and select Create table.

    2. Choose Upload as the creation method, select the CSV file, and name the table after the file name. For example, upload raw_orders.csv and name the table raw_orders.

    3. In Schema, check Auto detect to automatically generate the schema for the table.

    4. Click Create table.

    5. Repeat the four steps above to create and upload the other tables.
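If you prefer scripting this step instead of clicking through the console, the following sketch uses the google-cloud-bigquery Python client to create the dataset and load each CSV with schema auto-detection. It assumes Application Default Credentials are configured (for example via gcloud auth application-default login); the project ID, dataset name, and file pattern are placeholders based on this guide, so adjust them to your environment.

```python
from pathlib import Path

from google.cloud import bigquery  # pip install google-cloud-bigquery

PROJECT_ID = "your-gcp-project-id"          # placeholder: your GCP project ID
DATASET_ID = f"{PROJECT_ID}.jaffle_shop"    # placeholder: the dataset created below

client = bigquery.Client(project=PROJECT_ID)

# Create the dataset (no-op if it already exists).
client.create_dataset(DATASET_ID, exists_ok=True)

# Load every raw_*.csv in the current directory into a table named after the file.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the CSV header row
    autodetect=True,       # equivalent to checking "Auto detect" in the console
)
for csv_path in Path(".").glob("raw_*.csv"):
    table_id = f"{DATASET_ID}.{csv_path.stem}"
    with csv_path.open("rb") as f:
        client.load_table_from_file(f, table_id, job_config=job_config).result()
    print(f"Loaded {table_id}")
```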

Create a service account

After loading data into the project, you need to create a Google Service Account. This account represents the identity and permissions that the connector can use to authenticate and interact with BigQuery.

Follow these steps:

  1. Enable BigQuery API:

    1. Go to your GCP credentials wizard.

    2. Select BigQuery API from the dropdown.

    3. Choose Application data for the data processing type.

    4. Click Next. This navigates to the Create service account section.

  2. Configure your service account:

    1. Provide a descriptive name for your service account.

    2. Click Create and continue.

    3. Assign the following two roles:

      • BigQuery Job User.

      • BigQuery Data Editor.

    4. Click Done.

  3. Generate a key for the service account:

    After the service account is created, the console navigates you to the Credentials page, where you can generate keys to authenticate the account. Locate the Service Accounts row and click on the account.

    1. Navigate to the Keys tab.

    2. Click Add key -> Create new key.

    3. Select JSON as the Key type and click Create. This downloads a JSON file that contains the authentication credentials and metadata. You can verify the key works before connecting, as sketched below.
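Before entering the key into Recurve, you can optionally sanity-check it with the google-cloud-bigquery Python client. This is only a verification sketch, not part of the Recurve setup; the key file name and table reference (service-account-key.json, jaffle_shop.raw_orders) are assumptions based on the earlier steps.

```python
from google.cloud import bigquery            # pip install google-cloud-bigquery
from google.oauth2 import service_account    # pip install google-auth

# Authenticate with the downloaded JSON key.
creds = service_account.Credentials.from_service_account_file("service-account-key.json")
client = bigquery.Client(credentials=creds, project=creds.project_id)

# Running a query exercises the BigQuery Job User role; reading the sample
# table exercises the read access granted by BigQuery Data Editor.
rows = client.query("SELECT COUNT(*) AS n FROM `jaffle_shop.raw_orders`").result()
print(list(rows))
```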

Connect Recurve to BigQuery

With the Google Service Account key, you can now set up a connection to BigQuery from Recurve.

Follow these steps:

  1. Open the left sidebar.

  2. Navigate to Connections -> Destinations.

  3. Click + Create connection.

  4. Select Google BigQuery.

  5. Fill in the fields with the information from the JSON key file (a helper to extract these values is sketched after these steps).

    • Destination name: the name to identify this specific connection in your organization

    • Auth Type: service_account

    • Google Project ID: the unique identifier for your GCP project

    • Google Auth Private Key ID: the unique identifier for your service account's private key

    • Google Auth Private Key: the actual private key used for authentication

    • Client ID: your service account's unique identifier

    • Client Email: the email address associated with your service account

  6. Select Development environment and click Test connection.

  7. Once the connection is successfully tested, click Create Destination.

The connection is now ready to be used in a project.
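If you'd rather not copy values out of the raw JSON by hand, the short sketch below reads the key file and prints each value next to the Recurve form field it maps to. The file name service-account-key.json is a placeholder; the JSON field names are the standard ones found in a Google service account key.

```python
import json

# Map the standard Google service account key fields to the Recurve form fields.
with open("service-account-key.json") as f:
    key = json.load(f)

print("Google Project ID:          ", key["project_id"])
print("Google Auth Private Key ID: ", key["private_key_id"])
print("Client ID:                  ", key["client_id"])
print("Client Email:               ", key["client_email"])
# key["private_key"] holds the PEM-encoded private key; paste the full value
# into the Google Auth Private Key field and keep it secret.
```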
