1. Set up a connection

Before you can begin transforming your data, you'll need to establish connections to your data destinations - the "L" (Load) in the ELT (Extract, Load, Transform) pipeline. These destinations are where your ingested data will ultimately reside, whether it's a database, data warehouse, or data lake.

The Destinations section in Recurve serves as your centralized hub for managing these crucial connection points. Here you can configure and maintain connections to various data storage systems.

For a simple, first-time connection, we recommend setting up a connection to a relational database or a data warehouse, so that the raw tables are ready for data modeling without extra transformation.

To access Destinations, select Connections -> Destinations from the Recurve left sidebar.

On the Destinations page, click + Create Connection.

Select the connector for your data source and click Set up.

Each connector has its own set of authentication fields and configuration parameters. You can find the full list of supported destinations here: Destinations.
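
For example, the BigQuery connector will typically ask for a Google service account key. If you want to sanity-check a key before pasting it into the form, the following standalone sketch (not part of Recurve) uses the official google-cloud-bigquery client; the file name is a placeholder, and the exact fields the connector requires may differ:

```python
# Standalone sketch (not part of Recurve): confirm a service account key
# can reach BigQuery before using it in the connector form.
# Requires: pip install google-cloud-bigquery
from google.cloud import bigquery

# "key.json" is a placeholder path to the downloaded service account key.
client = bigquery.Client.from_service_account_json("key.json")

# Listing datasets confirms the key is valid and has at least metadata access.
for dataset in client.list_datasets():
    print(dataset.dataset_id)
```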

For a practical demo, you can follow the setup guide for a specific data warehouse (a standalone Snowflake credential check is also sketched after this list):

  • BigQuery setup

  • Snowflake setup
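
Similarly, if you plan to connect Snowflake, you can verify the account identifier, user, and warehouse locally with the official snowflake-connector-python package before entering them in Recurve. This is only an illustrative, standalone sketch with placeholder values; it is not the Recurve connector configuration itself:

```python
# Standalone sketch (not part of Recurve): verify Snowflake credentials locally.
# Requires: pip install snowflake-connector-python
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",  # placeholder account identifier
    user="ANALYTICS_USER",        # placeholder user
    password="********",          # placeholder password
    warehouse="COMPUTE_WH",       # placeholder warehouse
    database="RAW",               # placeholder database
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")
    print("Connected to Snowflake", cur.fetchone()[0])
finally:
    conn.close()
```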