
Guandata Integration


Last updated 3 months ago

Preparation

Before connecting to a PostgreSQL database, ensure you have collected the following information:

  • Database version (minimum requirement: PostgreSQL ≥12.5)

  • IP address and port number of the database server

  • Database name

  • Database username and password

  • Connection method

  • Schema name (optional)
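Before clicking through the UI, it can save time to sanity-check the collected details. The sketch below is a minimal, standard-library-only helper, not part of Guandata or ReOrc: `version_ok` checks a version string against the PostgreSQL ≥12.5 minimum from the checklist, and `port_reachable` does a quick TCP check on the server address. The host and port shown are placeholders.

```python
import socket

MIN_VERSION = (12, 5)  # PostgreSQL >= 12.5, per the checklist above

def version_ok(version_str, minimum=MIN_VERSION):
    """Return True if a dotted version string (e.g. "12.5") meets the minimum."""
    parts = tuple(int(p) for p in version_str.split(".")[:2])
    return parts >= minimum

def port_reachable(host, port, timeout=3.0):
    """Quick TCP check that the database host/port accepts connections."""
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values -- substitute your own server details.
print(version_ok("12.5"))  # True: meets the minimum
print(version_ok("11.9"))  # False: below the minimum
# port_reachable("db.example.internal", 5432) would test the network path.
```

This only verifies the version number and network path; the username, password, and schema are still validated by Guandata when the data account is created.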

Connection Steps

Creating a Data Connection Account

  1. Log in to Guandata BI.

  2. Navigate to "Data Center" > "Data Accounts" and click "New Data Account".

  3. In the pop-up window, select "PostgreSQL" as the account platform.

Creating a Dataset

Selecting the Connector

  1. Go to "Data Center" > "Datasets", then click "+ New Dataset".

  2. Choose "Database" as the data source type.

  3. In the "Select Connector" step, choose "PostgreSQL", then click Next.

Selecting Data Tables

  1. Select an existing data account from the dropdown menu.

  2. Choose the desired database table.

Once data preview is successful, proceed to the next step.
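The preview step above amounts to sampling a few rows from the selected table. The exact query Guandata issues is not documented here; the sketch below simply builds the equivalent PostgreSQL statement, with identifiers double-quoted so mixed-case or unusual table names survive. The schema and table names are hypothetical.

```python
def preview_sql(schema, table, limit=10):
    """Build a row-sampling query like the one behind the data preview.

    Double-quotes PostgreSQL identifiers, escaping embedded quotes.
    """
    def quote(ident):
        return '"' + ident.replace('"', '""') + '"'
    return f"SELECT * FROM {quote(schema)}.{quote(table)} LIMIT {int(limit)}"

# Hypothetical schema/table names for illustration.
print(preview_sql("public", "orders"))
# SELECT * FROM "public"."orders" LIMIT 10
```

If the preview fails in the UI, running a statement like this directly against the database is a quick way to tell a permissions problem from a connectivity one.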

Database Connection and Update Settings

Guandata supports two database connection modes: Direct Connection and Guan-Index. For this integration, choose Direct Connection.

Confirming Dataset Information

At this stage, users can finalize the dataset name, storage location, and field settings.

  • Dataset Name & Storage Location: Assign a clear and recognizable name to the dataset and choose where to store it. After confirmation, the dataset will appear in "Data Center" > "Datasets". (For Guan-Index datasets, there may be a delay before data extraction is complete).

  • Field Renaming: Users can rename fields by clicking the dropdown arrow next to the field name.

Using the Dataset

Once the dataset is created, users can immediately use it for visualization and analysis. Additional dataset settings and management options are available; refer to the "Dataset" documentation for more details.