Data source-specific permissions
Before you can start ingesting metadata, ensure that you meet the required permissions for your specific data source.
These permissions apply regardless of whether you're using Edge or the lineage harvester.
Permission information is currently available for the following data sources:
Amazon Redshift
Azure Data Factory
Azure SQL Data Warehouse
Azure SQL Server
Azure Synapse Analytics
DB2
dbt Cloud
dbt Core
Google BigQuery
Google Dataplex
Greenplum
HiveQL
IBM InfoSphere DataStage
Informatica Intelligent Cloud Services
Informatica PowerCenter
Looker
Matillion
MicroStrategy
Oracle
PostgreSQL
Power BI
MySQL
Netezza
SAP Analytics Cloud
SAP Hana
Snowflake
Spark SQL
Downloaded SQL files
SQL Server
SQL Server Integration Services
SSRS-PBRS
Sybase
Tableau
Teradata
Custom technical lineage
- As a technical lineage user, ensure that your Catalog Author global role has the following global permissions. With these permissions, Collibra Data Lineage can process the lineage and synchronize the results to Data Catalog to create technical lineage.
- Catalog > Advanced Data Type > Add
- Catalog > Advanced Data Type > Remove
- Catalog > Advanced Data Type > Update
- Catalog > Technical lineage
- As a Data Catalog user, ensure that your Edge integration engineer global role has the following global permissions. With these permissions, you can create connections and capabilities on Edge, configure the integration, and synchronize the integration.
- Manage connections and capabilities
- View Edge connections and capabilities
- As a Google Dataplex user, ensure that you have the following access. Use the service account of this user when you create a GCP connection so that Collibra Data Lineage can harvest lineage from Dataplex.
- Enable the Data Lineage API in Dataplex for the projects that you want to harvest lineage from. For more information, go to Data Lineage API in Google Cloud documentation.
- The Data Lineage Viewer role.
- The BigQuery Admin role if you want Collibra Data Lineage to collect lineage not only from stored procedures that you created but also from those that other Dataplex users created.
- The bigquery.jobs.get permission. For more information, go to IAM basic and predefined roles reference in the Google Cloud documentation.
- When you synchronize technical lineage for Google Dataplex, you can add Project IDs that you want to harvest lineage from. To have Project IDs available for selection when you add them, ensure that the service account has the resourcemanager.projects.get permission on the GCP projects where Dataplex is enabled. If the service account does not have this permission, you can enter the Project IDs manually on the Synchronization configuration page.
For Google BigQuery, you need the following permissions (a verification sketch follows this list):
- bigquery.datasets.get
- bigquery.tables.get
- bigquery.tables.list
- bigquery.jobs.create
- bigquery.routines.get
- bigquery.routines.list
If you are using Edge, you also need:
- resourcemanager.projects.get
- bigquery.readsessions.create
- bigquery.readsessions.getData
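To check up front that the service account holds these permissions, you can test them at project level with the Cloud Resource Manager testIamPermissions method. This is a minimal sketch, not part of the Collibra tooling; the project ID and the use of Application Default Credentials for the service account are assumptions.

```python
# Minimal sketch, assuming the google-api-python-client package and Application
# Default Credentials for the service account: test the required permissions at
# project level. "my-gcp-project" is a placeholder project ID.
from googleapiclient.discovery import build

REQUIRED_PERMISSIONS = [
    "bigquery.datasets.get",
    "bigquery.tables.get",
    "bigquery.tables.list",
    "bigquery.jobs.create",
    "bigquery.routines.get",
    "bigquery.routines.list",
    # Additional permissions when you integrate via Edge:
    "resourcemanager.projects.get",
    "bigquery.readsessions.create",
    "bigquery.readsessions.getData",
]

def missing_permissions(project_id: str) -> list[str]:
    """Return the required permissions that the active credentials do not hold."""
    crm = build("cloudresourcemanager", "v1")
    response = (
        crm.projects()
        .testIamPermissions(resource=project_id,
                            body={"permissions": REQUIRED_PERMISSIONS})
        .execute()
    )
    granted = set(response.get("permissions", []))
    return [p for p in REQUIRED_PERMISSIONS if p not in granted]

print("Missing permissions:", missing_permissions("my-gcp-project") or "none")
```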
- SELECT, at table level. Grant this on every table for which you want to create a technical lineage.
- Read access to the SYS schema or the tables in the schema.
- Compile your dbt project, by using the dbt compile command, to a local folder (see the sketch below).
- You need Monitoring role permissions.
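A minimal sketch of running the compile step from Python; the project path is a placeholder and dbt Core must be installed and configured.

```python
# Minimal sketch: run the dbt Core compile step from Python. The project path
# is a placeholder and dbt Core must be installed and configured (profiles.yml).
import subprocess
from pathlib import Path

project_dir = Path("/path/to/your/dbt/project")  # placeholder

# `dbt compile` writes compiled artifacts such as manifest.json to <project_dir>/target.
subprocess.run(["dbt", "compile", "--project-dir", str(project_dir)], check=True)

print("Compiled artifacts:",
      sorted(p.name for p in (project_dir / "target").glob("*.json")))
```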
To create technical lineage from calculated views in SAP HANA Classic on-premises, you need the following permissions:
- SELECT on the following views:
- _SYS_REPO.ACTIVE_OBJECT
- _SYS_REPO.ACTIVE_OBJECTCROSSREF
- SYS.OBJECT_DEPENDENCIES
- The CATALOG READ system privilege
To create technical lineage from calculated views in SAP HANA Cloud/Advanced, you need the following permission:
- The CATALOG READ system privilege
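A minimal sketch, assuming the hdbcli driver and placeholder connection details: connect as the harvester user and confirm the accesses listed above for SAP HANA Classic on-premises, plus the CATALOG READ privilege that both variants require.

```python
# Minimal sketch, assuming the hdbcli driver: connect as the harvester user and
# confirm the accesses needed for calculated views in SAP HANA Classic
# on-premises. Connection details are placeholders.
from hdbcli import dbapi

conn = dbapi.connect(address="hana.example.com", port=30015,
                     user="HARVESTER_USER", password="...")
cursor = conn.cursor()

# SELECT on the repository and dependency views.
for view in ['"_SYS_REPO"."ACTIVE_OBJECT"',
             '"_SYS_REPO"."ACTIVE_OBJECTCROSSREF"',
             '"SYS"."OBJECT_DEPENDENCIES"']:
    cursor.execute(f"SELECT TOP 1 * FROM {view}")
    cursor.fetchall()
    print(f"Read access confirmed for {view}")

# The CATALOG READ system privilege (the only requirement for HANA Cloud/Advanced).
cursor.execute(
    "SELECT PRIVILEGE FROM SYS.EFFECTIVE_PRIVILEGES "
    "WHERE USER_NAME = CURRENT_USER AND PRIVILEGE = 'CATALOG READ'"
)
print("CATALOG READ granted:", bool(cursor.fetchall()))

cursor.close()
conn.close()
```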
- GRANT SELECT, at table level. Grant this on every table for which you want to create a technical lineage.
- The role of the user that you specify in the username property in the lineage harvester configuration file must be the owner of the views in PostgreSQL.
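A minimal sketch, assuming psycopg2, the public schema, and a hypothetical lineage_harvester role: it grants table-level SELECT and lists views that the harvester role does not own. Connection details are placeholders.

```python
# Minimal sketch, assuming psycopg2, the public schema, and a hypothetical
# lineage_harvester role: grant table-level SELECT and list views that the
# harvester role does not own. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(host="postgres.example.com", dbname="sales",
                        user="postgres", password="...")
conn.autocommit = True
cur = conn.cursor()

# SELECT at table level for every table you want technical lineage for.
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO lineage_harvester")

# Views must be owned by the role used in the harvester configuration file.
cur.execute(
    "SELECT schemaname, viewname, viewowner FROM pg_catalog.pg_views "
    "WHERE schemaname = 'public' AND viewowner <> %s",
    ("lineage_harvester",),
)
for schema, view, owner in cur.fetchall():
    print(f"{schema}.{view} is owned by {owner}, not by the harvester role")

cur.close()
conn.close()
```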
For Oracle, you need SELECT access to the following system views:
- all_tab_cols
- all_col_comments
- all_objects
- ALL_DB_LINKS
- all_mviews
- all_source
- all_synonyms
- all_views
By default, the harvester queries the all_source table to retrieve Package bodies. However, this requires the EXECUTE privilege. As an alternative, you can direct the harvester to query the dba_source table, which requires the SELECT_CATALOG_ROLE role. To do so, you need to:
- If via Edge: Replace all_source by dba_source in the Other Queries field in your Edge capability.
- If via the CLI lineage harvester: Replace all_source by dba_source in the file ./sql/oracle/queries.sql, which is included in the ZIP file when you download the lineage harvester.
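A minimal sketch, assuming the python-oracledb driver and placeholder connection details: connect as the harvester user, confirm read access to the dictionary views listed above, and check whether SELECT_CATALOG_ROLE is granted (needed if you switch to dba_source).

```python
# Minimal sketch, assuming the python-oracledb driver: verify read access to
# the dictionary views listed above. Connection details are placeholders.
import oracledb

conn = oracledb.connect(user="HARVESTER", password="...",
                        dsn="oracle.example.com:1521/ORCLPDB1")
cursor = conn.cursor()

views = ["all_tab_cols", "all_col_comments", "all_objects", "ALL_DB_LINKS",
         "all_mviews", "all_source", "all_synonyms", "all_views"]
for view in views:
    cursor.execute(f"SELECT COUNT(*) FROM {view} WHERE ROWNUM = 1")
    cursor.fetchone()
    print(f"Read access confirmed for {view}")

# SELECT_CATALOG_ROLE is only needed if the harvester queries dba_source.
cursor.execute(
    "SELECT granted_role FROM user_role_privs "
    "WHERE granted_role = 'SELECT_CATALOG_ROLE'"
)
print("SELECT_CATALOG_ROLE granted:", cursor.fetchone() is not None)

cursor.close()
conn.close()
```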
- Your user role must have privileges to export assets.
- You must have read permission on all assets that you want to export.
- You have at least a Matillion Enterprise license.
- You have generated the Matillion certificate. For more information, go to Recreating self-signed SSL certificates on a Matillion ETL instance.
- You have added the Matillion certificate to a Java truststore. For more information about adding a certificate to a Java truststore, go to Add a Certificate to a Truststore Using Keytool.
- If you encounter the javax.net.ssl.SSLHandshakeException: General SSLEngine problem error message, go to Data Source Name Failed exception with Tableau & technical lineage in Collibra Support Portal for a solution.
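A minimal sketch of adding the Matillion certificate to a Java truststore by calling keytool from Python; the certificate path, truststore path, alias, and password are placeholders.

```python
# Minimal sketch: import the Matillion certificate into a Java truststore by
# calling keytool. Paths, alias, and the truststore password are placeholders.
import subprocess

subprocess.run(
    [
        "keytool", "-importcert",
        "-alias", "matillion",
        "-file", "/path/to/matillion.crt",
        "-keystore", "/path/to/truststore.jks",
        "-storepass", "changeit",
        "-noprompt",
    ],
    check=True,
)
```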
You can connect to Snowflake by using SQL or SQL-API.
- Ensure that the Snowflake user has the appropriate allowed host list. For details, go to Allowing Hostnames in the Snowflake documentation.
- You need a role that can access the Snowflake shared read-only database. To access the shared database, the account administrator must grant the OBJECT_VIEWER and GOVERNANCE_VIEWER database roles on the shared database to the user that runs the lineage harvester. The username of this user must be specified in the JDBC connection that you use to access Snowflake.
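A minimal sketch, assuming the snowflake-connector-python package and an ACCOUNTADMIN session: grant the shared-database roles to an account role that is assigned to the harvester user. The account identifier, credentials, and the LINEAGE_ROLE role name are placeholders.

```python
# Minimal sketch, assuming an ACCOUNTADMIN session: grant the shared-database
# roles to an account role assigned to the harvester user. Account, credentials,
# and LINEAGE_ROLE are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="ADMIN_USER",
    password="...",
    role="ACCOUNTADMIN",
)
cursor = conn.cursor()

# Database roles on the shared read-only SNOWFLAKE database.
cursor.execute("GRANT DATABASE ROLE SNOWFLAKE.OBJECT_VIEWER TO ROLE LINEAGE_ROLE")
cursor.execute("GRANT DATABASE ROLE SNOWFLAKE.GOVERNANCE_VIEWER TO ROLE LINEAGE_ROLE")

cursor.close()
conn.close()
```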
- The source code files must be in the same directory as your JSON files. For complete information, go to Working with custom technical lineage.
- To stitch the data objects of your data sources with Data Catalog assets, you need to register your data sources in Data Catalog. When you then prepare the Data Catalog physical data layer, ensure that you use a structure that matches the structure of ingested assets in Data Catalog.
- Determine whether you want to use the single-file or batch definition option.
- If you choose the single-file definition option, determine whether you want to create a simple or advanced custom technical lineage.
Collibra Data Lineage supports:
- Power BI on the Microsoft Power Platform.
- Power BI on Fabric.
You need the following roles, with user access to the server from which you want to ingest:
- A system-level role that is at least a System user role.
- An item-level role that is at least a Content Manager role.
We recommend that you use SQL Server 2019 Reporting Services or newer. We can't guarantee that older versions will work.
Before you start the Looker integration process, you need to set up Looker.
- You need Admin API permissions.
- The first call we make to MicroStrategy is to authenticate. We connect to <MSTR URL>:<Port>/MicroStrategyLibrary/api-docs/ and use GET api/auth/login. For complete information, see the MicroStrategy documentation. If this API call can be made successfully, you can ingest the metadata.
- The same connection, <MSTR URL>:<Port>/MicroStrategyLibrary/api-docs/, but with GET api/model/tables/<tableId>. For complete information, see the MicroStrategy documentation. This endpoint is needed to create lineage and stitching.
- You need permissions to access the library server.
- The lineage harvester uses port 443. If the port is not open, you also need permissions to access the repository.
- You have to configure the MicroStrategy Modeling Service. For complete information, see the MicroStrategy documentation.
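A minimal sketch, assuming the requests package: confirm that the MicroStrategy Library endpoint used for the calls above is reachable over port 443. The host name is a placeholder.

```python
# Minimal sketch, assuming the requests package: confirm that the MicroStrategy
# Library REST endpoint used by the integration is reachable over port 443.
# The host name is a placeholder.
import requests

base_url = "https://mstr.example.com:443/MicroStrategyLibrary"

# The integration first connects to <MSTR URL>:<Port>/MicroStrategyLibrary/api-docs/.
response = requests.get(f"{base_url}/api-docs/", timeout=30)
response.raise_for_status()
print("MicroStrategy Library api-docs reachable, HTTP", response.status_code)
```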
Collibra Data Lineage uses the API 4.0 endpoints GET /queries/<query_id> and GET /running_queries. Due to a security update by Looker, the behavior of these endpoints has changed. Therefore, you must now:
- Select the "Disallow Numeric Query IDs" option in Looker.
- Ensure that your Looker user has the Admin role. The Admin role has the Administer permission, which is not available in the custom permission set.
For complete information, see the Looker Query ID API Patch Notice.
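A minimal sketch, assuming the looker-sdk package and LOOKERSDK_BASE_URL, LOOKERSDK_CLIENT_ID, and LOOKERSDK_CLIENT_SECRET environment variables for an Admin user: confirm that the two API 4.0 endpoints are accessible. The query ID is a placeholder.

```python
# Minimal sketch, assuming the looker-sdk package and LOOKERSDK_* environment
# variables for an Admin user: confirm the two API 4.0 endpoints are accessible.
import looker_sdk

sdk = looker_sdk.init40()  # reads the LOOKERSDK_* environment variables

# GET /running_queries
running = sdk.all_running_queries()
print(f"{len(running)} running queries visible")

# GET /queries/<query_id>; "abc123" is a placeholder, use a query ID from your instance
query = sdk.query(query_id="abc123")
print("Query lookup succeeded for query", query.id)
```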
- By creating a shared storage connection (if you are using Edge).
- By using the folder connection method (if you are using the lineage harvester).
There are no specific permission requirements for this data source type.
There are no specific permission requirements for downloaded SQL files.