Release Notes

Important 
  • Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
  • Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.

Release 2025.04

Release Information

  • Expected release date of Collibra Data Quality & Observability 2025.04: April 28, 2025
  • Release notes publication date: April 2, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In this release (2025.04), only the Java 17 build profile of Collibra Data Quality & Observability contains all new and improved features and bug fixes listed in the release notes. The Java 8 and 11 build profiles for Standalone installations contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases.

Depending on your installation of Collibra Data Quality & Observability, you can expect the following in this release for Java 17 build profiles:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication. A minimal example of this configuration appears after this list.
  • Standalone installations
    • To install Collibra Data Quality & Observability 2025.04, you must upgrade to Java 17 and Spark 3.5.3 if you have not already done so in the 2025.02 or 2025.03 releases.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Collibra Data Quality & Observability 2025.04 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
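
For reference, here is a minimal sketch of the file-based SAML configuration described above. Only the SAML_METADATA_USE_URL variable, the owl-env.sh script and owl-web ConfigMap locations, and the file:/opt/owl/config/idp-metadata.xml value come from the notes above; the exact placement of these entries in your environment may differ.

    # owl-env.sh (Standalone) -- or the equivalent SAML_METADATA_USE_URL key in the owl-web ConfigMap
    export SAML_METADATA_USE_URL=false

    # Then, on the SAML Security Settings page, set the Meta-Data URL option to your metadata file,
    # keeping the file: prefix:
    #   Meta-Data URL: file:/opt/owl/config/idp-metadata.xml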

While this release contains Java 8, 11, and 17 builds of Collibra Data Quality & Observability for Standalone installations, this will be the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Collibra Data Quality & Observability. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Java and Spark availability by Collibra Data Quality & Observability version:

  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only).
    Important The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Collibra Data Quality & Observability Java Upgrade FAQ.

New and improved

Platform

  • FIPS-enabled standalone installations of Collibra Data Quality & Observability now support SAML authentication.
  • The DQ agent now automatically restores itself when the Metastore reconnects after a temporary disconnection due to maintenance or a glitch.

Jobs

  • We are pleased to announce that Oracle Pushdown is now available for beta testing.
  • Trino Pushdown jobs now support validate source.
  • The Profile page now correctly shows the top 5 TopN and BottomN shape results for Pullup jobs.
  • When you add a timeslice to a timeUUID data type column in a Cassandra dataset, an unsupported data type error message now appears.
  • Dremio, Snowflake, and Trino Pullup jobs now support common table expressions (CTE) for parallel JDBC processing.
  • You can now archive the break records of shapes from Trino Pushdown jobs.
  • Link IDs for exact match duplicates are no longer displayed on the Findings page.
  • We improved the insertion procedure of validate source findings into the Metastore.

Rules

  • When you add or edit a rule with an active status, SQL syntax validation now runs automatically when you save it. If the rule passes validation, it saves as expected. If validation fails, a dialog box appears, asking whether you want to continue saving with errors. Rules with an inactive status save without validation checks.
  • When a rule condition exceeds 2 lines, only the first 2 lines are shown in the Condition column of the Rules tab on the Findings page. You can click "more..." or hover over the cell to show the full condition.

Profile

  • When there are only two unique string values, the histogram on the Profile page now shows them correctly.

Findings

  • To improve page alignment across the application, the Findings page now has a page title.

Dataset Manager

  • If you don't have the required permissions to perform certain tasks in the Dataset Manager, such as updating a business unit, an error message now appears when you attempt the action.

Scorecards

  • You can now delete scorecards with names that contain trailing empty spaces, such as "scorecard " and "test scorecard ".

Integration

  • You can now search with trailing spaces when looking up a community for tenant mapping in the Collibra Data Quality & Observability and Collibra Platform integration configuration.

APIs

  • You now need the ROLE_ADMIN or ROLE_ADMIN_VIEWER role to access the /v2/getdatasetusagecount API endpoint.
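
For illustration, a request like the following sketch now succeeds only when the authenticated user holds the ROLE_ADMIN or ROLE_ADMIN_VIEWER role; the host name and token variable are placeholders, not values from this release.

    # Requires ROLE_ADMIN or ROLE_ADMIN_VIEWER; other users now receive an authorization error.
    curl -H "Authorization: Bearer $DQ_TOKEN" \
         "https://<your-dq-host>/v2/getdatasetusagecount"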

Fixes

Platform

  • We have improved the security of our application.
  • The data retention process now works as expected for all tenants in multi-tenant instances.
  • Permissions errors for SAML users without the ROLE_PUBLIC role are now resolved. You no longer need to assign ROLE_PUBLIC to users who already have other valid roles.

Jobs

  • Pushdown jobs scheduled to run concurrently now process the correlation activity correctly.
  • Pushdown jobs with queries that contain new line characters now process correctly, and the primary table from the query is shown in the Dataset Manager.

Findings

  • The confidence calculations for numerical outliers in Pullup jobs have been updated for negative values. Positive value confidence calculations and Pushdown calculations remain unchanged.

Alerts

  • Rule score-based alerts now respect rule tolerance settings. Alerts are suppressed when the rule break percentage falls within the tolerance threshold.
  • Conditional alerts now work as expected when based on rules with names that start with numbers.

Integration

  • The default setting for the integration schema, table, and column recalculation service (DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED) is now false, reducing unnecessary database activity. You can enable the service or call it through the API when needed.
  • Important In Collibra Data Quality & Observability versions 2024.11 to 2025.03, if you don't want the queries that recalculate the mapped and unmapped stats of total entities to run, set the DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED variable to false in owl-env.sh or the Web ConfigMap. You can keep DQ_INTEGRATION_SCHEDULER_ENABLED set to true. A minimal sketch of these settings appears after this list.
    For more information, go to the Collibra Support center.

  • The Quality tab for database assets is now supported in out-of-the-box aggregation path configurations.
  • The auto map feature now correctly maps schemas that contain only views.
  • Dimension configuration for integration mapping no longer shows duplicate Collibra Data Quality & Observability dimensions from the dq_dimension table.
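
As a minimal sketch of the settings mentioned above, assuming a Standalone installation that uses owl-env.sh; the same key-value pairs apply to the Web ConfigMap, and only the two variable names come from the note above.

    # owl-env.sh (or the equivalent entries in the Web ConfigMap)
    export DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED=false   # skip mapped/unmapped stats recalculation queries
    export DQ_INTEGRATION_SCHEDULER_ENABLED=true          # the integration scheduler can remain enabled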

API

  • When you copy a rule with an incorrect or non-existent dataset name using the v3/rules/copy API, an error message now specifies the invalid dataset or rule reference. This prevents invalid references in the new dataset.

Release 2025.03

Release Information

  • Release date of Collibra Data Quality & Observability 2025.03: March 31, 2025
  • Release notes publication date: March 4, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In this release (2025.03), Collibra Data Quality & Observability is only available on Java 17 and Spark 3.5.3. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Collibra Data Quality & Observability 2025.03, you must upgrade to Java 17 and Spark 3.5.3 if you have not already done so in the 2025.02 release.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Collibra Data Quality & Observability 2025.03 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will contain Java 8, 11, and 17 versions of Collibra Data Quality & Observability. This will be the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3, and will include feature enhancements and bug fixes from the 2025.02 release and critical bug fixes from the 2025.03 and 2025.04 releases. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Collibra Data Quality & Observability. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Java and Spark availability by Collibra Data Quality & Observability version:

  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only).
    Important The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Collibra Data Quality & Observability Java Upgrade FAQ.

Enhancements

Platform

  • On April 9, 2025, Google will deprecate the Vertex text-bison AI model, which SQL Assistant for Data Quality uses for the "beta path" option. To continue using SQL Assistant for Data Quality, you must switch to the "platform path," which requires an integration with Collibra Platform. For more information about how to configure the platform path, go to About SQL Assistant for Data Quality.
  • We removed support for Kafka streaming.

Connections

  • Private endpoints are now supported for Azure Data Lake Storage (Gen2) (ABFSS) key- and service principal-based authentication and Azure Blob Storage (WASBS) key-based authentication using the cloud.endpoint=<endpoint> driver property. To do this, add cloud.endpoint=<endpoint> to the Driver Properties field on the Properties tab of an ABFSS or WASBS connection template. For example, cloud.endpoint=microsoftonline.us.
  • Trino now supports parallel processing in Pullup mode. To enable this enhancement, the Trino driver has been upgraded to version 1.0.50.

Jobs

  • The Jobs tab on the Findings page now includes two button options:
    • Run Job from CMD/JSON allows you to run a job with updates made in the job's command line or JSON. In Pushdown mode, this option is Run Job from JSON because the command line option is not available.
    •  Run Job with Date allows you to select a specific run date.
    • Note The Run DQ Job button on the metadata bar retains its functionality, allowing you to rerun a job for the selected date.

  • The order of link IDs in the rule_breaks and opt_owl Metastore tables for Pushdown jobs is now aligned.
  • The options to archive the break records of associated monitors in the Explorer settings dialog box of a Pushdown job are now disabled when the Archive Break Records option is disabled at the connection level.
  • We updated the logic of the maximum global job count to ensure it only increases, rather than fluctuating based on the maximum count of the last run job's tenant. This change allows tenants with lower maximum job counts to potentially run more total jobs while still enforcing the maximum connections for individual jobs. Over time, the global job count will align with the highest limit among all tenants.
  • You can now archive the break records of shapes from SAP HANA and Trino Pushdown jobs.
  • You can now use the new "behaviorShiftCheck" element in the JSON payload of jobs on Pullup connections. This allows you to enable or disable the shift metric results of Pullup jobs, helping you avoid misleading mixed data type results in string columns. By default, the "behaviorShiftCheck" element is enabled (set to true). To disable it, use the following configuration: "behaviorShiftCheck": false.
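
For example, a Pullup job's JSON payload could carry the element as follows to turn the shift metric off. The fragment below shows only the element itself; it would sit alongside the job's other JSON settings, which are omitted here.

    {
      "behaviorShiftCheck": false
    }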

Rules

  • You can now set the MANDATORY_PRIMARY_RULE_COLUMN setting to TRUE from the Application Configuration Settings page of the Admin Console to require users to select a primary column when creating a rule. This requirement is enforced when a user creates a new rule or saves an existing rule for the first time after the setting is enabled. Existing rules are not affected automatically.
  • The names of Template rules can no longer include spaces.
  • CSV rule export files now include a Filter Query column when a rule filter is defined. If no filter is used, the column remains empty. The Condition column has been renamed to Rule Query to better distinguish between rule and filter queries. Additionally, the Passing Records column now shows the correct values.
  • You can now apply custom dimensions added to the dq_dimension table in the metastore to rules from the Rule Details dialog box on the Rule Workbench. These custom dimensions are also included in the Column Dimension Report.
  • Livy caching now uses a combination of username and connection type instead of just the username. This improvement allows you to seamlessly switch between connections to access features such as retrieving the run results previews for rules or creating new jobs for remote file connections, without manually terminating sessions.
  • Note Manually terminating a Livy session will still end all sessions associated with that user.

Findings

  • You can now work with the Findings page in full-screen view.

Scorecards

  • You now receive a helpful error message in the following scenarios:
    • Create a scorecard without including any datasets.
    • Update a scorecard and remove all its datasets.
    • Add or update a scorecard page with a name that already exists.

Fixes

Platform

  • We have improved the security of our application.
  • We mitigated the risk of SQL injection vulnerabilities in our application.
  • Helm Charts now include the external JWT properties required to configure an externally managed JWT.

Jobs

  • Google BigQuery jobs no longer fail during concurrent runs.
  • When you add a -conf setting to the agent configuration of an existing job and rerun it, the command line no longer includes duplicate -conf parameters.
  • When you expand a Snowflake connection in Explorer, the schema is now passed as a parameter in the query. This ensures the Generate Report function loads correctly.
  • Record change detection now works as expected with Databricks Pushdown datasets.
  • When you select a Parquet file in the Job Creator workflow, the Formatted view tab now shows the file’s formatted data.
  • When you edit a Pullup job from the Command Line, JSON, or Query tab, the changes initially appear only on the tab where you made the edits. After you rerun the job, the changes are reflected across all three tabs.
  • The Dataset Overview now performs additional checks to validate queries that don’t include a SELECT statement.

Rules

  • When you update the adaptive level or pass value option in the Change Detection dialog box of an adaptive rule, you must now retrain it by clicking Retrain on the Behaviors tab of the Findings page.
  • @t1 rules on file-based datasets with a row filter now return only the rows included in the filter.
  • @t1 rules on Databricks datasets no longer return a NullPointerException error.
  • When you run Rule Discovery on a dataset with the “Money” Data Class in the Data Category, the job no longer returns a syntax error when it runs.

Findings

  • We updated the time zone library. As a result, some time zone options, such as "US/Eastern," have been updated to their new format. Scheduled jobs are fully compatible with the corresponding time zones in the new library. If you need to adjust a time zone, you must use the updated format. For example, "US/Eastern" is now "America/New_York."
  • Labels under the data quality score meter are now highlighted correctly according to the selected time zone of the dataset.

Alerts

  • You no longer receive erroneous job failure alerts for successful runs. Additional checks now help determine whether a job failed, improving the accuracy of job status notifications.
  • You can now consistently select or deselect the Add Rule Details option in the Condition Alert dialog box.

Reports

  • The link to the Dataset Findings documentation topic on the Dataset Findings report now works as expected.

Connections

  • Editing an existing remote file job no longer results in an error.
  • Teradata connections now function properly without requiring you to manually add the STRICT_NAMES driver property.

APIs

  • When you run a job using the /v3/jobs/run API that was previously exported and imported with /v3/datasetDefs, the Shape settings from the original job now persist in the new job.
  • Bearer tokens generated in one environment using the /v3/auth/signin endpoint (for local users) or the /v3/auth/oauth/signin endpoint (for OAuth users) are now restricted to that specific Collibra Data Quality & Observability environment and cannot be used across other environments. See the sketch after this list.
  • We improved the security of our API endpoints.
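
As an illustration of the new token scoping, consider the following sketch. The host names, the sign-in request body fields, and the token field in the response are assumptions for the example, not documented values.

    # Sign in to environment A as a local user (body fields are illustrative).
    TOKEN=$(curl -s -X POST "https://dq-env-a.example.com/v3/auth/signin" \
      -H "Content-Type: application/json" \
      -d '{"username": "dq_user", "password": "********"}' | jq -r '.token')

    # The bearer token is accepted by environment A...
    curl -H "Authorization: Bearer $TOKEN" "https://dq-env-a.example.com/v2/getdatasetusagecount"

    # ...but is now rejected by any other environment, such as environment B.
    curl -H "Authorization: Bearer $TOKEN" "https://dq-env-b.example.com/v2/getdatasetusagecount"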

Integration

  • You can now use the automapping option to map schemas, tables, and columns when setting up an integration between Collibra Data Quality & Observability and Collibra Platform in single-tenant Collibra Data Quality & Observability environments.
  • The Quality tab now correctly shows the data quality score when the head asset of the starting relation type in the aggregation path is a generic asset or when the starting relation type is based on the co-role instead of the role of the relation type.
  • Parentheses in column names are no longer replaced with double quotes when mapped to Collibra Platform assets. This change allows automatic relations to be created between Data Quality Rule and Column assets in Collibra Platform.

Release 2025.02

Release Information

  • Release date of Collibra Data Quality & Observability 2025.02: February 24, 2025
  • Release notes publication date: January 20, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In this release (2025.02) and the March (2025.03) release, Collibra Data Quality & Observability is only available on Java 17 and Spark 3.5.3. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • You must upgrade to Java 17 and Spark 3.5.3 to install Collibra Data Quality & Observability 2025.02.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Collibra Data Quality & Observability 2025.02 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will contain Java 8, 11, and 17 versions of Collibra Data Quality & Observability. This will be the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3, and will include feature enhancements and bug fixes from the 2025.02 release and critical bug fixes from the 2025.03 and 2025.04 releases. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Collibra Data Quality & Observability. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Java and Spark availability by Collibra Data Quality & Observability version:

  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only).
    Important The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Collibra Data Quality & Observability Java Upgrade FAQ.

Enhancements

Platform

  • When an externally managed user enables assignment alert notifications and enters an email address on their user profile, findings assigned to that email address now receive alert notifications.
  • SAML_ENTITY_BASEURL is now a required property for SAML authentication.
  • To improve the security of our application, we upgraded SAML. As part of the upgrade, the following SAML authentication properties are no longer used:
    • SAML_TENANT_PROP_FROM_URL_ALIAS
    • SAML_METADATA_USE_URL
    • SAML_METADATA_TRUST_CHECK
    • SAML_INCLUDE_DISCOVERY_EXTENSION
    • SAML_SUB_DOMAIN_REPLACE_SOURCE
    • SAML_SUB_DOMAIN_REPLACE_TARGET
    • SAML_MAX_AUTH_AGE
    • Note If the SAML_LB_EXISTS property is set to true and SAML_LB_INCLUDE_PORT_IN_REQUEST is set to false, you may need to update SAML_ENTITY_BASEURL to include the port in the URL. The SAML_ENTITY_BASEURL should match the IdP's ACS URL.

      Additionally, we recommend using the Collibra DQ UI when signing in with SAML, as SP-initiated sign-on is not fully supported in Collibra DQ.

      Tip We recommend synchronizing the clocks of the Identity Provider (IdP) and Service Provider (Collibra DQ) using Network Time Protocol (NTP) to prevent authentication failures caused by significant clock skew.

Connections

  • You can now limit the schemas that are displayed in a JDBC connection in Explorer. This enhancement helps you manage usage and maintain security by restricting access to only the necessary schemas. (idea #CDQ-I-152)
  • Note When you include a restricted schema in the query of a DQ Job, the query scope may be overwritten when the job runs. While only the schemas you selected when you set up the connection are shown in the Explorer menu, users are not restricted from running SQL queries on any schema from the data source.

  • Hive connections now use the driver class name, com.cloudera.hive.jdbc.HS2Driver. Additionally, only Hive driver versions 2.6.25 and newer are supported.
  • Warning If you have a Standalone installation of Collibra Data Quality & Observability and leverage Hive connections, you must upgrade to the latest 2.6.25 Hive driver to use Collibra Data Quality & Observability 2025.02.

Jobs

  • DQ Jobs on Trino Pushdown connections now allow you to select multiple link ID columns when setting up the dupes monitor.

Rules

  • We are delighted to announce that rule filtering is now generally available. To use rule filtering, an admin needs to set RULE_FILTER to TRUE in the Application Configuration.
  • We are also pleased to announce that rule tolerance is generally available.
  • You can now define a DQ Dimension when you create a new template or edit an existing one to be applied to all new custom rules created using this template. Additionally, a Dimension column is now shown on the Templates page.
  • There is now a dedicated break_msg column in the rule_output Metastore table, which shows the break message when a rule break occurs.

Dataset Manager

  • You can now add meta tags up to 100 characters long from the Edit option under the Actions menu on the Dataset Manager. (idea #CDQ-I-74)

Fixes

Platform

  • We have improved the security of our application.
  • To improve the security of our application, Collibra Data Quality & Observability now supports a Content Security Policy (CSP).

Jobs

  • When a scheduled job runs, only its scheduled run time is displayed in the Scheduled Time column on the Jobs page.
  • When you edit and re-run a DQ Job with a source-to-target mapping of data from two different data sources, the -srcds parameter on the command line now correctly contains the src_ prefix before the source dataset, and the DQ Job re-runs successfully.
  • When you include a custom column via the source query of a source-to-target mapping, the validate overlay function no longer fails with an error message.
  • When you clone a dataset from Dataset Manager, Explorer, or the Job tab on Findings, the Job Name field and the Command Line in the Review step of Explorer now correctly reflect the "temp_cloned_dataset" naming convention for cloned datasets.
  • DQ Jobs with the same run date can no longer run concurrently.
  • Pullup DQ Jobs with names that include dashes, such as "sample-dataset," no longer remain stalled in the "staged" activity when you rerun them.
  • The correlation and histogram activities no longer cause DQ Jobs on Databricks Pushdown connections to fail sporadically.
  • The correlation activity now displays correctly in the UI and no longer causes DQ Jobs on Pushdown connections to fail.

Rules

  • When you use Freeform SQL queries with LIKE % conditions, for example, SELECT * FROM @public.test where testtext LIKE '%a, c%', they now return the expected results.
  • SQL Server job queries where the table name is escaped with brackets, for example, select * from dbo.[Table], now process correctly when the job runs.
  • The Rule Results Preview button is no longer disabled when the API call to gather the Livy status fails due to an invalid or non-existent session. The API call now correctly manages the cache for Livy sessions terminated due to idle timeout.
  • The contents of the Export Rules with Details file now include data from the new Tolerance rule setting.
  • Data type rules now evaluate the contents of each column, not just its data type, to ensure the correct breaks are returned.
  • SQL reserved keywords included in string parsing are now correctly preserved in their original case.
  • When you update the name of a rule and an alert is configured on it, the alert will now show the updated name when sent.

Alerts

  • When you set a breaking rule as passing, a rule status alert for rules with breaking statuses no longer sends erroneous error messages for that rule.

Connections

  • When you substitute a PWD value as a sensitive variable on the Variables tab of a Databricks connection template, the sensitive variable in the connection URL is now set correctly for source-to-target mappings where Databricks is the source dataset.

APIs

  • The /v3/jobs/<jobId>/breaks/rules endpoint no longer returns a 500 error when using a valid jobId. Instead, it now returns empty files when no results are found for exports without findings.
  • When you schedule a Job to run monthly or quarterly, the jobSchedule object in the /v3/datasetDefs endpoint now reflects your selection.
  • When you run the /v3/rules/{dataset}/{ruleName}/{runId}/breaks endpoint when Archive Rules Break Records is enabled, break records are now retrieved from the source system instead of the Metastore.
  • Meta tags are now correctly applied to new datasets that are created with the /v3/datasetDefs endpoint.
  • When an integration import job fails, the /v3/dgc/integrations/jobs endpoint now returns the correct “failed” status. Additionally, the integration job status “ignored” is now available.

Integration

  • An invalid RunID no longer returns a successful response when using a Pushdown dataset with the /v3/jobs/run endpoint.

Release 2025.01

Release Information

  • Release date of Collibra Data Quality & Observability 2025.01: January 27, 2025
  • Release notes publication date: December 31, 2024

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In the February 2025 (2025.02) release, Collibra Data Quality & Observability will only be available on Java 17 and Spark 3.5.3. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in the 2025.02 release:
  • Kubernetes installations: Kubernetes containers will automatically contain Java 17 and Spark 3.5.3. You may need to update your custom drivers and SAML keystore to maintain compatibility with Java 17.
  • Standalone installations: You must upgrade to Java 17 and Spark 3.5.3 to install Collibra Data Quality & Observability 2025.02. Additional upgrade guidance will be provided upon the release date. We encourage you to migrate to a Kubernetes installation, to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will have Java 8, 11, and 17 versions of Collibra Data Quality & Observability and will be the final release to contain new features on Java 8 and 11. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be included in the Java 8 and 11 versions of Collibra Data Quality & Observability.

See what is changing
Java and Spark availability by Collibra Data Quality & Observability version:

  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only).
    Important The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


Additional details on driver compatibility, SAML upgrade procedures, and more will be available alongside the 2025.02 release.

For more information, visit the Collibra Data Quality & Observability Java Upgrade FAQ.

Enhancements

Platform

  • When you access Swagger from the Admin Console or from the upper right corner of any Collibra Data Quality & Observability page, it now opens in a separate tab or window.

Connections

  • You can now authenticate Amazon S3 connections using OAuth2 with an Okta principal as a service account. This enhancement simplifies the authentication process and improves security.
  • When you add a connection, you can now use commas and equals characters in values in the Values option on the Variables tab. For example, val1=,val2 is now a supported variable value.
    • Additionally, the Driver Properties on the Properties tab now use a JSON array format instead of a comma-separated string. For example, [{"name":"prop1","value":"val1"},{"name":"prop2","value":"val2"}] is now the supported format for properties.

Jobs

  • Snowflake Pushdown connections now support source-to-target validation for datasets from the same Snowflake Pushdown connection.
  • Snowflake Pushdown connections now support all regex special characters in column names except for square brackets [ ].

Rules

  • When you hover over the break percentage (Perc) value on the Rules tab of the Findings page, a new popover message now shows the full value. This update improves visibility and makes it easier to view detailed information.
  • Tooltips are now available for the Points and Perc columns on the Rules tab of the Findings page.
  • You now see a descriptive error message if your data class rule fails to save because of missing required fields or a duplicate name when using the data class rule creator.
  • You no longer need to rerun a DQ Job after renaming a rule for its new name to be reflected across all relevant pages.

APIs

  • The POST /v3/datasetdefs endpoint now supports creating outlier and/or pattern records directly by including the outliers or patterns element in the JSON payload (dataset name is required).
    • If an ID is not provided, a new record is created.
    • If an ID is provided and matches an existing record, the existing record will be updated.
    • Alternatively, you can use the existing /v2/upsertPatternOpt and /v2/upsertOutlierOpt endpoints to create records and then reference their IDs when you use the PUT /v3/datasetdefs endpoints for updates.
  • The ControllerAdmin API now requires ROLE_ADMIN or ROLE_ADMIN_VIEWER to use the endpoints:
    • /gettotalcost
    • /gettotalbytes
  • The ControllerSecurity API now requires ROLE_ADMIN, ROLE_USER_MANAGER, or ROLE_OWL_ROLE_MANAGER to use the endpoints:
    • /getalldbuserdetails
    • /getdbuserwithroles
    • /getAllAdGroupsandRoles
  • The ControllerUsageStats API now requires ROLE_ADMIN or ROLE_ADMIN_VIEWER to use the endpoints:
    • /getuserprofilecount
    • /getcolumnscount
    • /getusercount
    • /getdatasetcount
    • /getalertcount
    • /getemailcount
    • /getfailingdataset
    • /getpassingdataset
    • /getowlcheckcount
    • /getowlusagelast30days
    • /getowlusagemonthly
    • /getrulecount
    • /gettotalowlchecks
  • We deprecated the GET /deleteappconfig endpoint. As an alternative, you can use DELETE /deleteappconfig instead.
  • We deprecated the GET /deleteadminconfig endpoint. As an alternative, you can use DELETE /deleteadminconfig instead.
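
A minimal sketch of the change for the two deprecated endpoints; the host and token are placeholders, and any query parameters the endpoints accept are unchanged and omitted here.

    # Deprecated: GET method
    curl -X GET    -H "Authorization: Bearer $DQ_TOKEN" "https://<your-dq-host>/deleteappconfig"
    curl -X GET    -H "Authorization: Bearer $DQ_TOKEN" "https://<your-dq-host>/deleteadminconfig"

    # Preferred alternative: same paths with the DELETE method
    curl -X DELETE -H "Authorization: Bearer $DQ_TOKEN" "https://<your-dq-host>/deleteappconfig"
    curl -X DELETE -H "Authorization: Bearer $DQ_TOKEN" "https://<your-dq-host>/deleteadminconfig"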

Fixes

Jobs

  • You can again preview columns on SAP HANA tables with periods (.) in their names.
  • When you run DQ Jobs from the command line, the command -srcinferschemaoff, which prevents Collibra Data Quality & Observability from inferring the schema in the validate source activity, now works as expected when files are in the source.
  • When DB_VIEWS_ON is set to TRUE on the Application Configuration page of the Admin Console, the Include Views option is now also applied when using the Mapping step for a Database connection.

Findings

  • The data preview of DQ Jobs created on file data sources that use pipe delimiters (|) now displays correctly on the Profile and Shapes tabs of the Findings page.
  • DQ Jobs with the validate source layer enabled no longer return an error when you open their Findings pages.
  • When you edit the scheduled run time of a DQ Job from the Findings page, the updated scheduled time now saves correctly.
  • We removed the sort icon for the Breaking Records and Passing Records columns on the Rules tab of the Findings page, because it was not functional.
  • When you click the Run ID link of a dataset on the Assignments Queue, the Findings page now opens to the finding type associated with the Run ID. For example, a Rule finding links to the Rules tab on the Findings page.

Profile

  • When you change an Adaptive Rule from automatic to manual scoring on the Profile page, the points value now saves correctly.
  • When you manually set the lower and upper bounds of an Adaptive Rule, you can now use decimal values, such as 10.8.
  • When you use the search function within the Profile page to search for a partial string of a column name, such as "an" to return results for the columns "exchange" and "company," the correct number of results are now returned. Previously, a maximum of 10 results were returned.
  • When a column contains special characters, the column stats section on the Profile page now formats the results correctly, ensuring better readability.

Rules

  • When you use the Archive Break Records feature and the link ID column is an alias column, Collibra Data Quality & Observability now properly identifies the link ID column to ensure the rule processes correctly.
  • Complex DQ Job query joins containing SAP HANA reserved words in SQL queries are now supported. This ensures that the query compilation processes successfully. The following SAP HANA reserved words are supported in SQL queries:
    • "union", "all", "case", "when", "then", "else", "end", "in", "current_date", "current_timestamp", "current_time", "weekday", "add_days", "add_months", "add_years", "days_between", "to_timestamp", "quarter", "year", "month", "to_char"
  • User-defined rules against DQ Jobs on Snowflake Pushdown connections no longer return a syntax error when "@" is present anywhere in the query.
  • A rule on a DQ Job with Archive Breaking Records enabled now processes successfully without returning an exception message, even when it does not reference all link IDs in its SELECT statement.
  • We temporarily removed the ability to view rule results in fullscreen on the Findings page to address an issue where rule results were missing in fullscreen. The fullscreen view will be re-added in the upcoming Collibra Data Quality & Observability 2025.03 release.
  • Int Check rules using string column types no longer flag valid integers as breaking.

    Note When displaying preview break records, if a column in the dataset has an inferred data type that differs from the defined type, the data type check is performed based on the defined type rather than the inferred type.

  • Break records are no longer shown for rules with an exception status. Additionally, an error message now provides details about the exception.

Dataset Manager

  • When you rename a dataset from the Dataset Manager, you can now use a name that exists in a different tenant but not in your current one.
  • Datasets with spaces in their names no longer lose their rules when you rename them from the Dataset Manager.

Reports

  • The time zone configurations of certain Collibra Data Quality & Observability environments no longer prevent you from properly viewing the Dataset Findings Report.
  • When you use the search filter on the DQ Check Summary Report, you no longer encounter an error when you reach records for Check Type - Rule.

Connections

  • The CDATA driver is now supported for Athena Pushdown connections.

Admin Console

  • The Security Audit Trail now correctly shows the username of the user who deleted a dataset.

APIs

  • The /v3/jobs/<jobId>/breaks/rules endpoint no longer returns a 500 error when using a valid jobId. Instead, it now returns empty files when no results are found for exports without findings.
  • The /v3/jobs/{jobID}/findings and /v3/jobs/{dataset}/{runDate}/findings endpoints now return the correct value for the passFail parameter.

Integration

  • When you integrate a dataset into Collibra Platform, then add extra columns to it and re-run the integration, the exported JSON file now correctly shows the additional columns.
  • The score tile on the Quality tab of Data Quality Rule and Data Quality Job assets now works as expected.
  • Dates on the Overview - History table and chart on the Quality tab are now arranged in chronological order.
  • The tooltip for Last Run Status on the Summary tab of Data Quality Job assets now contains the correct message.

Beta features

Rules

  • When you edit an existing rule from the Rules tab of the Findings page, you can now edit the tolerance value.
  • Rule Tolerance is now available for Pushdown connections.
  • Rule filter queries applied to rules on Pullup DQ Jobs now support secondary datasets. Filter queries on @t1 rules are not supported at this time.

Release 2024.11

Release Information

  • Release dates of Collibra Data Quality & Observability:
    • November 25, 2024: Collibra Data Quality & Observability 2024.11
    • December 11, 2024: Collibra Data Quality & Observability 2024.11.1
  • Release notes publication date: October 31, 2024

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In the February 2025 (2025.02) release, Collibra Data Quality & Observability will only be available on Java 17. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in the 2025.02 release:
  • Kubernetes installations: Kubernetes containers will automatically contain Java 17. You may need to update your custom drivers and SAML keystore to maintain compatibility with Java 17.
  • Standalone installations: You must upgrade to Java 17 to install Collibra Data Quality & Observability 2025.02. Additional upgrade guidance will be provided upon the release date. We encourage you to migrate to a Kubernetes installation, to improve the scalability and ease of future maintenance.

The March 2025 (2025.03) release will have Java 8 and 11 versions of Collibra Data Quality & Observability and will be the final release to contain new features on those Java versions. Between 2025.04 and 2025.07, only critical and high-priority bug fixes will be included in the Java 8 and 11 versions of Collibra Data Quality & Observability.

Additional details on driver compatibility, SAML upgrade procedures, and more will be available alongside the 2025.02 release.

For more information, visit the Collibra Data Quality & Observability Java Upgrade FAQ.

Enhancements

Platform

  • When querying the rule_output table in the Metastore, the rows_breaking and total_count columns now populate the correct values for each assignment_id. When a rule filter is used, the total_count column reflects the filtered number of total rows.

Integration

  • We automated connection mapping by introducing:
    • Automapping of schemas, tables, and columns.
    • The ability to view table statistics for troubleshooting unmapped or partially mapped connections.
  • Note Automapping is currently only available in multi-tenant environments. If your organization has a single-tenant environment, continue to use the manual mapping option until automapping is available in single-tenant environments in the 2025.03 Collibra Data Quality & Observability release.

  • The Quality tab is now hidden when an aggregation path is not available. (idea #DCC-I-3252)

Pushdown

  • Snowflake Pushdown connections now support source to target analysis for datasets from the same Snowflake Pushdown connection.
  • You can now monitor advanced data quality layers for SAP HANA Pushdown connections, including categorical and numerical outliers and records.
  • Trino Pushdown connections now support multiple link IDs for dupes scans.

Connections

  • We now provide out-of-the-box support for Cassandra and Denodo data source connections. You can authenticate both connection types with a basic username and password combination or with the password manager method.
  • You can now authenticate SQL Server connections with NTLM.
  • We upgraded the Snowflake JDBC driver to 3.20.0.

Jobs

  • You can now set a new variable, -rdAdj, in the command line to dynamically calculate and substitute the run date for the -rd variable at the run time of your DQ Job.
  • The metadata bar now displays the schema and table name.

Findings

  • If you assign multiple Link IDs in a Dupes configuration, each Link ID is now present in the break record preview.
  • When there are rule findings, the Breaking Records column on the Rules tab displays the number of rows that do not pass the conditions of a rule. In the Metastore, the values from the Breaking Records column are included in the rows_breaking column of the rule_output table. However, after initially upgrading to 2024.11, values in the rows_breaking column remain [NULL] until you re-run your DQ Job.
  • Important To include data from the rows_breaking column in a dashboard or report, you first need to re-run your DQ Job to populate the column with data.

Alerts

  • There are now 8 new variables that allow you to create condition alerts for the scores of findings that meet their criteria. These condition variables include:
    • behaviorscore
    • outlierscore
    • patternscore
    • sourcescore
    • recordscore
    • schemascore
    • dupescore
    • shapescore
  • Example To create an alert for shapes scores above 25, you can set the condition to shapescore > 25.

  • Job failure alerts are now sent when a DQ Job fails in the Staged or Initiation activities.

Dataset Manager

  • You can now edit and clone DQ Jobs from the Actions button in the far right column on the Dataset Manager.

Fixes

Integration

  • Data Quality Job assets now display a “No data quality score available” message when an invalid rule is selected.
  • When Collibra Data Quality & Observability cannot retrieve the columns from a table or view during the column mapping process, the column UUIDs in Collibra Platform are now used by default.

Pushdown

  • You can now run Pushdown Jobs using OAuth Tokens generated by the /v3/auth/Oauth/signin endpoint.
  • Unique adaptive rules for Pushdown Jobs with columns that contain null values no longer fail when a scheduled run occurs.
  • When you turn behavioral scoring off in the JSON definition of a DQ Job created on a Pushdown connection, behavior scores are no longer displayed.
  • When DQ Jobs created on Pushdown connections with Archive Break Records enabled run, references to link IDs in the rule query are now checked and added automatically if they are missing. This also allows you to add your own CONCAT() when using complex rules.
  • We improved the performance of DQ Jobs created on Snowflake Pushdown connections that use LIMIT 1 for data type queries.

Connections

  • We fixed a critical issue that prevented DQ Jobs on temp files from running because of a missing temp file bucket error.

Jobs

  • Backrun DQ Jobs are now included in the Stage 3 Job Logs.
  • Data Preview now works correctly when the source in the Mapping (source to target) activity is a remote file storage connection, such as Amazon S3.
  • DQ Jobs on Oracle datasets now run without errors when Parallel JDBC is enabled.
  • When using Dataset Overview to query an Oracle dataset, you no longer receive a generic "Error occurred. Please try again." error message when the source data contains a column with a "TIMESTAMP" data type.
  • When including any combination of the conformity options (Min, Mean, or Max) from the Adaptive Rules tab, the column of reference on the Shapes tab is no longer incorrectly marked “N/A” instead of “Auto.”
  • Shapes can now be detected after enabling additional Adaptive Rules beyond the default Adaptive Rules settings for file-based DQ Jobs.
  • After setting up a source to target mapping in the Mapping step of Explorer where both source and target are temp files, you no longer encounter a “Leave this Mapping” message when you click one of the arrows on the right side of the page to proceed to the next step.

Findings

  • After suppressing a behavior score for a dataset that you then use to create a scorecard, the scorecard and Findings page now reflect the same score.
  • When you suppress a behavior score and the total score is over 100, the new score is now calculated correctly.

Rules

  • Rules with spaces in the link ID column now load successfully in the Rule Breaks preview. For example, the link ID column ~|ABC ~|, where ~| is the column delimiter, now loads successfully in the Rule Breaks preview.
  • When changing a rule type from a non-Native to Native rule, the Livy banner no longer displays and the Run Result Preview button is enabled. When changing any rule type to any other rule type that is non-Native, Livy checks run and the appropriate banner displays or the Run Result Preview button is enabled.

Alerts

  • When a single rule is passing after adding 3 distinct alerts for each Rule Status trigger (Breaking, Exception, and Passing) and one alert with all 3, unexpected alerts no longer send when the DQ Job runs.
  • Batch alerts now use the same alerts queue to process as all other alert emails.

APIs

  • The /v2/getdatapreview API is now crossed out and marked as deprecated in Swagger. While this API is now deprecated, it continues to function to allow backward compatibility and functionality in legacy workflows.
  • The Swagger UI response array now includes the 204 status code, which means that a request has been successfully completed, but no response payload body will be present.

Latest UI

  • When using Dataset Overview to query an Oracle dataset, you no longer receive a generic "Error occurred. Please try again." error message when the source data contains a column with a "TIMESTAMP" data type.
  • The Adaptive Rules modal on the Findings page now allows you to filter the results to display only Adaptive or Manual Rules or display both.
  • We re-added the ability to expand the results portion of the Findings page to full screen.
  • There is now an enhanced warning message when you create an invalid Distribution Rule from the Profile page.
  • The Select Rows step of Explorer now has a tooltip next to the Standard View option to explain why it is not always available.
  • The Actions button on the Dataset Manager now includes options to edit and clone DQ Jobs.
  • The Rule Details dialog now has a tooltip next to the "Score" buttons to explain the downscoring options.
  • We consolidated the individual login buttons on the Tenant Manager page to a single button that returns you to the main login page.
  • Table headers in exported Job Logs generated from the Jobs page now display correctly.

Beta features

Rules

  • You can now apply a rule tolerance value to indicate the threshold above which your rule breaks require the most urgent attention. Because alerts associated with rules can generate many alert notifications, this helps to declutter your inbox and allows you to focus on the rule breaks that matter most to you.
  • Rule filtering is now available for Pushdown DQ Jobs.

Maintenance Updates

  • We added a new check on the flyway library to resolve issues upon upgrade to Collibra Data Quality & Observability 2024.10.
  • Denodo connections now support OAuth2 authentication.
  • You can now configure AWS passwordless authentication using Amazon RDS PostgreSQL as the Metastore.
  • Note AWS passwordless authentication is currently only supported for EC2 Instance Profile-based authentication with an Amazon RDS Metastore for Collibra Data Quality & Observability standalone and cluster-based deployments. IAM pod role-based authentication support will be available in a future release.