Release Notes

Important 
  • Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
  • Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.

Release 2025.05

Release Information

  • Expected release date of Collibra Data Quality & Observability 2025.05: June 2, 2025
  • Release notes publication date: May 6, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In this release (2025.05), Collibra Data Quality & Observability is only available on Java 17 and Spark 3.5.3. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Collibra Data Quality & Observability 2025.05, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Collibra Data Quality & Observability 2025.05 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
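The file-based SAML configuration described above can be sketched as follows. This is a minimal illustration based on the variable and path named in these notes; the exact file name and location may differ in your environment:

```sh
# owl-env.sh (Standalone): tell the web app to read SAML metadata from a file
# rather than a URL. On Kubernetes, set the same variable in the owl-web ConfigMap.
export SAML_METADATA_USE_URL=false

# Then, on the SAML Security Settings page, set the Meta-Data URL option to the
# metadata file path, keeping the file: prefix, for example:
#   file:/opt/owl/config/idp-metadata.xml
```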

Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Collibra Data Quality & Observability. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
  • 2025.01 and earlier
    Java: 8 and 11 (no Java 17)
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02
    Java: 17 only
    Spark versions: 3.5.3 only
  • 2025.03
    Java: 17 only
    Spark versions: 3.5.3 only
  • 2025.04
    Java: 8, 11, and 17
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important: The Java 8 and 11 build profiles contain only the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04; they do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains the feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05, 2025.06, 2025.07, and 2025.08
    Java: 17 only
    Spark versions: 3.5.3 only
    Fixes for the Java 8 and 11 build profiles are available only for critical and high-priority defects.


For more information, go to the Collibra Data Quality & Observability Java Upgrade FAQ.

New and improved

Platform

  • FIPS-enabled standalone installations of Collibra Data Quality & Observability now support SAML authentication.
  • You can now authenticate users via Microsoft Azure Active Directory B2C when signing into Collibra Data Quality & Observability.
  • Only users with the ROLE_ADMIN role can now see the username and connection string of JDBC connections on the Admin page and in API responses, improving connection security.
  • Metastore credentials are no longer stored in plain text when you create a DQ Job via a notebook. This improvement increases the security of your credentials.
  • We enhanced the security of Data Category validation.
  • We improved the security of our application.

Jobs

  • Trino Pushdown jobs now support validate source.
  • The Profile page now correctly shows the top 5 TopN and BottomN shape results for Pullup jobs.
  • The Run Job buttons now have two enhancements:
    • The Run Job with Date button on the Jobs tab of the Findings page is now labeled Select Run Date.
    • The Run Job button on the metadata bar now includes a helpful tooltip on hover.
  • SAP HANA connections now support the operators =>, ||, and && for scoped queries, the reserved word LIKE_REGEXPR, and the REPLACE function in scoped queries.
  • The assignments queue is now disabled for the validate source activity on new installations of Collibra Data Quality & Observability. To enable it, an admin can set valsrcdisableaq to "false" on the Admin Limits page.
  • When you add a transform to a job with histogram enabled or set to auto, the job processes as expected, and aliased columns display correctly on the Profile page.

Rules

  • Trino Pushdown jobs now support profiling and custom rules on columns of the ARRAY data type.
  • The Rule Details dialog on the Rule Workbench page now has an optional Purpose field. This allows you to add details about Collibra Platform assets associated with your rule and see which assets should relate to your Data Quality Rule asset upon integration.
  • When Archive Break Records is disabled, you can now use NULL link IDs with Athena, BigQuery, Hive, Redshift, and SQL Server connections.

Alerts

  • You can now set up Collibra Data Quality & Observability to send data quality alerts to one or more webhooks, eliminating the dependency on SMTP for email alerts.
  • You can now rename alerts on the Dataset Alerts page. When you rename an alert, the updated name is reflected in the alert_nm column of the alert_cond Metastore table. The updated name applies only to alerts generated in future job runs and does not affect historical data.
  • In addition to the existing rule name condition variable for percentages, you can now include a rule name with a value identifier, such as test_rule.score > 0, to create condition alerts based on rule scores.

Profile

  • When you retrain a breaking adaptive rule, negative values are now supported for lower and upper bounds, min, max, and mean.

Findings

  • To enhance usability, we moved the data preview from the Rules results table into a dedicated modal, making it easier to navigate and view data.
    • The Actions button is now always visible on the right side of the Rules tab, and includes the following options:
      • Archived Breaks, previously called Rule Breaks, retains the same content and functionality.
      • Preview Breaks, previously embedded under the Plus icon in the first column, shows all breaking records for a given rule. This option provides the same information that was previously available in the Rules tab and still requires the ROLE_DATASET_ACCESS role. Additionally, the Rule Break Preview modal reflects the preview limit of the rule.
    • Tooltips were added to various components on the Rules tab and Rule Break Preview modal.

Integration

  • You can now use the Map to primary column only switch on the Connections step of the integration setup wizard when integrating a rule with a primary column. This allows you to see only the column identified as the primary column related to the rule in Collibra Platform.
  • When you rename a custom rule in Collibra Data Quality & Observability with an active integration with Collibra Platform, a new asset is no longer created in Collibra Platform. Instead, renamed rules reuse the Collibra Platform asset IDs associated with the previous rule name when their metadata is integrated.

APIs

  • The /v3/datasetDefs/template API endpoint now returns the nested key-value pairs for shape settings and job schedule in the datasetDefs JSON object.
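Based on the description above, the nested objects can be consumed directly from the response. The field names in this sketch (shapeSettings, jobSchedule, and their contents) are hypothetical and only illustrate the nested structure; consult the API reference for the actual schema:

```python
import json

# Hypothetical response body from GET /v3/datasetDefs/template; the real field
# names may differ. This only illustrates that shape settings and the job
# schedule now arrive as nested key-value pairs inside the datasetDefs object.
response_body = """
{
  "datasetDefs": {
    "name": "example_dataset",
    "shapeSettings": {"enabled": true, "sensitivity": 5},
    "jobSchedule": {"cron": "0 6 * * *", "timezone": "America/New_York"}
  }
}
"""

template = json.loads(response_body)
defs = template["datasetDefs"]

# The nested key-value pairs can be read directly instead of being flattened.
print(defs["shapeSettings"]["enabled"])  # -> True
print(defs["jobSchedule"]["cron"])       # -> 0 6 * * *
```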

Admin Console

  • You can now use the previewlimit setting on the Admin Limits page to define the maximum number of preview results shown in the Dataset Overview and Rule Workbench. The default value is 250.
  • Warning: Large previewlimit values can negatively impact performance.

Fixes

Platform

  • You can now use the Bouncy Castle FIPS provider for Standalone deployments of Collibra Data Quality & Observability.
  • All datasets are now available when you run a dataset rule fetch with dataset security disabled for the admin user account.

Jobs

  • When you remove a table from a manually triggered or scheduled job, you now receive a more descriptive error stating that the table does not exist instead of a generic syntax error.
  • You can now use Amazon S3 as a secondary dataset name for a rule query.
  • You can now rerun a job for a selected date using the Run DQ Job button on the metadata bar of the Findings page.
  • The command line now retains the created query, including the ${rd} parameter, when you run a job using the Run DQ Job button on the metadata bar.
  • The controller response for queries to the alert_cond table in the Metastore now maps internal objects correctly.

Rules

  • You can again save rules on Pushdown jobs that exceed ⅓ of a buffer page.
  • Pushdown rules that use stat variables now run with the correct information from the current job, rather than using data from previous job runs.
  • The DOUBLECHECK rule on PostgreSQL and Redshift Pullup jobs now flags rows with negative double values as valid.
  • The Result Preview in the Rule Workbench no longer produces an error when you use out-of-the-box templates.

Profile

  • You can now compare the Baseline and Run values for the "Filled rate" when the Quality switch is enabled for a column on the Profile report.

Findings

  • Pushdown jobs run on Snowflake connections now show as failed on the Findings page if a password retrieval issue occurs.

Dataset Manager

  • When you apply an option from the Actions drop-down list, such as deleting a dataset, the action is now applied to the correct dataset.

Admin Console

  • When you select a submenu option from the Admin Console menu, the submenu section now remains open.

Release 2025.04

Release Information

  • Release date of Collibra Data Quality & Observability 2025.04: April 28, 2025
  • Release notes publication date: April 2, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In this release (2025.04), only the Java 17 build profile of Collibra Data Quality & Observability contains all new and improved features and bug fixes listed in the release notes. The Java 8 and 11 build profiles for Standalone installations contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases.

Depending on your installation of Collibra Data Quality & Observability, you can expect the following in this release for Java 17 build profiles:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Collibra Data Quality & Observability 2025.04, you must upgrade to Java 17 and Spark 3.5.3 if you did not already do so in the 2025.02 or 2025.03 release.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Collibra Data Quality & Observability 2025.04 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

While this release contains Java 8, 11, and 17 builds of Collibra Data Quality & Observability for Standalone installations, it is the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Collibra Data Quality & Observability.

For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
  • 2025.01 and earlier
    Java: 8 and 11 (no Java 17)
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02
    Java: 17 only
    Spark versions: 3.5.3 only
  • 2025.03
    Java: 17 only
    Spark versions: 3.5.3 only
  • 2025.04
    Java: 8, 11, and 17
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important: The Java 8 and 11 build profiles contain only the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04; they do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains the feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05, 2025.06, 2025.07, and 2025.08
    Java: 17 only
    Spark versions: 3.5.3 only
    Fixes for the Java 8 and 11 build profiles are available only for critical and high-priority defects.


For more information, go to the Collibra Data Quality & Observability Java Upgrade FAQ.

New and improved

Platform

  • The Platform Path of the SQL Assistant for Data Quality feature now uses the Gemini AI model version gemini-1.5-pro-002.
  • Important: To continue using the Platform Path of the SQL Assistant for Data Quality feature, you must upgrade to Collibra Data Quality & Observability 2025.04.

  • FIPS-enabled standalone installations of Collibra Data Quality & Observability now support SAML authentication.
  • The DQ agent now automatically restores itself when the Metastore reconnects after a temporary disconnection, such as one caused by maintenance or a transient outage.

Jobs

  • We are pleased to announce that Oracle Pushdown is now available for beta testing.
  • When you add a timeslice to a timeUUID data type column in a Cassandra dataset, an unsupported data type error message now appears.
  • Dremio, Snowflake, and Trino Pullup jobs now support common table expressions (CTE) for parallel JDBC processing.
  • You can now archive the break records of shapes from Trino Pushdown jobs.
  • Link IDs for exact match duplicates are no longer displayed on the Findings page.
  • We improved the insertion procedure of validate source findings into the Metastore.

Rules

  • When you add or edit a rule with an active status, SQL syntax validation now runs automatically when you save it. If the rule passes validation, it saves as expected. If validation fails, a dialog box appears, asking whether you want to continue saving with errors. Rules with an inactive status save without validation checks.
  • When a rule condition exceeds 2 lines, only the first 2 lines are shown in the Condition column of the Rules tab on the Findings page. You can click "more..." or hover over the cell to show the full condition.

Profile

  • The Profile page now correctly shows the top 5 TopN and BottomN shape results for Pullup jobs.
  • When there are only two unique string values, the histogram on the Profile page now shows them correctly.

Findings

  • To improve page alignment across the application, the Findings page now has a page title.

Alerts

  • Rule score-based alerts now respect rule tolerance settings. Alerts are suppressed when the rule break percentage falls within the tolerance threshold.

Dataset Manager

  • If you don't have the required permissions to perform certain tasks in the Dataset Manager, such as updating a business unit, an error message now appears when you attempt the action.

Scorecards

  • You can now delete scorecards with names that contain trailing empty spaces, such as "scorecard " and "test scorecard ".

Integration

  • You can now search with trailing spaces when looking up a community for tenant mapping in the Collibra Data Quality & Observability and Collibra Platform integration configuration.

APIs

  • You now need the ROLE_ADMIN or ROLE_ADMIN_VIEWER role to access the /v2/getdatasetusagecount API endpoint.

Fixes

Platform

  • We have improved the security of our application.
  • The data retention process now works as expected for all tenants in multi-tenant instances.
  • Permissions errors for SAML users without the ROLE_PUBLIC role are now resolved. You no longer need to assign ROLE_PUBLIC to users who already have other valid roles.

Jobs

  • Native rules, secondary dataset rules, and scheduled jobs on Databricks Pullup connections authenticated via EntraID Service Principal now run as expected.
  • Pushdown jobs scheduled to run concurrently now process the correlation activity correctly.
  • Pushdown jobs with queries that contain new line characters now process correctly, and the primary table from the query is shown in the Dataset Manager.

Findings

  • The confidence calculations for numerical outliers in Pullup jobs have been updated for negative values. Positive value confidence calculations and Pushdown calculations remain unchanged.

Alerts

  • Conditional alerts now work as expected when based on rules with names that start with numbers.

Integration

  • The default setting for the integration schema, table, and column recalculation service (DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED) is now false, reducing unnecessary database activity. You can enable the service or call it through the API when needed.
  • Important: In versions 2024.11 to 2025.03 of Collibra Data Quality & Observability, if you don't want queries from the recalculation of mapped and unmapped stats of total entities to run, set DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED to false in the owl-env.sh script or the Web ConfigMap. You can keep DQ_INTEGRATION_SCHEDULER_ENABLED set to true.
    For more information, go to the Collibra Support center.

  • The Quality tab for database assets is now supported in out-of-the-box aggregation path configurations.
  • The auto map feature now correctly maps schemas that contain only views.
  • Dimension configuration for integration mapping no longer shows duplicate Collibra Data Quality & Observability dimensions from the dq_dimension table.
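The scheduler variables described above can be set as a minimal config fragment; the values shown reflect the recommended combination from these notes (mapping-stats recalculation off, integration scheduler on):

```sh
# owl-env.sh (Standalone) or the Web ConfigMap (Kubernetes): disable the
# mapping-stats recalculation service while keeping the integration scheduler running.
export DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED=false
export DQ_INTEGRATION_SCHEDULER_ENABLED=true
```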

API

  • When you copy a rule with an incorrect or non-existent dataset name using the v3/rules/copy API, an error message now specifies the invalid dataset or rule reference. This prevents invalid references in the new dataset.

Release 2025.03

Release Information

  • Release date of Collibra Data Quality & Observability 2025.03: March 31, 2025
  • Release notes publication date: March 4, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In this release (2025.03), Collibra Data Quality & Observability is only available on Java 17 and Spark 3.5.3. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Collibra Data Quality & Observability 2025.03, you must upgrade to Java 17 and Spark 3.5.3 if you have not already done so in the 2025.02 release.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Collibra Data Quality & Observability 2025.03 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will contain Java 8, 11, and 17 versions of Collibra Data Quality & Observability. This will be the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3, and will include feature enhancements and bug fixes from the 2025.02 release and critical bug fixes from the 2025.03 and 2025.04 releases. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Collibra Data Quality & Observability. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
  • 2025.01 and earlier
    Java: 8 and 11 (no Java 17)
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02
    Java: 17 only
    Spark versions: 3.5.3 only
  • 2025.03
    Java: 17 only
    Spark versions: 3.5.3 only
  • 2025.04
    Java: 8, 11, and 17
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important: The Java 8 and 11 build profiles contain only the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04; they do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains the feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05, 2025.06, 2025.07, and 2025.08
    Java: 17 only
    Spark versions: 3.5.3 only
    Fixes for the Java 8 and 11 build profiles are available only for critical and high-priority defects.


For more information, go to the Collibra Data Quality & Observability Java Upgrade FAQ.

Enhancements

Platform

  • On April 9, 2025, Google will deprecate the Vertex text-bison AI model, which SQL Assistant for Data Quality uses for the "beta path" option. To continue using SQL Assistant for Data Quality, you must switch to the "platform path," which requires an integration with Collibra Platform. For more information about how to configure the platform path, go to About SQL Assistant for Data Quality.
  • We removed support for Kafka streaming.

Connections

  • Private endpoints are now supported for Azure Data Lake Storage (Gen2) (ABFSS) key- and service principal-based authentication and Azure Blob Storage (WASBS) key-based authentication using the cloud.endpoint=<endpoint> driver property. To do this, add cloud.endpoint=<endpoint> to the Driver Properties field on the Properties tab of an ABFSS or WASBS connection template. For example, cloud.endpoint=microsoftonline.us.
  • Trino now supports parallel processing in Pullup mode. To enable this enhancement, the Trino driver has been upgraded to version 1.0.50.

Jobs

  • The Jobs tab on the Findings page now includes two button options:
    • Run Job from CMD/JSON allows you to run a job with updates made in the job's command line or JSON. In Pushdown mode, this option is labeled Run Job from JSON, because the command line option is not available.
    • Run Job with Date allows you to select a specific run date.
    • Note: The Run DQ Job button on the metadata bar retains its functionality, allowing you to rerun a job for the selected date.

  • The order of link IDs in the rule_breaks and opt_owl Metastore tables for Pushdown jobs is now aligned.
  • The options to archive the break records of associated monitors in the Explorer settings dialog box of a Pushdown job are now disabled when the Archive Break Records option is disabled at the connection level.
  • We updated the logic of the maximum global job count to ensure it only increases, rather than fluctuating based on the maximum count of the last run job's tenant. This change allows tenants with lower maximum job counts to potentially run more total jobs while still enforcing the maximum connections for individual jobs. Over time, the global job count will align with the highest limit among all tenants.
  • You can now archive the break records of shapes from SAP HANA and Trino Pushdown jobs.
  • You can now use the new "behaviorShiftCheck" element in the JSON payload of jobs on Pullup connections. This allows you to enable or disable the shift metric results of Pullup jobs, helping you avoid misleading mixed data type results in string columns. By default, the "behaviorShiftCheck" element is enabled (set to true). To disable it, use the following configuration: "behaviorShiftCheck": false.
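A Pullup job payload that disables the shift metric might look like this minimal sketch; every field except "behaviorShiftCheck" is illustrative, not part of the documented schema:

```json
{
  "dataset": "example_dataset",
  "runDate": "2025-03-01",
  "behaviorShiftCheck": false
}
```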

Rules

  • You can now set the MANDATORY_PRIMARY_RULE_COLUMN setting to TRUE from the Application Configuration Settings page of the Admin Console to require users to select a primary column when creating a rule. This requirement is enforced when a user creates a new rule or saves an existing rule for the first time after the setting is enabled. Existing rules are not affected automatically.
  • The names of Template rules can no longer include spaces.
  • CSV rule export files now include a Filter Query column when a rule filter is defined. If no filter is used, the column remains empty. The Condition column has been renamed to Rule Query to better distinguish between rule and filter queries. Additionally, the Passing Records column now shows the correct values.
  • You can now apply custom dimensions added to the dq_dimension table in the metastore to rules from the Rule Details dialog box on the Rule Workbench. These custom dimensions are also included in the Column Dimension Report.
  • Livy caching now uses a combination of username and connection type instead of just the username. This improvement allows you to seamlessly switch between connections to access features such as retrieving the run results previews for rules or creating new jobs for remote file connections, without manually terminating sessions.
  • Note: Manually terminating a Livy session will still end all sessions associated with that user.
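The caching change above can be illustrated with a minimal sketch; the names and session-creation logic are illustrative, not Collibra internals:

```python
# Before: sessions were cached by username alone, so switching connection
# types reused another connection's session and forced manual termination.
# After: the cache key combines username and connection type.

sessions: dict[tuple[str, str], str] = {}

def get_session(username: str, connection_type: str) -> str:
    """Return the cached Livy session for this user/connection pair,
    creating one if none exists yet."""
    key = (username, connection_type)
    if key not in sessions:
        sessions[key] = f"livy-session-{username}-{connection_type}"
    return sessions[key]

# The same user now holds independent sessions per connection type, so
# switching between connections does not require terminating sessions.
s3_session = get_session("adugas", "s3")
hdfs_session = get_session("adugas", "hdfs")
print(s3_session != hdfs_session)  # -> True
```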

Findings

  • You can now work with the Findings page in full-screen view.

Scorecards

  • You now receive a helpful error message in the following scenarios:
    • Create a scorecard without including any datasets.
    • Update a scorecard and remove all its datasets.
    • Add or update a scorecard page with a name that already exists.

Fixes

Platform

  • We have improved the security of our application.
  • We mitigated the risk of SQL injection vulnerabilities in our application.
  • Helm Charts now include the external JWT properties required to configure an externally managed JWT.

Jobs

  • Google BigQuery jobs no longer fail during concurrent runs.
  • When you add a -conf setting to the agent configuration of an existing job and rerun it, the command line no longer includes duplicate -conf parameters.
  • When you expand a Snowflake connection in Explorer, the schema is now passed as a parameter in the query. This ensures the Generate Report function loads correctly.
  • Record change detection now works as expected with Databricks Pushdown datasets.
  • When you select a Parquet file in the Job Creator workflow, the Formatted view tab now shows the file’s formatted data.
  • When you edit a Pullup job from the Command Line, JSON, or Query tab, the changes initially appear only on the tab where you made the edits. After you rerun the job, the changes are reflected across all three tabs.
  • The Dataset Overview now performs additional checks to validate queries that don’t include a SELECT statement.

Rules

  • When you update the adaptive level or pass value option in the Change Detection dialog box of an adaptive rule, you must now retrain it by clicking Retrain on the Behaviors tab of the Findings page.
  • @t1 rules on file-based datasets with a row filter now return only the rows included in the filter.
  • @t1 rules on Databricks datasets no longer return a NullPointerException error.
  • When you run Rule Discovery on a dataset with the “Money” Data Class in the Data Category, the job no longer returns a syntax error when it runs.

Findings

  • We updated the time zone library. As a result, some time zone options, such as "US/Eastern," have been updated to their new format. Scheduled jobs are fully compatible with the corresponding time zones in the new library. If you need to adjust a time zone, you must use the updated format. For example, "US/Eastern" is now "America/New_York."
  • Labels under the data quality score meter are now highlighted correctly according to the selected time zone of the dataset.
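The renamed options follow the canonical IANA time zone database names. As a quick illustration (Python is used here purely for demonstration and is not part of the product), the legacy alias and the new canonical name resolve to the same zone:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# "US/Eastern" is a legacy alias in the IANA time zone database;
# "America/New_York" is the canonical name to use going forward.
legacy = ZoneInfo("US/Eastern")
canonical = ZoneInfo("America/New_York")

dt = datetime(2025, 6, 2, 9, 0)
# Both identifiers resolve to the same UTC offset (EDT, UTC-4, in June).
assert dt.replace(tzinfo=legacy).utcoffset() == dt.replace(tzinfo=canonical).utcoffset() == timedelta(hours=-4)
print("US/Eastern and America/New_York are equivalent")
```

Because the alias and the canonical name describe the same zone, schedules keep firing at the same wall-clock time; only the identifier you type changes.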

Alerts

  • You no longer receive erroneous job failure alerts for successful runs. Additional checks now help determine whether a job failed, improving the accuracy of job status notifications.
  • You can now consistently select or deselect the Add Rule Details option in the Condition Alert dialog box.

Reports

  • The link to the Dataset Findings documentation topic on the Dataset Findings report now works as expected.

Connections

  • Editing an existing remote file job no longer results in an error.
  • Teradata connections now function properly without requiring you to manually add the STRICT_NAMES driver property.

APIs

  • When you run a job using the /v3/jobs/run API that was previously exported and imported with /v3/datasetDefs, the Shape settings from the original job now persist in the new job.
  • Bearer tokens generated in one environment using the /v3/auth/signin endpoint (for local users) or the /v3/auth/oauth/signin endpoint (for OAuth users) are now restricted to that specific Collibra Data Quality & Observability environment and cannot be used across other environments.
  • We improved the security of our API endpoints.

Integration

  • You can now use the automapping option to map schemas, tables, and columns when setting up an integration between Collibra Data Quality & Observability and Collibra Platform in single-tenant Collibra Data Quality & Observability environments.
  • The Quality tab now correctly shows the data quality score when the head asset of the starting relation type in the aggregation path is a generic asset or when the starting relation type is based on the co-role instead of the role of the relation type.
  • Parentheses in column names are no longer replaced with double quotes when mapped to Collibra Platform assets. This change allows automatic relations to be created between Data Quality Rule and Column assets in Collibra Platform.

Release 2025.02

Release Information

  • Release date of Collibra Data Quality & Observability 2025.02: February 24, 2025
  • Release notes publication date: January 20, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In this release (2025.02) and the March (2025.03) release, Collibra Data Quality & Observability is only available on Java 17 and Spark 3.5.3. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • You must upgrade to Java 17 and Spark 3.5.3 to install Collibra Data Quality & Observability 2025.02.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Collibra Data Quality & Observability 2025.02 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will contain Java 8, 11, and 17 versions of Collibra Data Quality & Observability. This will be the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3, and will include feature enhancements and bug fixes from the 2025.02 release and critical bug fixes from the 2025.03 and 2025.04 releases. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Collibra Data Quality & Observability. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
  • 2025.01 and earlier (Java 8: Yes; Java 11: Yes; Java 17: No)
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
  • 2025.03 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
  • 2025.04 (Java 8: Yes; Java 11: Yes; Java 17: Yes)
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Collibra Data Quality & Observability Java Upgrade FAQ.

Enhancements

Platform

  • When an externally managed user enables assignment alert notifications and enters an email address on their user profile, findings assigned to that email address now receive alert notifications.
  • SAML_ENTITY_BASEURL is now a required property for SAML authentication.
  • To improve the security of our application, we upgraded SAML. As part of the upgrade, the following SAML authentication properties are no longer used:
    • SAML_TENANT_PROP_FROM_URL_ALIAS
    • SAML_METADATA_USE_URL
    • SAML_METADATA_TRUST_CHECK
    • SAML_INCLUDE_DISCOVERY_EXTENSION
    • SAML_SUB_DOMAIN_REPLACE_SOURCE
    • SAML_SUB_DOMAIN_REPLACE_TARGET
    • SAML_MAX_AUTH_AGE
    • Note If the SAML_LB_EXISTS property is set to true and SAML_LB_INCLUDE_PORT_IN_REQUEST is set to false, you may need to update SAML_ENTITY_BASEURL to include the port in the URL. SAML_ENTITY_BASEURL should match the IdP's ACS URL.

      Additionally, we recommend using the Collibra DQ UI when signing in with SAML, as SP-initiated sign-on is not fully supported in Collibra DQ.

      Tip We recommend synchronizing the clocks of the Identity Provider (IdP) and Service Provider (Collibra DQ) using Network Time Protocol (NTP) to prevent authentication failures caused by significant clock skew.
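As a sketch of the note above, an owl-env.sh style configuration fragment might look like the following. The host and port are placeholders, not values from this document; only the three property names come from this release note:

```shell
# Hedged sketch (Standalone installs; hostname and port are placeholders).
# When SAML_LB_EXISTS=true and SAML_LB_INCLUDE_PORT_IN_REQUEST=false,
# include the port in SAML_ENTITY_BASEURL so it matches the ACS URL
# registered with your IdP.
export SAML_LB_EXISTS="true"
export SAML_LB_INCLUDE_PORT_IN_REQUEST="false"
export SAML_ENTITY_BASEURL="https://dq.example.com:9000"
```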

Connections

  • You can now limit the schemas that are displayed in a JDBC connection in Explorer. This enhancement helps you manage usage and maintain security by restricting access to only the necessary schemas. (idea #CDQ-I-152)
  • Note When you include a restricted schema in the query of a DQ Job, the query scope may be overwritten when the job runs. While only the schemas you selected when you set up the connection are shown in the Explorer menu, users are not restricted from running SQL queries on any schema from the data source.

  • Hive connections now use the driver class name, com.cloudera.hive.jdbc.HS2Driver. Additionally, only Hive versions 2.6.25 and newer are supported.
  • Warning If you have a Standalone installation of Collibra Data Quality & Observability and leverage Hive connections, you must upgrade to the latest 2.6.25 Hive driver to use Collibra Data Quality & Observability 2025.02.

Jobs

  • DQ Jobs on Trino Pushdown connections now allow you to select multiple link ID columns when setting up the dupes monitor.

Rules

  • We are delighted to announce that rule filtering is now generally available. To use rule filtering, an admin needs to set RULE_FILTER to TRUE in the Application Configuration.
  • We are also pleased to announce that rule tolerance is generally available. However, the RULE_TOLERANCE_BETA setting must remain set to TRUE in the Application Configuration until the 2025.04 version.
  • You can now define a DQ Dimension when you create a new template or edit an existing one to be applied to all new custom rules created using this template. Additionally, a Dimension column is now shown on the Templates page.
  • There is now a dedicated break_msg column in the rule_output Metastore table, which shows the break message when a rule break occurs.
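For example, the new column can be inspected directly in the Metastore. The sketch below is hypothetical: the rule_output table and break_msg column come from this note, but the other column names (dataset, rule_nm, run_id) are assumptions about the Metastore schema and may differ in your environment:

```python
# Hypothetical Metastore query; only rule_output and break_msg are
# confirmed above -- dataset, rule_nm, and run_id are assumed columns.
query = """
SELECT dataset, rule_nm, run_id, break_msg
FROM rule_output
WHERE break_msg IS NOT NULL
ORDER BY run_id DESC
"""
print(query.strip())
```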

Dataset Manager

  • You can now add meta tags up to 100 characters long from the Edit option under the Actions menu on the Dataset Manager. (idea #CDQ-I-74)

Fixes

Platform

  • We have improved the security of our application.
  • To improve the security of our application, Collibra Data Quality & Observability now supports a Content Security Policy (CSP).

Jobs

  • When a scheduled job runs, the Scheduled Time column on the Jobs page now displays only the time the job was scheduled to run.
  • When you edit and re-run a DQ Job with a source-to-target mapping of data from two different data sources, the -srcds parameter on the command line now correctly contains the src_ prefix before the source dataset, and the DQ Job re-runs successfully.
  • When you include a custom column via the source query of a source-to-target mapping, the validate overlay function no longer fails with an error message.
  • When you clone a dataset from Dataset Manager, Explorer, or the Job tab on Findings, the Job Name field and the Command Line in the Review step of Explorer now correctly reflect the "temp_cloned_dataset" naming convention for cloned datasets.
  • DQ Jobs with the same run date can no longer run concurrently.
  • Pullup DQ Jobs with names that include dashes, such as "sample-dataset," no longer remain stalled in the "staged" activity when you rerun them.
  • The correlation and histogram activities no longer cause DQ Jobs on Databricks Pushdown connections to fail sporadically.
  • The correlation activity now displays correctly in the UI and no longer causes DQ Jobs on Pushdown connections to fail.

Rules

  • When you use Freeform SQL queries with LIKE % conditions, for example, SELECT * FROM @public.test where testtext LIKE '%a, c%', they now return the expected results.
  • SQL Server job queries where the table name is escaped with brackets, for example, select * from dbo.[Table], now process correctly when the job runs.
  • The Rule Results Preview button is no longer disabled when the API call to gather the Livy status fails due to an invalid or non-existent session. The API call now correctly manages the cache for Livy sessions terminated due to idle timeout.
  • The contents of the Export Rules with Details now include data from the new Tolerance rule setting.
  • Data type rules now evaluate the contents of each column, not just the data type of each column, to ensure the correct breaks are returned.
  • SQL reserved keywords included in string parsing are now correctly preserved in their original case.
  • When you update the name of a rule and an alert is configured on it, the alert will now show the updated name when sent.

Alerts

  • When you set a breaking rule as passing, a rule status alert for rules with breaking statuses no longer sends erroneous error messages for that rule.

Connections

  • When you substitute a PWD value as a sensitive variable on the Variables tab of a Databricks connection template, the sensitive variable in the connection URL is now set correctly for source-to-target mappings where Databricks is the source dataset.

APIs

  • The /v3/jobs/<jobId>/breaks/rules endpoint no longer returns a 500 error when using a valid jobId. Instead, it now returns empty files when no results are found for exports without findings.
  • When you schedule a Job to run monthly or quarterly, the jobSchedule object in the /v3/datasetDefs endpoint now reflects your selection.
  • When you run the /v3/rules/{dataset}/{ruleName}/{runId}/breaks endpoint when Archive Rules Break Records is enabled, break records are now retrieved from the source system instead of the Metastore.
  • Meta tags are now correctly applied to new datasets that are created with the /v3/datasetDefs endpoint.
  • When an integration import job fails, the /v3/dgc/integrations/jobs endpoint now returns the correct “failed” status. Additionally, the integration job status “ignored” is now available.

Integration

  • An invalid RunID no longer returns a successful response when using a Pushdown dataset with the /v3/jobs/run endpoint.

Release 2025.01

Release Information

  • Release date of Collibra Data Quality & Observability 2025.01: January 27, 2025
  • Release notes publication date: December 31, 2024

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Collibra Data Quality & Observability, effective in the August 2025 (2025.08) release.

In the February 2025 (2025.02) release, Collibra Data Quality & Observability will only be available on Java 17 and Spark 3.5.3. Depending on your installation of Collibra Data Quality & Observability, you can expect the following in the 2025.02 release:
  • Kubernetes installations: Kubernetes containers will automatically contain Java 17 and Spark 3.5.3. You may need to update your custom drivers and SAML keystore to maintain compatibility with Java 17.
  • Standalone installations: You must upgrade to Java 17 and Spark 3.5.3 to install Collibra Data Quality & Observability 2025.02. Additional upgrade guidance will be provided upon the release date. We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will have Java 8, 11, and 17 versions of Collibra Data Quality & Observability and will be the final release to contain new features on Java 8 and 11. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be included in the Java 8 and 11 versions of Collibra Data Quality & Observability.

See what is changing
  • 2025.01 and earlier (Java 8: Yes; Java 11: Yes; Java 17: No)
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
  • 2025.03 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
  • 2025.04 (Java 8: Yes; Java 11: Yes; Java 17: Yes)
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08 (Java 8: No; Java 11: No; Java 17: Yes)
    Spark versions: 3.5.3 only
    Note Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


Additional details on driver compatibility, SAML upgrade procedures, and more will be available alongside the 2025.02 release.

For more information, visit the Collibra Data Quality & Observability Java Upgrade FAQ.

Enhancements

Platform

  • When you access Swagger from the Admin Console or from the upper right corner of any Collibra Data Quality & Observability page, it now opens in a separate tab or window.

Connections

  • You can now authenticate Amazon S3 connections using OAuth2 with an Okta principal as a service account. This enhancement simplifies the authentication process and improves security.
  • When you add a connection, you can now use commas and equals characters in values in the Values option on the Variables tab. For example, val1=,val2 is now a supported variable value.
    • Additionally, the Driver Properties on the Properties tab now use a JSON array format instead of a comma-separated string. For example, [{"name":"prop1","value":"val1"},{"name":"prop2","value":"val2"}] is now the supported format for properties.
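The change from the legacy comma-separated string to the JSON array can be sketched as follows. This minimal Python illustration converts a legacy value to the new format; because legacy values could not contain commas, a plain split is safe here:

```python
import json

# Convert a legacy comma-separated driver property string, e.g.
# "prop1=val1,prop2=val2", to the JSON array format now expected
# on the Properties tab of a connection.
legacy = "prop1=val1,prop2=val2"
props = [
    {"name": name, "value": value}
    for name, value in (pair.split("=", 1) for pair in legacy.split(","))
]
print(json.dumps(props))
# → [{"name": "prop1", "value": "val1"}, {"name": "prop2", "value": "val2"}]
```

The array format removes the ambiguity that motivated this change: a value containing a comma or an equals sign no longer collides with the delimiters of the property string itself.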

Jobs

  • Snowflake Pushdown connections now support source-to-target validation for datasets from the same Snowflake Pushdown connection.
  • Snowflake Pushdown connections now support all regex special characters in column names except for square brackets [ ].

Rules

  • When you hover over the break percentage (Perc) value on the Rules tab of the Findings page, a new popover message now shows the full value. This update improves visibility and makes it easier to view detailed information.
  • Tooltips are now available for the Points and Perc columns on the Rules tab of the Findings page.
  • You now see a descriptive error message if your data class rule fails to save because of missing required fields or a duplicate name when using the data class rule creator.
  • You no longer need to rerun a DQ Job after renaming a rule for its new name to be reflected across all relevant pages.

APIs

  • The POST /v3/datasetdefs endpoint now supports creating outlier and/or pattern records directly by including the outliers or patterns element in the JSON payload (dataset name is required).
    • If an ID is not provided, a new record is created.
    • If an ID is provided and matches an existing record, the existing record will be updated.
    • Alternatively, you can use the existing /v2/upsertPatternOpt and /v2/upsertOutlierOpt endpoints to create records and then reference their IDs when you use the PUT /v3/datasetdefs endpoints for updates.
  • The ControllerAdmin API now requires ROLE_ADMIN or ROLE_ADMIN_VIEWER to use the endpoints:
    • /gettotalcost
    • /gettotalbytes
  • The ControllerSecurity API now requires ROLE_ADMIN, ROLE_USER_MANAGER, or ROLE_OWL_ROLE_MANAGER to use the endpoints:
    • /getalldbuserdetails
    • /getdbuserwithroles
    • /getAllAdGroupsandRoles
  • The ControllerUsageStats API now requires ROLE_ADMIN or ROLE_ADMIN_VIEWER to use the endpoints:
    • /getuserprofilecount
    • /getcolumnscount
    • /getusercount
    • /getdatasetcount
    • /getalertcount
    • /getemailcount
    • /getfailingdataset
    • /getpassingdataset
    • /getowlcheckcount
    • /getowlusagelast30days
    • /getowlusagemonthly
    • /getrulecount
    • /gettotalowlchecks
  • We deprecated the GET /deleteappconfig endpoint. As an alternative, use DELETE /deleteappconfig instead.
  • We deprecated the GET /deleteadminconfig endpoint. As an alternative, use DELETE /deleteadminconfig instead.
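To make the outlier and pattern payload shape for POST /v3/datasetdefs concrete, here is a hypothetical sketch of a request body. Only the dataset-name requirement, the outliers/patterns elements, and the create-vs-update behavior of the id field are documented above; every other field name is an illustrative assumption, not the actual schema:

```python
import json

# Hypothetical POST /v3/datasetdefs payload sketch.
payload = {
    "dataset": "my_dataset",  # required: the dataset name
    "outliers": [
        {
            # No "id" supplied: a new outlier record is created.
            "columnName": "sales_amount",  # assumed field name
        },
        {
            # An existing "id" supplied: that record is updated instead.
            "id": 42,  # assumed identifier value
        },
    ],
}
print(json.dumps(payload, indent=2))
```

Alternatively, as noted above, you can create records with /v2/upsertPatternOpt or /v2/upsertOutlierOpt first and reference the returned IDs in PUT /v3/datasetdefs updates.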

Fixes

Jobs

  • You can again preview columns on SAP HANA tables with periods (.) in their names.
  • When you run DQ Jobs from the command line, the command -srcinferschemaoff, which prevents Collibra Data Quality & Observability from inferring the schema in the validate source activity, now works as expected when files are in the source.
  • When DB_VIEWS_ON is set to TRUE on the Application Configuration page of the Admin Console, the Include Views option is now also applied when using the Mapping step for a Database connection.

Findings

  • The data preview of DQ Jobs created on file data sources that use pipe delimiters (|) now displays correctly on the Profile and Shapes tab of the Findings page.
  • DQ Jobs with the validate source layer enabled no longer return an error when you open their Findings pages.
  • When you edit the scheduled run time of a DQ Job from the Findings page, the updated scheduled time now saves correctly.
  • We removed the sort icon for the Breaking Records and Passing Records columns on the Rules tab of the Findings page, because it was not functional.
  • When you click the Run ID link of a dataset on the Assignments Queue, the Findings page now opens to the finding type associated with the Run ID. For example, a Rule finding links to the Rules tab on the Findings page.

Profile

  • When you change an Adaptive Rule from automatic to manual scoring on the Profile page, the points value now saves correctly.
  • When you manually set the lower and upper bounds of an Adaptive Rule, you can now use decimal values, such as 10.8.
  • When you use the search function within the Profile page to search for a partial string of a column name, such as "an" to return results for the columns "exchange" and "company," the correct number of results is now returned. Previously, a maximum of 10 results was returned.
  • When a column contains special characters, the column stats section on the Profile page now formats the results correctly, ensuring better readability.

Rules

  • When you use the Archive Break Records feature and the link ID column is an alias column, Collibra Data Quality & Observability now properly identifies the link ID column to ensure the rule processes correctly.
  • Complex DQ Job query joins containing SAP HANA reserved words in SQL queries are now supported. This ensures that the query compilation processes successfully. The following SAP HANA reserved words are supported in SQL queries:
    • "union", "all", "case", "when", "then", "else", "end", "in", "current_date", "current_timestamp", "current_time", "weekday", "add_days", "add_months", "add_years", "days_between", "to_timestamp", "quarter", "year", "month", "to_char"
  • User-defined rules against DQ Jobs on Snowflake Pushdown connections no longer return a syntax error when "@" is present anywhere in the query.
  • A rule on a DQ Job with Archive Breaking Records enabled now processes successfully without returning an exception message, even when it does not reference all link IDs in its SELECT statement.
  • We temporarily removed the ability to view rule results in fullscreen on the Findings page to address an issue where rule results were missing in fullscreen. The fullscreen view will be re-added in the upcoming Collibra Data Quality & Observability 2025.03 release.
  • Int Check rules using string column types no longer flag valid integers as breaking.

    Note When displaying preview break records, if a column in the dataset has an inferred data type that differs from the defined type, the data type check is performed based on the defined type rather than the inferred type.

  • Break records are no longer shown for rules with an exception status. Additionally, an error message now provides details about the exception.

Dataset Manager

  • When you rename a dataset from the Dataset Manager, you can now use a name that exists in a different tenant but not in your current one.
  • Datasets with spaces in their names no longer lose their rules when you rename them from the Dataset Manager.

Reports

  • The time zone configurations of certain Collibra Data Quality & Observability environments no longer prevent you from properly viewing the Dataset Findings Report.
  • When you use the search filter on the DQ Check Summary Report, you no longer encounter an error when you reach records for Check Type - Rule.

Connections

  • The CDATA driver is now supported for Athena Pushdown connections.

Admin Console

  • The Security Audit Trail now correctly shows the username of the user who deleted a dataset.

APIs

  • The /v3/jobs/<jobId>/breaks/rules endpoint no longer returns a 500 error when using a valid jobId. Instead, it now returns empty files when no results are found for exports without findings.
  • The /v3/jobs/{jobID}/findings and /v3/jobs/{dataset}/{runDate}/findings endpoints now return the correct value for the passFail parameter.

Integration

  • When you integrate a dataset into Collibra Platform, then add extra columns to it and re-run the integration, the exported JSON file now correctly shows the additional columns.
  • The score tile on the Quality tab of Data Quality Rule and Data Quality Job assets now works as expected.
  • Dates on the Overview - History table and chart on the Quality tab are now arranged in chronological order.
  • The tooltip for Last Run Status on the Summary tab of Data Quality Job assets now contains the correct message.

Beta features

Rules

  • When you edit an existing rule from the Rules tab of the Findings page, you can now edit the tolerance value.
  • Rule Tolerance is now available for Pushdown connections.
  • Rule filter queries applied to rules on Pullup DQ Jobs now support secondary datasets. Filter queries on @t1 rules are not supported at this time.