Release notes

Important 
  • Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
  • Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.

Release 2025.09

Release Information

  • Expected release date of Data Quality & Observability Classic 2025.09: September 29, 2025
  • Release notes publication date: September 2, 2025

New and improved

Platform

  • You can now configure Secret Key-based encryption in Data Quality & Observability Classic environments. This allows you to use an existing JKS file or override it with your own file for greater control over key size, algorithms, and encryption methods.
  • When you delete a tenant from the metastore, the schema and all its tables are also deleted.

Jobs

  • BigQuery Pushdown jobs now support column names with spaces.

Findings

  • We removed the "State" column from the Rules tab on the Findings page to improve readability and streamline the layout.

Rules

  • The column order in the Preview Breaks dialog box now matches the column order on the Rule Query page.
  • The values for rules, data preview, and rule preview now show full numeric values instead of values in exponential (scientific) notation.
  • The Lower Bound and Upper Bound input fields for adaptive rules now replace commas with periods, ensuring that values such as 10,23 are interpreted and displayed as 10.23. This prevents locale-specific misinterpretations.

Dataset Overview

  • You can now analyze tables in schemas with mixed or lowercase Snowflake schema names without editing the dataset query. This applies to both Pushdown and Pullup, making the analysis process more seamless.

Dataset Manager

  • You can now use the updated APIs to manage the new "Schema Table Mapping" field available in the Dataset Edit dialog box in the Dataset Manager. Use the "/v2/updatecatalogobj" API to update this field and the "/v2/getdatasetschematablemapping" API to retrieve the Schema Table Mapping JSON string; a usage sketch follows this list. Existing authorization rules now apply to schema mapping API updates, ensuring consistent security. Additionally, the scope query parser now supports multiple schemas and tables, and cardinality has been enhanced to allow multiple tables in a single job. This advanced SQL parsing is executed for each Data Quality & Observability Classic job when the new field, schemaTableNames, is empty or null.
  • You can now see a new read-only "Dataset Query" field in the Dataset Edit dialog box. This field shows the variables used in the query, making it easier to review the dataset's configuration.
  • When you edit a dataset in Dataset Manager, the "Schema Table Mapping" field is now automatically updated during the next job execution if it is empty or blank. The new parsing algorithm uses the scope query schema and table discovery to populate this field, ensuring more accurate and complete dataset information.
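The following is a minimal sketch of calling these endpoints with Python's requests library. Only the endpoint paths come from the notes above; the host name, bearer-token authentication, and the "dataset" and payload parameter names are illustrative assumptions, so check the API documentation for your environment before using them.

    # Minimal sketch of the Schema Table Mapping APIs described above.
    # The base URL, auth header, and parameter/payload names are assumptions.
    import json
    import requests

    BASE_URL = "https://dq.example.com"            # assumed host
    HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

    def get_schema_table_mapping(dataset: str) -> dict:
        # Retrieve the Schema Table Mapping JSON string for a dataset.
        resp = requests.get(
            f"{BASE_URL}/v2/getdatasetschematablemapping",
            params={"dataset": dataset},           # parameter name is assumed
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def update_schema_table_mapping(dataset: str, mapping: dict) -> None:
        # Update the Schema Table Mapping field on the dataset catalog object.
        payload = {"dataset": dataset, "schemaTableNames": json.dumps(mapping)}  # assumed shape
        resp = requests.post(
            f"{BASE_URL}/v2/updatecatalogobj",
            json=payload,
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()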

Collibra Platform integration

  • You can now associate a single data asset with multiple tables. The cardinality of the "Data Asset Represent Table" relation type has been updated from one-to-one to one-to-many, allowing for greater flexibility in managing data asset relationships.

Fixes

Platform

  • On the Quality tab of Collibra Platform, the ring chart color now matches the corresponding ring chart in the At a Glance sidebar.

Jobs

  • Pushdown jobs no longer fail with a "ConcurrentDeleteReadException" message caused by concurrent delete and read operations in a Delta Lake environment.
  • Potential errors in the estimation step caused by complex source queries are now resolved.

Rules

  • You can now preview the results of your rule on jobs from SQL Server connections using Livy without the application becoming unresponsive.
  • When a SQL rule contains multiple column names and the MAP_TO_PRIMARY_COLUMN option is enabled during integration setup, only the primary column name is now assigned to the data quality rule asset.
  • The "Actions" drop-down list on the Templates page no longer forces a selection.
  • En dash (–) comments in the SQL source query no longer cause the rule to throw an exception.

Dataset Manager

  • You can now find renamed datasets in the Dataset Manager.

Collibra Platform integration

  • The lowest possible passing fraction is now 0 when a dataset from Data Quality & Observability Classic is integrated with Collibra Platform, even if the total number of outlier findings would otherwise result in a negative passing fraction value.
  • You no longer receive an unexpected error stating “unable to fetch DGC schemas” when mapping connections from Data Quality & Observability Classic to databases in Collibra Platform.
  • The dgc_dq_mapping table now includes an alias for column names, ensuring the correct column relation is reflected in the data quality rule.

Release 2025.08

Release Information

  • Release date of Data Quality & Observability Classic 2025.08: September 2, 2025
  • Release notes publication date: August 6, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective this release (2025.08).

In this release (2025.08), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.6. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.6.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.08, you must upgrade to Java 17 and Spark 3.5.6.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.08 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Java and Spark availability by Data Quality & Observability Classic version:
  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Releases 2025.02 and later support Google Cloud Dataproc for Spark 3.5.1. Dataproc requires Java 17 and currently does not support SQL Server pullup datasources.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only). Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04; they do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains the feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.6 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

Warning To continue using SQL Assistant for Data Quality, you must upgrade to Data Quality & Observability Classic version 2025.08 by September 24, 2025. This version uses the gemini-2.5-pro AI model. After this date, SQL Assistant for Data Quality will stop working unless you upgrade, because Google will retire the gemini-1.5-pro-002 model used in earlier versions of Data Quality & Observability Classic.

  • You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple or strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks if the query contains any keywords other than SELECT, while strict performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings.
  • When you upload a file to Data Quality & Observability Classic, the application now validates the content type to ensure it matches the file extension. Additionally, when uploading drivers on the Connections page, only JAR files with a content type of "application/java-archive" or "application/zip" are supported.
  • Data Quality & Observability Classic now runs on Spark 3.5.6. If you use a Spark Standalone or EMR deployment of Data Quality & Observability Classic and experience issues upgrading to 2025.08, we recommend upgrading your Spark version to 3.5.6. No action is required if you don't encounter any issues.

Jobs

  • Trino Pushdown connections now support profiling and custom rules on columns of the ROW data type.
  • You can now set the "Preview Limit" option to a value of 0 or higher when creating or editing a rule.
  • Null values are no longer counted as unique values in Pullup mode. However, you can include null values in the unique value count on the Profile page by setting the command line option "profileIncludeNulls" to "true."

Findings

  • When you export source-to-target findings, the export file now includes the column schemas and cell values.
  • On the Findings page, float values with commas are now shown with up to 4 decimal places and without trailing zeroes. For example, 3722.25123455677800000000000 is now shown as 3722.2512. When you hover your pointer over the value, a tooltip shows the full value.
  • You can now download a CSV of break records directly from the Preview Breaks dialog box using the Download Results button. (idea #PE-I-3723, CDQ-I-386)

Alerts

  • You can now create alerts for adaptive rules from Pushdown jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)

Fixes

Platform

  • Updating an asset's name no longer breaks the Quality tab, as related assets are connected using the asset's full name.

Jobs

  • Columns from Trino connections are now accurately identified using their JDBC drivers and metadata instead of their catalog name, ensuring they load correctly.
  • The Partition Column field now shows the correct column in the Job Creator metadata box of the scanning method step.
  • An “invalid table name” error no longer occurs during job size estimation for data sources that previously caused errors, including Denodo, SQL Server, and Oracle.

Rules

  • When you use pretty print for a rule, it no longer introduces additional characters or alters values.

Findings

  • You no longer receive an "Access Denied" message when editing the Findings page for local files.

Profile

  • You can now compare the Baseline and Run values for the "Filled rate" when the Quality switch is enabled for a column on the Profile report.

Reports

  • You can no longer filter by date in the Alert Details report. This ensures that the charts and counts accurately reflect the applied filter criteria.

Collibra Platform integration

  • The Quality tab on asset pages in Collibra Platform no longer includes the passing fraction attribute values of suppressed rules from Data Quality & Observability Classic in the overview and dimension aggregation scoring.
  • To ensure the correct column relation is reflected in the data quality rule, the "dgc_dq_mapping" table now includes an alias for column names.

Release 2025.07

Release Information

  • Release date of Data Quality & Observability Classic 2025.07: August 4, 2025
  • Release notes publication date: July 8, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.07), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.07, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.07 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Java and Spark availability by Data Quality & Observability Classic version:
  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Releases 2025.02 and later support Google Cloud Dataproc for Spark 3.5.1. Dataproc requires Java 17 and currently does not support SQL Server pullup datasources.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only). Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04; they do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains the feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.6 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

  • You can now use the dataset security setting "Require TEMPLATE_RULES role to add, edit, or delete a rule template" and assign users the "ROLE_TEMPLATE_RULES" role to restrict updates to template rules. (idea #CDQ-I-169)
    • When "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is enabled, users without the "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES" roles can view the list of template rules but cannot add, edit, or delete them.
    • If "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is disabled, or if it is enabled and the user has "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES," they can view, add, edit, and delete rule templates.
  • When creating a job, the “Visualize” function in Explorer is now inaccessible if the dataset security setting "Require DATA_PREVIEW role to see source data" is enabled and the user doesn't have the "ROLE_DATA_PREVIEW" role.
  • You can now use SAML SSO to sign in from the Tenant Manager page. Additionally, the AD Security Settings page from the Admin Console is now available in the Tenant Manager, allowing you to map user roles to external groups.
  • The application now has improved security when parsing SQL queries in the Dataset Overview and the "SQL View" step in Explorer. As a result, new or edited datasets with complex queries may not pass validation during parsing if they include nested subqueries, non-standard syntax, or ambiguous joins that are not supported by the parser's grammar.

Jobs

  • You can now scan for duplicate records in Oracle Pushdown jobs.
  • When you transform a column from Explorer, it is now available in the “Target” drop-down list during the Mapping step.
  • When a job fails, the metadata bar remains visible on the Findings page.

Rules

  • The “Edit Rule” dialog box now includes Description and Purpose fields. These fields allow you to add details about Collibra Platform assets associated with your rule and identify which assets should relate to your Data Quality Rule asset upon integration. (idea #CDQ-I-219)
  • The Rule Workbench now supports the substitution of both string and numeric data types for $min, $average, and $max stat rule values.
  • The Results Preview on the Rule Workbench now shows columns in the same order as they are listed in the rule query. For example, the query select SYMBOL, VOLUME, LOW, HIGH, CLOSE, OPEN from PUBLIC.NYSE where SYMBOL IS NOT NULL LIMIT 3 displays the columns in the following order in the Results Preview:
    • SYMBOL
    • VOLUME
    • LOW
    • HIGH
    • CLOSE
    • OPEN

Alerts

  • You can now create alerts for adaptive rules from Pullup jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)
  • Condition alerts now include a "Condition Value" field to provide more context about the reason for the alert. For example, the Condition Value for a row count check might be rc = 0, while for an outlier score, it could be outlierscore > 25. (idea #CDQ-I-317)
  • When you create a new alert with a name already in use by another alert on the same dataset, the existing alert is no longer automatically replaced. Instead, a confirmation dialog box now appears, allowing you to choose whether to overwrite the existing alert.

Profile

  • The correlation heatmap in the Correlation tab of the Profile page now shows a maximum of 10 columns, regardless of the number of columns returned in the API response.
  • The “Quality” column on the Profile tab is now renamed to “Completeness/Consistency.”

Collibra Platform integration

  • When you rename a job from Data Quality & Observability Classic that is integrated with Collibra Platform, the corresponding domains in Collibra Platform are now renamed to match the new job name.
  • When you add, update, or remove meta tags from a job in Data Quality & Observability Classic with an active integration to Collibra Platform, the import JSON and the corresponding Data Quality Job in Collibra Platform are now updated. The updated JSON can include up to 4 meta tags.

APIs

  • The GET /v3/datasetDefs API call now returns meta tags in the array response in the same order they are set in the Dataset Manager. For example, if the first meta tag input field in the Dataset Manager contains "DATA," the second and third fields are empty, and the fourth contains "QUALITY," the array response will return:
    "metaTags": [
      "DATA",
      null,
      null,
      "QUALITY"
    ]
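
As an illustration only, here is a minimal sketch of reading that ordered array with Python's requests library; the host name, authentication, response shape, and the "dataset" query parameter are assumptions, so verify them against the API documentation for your environment.

    # Minimal sketch: read the ordered metaTags array from GET /v3/datasetDefs.
    # The base URL, auth header, "dataset" parameter, and response shape are assumptions.
    import requests

    resp = requests.get(
        "https://dq.example.com/v3/datasetDefs",      # assumed host
        params={"dataset": "PUBLIC.NYSE"},            # assumed parameter and example dataset
        headers={"Authorization": "Bearer <token>"},  # assumed auth scheme
        timeout=30,
    )
    resp.raise_for_status()
    dataset_def = resp.json()

    # Empty Dataset Manager meta tag fields come back as null (None in Python),
    # and the array preserves the field order shown above.
    print(dataset_def.get("metaTags"))  # e.g. ['DATA', None, None, 'QUALITY']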

Admin Console

  • You can now control the maximum allowed temporary file size using the new "maxfileuploadsize" admin limit. The default value is 15 MB.
  • When you enable the "RULE_FILTER" setting in the Application Configuration Settings page, create a rule with a filter, and then disable the "RULE_FILTER" setting before rerunning the job, the results on the Findings page still reflect the rule using the filter.
  • When you delete a role, any datasets mapped to it are now removed. Previously, the relations between the role, datasets, and users were still shown in the UI after deleting the role.
  • Orphaned dataset-run_id pairs are now removed from the “dataset_scan” table when you use a time-based data retention purge.

Fixes

Platform

  • You no longer encounter unexpected errors with SAML/SSO, rules, and Pushdown after upgrading to some Spark standalone self-hosted instances of Data Quality & Observability Classic.
  • Group-based role assignments for Azure SSO now work correctly after upgrading Data Quality & Observability Classic. Users no longer receive only the default "ROLE_PUBLIC," ensuring proper access permissions.
  • Users with the "ROLE_PUBLIC" and "ROLE_DATA_PREVIEW" roles can no longer create and save rules, ensuring role restrictions are enforced as intended.

Connections

  • You can now run jobs on Teradata connections without requiring the DBC.RoleMembrs.RoleName permission.

Jobs

  • Alias names now appear for date fields during row selection for a new job.
  • You no longer see the "Pushdown Count" option on the Mapping Configuration page. This was removed for BigQuery because it is not supported by Google.

Rules

  • Rule findings using a rule filter for Pushdown jobs now show the correct subset of rows applied for the rule, instead of the total rows at the dataset level.

Findings

  • Boolean attributes now show their values correctly in the Result Preview on the Rule Workbench screen.
  • The handling of “shape/enabled” and “deprecated profile/shape” JSON flags in dataset definitions and job runs is now consistent.
  • When a column contains multiple shape findings, the data preview is now shown as expected when you expand a shape finding.

Profile

  • Profiling activities now correctly support profile pushdown on BigQuery, ensuring accurate and efficient data profiling.

Reports

  • Encrypted columns now appear correctly on the Dataset Findings report.

APIs

  • The /v3/rules/{dataset}/{ruleName}/{runId}/breaks API now returns a “204 No Content” HTTP status code for Pullup jobs when a rule has no break records linked to a specific LinkID. A minimal handling sketch follows.
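
The sketch below assumes bearer-token authentication, an example host, and illustrative dataset, rule, and run ID values; only the path segments come from the endpoint above.

    # Minimal sketch: treat "204 No Content" as "no break records for this LinkID".
    # The host, auth scheme, and example values are assumptions.
    import requests

    dataset, rule_name, run_id = "PUBLIC.NYSE", "test_rule", "2025-07-01"  # illustrative values
    resp = requests.get(
        f"https://dq.example.com/v3/rules/{dataset}/{rule_name}/{run_id}/breaks",
        headers={"Authorization": "Bearer <token>"},  # assumed auth scheme
        timeout=30,
    )

    if resp.status_code == 204:
        breaks = []            # rule ran, but no break records are linked to the LinkID
    elif resp.ok:
        breaks = resp.json()   # break records returned as JSON
    else:
        resp.raise_for_status()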

Patch release

2025.07.1

  • You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple or strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks if the query contains any keywords other than SELECT, while strict performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings.

Release 2025.06

Release Information

  • Release date of Data Quality & Observability Classic 2025.06: June 30, 2025
  • Release notes publication date: June 3, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.06), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.06, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.06 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

In the 2025.06 and 2025.07 releases, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Java and Spark availability by Data Quality & Observability Classic version:
  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Releases 2025.02 and later support Google Cloud Dataproc for Spark 3.5.1. Dataproc requires Java 17 and currently does not support SQL Server pullup datasources.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only). Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04; they do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains the feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.6 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

  • This application, which was previously known as Collibra Data Quality & Observability, is now called Data Quality & Observability Classic. This change reflects the introduction of the new Data Quality & Observability as part of the unified Collibra Platform. For more information about Data Quality & Observability, go to About Data Quality & Observability.

Jobs

  • When archiving break records for Pushdown jobs, link IDs are now optional. This change provides greater flexibility in storing and accessing break records and offers a more scalable solution for remediating records outside of Data Quality & Observability Classic. (idea # CDQ-I-155, DCC-I-2566, DCC-I-5726, CDQ-I-90, CDQ-I-137)
    • If Data Quality & Observability Classic does not have permission to write to tables in your source database, you must run an ALTER statement on the COLLIBRA_DQ_RULES table to add the new results column for JSON data, regardless of whether link IDs are selected for the Pushdown job. Data Quality & Observability Classic attempts this alteration automatically; without write permission, the alteration fails and the rule fails when it tries to write to a column that doesn't exist in your source database. For more information, go to ALTER statement requirements for optional link IDs.
    • Important If you don’t use a link ID, the results column in the COLLIBRA_DQ_RULES table will become exponentially larger, leading to unintended storage and compute costs in your source database. To minimize these costs, you can use a link ID or move these results out of your source database and into a remote file storage system, such as Amazon S3.

  • Data Quality & Observability Classic now checks that a link ID is set in the settings dialog on Explorer when creating a DQ job, to ensure the archive break records feature works when enabled. If one or more link ID columns are selected, a checkbox and drop-down list appear next to the Archive Breaking Records option in the settings dialog. If no link ID columns are selected, the checkbox and drop-down list are unavailable, and a tooltip appears when you hover over the option. (idea #DCC-I-2746)
  • Oracle Pushdown is now generally available.
  • Oracle Pushdown connections do not support the following data types:
    • LONG RAW
    • NCHAR
    • NVARCHAR2
    • ROWID
    • BYTES
  • Trino Pushdown jobs now support source-to-target analysis for datasets within the same Trino Pushdown connection.
  • SAP HANA, Snowflake, SQL Server, and Trino Pushdown connections do not support the VARBINARY data type.
  • The valsrclimitui admin limit, which controls the number of Validate Source records shown in the UI, now works as expected. For example, if valsrclimitui is set to 3, the UI shows 3 records from the Column Schema and 3 records from the Cell Panels.
  • Note The fpglimitui, histlimitui, and categoricallimitui admin limits do not currently work as expected. Additionally, any admin limit ending in “ui”, such as valsrclimitui, is intended only for updates made via the Admin Limits page of the Admin Console. Changes made to these admin limits via the command line are not reflected.

Rules

  • Rules that use $colNm in the query, such as data class, template, and data type checks, no longer perform automatic rule validation.
  • The Rule Workbench now has a warning message when the preview limit exceeds the recommended maximum of 500 records.
  • The maximum length of the name of a rule is now 250 characters.
  • The Preview Breaks modal now indicates the name of the rule.
  • The Low, Medium, and High buttons now correctly reflect your selection when you reopen the Rule Details dialog box after automatically changing a rule's points. (idea #CDQ-I-150)
    • Additionally, if you manually update the score, the button automatically adjusts to match the corresponding scoring range:
      • Low: 1-4 points
      • Medium: 5-24
      • High: 25-100

Admin Console

  • Business unit names on the Business Unit page in the Admin Console cannot contain the special characters / (forward slash), `(backtick), | (pipe), or $ (dollar sign). If you use any of these special characters, an error message appears, and the Edit Business Unit dialog box remains open until the special characters are removed.
  • Data retention policies now apply to the following metastore tables:
    • alert_output
    • behavior
    • dataset_activity
    • dataset_schema_source
    • hint
    • validate_source
  • The Title and Story fields of the User Story section of the User Profile page cannot contain the following special characters:
    • ‘ (single quote)
    • “ (double quote)
    • -- (double dash)
    • < (less than)
    • > (greater than)
    • & (ampersand)
    • / (forward slash)
    • ` (backtick)
    • | (pipe)
    • $ (dollar sign)
    • If you use any of these special characters, an error message appears, and the Edit User Story dialog box remains open until the special characters are removed.

Fixes

Rules

  • You can now apply rule filter queries to rules created using rule templates on Pushdown datasets.
  • Duplicated data quality rules no longer appear after you import data assets using CMA.
  • Rules now run without errors when you use a single link ID to archive break records with SQL Server Pushdown.

Profile

  • Divide-by-zero errors and issues with empty histograms are now resolved. Calculations and visualizations now work as expected without interruptions.

Findings

  • You can now export rule break records with accurate counts. The total number of rule break records for a rule now matches the number in the Breaking Records column of the exported file.

Integration

  • When you rename a rule in Data Quality & Observability Classic that is part of a dataset integrated with Collibra Platform, additional Data Quality Rule assets are no longer created during the next integration. The existing Data Quality Rule asset is also no longer unintentionally set to a “suppressed” state during the next integration, and its name is updated to match the name in Data Quality & Observability Classic.

Release 2025.05

Release Information

  • Release date of Data Quality & Observability Classic 2025.05.1: June 16, 2025
  • Release date of Data Quality & Observability Classic 2025.05: June 2, 2025
  • Release notes publication date: May 6, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.05), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.05, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.05 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Java and Spark availability by Data Quality & Observability Classic version:
  • 2025.01 and earlier: Java 8: Yes; Java 11: Yes; Java 17: No. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only).
  • 2025.02: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Releases 2025.02 and later support Google Cloud Dataproc for Spark 3.5.1. Dataproc requires Java 17 and currently does not support SQL Server pullup datasources.
  • 2025.03: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only.
  • 2025.04: Java 8: Yes; Java 11: Yes; Java 17: Yes. Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only). Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04; they do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains the feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.3 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08: Java 8: No; Java 11: No; Java 17: Yes. Spark versions: 3.5.6 only. Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Warning We identified a known security vulnerability (CVE-2025-48734) in version 2025.05. We addressed this issue in the 2025.05.1 patch.

Platform

  • You can now authenticate users via Microsoft Azure Active Directory B2C when signing into Data Quality & Observability Classic.
  • Only users with the ROLE_ADMIN role can now see the username and connection string of JDBC connections on the Connections page and in API responses, improving connection security.
  • Metastore credentials are no longer stored in plain text when you create a DQ Job via a notebook. This improvement increases the security of your credentials.
  • We enhanced the security of Data Category validation.
  • We improved the security of our application.

Jobs

  • Trino Pushdown jobs now support validate source.
  • The Run Job buttons now have two enhancements:
    • The Run Job with Date button on the Jobs tab of the Findings page is now labeled Select Run Date.
    • The Run Job button on the metadata bar now includes a helpful tooltip on hover.
  • SAP HANA connections now support the operators =>, ||, and &&, the reserved word ‘LIKE_REGEXPR’, and the REPLACE function in scoped queries.
  • The assignments queue is now disabled for the validate source activity on new installations of Data Quality & Observability Classic. To enable it, an admin can set valsrcdisableaq to "false" on the Admin Limits page.
  • Warning When you set valsrcdisableaq to "true," the ability to assign findings is disabled for all finding types, such as rules, outliers, and so on. This issue is resolved in version 2025.05.1.

Rules

  • When Archive Break Records is disabled, you can now use NULL link IDs with Athena, BigQuery, Hive, Redshift, and SQL Server connections.
  • When Archive Break Records is enabled and a rule doesn't specify a link ID column but the job does, Data Quality & Observability Classic no longer appends the link ID to the rule, so break records for that rule are no longer archived in this scenario. The job still executes successfully, but the rule produces no archived break records, and an exception message explains that the link ID is not part of the rule.
  • Trino Pushdown jobs now support profiling and custom rules on columns of the ARRAY data type.
  • The Rule Details dialog on the Rule Workbench page now has an optional Purpose field. This allows you to add details about Collibra Platform assets associated with your rule and see which assets should relate to your Data Quality Rule asset upon integration.

Alerts

  • You can now set up Data Quality & Observability Classic to send data quality alerts to one or more webhooks, eliminating the dependency on SMTP for email alerts. A minimal receiver sketch follows this list.
  • You can now rename alerts on the Dataset Alerts page. When you rename an alert, the updated name is reflected in the alert_nm column of the alert_cond Metastore table. The updated name applies only to alerts generated in future job runs and does not affect historical data.
  • In addition to the existing rule name condition variable for percentages, you can now include a rule name with a value identifier, such as test_rule.score > 0, to create condition alerts based on rule scores.
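
For teams wiring up the webhook option mentioned above, here is a minimal receiver sketch using only the Python standard library; the alert payload structure and any signature verification depend on your alert configuration and are not assumed here.

    # Minimal sketch of an HTTP endpoint that accepts webhook alert deliveries.
    # The payload shape is not assumed; the handler just parses and logs the body.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AlertWebhookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            try:
                alert = json.loads(body)  # payload structure depends on your alert setup
            except json.JSONDecodeError:
                alert = {"raw": body.decode("utf-8", errors="replace")}
            print("Received data quality alert:", alert)
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), AlertWebhookHandler).serve_forever()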

Profile

  • When you retrain a breaking adaptive rule, negative values are now supported for lower and upper bounds, min, max, and mean.

Findings

  • To enhance usability, we moved the data preview from the Rules results table into a dedicated modal, making it easier to navigate and view data.
    • The Actions button is now always visible on the right side of the Rules tab, and includes the following options:
      • Archived Breaks, previously called Rule Breaks, retains the same content and functionality.
      • Preview Breaks, previously embedded under the Plus icon in the first column, shows all breaking records for a given rule. This option provides the same information that was previously available in the Rules tab and still requires the ROLE_DATASET_ACCESS role. Additionally, the Rule Break Preview modal reflects the preview limit of the rule.
    • Tooltips were added to various components on the Rules tab and Rule Break Preview modal.

Integration

  • You can now use the Map to primary column only switch in the Connections step of the integration setup wizard. This allows you to map a rule to only the Column asset in Collibra Platform that corresponds to the primary column in Data Quality & Observability Classic.
  • When you rename a custom rule in Data Quality & Observability Classic with an active integration with Collibra Platform, a new asset is no longer created in Collibra Platform. Instead, renamed rules reuse the Collibra Platform asset IDs associated with the previous rule name when their metadata is integrated.

APIs

  • The /v3/datasetDefs/template API endpoint now returns the nested key-value pairs for shape settings and job schedule in the datasetDefs JSON object.

Admin Console

  • You can now use the previewlimit setting on the Admin Limits page to define the maximum number of preview results shown in the Dataset Overview and Rule Workbench. The default value is 250.
  • Warning Large previewlimit values can negatively impact performance.

Fixes

Platform

  • All datasets are now available when you run a dataset rule fetch with dataset security disabled for the admin user account.
  • FIPS-enabled standalone installations of Data Quality & Observability Classic now support SAML authentication.

Jobs

  • When you remove a table from a manually triggered or scheduled job, you now receive a more descriptive error stating that the table does not exist instead of a generic syntax error.
  • You can now use Amazon S3 as a secondary dataset name for a rule query.
  • You can now rerun a job for a selected date using the Run DQ Job button on the metadata bar of the Findings page.
  • The command line now retains the created query, including the ${rd} parameter, when you run a job using the Run DQ Job button on the metadata bar.
  • The controller response for queries to the alert_cond table in the Metastore now maps internal objects correctly.
  • The Profile page now correctly shows the top 5 TopN and BottomN shape results for Pullup jobs.
  • When you add a transform to a job with histogram enabled or set to auto, the job processes as expected, and aliased columns display correctly on the Profile page.

Rules

  • You can again save rules on Pushdown jobs that exceed ⅓ of a buffer page.
  • Pushdown rules that use stat variables now run with the correct information from the current job, rather than using data from previous job runs.
  • The DOUBLECHECK rule on PostgreSQL and Redshift Pullup jobs now flags rows with negative double values as valid.
  • The Result Preview in the Rule Workbench no longer produces an error when you use out-of-the-box templates.

Findings

  • Pushdown jobs run on Snowflake connections now show as failed on the Findings page if a password retrieval issue occurs.

Dataset Manager

  • When you apply an option from the Actions drop-down list, such as delete a dataset, the correct dataset now has the action applied to it.

Admin Console

  • When you select a submenu option from the Admin Console menu, the submenu section now remains open.

Patch release

2025.05.1

  • Columns from Db2, Oracle, SQL Server, Sybase, and Teradata connections now load correctly in Explorer for users without the "ROLE_ADMIN" or "ROLE_CONNECTION_MANAGER" roles.
  • The ability to assign findings for all finding types, such as rules, outliers, and so on, is no longer disabled when the "valsrcdisableaq" limit is set to "true" on the Admin Limits page.
  • We resolved a security vulnerability related to CVE-2025-48734.