Release notes

Important 
  • Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
  • Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.

Release 2025.10

Release Information

  • Expected release date of Data Quality & Observability Classic 2025.10: October 27, 2025
  • Release notes publication date: September 22, 2025

New and improved

Platform

  • When SAML is enabled, users with an active SSO session are now logged in automatically using the "Remember Me" option. Users without an active SSO session are redirected to their SSO provider page.
  • Tenants on the Login page are now listed in alphabetical order.

Jobs

  • A summary query is now generated for Pushdown jobs when archive break records are turned off and no link IDs are set. This helps limit the break records requested for the metastore data preview and prevents out-of-memory issues.
  • To ensure consistent validation behavior for job names between Dataset Manager and Explorer, special characters are no longer allowed when editing job names in Dataset Manager.
  • White spaces and parentheses are now handled correctly when creating a BigQuery job. You no longer need to use pretty print to resolve syntax errors.

Findings

  • The Rules tab on the Findings page now includes a search box to find matching rule names. This allows you to quickly locate specific rules, improving navigation and efficiency.

Fixes

Platform

  • The dq-web-configmap.yaml file no longer contains the incorrect parameter for the “LOCAL_REGISTRATION_ENABLED” setting.

Jobs

  • When an Amazon S3 connection is inactive, the error message "Unauthorized Access: Credentials are missing or have expired" now appears. This message also appears when mapping from JDBC to remote files, helping you identify credential issues more easily.
  • When you re-run a job after invalidating fuzzy match duplicates, saving, and retraining on the Findings page, the job no longer contains the same invalidated fuzzy match duplicates.
  • The metadata bar is now visible when you create a job from a Databricks connection. This ensures a consistent user experience and provides access to relevant options during job creation.

Findings

  • The "Failed to update label" error no longer shows when you edit an annotation label.
  • When you configure a job with a Patterns monitor, the results on the Findings page are now shown correctly.

Rules

  • When you use "+Quick Rule" from a Rule Template, the dimension is now added automatically if a dimension was added as part of the template’s settings. This streamlines the rule creation process and ensures the dimension is included without additional steps.

Alerts

  • You must now select a radio button for "Job Alert Status *" when creating a Job Status alert. This ensures that the required selection is enforced properly, preventing incomplete alert configurations.

Collibra Platform integration

  • You no longer receive an error when re-enabling an integration that takes more than 60 seconds.
  • You can no longer enable, disable, or reset a Collibra Platform integration on the Dataset Manager or Findings pages unless you have the ROLE_DATASET_ACTION, ROLE_ADMIN, or ROLE_DATASET_MANAGER role. This ensures that only users with the appropriate permissions can manage Data Quality & Observability Classic integrations with Collibra Platform.
  • On the "Mapping Status" tab of the Integration Setup page, a new partial column table count was added to the Tables Mapped column.

Release 2025.09

Release Information

  • Release date of Data Quality & Observability Classic 2025.09: September 29, 2025
  • Release notes publication date: September 2, 2025

New and improved

Platform

Warning To continue using SQL Assistant for Data Quality, you must upgrade to Data Quality & Observability Classic version 2025.09, which uses the gemini-2.5-pro AI model. Google is retiring the gemini-1.5-pro-002 model used in earlier versions of Data Quality & Observability Classic; once it does, SQL Assistant for Data Quality will stop working unless you upgrade.

  • You can now configure Secret Key-based encryption in Data Quality & Observability Classic environments. This allows you to use an existing JKS file or override it with your own file for greater control over key size, algorithms, and encryption methods.
  • When you delete a tenant from the metastore, the schema and all its tables are also deleted.

Findings

  • We removed the "State" column from the Rules tab on the Findings page to improve readability and streamline the layout.
  • The Lower Bound and Upper Bound input fields for adaptive rules on the Change Detection modal now replace commas with periods, ensuring that values such as 10,23 are interpreted and displayed as 10.23. This prevents locale-specific misinterpretations.

Rules

  • The column order in the Preview Breaks dialog box now matches the column order on the Rule Query page.
  • The values for rules, data preview, and rule preview now show full numeric values instead of exponential values.

Dataset Overview

  • You can now analyze tables in schemas with mixed or lowercase Snowflake schema names without editing the dataset query. This applies to both Pushdown and Pullup, making the analysis process more seamless.

Dataset Manager

  • You can now use the updated APIs to manage the new "Schema Table Mapping" field available in the Dataset Edit dialog box in the Dataset Manager. Use the "/v2/updatecatalogobj" API to update this field and the "/v2/getdatasetschematablemapping" API to retrieve the Schema Table Mapping JSON string; a usage sketch follows this list. Existing authorization rules now apply to schema mapping API updates, ensuring consistent security. Additionally, the scope query parser now supports multiple schemas and tables, and cardinality has been enhanced to allow multiple tables in a single job. This advanced SQL parsing is executed for each Data Quality & Observability Classic job when the new field, schemaTableNames, is empty or null.
  • When you edit a dataset in Dataset Manager, the "Schema Table Mapping" field is now automatically updated during the next job execution if it is empty or blank. The new parsing algorithm uses the scope query schema and table discovery to populate this field, ensuring more accurate and complete dataset information.
  • You can now see a new read-only "Dataset Query" field in the Dataset Edit dialog box. This field shows the variables used in the query, making it easier to review the dataset's configuration.
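
A minimal usage sketch of the two endpoints named above, assuming bearer-token authentication; the host, query parameter name, and payload shape are illustrative assumptions, not documented API details:

    import requests

    BASE_URL = "https://dq.example.com"  # hypothetical host
    HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme assumed

    # Retrieve the Schema Table Mapping JSON string for a dataset.
    resp = requests.get(
        f"{BASE_URL}/v2/getdatasetschematablemapping",
        params={"dataset": "my_dataset"},  # parameter name is an assumption
        headers=HEADERS,
    )
    resp.raise_for_status()
    print(resp.json())

    # Update the "Schema Table Mapping" field; the payload shape is illustrative.
    payload = {
        "dataset": "my_dataset",
        "schemaTableNames": {"PUBLIC": ["NYSE", "TRADES"]},
    }
    resp = requests.post(f"{BASE_URL}/v2/updatecatalogobj", json=payload, headers=HEADERS)
    resp.raise_for_status()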

Collibra Platform integration

  • You can now associate a single data asset with multiple tables. The cardinality of the "Data Asset Represent Table" relation type has been updated from one-to-one to one-to-many, allowing for greater flexibility in managing data asset relationships.

Fixes

Platform

  • On the Quality tab of Collibra Platform, the ring chart color now matches the corresponding ring chart in the At a Glance sidebar.

Jobs

  • Pushdown jobs no longer fail with a "ConcurrentDeleteReadException" message caused by concurrent delete and read operations in a Delta Lake environment.
  • Potential errors in the estimation step caused by complex source queries are now resolved.

Rules

  • You can now preview the results of your rule on jobs from SQL Server connections using Livy without the application becoming unresponsive.
  • When a SQL rule contains multiple column names and the MAP_TO_PRIMARY_COLUMN option is enabled during integration setup, only the primary column name is now assigned to the data quality rule asset.
  • The "Actions" drop-down list on the Templates page no longer forces a selection.
  • En dash (–) comments in the SQL source query no longer cause the rule to throw an exception.

Dataset Manager

  • You can now find renamed datasets in the Dataset Manager.

Collibra Platform integration

  • The passing fraction is now floored at 0 when a dataset from Data Quality & Observability Classic is integrated with Collibra Platform, even if the total number of outlier findings would otherwise result in a negative passing fraction value. A sketch of the floor follows this list.
  • You no longer receive an unexpected error stating “unable to fetch DGC schemas” when mapping connections from Data Quality & Observability Classic to databases in Collibra Platform.
  • The dgc_dq_mapping table now includes an alias for column names, ensuring the correct column relation is reflected in the data quality rule.
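
A minimal illustration of the floor; the formula shown is an assumption for demonstration, as the actual passing fraction calculation is not documented here:

    def passing_fraction(total_rows: int, findings: int) -> float:
        # Assumed formula: one row can raise multiple outlier findings, so
        # findings can exceed total_rows and the raw value can go negative.
        raw = (total_rows - findings) / total_rows
        return max(0.0, raw)  # floored at 0 after this fix

    print(passing_fraction(100, 250))  # 0.0 instead of -1.5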

Release 2025.08

Release Information

  • Release date of Data Quality & Observability Classic 2025.08: September 2, 2025
  • Release notes publication date: August 6, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective this release (2025.08).

In this release (2025.08), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.6. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.6.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.08, you must upgrade to Java 17 and Spark 3.5.6.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.08 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
  • 2025.01 and earlier
    Java 8: Yes; Java 11: Yes; Java 17: No
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Releases 2025.02 and later support Google Cloud Dataproc for Spark 3.5.1. Dataproc requires Java 17 and currently does not support SQL Server pullup datasources.
  • 2025.03
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
  • 2025.04
    Java 8: Yes; Java 11: Yes; Java 17: Yes
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.6 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

  • You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple and strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks whether the query contains any keywords other than SELECT, while strict validation performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings. A sketch of the two modes follows this list.
  • When you upload a file to Data Quality & Observability Classic, the application now validates the content type to ensure it matches the file extension. Additionally, when uploading drivers on the Connections page, only JAR files with a content type of "application/java-archive" or "application/zip" are supported.
  • Data Quality & Observability Classic now runs on Spark 3.5.6. If you use a Spark Standalone or EMR deployment of Data Quality & Observability Classic and experience issues upgrading to 2025.08, we recommend upgrading your Spark version to 3.5.6. No action is required if you don't encounter any issues.
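
The decision logic can be pictured with a short sketch; the keyword list and the use of the third-party sqlparse library are illustrative assumptions, not the application's actual implementation:

    import re
    import sqlparse  # third-party parser, used here purely for illustration

    NON_SELECT = re.compile(
        r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|MERGE|TRUNCATE|GRANT)\b",
        re.IGNORECASE,
    )

    def validate(query: str, validate_queries: bool, strict_parsing: bool) -> bool:
        if not validate_queries:
            return True  # both settings disabled: validation is off
        if NON_SELECT.search(query):
            return False  # simple check: a non-SELECT keyword is present
        if not strict_parsing:
            return True
        # Strict mode: full syntax check that this parses as one SELECT.
        statements = sqlparse.parse(query)
        return len(statements) == 1 and statements[0].get_type() == "SELECT"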

Jobs

  • Trino Pushdown connections now support profiling and custom rules on columns of the ROW data type.
  • You can now set the "Preview Limit" option to a value of 0 or higher when creating or editing a rule.
  • Null values are no longer counted as unique values in Pullup mode. However, you can include null values in the unique value count on the Profile page by setting the command line option "profileIncludeNulls" to "true."

Findings

  • When you export source-to-target findings, the export file now includes the column schemas and cell values.
  • Float values with commas now retain up to 4 decimal places, with trailing zeroes removed, on the Findings page. For example, 3722.25123455677800000000000 is now shown as 3722.2512. When you hover your pointer over the value, the full value is shown in a tooltip. A formatting sketch follows this list.
  • You can now download a CSV of break records directly from the Preview Breaks dialog box using the Download Results button. (idea #PE-I-3723, CDQ-I-386)
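
A rough sketch of the display rule, assuming standard rounding rather than truncation:

    def display_float(value: float) -> str:
        # Keep at most 4 decimal places, then strip trailing zeroes and
        # a dangling decimal point.
        return f"{value:.4f}".rstrip("0").rstrip(".")

    print(display_float(3722.251234556778))  # "3722.2512"
    print(display_float(12.5000))            # "12.5"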

Alerts

  • You can now create alerts for adaptive rules from Pushdown jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)

Fixes

Platform

  • Updating an asset's name no longer breaks the Quality tab, as related assets are connected using the asset's full name.

Jobs

  • Columns from Trino connections are now accurately identified using their JDBC drivers and metadata instead of their catalog name, ensuring they load correctly.
  • The Partition Column field now shows the correct column in the Job Creator metadata box of the scanning method step.
  • An “invalid table name” error no longer occurs during job size estimation for data sources that previously caused errors, including Denodo, SQL Server, and Oracle.

Rules

  • When you use pretty print for a rule, it no longer introduces additional characters or alters values.

Findings

  • You no longer receive an "Access Denied" message when editing the Findings page for local files.

Profile

  • You can now compare the Baseline and Run values for the "Filled rate" when the Quality switch is enabled for a column on the Profile report.

Reports

  • You can no longer filter by date in the Alert Details report. This ensures that the charts and counts accurately reflect the applied filter criteria.

Collibra Platform integration

  • The Quality tab on asset pages in Collibra Platform no longer includes the passing fraction attribute values of suppressed rules from Data Quality & Observability Classic in the overview and dimension aggregation scoring.
  • To ensure the correct column relation is reflected in the data quality rule, the "dgc_dq_mapping" table now includes an alias for column names.

Release 2025.07

Release Information

  • Release date of Data Quality & Observability Classic 2025.07: August 4, 2025
  • Release notes publication date: July 8, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.07), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.07, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.07 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
  • 2025.01 and earlier
    Java 8: Yes; Java 11: Yes; Java 17: No
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Releases 2025.02 and later support Google Cloud Dataproc for Spark 3.5.1. Dataproc requires Java 17 and currently does not support SQL Server pullup datasources.
  • 2025.03
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
  • 2025.04
    Java 8: Yes; Java 11: Yes; Java 17: Yes
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.6 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

  • You can now use the dataset security setting "Require TEMPLATE_RULES role to add, edit, or delete a rule template" and assign users the "ROLE_TEMPLATE_RULES" role to restrict updates to template rules. (idea #CDQ-I-169)
    • When "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is enabled, users without the "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES" roles can view the list of template rules but cannot add, edit, or delete them.
    • If "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is disabled, or if it is enabled and the user has "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES," they can view, add, edit, and delete rule templates.
  • When creating a job, the “Visualize” function in Explorer is now inaccessible if the dataset security setting "Require DATA_PREVIEW role to see source data" is enabled and the user doesn't have the "ROLE_DATA_PREVIEW" role.
  • You can now use SAML SSO to sign in from the Tenant Manager page. Additionally, the AD Security Settings page from the Admin Console is now available in the Tenant Manager, allowing you to map user roles to external groups.
  • The application now has improved security when parsing SQL queries in the Dataset Overview and the "SQL View" step in Explorer. As a result, new or edited datasets with complex queries may not pass validation during parsing if they include nested subqueries, non-standard syntax, or ambiguous joins that are not supported by the parser's grammar.

Jobs

  • You can now scan for duplicate records in Oracle Pushdown jobs.
  • When you transform a column from Explorer, it is now available in the “Target” drop-down list during the Mapping step.
  • When a job fails, the metadata bar remains visible on the Findings page.

Rules

  • The “Edit Rule” dialog box now includes Description and Purpose fields. These fields allow you to add details about Collibra Platform assets associated with your rule and identify which assets should relate to your Data Quality Rule asset upon integration. (idea #CDQ-I-219)
  • The Rule Workbench now supports the substitution of both string and numeric data types for $min, $average, and $max stat rule values.
  • The Results Preview on the Rule Workbench now shows columns in the same order as they are listed in the rule query. For example, the query select SYMBOL, VOLUME, LOW, HIGH, CLOSE, OPEN from PUBLIC.NYSE where SYMBOL IS NOT NULL LIMIT 3 displays the columns in the following order in the Results Preview:
    • SYMBOL
    • VOLUME
    • LOW
    • HIGH
    • CLOSE
    • OPEN

Alerts

  • You can now create alerts for adaptive rules from Pullup jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)
  • Condition alerts now include a "Condition Value" field to provide more context about the reason for the alert. For example, the Condition Value for a row count check might be rc = 0, while for an outlier score, it could be outlierscore > 25. (idea #CDQ-I-317)
  • When you create a new alert with a name already in use by another alert on the same dataset, the existing alert is no longer automatically replaced. Instead, a confirmation dialog box now appears, allowing you to choose whether to overwrite the existing alert.

Profile

  • The correlation heatmap in the Correlation tab of the Profile page now shows a maximum of 10 columns, regardless of the number of columns returned in the API response.
  • The “Quality” column on the Profile tab is now renamed to “Completeness/Consistency.”

Collibra Platform Integration

  • When you rename a job from Data Quality & Observability Classic that is integrated with Collibra Platform, the corresponding domains in Collibra Platform are now renamed to match the new job name.
  • When you add, update, or remove meta tags from a job in Data Quality & Observability Classic with an active integration to Collibra Platform, the import JSON and the corresponding Data Quality Job in Collibra Platform are now updated. The updated JSON can include up to 4 meta tags.

APIs

  • The GET /v3/datasetDefs API call now returns meta tags in the array response in the same order they are set in the Dataset Manager. For example, if the first meta tag input field in the Dataset Manager contains "DATA," the second and third fields are empty, and the fourth contains "QUALITY," the array response will return:
        "metaTags": [
          "DATA",
          null,
          null,
          "QUALITY"
        ]
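
For instance, a short sketch of reading the tags back; the host and auth header are placeholders, and the response is assumed to be a JSON array of dataset definitions:

    import requests

    resp = requests.get(
        "https://dq.example.com/v3/datasetDefs",  # hypothetical host
        headers={"Authorization": "Bearer <token>"},  # auth scheme assumed
    )
    resp.raise_for_status()
    for dataset_def in resp.json():
        # metaTags preserves Dataset Manager field order; empty fields are null.
        print(dataset_def.get("metaTags"))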

Admin Console

  • You can now control the maximum allowed temporary file size using the new "maxfileuploadsize" admin limit. The default value is 15 MB.
  • When you enable the "RULE_FILTER" setting in the Application Configuration Settings page, create a rule with a filter, and then disable the "RULE_FILTER" setting before rerunning the job, the results on the Findings page still reflect the rule using the filter.
  • When you delete a role, any datasets mapped to it are now removed. Previously, the relations between the role, datasets, and users were still shown in the UI after deleting the role.
  • Orphaned dataset-run_id pairs are now removed from the “dataset_scan” table when you use a time-based data retention purge.

Fixes

Platform

  • You no longer encounter unexpected errors with SAML/SSO, rules, and Pushdown after upgrading to some Spark standalone self-hosted instances of Data Quality & Observability Classic.
  • Group-based role assignments for Azure SSO now work correctly after upgrading Data Quality & Observability Classic. Users no longer receive only the default "ROLE_PUBLIC," ensuring proper access permissions.
  • Users with the "ROLE_PUBLIC" and "ROLE_DATA_PREVIEW" roles can no longer create and save rules, ensuring role restrictions are enforced as intended.

Connections

  • You can now run jobs on Teradata connections without requiring the DBC.RoleMembrs.RoleName permission.

Jobs

  • Alias names now appear for date fields during row selection for a new job.
  • You no longer see the "Pushdown Count" option on the Mapping Configuration page. This was removed for BigQuery because it is not supported by Google.

Rules

  • Rule findings using a rule filter for Pushdown jobs now show the correct subset of rows applied for the rule, instead of the total rows at the dataset level.

Findings

  • Boolean attributes now show their values correctly in the Result Preview on the Rule Workbench screen.
  • The handling of “shape/enabled” and “deprecated profile/shape” JSON flags in dataset definitions and job runs is now consistent.
  • When a column contains multiple shape findings, the data preview is now shown as expected when you expand a shape finding.

Profile

  • Profiling activities now correctly support profile pushdown on BigQuery, ensuring accurate and efficient data profiling.

Reports

  • Encrypted columns now appear correctly on the Dataset Findings report.

APIs

  • The /v3/rules/{dataset}/{ruleName}/{runId}/breaks API now returns a “204 No Content” HTTP status code during pullup jobs when a rule has no break records linked to a specific LinkID.
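
Client code can branch on the new status code. A minimal sketch, with the host, auth header, and identifiers as placeholders:

    import requests

    url = "https://dq.example.com/v3/rules/my_dataset/my_rule/12345/breaks"
    resp = requests.get(url, headers={"Authorization": "Bearer <token>"})

    if resp.status_code == 204:
        breaks = []  # no break records linked to this LinkID
    else:
        resp.raise_for_status()
        breaks = resp.json()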

Patch release

2025.07.1

  • You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple or strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks if the query contains any keywords other than SELECT, while strict performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings.

Release 2025.06

Release Information

  • Release date of Data Quality & Observability Classic 2025.06: June 30, 2025
  • Release notes publication date: June 3, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.06), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.06, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.06 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

In the 2025.06 and 2025.07 releases, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
  • 2025.01 and earlier
    Java 8: Yes; Java 11: Yes; Java 17: No
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only)
  • 2025.02
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Releases 2025.02 and later support Google Cloud Dataproc for Spark 3.5.1. Dataproc requires Java 17 and currently does not support SQL Server pullup datasources.
  • 2025.03
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
  • 2025.04
    Java 8: Yes; Java 11: Yes; Java 17: Yes
    Spark versions: 2.3.0 (Java 8 only), 2.4.5 (Java 8 only), 3.0.1 (Java 8 and 11), 3.1.2 (Java 8 and 11), 3.2.2 (Java 8 and 11), 3.4.1 (Java 11 only), 3.5.3 (Java 17 only)
    Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
  • 2025.05
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.06
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.07
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.3 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
  • 2025.08
    Java 8: No; Java 11: No; Java 17: Yes
    Spark versions: 3.5.6 only
    Additional notes: Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

  • This application, which was previously known as Collibra Data Quality & Observability, is now called Data Quality & Observability Classic. This change reflects the introduction of the new Data Quality & Observability as part of the unified Collibra Platform. For more information about Data Quality & Observability, go to About Data Quality & Observability.

Jobs

  • When archiving break records for Pushdown jobs, link IDs are now optional. This change provides greater flexibility in storing and accessing break records and offers a more scalable solution for remediating records outside of Data Quality & Observability Classic. (idea # CDQ-I-155, DCC-I-2566, DCC-I-5726, CDQ-I-90, CDQ-I-137)
    • Data Quality & Observability Classic attempts to add the new results column for JSON data to the COLLIBRA_DQ_RULES table automatically. If it does not have permission to write to tables in your source database, that attempt fails, and rules fail when they try to write to a column that doesn't exist. In that case, Pushdown jobs, with or without link IDs selected, require you to run the ALTER statement on the COLLIBRA_DQ_RULES table yourself; a sketch follows below. For more information, go to ALTER statement requirements for optional link IDs.
    • Important If you don’t use a link ID, the results column in the COLLIBRA_DQ_RULES table will become exponentially larger, leading to unintended storage and compute costs in your source database. To minimize these costs, you can use a link ID or move these results out of your source database and into a remote file storage system, such as Amazon S3.
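
A sketch of the manual alteration, shown through a Python DB-API connection with sqlite3 as a stand-in driver; the JSON-capable column type varies by database (for example VARIANT, JSONB, or NVARCHAR(MAX)) and is an assumption here, so check the linked requirements page for the exact DDL:

    import sqlite3  # stand-in for your source database's DB-API driver

    conn = sqlite3.connect("example.db")  # placeholder connection
    conn.execute(
        # Column name comes from the note above; the type is illustrative.
        "ALTER TABLE COLLIBRA_DQ_RULES ADD COLUMN results TEXT"
    )
    conn.commit()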

  • Data Quality & Observability Classic now checks that a link ID is set in the settings dialog on Explorer when creating a DQ job, to ensure the archive break records feature works when enabled. If one or more link ID columns are selected, a checkbox and drop-down list appear next to the Archive Breaking Records option in the settings dialog. If no link ID columns are selected, the checkbox and drop-down list are unavailable, and a tooltip appears when you hover over the option. (idea #DCC-I-2746)
  • Oracle Pushdown is now generally available.
  • Oracle Pushdown connections do not support the following data types:
    • LONG RAW
    • NCHAR
    • NVARCHAR2
    • ROWID
    • BYTES
  • Trino Pushdown jobs now support source-to-target analysis for datasets within the same Trino Pushdown connection.
  • SAP HANA, Snowflake, SQL Server, and Trino Pushdown connections do not support the VARBINARY data type.
  • The valsrclimitui admin limit, which controls the number of Validate Source records shown in the UI, now works as expected. For example, if valsrclimitui is set to 3, the UI shows 3 records from the Column Schema and 3 records from the Cell Panels.
  • Note The fpglimitui, histlimitui, and categoricallimitui admin limits are not currently working as expected. Further, any admin limit ending in “ui”, such as valsrclimitui, is intended only for updates made via the Admin Limits page of the Admin Console. Changes made to these admin limits via the command line are not reflected.

Rules

  • Rules that use $colNm in the query, such as data class, template, and data type checks, no longer perform automatic rule validation.
  • The Rule Workbench now has a warning message when the preview limit exceeds the recommended maximum of 500 records.
  • The maximum length of a rule name is now 250 characters.
  • The Preview Breaks modal now indicates the name of the rule.
  • The Low, Medium, and High buttons now correctly reflect your selection when you reopen the Rule Details dialog box after automatically changing a rule's points. (idea #CDQ-I-150)
    • Additionally, if you manually update the score, the button automatically adjusts to match the corresponding scoring range:
      • Low: 1-4 points
      • Medium: 5-24 points
      • High: 25-100 points
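
A sketch of the documented point-to-severity mapping:

    def severity(points: int) -> str:
        # Ranges documented above: Low 1-4, Medium 5-24, High 25-100.
        if 1 <= points <= 4:
            return "Low"
        if 5 <= points <= 24:
            return "Medium"
        if 25 <= points <= 100:
            return "High"
        raise ValueError("points must be between 1 and 100")

    print(severity(10))  # "Medium"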

Admin Console

  • Business unit names on the Business Unit page in the Admin Console cannot contain the special characters / (forward slash), ` (backtick), | (pipe), or $ (dollar sign). If you use any of these special characters, an error message appears, and the Edit Business Unit dialog box remains open until the special characters are removed.
  • Data retention policies now apply to the following metastore tables:
    • alert_output
    • behavior
    • dataset_activity
    • dataset_schema_source
    • hint
    • validate_source
  • The Title and Story fields of the User Story section of the User Profile page cannot contain the following special characters:
    • ‘ (single quote)
    • “ (double quote)
    • -- (double dash)
    • < (less than)
    • > (greater than)
    • & (ampersand)
    • / (forward slash)
    • ` (backtick)
    • | (pipe)
    • $ (dollar sign)
    • If you use any of these special characters, an error message appears, and the Edit User Story dialog box remains open until the special characters are removed.

Fixes

Rules

  • You can now apply rule filter queries to rules created using rule templates on Pushdown datasets.
  • Duplicated data quality rules no longer appear after you import data assets using CMA.
  • Rules now run without errors when you use a single link ID to archive break records with SQL Server Pushdown.

Profile

  • Divide-by-zero errors and issues with empty histograms are now resolved. Calculations and visualizations now work as expected without interruptions.

Findings

  • You can now export rule break records with accurate counts. The total number of rule break records for a rule now matches the number in the Breaking Records column of the exported file.

Integration

  • When you rename a rule in Data Quality & Observability Classic that is part of a dataset integrated with Collibra Platform, additional Data Quality Rule assets are no longer created during the next integration. The existing Data Quality Rule asset is also no longer unintentionally set to a “suppressed” state during the next integration, and its name is updated to match the name in Data Quality & Observability Classic.