Release 2025.07
Release information
- Release date of Data Quality & Observability Classic 2025.07: August 4, 2025
- Release notes publication date: July 8, 2025
Announcement
Important
As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.
In this release (2025.07), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
- Kubernetes installations
- Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- Standalone installations
- To install Data Quality & Observability Classic 2025.07, you must upgrade to Java 17 and Spark 3.5.3.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
- Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.07 with Java 17.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.
For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.
New and improved
Platform
- You can now use the dataset security setting "Require TEMPLATE_RULES role to add, edit, or delete a rule template" and assign users the "ROLE_TEMPLATE_RULES" role to restrict updates to template rules. (idea #CDQ-I-169)
- When "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is enabled, users without the "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES" roles can view the list of template rules but cannot add, edit, or delete them.
- If "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is disabled, or if it is enabled and the user has "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES," they can view, add, edit, and delete rule templates.
- When creating a job, the “Visualize” function in Explorer is now inaccessible if the dataset security setting "Require DATA_PREVIEW role to see source data" is enabled and the user doesn't have the "ROLE_DATA_PREVIEW" role.
- You can now use SAML SSO to sign in from the Tenant Manager page. Additionally, the AD Security Settings page from the Admin Console is now available in the Tenant Manager, allowing you to map user roles to external groups.
- The application now has improved security when parsing SQL queries in the Dataset Overview and the "SQL View" step in Explorer. As a result, new or edited datasets with complex queries may not pass validation during parsing if they include nested subqueries, non-standard syntax, or ambiguous joins that are not supported by the parser's grammar.
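The template-rule permission behavior described above can be sketched as a simple access check. This is an illustrative sketch only: the setting and role names come from the release note, while the function and its signature are hypothetical, not the product's actual implementation.

```python
# Illustrative sketch of the template-rule access check described above.
# The setting and role names are from the release note; this helper is
# hypothetical, not the product's actual implementation.
def can_modify_rule_templates(setting_enabled: bool, roles: set[str]) -> bool:
    """Return True if the user may add, edit, or delete rule templates."""
    if not setting_enabled:
        return True  # setting disabled: all users can manage templates
    # Setting enabled: only admins or holders of ROLE_TEMPLATE_RULES may
    # modify; other users can still view the template list.
    return bool(roles & {"ROLE_ADMIN", "ROLE_TEMPLATE_RULES"})

print(can_modify_rule_templates(True, {"ROLE_PUBLIC"}))          # False
print(can_modify_rule_templates(True, {"ROLE_TEMPLATE_RULES"}))  # True
print(can_modify_rule_templates(False, {"ROLE_PUBLIC"}))         # True
```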
Jobs
- You can now scan for duplicate records in Oracle Pushdown jobs.
- When you transform a column from Explorer, it is now available in the “Target” drop-down list during the Mapping step.
- When a job fails, the metadata bar remains visible on the Findings page.
Rules
- The “Edit Rule” dialog box now includes Description and Purpose fields. These fields allow you to add details about Collibra Platform assets associated with your rule and identify which assets should relate to your Data Quality Rule asset upon integration. (idea #CDQ-I-219)
- The Rule Workbench now supports the substitution of both string and numeric data types for $min, $average, and $max stat rule values.
- The Results Preview on the Rule Workbench now shows columns in the same order as they are listed in the rule query. For example, the query select SYMBOL, VOLUME, LOW, HIGH, CLOSE, OPEN from PUBLIC.NYSE where SYMBOL IS NOT NULL LIMIT 3 displays the columns in the following order in the Results Preview:
- SYMBOL
- VOLUME
- LOW
- HIGH
- CLOSE
- OPEN
Alerts
- You can now create alerts for adaptive rules from Pullup jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)
- Condition alerts now include a "Condition Value" field to provide more context about the reason for the alert. For example, the Condition Value for a row count check might be rc = 0, while for an outlier score, it could be outlierscore > 25. (idea #CDQ-I-317)
- When you create a new alert with a name already in use by another alert on the same dataset, the existing alert is no longer automatically replaced. Instead, a confirmation dialog box now appears, allowing you to choose whether to overwrite the existing alert.
Profile
- The correlation heatmap in the Correlation tab of the Profile page now shows a maximum of 10 columns, regardless of the number of columns returned in the API response.
- The “Quality” column on the Profile tab is now renamed to “Completeness/Consistency.”
Collibra Platform Integration
- When you rename a job from Data Quality & Observability Classic that is integrated with Collibra Platform, the corresponding domains in Collibra Platform are now renamed to match the new job name.
- When you add, update, or remove meta tags from a job in Data Quality & Observability Classic with an active integration to Collibra Platform, the import JSON and the corresponding Data Quality Job in Collibra Platform are now updated. The updated JSON can include up to 4 meta tags.
APIs
- The GET /v3/datasetDefs API call now returns meta tags in the array response in the same order they are set in the Dataset Manager. For example, if the first meta tag input field in the Dataset Manager contains "DATA," the second and third fields are empty, and the fourth contains "QUALITY," the array response returns:

```json
"metaTags": [
  "DATA",
  null,
  null,
  "QUALITY"
]
```
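A client consuming this response can rely on array position to map each tag back to its Dataset Manager input field. The sketch below uses a hard-coded payload mirroring the example above; the parsing logic is illustrative and not part of the product (a real client would fetch the body from GET /v3/datasetDefs with authentication).

```python
import json

# Hypothetical payload mirroring the example above; a real client would
# obtain this from GET /v3/datasetDefs with authentication.
response_body = '{"metaTags": ["DATA", null, null, "QUALITY"]}'

meta_tags = json.loads(response_body)["metaTags"]

# Positions map back to the Dataset Manager input fields, so empty fields
# come through as null rather than being dropped from the array.
filled = {i + 1: tag for i, tag in enumerate(meta_tags) if tag is not None}
print(filled)  # {1: 'DATA', 4: 'QUALITY'}
```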
Admin Console
- You can now control the maximum allowed temporary file size using the new "maxfileuploadsize" admin limit. The default value is 15 MB.
- When you enable the "RULE_FILTER" setting in the Application Configuration Settings page, create a rule with a filter, and then disable the "RULE_FILTER" setting before rerunning the job, the results on the Findings page still reflect the rule using the filter.
- When you delete a role, any datasets mapped to it are now removed. Previously, the relations between the role, datasets, and users were still shown in the UI after deleting the role.
- Orphaned dataset-run_id pairs are now removed from the “dataset_scan” table when you use a time-based data retention purge.
Fixes
Platform
- You no longer encounter unexpected errors with SAML/SSO, rules, and Pushdown after upgrading certain standalone self-hosted Spark instances of Data Quality & Observability Classic.
- Group-based role assignments for Azure SSO now work correctly after upgrading Data Quality & Observability Classic. Users no longer receive only the default "ROLE_PUBLIC," ensuring proper access permissions.
- Users with the "ROLE_PUBLIC" and "ROLE_DATA_PREVIEW" roles can no longer create and save rules, ensuring role restrictions are enforced as intended.
Connections
- You can now run jobs on Teradata connections without requiring the DBC.RoleMembrs.RoleName permission.
Jobs
- Alias names now appear for date fields during row selection for a new job.
- You no longer see the "Pushdown Count" option on the Mapping Configuration page. This was removed for BigQuery because it is not supported by Google.
Rules
- Rule findings using a rule filter for Pushdown jobs now show the correct subset of rows applied for the rule, instead of the total rows at the dataset level.
Findings
- Boolean attributes now show their values correctly in the Result Preview on the Rule Workbench screen.
- The handling of “shape/enabled” and “deprecated profile/shape” JSON flags in dataset definitions and job runs is now consistent.
- When a column contains multiple shape findings, the data preview is now shown as expected when you expand a shape finding.
Profile
- Profiling activities now correctly support profile pushdown on BigQuery, ensuring accurate and efficient data profiling.
Reports
- Encrypted columns now appear correctly on the Dataset Findings report.
APIs
- The /v3/rules/{dataset}/{ruleName}/{runId}/breaks API now returns a “204 No Content” HTTP status code during pullup jobs when a rule has no break records linked to a specific LinkID.
Patch release
2025.07.1
- You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple or strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks whether the query contains any keywords other than SELECT, while strict validation performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings.
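The difference between the two modes can be sketched as follows. This only mirrors the documented behavior; the product's actual parser is not public, so the keyword list and both functions are illustrative assumptions, not the real implementation.

```python
import re

# Illustrative keyword list; the product's actual parser grammar is not public.
NON_SELECT_KEYWORDS = {
    "INSERT", "UPDATE", "DELETE", "DROP", "ALTER",
    "CREATE", "TRUNCATE", "MERGE", "GRANT", "REVOKE",
}

def simple_validate(query: str) -> bool:
    """Simple mode: pass unless the query contains a non-SELECT keyword."""
    tokens = set(re.findall(r"[A-Za-z_]+", query.upper()))
    return not (tokens & NON_SELECT_KEYWORDS)

def strict_validate(query: str) -> bool:
    """Strict mode (sketched here as: must start with SELECT and pass the
    simple check; the real mode performs a full syntax check)."""
    return query.strip().upper().startswith("SELECT") and simple_validate(query)

print(simple_validate("SELECT * FROM t; DROP TABLE t"))   # False
print(strict_validate("SELECT id FROM t WHERE id > 0"))   # True
```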