Release 2025.04
Release Information
- Release date of Data Quality & Observability Classic 2025.04: April 28, 2025
- Release notes publication date: April 2, 2025
Announcement
As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.
In this release (2025.04), only the Java 17 build profile of Data Quality & Observability Classic contains all new and improved features and bug fixes listed in the release notes. The Java 8 and 11 build profiles for Standalone installations contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases.
Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release for Java 17 build profiles:
- Kubernetes installations
- Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
- Standalone installations
- To install Data Quality & Observability Classic 2025.04, you must upgrade to Java 17 and Spark 3.5.3 if you did not already do so in the 2025.02 or 2025.03 release.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
- Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.04 with Java 17.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
- We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
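For Standalone installations, the setting described above might look like the following sketch of an owl-env.sh fragment. Only the SAML_METADATA_USE_URL variable and the file:/opt/owl/config/idp-metadata.xml path come from this release note; the surrounding lines are illustrative.

```shell
# owl-env.sh (illustrative fragment)
# Use a file-based identity provider metadata file instead of a URL.
export SAML_METADATA_USE_URL=false
# After upgrading, also update the Meta-Data URL option on the
# SAML Security Settings page to point at the file, with the "file:" prefix:
#   file:/opt/owl/config/idp-metadata.xml
```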
While this release contains Java 8, 11, and 17 builds of Data Quality & Observability Classic for Standalone installations, it is the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic.
For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.
For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.
New and improved
Platform
- The Platform Path of the SQL Assistant for Data Quality feature now uses the Gemini AI model version gemini-1.5-pro-002.
- FIPS-enabled standalone installations of Data Quality & Observability Classic now support SAML authentication.
- The DQ agent now automatically recovers when the Metastore reconnects after a temporary disconnection caused by maintenance or a transient outage.
Important To continue using the Platform Path of the SQL Assistant for Data Quality feature, you must upgrade to Data Quality & Observability Classic 2025.04.
Jobs
- We are pleased to announce that Oracle Pushdown is now available for preview testing.
- When you add a timeslice to a timeUUID data type column in a Cassandra dataset, an unsupported data type error message now appears.
- Dremio, Snowflake, and Trino Pullup jobs now support common table expressions (CTE) for parallel JDBC processing.
- You can now archive the break records of shapes from Trino Pushdown jobs.
- Link IDs for exact match duplicates are no longer displayed on the Findings page.
- We improved how validate source findings are inserted into the Metastore.
Rules
- When you add or edit a rule with an active status, SQL syntax validation now runs automatically when you save it. If the rule passes validation, it saves as expected. If validation fails, a dialog box appears, asking whether you want to continue saving with errors. Rules with an inactive status save without validation checks.
- When a rule condition exceeds 2 lines, only the first 2 lines are shown in the Condition column of the Rules tab on the Findings page. You can click "more..." or hover over the cell to show the full condition.
Profile
- The Profile page now correctly shows the top 5 TopN and BottomN shape results for Pullup jobs.
- When there are only two unique string values, the histogram on the Profile page now shows them correctly.
Findings
- To improve page alignment across the application, the Findings page now has a page title.
Alerts
- Rule score-based alerts now respect rule tolerance settings. Alerts are suppressed when the rule break percentage falls within the tolerance threshold.
Dataset Manager
- If you don't have the required permissions to perform certain tasks in the Dataset Manager, such as updating a business unit, an error message now appears when you attempt the action.
Scorecards
- You can now delete scorecards whose names contain trailing spaces, such as "scorecard " and "test scorecard ".
Integration
- You can now search with trailing spaces when looking up a community for tenant mapping in the Data Quality & Observability Classic and Collibra Platform integration configuration.
APIs
- You now need the ROLE_ADMIN or ROLE_ADMIN_VIEWER role to access the /v2/getdatasetusagecount API endpoint.
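As a sketch, a call to this endpoint might look like the following. Only the endpoint path and the role names come from this release note; the host and token are illustrative placeholders.

```shell
# Illustrative only: replace the host and token with your own values.
# The caller must hold the ROLE_ADMIN or ROLE_ADMIN_VIEWER role;
# other users now receive an authorization error.
curl -s -H "Authorization: Bearer $DQ_TOKEN" \
  "https://your-dq-host/v2/getdatasetusagecount"
```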
Fixes
Platform
- We have improved the security of our application.
- The data retention process now works as expected for all tenants in multi-tenant instances.
- Permissions errors for SAML users without the ROLE_PUBLIC role are now resolved. You no longer need to assign ROLE_PUBLIC to users who already have other valid roles.
Jobs
- Native rules, secondary dataset rules, and scheduled jobs on Databricks Pullup connections authenticated via EntraID Service Principal now run as expected.
- Pushdown jobs scheduled to run concurrently now process the correlation activity correctly.
- Pushdown jobs with queries that contain new line characters now process correctly, and the primary table from the query is shown in the Dataset Manager.
Findings
- The confidence calculations for numerical outliers in Pullup jobs have been updated for negative values. Positive value confidence calculations and Pushdown calculations remain unchanged.
Alerts
- Conditional alerts now work as expected when based on rules with names that start with numbers.
Integration
- The default setting for the integration schema, table, and column recalculation service (DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED) is now false, reducing unnecessary database activity. You can enable the service or call it through the API when needed.
- The Quality tab for database assets is now supported in out-of-the-box aggregation path configurations.
- The auto map feature now correctly maps schemas that contain only views.
- Dimension configuration for integration mapping no longer shows duplicate Data Quality & Observability Classic dimensions from the dq_dimension table.
Important From versions 2024.11 to 2025.03 of Data Quality & Observability Classic, if you don't want the queries that recalculate mapped and unmapped stats of total entities to run, set DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED to false in the owl-env.sh script or the Web ConfigMap. You can keep DQ_INTEGRATION_SCHEDULER_ENABLED set to true.
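In owl-env.sh, the settings described above might look like the following sketch. The two variable names come from this release note; the surrounding lines are illustrative.

```shell
# owl-env.sh (illustrative fragment)
# Disable the schema/table/column mapping stats recalculation service
# (the default from 2025.04 onward) to reduce database activity.
export DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED=false
# The integration scheduler itself can remain enabled.
export DQ_INTEGRATION_SCHEDULER_ENABLED=true
```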
For more information, go to the Collibra Support center.
API
- When you copy a rule with an incorrect or non-existent dataset name using the v3/rules/copy API, an error message now specifies the invalid dataset or rule reference. This prevents invalid references in the new dataset.
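A copy request might look like the following sketch. Only the v3/rules/copy path comes from this release note; the host, token, and request body fields are hypothetical placeholders, as the actual payload shape is not described here.

```shell
# Illustrative only: the host, token, and JSON body fields
# ("sourceDataset", "targetDataset") are hypothetical; consult the
# API documentation for the actual request shape.
curl -s -X POST -H "Authorization: Bearer $DQ_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"sourceDataset": "orders_dq", "targetDataset": "orders_dq_copy"}' \
  "https://your-dq-host/v3/rules/copy"
# If the dataset or rule reference is invalid, the error response now
# names it instead of creating an invalid reference in the new dataset.
```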