Release notes
- Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance, or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
- Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.
Release 2025.11
Release Information
- Expected release date of Data Quality & Observability Classic 2025.11: November 24, 2025
- Release notes publication date: October 31, 2025
New and improved
Platform
- To enhance security, authentication tokens now require a minimum secret key length of 256 bits (32 characters) for the security.jwt.token.secret-key property. If you use environment variables, update the SECURITY_JWT_TOKEN_SECRET-KEY configuration to a securely generated value of at least this length. The existing default value will be removed in January 2026. We strongly recommend providing a custom value if you haven't already; a configuration sketch follows this list.
- You can now use the OAuth2 authentication type for Snowflake connections. This enhancement provides a more secure and flexible authentication option for connecting to Snowflake.
- The Snowflake JDBC driver has been updated to version 3.26.1 to support OAuth-based authentication.
- Out-of-the-box FIPS-compliant JDBC drivers are now included in Data Quality & Observability Classic installation packages. You can test connections by setting "DQ_APP_FIPS_ENABLED" to true or false in the owl-env.sh file or DQ Web ConfigMap, as shown in the sketch after this list.
- In FIPS mode, "SAML_KEYSTORE_FILE" no longer takes effect. Instead, DQ uses the "SAML_KEYSTORE_ALIAS" entry in the keystore at "KEYSTORE_FILEPATH", together with "SAML_KEYSTORE_PASS".
- You can now manage admin user creation during tenant creation, supporting SSO-only user access and meeting security policy requirements.
- You can now use the new "rulelimit" admin configuration option to limit the number of rule break records stored in the metastore for Pushdown jobs. This helps prevent out-of-memory issues when a rule returns a large number of break records. The default value is 1000. This option does not affect Pullup or Pushdown jobs with archive break records enabled.
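The following is a minimal sketch of the secret key and FIPS settings mentioned above, assuming a Spark Standalone deployment where owl-env.sh is edited directly (Kubernetes deployments set the same values in the DQ Web ConfigMap); the openssl command is only one possible way to produce a securely generated value of at least 32 characters.

# Generate a 256-bit (32-byte) secret; the base64 output is well over 32 characters
openssl rand -base64 32

# owl-env.sh: enable or disable FIPS mode for connection testing
export DQ_APP_FIPS_ENABLED=true

# Set the security.jwt.token.secret-key / SECURITY_JWT_TOKEN_SECRET-KEY configuration
# to the generated value in the same file or ConfigMap.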
Jobs
- When archiving break records for Pullup jobs, link IDs are now optional. This change provides greater flexibility in storing and accessing break records, offering a more scalable solution for remediating records outside of Data Quality & Observability Classic. The new format includes the full rule result.
- Additionally, archiving break records for Pullup jobs now supports multiple runs in a single day. The newest archive file remains "ruleBreaks.csv," and a "ruleBreaks[timestamp].csv" file is created for each run.
- The pagination controls in Explorer are now at the top of the table list, making navigation easier.
- When you access the Dataset Overview from Explorer or the Rule Workbench, column headers now stay locked in place, consistent with other areas of the application.
Rules
- SQL Assistant for Data Quality now directly connects to your Collibra Platform instance to make LLM-router calls, eliminating the need for a Kong endpoint configuration. This improves efficiency and expedites requests to the Collibra AI proxy.
- When you add or edit a rule template, the same naming rules apply as when creating a rule. You can no longer use special characters in rule template names.
- The "Add Rule Template" modal now requires you to complete the "Rule Query" fields before submitting, ensuring all necessary information is provided when creating a rule template.
- The checkbox in the "Settings" modal now accurately shows the rule's status. It is checked when the rule is active and cleared when the rule is inactive.
- Users without sufficient permissions can no longer see break record data in the "Archived Breaks" modal, which is available from the Rules tab of the Findings page. This enhancement improves data security by ensuring that only authorized users can access sensitive information.
Alerts
- You can now configure webhooks to send Global Failure alerts, enabling faster identification and resolution of issues.
- You can now include JSON templates with replacement variables in the definition of a webhook. These variables are included in a custom JSON payload sent to the webhook server, enabling more flexible and tailored integrations for alerting.
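As an illustration of a webhook JSON template, the payload definition below uses replacement variables; the variable names (${alertName}, ${dataset}, ${runId}, ${condition}) and field layout are hypothetical examples rather than a documented list, so check the webhook configuration screen for the variables available in your environment.

{
  "text": "Alert ${alertName} fired for dataset ${dataset}",
  "severity": "high",
  "details": {
    "runId": "${runId}",
    "condition": "${condition}"
  }
}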
Reports
- When you open the Dataset Findings report on the Reports page, you now see the total number of rows in the dataset.
APIs
- The API v3/jobs/findings now populates the same valid values for datashapes as the API v2/getdatashapes.
- The API v3/datasetdef now includes time zone information for job schedules.
Fixes
Platform
- You can now enter credentials for a remote connection when creating a database, restoring the previous functionality.
- When you add role mappings without first specifying a role in Admin Settings > Role Management, the role name and mapping now show correctly. The success message accurately reflects the completed action.
Jobs
- Jobs that run in EMR instances no longer get stuck in an "Unknown" status.
- When you use the file lookback parameter "-fllb" with the added date column "-adddc", the "OWL_RUN_ID" column is now correctly included in historical and current dataframes. This prevents job failures during outlier calculation.
- When working with SQL Server Pushdown jobs, an "ORDER BY" clause in a source query now requires the "TOP" command due to a SQL Server limitation (see the sketch after this list). For queries that return an entire table, "ORDER BY" cannot be used because Pushdown relies on the source system for query processing. Additionally, API calls for rule breaks on SQL Server Pushdown jobs require a limit to ensure repeatable ordering for queries that use offsets to page the rule preview. If no limit is applied, the API call returns a 404 error.
- The search string in the profile now refreshes correctly when you change the column. Search results are displayed accurately without needing to manually collapse and re-expand to update the values.
- The integration status button in the Metadata Bar now correctly reflects the status when manual integration fails. This behavior is aligned with the accurate functionality of the Dataset Manager.
- Explorer now remembers its last state when you return from the job configuration.
- When creating a job, the "Include Views" option under the Mapping step now functions correctly.
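The SQL Server requirement mentioned above can be illustrated with a short sketch; the table and column names are hypothetical.

-- Accepted source query: ORDER BY is paired with TOP, as SQL Server requires here
SELECT TOP 1000 customer_id, order_date, amount
FROM dbo.orders
ORDER BY order_date DESC;

-- Not accepted when the query returns the entire table: ORDER BY without TOP
-- SELECT customer_id, order_date, amount FROM dbo.orders ORDER BY order_date DESC;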
Findings
- The data preview of Length_N shapes now displays correctly in the Shapes tab on the Findings page.
Rules
- You no longer encounter access errors when using the "Run Result Preview" function in Rule Workbench. The function now checks if you are the dataset owner, ensuring proper access permissions.
- The validation message "Validation is not supported for stat rules" is now shown again during syntax validation, both as a warning banner and in the dialog box, restoring the legacy behavior.
- When "snowflake_key_pair" is used as a secondary connection under a rule in a dataset, the command line now adds a Spark configuration in the Kubernetes environment.
- The "Pretty Print" function on the Rule Workbench now preserves spaces inside double or single quotes during formatting, preventing unintended changes to column names.
Alerts
- Alerts for non-authenticated webhooks now send correctly at runtime.
- Multi-condition alerts now trigger correctly and emails are sent successfully.
- The search feature now works correctly on the Alert Notifications page.
Collibra Platform integration
- Large aggregation requests no longer cause out-of-memory errors. Results are now fetched in pages and partially stored on the filesystem, ensuring memory usage stays within safe limits while still returning results in the required stats format.
- After running an integration, Collibra Platform now shows correct values for custom rule attributes when a rule in Data Quality & Observability Classic contains a filter. The Loaded Rows, Rows Failed, and Rows Passed attributes now reflect the rule row count, which is based on the subset of the dataset defined by the rule filter.
APIs
- The "/v2/getdataasset" and "/v2/getcatalogbydataset" endpoints now correctly return the name of the Data Category assigned to the dataset.
Release 2025.10
Release Information
- Release date of Data Quality & Observability Classic 2025.10: October 27, 2025
- Release notes publication date: September 22, 2025
New and improved
Platform
Note We updated Tomcat to address several security vulnerabilities (CVEs). The maximum number of parts allowed in a multipart request is now set to 80 by default. This default limit might affect APIs that process datasets with a large number of columns. If necessary, you can change this limit using the property "DQ_SERVLET_MULTIPART_MAXPARTCOUNT" in the owl-env.sh file for Spark Standalone installations or the owl-web-configmap for Kubernetes environments.
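For example, to raise the limit for datasets with more than 80 columns, the property can be set as in the sketch below, assuming a Spark Standalone deployment where owl-env.sh is edited directly (Kubernetes deployments set the same property in the owl-web-configmap); the value 200 is only an example.

# owl-env.sh: raise the maximum number of parts allowed in a multipart request (default 80)
export DQ_SERVLET_MULTIPART_MAXPARTCOUNT=200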
- Scheduled jobs no longer run inconsistently or fail due to a JobRunr licensing issue.
- Tenants on the "Login" page are now listed in alphabetical order for local users. This makes it easier to locate and select the desired tenant. This feature will be added for the SSO tenant list in a future release.
Jobs
- A summary query is now generated for Pushdown jobs when archive break records are turned off and no link IDs are set. This helps limit the break records requested for the metastore data preview and prevents out-of-memory issues.
- To ensure consistent validation behavior for job names between Dataset Manager and Explorer, special characters are no longer allowed when renaming existing jobs in Dataset Manager.
- Whitespace characters and parentheses are now handled correctly when creating a BigQuery job. You no longer need to use pretty print to resolve syntax errors.
Findings
- The Rules tab on the Findings page now includes a search box to find matching rule names. This allows you to quickly locate specific rules, improving navigation and efficiency.
Fixes
Platform
- The dq-web-configmap.yaml file no longer contains the incorrect parameter for the “LOCAL_REGISTRATION_ENABLED” setting.
Jobs
- When an Amazon S3 connection is inactive, the error message "Unauthorized Access: Credentials are missing or have expired" now shows. This message also appears when mapping from JDBC to remote files, helping you identify credential issues more easily.
- When you re-run a job after invalidating fuzzy match duplicates, saving, and retraining on the Findings page, the job no longer contains the same invalidated fuzzy match duplicates.
- The metadata bar is now visible when you create a job from a Databricks connection. This ensures a consistent user experience and provides access to relevant options during job creation.
- Pushdown jobs no longer reset dataset ownership each time they run, even if the user running the job is not the original creator.
Findings
- The "Failed to update label" error no longer shows when you edit an annotation label.
- When you configure a job with a Patterns monitor, the results on the Findings page are now shown correctly.
Rules
- When you use "+Quick Rule" from a Rule Template, the dimension is now added automatically if a dimension was added as part of the template’s settings. This streamlines the rule creation process and ensures the dimension is included without additional steps.
Alerts
- You now must select a radio button for "Job Alert Status *" when creating a Job Status alert. This ensures that the required selection is enforced properly, preventing incomplete alert configurations.
Collibra Platform integration
- You no longer receive an error when re-enabling an integration that takes more than 60 seconds.
- You can no longer enable, disable, or reset a Collibra Platform integration on the Dataset Manager or Findings pages unless you have the ROLE_DATASET_ACTION, ROLE_ADMIN, or ROLE_DATASET_MANAGER role. This ensures that only users with the appropriate permissions can manage Data Quality & Observability Classic integrations with Collibra Platform.
- On the "Mapping Status" tab of the Integration Setup page, a new partial column table count was added to the Tables Mapped column.
Release 2025.09
Release Information
- Release date of Data Quality & Observability Classic 2025.09: September 29, 2025
- Release notes publication date: September 2, 2025
New and improved
Platform
Warning To continue using SQL Assistant for Data Quality, you must upgrade to Data Quality & Observability Classic version 2025.09, which uses the gemini-2.5-pro AI model. Google is retiring the gemini-1.5-pro-002 model used in earlier versions of Data Quality & Observability Classic; once it is retired, SQL Assistant for Data Quality will stop working unless you upgrade.
- Add the JOBRUNR_PRO_LICENSE environment variable to all environments
- Securely add the JobRunr license as an environment variable to the DQ-web ConfigMap for Kubernetes deployments or to the owl-env.sh for Spark Standalone:
JOBRUNR_PRO_LICENSE="[license]"
- Restart the server to apply the license after updating.
- Upgrade to the upcoming 2025.10 version
- No further action is required after upgrading.
Note We updated Tomcat to address several security vulnerabilities (CVEs). The maximum number of parts allowed in a multipart request is now set to 80 by default. This default limit might affect APIs that process datasets with a large number of columns. If necessary, you can change this limit using the property "DQ_SERVLET_MULTIPART_MAXPARTCOUNT" in the owl-env.sh file for Spark Standalone installations or the owl-web-configmap for Kubernetes environments.
- You can now configure Secret Key-based encryption in Data Quality & Observability Classic environments. This allows you to use an existing JKS file or override it with your own file for greater control over key size, algorithms, and encryption methods.
- When you delete a tenant from the metastore, the schema and all its tables are also deleted.
Jobs
- BigQuery Pushdown jobs now support column names with spaces.
Findings
- We removed the "State" column from the Rules tab on the Findings page to improve readability and streamline the layout.
- The Lower Bound and Upper Bound input fields for adaptive rules on the Change Detection modal now replace commas with periods, ensuring that values such as 10,23 are interpreted and displayed as 10.23. This prevents locale-specific misinterpretations.
Rules
- The column order in the Preview Breaks dialog box now matches the column order on the Rule Query page.
- The values for rules, data preview, and rule preview now show full numeric values instead of exponential values.
Dataset Overview
- You can now analyze tables in schemas with mixed or lowercase Snowflake schema names without editing the dataset query. This applies to both Pushdown and Pullup, making the analysis process more seamless.
Dataset Manager
- You can now use the updated APIs to manage the new "Schema Table Mapping" field available in the Dataset Edit dialog box in the Dataset Manager. Use the "/v2/updatecatalogobj" API to update this field and the "/v2/getdatasetschematablemapping" API to retrieve the Schema Table Mapping JSON string (see the request sketch after this list). Existing authorization rules now apply to schema mapping API updates, ensuring consistent security. Additionally, the scope query parser now supports multiple schemas and tables, and cardinality has been enhanced to allow multiple tables in a single job. This advanced SQL parsing is executed for each Data Quality & Observability Classic job when the new field, schemaTableNames, is empty or null.
- When you edit a dataset in Dataset Manager, the "Schema Table Mapping" field is now automatically updated during the next job execution if it is empty or blank. The new parsing algorithm uses the scope query schema and table discovery to populate this field, ensuring more accurate and complete dataset information.
- You can now see a new read-only "Dataset Query" field in the Dataset Edit dialog box. This field shows the variables used in the query, making it easier to review the dataset's configuration.
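A hedged sketch of calling the retrieval endpoint mentioned above is shown below; the dataset query parameter name, the bearer-token header, and the host are assumptions for illustration, so check the API documentation for the exact request shape.

# Retrieve the Schema Table Mapping JSON string for a dataset (parameter name assumed)
curl -H "Authorization: Bearer $TOKEN" \
  "https://<dq-host>/v2/getdatasetschematablemapping?dataset=my_dataset"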
Collibra Platform integration
- You can now associate a single data asset with multiple tables. The cardinality of the "Data Asset Represent Table" relation type has been updated from one-to-one to one-to-many, allowing for greater flexibility in managing data asset relationships.
Fixes
Platform
- On the Quality tab of Collibra Platform, the ring chart color now matches the corresponding ring chart in the At a Glance sidebar.
Jobs
- Pushdown jobs no longer fail with a "ConcurrentDeleteReadException" message caused by concurrent delete and read operations in a Delta Lake environment.
- Potential errors in the estimation step caused by complex source queries are now resolved.
Rules
- You can now preview the results of your rule on jobs from SQL Server connections using Livy without the application becoming unresponsive.
- When a SQL rule contains multiple column names and the MAP_TO_PRIMARY_COLUMN option is enabled during integration setup, only the primary column name is now assigned to the data quality rule asset.
- The "Actions" drop-down list on the Templates page no longer forces a selection.
- En dash (–) comments in the SQL source query no longer cause the rule to throw an exception.
Dataset Manager
- You can now find renamed datasets in the Dataset Manager.
Collibra Platform integration
- The lowest possible passing fraction is now 0 when a dataset from Data Quality & Observability Classic is integrated with Collibra Platform, even if the total number of outlier findings would otherwise result in a negative passing fraction value.
- You no longer receive an unexpected error stating “unable to fetch DGC schemas” when mapping connections from Data Quality & Observability Classic to databases in Collibra Platform.
- The dgc_dq_mapping table now includes an alias for column names, ensuring the correct column relation is reflected in the data quality rule.
Release 2025.08
Release Information
- Release date of Data Quality & Observability Classic 2025.08: September 2, 2025
- Release notes publication date: August 6, 2025
Announcement
As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective this release (2025.08).
In this release (2025.08), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.6. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
- Kubernetes installations
- Kubernetes containers automatically contain Java 17 and Spark 3.5.6.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- Standalone installations
- To install Data Quality & Observability Classic 2025.08, you must upgrade to Java 17 and Spark 3.5.3.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
- Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.08 with Java 17.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.
For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.
New and improved
Platform
Note We updated Tomcat to address several security vulnerabilities (CVEs). The maximum number of parts allowed in a multipart request is now set to 80 by default. This default limit might affect APIs that process datasets with a large number of columns. If necessary, you can change this limit using the property "DQ_SERVLET_MULTIPART_MAXPARTCOUNT" in the owl-env.sh file for Spark Standalone installations or the owl-web-configmap for Kubernetes environments.
- You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple or strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks whether the query contains any keywords other than SELECT, while strict validation performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings. An illustrative example follows this list.
- When you upload a file to Data Quality & Observability Classic, the application now validates the content type to ensure it matches the file extension. Additionally, when uploading drivers on the Connections page, only JAR files with a content type of "application/java-archive" or "application/zip" are supported.
- Data Quality & Observability Classic now runs on Spark 3.5.6. If you use a Spark Standalone or EMR deployment of Data Quality & Observability Classic and experience issues upgrading to 2025.08, we recommend upgrading your Spark version to 3.5.6. No action is required if you don't encounter any issues.
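A rough illustration of the simple versus strict validation behavior described above, using hypothetical table and column names:

-- Passes both simple and strict validation: a syntactically valid SELECT statement
SELECT id, status FROM public.orders WHERE status = 'OPEN';

-- Rejected by both settings: contains a non-SELECT keyword
DELETE FROM public.orders WHERE status = 'CLOSED';

-- Passes simple validation (only SELECT keywords) but fails strict parsing: invalid syntax
SELECT id, FROM public.orders;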
Jobs
- Trino Pushdown connections now support profiling and custom rules on columns of the ROW data type.
- You can now set the "Preview Limit" option to a value of 0 or higher when creating or editing a rule.
- Null values are no longer counted as unique values in Pullup mode. However, you can include null values in the unique value count on the Profile page by setting the command line option "profileIncludeNulls" to "true."
Findings
- When you export source-to-target findings, the export file now includes the column schemas and cell values.
- On the Findings page, float values with commas now retain up to 4 decimal places, and trailing zeroes are removed. For example, 3722.25123455677800000000000 is now shown as 3722.2512. When you hover your pointer over the value, the full value is shown in a tooltip.
- You can now download a CSV of break records directly from the Preview Breaks dialog box using the Download Results button. (idea #PE-I-3723, CDQ-I-386)
Alerts
- You can now create alerts for adaptive rules from Pushdown jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)
Connections
- You can now upload keys for Snowflake authorizations into Kubernetes environments to reference them within connection strings. (idea #CDQ-I-332)
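For example, once a key has been uploaded into the Kubernetes environment, a Snowflake JDBC connection string could point at it through the driver's private_key_file property; the mount path, account, warehouse, and database names below are assumptions for illustration.

jdbc:snowflake://<account>.snowflakecomputing.com/?warehouse=MY_WH&db=MY_DB&private_key_file=/opt/owl/keys/rsa_key.p8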
Fixes
Platform
- Updating an asset's name no longer breaks the Quality tab, as related assets are connected using the asset's full name.
Jobs
- Columns from Trino connections are now accurately identified using their JDBC drivers and metadata instead of their catalog name, ensuring they load correctly.
- The Partition Column field now shows the correct column in the Job Creator metadata box of the scanning method step.
- An “invalid table name” error no longer occurs during job size estimation for data sources that previously caused errors, including Denodo, SQL Server, and Oracle.
- You can now view the "Discard Changes" and "Compile" buttons on the "Write and Compile a SQL Query" page, regardless of the Zoom setting or browser window size.
Rules
- When you use pretty print for a rule, it no longer introduces additional characters or alters values.
- When you click "Add Rule Template" on the "Templates" page, the selected value in the "Dimension" drop-down no longer disappears when you click elsewhere on the screen.
Findings
- You no longer receive an "Access Denied" message when editing the Findings page for local files.
Profile
- You can now compare the Baseline and Run values for the "Filled rate" when the Quality switch is enabled for a column on the Profile report.
Reports
- You can no longer filter by date in the Alert Details report. This ensures that the charts and counts accurately reflect the applied filter criteria.
Collibra Platform integration
- The Quality tab on asset pages in Collibra Platform no longer includes the passing fraction attribute values of suppressed rules from Data Quality & Observability Classic in the overview and dimension aggregation scoring.
- To ensure the correct column relation is reflected in the data quality rule, the "dgc_dq_mapping" table now includes an alias for column names.
Release 2025.07
Release Information
- Release date of Data Quality & Observability Classic 2025.07: August 4, 2025
- Release notes publication date: July 8, 2025
Announcement
As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.
In this release (2025.07), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
- Kubernetes installations
- Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- Standalone installations
- To install Data Quality & Observability Classic 2025.07, you must upgrade to Java 17 and Spark 3.5.3.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
- Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.07 with Java 17.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.
For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.
New and improved
Platform
Note We updated Tomcat to address several security vulnerabilities (CVEs). The maximum number of parts allowed in a multipart request is now set to 80 by default. This default limit might affect APIs that process datasets with a large number of columns. If necessary, you can change this limit using the property "DQ_SERVLET_MULTIPART_MAXPARTCOUNT" in the owl-env.sh file for Spark Standalone installations or the owl-web-configmap for Kubernetes environments.
- You can now use the dataset security setting "Require TEMPLATE_RULES role to add, edit, or delete a rule template" and assign users the "ROLE_TEMPLATE_RULES" role to restrict updates to template rules. (idea #CDQ-I-169)
- When "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is enabled, users without the "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES" roles can view the list of template rules but cannot add, edit, or delete them.
- If "Require TEMPLATE_RULES role to add, edit, or delete a rule template" is disabled, or if it is enabled and the user has "ROLE_ADMIN" or "ROLE_TEMPLATE_RULES," they can view, add, edit, and delete rule templates.
- When creating a job, the “Visualize” function in Explorer is now inaccessible if the dataset security setting "Require DATA_PREVIEW role to see source data" is enabled and the user doesn't have the "ROLE_DATA_PREVIEW" role.
- You can now use SAML SSO to sign in from the Tenant Manager page. Additionally, the AD Security Settings page from the Admin Console is now available in the Tenant Manager, allowing you to map user roles to external groups.
- The application now has improved security when parsing SQL queries in the Dataset Overview and the "SQL View" step in Explorer. As a result, new or edited datasets with complex queries may not pass validation during parsing if they include nested subqueries, non-standard syntax, or ambiguous joins that are not supported by the parser's grammar.
Jobs
- You can now scan for duplicate records in Oracle Pushdown jobs.
- When you transform a column from Explorer, it is now available in the “Target” drop-down list during the Mapping step.
- When a job fails, the metadata bar remains visible on the Findings page.
Rules
- The “Edit Rule” dialog box now includes Description and Purpose fields. These fields allow you to add details about Collibra Platform assets associated with your rule and identify which assets should relate to your Data Quality Rule asset upon integration. (idea #CDQ-I-219)
- The Rule Workbench now supports the substitution of both string and numeric data types for $min, $average, and $max stat rule values.
- The Results Preview on the Rule Workbench now shows columns in the same order as they are listed in the rule query. For example, the query select SYMBOL, VOLUME, LOW, HIGH, CLOSE, OPEN from PUBLIC.NYSE where SYMBOL IS NOT NULL LIMIT 3 displays the columns in the following order in the Results Preview:
- SYMBOL
- VOLUME
- LOW
- HIGH
- CLOSE
- OPEN
Alerts
- You can now create alerts for adaptive rules from Pullup jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)
- Condition alerts now include a "Condition Value" field to provide more context about the reason for the alert. For example, the Condition Value for a row count check might be rc = 0, while for an outlier score, it could be outlierscore > 25. (idea #CDQ-I-317)
- When you create a new alert with a name already in use by another alert on the same dataset, the existing alert is no longer automatically replaced. Instead, a confirmation dialog box now appears, allowing you to choose whether to overwrite the existing alert.
Profile
- The correlation heatmap in the Correlation tab of the Profile page now shows a maximum of 10 columns, regardless of the number of columns returned in the API response.
- The “Quality” column on the Profile tab is now renamed to “Completeness/Consistency.”
Collibra Platform Integration
- When you rename a job from Data Quality & Observability Classic that is integrated with Collibra Platform, the corresponding domains in Collibra Platform are now renamed to match the new job name.
- When you add, update, or remove meta tags from a job in Data Quality & Observability Classic with an active integration to Collibra Platform, the import JSON and the corresponding Data Quality Job in Collibra Platform are now updated. The updated JSON can include up to 4 meta tags.
APIs
- The GET /v3/datasetDefs API call now returns meta tags in the array response in the same order they are set in the Dataset Manager. For example, if the first meta tag input field in the Dataset Manager contains "DATA," the second and third fields are empty, and the fourth contains "QUALITY," the array response will return:
"metaTags": [
  "DATA",
  null,
  null,
  "QUALITY"
]
Admin Console
- You can now control the maximum allowed temporary file size using the new "maxfileuploadsize" admin limit. The default value is 15 MB.
- When you enable the "RULE_FILTER" setting in the Application Configuration Settings page, create a rule with a filter, and then disable the "RULE_FILTER" setting before rerunning the job, the results on the Findings page still reflect the rule using the filter.
- When you delete a role, any datasets mapped to it are now removed. Previously, the relations between the role, datasets, and users were still shown in the UI after deleting the role.
- Orphaned dataset-run_id pairs are now removed from the “dataset_scan” table when you use a time-based data retention purge.
Fixes
Platform
- You no longer encounter unexpected errors with SAML/SSO, rules, and Pushdown after upgrading to some Spark standalone self-hosted instances of Data Quality & Observability Classic.
- Group-based role assignments for Azure SSO now work correctly after upgrading Data Quality & Observability Classic. Users no longer receive only the default "ROLE_PUBLIC," ensuring proper access permissions.
- Users with the "ROLE_PUBLIC" and "ROLE_DATA_PREVIEW" roles can no longer create and save rules, ensuring role restrictions are enforced as intended.
Connections
- You can now run jobs on Teradata connections without requiring the DBC.RoleMembers.RoleName permission.
Jobs
- Alias names now appear for date fields during row selection for a new job.
- You no longer see the "Pushdown Count" option on the Mapping Configuration page. This was removed for BigQuery because it is not supported by Google.
- Datasets with missing run IDs now appear in the global search.
Rules
- Rule findings using a rule filter for Pushdown jobs now show the correct subset of rows applied for the rule, instead of the total rows at the dataset level.
Findings
- Boolean attributes now show their values correctly in the Result Preview on the Rule Workbench screen.
- The handling of “shape/enabled” and “deprecated profile/shape” JSON flags in dataset definitions and job runs is now consistent.
- When a column contains multiple shape findings, the data preview is now shown as expected when you expand a shape finding.
Profile
- Profiling activities now correctly support profile pushdown on BigQuery, ensuring accurate and efficient data profiling.
Reports
- Encrypted columns now appear correctly on the Dataset Findings report.
APIs
- The /v3/rules/{dataset}/{ruleName}/{runId}/breaks API now returns a “204 No Content” HTTP status code during pullup jobs when a rule has no break records linked to a specific LinkID.
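A quick way to confirm the restored behavior is to call the endpoint and inspect the HTTP status code, as in the sketch below; the host, dataset, rule name, run ID, and bearer-token header are placeholders.

# Expect a 204 status code when the rule has no break records for the given LinkID
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $TOKEN" \
  "https://<dq-host>/v3/rules/my_dataset/my_rule/2025-07-15/breaks"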
Patch release
2025.07.1
- You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple or strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks whether the query contains any keywords other than SELECT, while strict validation performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings.