Release notes
- Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
- Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.
Release 2026.01
Release Information
- Expected release date of Data Quality & Observability Classic 2026.01: January 26, 2026
- Release notes publication date: January 21, 2026
New and improved
Platform
- Kubernetes installations: The upgrade to 2026.02 is seamless and managed automatically within the application container.
- Spark Standalone installations: You must manually upgrade your underlying Spark environment to Spark 4 before upgrading to version 2026.02. This is a mandatory prerequisite. To learn how to upgrade Spark, go to Upgrade Spark.
- You can now configure email alerts using OAuth 2.0 via Microsoft's Graph API with a service-to-service client credentials flow. This modern configuration offers a secure alternative to classic SMTP authentication. To set up email alerts, register an application in Azure and enter its Client ID, Secret, and Tenant ID. A sketch of the client credentials flow follows this list.
- You can now require the Dimension setting for rules by enabling the "MANDATORY_RULE_DIMENSION" configuration flag. When enabled, the Dimension field becomes mandatory for all rule types, ensuring rules cannot be saved without selecting a Dimension.
- When using the global search bar, you are now notified if your search yields no results.
- You can now copy "Environment Details" directly to your clipboard from the "Application Info" button. This simplifies sharing environment information for support.
- We have updated the "terms" link on our sign in page.
Jobs
- You now see a more user-friendly and informative error message when manually triggering a DQ job on an S3 connection with expired or missing credentials. This helps you quickly identify and resolve the issue.
- Downloaded break records now handle quoted data correctly, ensuring data with quotes is no longer split across multiple columns. Column headers with commas remain intact in the exported CSV. Additionally, dates in the "Preview Breaks" download now match the format shown in the UI, ensuring consistency when opened in Excel.
- When no Link ID is specified for the job and "Archive breaks" is selected, break records are now archived in the source. A new results column is added, containing the source record in JSON format.
Rules
- Access to the Rule Result Preview feature is now restricted. Users must have the ROLE_DATA_PREVIEW security role to view preview data.
Collibra Platform integration
- The UI now correctly shows "0/0/0" when the values for fully mapped, partially mapped, and total tables are zero. Previously, incorrect values could appear because default values were not set to 0. If any of these values are null or undefined, the UI will continue to show "N/A."
- You can now retrieve Snowflake table metadata through the API for the Explorer UI and auto-map. Auto-map automatically populates the dgc_dq_mapping table with table columns, simplifying integration.
Fixes
Platform
- You now see an informative landing page prompting you to set the license name if it wasn't configured during installation. After setting the license name and key, the page disappears. Previously, this process failed with a "403 Unauthorized" error, which has been resolved.
- You can now fully disable Pendo scripts alongside other application properties, providing greater control over application configurations.
Jobs
- When you create an SAP HANA connection and run a job with a string field starting with "+" or "-", the job now runs and profiles the column as a string instead of showing an error.
- The time window now displays correctly for IST and other half-hour offset time zones. Jobs no longer run during restricted periods, ensuring accurate scheduling.
- You can now update the date range to a future date for an outlier without errors. This ensures smooth updates and allows breaks to be set until the specified end date.
- Job estimates now validate tables for Snowflake data sources without errors. This ensures smoother operation and accurate validation during job execution.
- The "Previous" and "Next" buttons are now hidden in the Explorer tree when the schema contains fewer than 25 tables. Previously, these buttons appeared as disabled.
Findings
- An issue that caused a user-passed adaptive rule violation to reset incorrectly when another adaptive rule was retrained or suppressed during a data job run has been fixed. Adaptive rule violations now remain unchanged during these operations.
Rules
- Rules now work with "REGEXP" in the same way as "RLIKE," including scenarios with parentheses. This ensures consistent behavior across both operators.
- When you hover over "App Frequent Runs" in the dataset selection screen of the Alert or Rule Builder, the double line no longer appears.
Connections
- Filtering by schema or database name now correctly shows the associated schema and tables when you expand a relevant connection. The "Search Catalog" API returns valid results for existing schemas. Clearing the filter and expanding the connection now behaves consistently with filtered behavior.
APIs
- The "v3/alerts" API now prevents null or empty values for the alertTriggerType and alert_types parameters. This ensures more reliable API behavior and data validation.
Release 2025.11
Release Information
- Release date of Data Quality & Observability Classic 2025.11: November 24, 2025
- Release notes publication date: October 31, 2025
New and improved
Platform
- Kubernetes installations: The upgrade to 2026.02 is seamless and managed automatically within the application container.
- Spark Standalone installations: You must manually upgrade your underlying Spark environment to Spark 4 before upgrading to version 2026.02. This is a mandatory prerequisite. To learn how to upgrade Spark, go to Upgrade Spark.
- The CVE-2025-66516 vulnerability is detected in Apache Tika, which Data Quality & Observability Classic only uses for content-type detection and the detection workflow processes. For more information on this issue, go to the official Collibra Support article.
- The CVE-2025-59419 vulnerability is detected only in the DQ Livy containers. However, Data Quality & Observability Classic does not use the vulnerable module netty-codec-smtp. This module is included due to a transitive dependency, but it is not in use, and there is no direct or indirect way to exploit it because DQ Livy does not expose or use any SMTP protocol or functionality.
- The release images may show CVE-2025-59250 as a vulnerability, even though it has been addressed in the packages. This detection is a false positive.
- To enhance security, authentication tokens now require a minimum secret key length of 256 bits (32 characters) for the security.jwt.token.secret-key property. If you use environment variables, update the SECURITY_JWT_TOKEN_SECRET-KEY configuration to a securely generated value of at least this length. The existing default value will be removed in January 2026. We strongly recommend providing a custom value if you haven't already; a generation sketch follows this list.
- You can now use the OAuth2 authentication type for Snowflake connections. This enhancement provides a more secure and flexible authentication option for connecting to Snowflake.
- The following drivers have been upgraded:
- The Snowflake JDBC driver is now version 3.26.1, to support OAuth-based authentication.
- The Databricks JDBC driver is now version 2.7.5.
- The Sybase JDBC driver is now version 16.
- Out-of-the-box FIPS-compliant JDBC drivers are now included in Data Quality & Observability Classic installation packages. You can test connections by setting "DQ_APP_FIPS_ENABLED" to true or false in the owl-env.sh file or DQ Web ConfigMap.
- In FIPS mode, "SAML_KEYSTORE_FILE" is no longer effective. Instead, DQ uses "SAML_KEYSTORE_ALIAS" at the "KEYSTORE_FILEPATH" with the "SAML_KEYSTORE_PASS."
- You can now manage admin user creation during tenant creation, supporting SSO-only user access and meeting security policy requirements.
- You can now use the new "rulelimit" admin configuration option to limit the number of rule break records stored in the metastore for Pushdown jobs. This helps prevent out-of-memory issues when a rule returns a large number of break records. The default value is 1000. This option does not affect Pullup or Pushdown jobs with archive break records enabled.
Jobs
- When archiving break records for Pullup jobs, link IDs are now optional. This change provides greater flexibility in storing and accessing break records, offering a more scalable solution for remediating records outside of Data Quality & Observability Classic. The new format includes the full rule result.
- Additionally, archiving break records for Pullup jobs now supports multiple runs in a single day. The newest archive file remains "ruleBreaks.csv," and a "ruleBreaks[timestamp].csv" file is created for each run.
- The pagination controls in Explorer are now at the top of the table list, making navigation easier.
- When you access the Dataset Overview from Explorer or the Rule Workbench, column headers now stay locked in place, consistent with other areas of the application.
Rules
- SQL Assistant for Data Quality now directly connects to your Collibra Platform instance to make LLM-router calls, eliminating the need for a Kong endpoint configuration. This improves efficiency and expedites requests to the Collibra AI proxy.
- When you add or edit a rule template, the same naming rules as creating a rule are now applied. You can no longer use special characters in rule template names.
- The "Add Rule Template" modal now requires you to complete the "Rule Query" fields before submitting, ensuring all necessary information is provided when creating a rule template.
- The checkbox in the "Settings" modal now accurately shows the rule's status. It is checked when the rule is active and clear when the rule is inactive.
- Users without sufficient permissions can no longer see break record data in the "Archived Breaks" modal, which is available from the Rules tab of the Findings page. This enhancement improves data security by ensuring that only authorized users can access sensitive information.
Alerts
- You can now configure webhooks to send Global Failure alerts, enabling faster identification and resolution of issues.
- You can now include JSON templates with replacement variables in the definition of a webhook. These variables are included in a custom JSON payload sent to the webhook server, enabling more flexible and tailored integrations for alerting.
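As an illustration of such a template, the JSON below sketches a custom payload with replacement variables. The placeholder syntax and variable names are assumptions for illustration only, not the documented set; check the webhook alert documentation for the variables available in your version.
{
  "alert": "${alertName}",
  "dataset": "${dataset}",
  "runDate": "${runDate}",
  "score": "${score}",
  "message": "Data Quality alert ${alertName} fired for ${dataset} on ${runDate}"
}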
Reports
- When you open the Dataset Findings report on the Reports page, you now see the total number of rows in the dataset.
APIs
- The API v3/jobs/findings now populates the same valid values for datashapes as the API v2/getdatashapes.
- The API v3/datasetdef now includes time zone information for job schedules.
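A hedged request sketch for the v3/datasetdef endpoint above: only the endpoint path comes from the note, while the host, authentication header, and the dataset query parameter are assumptions for illustration.
# Illustrative only: fetch a dataset definition, including schedule time zone information.
# The host, token, and "dataset" parameter name are assumptions.
curl -H "Authorization: Bearer <API_TOKEN>" \
  "https://<dq-host>/v3/datasetdef?dataset=<DATASET_NAME>"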
Fixes
Platform
- You can now enter credentials for a remote connection when creating a database, restoring the previous functionality.
- When you add role mappings without first specifying a role in Admin Settings > Role Management, the role name and mapping now show correctly. The success message accurately reflects the completed action.
Jobs
- Jobs that run in EMR instances no longer get stuck in an "Unknown" status.
- When you use the file lookback parameter "-fllb" with the added date column "-adddc", the "OWL_RUN_ID" column is now correctly included in historical and current dataframes. This prevents job failures during outlier calculation.
- When working with SQL Server Pushdown jobs, the "ORDER BY" clause in a source query now requires the "TOP" command due to a SQL Server limitation. For queries that return an entire table, "ORDER BY" cannot be used because Pushdown relies on the source system for query processing. Additionally, API calls for rule breaks on SQL Server Pushdown jobs require a limit to ensure repeatable ordering for queries using offsets to page the rule preview. If no limit is applied, the API call will return a 404 error.
- The search string in the profile now refreshes correctly when you change the column. Search results are displayed accurately without needing to manually collapse and re-expand to update the values.
- The integration status button in the Metadata Bar now correctly reflects the status when manual integration fails, aligning its behavior with the Dataset Manager.
- Explorer now remembers its last state when you return from the job configuration.
- When creating a job, the "Include Views" option under the Mapping step now functions correctly.
Findings
- The data preview of Length_N shapes now displays correctly in the Shapes tab on the Findings page.
Rules
- You no longer encounter access errors when using the "Run Result Preview" function in Rule Workbench. The function now checks if you are the dataset owner, ensuring proper access permissions.
- When you add or edit a rule template, the same naming rules as when creating a rule are now applied. You can no longer use special characters in rule template names.
- The validation message "Validation is not supported for stat rules" is now shown again during syntax validation, both as a warning banner and in the dialog box, restoring the legacy behavior.
- When "snowflake_key_pair" is used as a secondary connection under a rule in a dataset, the command line now adds a Spark configuration in the Kubernetes environment.
- The "Pretty Print" function on the Rule Workbench now preserves spaces inside double or single quotes during formatting, preventing unintended changes to column names.
Alerts
- Alerts for non-authenticated webhooks now send correctly at runtime.
- Multi-condition alerts now trigger correctly and emails are sent successfully.
- The search feature now works correctly on the Alert Notifications page.
Collibra Platform integration
- Large aggregation requests no longer cause out-of-memory errors. Results are now fetched in pages and partially stored on the filesystem, ensuring memory usage stays within safe limits while still returning results in the required stats format.
- After running an integration, Collibra Platform now shows correct values for custom rule attributes when a rule in Data Quality & Observability Classic contains a filter. The Loaded Rows, Rows Failed, and Rows Passed attributes now reflect the rule row count, which is based on the subset of the dataset defined by the rule filter.
APIs
- The "/v2/getdataasset" and "/v2/getcatalogbydataset" endpoints now correctly return the name of the Data Category assigned to the dataset.
Release 2025.10
Release Information
- Release date of Data Quality & Observability Classic 2025.10: October 27, 2025
- Release notes publication date: September 22, 2025
New and improved
Platform
Note We updated Tomcat to address several security vulnerabilities (CVEs). The maximum number of parts for multipart requests is now set to 80 by default. This default limit might affect APIs that process datasets with a large number of columns. If necessary, you can change this limit using the property "DQ_SERVLET_MULTIPART_MAXPARTCOUNT" in the owl-env.sh file for Spark Standalone installations or the owl-web-configmap for Kubernetes environments.
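If you need to raise the limit described in the note above, a minimal owl-env.sh sketch follows; the export syntax and the example value are assumptions, and Kubernetes deployments set the same property in the owl-web-configmap instead.
# Assumption: raise the multipart part count limit for very wide datasets in owl-env.sh.
export DQ_SERVLET_MULTIPART_MAXPARTCOUNT=500
# For Kubernetes, set the same key in the owl-web-configmap and restart the web pod.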
- Scheduled jobs no longer run inconsistently or fail due to a JobRunr licensing issue.
- Tenants on the "Login" page are now listed in alphabetical order for local users. This makes it easier to locate and select the desired tenant. This feature will be added for the SSO tenant list in a future release.
- You can now use Amazon Elastic Kubernetes Service (EKS) version 1.33 with Data Quality & Observability Classic version 2025.10.
Jobs
- A summary query is now generated for Pushdown jobs when archive break records are turned off and no link IDs are set. This helps limit the break records requested for the metastore data preview and prevents out-of-memory issues.
- To ensure consistent validation behavior for job names between Dataset Manager and Explorer, special characters are no longer allowed when renaming existing jobs in Dataset Manager.
- Whitespace characters and parentheses are now handled correctly when creating a BigQuery job. You no longer need to use pretty print to resolve syntax errors.
Findings
- The Rules tab on the Findings page now includes a search box to find matching rule names. This allows you to quickly locate specific rules, improving navigation and efficiency.
Fixes
Platform
- The dq-web-configmap.yaml file no longer contains the incorrect parameter for the “LOCAL_REGISTRATION_ENABLED” setting.
Jobs
- When an Amazon S3 connection is inactive, the error message "Unauthorized Access: Credentials are missing or have expired" now shows. This message also appears when mapping from JDBC to remote files, helping you identify credential issues more easily.
- When you re-run a job after invalidating fuzzy match duplicates, saving, and retraining on the Findings page, the job no longer contains the same invalidated fuzzy match duplicates.
- The metadata bar is now visible when you create a job from a Databricks connection. This ensures a consistent user experience and provides access to relevant options during job creation.
- Pushdown jobs no longer reset dataset ownership each time they run, even if the user running the job is not the original creator.
Findings
- The "Failed to update label" error no longer shows when you edit an annotation label.
- When you configure a job with a Patterns monitor, the results on the Findings page are now shown correctly.
Rules
- When you use "+Quick Rule" from a Rule Template, the dimension is now added automatically if a dimension was added as part of the template’s settings. This streamlines the rule creation process and ensures the dimension is included without additional steps.
Alerts
- You now must select a radio button for "Job Alert Status *" when creating a Job Status alert. This ensures that the required selection is enforced properly, preventing incomplete alert configurations.
Collibra Platform integration
- You no longer receive an error when re-enabling an integration that takes more than 60 seconds.
- You can no longer enable, disable, or reset a Collibra Platform integration on the Dataset Manager or Findings pages unless you have the ROLE_DATASET_ACTION, ROLE_ADMIN, or ROLE_DATASET_MANAGER role. This ensures that only users with the appropriate permissions can manage Data Quality & Observability Classic integrations with Collibra Platform.
- On the "Mapping Status" tab of the Integration Setup page, a new partial column table count was added to the Tables Mapped column.
Release 2025.09
Release Information
- Release date of Data Quality & Observability Classic 2025.09: September 29, 2025
- Release notes publication date: September 2, 2025
New and improved
Platform
Warning To continue using SQL Assistant for Data Quality, you must upgrade to Data Quality & Observability Classic version 2025.09, which utilizes the gemini-2.5-pro AI model. Google is retiring the gemini-1.5-pro-002 model used in earlier versions of Data Quality & Observability Classic; once it is retired, SQL Assistant for Data Quality will stop working unless you upgrade.
- Add the JOBRUNR_PRO_LICENSE environment variable to all environments
- Contact Support for the license key.
- Securely add the JobRunr license as an environment variable to the DQ-web ConfigMap for Kubernetes deployments or to the owl-env.sh for Spark Standalone:
JOBRUNR_PRO_LICENSE="[license]"
- Restart the server to apply the license after updating.
- Upgrade to the upcoming 2025.10 version
- No further action is required after upgrading.
Note We updated Tomcat to address several security vulnerabilities (CVEs). The maximum number of parts for multipart requests is now set to 80 by default. This default limit might affect APIs that process datasets with a large number of columns. If necessary, you can change this limit using the property "DQ_SERVLET_MULTIPART_MAXPARTCOUNT" in the owl-env.sh file for Spark Standalone installations or the owl-web-configmap for Kubernetes environments.
- You can now configure Secret Key-based encryption in Data Quality & Observability Classic environments. This allows you to use an existing JKS file or override it with your own file for greater control over key size, algorithms, and encryption methods.
- When you delete a tenant from the metastore, the schema and all its tables are also deleted.
Jobs
- BigQuery Pushdown jobs now support column names with spaces.
Findings
- We removed the "State" column from the Rules tab on the Findings page to improve readability and streamline the layout.
- The Lower Bound and Upper Bound input fields for adaptive rules on the Change Detection modal now replace commas with periods, ensuring that values such as 10,23 are interpreted and displayed as 10.23. This prevents locale-specific misinterpretations.
Rules
- The column order in the Preview Breaks dialog box now matches the column order on the Rule Query page.
- The values for rules, data preview, and rule preview now show full numeric values instead of exponential values.
Dataset Overview
- You can now analyze tables in schemas with mixed or lowercase Snowflake schema names without editing the dataset query. This applies to both Pushdown and Pullup, making the analysis process more seamless.
Dataset Manager
- You can now use the updated APIs to manage the new "Schema Table Mapping" field available in the Dataset Edit dialog box in the Dataset Manager. Use the "/v2/updatecatalogobj" API to update this field and the "/v2/getdatasetschematablemapping" API to retrieve the Schema Table Mapping JSON string. Existing authorization rules now apply to schema mapping API updates, ensuring consistent security. Additionally, the scope query parser now supports multiple schemas and tables, and cardinality has been enhanced to allow multiple tables in a single job. This advanced SQL parsing is executed for each Data Quality & Observability Classic job when the new field, schemaTableNames, is empty or null. A request sketch for these APIs follows this list.
- When you edit a dataset in Dataset Manager, the "Schema Table Mapping" field is now automatically updated during the next job execution if it is empty or blank. The new parsing algorithm uses the scope query schema and table discovery to populate this field, ensuring more accurate and complete dataset information.
- You can now see a new read-only "Dataset Query" field in the Dataset Edit dialog box. This field shows the variables used in the query, making it easier to review the dataset's configuration.
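The request sketches below illustrate the two Schema Table Mapping APIs named in the first item of this list. Only the endpoint paths come from the notes; the host, authentication, parameter names, and payload shape are assumptions for illustration only.
# Illustrative only: retrieve the Schema Table Mapping JSON string for a dataset.
# The "dataset" parameter name and auth header are assumptions.
curl -H "Authorization: Bearer <API_TOKEN>" \
  "https://<dq-host>/v2/getdatasetschematablemapping?dataset=<DATASET_NAME>"
# Illustrative only: update the field through the catalog object update endpoint.
# The payload fields shown are assumptions; consult the API documentation for the exact schema.
curl -X POST -H "Authorization: Bearer <API_TOKEN>" -H "Content-Type: application/json" \
  -d '{"dataset": "<DATASET_NAME>", "schemaTableNames": "<SCHEMA_TABLE_MAPPING_JSON>"}' \
  "https://<dq-host>/v2/updatecatalogobj"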
Collibra Platform integration
- You can now associate a single data asset with multiple tables. The cardinality of the "Data Asset Represent Table" relation type has been updated from one-to-one to one-to-many, allowing for greater flexibility in managing data asset relationships.
Fixes
Platform
- On the Quality tab of Collibra Platform, the ring chart color now matches the corresponding ring chart in the At a Glance sidebar.
Jobs
- Pushdown jobs no longer fail with a "ConcurrentDeleteReadException" message caused by concurrent delete and read operations in a Delta Lake environment.
- Potential errors in the estimation step caused by complex source queries are now resolved.
Rules
- You can now preview the results of your rule on jobs from SQL Server connections using Livy without the application becoming unresponsive.
- When a SQL rule contains multiple column names and the MAP_TO_PRIMARY_COLUMN option is enabled during integration setup, only the primary column name is now assigned to the data quality rule asset.
- The "Actions" drop-down list on the Templates page no longer forces a selection.
- En dash (–) comments in the SQL source query no longer cause the rule to throw an exception.
Dataset Manager
- You can now find renamed datasets in the Dataset Manager.
Collibra Platform integration
- The lowest possible passing fraction is now 0 when a dataset from Data Quality & Observability Classic is integrated with Collibra Platform, even if the total number of outlier findings would otherwise result in a negative passing fraction value.
- You no longer receive an unexpected error stating “unable to fetch DGC schemas” when mapping connections from Data Quality & Observability Classic to databases in Collibra Platform.
- The dgc_dq_mapping table now includes an alias for column names, ensuring the correct column relation is reflected in the data quality rule.
Release 2025.08
Release Information
- Release date of Data Quality & Observability Classic 2025.08: September 2, 2025
- Release notes publication date: August 6, 2025
Announcement
As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective this release (2025.08).
In this release (2025.08), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.6. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
- Kubernetes installations
- Kubernetes containers automatically contain Java 17 and Spark 3.5.6.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- Standalone installations
- To install Data Quality & Observability Classic 2025.08, you must upgrade to Java 17 and Spark 3.5.3.
- If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.6.
- Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.08 with Java 17.
- If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in Configuring SAML authentication.
- We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
In this release, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.
For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.
New and improved
Platform
Note We updated Tomcat to address several security vulnerabilities (CVEs). The maximum number of parts for multipart requests is now set to 80 by default. This default limit might affect APIs that process datasets with a large number of columns. If necessary, you can change this limit using the property "DQ_SERVLET_MULTIPART_MAXPARTCOUNT" in the owl-env.sh file for Spark Standalone installations or the owl-web-configmap for Kubernetes environments.
- You can now control the strictness of SQL query validation in the Dataset Overview and in the SQL View step in Explorer. Add the "VALIDATE_QUERIES" and "STRICT_SQL_QUERY_PARSING" settings in the Application Configuration Settings page of the Admin Console to configure this behavior. When "VALIDATE_QUERIES" is enabled, you can choose between simple or strict validation via the "STRICT_SQL_QUERY_PARSING" setting. Simple validation checks if the query contains any keywords other than SELECT, while strict validation performs a full syntax check to ensure the query is a valid SELECT statement with no non-SELECT keywords. If both settings are disabled, SQL query validation is turned off. For more information about how this works, go to Application Configuration Settings.
- When you upload a file to Data Quality & Observability Classic, the application now validates the content type to ensure it matches the file extension. Additionally, when uploading drivers on the Connections page, only JAR files with a content type of "application/java-archive" or "application/zip" are supported.
- Data Quality & Observability Classic now runs on Spark 3.5.6. If you use a Spark Standalone or EMR deployment of Data Quality & Observability Classic and experience issues upgrading to 2025.08, we recommend upgrading your Spark version to 3.5.6. No action is required if you don't encounter any issues.
Jobs
- Trino Pushdown connections now support profiling and custom rules on columns of the ROW data type.
- You can now set the "Preview Limit" option to a value of 0 or higher when creating or editing a rule.
- Null values are no longer counted as unique values in Pullup mode. However, you can include null values in the unique value count on the Profile page by setting the command line option "profileIncludeNulls" to "true."
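A hedged command-line sketch for the option above: the option name and value come from the note, while the leading dash and placement follow the pattern of other command-line options in these notes (for example, -fllb) and should be confirmed against the command line documentation for your version.
# Assumption about exact placement: append the option to the job's existing command line.
<existing job command line> -profileIncludeNulls true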
Findings
- When you export source-to-target findings, the export file now includes the column schemas and cell values.
- On the Findings page, float values with commas now retain up to 4 decimal places, and trailing zeroes are removed. For example, 3722.25123455677800000000000 is now shown as 3722.2512. When you hover your pointer over the value, the full value is shown in a tooltip.
- You can now download a CSV of break records directly from the Preview Breaks dialog box using the Download Results button. (idea #PE-I-3723, CDQ-I-386)
Alerts
- You can now create alerts for adaptive rules from Pushdown jobs in the “Breaking” state. This eliminates the need to manually search for breaking records in adaptive rules, reducing the risk of missing them. (idea #DCC-I-609, DCC-I-2713, CDQ-I-199, CDQ-I-105)
Connections
- You can now upload keys for Snowflake authorizations into Kubernetes environments to reference them within connection strings. (idea #CDQ-I-332)
Fixes
Platform
- Updating an asset's name no longer breaks the Quality tab, as related assets are connected using the asset's full name.
Jobs
- Columns from Trino connections are now accurately identified using their JDBC drivers and metadata instead of their catalog name, ensuring they load correctly.
- The Partition Column field now shows the correct column in the Job Creator metadata box of the scanning method step.
- An “invalid table name” error no longer occurs during job size estimation for data sources that previously caused errors, including Denodo, SQL Server, and Oracle.
- You can now view the "Discard Changes" and "Compile" buttons on the "Write and Compile a SQL Query" page, regardless of the Zoom setting or browser window size.
Rules
- When you use pretty print for a rule, it no longer introduces additional characters or alters values.
- When you click "Add Rule Template" on the "Templates" page, the selected value in the "Dimension" drop-down no longer disappears when you click elsewhere on the screen.
Findings
- You no longer receive an "Access Denied" message when editing the Findings page for local files.
Alerts
- Non-admin users can now create alerts with webhooks.
Profile
- You can now compare the Baseline and Run values for the "Filled rate" when the Quality switch is enabled for a column on the Profile report.
Reports
- You can now filter by date in the Alert Details report. This ensures that the charts and counts accurately reflect the applied filter criteria.
Collibra Platform integration
- The Quality tab on asset pages in Collibra Platform no longer includes the passing fraction attribute values of suppressed rules from Data Quality & Observability Classic in the overview and dimension aggregation scoring.
- To ensure the correct column relation is reflected in the data quality rule, the "dgc_dq_mapping" table now includes an alias for column names.