Release notes
- Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
- Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.
Release 2026.04
Release Information
- Expected release date of Data Quality & Observability Classic 2026.04: April 27, 2026
- Release notes publication date: April 16, 2026
New and improved
Platform
As of Data Quality & Observability Classic 2026.02, a self-service option to download temporary repo keys from the "Docker Images" section of the Downloads page (login required) is available. Temporary repo keys are valid for 5 days from the download date. The process of requesting long-lived keys from Collibra Support will no longer be supported.
To ensure deployment stability, we recommend using the temporary keys to download images, uploading them to your internal Artifactory or registry, and generating internal credentials for production deployments. There is no impact to self-hosted Spark Standalone installations. For instructions on pulling images from the Collibra registry, go to Install on self-hosted Kubernetes.
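As a minimal sketch of that recommended flow (the registry host, repository paths, and credentials below are placeholders, not documented Collibra values):

# Log in to the Collibra registry using the temporary repo key.
docker login <collibra-registry-host> -u <key-id> -p <temporary-key>
# Pull the image, retag it for your internal registry, and push it there.
docker pull <collibra-registry-host>/dq/dq-web:2026.04
docker tag <collibra-registry-host>/dq/dq-web:2026.04 <internal-registry>/dq/dq-web:2026.04
docker push <internal-registry>/dq/dq-web:2026.04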
- You can now configure access to scorecards and reports using the "Require INSIGHTS_VIEWER role to view scorecards and reports" setting in Security Settings > General. When enabled, only users with the ROLE_INSIGHTS_VIEWER role can access scorecards and reports, including sub-pages, ensuring stricter access control. Users without the role are redirected to the Access Denied page when attempting to access restricted content.
- You can now manage the WASBS storage account key securely through Azure Key Vault (AKV). This eliminates the need to store the key directly in the Helm values file, enhancing security.
- We upgraded the Databricks driver version to 2.8.0.
- Databricks Pushdown is now ANSI-compliant.
- We upgraded the OWASP ESAPI (Enterprise Security API) library to enhance application security and address known vulnerabilities. While no functional changes are expected, edge cases may behave differently.
- Data Quality & Observability Classic now supports the SQL Assistant for Data Quality feature when you have a Data Quality & Observability Classic to Collibra Platform OAuth-based integration. Previously, this capability was available only with Basic Authentication. This enhancement ensures broader integration options for leveraging SQL Assistant for Data Quality.
- You can now propagate labels and annotations to the Kubernetes metastore secret created by the DQ Agent service at runtime. This is configurable via Helm chart values, allowing you to customize the secret used by Pullup jobs.
- You can now customize the Kubernetes pod template for Spark driver and executor pods launched by the DQ Agent. A combined sketch of this and the preceding Helm setting follows this list.
- You can now enable the PUSH_NA_TO_DIP application configuration setting to push adaptive rules with "Not Applicable" status to Collibra Platform. This setting is disabled by default and can be enabled in the Admin Console when the “Create Metric Assets” option is set to true in the integration configuration.
- You can now grant non-admin users access to the Admin > Integrations page based on their existing connection role mappings. When you enable the new "Enable Integration Management for Connection Users" setting in Security Settings, users with roles mapped to specific connections can view and configure integration settings for only those connections they have access to.
- You can now manage standardized tags using the new "Strict Tagging" feature. This feature ensures consistent metadata application by allowing users to select only approved tags from a dropdown. Admins can manage tags directly in the Admin Console under "Metatags," with validation to prevent duplicates and empty values. Additionally, new API endpoints are available for creating, updating, and deleting tags.
- As a one-time conversion process, the values in the t1, t2, t3, and t4 columns of the owl_catalog table will be used to populate the new defined_tags table, with each unique value stored as a tag and assigned a unique identifier.
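As a sketch of how the two Helm customizations above (metastore secret labels and annotations, and the Spark pod template) might look in a values override; the key names are illustrative only, so consult the chart's values reference for the actual paths:

# Hypothetical values.yaml override; key names are illustrative.
agent:
  metastoreSecret:
    labels:
      team: data-quality            # propagated to the runtime secret
    annotations:
      example.com/managed-by: dq-agent
  sparkPodTemplate:
    driver:
      nodeSelector:
        pool: spark-drivers         # schedule driver pods to a dedicated pool
    executor:
      tolerations:
        - key: spark-only
          operator: Exists
          effect: NoSchedule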
Jobs
- Jobs now support the -metadataEnhancedBreakRecords command-line option. This allows archived break records to include additional metadata columns, providing more detailed insights. An example invocation follows this list.
- You can now enable metadata-enhanced break records for Pullup jobs. This option allows you to include additional data in pullup break records, aiding in remediation and reporting.
- The replay functionality no longer executes unnecessary SQL queries when you select "REPLAY" from the Run dropdown menu. This improvement significantly reduces wait times when working with complex SQL queries, as the system now only updates the timeframe without re-running the underlying query logic.
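As an illustration of the metadata-enhanced break records option above, a minimal invocation might look like the following (the owlcheck options other than the new flag are a typical sketch, not prescribed by this release):

./owlcheck \
  -ds my_dataset \
  -rd "2026-04-27" \
  -q "select * from my_schema.my_table" \
  -metadataEnhancedBreakRecords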
Findings
- You can now control user access to export functionality on the Findings page "Export" tab through role-based permissions. When the "Require DATA_EXPORT role to export rule breaks" security setting is enabled, users without the DATA_EXPORT role cannot access export checkboxes or buttons, while users with the role retain full export capabilities.
Rules
- You can now control export permissions for the "Copy Results" and "Download Results" buttons in Rule Workbench's Result Preview section. When the "Require DATA_EXPORT role to export rule breaks" setting is enabled, users without the DATA_EXPORT role will see these buttons disabled with a tooltip explaining they don't have permission to copy or download data.
- You can now configure the Purpose field as mandatory for custom rules to meet integration requirements. When the "MANDATORY_RULE_PURPOSE" configuration flag is enabled, the Purpose field is required for all rule types in Rule Workbench. The "Save" and "Submit" buttons are disabled until you enter a purpose.
- You can now add metadata columns to archived rule break records in Pullup jobs.
Alerts
- Email alerts now use 6-digit hexadecimal color codes to ensure the "Job Status" field displays correctly across all email clients.
Dataset Manager
- The "Copy to Clipboard" and "Download Results" options now respect the global exportlimit configuration setting, ensuring data extraction controls are uniformly enforced while maintaining full visibility of preview data on screen.
Collibra Platform integration
- The Data Quality & Observability Classic and Collibra Platform integration now uses a dedicated thread pool to increase integration job throughput without affecting other scheduler tasks. New properties provide granular control over job concurrency and thread queuing. For most customers, no changes are needed. However, if integration job throughput exceeds 50 jobs per hour, you may need to adjust the following properties (a sketch follows the list):
- dq.dgc.integration.job.thread.pool.size: Maximum concurrent jobs based on available threads (default: 10).
- dq.dgc.integration.job.thread.pool.queue.capacity: The total number of jobs, running plus queued, that the pool accepts (default: 256).
- dq.dgc.max.concurrent.integration.job: Batch size for integration jobs (default: 10).
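For instance, a deployment that regularly exceeds 50 integration jobs per hour might raise the defaults as follows. Where these properties are set depends on your installation, so treat the values as an illustrative sketch:

dq.dgc.integration.job.thread.pool.size=20             # run up to 20 integration jobs concurrently
dq.dgc.integration.job.thread.pool.queue.capacity=512  # accept more running plus queued jobs before rejecting
dq.dgc.max.concurrent.integration.job=20               # align the batch size with the pool size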
- The metastore mapping table for integrating Data Quality & Observability Classic with Collibra Platform now includes a new column that shows the time stamp of when the table was mapped. This time stamp is added when you use the auto-map feature for a schema.
- You can now delete all mapping records at the schema level by selecting "Delete Mapping" from the three-dot menu. This action removes existing mappings for a specific schema, allowing you to clear outdated mappings before running the auto-map job. The delete mapping option works alongside the existing auto-map feature to help you maintain accurate schema-level mappings.
- You can now delete integrations directly from the "Actions" option in Dataset Manager. The dataset is disabled for integration but can be re-enabled, allowing the integration job to run successfully.
- When you delete a dataset from Data Quality & Observability Classic, the system now automatically removes the corresponding integration assets (job asset, score domain, and rules domain) from Collibra Platform instances. This eliminates the need to manually identify and delete orphaned assets, reducing cleanup time and potential errors. The 2026.04 version of Collibra Platform requires the corresponding 2026.04 version of Data Quality & Observability Classic, and vice versa, for this feature to work correctly.
- You can now control which assets are created during the integration setup process. When setting up an integration, you can choose to exclude "Metrics" assets from being published to the Data Catalog while still ingesting the underlying data. This option helps reduce catalog clutter and makes it easier to find relevant datasets. The "Metrics" asset creation is enabled by default to maintain existing behavior.
- You can now automatically sync tolerance values from Data Quality & Observability Classic rules to Collibra Platform as threshold parameters when rules are promoted or updated. The tolerance values become visible on the rule asset in the platform. Each sync action is logged with timestamps, rule IDs, and user information for complete traceability. If syncing fails, the system logs detailed error information to help with troubleshooting.
Fixes
Platform
- Data Quality & Observability Classic now starts successfully when the SAML tenant manager is enabled. Previously, the web application failed to initialize after upgrading to version 2026.03 if SAML_TENANT_MANAGER_ENABLED was set to true.
- We added support for additional types of SAML envelope configurations.
- The Helm chart value global.web.security.jwt.token.secretkey now defaults to a sentinel value that causes a random JWT secret key to be generated at deploy time. This random key is regenerated on every application restart, which means short-term tokens issued from previous restarts will be invalidated. To maintain a consistent key across restarts, set global.web.security.jwt.token.secretkey to a custom value of at least 32 characters. Custom values shorter than 32 characters will now cause chart rendering to fail with an error. A configuration sketch follows this list.
- Users with the ROLE_ADMIN_VIEWER role can now access the Schedule Restriction page without encountering the "Failed to get schedule restrictions" error. This resolves an issue where business administrators were unable to view schedule restrictions due to insufficient permissions. The role maintains appropriate read-only access while continuing to restrict edit and delete capabilities for these users.
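For example, to pin a stable key of at least 32 characters at upgrade time (the release and chart names below are placeholders):

helm upgrade <release-name> <collibra-dq-chart> \
  --reuse-values \
  --set global.web.security.jwt.token.secretkey="$(openssl rand -base64 48)"
# openssl rand -base64 48 yields a 64-character value, comfortably above the 32-character minimum.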
Jobs
- Databricks Pushdown jobs now run successfully on tables with special characters in column names.
Findings
- When exporting findings with Link IDs, the export limit now applies to each rule individually rather than to all rules collectively. Previously, if you set an export limit, the system would take the first N records from all rules combined, which could result in some rules being excluded entirely from the export if earlier rules filled the limit.
Rules
- Rule syntax validation no longer fails when datasets sourced from Edge Information Core (EIC) return zero preview rows. You can now create and modify rules without encountering "No data preview available" errors. The validation process now evaluates SQL logic independently of dataset row count, ensuring consistent rule development across all data sources.
- The "Preview Breaks" dialog box now displays columns in the same order as specified in your rule query, rather than alphabetical order. This improvement applies to Databricks, Redshift, and Snowflake connections for both Pushdown and Pullup jobs, including complex queries with CTEs, window functions, and column aliases.
- The Rule Builder data preview now loads only when you select the "Data Preview" tab.
- The purpose field in data quality rules now behaves consistently across all rule types. Previously, quick rules returned an empty string ("") for the purpose field when no value was provided, while manually created rules returned null. Both rule types now handle empty purpose fields uniformly.
Release 2026.03
Release Information
- Release date of Data Quality & Observability Classic 2026.03: April 3, 2026
- Release notes publication date: March 9, 2026
New and improved
Platform
As of Data Quality & Observability Classic 2026.02, a self-service option to download temporary repo keys from the "Docker Images" section of the Downloads page (login required) is available. Temporary repo keys are valid for 5 days from the download date. The process of requesting long-lived keys from Collibra Support will no longer be supported.
To ensure deployment stability, we recommend using the temporary keys to download images, uploading them to your internal Artifactory or registry, and generating internal credentials for production deployments. There is no impact to self-hosted Spark Standalone installations. For instructions on pulling images from the Collibra registry, go to Install on self-hosted Kubernetes.
Important You must configure the ALLOWED_LOCAL_PATHS variable in either "owl-env.sh" or "ConfigMap" to specify all local file base paths where files, such as drivers, are stored. Ensure you set this variable for both "dq-web" and "dq-agent" configurations if the configurations are not shared.
For example: if DQ Classic is installed at /home/collibradq/owl and JDBC drivers are located at /home/collibradq/owl/drivers, then owl-env.sh should include the following: export ALLOWED_LOCAL_PATHS=/home/collibradq/owl
If drivers are stored in a different directory structure, such as /data/storage/owl/drivers, you must also append that path to the configuration.
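For example, assuming the variable accepts a comma-separated list of base paths (verify the separator for your deployment):

export ALLOWED_LOCAL_PATHS=/home/collibradq/owl,/data/storage/owl/drivers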
- The Amazon Athena Simba JDBC driver is now updated to version 2.2.2.
- You can now pass an encrypted SAML keystore password to the SAML setup.
- The ENABLE_STRICT_TAGGING flag on the Configuration Settings > Application Config page now defaults to "false." This ensures that the behavior of metatags remains unchanged for existing customers.
- Connection URLs now include all connection parameters when rewritten with an authentication token. This ensures complete and accurate URL generation.
Jobs
- The struct column type for the SDO_GEOMETRY SQL data type is now correctly identified as "unsupported" when creating or editing datasets.
Findings
- Source export now consolidates multiple files for SOURCE_VALUE, SOURCE_COUNT, and SOURCE_SCHEMA into a single SOURCE Excel file, matching the legacy behavior.
Rules
- The “Custom Template Rules” rule query value from Rule Builder now persists on the Findings > Rules page and the Rule Definitions page.
Dataset Manager
- You can now choose metatags from a pre-selected drop-down list when ENABLE_STRICT_TAGGING is set to "true." When ENABLE_STRICT_TAGGING is set to "false," you can add your own metatags and also select from the pre-selected list. This replaces the previous t1, t2, t3, and t4 metatag fields with pre-selected and user-approved values.
Collibra Platform integration
- Dimensions are now correctly assigned to layers, even if the dimension name appears multiple times in the DQ_dimension table. Duplicate dimension names are now supported, ensuring proper layer assignment when the DQ_dimension table is correctly configured.
- When you delete a rule in Data Quality & Observability Classic, it is now marked as "Deleted" in Collibra Platform instead of "Suppressed." Additionally, deleted rules are no longer included in the export object or JSON.
Fixes
Platform
- Pendo no longer causes a "pendo.clearSession()" error when logging in with a block script set to false. Additionally, Pendo does not activate upon login when it is blocked internally.
Jobs
- Whitespace in dataset source queries is now handled correctly. Schema and table names now show properly in the metadata bar and on the Dataset Manager page.
- You no longer receive an error message on the Explorer page when local file management configuration is disabled.
Findings
- When you re-run a job after invalidating fuzzy match duplicates, saving, and retraining on the Findings page, the job no longer contains the same invalidated fuzzy match duplicates.
Rules
- Secondary rules now support functions in the DATE_FORMAT(DATE('${rd}'), 'yyyyMMdd') query.
APIs
- Time zone settings are now properly preserved during export and import processes when using APIs.
Collibra Platform integration
- Dataset integration now succeeds even when metatags contain spaces. Spaces in metatag values are automatically replaced with hyphens during saving to prevent integration errors.
- The toaster message for the auto map feature now states "Tables with the provided schema <yourSchema> are not found" when no tables or views exist, instead of just stating "The schema was not found."
Release 2026.02
Release Information
- Release date of Data Quality & Observability Classic 2026.02: February 26, 2026
- Release notes publication date: January 30, 2026
New and improved
Platform
- Kubernetes installations: The upgrade to 2026.02 is seamless and managed automatically within the application container.
- Spark Standalone installations: You must manually upgrade your underlying Spark environment to Spark 4.1.0 before upgrading to version 2026.02. This is a mandatory prerequisite. To learn how to upgrade Spark, go to Upgrade Spark.
As of Data Quality & Observability Classic 2026.02, a self-service option to download temporary repo keys from the "Docker Images" section of the Downloads page (login required) is available. Temporary repo keys are valid for 5 days from the download date. The process of requesting long-lived keys from Collibra Support will no longer be supported.
To ensure deployment stability, we recommend using the temporary keys to download images, uploading them to your internal Artifactory or registry, and generating internal credentials for production deployments. There is no impact to self-hosted Spark Standalone installations. For instructions on pulling images from the Collibra registry, go to Install on self-hosted Kubernetes.
- The JobRunner version has been upgraded to 8.3.1.
- The Simba BigQuery JDBC driver is now updated to version 1.6.5.1002, and the Spark BigQuery connector is now updated to version 0.43.1.
- The Admin page for Webhook Alerts now includes new columns for "Global" and "JSON Template." These additions allow you to view the full configuration of a webhook directly in the table, eliminating the need to open each webhook for editing.
- A new security setting and user role are now available in the Findings module to provide granular control over data export capabilities. Administrators can restrict specific users from downloading or copying records to the clipboard while still allowing them to preview data in the UI.
Jobs
- You can now archive duplicate break records when using SQL Server Pushdown.
- When you rename a dataset using Dataset Manager, you now receive a notification that the process might take some time. This ensures you're informed and prevents issues caused by refreshing or navigating away during the renaming process.
- You can no longer select non-text rows when adding a "Shapes" layer during the job creation process. This ensures compatibility, as non-text rows are not supported.
- The Explorer SQL Query landing page now supports Common Table Expression (CTE) queries. CTE-based queries compile successfully and are no longer automatically escaped. Queries starting with the WITH keyword no longer require parentheses or additional whitespace around the CTE definition.
- Known limitation: Pretty Print is not supported, and random characters entered in the SQL Query, such as "asdfasdfasf," return an error message.
- For S3 connections using an Instance Profile, missing or expired credentials now result in an explicit error when you access the connection from the Explorer tree. This helps you quickly identify and resolve credential issues.
Findings
- You can now add or edit a rule dimension from the "Edit Rule" dialog box on the Findings page. This provides more flexibility in managing rule dimensions.
- The Source Export report now passes the label instead of the passStatus value. This improves usability by providing clear visual indicators for rule statuses: a green checkmark for "passing," a horizontal rule for a neutral state, and a red "X" icon for "failing."
Rules
- The preview behavior now correctly supports alias names with spaces and preserves the expected column order. Additionally, the Run Result Preview in the Rule Workbench now properly handles queries with aliases for pullup datasets.
APIs
- The /v2/putweights endpoint now accepts URL-encoded form parameters with the request type application/x-www-form-urlencoded. This enhancement ensures all current form parameters are handled correctly while maintaining the same response and behavior as the existing multipart implementation. Additionally, the server is no longer limited by maxPartCount for these requests.
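A sketch of the new form-encoded call (the parameter names are placeholders, not the endpoint's documented fields):

curl -X POST "https://<dq-host>/v2/putweights" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "dataset=<dataset-name>" \
  --data-urlencode "weights=<json-payload>"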
Fixes
Platform
- You can now edit remote file jobs and view columns without needing ROLE_ADMIN or ROLE_CONNECTION_MANAGER privileges, as long as the environment's security settings don't explicitly require them. The URL parsing logic has been corrected to support this functionality.
Jobs
- When you delete a dataset with default dataset ownership enabled, other datasets that include the original dataset name are no longer removed from default owner access. This ensures that datasets remain accessible to their owners.
- For complex queries, the correct schema and table name are now pulled from the query and placed in the metabar accurately.
- You can now validate source mapping on columns that have spaces or special characters in their names.
- The SCHEDULE_ENABLED flag now properly disables the job scheduler. When the flag is set to FALSE, the scheduled jobs no longer run.
Alerts
- The Rule Details table in email alerts now correctly shows all rows for conditional alerts with multiple conditions connected by AND or OR. For example, multi-alert conditions such as score > 75 AND find_valid_rules > 20 are now fully supported.
Release 2026.01
Release Information
- Release date of Data Quality & Observability Classic 2026.01: February 3, 2026
- Release notes publication date: January 21, 2026
New and improved
Platform
- Kubernetes installations: The upgrade to 2026.02 is seamless and managed automatically within the application container.
- Spark Standalone installations: You must manually upgrade your underlying Spark environment to Spark 4 before upgrading to version 2026.02. This is a mandatory prerequisite. To learn how to upgrade Spark, go to Upgrade Spark.
- You can now configure email alerts using OAuth 2.0 via Microsoft's Graph API with a service-to-service client credentials flow. This modern configuration offers a secure alternative to classic SMTP authentication. To set up email alerts, register an application in Azure and enter its Client ID, Secret, and Tenant ID.
- The Kubernetes Java client has been upgraded from version 21.0.2 to 25.0.0. This upgrade ensures compatibility with Kubernetes versions up to 1.34.
- You can now require the Dimension setting for rules by enabling the "MANDATORY_RULE_DIMENSION" configuration flag. When enabled, the Dimension field becomes mandatory for all rule types, ensuring rules cannot be saved without selecting a Dimension.
- You can now copy "Environment Details" directly to your clipboard from the "Application Info" button. This simplifies sharing environment information for Collibra Support.
- When using the global search bar, you are now notified if your search yields no results.
- We have updated the "terms" link on our sign-in page.
Jobs
- You now see a more user-friendly and informative error message when manually triggering a DQ job on an S3 connection with expired or missing credentials. This helps you quickly identify and resolve the issue.
- When you configure shapes for a Pushdown job without a link ID specified and enable "Archive breaks," break records are now archived in the source. Additionally, a new results column shows the source record in JSON format.
Rules
- Access to the Rule Result Preview feature is now restricted. Users must have the ROLE_DATA_PREVIEW security role to view preview data.
- When you delete a rule, it is removed from all jobs run after the deletion date, and their scores are automatically recalculated. Jobs run before the deletion date remain unchanged, and the deleted rule's impact on the score is retained.
Alerts
- All dataset metatags are now available for use in webhook custom payloads. When a webhook is triggered, metatags are correctly passed in the JSON using placeholders such as "%%metatag_1%%." Additionally, the business unit is now exposed as "%%business_unit%%."
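For example, a custom webhook payload template might look like the following; the %%...%% placeholders use the documented substitution syntax, while the surrounding field names are illustrative:

{
  "text": "DQ alert for %%business_unit%%",
  "tags": ["%%metatag_1%%"]
}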
Collibra Platform integration
- You can now retrieve Snowflake table metadata through the API for the Explorer UI and auto-map. Auto-map automatically populates the dgc_dq_mapping table with table columns, simplifying integration.
Fixes
Platform
- You now see an informative landing page prompting you to set the license name if it wasn't configured during installation. After setting the license name and key, the page disappears. Previously, this process failed with a "403 Unauthorized" error, which has been resolved.
- You can now fully disable Pendo scripts alongside other application properties, providing greater control over application configurations.
Jobs
- Downloaded break records now handle quoted data correctly, ensuring data with quotes is no longer split across multiple columns. Column headers with commas remain intact in the exported CSV. Additionally, dates in the "Preview Breaks" download now match the format shown in the UI, ensuring consistency when opened in Excel.
- When you create an SAP HANA connection and run a job with a string field starting with "+" or "-", the job now runs and profiles the column as a string instead of showing an error.
- The time window now displays correctly for IST and other half-hour offset time zones. Jobs no longer run during restricted periods, ensuring accurate scheduling.
- You can now update the date range to a future date for an outlier without errors. This ensures smooth updates and allows breaks to be set until the specified end date.
- Job estimates now validate tables for Snowflake data sources without errors. This ensures smoother operation and accurate validation during job execution.
- The "Previous" and "Next" buttons are now hidden in the Explorer tree when the schema contains fewer than 25 tables. Previously, these buttons appeared as disabled.
Findings
- An issue that caused a user-passed adaptive rule violation to reset incorrectly when another adaptive rule was retrained or suppressed during a data job run has been fixed. Adaptive rule violations now remain unchanged during these operations.
Rules
- Rules now work with "REGEXP" in the same way as "RLIKE," including scenarios with parentheses. This ensures consistent behavior across both operators.
- When you hover over "App Frequent Runs" in the dataset selection screen of the Alert or Rule Builder, the double line no longer appears.
Connections
- Filtering by schema or database name now correctly shows the associated schema and tables when you expand a relevant connection. The "Search Catalog" API returns valid results for existing schemas. Clearing the filter and expanding the connection now behaves consistently with filtered behavior.
Collibra Platform integration
- The UI now correctly shows "0/0/0" when the values for fully mapped, partially mapped, and total tables are zero. Previously, incorrect values could appear because default values were not set to 0. If any of these values are null or undefined, the UI will continue to show "N/A."
APIs
- The "v3/alerts" API now prevents null or empty values for the alertTriggerType and alert_types parameters. This ensures more reliable API behavior and data validation.
Release 2025.11
Release Information
- Release date of Data Quality & Observability Classic 2025.11: November 24, 2025
- Release notes publication date: October 31, 2025
New and improved
Platform
- Kubernetes installations: The upgrade to 2026.02 is seamless and managed automatically within the application container.
- Spark Standalone installations: You must manually upgrade your underlying Spark environment to Spark 4 before upgrading to version 2026.02. This is a mandatory prerequisite. To learn how to upgrade Spark, go to Upgrade Spark.
- The CVE-2025-66516 vulnerability is detected in Apache Tika, which Data Quality & Observability Classic uses only for content-type detection and detection workflow processes. For more information on this issue, go to the official Collibra Support article.
- The CVE-2025-59419 vulnerability is detected only in the DQ Livy containers. However, Data Quality & Observability Classic does not use the vulnerable module netty-codec-smtp. This module is included due to a transitive dependency, but it is not in use, and there is no direct or indirect way to exploit it because DQ Livy does not expose or use any SMTP protocol or functionality.
- The release images may show CVE-2025-59250 as a vulnerability, even though it has been addressed in the packages. This detection is a false positive.
- To enhance security, authentication tokens now require a minimum secret key length of 256 bits (32 characters) for the security.jwt.token.secret-key property. If you use environment variables, update the SECURITY_JWT_TOKEN_SECRET-KEY configuration to a securely generated value of at least this length. The existing default value will be removed in January 2026. We strongly recommend providing a custom value if you haven't already (see the key-generation sketch after this list).
- You can now use the OAuth2 authentication type for Snowflake connections. This enhancement provides a more secure and flexible authentication option for connecting to Snowflake.
- The following drivers have been upgraded:
- The Snowflake JDBC driver is now version 3.26.1, to support OAuth-based authentication.
- The Databricks JDBC driver is now version 2.7.5.
- The Sybase JDBC driver is now version 16.
- Out-of-the-box FIPS-compliant JDBC drivers are now included in Data Quality & Observability Classic installation packages. You can test connections by setting "DQ_APP_FIPS_ENABLED" to true or false in the owl-env.sh file or DQ Web ConfigMap.
- In FIPS mode, "SAML_KEYSTORE_FILE" is no longer effective. Instead, DQ uses "SAML_KEYSTORE_ALIAS" at the "KEYSTORE_FILEPATH" with the "SAML_KEYSTORE_PASS."
- You can now manage admin user creation during tenant creation, supporting SSO-only user access and meeting security policy requirements.
- You can now use the new "rulelimit" admin configuration option to limit the number of rule break records stored in the metastore for Pushdown jobs. This helps prevent out-of-memory issues when a rule returns a large number of break records. The default value is 1000. This option does not affect Pullup or Pushdown jobs with archive break records enabled.
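As referenced in the secret key note above, one standard way to generate a compliant value (a random 256-bit key, base64-encoded):

openssl rand -base64 32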
Jobs
- When archiving break records for Pullup jobs, link IDs are now optional. This change provides greater flexibility in storing and accessing break records, offering a more scalable solution for remediating records outside of Data Quality & Observability Classic. The new format includes the full rule result.
- Additionally, archiving break records for Pullup jobs now supports multiple runs in a single day. The newest archive file remains "ruleBreaks.csv," and a "ruleBreaks[timestamp].csv" file is created for each run.
- The pagination controls in Explorer are now at the top of the table list, making navigation easier.
- When you access the Dataset Overview from Explorer or the Rule Workbench, column headers now stay locked in place, consistent with other areas of the application.
Rules
- SQL Assistant for Data Quality now directly connects to your Collibra Platform instance to make LLM-router calls, eliminating the need for a Kong endpoint configuration. This improves efficiency and expedites requests to the Collibra AI proxy.
- When you add or edit a rule template, the same naming rules as when creating a rule now apply. You can no longer use special characters in rule template names.
- The "Add Rule Template" modal now requires you to complete the "Rule Query" fields before submitting, ensuring all necessary information is provided when creating a rule template.
- The checkbox in the "Settings" modal now accurately shows the rule's status. It is checked when the rule is active and cleared when the rule is inactive.
- Users without sufficient permissions can no longer see break record data in the "Archived Breaks" modal, which is available from the Rules tab of the Findings page. This enhancement improves data security by ensuring that only authorized users can access sensitive information.
Alerts
- You can now configure webhooks to send Global Failure alerts, enabling faster identification and resolution of issues.
- You can now include JSON templates with replacement variables in the definition of a webhook. These variables are included in a custom JSON payload sent to the webhook server, enabling more flexible and tailored integrations for alerting.
Reports
- When you open the Dataset Findings report on the Reports page, you now see the total number of rows in the dataset.
APIs
- The API v3/jobs/findings now populates the same valid values for datashapes as the API v2/getdatashapes.
- The API v3/datasetdef now includes time zone information for job schedules.
Fixes
Platform
- You can now enter credentials for a remote connection when creating a database, restoring the previous functionality.
- When you add role mappings without first specifying a role in Admin Settings > Role Management, the role name and mapping now show correctly. The success message accurately reflects the completed action.
Jobs
- Jobs that run in EMR instances no longer get stuck in an "Unknown" status.
- When you use the file lookback parameter "-fllb" with the added date column "-adddc", the "OWL_RUN_ID" column is now correctly included in historical and current dataframes. This prevents job failures during outlier calculation.
- When working with SQL Server Pushdown jobs, the "ORDER BY" clause in a source query now requires the "TOP" command due to a SQL Server limitation. For queries that return an entire table, "ORDER BY" cannot be used because Pushdown relies on the source system for query processing. Additionally, API calls for rule breaks on SQL Server Pushdown jobs require a limit to ensure repeatable ordering for queries using offsets to page the rule preview. If no limit is applied, the API call will return a 404 error. An illustration of the TOP requirement follows this list.
- The search string in the profile now refreshes correctly when you change the column. Search results are displayed accurately without needing to manually collapse and re-expand to update the values.
- The integration status button in the Metadata Bar now correctly reflects the status when manual integration fails. This behavior is aligned with the accurate functionality of the Dataset Manager.
- Explorer now remembers its last state when you return from the job configuration.
- When creating a job, the "Include Views" option under the Mapping step now functions correctly.
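To illustrate the SQL Server Pushdown constraint noted above (table and column names are illustrative):

-- Valid: ORDER BY is paired with TOP.
SELECT TOP 1000 * FROM dbo.orders ORDER BY order_date DESC;
-- Not valid for Pushdown source queries: ORDER BY without TOP.
-- SELECT * FROM dbo.orders ORDER BY order_date DESC;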
Findings
- The data preview of Length_N shapes now displays correctly in the Shapes tab on the Findings page.
Rules
- You no longer encounter access errors when using the "Run Result Preview" function in Rule Workbench. The function now checks if you are the dataset owner, ensuring proper access permissions.
- The validation message "Validation is not supported for stat rules" is now shown again during syntax validation, both as a warning banner and in the dialog box, restoring the legacy behavior.
- When "snowflake_key_pair" is used as a secondary connection under a rule in a dataset, the command line now adds a Spark configuration in the Kubernetes environment.
- The "Pretty Print" function on the Rule Workbench now preserves spaces inside double or single quotes during formatting, preventing unintended changes to column names.
Alerts
- Alerts for non-authenticated webhooks now send correctly at runtime.
- Multi-condition alerts now trigger correctly and emails are sent successfully.
- The search feature now works correctly on the Alert Notifications page.
Collibra Platform integration
- Large aggregation requests no longer cause out-of-memory errors. Results are now fetched in pages and partially stored on the filesystem, ensuring memory usage stays within safe limits while still returning results in the required stats format.
- After running an integration, Collibra Platform now shows correct values for custom rule attributes when a rule in Data Quality & Observability Classic contains a filter. The Loaded Rows, Rows Failed, and Rows Passed attributes now reflect the rule row count, which is based on the subset of the dataset defined by the rule filter.
APIs
- The "/v2/getdataasset" and "/v2/getcatalogbydataset" endpoints now correctly return the name of the Data Category assigned to the dataset.