Release Notes

Important 
  • Failure to upgrade to the most recent release of the Collibra Service and/or Software may adversely impact the security, reliability, availability, integrity, performance or support (including Collibra’s ability to meet its service levels) of the Service and/or Software. For more information, read our Collibra supported versions policy.
  • Some items included in this release may require an additional cost. Please contact your Collibra representative or Collibra Account Team with any questions.

Release 2025.06

Release Information

  • Expected release date of Data Quality & Observability Classic 2025.06: June 30, 2025
  • Release notes publication date: June 3, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.06), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.06, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.06 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.
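
For reference, here is a minimal sketch of the file-based SAML settings described above, assuming the standard export syntax of owl-env.sh; in the owl-web ConfigMap, set the same variable as a key-value pair:

  # owl-env.sh (Standalone installations); sketch only, keep your other settings as-is
  export SAML_METADATA_USE_URL=false

  # Then, on the SAML Security Settings page, set the Meta-Data URL option to:
  #   file:/opt/owl/config/idp-metadata.xml
  # The value must begin with the file: prefix.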

In the 2025.06 and 2025.07 releases, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Data Quality & Observability Classic version | Java 8 | Java 11 | Java 17 | Spark versions | Additional notes
2025.01 and earlier | Yes | Yes | No | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only) |
2025.02 | No | No | Yes | 3.5.3 only |
2025.03 | No | No | Yes | 3.5.3 only |
2025.04 | Yes | Yes | Yes | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only); 3.5.3 (Java 17 only) | Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
2025.05 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.06 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.07 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.08 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

  • This application, which was previously known as Collibra Data Quality & Observability, is now called Data Quality & Observability Classic. This change reflects the introduction of the new Data Quality & Observability as part of the unified Collibra Platform. For more information about Data Quality & Observability, go to About Data Quality & Observability.

Jobs

  • When archiving break records for Pushdown jobs, link IDs are now optional. This change provides greater flexibility in storing and accessing break records and offers a more scalable solution for remediating records outside of Data Quality & Observability Classic.
    • Data Quality & Observability Classic attempts to add the new results column for JSON data to the COLLIBRA_DQ_RULES table automatically. If it does not have permission to write to tables in your source database, that alteration fails, and rules in Pushdown jobs, with or without link IDs selected, fail because they attempt to write to a column that doesn't exist in your source database. In that case, you must run the ALTER statement on the COLLIBRA_DQ_RULES table yourself, as sketched below.
    • Important If you don’t use a link ID, the results column in the COLLIBRA_DQ_RULES table will become exponentially larger, leading to unintended storage and compute costs in your source database. To minimize these costs, you can use a link ID or move these results out of your source database and into a remote file storage system, such as Amazon S3.
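
If you need to run the alteration manually, the following hedged SQL sketch shows the general shape; the column name and JSON type are assumptions for illustration, not the documented DDL, so confirm the exact definition for your database before running it:

  -- Sketch only: adds the new results column for JSON data to the rules table.
  -- The "results" name and JSON type are assumed; databases without a native
  -- JSON type may need a large text type instead.
  ALTER TABLE COLLIBRA_DQ_RULES ADD COLUMN results JSON;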

  • Data Quality & Observability Classic now checks that a link ID is set in the settings dialog on Explorer when creating a DQ job, to ensure the archive break records feature works when enabled. If one or more link ID columns are selected, a checkbox and drop-down list appear next to the Archive Breaking Records option in the settings dialog. If no link ID columns are selected, the checkbox and drop-down list are unavailable, and a tooltip appears when you hover over the option.
  • Oracle Pushdown is now generally available.
  • Oracle Pushdown connections do not support the following data types:
    • LONG RAW
    • NCHAR
    • NVARCHAR2
    • ROWID
    • BYTES
  • Trino Pushdown jobs now support source-to-target analysis for datasets within the same Trino Pushdown connection.
  • SAP HANA, Snowflake, SQL Server, and Trino Pushdown connections do not support the VARBINARY data type.
  • The valsrclimitui admin limit, which controls the number of Validate Source records shown in the UI, now works as expected. For example, if valsrclimitui is set to 3, the UI shows 3 records from the Column Schema and 3 records from the Cell Panels.
  • Note The fpglimitui, histlimitui, and categoricallimitui admin limits do not currently work as expected. Additionally, any admin limit ending in "ui", such as valsrclimitui, is intended only for updates made via the Admin Limits page of the Admin Console; changes made to these admin limits via the command line are not reflected.

Rules

  • Rules that use $colNm in the query, such as data class, template, and data type checks, no longer perform automatic rule validation.
  • The Rule Workbench now has a warning message when the preview limit exceeds the recommended maximum of 500 records.
  • The maximum length of the name of a rule is now 250 characters.
  • The Preview Breaks modal now indicates the name of the rule.
  • The Low, Medium, and High buttons now correctly reflect your selection when you reopen the Rule Details dialog box after automatically changing a rule's points. Additionally, if you manually update the score, the button automatically adjusts to match the corresponding scoring range:
    • Low: 1-4 points
    • Medium: 5-24 points
    • High: 25-100 points

Admin Console

  • Business unit names on the Business Unit page in the Admin Console cannot contain the special characters / (forward slash), ` (backtick), | (pipe), or $ (dollar sign). If you use any of these special characters, an error message appears, and the Edit Business Unit dialog box remains open until the special characters are removed.
  • Data retention policies now apply to the following metastore tables:
    • alert_output
    • behavior
    • dataset_activity
    • dataset_schema_source
    • hint
    • validate_source
  • The Title and Story fields of the User Story section of the User Profile page cannot contain the following special characters:
    • ‘ (single quote)
    • “ (double quote)
    • -- (double dash)
    • < (less than)
    • > (greater than)
    • & (ampersand)
    • / (forward slash)
    • ` (backtick)
    • | (pipe)
    • $ (dollar sign)
    • If you use any of these special characters, an error message appears, and the Edit User Story dialog box remains open until the special characters are removed.

Fixes

Rules

  • You can now apply rule filter queries to rules created using rule templates on Pushdown datasets.
  • Duplicated data quality rules no longer appear after you import data assets using CMA.
  • Rules now run without errors when you use a single link ID to archive break records with SQL Server Pushdown.

Profile

  • Divide-by-zero errors and issues with empty histograms are now resolved. Calculations and visualizations now work as expected without interruptions.

Findings

  • You can now export rule break records with accurate counts. The total number of rule break records for a rule now matches the number in the Breaking Records column of the exported file.

Integration

  • When you rename a rule in Data Quality & Observability Classic that is part of a dataset integrated with Collibra Platform, additional Data Quality Rule assets are no longer created during the next integration. The existing Data Quality Rule asset is also no longer unintentionally set to a “suppressed” state during the next integration, and its name is updated to match the name in Data Quality & Observability Classic.

Release 2025.05

Release Information

  • Release date of Data Quality & Observability Classic 2025.05.1: June 16, 2025
  • Release date of Data Quality & Observability Classic 2025.05: June 2, 2025
  • Release notes publication date: May 6, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.05), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.05, you must upgrade to Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.05 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Data Quality & Observability Classic version | Java 8 | Java 11 | Java 17 | Spark versions | Additional notes
2025.01 and earlier | Yes | Yes | No | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only) |
2025.02 | No | No | Yes | 3.5.3 only |
2025.03 | No | No | Yes | 3.5.3 only |
2025.04 | Yes | Yes | Yes | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only); 3.5.3 (Java 17 only) | Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
2025.05 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.06 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.07 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.08 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Warning We identified a known security vulnerability (CVE-2025-48734) in version 2025.05. We addressed this issue in the 2025.05.1 patch.

Platform

  • You can now authenticate users via Microsoft Azure Active Directory B2C when signing into Data Quality & Observability Classic.
  • Only users with the ROLE_ADMIN role can now see the username and connection string of JDBC connections on the Connections page and in API responses, improving connection security.
  • Metastore credentials are no longer stored in plain text when you create a DQ Job via a notebook. This improvement increases the security of your credentials.
  • We enhanced the security of Data Category validation.
  • We improved the security of our application.

Jobs

  • Trino Pushdown jobs now support validate source.
  • The Run Job buttons now have two enhancements:
    • The Run Job with Date button on the Jobs tab of the Findings page is now labeled Select Run Date.
    • The Run Job button on the metadata bar now includes a helpful tooltip on hover.
  • SAP HANA connections now support the =>, ||, and && operators, the LIKE_REGEXPR reserved word, and the REPLACE function in scoped queries.
  • The assignments queue is now disabled for the validate source activity on new installations of Data Quality & Observability Classic. To enable it, an admin can set valsrcdisableaq to "false" on the Admin Limits page.
  • Warning When you set valsrcdisableaq to "true," the ability to assign findings is disabled for all finding types, such as rules, outliers, and so on. This issue is resolved in version 2025.05.1.

Rules

  • When Archive Break Records is disabled, you can now use NULL link IDs with Athena, BigQuery, Hive, Redshift, and SQL Server connections.
  • When Archive Break Records is enabled and the job specifies a link ID column that a rule doesn't contain, Data Quality & Observability Classic no longer appends the link ID to the rule, and the break records for that rule can no longer be archived. The job still executes successfully, but the rule produces no archived break records, and an exception message explains that the link ID is not part of the rule.
  • Trino Pushdown jobs now support profiling and custom rules on columns of the ARRAY data type.
  • The Rule Details dialog on the Rule Workbench page now has an optional Purpose field. This allows you to add details about Collibra Platform assets associated with your rule and see which assets should relate to your Data Quality Rule asset upon integration.

Alerts

  • You can now set up Data Quality & Observability Classic to send data quality alerts to one or more webhooks, eliminating the dependency on SMTP for email alerts.
  • You can now rename alerts on the Dataset Alerts page. When you rename an alert, the updated name is reflected in the alert_nm column of the alert_cond Metastore table. The updated name applies only to alerts generated in future job runs and does not affect historical data.
  • In addition to the existing rule name condition variable for percentages, you can now include a rule name with a value identifier, such as test_rule.score > 0, to create condition alerts based on rule scores.

Profile

  • When you retrain a breaking adaptive rule, negative values are now supported for lower and upper bounds, min, max, and mean.

Findings

  • To enhance usability, we moved the data preview from the Rules results table into a dedicated modal, making it easier to navigate and view data.
    • The Actions button is now always visible on the right side of the Rules tab, and includes the following options:
      • Archived Breaks, previously called Rule Breaks, retains the same content and functionality.
      • Preview Breaks, previously embedded under the Plus icon in the first column, shows all breaking records for a given rule. This option provides the same information that was previously available in the Rules tab and still requires the ROLE_DATASET_ACCESS role. Additionally, the Rule Break Preview modal reflects the preview limit of the rule.
    • Tooltips were added to various components on the Rules tab and Rule Break Preview modal.

Integration

  • You can now use the Map to primary column only switch in the Connections step of the integration setup wizard. This allows you to map a rule to only the Column asset in Collibra Platform that corresponds to the primary column in Data Quality & Observability Classic.
  • When you rename a custom rule in Data Quality & Observability Classic with an active integration with Collibra Platform, a new asset is no longer created in Collibra Platform. Instead, renamed rules reuse the Collibra Platform asset IDs associated with the previous rule name when their metadata is integrated.

APIs

  • The /v3/datasetDefs/template API endpoint now returns the nested key-value pairs for shape settings and job schedule in the datasetDefs JSON object.

Admin Console

  • You can now use the previewlimit setting on the Admin Limits page to define the maximum number of preview results shown in the Dataset Overview and Rule Workbench. The default value is 250.
  • Warning Large previewlimit values can negatively impact performance.

Fixes

Platform

  • All datasets are now available when you run a dataset rule fetch with dataset security disabled for the admin user account.
  • FIPS-enabled standalone installations of Data Quality & Observability Classic now support SAML authentication.

Jobs

  • When you remove a table from a manually triggered or scheduled job, you now receive a more descriptive error stating that the table does not exist instead of a generic syntax error.
  • You can now use Amazon S3 as a secondary dataset name for a rule query.
  • You can now rerun a job for a selected date using the Run DQ Job button on the metadata bar of the Findings page.
  • The command line now retains the created query, including the ${rd} parameter, when you run a job using the Run DQ Job button on the metadata bar.
  • The controller response for queries to the alert_cond table in the Metastore now maps internal objects correctly.
  • The Profile page now correctly shows the top 5 TopN and BottomN shape results for Pullup jobs.
  • When you add a transform to a job with histogram enabled or set to auto, the job processes as expected, and aliased columns display correctly on the Profile page.

Rules

  • You can again save rules on Pushdown jobs that exceed ⅓ of a buffer page.
  • Pushdown rules that use stat variables now run with the correct information from the current job, rather than using data from previous job runs.
  • The DOUBLECHECK rule on PostgreSQL and Redshift Pullup jobs now flags rows with negative double values as valid.
  • The Result Preview in the Rule Workbench no longer produces an error when you use out-of-the-box templates.

Profile

  • You can now compare the Baseline and Run values for the "Filled rate" when the Quality switch is enabled for a column on the Profile report.

Findings

  • Pushdown jobs run on Snowflake connections now show as failed on the Findings page if a password retrieval issue occurs.

Dataset Manager

  • When you apply an option from the Actions drop-down list, such as delete a dataset, the correct dataset now has the action applied to it.

Admin Console

  • When you select a submenu option from the Admin Console menu, the submenu section now remains open.

Patch release

2025.05.1

  • Columns from Db2, Oracle, SQL Server, Sybase, and Teradata connections now load correctly in Explorer for users without the "ROLE_ADMIN" or "ROLE_CONNECTION_MANAGER" roles.
  • The ability to assign findings for all finding types, such as rules, outliers, and so on, is no longer disabled when the "valsrcdisableaq" limit is set to "true" on the Admin Limits page.
  • We resolved a security vulnerability related to CVE-2025-48734.

Release 2025.04

Release Information

  • Release date of Data Quality & Observability Classic 2025.04: April 28, 2025
  • Release notes publication date: April 2, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.04), only the Java 17 build profile of Data Quality & Observability Classic contains all new and improved features and bug fixes listed in the release notes. The Java 8 and 11 build profiles for Standalone installations contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases.

Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release for Java 17 build profiles:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.04, you must upgrade to Java 17 and Spark 3.5.3 if you did not already do so in the 2025.02 or 2025.03 release.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.04 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

While this release contains Java 8, 11, and 17 builds of Data Quality & Observability Classic for Standalone installations, it is the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic.

For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Data Quality & Observability Classic version | Java 8 | Java 11 | Java 17 | Spark versions | Additional notes
2025.01 and earlier | Yes | Yes | No | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only) |
2025.02 | No | No | Yes | 3.5.3 only |
2025.03 | No | No | Yes | 3.5.3 only |
2025.04 | Yes | Yes | Yes | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only); 3.5.3 (Java 17 only) | Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
2025.05 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.06 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.07 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.08 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

New and improved

Platform

  • The Platform Path of the SQL Assistant for Data Quality feature now uses the Gemini AI model version gemini-1.5-pro-002.
  • Important To continue using the Platform Path of the SQL Assistant for Data Quality feature, you must upgrade to Data Quality & Observability Classic 2025.04.

  • FIPS-enabled standalone installations of Data Quality & Observability Classic now support SAML authentication.
  • The DQ agent now automatically recovers when the Metastore reconnects after a temporary disconnection, such as one caused by maintenance or a transient fault.

Jobs

  • We are pleased to announce that Oracle Pushdown is now available for preview testing.
  • When you add a timeslice to a timeUUID data type column in a Cassandra dataset, an unsupported data type error message now appears.
  • Dremio, Snowflake, and Trino Pullup jobs now support common table expressions (CTE) for parallel JDBC processing.
  • You can now archive the break records of shapes from Trino Pushdown jobs.
  • Link IDs for exact match duplicates are no longer displayed on the Findings page.
  • We improved the insertion procedure of validate source findings into the Metastore.

Rules

  • When you add or edit a rule with an active status, SQL syntax validation now runs automatically when you save it. If the rule passes validation, it saves as expected. If validation fails, a dialog box appears, asking whether you want to continue saving with errors. Rules with an inactive status save without validation checks.
  • When a rule condition exceeds 2 lines, only the first 2 lines are shown in the Condition column of the Rules tab on the Findings page. You can click "more..." or hover over the cell to show the full condition.

Profile

  • The Profile page now correctly shows the top 5 TopN and BottomN shape results for Pullup jobs.
  • When there are only two unique string values, the histogram on the Profile page now shows them correctly.

Findings

  • To improve page alignment across the application, the Findings page now has a page title.

Alerts

  • Rule score-based alerts now respect rule tolerance settings. Alerts are suppressed when the rule break percentage falls within the tolerance threshold.

Dataset Manager

  • If you don't have the required permissions to perform certain tasks in the Dataset Manager, such as updating a business unit, an error message now appears when you attempt the action.

Scorecards

  • You can now delete scorecards with names that contain trailing empty spaces, such as "scorecard " and "test scorecard ".

Integration

  • You can now search with trailing spaces when looking up a community for tenant mapping in the Data Quality & Observability Classic and Collibra Platform integration configuration.

APIs

  • You now need the ROLE_ADMIN or ROLE_ADMIN_VIEWER role to access the /v2/getdatasetusagecount API endpoint.

Fixes

Platform

  • We have improved the security of our application.
  • The data retention process now works as expected for all tenants in multi-tenant instances.
  • Permissions errors for SAML users without the ROLE_PUBLIC role are now resolved. You no longer need to assign ROLE_PUBLIC to users who already have other valid roles.

Jobs

  • Native rules, secondary dataset rules, and scheduled jobs on Databricks Pullup connections authenticated via EntraID Service Principal now run as expected.
  • Pushdown jobs scheduled to run concurrently now process the correlation activity correctly.
  • Pushdown jobs with queries that contain new line characters now process correctly, and the primary table from the query is shown in the Dataset Manager.

Findings

  • The confidence calculations for numerical outliers in Pullup jobs have been updated for negative values. Positive value confidence calculations and Pushdown calculations remain unchanged.

Alerts

  • Conditional alerts now work as expected when based on rules with names that start with numbers.

Integration

  • The default setting for the integration schema, table, and column recalculation service (DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED) is now false, reducing unnecessary database activity. You can enable the service or call it through the API when needed.
  • Important From versions 2024.11 to 2025.03 of Data Quality & Observability Classic, if you don't want the queries that recalculate the mapped and unmapped stats of total entities to run, set DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED to false in owl-env.sh or the Web ConfigMap. You can keep DQ_INTEGRATION_SCHEDULER_ENABLED set to true, as sketched below.
    For more information, go to the Collibra Support center.
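
    A sketch of these settings in owl-env.sh, assuming the usual export syntax of that script; in the Web ConfigMap, set the same names as key-value pairs:

      # Sketch only: disable the mapping stats recalculation queries (the default from 2025.04 onward)
      export DQ_DGC_MAPPING_STATS_SCHEDULER_ENABLED=false
      # The integration scheduler itself can remain enabled
      export DQ_INTEGRATION_SCHEDULER_ENABLED=true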

  • The Quality tab for database assets is now supported in out-of-the-box aggregation path configurations.
  • The auto map feature now correctly maps schemas that contain only views.
  • Dimension configuration for integration mapping no longer shows duplicate Data Quality & Observability Classic dimensions from the dq_dimension table.

API

  • When you copy a rule with an incorrect or non-existent dataset name using the v3/rules/copy API, an error message now specifies the invalid dataset or rule reference. This prevents invalid references in the new dataset.

Release 2025.03

Release Information

  • Release date of Data Quality & Observability Classic 2025.03: March 31, 2025
  • Release notes publication date: March 4, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.03), Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • To install Data Quality & Observability Classic 2025.03, you must upgrade to Java 17 and Spark 3.5.3 if you have not already done so in the 2025.02 release.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.03 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will contain Java 8, 11, and 17 versions of Data Quality & Observability Classic. This will be the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3, and will include feature enhancements and bug fixes from the 2025.02 release and critical bug fixes from the 2025.03 and 2025.04 releases. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Data Quality & Observability Classic version | Java 8 | Java 11 | Java 17 | Spark versions | Additional notes
2025.01 and earlier | Yes | Yes | No | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only) |
2025.02 | No | No | Yes | 3.5.3 only |
2025.03 | No | No | Yes | 3.5.3 only |
2025.04 | Yes | Yes | Yes | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only); 3.5.3 (Java 17 only) | Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
2025.05 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.06 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.07 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.08 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

Enhancements

Platform

  • On April 9, 2025, Google will deprecate the Vertex text-bison AI model, which SQL Assistant for Data Quality uses for the "preview path" option. To continue using SQL Assistant for Data Quality, you must switch to the "platform path," which requires an integration with Collibra Platform. For more information about how to configure the platform path, go to About SQL Assistant for Data Quality.
  • We removed support for Kafka streaming.

Connections

  • Private endpoints are now supported for Azure Data Lake Storage (Gen2) (ABFSS) key- and service principal-based authentication and Azure Blob Storage (WASBS) key-based authentication using the cloud.endpoint=<endpoint> driver property. To do this, add cloud.endpoint=<endpoint> to the Driver Properties field on the Properties tab of an ABFSS or WASBS connection template. For example, cloud.endpoint=microsoftonline.us.
  • Trino now supports parallel processing in Pullup mode. To enable this enhancement, the Trino driver has been upgraded to version 1.0.50.

Jobs

  • The Jobs tab on the Findings page now includes two button options:
    • Run Job from CMD/JSON allows you to run a job with updates made in the job's command line or JSON. In Pushdown mode, this option is Run Job from JSON, as the command line option is not available.
    • Run Job with Date allows you to select a specific run date.
    • Note The Run DQ Job button on the metadata bar retains its functionality, allowing you to rerun a job for the selected date.

  • The order of link IDs in the rule_breaks and opt_owl Metastore tables for Pushdown jobs is now aligned.
  • The options to archive the break records of associated monitors in the Explorer settings dialog box of a Pushdown job are now disabled when the Archive Break Records option is disabled at the connection-level.
  • We updated the logic of the maximum global job count to ensure it only increases, rather than fluctuating based on the maximum count of the last run job's tenant. This change allows tenants with lower maximum job counts to potentially run more total jobs while still enforcing the maximum connections for individual jobs. Over time, the global job count will align with the highest limit among all tenants.
  • You can now archive the break records of shapes from SAP HANA and Trino Pushdown jobs.
  • You can now use the new "behaviorShiftCheck" element in the JSON payload of jobs on Pullup connections. This allows you to enable or disable the shift metric results of Pullup jobs, helping you avoid misleading mixed data type results in string columns. By default, the "behaviorShiftCheck" element is enabled (set to true). To disable it, set "behaviorShiftCheck": false in the payload, as shown in the sketch below.
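
A minimal sketch of the payload fragment; "behaviorShiftCheck" is the documented element, while the rest of a full job definition is omitted here:

  {
    "behaviorShiftCheck": false
  }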

Rules

  • You can now set the MANDATORY_PRIMARY_RULE_COLUMN setting to TRUE from the Application Configuration Settings page of the Admin Console to require users to select a primary column when creating a rule. This requirement is enforced when a user creates a new rule or saves an existing rule for the first time after the setting is enabled. Existing rules are not affected automatically.
  • The names of Template rules can no longer include spaces.
  • CSV rule export files now include a Filter Query column when a rule filter is defined. If no filter is used, the column remains empty. The Condition column has been renamed to Rule Query to better distinguish between rule and filter queries. Additionally, the Passing Records column now shows the correct values.
  • You can now apply custom dimensions added to the dq_dimension table in the metastore to rules from the Rule Details dialog box on the Rule Workbench. These custom dimensions are also included in the Column Dimension Report.
  • Livy caching now uses a combination of username and connection type instead of just the username. This improvement allows you to seamlessly switch between connections to access features such as retrieving the run results previews for rules or creating new jobs for remote file connections, without manually terminating sessions.
  • Note Manually terminating a Livy session will still end all sessions associated with that user.

Findings

  • You can now work with the Findings page in full-screen view.

Scorecards

  • You now receive a helpful error message in the following scenarios:
    • Creating a scorecard without including any datasets.
    • Updating a scorecard and removing all of its datasets.
    • Adding or updating a scorecard page with a name that already exists.

Fixes

Platform

  • We have improved the security of our application.
  • We mitigated the risk of SQL injection vulnerabilities in our application.
  • Helm Charts now include the external JWT properties required to configure an externally managed JWT.

Jobs

  • Google BigQuery jobs no longer fail during concurrent runs.
  • When you add a -conf setting to the agent configuration of an existing job and rerun it, the command line no longer includes duplicate -conf parameters.
  • When you expand a Snowflake connection in Explorer, the schema is now passed as a parameter in the query. This ensures the Generate Report function loads correctly.
  • Record change detection now works as expected with Databricks Pushdown datasets.
  • When you select a Parquet file in the Job Creator workflow, the Formatted view tab now shows the file’s formatted data.
  • When you edit a Pullup job from the Command Line, JSON, or Query tab, the changes initially appear only on the tab where you made the edits. After you rerun the job, the changes are reflected across all three tabs.
  • The Dataset Overview now performs additional checks to validate queries that don’t include a SELECT statement.

Rules

  • When you update the adaptive level or pass value option in the Change Detection dialog box of an adaptive rule, you must now retrain it by clicking Retrain on the Behaviors tab of the Findings page.
  • @t1 rules on file-based datasets with a row filter now return only the rows included in the filter.
  • @t1 rules on Databricks datasets no longer return a NullPointerException error.
  • When you run Rule Discovery on a dataset with the “Money” Data Class in the Data Category, the job no longer returns a syntax error when it runs.

Findings

  • We updated the time zone library. As a result, some time zone options, such as "US/Eastern," have been updated to their new format. Scheduled jobs are fully compatible with the corresponding time zones in the new library. If you need to adjust a time zone, you must use the updated format. For example, "US/Eastern" is now "America/New_York."
  • Labels under the data quality score meter are now highlighted correctly according to the selected time zone of the dataset.

Alerts

  • You no longer receive erroneous job failure alerts for successful runs. Additional checks now help determine whether a job failed, improving the accuracy of job status notifications.
  • You can now consistently select or deselect the Add Rule Details option in the Condition Alert dialog box.

Reports

  • The link to the Dataset Findings documentation topic on the Dataset Findings report now works as expected.

Connections

  • Editing an existing remote file job no longer results in an error.
  • Teradata connections now function properly without requiring you to manually add the STRICT_NAMES driver property.

APIs

  • When you run a job using the /v3/jobs/run API that was previously exported and imported with /v3/datasetDefs, the Shape settings from the original job now persist in the new job.
  • Bearer tokens generated in one environment using the /v3/auth/signin endpoint (for local users) or the /v3/auth/oauth/signin endpoint (for OAuth users) are now restricted to that specific Data Quality & Observability Classic environment and cannot be used across other environments.
  • We improved the security of our API endpoints.

Integration

  • You can now use the automapping option to map schemas, tables, and columns when setting up an integration between Data Quality & Observability Classic and Collibra Platform in single-tenant Data Quality & Observability Classic environments.
  • The Quality tab now correctly shows the data quality score when the head asset of the starting relation type in the aggregation path is a generic asset or when the starting relation type is based on the co-role instead of the role of the relation type.
  • Parentheses in column names are no longer replaced with double quotes when mapped to Collibra Platform assets. This change allows automatic relations to be created between Data Quality Rule and Column assets in Collibra Platform.

Release 2025.02

Release Information

  • Release date of Data Quality & Observability Classic 2025.02: February 24, 2025
  • Release notes publication date: January 20, 2025

Announcement

Important 

As a security measure, we are announcing the end of life of the Java 8 and 11 versions of Data Quality & Observability Classic, effective in the August 2025 (2025.08) release.

In this release (2025.02) and the March (2025.03) release, Data Quality & Observability Classic is only available on Java 17 and Spark 3.5.3. Depending on your installation of Data Quality & Observability Classic, you can expect the following in this release:
  • Kubernetes installations 
    • Kubernetes containers automatically contain Java 17 and Spark 3.5.3.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-web ConfigMap, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
  • Standalone installations
    • You must upgrade to Java 17 and Spark 3.5.3 to install Data Quality & Observability Classic 2025.02.
    • If you use custom drivers, ensure they are compatible with Java 17 and Spark 3.5.3.
    • Follow the latest steps to upgrade to Data Quality & Observability Classic 2025.02 with Java 17.
    • If you use file-based SAML authentication with the SAML_METADATA_USE_URL variable set to false in the owl-env.sh script, update the Meta-Data URL option on the SAML Security Settings page with your metadata file. Use the file:/opt/owl/config/idp-metadata.xml format, ensuring the file name begins with the prefix file:. For steps on how to configure this, go to the "Enable the SAML SSO sign in option" section in SAML Authentication.
    • We encourage you to migrate to a Kubernetes installation to improve the scalability and ease of future maintenance.

The April 2025 (2025.04) release will contain Java 8, 11, and 17 versions of Data Quality & Observability Classic. This will be the final release to contain Java 8 and 11 builds and Spark versions older than 3.5.3, and will include feature enhancements and bug fixes from the 2025.02 release and critical bug fixes from the 2025.03 and 2025.04 releases. Between 2025.05 and 2025.07, only critical and high-priority bug fixes will be made for Java 8 and 11 versions of Data Quality & Observability Classic. For a breakdown of Java and Spark availability in current and upcoming releases, click "See what is changing" below.

See what is changing
Data Quality & Observability Classic version | Java 8 | Java 11 | Java 17 | Spark versions | Additional notes
2025.01 and earlier | Yes | Yes | No | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only) |
2025.02 | No | No | Yes | 3.5.3 only |
2025.03 | No | No | Yes | 3.5.3 only |
2025.04 | Yes | Yes | Yes | 2.3.0 (Java 8 only); 2.4.5 (Java 8 only); 3.0.1 (Java 8 and 11); 3.1.2 (Java 8 and 11); 3.2.2 (Java 8 and 11); 3.4.1 (Java 11 only); 3.5.3 (Java 17 only) | Important: The Java 8 and 11 build profiles only contain the 2025.02 release and critical bug fixes addressed in 2025.03 and 2025.04. They do not contain any feature enhancements from the 2025.03 or 2025.04 releases. Only the Java 17 build profile contains feature enhancements and bug fixes listed in the 2025.04 release notes.
2025.05 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.06 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.07 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.
2025.08 | No | No | Yes | 3.5.3 only | Fixes for Java 8 and 11 build profiles will be available only for critical and high-priority defects.


For more information, go to the Data Quality & Observability Classic Java Upgrade FAQ.

Enhancements

Platform

  • When an externally managed user enables assignment alert notifications and enters an email address on their user profile, that email address now receives alert notifications for findings assigned to it.
  • SAML_ENTITY_BASEURL is now a required property for SAML authentication.
  • To improve the security of our application, we upgraded SAML. As part of the upgrade, the following SAML authentication properties are no longer used:
    • SAML_TENANT_PROP_FROM_URL_ALIAS
    • SAML_METADATA_USE_URL
    • SAML_METADATA_TRUST_CHECK
    • SAML_INCLUDE_DISCOVERY_EXTENSION
    • SAML_SUB_DOMAIN_REPLACE_SOURCE
    • SAML_SUB_DOMAIN_REPLACE_TARGET
    • SAML_MAX_AUTH_AGE
    • Note If the SAML_LB_EXISTS property is set to true and SAML_LB_INCLUDE_PORT_IN_REQUEST is set to false, you may need to update SAML_ENTITY_BASEURL to include the port in the URL; see the sketch below. The SAML_ENTITY_BASEURL value should match the IdP's ACS URL.

      Additionally, we recommend using the Collibra DQ UI when signing in with SAML, as IdP-initiated sign-on is not fully supported in Collibra DQ.

      Tip We recommend synchronizing the clocks of the Identity Provider (IdP) and Service Provider (Collibra DQ) using Network Time Protocol (NTP) to prevent authentication failures caused by significant clock skew.
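
      For example, a sketch of the property in owl-env.sh with a hypothetical host and port; the real value must match your IdP's ACS URL:

        # The dq.example.com host and port 8443 are placeholders for illustration
        export SAML_ENTITY_BASEURL=https://dq.example.com:8443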

Connections

  • You can now limit the schemas that are displayed in a JDBC connection in Explorer. This enhancement helps you manage usage and maintain security by restricting access to only the necessary schemas. (idea #CDQ-I-152)
  • Note When you include a restricted schema in the query of a DQ Job, the query scope may be overwritten when the job runs. While only the schemas you selected when you set up the connection are shown in the Explorer menu, users are not restricted from running SQL queries on any schema from the data source.

  • Hive connections now use the driver class name com.cloudera.hive.jdbc.HS2Driver. Additionally, only Hive driver versions 2.6.25 and newer are supported.
  • Warning If you have a Standalone installation of Data Quality & Observability Classic and leverage Hive connections, you must upgrade to the latest 2.6.25 Hive driver to use Data Quality & Observability Classic 2025.02.

Jobs

  • DQ Jobs on Trino Pushdown connections now allow you to select multiple link ID columns when setting up the dupes monitor.

Rules

  • We are delighted to announce that rule filtering is now generally available. To use rule filtering, an admin needs to set RULE_FILTER to TRUE in the Application Configuration.
  • We are also pleased to announce that rule tolerance is generally available. However, the RULE_TOLERANCE_BETA setting must remain set to TRUE in the Application Configuration until the 2025.04 version.
  • You can now define a DQ Dimension when you create a new template or edit an existing one to be applied to all new custom rules created using this template. Additionally, a Dimension column is now shown on the Templates page.
  • There is now a dedicated break_msg column in the rule_output Metastore table, which shows the break message when a rule break occurs.

Dataset Manager

  • You can now add meta tags up to 100 characters long from the Edit option under the Actions menu on the Dataset Manager. (idea #CDQ-I-74)

Fixes

Platform

  • We have improved the security of our application.
  • To improve the security of our application, Data Quality & Observability Classic now supports a Content Security Policy (CSP).

Jobs

  • When a scheduled job runs, only the time it is scheduled to run is shown in the Scheduled Time column on the Jobs page.
  • When you edit and re-run a DQ Job with a source-to-target mapping of data from two different data sources, the -srcds parameter on the command line now correctly contains the src_ prefix before the source dataset, and the DQ Job re-runs successfully.
  • When you include a custom column via the source query of a source-to-target mapping, the validate overlay function no longer fails with an error message.
  • When you clone a dataset from Dataset Manager, Explorer, or the Job tab on Findings, the Job Name field and the Command Line in the Review step of Explorer now correctly reflect the "temp_cloned_dataset" naming convention for cloned datasets.
  • DQ Jobs with the same run date can no longer run concurrently.
  • Pullup DQ Jobs with names that include dashes, such as "sample-dataset," no longer remain stalled in the "staged" activity when you rerun them.
  • The correlation and histogram activities no longer cause DQ Jobs on Databricks Pushdown connections to fail sporadically.
  • The correlation activity now displays correctly in the UI and no longer causes DQ Jobs on Pushdown connections to fail.

Rules

  • When you use Freeform SQL queries with LIKE % conditions, for example, SELECT * FROM @public.test where testtext LIKE '%a, c%', they now return the expected results.
  • SQL Server job queries where the table name is escaped with brackets, for example, select * from dbo.[Table], now process correctly when the job runs.
  • The Rule Results Preview button is no longer disabled when the API call to gather the Livy status fails due to an invalid or non-existent session. The API call now correctly manages the cache for Livy sessions terminated due to idle timeout.
  • The Export Rules with Details export now includes data from the new Tolerance rule setting.
  • Data type rules now evaluate the contents of each column, not just the data type of each column, to ensure the correct breaks are returned.
  • SQL reserved keywords included in string parsing are now correctly preserved in their original case.
  • When you update the name of a rule and an alert is configured on it, the alert will now show the updated name when sent.

Alerts

  • When you set a breaking rule as passing, a rule status alert for rules with breaking statuses no longer sends erroneous error messages for that rule.

Connections

  • When you substitute a PWD value as a sensitive variable on the Variables tab of a Databricks connection template, the sensitive variable in the connection URL is now set correctly for source-to-target mappings where Databricks is the source dataset.

APIs

  • The /v3/jobs/<jobId>/breaks/rules endpoint no longer returns a 500 error when using a valid jobId. Instead, it now returns empty files when no results are found for exports without findings.
  • When you schedule a Job to run monthly or quarterly, the jobSchedule object in the /v3/datasetDefs endpoint now reflects your selection.
  • When you run the /v3/rules/{dataset}/{ruleName}/{runId}/breaks endpoint when Archive Rules Break Records is enabled, break records are now retrieved from the source system instead of the Metastore.
  • Meta tags are now correctly applied to new datasets that are created with the /v3/datasetDefs endpoint.
  • When an integration import job fails, the /v3/dgc/integrations/jobs endpoint now returns the correct “failed” status. Additionally, the integration job status “ignored” is now available.

Integration

  • An invalid RunID no longer returns a successful response when using a Pushdown dataset with the /v3/jobs/run endpoint.