Managing alerts from the Alert Builder

You can manage existing alerts and configure alerts for rules and adaptive rules on the Alert Builder page. Click a tab to view the available fields and options.

Alerts tab

Tip To sort a column, click the arrow icons next to the column header.

Column Description
Alert Name The name of the alert for the selected dataset.
Dataset The name of the selected dataset.
Condition The condition configured for the alert. See the example following this table.
Alert Recipient The webhook name or email address that receives the alert.
Msg A custom message pertaining to the alert.
Batch Name The name of the batch alert, if applicable. A batch name is a group name for a set of alert recipient email addresses. You can search for and select an existing batch name to quickly apply frequently used recipient groups to an alert.
Valid The validity of the alert. If the alert is valid, a check mark displays in this column.
Active

Shows whether the alert is active for future runs of the dataset. Click the icon to activate or deactivate the alert.

The active icon shows that the alert is active.

The inactive icon shows that the alert is inactive.

Action

Click Actions and then select one of the following options from the dropdown menu:

Edit
Modify the values of the selected alert.
Rename
Update the name of an alert.
Note The updated name applies only to alerts generated in future job runs and does not affect historical data.
Delete
Delete the selected alert.
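
For illustration only, an alert condition is an expression that Collibra DQ evaluates after each job run. The condition below is a hypothetical, score-based example; the expressions available to you depend on your Collibra DQ configuration:

score < 75

With a condition like this, the alert is sent whenever the data quality score for a run drops below 75.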

Rules tab

Tip To sort a column, click the arrow icons next to the column header.

Column Description
Rule Name

The name of the rule.

Note Rule names are subject to Spark naming conventions, which generally allow only letters, numbers, and underscores (_) in names. If your rule name contains hyphens, you can enclose it in backticks (`) to ensure that the alert works correctly. However, rule names that contain unsupported characters may result in alert delivery failure. For more information about Spark naming conventions, see the official Spark documentation. For an illustration of backtick quoting, see the example after this table.

Dataset The name of the selected dataset.
Value The SQL condition of your rule.
Points The number of points that Collibra DQ deducts from your data quality score when data breaches the conditions of your rule.
% If you do not customize this value on the Rule Details modal, Collibra DQ uses the default value of 1.
Valid

Shows whether your rule passes rule validation.

The valid rule icon shows that your rule passes validation.
The invalid rule icon shows that your rule does not pass validation.
Type The type of rule, for example, SQLF.
Action

Click Create Alert + to create a new alert for the rule.
After you configure an alert for a rule, you can access it from the Alerts tab.
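
As noted in the Rule Name description, Spark SQL generally accepts unquoted identifiers only when they contain letters, numbers, and underscores; names with other characters, such as hyphens, must be wrapped in backticks. The following snippet is only an illustration of backtick quoting; the table and column names in it are hypothetical and are not part of Collibra DQ:

-- An identifier that follows Spark naming conventions needs no quoting.
SELECT email FROM customers WHERE email IS NOT NULL

-- A hyphenated identifier must be enclosed in backticks to parse correctly.
SELECT `customer-email` FROM customers WHERE `customer-email` IS NOT NULL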

Adaptive Rules tab

Tip To sort a column, click the arrow icons next to the column header.

Column Description
Field

The column with the breaking adaptive rule.

Check

The type of breaking adaptive rule check, including:

  • Cardinality
  • Datatype shifting
  • Empty values
  • Load time
  • NULL values
  • Minimum value
  • Maximum value
  • Mean value
  • Row counts
  • Unique values
Condition The automatically defined SQL condition of the breaking adaptive rule. See the example after this table.
Status

Shows the adaptive rule as "Breaking."

Note Only "Breaking" adaptive rules are eligible for alerts.

Distribution

Shows the data point of a given column relative to its baseline.

Example In the chart displayed in this column, the red line represents the data point of the value for the current run. This is overlaid on the baseline and expected range for that run. When a column has a value outside the expected range boundary, it triggers a "Breaking" status for that type of check. This range also appears on the Findings and Profile pages for the breaking column.

Action

Click Add Alert + to add a new alert for the breaking adaptive rule.
After you configure an alert for an adaptive rule, you can access it from the Alerts tab.
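
For illustration only, the following sketch shows the kind of condition a breaking Row counts check might correspond to, assuming a baseline of roughly 10,000 rows. The identifier row_count and the range values are hypothetical; the actual condition is generated automatically by Collibra DQ and may differ in form:

row_count >= 9500 AND row_count <= 10500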