Manage crawlers

A crawler is an automated script that ingests data from Amazon S3 into Data Catalog.

You can create, edit, and delete crawlers in Collibra Platform. When you synchronize Amazon S3, the crawlers are created in AWS Glue and run. Each crawler crawls a location in Amazon S3 based on its include path. The results are stored in one AWS Glue database per domain that is assigned to one or more crawlers. Those databases are ingested into Data Catalog as assets, attributes, and relations. The databases remain in AWS Glue until the next synchronization, at which point they are deleted and recreated. The crawlers themselves are deleted from AWS Glue as soon as the synchronization finishes.
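The following sketch approximates that lifecycle with the AWS Glue API, assuming Python and boto3. It is illustrative only: Collibra performs the equivalent calls internally during synchronization, and the crawler name, IAM role, database name, and paths below are placeholders.

    # Illustrative sketch of the AWS Glue calls behind one synchronization.
    # Not Collibra's implementation; all names and the IAM role are placeholders.
    import time
    import boto3

    glue = boto3.client("glue")

    # One crawler per configured include path, one database per assigned domain.
    glue.create_crawler(
        Name="collibra-sync-example",         # placeholder crawler name
        Role="GlueCrawlerRole",               # placeholder IAM role with S3 read access
        DatabaseName="collibra_domain_db",    # placeholder Glue database
        Targets={"S3Targets": [{
            "Path": "s3://my-bucket/sales/",  # the include path
            "Exclusions": ["**/*.tmp"],       # optional exclude patterns (glob)
        }]},
    )

    # Run the crawler and poll until the run finishes.
    glue.start_crawler(Name="collibra-sync-example")
    while glue.get_crawler(Name="collibra-sync-example")["Crawler"]["State"] != "READY":
        time.sleep(10)

    # The resulting tables are what Data Catalog ingests as assets,
    # attributes, and relations.
    tables = glue.get_tables(DatabaseName="collibra_domain_db")["TableList"]

    # The crawler is deleted right after synchronization; the database stays
    # in AWS Glue until the next synchronization deletes and recreates it.
    glue.delete_crawler(Name="collibra-sync-example")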

Important 
  • If you completed the Glue database configuration parameter in the capability, you don't need to create real crawlers. However, you currently need to create a dummy crawler, which is a crawler with an invalid include path, such as s3://dummy. The dummy crawler isn't considered when you run the synchronization. In a future release, we will remove the need for a dummy crawler.
  • By default, AWS Glue allows up to 1,000 crawlers per account. You can synchronize several S3 File System assets simultaneously, but if the total number of crawlers exceeds this limit, synchronization fails. Because Collibra deletes the crawlers from AWS Glue after synchronization, it is safer to synchronize each S3 File System asset at a different time. A pre-flight check is sketched after this list. For more information, go to the AWS Glue documentation.
  • Crawlers in AWS Glue can crawl multiple buckets, but in Collibra, each crawler can crawl only a single bucket.
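Because the quota applies account-wide, a pre-flight check that counts existing crawlers can help you decide whether to start another synchronization. This is a hypothetical sketch, assuming Python and boto3; the number of planned crawlers is a placeholder.

    # Hypothetical pre-flight check against the default 1,000-crawler quota.
    import boto3

    glue = boto3.client("glue")

    existing = 0
    for page in glue.get_paginator("get_crawlers").paginate():
        existing += len(page["Crawlers"])

    planned = 25  # placeholder: crawlers the next synchronization will create
    if existing + planned > 1000:
        raise RuntimeError(
            f"{existing} crawlers already exist; adding {planned} would exceed the quota"
        )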


Create a crawler

You can create a crawler for an S3 File System asset in Data Catalog.

Steps

  1. Open an S3 File System asset page.
  2. In the tab pane, click Configuration.
  3. In the Crawlers section, click Edit Configuration.
  4. Click Add Crawler.
    The Create crawler dialog box appears.
  5. Enter the required information.

    Domain

    The domain in which the assets of the S3 file system are created.

    Name

    The name of the crawler in Collibra.

    Table Level

    Specify the level from which tables are created during the integration. By default, tables are created from the top level, level 1. Specify a number only if you want to create tables starting from another level, such as 2 or 3. For more information, go to the AWS documentation.
    File Group Pattern

    A regular expression, also referred to as regex or regexp, is a sequence of characters that specifies a match pattern in text. Add a regular expression to group files with similar file names into a File Group asset during the S3 synchronization. Multiple regular expression grammar variants exist; Collibra uses the Java variant. A sketch after these steps illustrates how this pattern and the exclude patterns apply.

    Example If you add the (\w*)_\d\d\d\d\.csv regex, the integration automatically detects files matching this pattern and groups them into a File Group asset.

    You can define one regex per crawler.

    Tip 
    • Multiple websites provide guidelines and examples of regular expressions, for example, Regexlib and RegexBuddy, or even ChatGPT.
    • You can also test your regular expression on various websites, for example, Regex101 (Select the Java 8 option in the Flavor panel).

    The referenced websites serve only as examples. The use of ChatGPT or other generative AI products and services is at your own risk. Collibra is not responsible for the privacy, confidentiality, or protection of the data you submit to such products or services, and has no liability for such use.

    Include path

    The case-sensitive path to a directory of a bucket in Amazon S3. All objects and subdirectories of this path are crawled.

    For more information and examples, go to the AWS Glue documentation.

    Exclude patterns

    A glob pattern that matches objects within the include path that you want to exclude from the crawl.

    For more information and examples, go to the AWS Glue documentation.

    Add pattern

    Button to add additional exclude patterns.

    Custom Classifier

    If you want the AWS crawler created by the S3 integration to use a specific custom classifier, add the name of the classifier in this field. The custom classifier must already exist in the AWS Glue console. For more information, go to the AWS Glue documentation.

    You can add multiple classifiers by clicking Add Custom Classifier.

  6. Click Create.
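To experiment with the File Group Pattern and Exclude patterns fields before synchronizing, the sketch below approximates how they apply to object names, assuming Python. Note the caveats: Collibra evaluates the file group pattern with Java regex grammar and AWS Glue evaluates exclude patterns as globs, so Python's re and fnmatch are only close stand-ins, and the idea that the capture group keys the group is our assumption, purely for illustration.

    # Rough approximation of exclude patterns (glob) and a file group pattern (regex).
    # Collibra uses Java regex grammar; Python's re is a close stand-in for \w and \d.
    import re
    from fnmatch import fnmatch

    objects = ["sales_2023.csv", "sales_2024.csv", "inventory_2024.csv", "readme.txt"]
    exclude_patterns = ["*.txt"]                             # drop documentation files
    file_group_pattern = re.compile(r"(\w*)_\d\d\d\d\.csv")  # the example regex above

    groups = {}
    for name in objects:
        if any(fnmatch(name, p) for p in exclude_patterns):
            continue  # excluded objects are not crawled
        match = file_group_pattern.fullmatch(name)
        if match:
            # Assumption for illustration: the capture group keys the file group.
            groups.setdefault(match.group(1), []).append(name)

    print(groups)
    # {'sales': ['sales_2023.csv', 'sales_2024.csv'], 'inventory': ['inventory_2024.csv']}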

What's next

You can now synchronize Amazon S3 manually or define a synchronization schedule.

Edit a crawler

You can edit a crawler of an S3 File System asset in Data Catalog. For example, you can do this if you want to change an exclude pattern.

Steps

  1. Open an S3 File System asset page.
  2. In the tab pane, click Configuration.
  3. In the Crawlers section, click Edit Configuration.
  4. In the row of the crawler that you want to edit, click the edit icon.
    The Edit crawler dialog box appears.
  5. Enter the required information.

    Domain

    The domain in which the assets of the S3 file system are created.

    Name

    The name of the crawler in Collibra.

    Table Level

    Specify the level from which tables are created during the integration. By default, tables are created from the top level, level 1. Specify a number only if you want to create tables starting from another level, such as 2 or 3. For more information, go to the AWS documentation.
    File Group Pattern

    A regular expression, also referred to as regex or regexp, is a sequence of characters that specifies a match pattern in text. Add a regular expression to group files with similar file names into a File Group asset during the S3 synchronization. Multiple regular expression grammar variants exist; Collibra uses the Java variant.

    Example If you add the (\w*)_\d\d\d\d\.csv regex, the integration automatically detects files matching this pattern and groups them into a File Group asset.

    You can define one regex per crawler.

    Tip 
    • Multiple websites provide guidelines and examples of regular expressions, for example, Regexlib and RegexBuddy, or even ChatGPT.
    • You can also test your regular expression on various websites, for example, Regex101 (Select the Java 8 option in the Flavor panel).

    The referenced websites serve only as examples. The use of ChatGPT or other generative AI products and services is at your own risk. Collibra is not responsible for the privacy, confidentiality, or protection of the data you submit to such products or services, and has no liability for such use.

    Include path

    The case-sensitive path to a directory of a bucket in Amazon S3. All objects and subdirectories of this path are crawled.

    For more information and examples, go to the AWS Glue documentation.

    Exclude patterns

    A glob pattern that matches objects within the include path that you want to exclude from the crawl.

    For more information and examples, go to the AWS Glue documentation.

    Add pattern

    Button to add additional exclude patterns.

    Custom Classifier

    If you want the AWS crawler created by the S3 integration to use a specific custom classifier, add the name of the classifier in this field. The custom classifier must already exist in the AWS Glue console. For more information, go to the AWS Glue documentation.

    You can add multiple classifiers by clicking Add Custom Classifier.

  6. Click Save.

Delete a crawler

You can delete a crawler from an S3 File System asset.

Note If you delete an S3 File System asset that contains one or more crawlers, the crawlers are also deleted.

Steps

  1. Open an S3 File System asset page.
  2. In the tab pane, click Configuration.
  3. In the Crawlers section, in the row of the crawler that you want to delete, click the delete icon.
    The Delete Crawler confirmation message appears.
  4. Click Delete crawler.