Using the CrawlerConfig
Passing a CrawlerConfig is optional; you can also work with the defaults.
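A minimal sketch of passing a custom configuration, assuming a Crawler class that accepts a CrawlerConfig object. The package name and the individual option names shown here are illustrative assumptions, not the library's confirmed API.

```typescript
// Hypothetical import path and option names, for illustration only.
import { Crawler, CrawlerConfig } from "your-crawler-package";

// Every option is optional; anything you leave out falls back to the defaults.
const config: CrawlerConfig = {
  maxDepth: 2,        // assumed option: how many link levels to follow
  timeoutMs: 10_000,  // assumed option: per-request timeout in milliseconds
};

const crawler = new Crawler(config);
const result = await crawler.crawl("https://example.com");
```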
Using fallback
When the standard crawler fails to access a site because of restrictions such as cookie walls, JavaScript requirements, or IP blocks, the fallback mechanism switches to ZyteCrawler as an alternative. Combining both crawlers leverages their respective strengths and keeps site access reliable under a wider range of constraints.
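A sketch of wiring up the fallback, assuming the fallback crawler is supplied through the config. The option name `fallback` and the ZyteCrawler constructor arguments are assumptions for illustration.

```typescript
// Illustrative sketch: enabling the ZyteCrawler fallback.
import { Crawler, ZyteCrawler } from "your-crawler-package";

const crawler = new Crawler({
  // Assumed option: when the standard crawler is blocked (cookies, JavaScript,
  // IP blocks), the request is retried through this fallback crawler.
  fallback: new ZyteCrawler({ apiKey: process.env.ZYTE_API_KEY }),
});
```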
Using a different Client
If you want to use a different client, you can change it in the config as well.
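A sketch of swapping in a different HTTP client, assuming the config accepts a `client` option. Both the option name and the minimal client contract shown here are assumptions; the real interface may differ.

```typescript
// Illustrative sketch: providing a custom client via the config.
import { Crawler } from "your-crawler-package";
import axios from "axios";

const crawler = new Crawler({
  client: {
    // Assumed minimal contract: fetch a URL and return the HTML body.
    get: async (url: string) => (await axios.get(url)).data as string,
  },
});
```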
Using errorHandlingStrategy
If you want to use function calling for the AI, it is important to return errors instead of throwing them, so the Crawler's default error handling strategy is return. If you prefer that errors are thrown, you can change this in the config.
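A sketch of both strategies, assuming `errorHandlingStrategy` takes the string values "return" and "throw" and that a returned error is an `Error` instance; the exact values and return shape are assumptions based on this page's description.

```typescript
// Illustrative sketch: switching the error handling strategy.
import { Crawler } from "your-crawler-package";

// Default: errors come back as values, which suits AI function calling.
const returningCrawler = new Crawler({ errorHandlingStrategy: "return" });
const result = await returningCrawler.crawl("https://example.com");
if (result instanceof Error) {
  console.error("Crawl failed:", result.message); // assumed return shape
}

// Alternative: make the crawler throw, so you can use try/catch instead.
const throwingCrawler = new Crawler({ errorHandlingStrategy: "throw" });
try {
  await throwingCrawler.crawl("https://example.com");
} catch (err) {
  console.error("Crawl failed:", err);
}
```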