BidFitBot
Last updated: 2026-05-02
This page is for portal operators, webmasters, and anyone who notices BidFit traffic in their server logs. We believe transparent crawler identification is table stakes for operating in the public-procurement space.
Who we are
BidFit is a Canadian SaaS that qualifies public-sector tenders for SMB contractors. We read public tender notices on behalf of users who paste a URL. Every crawl is initiated by a real human user requesting a qualification brief on a specific notice — we do not bulk-scrape portals or rebuild your data.
User agent
All BidFit-initiated requests identify themselves with a user-agent string containing the product token `BidFitBot`.
If a request claims to be from BidFit but does not include this user agent, it is not us. Please report it to bot@bidfit.ca and we will investigate.
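If you want to check your own logs for our traffic, a minimal sketch like the following works. It assumes the Combined Log Format (user agent in the last quoted field); adapt the parsing to your server's format.

```python
import re

# Flag access-log lines whose user-agent field mentions the BidFitBot token.
# Combined Log Format (user agent in the last quoted field) is an assumption.
UA_TOKEN = "BidFitBot"

def bidfit_lines(log_lines):
    """Return log lines whose quoted user-agent field contains BidFitBot."""
    hits = []
    for line in log_lines:
        quoted = re.findall(r'"([^"]*)"', line)
        if quoted and UA_TOKEN in quoted[-1]:
            hits.append(line)
    return hits
```

For example, `bidfit_lines(open("/var/log/nginx/access.log"))` — the log path is an assumption for your setup.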
What we crawl
- CanadaBuys — federal tender notices via the public listing pages
- BC Bid — provincial notices via the Bonfire-hosted public preview
- MERX — public preview pages only (we do not crawl subscription-gated content)
- Municipal portals — fetched via a generic LLM-based extractor when a user pastes a URL we don't have a hand-tuned adapter for
Crawl frequency
We are request-driven, not a periodic scraper:
| Trigger | Pattern |
|---|---|
| User pastes a URL | One fetch, on demand, per request |
| Daily digest sweep (paid users) | One sweep per portal per day, between 03:00–05:00 ET (off-peak Canadian time) |
| Addendum monitoring | One re-fetch per saved tender per day, batched with the digest sweep |
We do not crawl recursively. We never follow links into authenticated areas. We do not re-fetch the same notice more than once per hour even if multiple users paste the same URL.
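The once-per-hour de-duplication above can be sketched as follows. Our real cache is internal; the names and in-memory dict here are illustrative only.

```python
import time

# Illustrative once-per-hour de-duplication: if several users paste the same
# notice URL, only the first request within an hour triggers a fetch.
ONE_HOUR = 3600.0
_last_fetch = {}  # notice URL -> timestamp of the last fetch

def should_fetch(url, now=None):
    """Allow a fetch only if this URL was not fetched within the past hour."""
    now = time.time() if now is None else now
    last = _last_fetch.get(url)
    if last is not None and now - last < ONE_HOUR:
        return False  # recent copy exists; serve it instead of re-fetching
    _last_fetch[url] = now
    return True
```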
What we store
- The notice URL and the public text content of the notice page
- Extracted metadata (title, buyer, NAICS, closing date, value, location)
- The user's qualification brief (private to that user)
We do not store full attachment files (RFP PDFs, addenda, drawings). We do not republish notice content on a public site. We do not include notice content in search engine indexes.
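For concreteness, one stored record has roughly this shape, using the metadata fields listed above. All values here are made up for the example.

```python
# Illustrative shape of one stored notice record; every value is invented.
notice = {
    "url": "https://canadabuys.canada.ca/en/tender/example",
    "title": "Snow removal services",
    "buyer": "City of Example",
    "naics": "561790",
    "closing_date": "2026-06-01",
    "value": None,            # omitted when the notice publishes no estimate
    "location": "Ottawa, ON",
}
```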
Rate limiting and politeness
- Maximum 1 concurrent request per portal hostname
- Minimum 2-second pause between same-portal requests during digest sweeps
- Standard 30-second timeout, then we abandon the request
- We respect `robots.txt` directives that explicitly disallow our user agent
- We respect `Retry-After` response headers and back off when you ask us to
- We honor `Crawl-delay` directives
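The robots.txt checks above map directly onto Python's stdlib robotparser; a sketch of that lookup (the function name is ours, not a real BidFit API):

```python
import urllib.robotparser

# Check whether BidFitBot may fetch a path, and what Crawl-delay applies,
# given the lines of a portal's robots.txt.
AGENT = "BidFitBot"

def crawl_policy(robots_txt_lines, path):
    """Return (allowed, crawl_delay) for BidFitBot on the given path."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt_lines)
    rp.modified()  # mark rules as loaded so can_fetch() trusts the result
    return rp.can_fetch(AGENT, path), rp.crawl_delay(AGENT)
```

`crawl_delay()` returns `None` when the portal sets no `Crawl-delay`, in which case our own minimum pause applies.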
Opt-out
If you operate a portal and would prefer BidFit not access it, two options:
- Add this to your `robots.txt`:

  ```
  User-agent: BidFitBot
  Disallow: /
  ```

  We will stop within 24 hours of the next robots.txt fetch.
- Email bot@bidfit.ca with your domain. We will add it to our blocklist within 24 hours and confirm in writing.
Either method is sufficient. We will never ignore an opt-out.