
How trackagoat fetches TikTok data, what each scrape type does, when scrapes run automatically, how to trigger one manually, daily quotas, and what to do when data looks stale.
trackagoat fetches TikTok data through a background scraper service. Data is not fetched in real time — it's collected on a schedule and stored so you can chart trends over time.
trackagoat is not affiliated with TikTok. Data is collected via Apify's public TikTok scraper. Stats reflect what Apify observed at scrape time and may be a few minutes to a few hours behind what you'd see on TikTok directly.
| Type | What it fetches | Runs for |
|---|---|---|
| Creator Profiles | Follower count, total likes, video count, bio, avatar | Each tracked creator |
| Video Stats | View, like, comment, share, and save counts | Each tracked video |
| Discover Videos | New videos posted by tracked creators since last check | Each tracked creator |
Each type runs on its own schedule and can be triggered independently.
Scrapes run automatically in the background. How often depends on your plan:
| Tier | Creator Profiles | Video Stats | Discover Videos |
|---|---|---|---|
| Free | Every 24 hours | Every 24 hours | Every 24 hours |
| Starter | Every 12 hours | Every 4 hours | Every 12 hours |
| Ultra | Every 6 hours | Every 1 hour | Every 4 hours |
These are minimum intervals: your org admin can configure longer ones, but never shorter than the tier floor. The exception is Ultra, which has no enforced floor.
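The floor behavior above can be sketched as a small clamp function. This is an illustrative sketch only: the `TIER_FLOORS` values mirror the schedule table, but the function and names are hypothetical, not trackagoat's actual implementation.

```python
# Hypothetical tier floors (hours) mirroring the schedule table above.
TIER_FLOORS = {
    "free":    {"creator_profiles": 24, "video_stats": 24, "discover_videos": 24},
    "starter": {"creator_profiles": 12, "video_stats": 4,  "discover_videos": 12},
    "ultra":   {"creator_profiles": 6,  "video_stats": 1,  "discover_videos": 4},
}

def effective_interval(tier: str, job: str, configured_hours: int) -> int:
    """Clamp an admin-configured interval to the tier floor.

    Admins may lengthen intervals but never shorten them below the floor.
    Ultra has no enforced floor, so its configured value is used as-is.
    """
    if tier == "ultra":
        return configured_hours
    floor = TIER_FLOORS[tier][job]
    return max(configured_hours, floor)
```

For example, a Starter admin who sets Video Stats to every 2 hours still gets the 4-hour floor, while a setting of 8 hours is respected.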
Scraping starts automatically once you add a creator. You don't need to do anything to get data flowing.
Org admins can trigger a scrape immediately without waiting for the next scheduled run. This is useful when you just added a creator and want fresh data right away, or when you know a big video just dropped.
Manual scrapes are triggered via the "Scrape Now" button. Clicking it opens a confirmation dialog; after you confirm, a toast indicates the scrape has started, and the page data updates automatically when the job completes (usually under a minute).
Manual scrapes count against a daily quota that resets at midnight UTC. Automated scheduled scrapes do not count against this quota — only manual "Scrape Now" triggers do.
| Tier | Manual scrapes per day |
|---|---|
| Free | 1 |
| Starter | 10 |
| Ultra | 100 |
Each confirmed "Scrape Now" counts as one scrape trigger against the quota.
If you reach your daily limit, the "Scrape Now" dialog shows "Daily limit reached" and the button is disabled until midnight UTC. Your org's Usage tab (visible to admins under Org Settings) shows how many you've used today.
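A minimal sketch of the quota check, assuming a hypothetical list of stored trigger timestamps. `MANUAL_QUOTA` mirrors the table above; the function name and real enforcement logic are assumptions.

```python
from datetime import datetime, timezone

# Manual "Scrape Now" quota per UTC day, from the table above.
MANUAL_QUOTA = {"free": 1, "starter": 10, "ultra": 100}

def can_trigger(tier: str, triggers: list[datetime], now: datetime) -> bool:
    """Return True if another manual "Scrape Now" is allowed right now.

    The quota window is the current UTC day, so it resets at midnight UTC.
    Only manual triggers are in `triggers` -- scheduled scrapes don't count.
    """
    midnight = now.astimezone(timezone.utc).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    used_today = sum(1 for t in triggers if t >= midnight)
    return used_today < MANUAL_QUOTA[tier]
```

Note that a trigger fired at 23:59 UTC stops counting against you one minute later, which is why the dialog unlocks at midnight UTC rather than 24 hours after your last trigger.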
- **Creator list** — the "Last Scraped" column shows when each creator's profile was last updated.
- **Video list** — each video shows a scrape tier badge indicating how often it's scheduled to be scraped. Hover the badge to see the next scheduled scrape time.
- **Video detail → Details tab → Scrape Tier** — shows the tier and next scheduled scrape.
- **Org Settings → Scraping tab** (admin only) — shows the last run time, status (OK / Partial / Failed), and processed count for each job type across the whole org.
Stats are snapshots, not live counters. A view count of 12,400 means that's what Apify saw the last time it scraped that video — not necessarily the current live count on TikTok.
Expected freshness after a successful scrape follows your tier's scrape intervals (see the schedule table above): at worst, a stat is one full interval old.
Data in charts is aggregated from the stored snapshot history. If a scrape hasn't run yet for a newly added creator or video, analytics will show no data until the first scrape completes.
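The aggregation step can be illustrated with a toy helper that turns stored cumulative snapshots into per-interval gains. The name `view_deltas` and the data shape are illustrative only, not trackagoat's API.

```python
def view_deltas(snapshots: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Turn cumulative view-count snapshots into per-interval gains.

    Each snapshot is (timestamp, total_views) as observed at scrape time;
    a "views gained" chart is just the successive differences.
    """
    return [
        (later_ts, later_views - earlier_views)
        for (_, earlier_views), (later_ts, later_views)
        in zip(snapshots, snapshots[1:])
    ]
```

This also shows why a newly added video charts nothing at first: with zero or one snapshots there are no differences to plot.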
If the scraper encounters an error (TikTok rate limit, Apify outage, handle not found), the job is retried automatically with exponential backoff.
A creator or video that fails several consecutive scrapes is flagged as stale in the list view.
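The retry-and-flag behavior might look like the sketch below. The base delay, cap, jitter range, and three-failure staleness threshold are assumptions for illustration, not documented trackagoat values.

```python
import random

def backoff_delay(attempt: int, base_seconds: float = 60.0,
                  cap_seconds: float = 3600.0) -> float:
    """Exponential backoff with jitter: ~1 min, 2 min, 4 min, ... capped.

    Jitter (a random factor in [0.5, 1.0]) spreads retries out so many
    failed jobs don't all hammer the upstream scraper at the same moment.
    """
    delay = min(base_seconds * (2 ** attempt), cap_seconds)
    return delay * random.uniform(0.5, 1.0)

# Hypothetical threshold for flagging an item as stale in the list view.
STALE_AFTER_FAILURES = 3

def is_stale(consecutive_failures: int) -> bool:
    return consecutive_failures >= STALE_AFTER_FAILURES
```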
If you notice a creator hasn't updated in a long time, check:

1. **Is the TikTok handle still valid?** The account may have been deleted, renamed, or made private.
2. **Does the org-wide Scraping tab show errors?** If all job types are failing, the scraper service itself may be down — contact support.
3. **Try a manual "Scrape Now".** If it succeeds, the prior failure was transient.
The min_*_hours scrape frequency settings are floors on scheduled scrapes — they don't limit how often you can do manual triggers (that's the max_manual_scrape_triggers_per_day quota above).
See Limits & plan tiers for all limit keys and how enforcement works.