fast-asd (Sieve) vs PySceneDetect
Both in the video & clipping category. Side-by-side — pick the one that fits your stack tonight.
Tells your video which person is actually talking. Powers auto-cropping for clips.
| rating | tested | cost | install | stars | updated |
|---|---|---|---|---|---|
| 3★ | — | free | sidecar | 82 | 1y ago |
Skip it if you aren't building your own video pipeline. Most creators should just pay OpusClip and skip the plumbing.
Finds every camera cut in your video automatically. Powers smart cropping + transitions.
| rating | tested | cost | install | stars | updated |
|---|---|---|---|---|---|
| 4★ | ✓ loya-tested | free | sidecar | 4,736 | 4d ago |
Skip it if you only work with single-camera talking-head footage; scene detection isn't useful there.
why it matters · fast-asd (Sieve)
If you want to take a multi-person podcast and auto-crop it to the vertical 9:16 format TikTok and Reels want, your pipeline needs to know WHO is talking at any given second. fast-asd figures that out by combining audio with lip-movement detection, so your crop follows the active speaker. The repo is stale (last updated mid-2024) but still works, and the pattern is still how every podcast clipper does speaker tracking under the hood. Python sidecar, MIT-licensed, free.
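The crop-following step is simple once you have speaker positions. A minimal sketch, assuming a hypothetical per-second list of speaker x-centers (the kind of output an ASD model like fast-asd produces; the real output format will differ):

```python
# Toy sketch: turn active-speaker positions into 9:16 crop windows.
# The input format here is hypothetical, not fast-asd's actual schema.

def crop_window(speaker_cx: float, frame_w: int, frame_h: int) -> tuple[int, int]:
    """Return (x0, x1) of a 9:16 crop centered on the active speaker."""
    crop_w = int(frame_h * 9 / 16)          # width of a 9:16 slice of the frame
    x0 = int(speaker_cx - crop_w / 2)       # center the crop on the speaker
    x0 = max(0, min(x0, frame_w - crop_w))  # clamp to frame bounds
    return x0, x0 + crop_w

# Per-second speaker centers for a 1920x1080 source (made-up numbers).
centers = [400.0, 420.0, 1500.0, 1480.0]
windows = [crop_window(cx, 1920, 1080) for cx in centers]
print(windows)
```

The clamp is what keeps the crop from sliding off-frame when the speaker stands near an edge; real clippers also smooth the window over time so it doesn't jitter between frames.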
why it matters · PySceneDetect
PySceneDetect scans any video and spits out the timestamp of every hard cut — the moment the camera switches. For multi-cam podcasts, that's the boundary you need so your 9:16 crop follows the active speaker without drifting on stale frames. Used in podcast-clipper crop pipelines alongside face tracking — same library Loya's LYRC export pipeline relies on for scene work. Free, Python, actively maintained (commits this week).
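The core idea is easy to see in miniature. A toy sketch of the content-difference approach behind PySceneDetect's `ContentDetector`, using flat lists of pixel values as stand-in frames (the real detector works on HSV deltas of decoded video frames):

```python
# Toy sketch: flag a cut wherever consecutive frames differ by more than
# a threshold. Frames here are flat lists of ints, not real video frames.

def find_cuts(frames: list[list[int]], threshold: float = 30.0) -> list[int]:
    """Return indices where frame i differs from frame i-1 by more than threshold."""
    cuts = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        # Mean absolute per-pixel difference between consecutive frames.
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            cuts.append(i)
    return cuts

# Three "shots": dark, bright, mid-gray. Cuts land at the shot boundaries.
frames = [[10] * 4] * 3 + [[200] * 4] * 3 + [[90] * 4] * 2
print(find_cuts(frames))  # → [3, 6]
```

With the actual library, the equivalent call is roughly `detect("video.mp4", ContentDetector())` from `scenedetect`, which returns start/end timecodes for each scene.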