fast-asd (Sieve) vs LR-ASD
Both in the video & clipping category. Side-by-side — pick the one that fits your stack tonight.
fast-asd (Sieve): Tells your video which person is actually talking. Powers auto-cropping for clips.
- rating: 3★
- tested: —
- cost: free
- install: sidecar
- stars: 82
- updated: 1y ago
Skip if you aren't building your own video pipeline: most creators should just pay OpusClip and skip the plumbing.
LR-ASD: The 2025 state-of-the-art for 'which face is actually talking.' Fast, tiny, accurate.
- rating: 4★
- tested: —
- cost: free
- install: sidecar
- stars: 109
- updated: 1y ago
Skip if you're not building a pipeline yourself: this is a research model, not a product.
why it matters · fast-asd (Sieve)
If you want to take a multi-person podcast and auto-crop it to the vertical 9:16 format TikTok and Reels want, the video needs to know WHO is talking at any given second. fast-asd figures that out by fusing audio with lip-movement detection, so your crop follows the active speaker. The repo is stale (last updated mid-2024) but still works, and the pattern is still how every podcast clipper does speaker tracking under the hood. Python sidecar, MIT, free.
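The crop step itself is simple arithmetic once the ASD model has told you where the active speaker is. A minimal sketch, assuming you already have the talking face's x-center from a detector like fast-asd (the box format is an assumption, not fast-asd's actual output schema):

```python
def vertical_crop(frame_w, frame_h, speaker_cx, aspect=(9, 16)):
    """Compute the x-range of a full-height vertical crop centered on the
    active speaker. speaker_cx is the x-center of the talking face's
    bounding box (hypothetical ASD output, not fast-asd's real schema)."""
    # Full-height 9:16 window, rounded down to an even pixel width
    # (most video encoders want even dimensions).
    crop_w = frame_h * aspect[0] // aspect[1]
    crop_w -= crop_w % 2
    x0 = int(speaker_cx - crop_w / 2)
    x0 = max(0, min(x0, frame_w - crop_w))  # clamp inside the frame
    return x0, x0 + crop_w

# 1080p frame, speaker near the left edge: the window pins to x=0
print(vertical_crop(1920, 1080, 200))  # → (0, 606)
```

Feed the returned x-range per frame (or per detected speaker change) into your crop filter and the 9:16 window tracks whoever is talking.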
why it matters · LR-ASD
LR-ASD is the newest open-source active speaker detection model (Springer IJCV 2025 paper). It tells your video pipeline which person in a multi-face frame is actually talking. Accuracy beats the older TalkNet approach and it's 23 times lighter — fast enough to run on every frame, not just samples. If you're building your own clipping or auto-crop pipeline and accuracy matters more than a pre-built library, this is the one to drop in. MIT, free, Python.
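Because LR-ASD is cheap enough to score every face in every frame, the remaining work in your pipeline is picking a winner per frame without the crop jittering on single-frame blips. A sketch of that selection step, with placeholder scores standing in for the model's per-face outputs (the values and the `{face_id: score}` shape are assumptions, not LR-ASD's real API):

```python
def active_speaker_track(frame_scores, alpha=0.6):
    """Pick the active speaker per frame from per-face ASD scores.

    frame_scores: list of {face_id: score} dicts, one per frame; scores
    are placeholders for a model like LR-ASD's per-face outputs.
    An exponential moving average smooths each face's score so a
    one-frame spike doesn't yank the crop to the wrong person.
    """
    smoothed = {}
    track = []
    for scores in frame_scores:
        for face, s in scores.items():
            smoothed[face] = alpha * smoothed.get(face, s) + (1 - alpha) * s
        track.append(max(smoothed, key=smoothed.get))
    return track

frames = (
    [{"A": 0.9, "B": 0.1}] * 2      # A is talking
    + [{"A": 0.2, "B": 0.95}]        # one-frame blip toward B
    + [{"A": 0.9, "B": 0.1}]         # A again
    + [{"A": 0.2, "B": 0.95}] * 2    # B actually takes over
)
print(active_speaker_track(frames))  # → ['A', 'A', 'A', 'A', 'B', 'B']
```

The blip at frame 3 is suppressed, but a sustained switch still flips the track after a frame or two; tune `alpha` against your frame rate.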