Subtitle Composer can do that: https://subtitlecomposer.kde.org/
Kdenlive also has a feature for this as far as I know, though I’ve never tested it: https://docs.kdenlive.org/en/effects_and_compositions/speech_to_text.html
If you’re willing to roll your own a bit, whisper.cpp is pretty good
I’m a university professor who uses whisper.cpp for video lecture transcriptions, so I’ll chime in here. The thing about whisper.cpp compared to pretty well every other option is that it is really, really good. The accuracy is almost always spot-on, and that’s just with the ‘medium’ model (the ‘large’ model is probably even better).
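For anyone curious, the basic invocation is just two commands. A minimal sketch, assuming a stock whisper.cpp build with the medium model downloaded (file names are placeholders; the binary is ./main in older builds and ./build/bin/whisper-cli in newer ones):

```bash
# whisper.cpp expects 16 kHz mono WAV, so convert the audio first
ffmpeg -i lecture.mp4 -ar 16000 -ac 1 -c:a pcm_s16le lecture.wav

# transcribe with the 'medium' model
./main -m models/ggml-medium.bin -f lecture.wav
```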
There is only one problem with whisper that I’ve found: if you use a heavily quantized model (I believe I’m using a 4-bit quantized one), whisper can get stuck in a “no punctuation mode” if that happens your transcription will suddenly start to look like this there will be no punctuation or capitalization it’s quite annoying once it gets into this mode it can’t get back out again
The way to get around that is to segment your audio. I use ffmpeg’s silence detector to cut the audio wherever there’s a pause longer than one second (so that I don’t accidentally cut in the middle of a sentence or a word). Break the audio up into roughly 10-minute segments and you shouldn’t see no-punctuation mode at all.
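Roughly like this; this is a sketch rather than my actual script, and the -30 dB noise threshold is an assumption you’d tune to your recordings:

```bash
# print silence_start/silence_end timestamps for pauses longer than 1 s
# (ffmpeg logs to stderr, hence the 2>&1)
ffmpeg -i lecture.wav -af silencedetect=noise=-30dB:d=1 -f null - 2>&1 \
  | grep 'silence_'

# then cut at a silence near the 10-minute mark (timestamp is a placeholder)
ffmpeg -i lecture.wav -t 601.5 -c copy part01.wav
ffmpeg -ss 601.5 -i lecture.wav -c copy part02.wav
```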
The other nice thing about Whisper is that it tags fragments with a confidence level and with start and end times. I use the confidence level to quickly jump to low-confidence points in the transcription and check whether it made a mistake (though it usually doesn’t). I use the start and end times to automatically generate an .srt subtitle file, then I use ffmpeg to bake in hardsubs for the students.
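That part is only a couple of commands. A sketch, assuming whisper.cpp’s subtitle output flags and an ffmpeg built with libass (most distro packages are):

```bash
# -osrt writes lecture.wav.srt next to the input; -pc colours the
# terminal output by token confidence, handy for spotting weak segments
./main -m models/ggml-medium.bin -f lecture.wav -osrt -pc

# burn the subtitles into the video as hardsubs
ffmpeg -i lecture.mp4 -vf subtitles=lecture.wav.srt -c:a copy lecture_subbed.mp4
```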
So far it’s been working very smoothly and quickly. Even on my crappy old GTX 1060, I get subtitles at about 2-3x real time, with almost no manual intervention.
Kdenlive apparently supports whisper. Check the link in the other comment.
I’d also recommend something based on whisper.
If you’re looking for live transcriptions: https://github.com/abb128/LiveCaptions
And I fiddled around with Vosk a year ago.
I wrote a TUI for whisper in bash for a journalist friend of mine for exactly this use case. It’s a bit hacky, but it’s a good place to start.
Whisper is your best bet for FOSS transcription. This is the most efficient implementation AFAIK: https://github.com/guillaumekln/faster-whisper.
How does whisper compare to Mozilla’s DeepSpeech?
From what I’ve heard they’re competitive for English, but I’ve never used DeepSpeech myself. Whisper has much more community support, so it’s probably easier to use overall.
Whisper is pretty good and open source; you just need to write your own script to do the automation.
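A minimal sketch of such a script, assuming whisper.cpp as the backend (the lectures/ folder, model path, and binary name are placeholders for your setup):

```bash
#!/usr/bin/env bash
set -euo pipefail

# batch-transcribe every lecture video in a folder
for video in lectures/*.mp4; do
  base="${video%.mp4}"
  # whisper.cpp wants 16 kHz mono WAV input
  ffmpeg -y -i "$video" -ar 16000 -ac 1 "$base.wav"
  # -osrt emits subtitles, -of sets the output basename
  ./main -m models/ggml-medium.bin -f "$base.wav" -osrt -of "$base"
done
```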
You can then run the transcripts through OpenAI to create a short summary of each lecture, or to extract highlights and key points.
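For example, a rough sketch with curl and jq against OpenAI’s chat completions endpoint (the model name and prompt are assumptions; it needs OPENAI_API_KEY exported and jq ≥ 1.6 for --rawfile):

```bash
# build the request body with the transcript inlined, send it, and
# extract the summary text from the response
jq -n --rawfile t lecture.txt \
  '{model: "gpt-4o-mini",
    messages: [{role: "user",
                content: ("Summarise this lecture as key points:\n\n" + $t)}]}' \
  | curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d @- \
  | jq -r '.choices[0].message.content' > lecture-summary.md
```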
You can then drop the transcripts and summaries into Obsidian to make them indexed and searchable, and use any of its plugins to make it even better.
And you can use Syncthing to sync it to your phone.