Thanks for forwarding.
I've tested it briefly and it works great!
And kudos to NYPL for putting this also on GitHub!
I think this is a great application of something I included in my Hourglass
Model: several ways of creating descriptive metadata (in this case:
speech-to-text + user-generated metadata) are combined so they cancel out
each other's weak points.
You can check out the full model here:
It would be great if NYPL could report on the results they get from this
project, at a conference or in one of the journals in the field.
2016-06-10 22:33 GMT+02:00 Nathan Coy <[log in to unmask]>:
> This seems like a pretty neat tool that I just stumbled across. It seems
> like it could be a pretty objective application of crowd sourcing for
> archives that have a lot of spoken content.
Manager Digitalisering & Acquisitie
VIAA vzw | Sassevaartstraat 46/209 | 9000 Gent | België | www.viaa.be
T: +32 9 298 05 01 | M: +32 474 25 04 67