New Wasm workloads: Transformers.js ML sentiment analysis and Speech-to-Text #148
base: main
Conversation
✅ Deploy Preview for webkit-jetstream-preview ready!
I am getting the following error when running the deploy preview in Firefox:
My bad: the dynamic import of the ONNX runtime didn't work with the blob/preloading in the browser. Fixed.
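For context, a minimal sketch of the likely failure mode and one way to address it, assuming the workload script is executed from a Blob URL as part of the benchmark's preloading (the module path here is hypothetical, not the actual one used):

```js
// A relative dynamic import inside a script running from a blob: URL
// resolves against that blob: base and can fail, e.g. in Firefox:
//
//   const ort = await import('./ort-wasm.mjs'); // resolves to blob:.../ort-wasm.mjs

// One fix is to resolve the specifier against the page URL first,
// so the import uses an absolute http(s) URL:
const ortUrl = new URL('./build/ort-wasm.mjs', location.href).href; // hypothetical path
const ort = await import(ortUrl);
```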
Seems like both Bert and Whisper spend a majority of their time in whatever function that index is. The fact that Whisper does other things (and, IIUC, is more popular) makes it somewhat more interesting. On the other hand, the dominant function seems to be the same in either case, and making it run faster is a significant benefit.
These could be replacements for the tfjs workloads (even though the model file size issue remains). Run via `transformersjs-bert-wasm` and `transformersjs-whisper-wasm`; a sketch of the underlying tasks follows after the TODOs.

TODOs:
- Evaluate startup/model loading performance
- Take a CPU profile
- Decide whether the Whisper task is too long-running
- Compress model files on disk for repo size; modify the NPM server to handle those?
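For reference, a minimal sketch of how the two tasks map onto the Transformers.js pipeline API; the model ids and the audio input are illustrative assumptions, not necessarily what the workloads use:

```js
import { pipeline } from '@xenova/transformers';

// Sentiment analysis with a BERT-family model (model id is an assumption).
const classify = await pipeline(
  'sentiment-analysis',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english'
);
console.log(await classify('JetStream is a great benchmark.'));
// e.g. [{ label: 'POSITIVE', score: 0.99 }]

// Speech-to-text with Whisper (model id and audio file are assumptions).
const transcribe = await pipeline(
  'automatic-speech-recognition',
  'Xenova/whisper-tiny.en'
);
console.log(await transcribe('sample.wav'));
// e.g. { text: '...' }
```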