Potential Optimisations #31
Going Static
A lot of Synth's dynamic behaviour could be moved into generic parameters. Here's what a faster, generic Synth might look like:
```rust
Synth<M, O, W>
where
    M: Mode,          // Retrigger, Legato or Polyphonic
    O: Oscillator<W>, // Enveloped<W> or Normal<W>
    W: Waveform,      // Dynamic or Static<F> where F: Fn(phase) -> amp
```
Mode
- The way in which Synth handles incoming notes and voices differs greatly between modes. We should parameterise here rather than dynamically matching each time a difference would occur.
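As a rough sketch of what that parameterisation could look like (all names here are hypothetical, not the crate's existing API), each mode could be a zero-sized type implementing a small trait, so the note-handling strategy is resolved at compile time:

```rust
/// Hypothetical sketch: each mode is a zero-sized type, so the voice
/// handling strategy is resolved at compile time instead of being
/// matched against on every incoming note.
trait Mode {
    /// Assign a new note (as a frequency in hz) to the existing voices.
    fn note_on(voices: &mut Vec<f64>, hz: f64);
}

struct Retrigger;
struct Legato;
struct Polyphonic;

impl Mode for Retrigger {
    fn note_on(voices: &mut Vec<f64>, hz: f64) {
        // Restart every voice at the new pitch.
        for voice in voices.iter_mut() {
            *voice = hz;
        }
    }
}

impl Mode for Legato {
    fn note_on(voices: &mut Vec<f64>, hz: f64) {
        // Slide the existing voice to the new pitch without retriggering.
        if let Some(voice) = voices.first_mut() {
            *voice = hz;
        }
    }
}

impl Mode for Polyphonic {
    fn note_on(voices: &mut Vec<f64>, hz: f64) {
        // Spawn a fresh voice for the new note.
        voices.push(hz);
    }
}
```

A generic `Synth<M: Mode, ...>` would then call `M::note_on(...)`, letting the compiler inline the chosen strategy.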
Oscillator<W>
- One of the tightest bottlenecks within the Synth at the moment is the envelope interpolation. Making the synth's oscillator type generic would allow for either a high-performance, non-interpolating Normal oscillator or an enveloped oscillator (see the combined sketch after the Waveform section below).
Waveform
- Currently, Synth's Waveform type is highly dynamic and matches against 6 possible waveforms every time the phase is stepped forward. Changing this to a generic, compile-time-known type will surely provide higher performance.
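A rough sketch of how `Waveform` and `Oscillator<W>` might fit together once static (again, all names are hypothetical rather than the crate's current API): the `Static<F>` waveform lets the closure be inlined and monomorphised where the `Dynamic` enum has to branch on every phase step, and `Normal<W>` skips envelope work entirely while `Enveloped<W>` only pays for it when used.

```rust
/// Hypothetical sketch: a waveform maps a phase in 0.0..1.0 to an amplitude.
trait Waveform {
    fn amp(&self, phase: f64) -> f64;
}

/// Dynamic case: every phase step branches on the variant
/// (the real type matches against 6 of these).
enum Dynamic {
    Sine,
    Saw,
    Square,
}

impl Waveform for Dynamic {
    fn amp(&self, phase: f64) -> f64 {
        match self {
            Dynamic::Sine => (phase * std::f64::consts::TAU).sin(),
            Dynamic::Saw => phase * 2.0 - 1.0,
            Dynamic::Square => if phase < 0.5 { 1.0 } else { -1.0 },
        }
    }
}

/// Static case: the waveform function is known at compile time, so the
/// call can be inlined and the per-step match disappears.
struct Static<F: Fn(f64) -> f64>(F);

impl<F: Fn(f64) -> f64> Waveform for Static<F> {
    fn amp(&self, phase: f64) -> f64 {
        (self.0)(phase)
    }
}

/// Oscillators generic over their waveform: `Normal` does no envelope
/// interpolation, `Enveloped` scales the amplitude by an envelope gain.
struct Normal<W: Waveform> {
    waveform: W,
}

struct Enveloped<W: Waveform> {
    waveform: W,
    envelope: fn(f64) -> f64, // placeholder for the interpolated envelope
}

impl<W: Waveform> Normal<W> {
    fn amp_at(&self, phase: f64) -> f64 {
        self.waveform.amp(phase)
    }
}

impl<W: Waveform> Enveloped<W> {
    fn amp_at(&self, phase: f64) -> f64 {
        self.waveform.amp(phase) * (self.envelope)(phase)
    }
}
```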
We could still offer a DynamicSynth enum type which could wrap each different kind of generic synth type in a variant.
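That wrapper might look something like the following sketch (hypothetical waveform and synth types): each variant holds one fully monomorphised synth, so dynamic dispatch happens once per call at the enum level rather than inside the per-sample loop.

```rust
/// Hypothetical sketch: a generic synth plus a dynamic wrapper over its
/// concrete instantiations, for callers that must choose at runtime.
trait Waveform {
    fn amp(&self, phase: f64) -> f64;
}

struct SineWave;
struct SawWave;

impl Waveform for SineWave {
    fn amp(&self, phase: f64) -> f64 {
        (phase * std::f64::consts::TAU).sin()
    }
}

impl Waveform for SawWave {
    fn amp(&self, phase: f64) -> f64 {
        phase * 2.0 - 1.0
    }
}

struct Synth<W: Waveform> {
    waveform: W,
    phase: f64,
}

impl<W: Waveform> Synth<W> {
    fn next_amp(&mut self, phase_step: f64) -> f64 {
        self.phase = (self.phase + phase_step) % 1.0;
        self.waveform.amp(self.phase)
    }
}

/// One variant per concrete synth type; the match happens once per call,
/// not once per waveform branch inside the inner loop.
enum DynamicSynth {
    Sine(Synth<SineWave>),
    Saw(Synth<SawWave>),
}

impl DynamicSynth {
    fn next_amp(&mut self, phase_step: f64) -> f64 {
        match self {
            DynamicSynth::Sine(s) => s.next_amp(phase_step),
            DynamicSynth::Saw(s) => s.next_amp(phase_step),
        }
    }
}
```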
Moving Portamento to a generic parameter
A synth with some portamento duration is significantly more expensive than one without. This is because a portamento with an arbitrary duration requires the following extra work per frame:
- Stepping the current portamento duration and checking if it has exceeded the target duration.
- Linear interpolation between the start and end Mels depending on current duration.
- Converting the interpolated value from mel scale to hz.
- Dividing the hz by the base pitch to finally get the frequency multiplier.
Compare this to non-portamento, where none of the above is required per frame; instead, the target frequency multiplier can be calculated once upon the given note_on.
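A sketch of how that split might look as a generic parameter (hypothetical names; the mel conversion below uses the standard 700 * (10^(mel/2595) - 1) inversion): the no-portamento case reduces to returning a multiplier fixed at note_on, while the sliding case pays for the interpolation and mel-to-hz conversion on every frame.

```rust
/// Hypothetical sketch: portamento behaviour as a generic parameter.
trait Portamento {
    /// The frequency multiplier for the current frame.
    fn freq_multiplier(&mut self) -> f64;
}

/// No portamento: the multiplier is computed once at `note_on`.
struct Fixed {
    multiplier: f64,
}

impl Portamento for Fixed {
    fn freq_multiplier(&mut self) -> f64 {
        self.multiplier
    }
}

/// Standard mel-scale inversion (hypothetical helper for this sketch).
fn mel_to_hz(mel: f64) -> f64 {
    700.0 * (10.0f64.powf(mel / 2595.0) - 1.0)
}

/// Some portamento: per frame we step the slide, interpolate in mel
/// space, convert back to hz and divide by the base pitch.
struct MelSlide {
    frame: f64,
    duration_frames: f64,
    start_mel: f64,
    target_mel: f64,
    base_hz: f64,
}

impl Portamento for MelSlide {
    fn freq_multiplier(&mut self) -> f64 {
        // Step the current duration and clamp once the target is exceeded.
        self.frame += 1.0;
        let t = (self.frame / self.duration_frames).min(1.0);
        // Linearly interpolate between the start and end mels.
        let mel = self.start_mel + t * (self.target_mel - self.start_mel);
        // Convert to hz and divide by the base pitch for the multiplier.
        mel_to_hz(mel) / self.base_hz
    }
}
```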
Working Buffer
At the moment, Synth allocates a new audio buffer each time audio is requested so that it may retrieve audio from each of the voices. It should really have an owned working Buffer which it zeroes out between each voice.
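A minimal sketch of that change, assuming hypothetical Voice and audio-request signatures: the working buffer is allocated (or grown) once, then zeroed and reused for each voice instead of allocating per request.

```rust
/// Hypothetical sketch: the synth owns one scratch buffer and re-zeroes
/// it for each voice, rather than allocating a fresh buffer on every
/// audio request.
struct Voice;

impl Voice {
    /// Stand-in for the real per-voice rendering.
    fn fill(&mut self, buffer: &mut [f32]) {
        for sample in buffer.iter_mut() {
            *sample += 0.1; // placeholder; real synthesis writes audio here
        }
    }
}

struct Synth {
    voices: Vec<Voice>,
    /// Owned working buffer, allocated once and reused.
    working: Vec<f32>,
}

impl Synth {
    fn audio_requested(&mut self, output: &mut [f32]) {
        // Grow the working buffer if needed; no per-request allocation.
        if self.working.len() < output.len() {
            self.working.resize(output.len(), 0.0);
        }
        let working = &mut self.working[..output.len()];
        for voice in self.voices.iter_mut() {
            // Zero the scratch buffer between voices, then mix the voice in.
            for sample in working.iter_mut() {
                *sample = 0.0;
            }
            voice.fill(working);
            for (out, sample) in output.iter_mut().zip(working.iter()) {
                *out += *sample;
            }
        }
    }
}
```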