Description
I'm trying to do WASAPI loopback capture, but it's not working: even though my input stream is built and playing, the stream callback never gets called. Why could that be?
It prints this:
[DEBUG] Try capturing system audio
[INFO ] Capturing audio from: Line (Steinberg UR22mkII )
[INFO ] Default audio cfg: SupportedStreamConfig { channels: 2, sample_rate: SampleRate(44100), buffer_size: Unknown, sample_format: F32 }
build_stream F32
stream is playing
still playing
still playing
still playing
But it should ALSO print `wave_reader called` (from inside the `wave_reader` stream callback). It never does, so I'm not receiving any audio frames through the channel in my application.
I based my code on the implementation of loopback capture in https://github.com/dheijl/swyh-rs:
https://github.com/dheijl/swyh-rs/blob/e6709b2f546af6cf272ded9529f6dc5d95145453/src/main.rs#L849
In swyh it apparently works (but I can't test that application because I don't have a streaming server/receiver).
I just refactored the code a little bit, but apart from that it's the same.
Any idea why the stream callback (`wave_reader`) doesn't get called?
(Note: on Linux, `wave_reader` does get called and I receive audio frames over the channel. They are all 0.0, but that's a different issue, probably a Linux issue rather than a cpal issue?)
use super::*;
use anyhow::{bail, Context, Result};
use cpal::{Device, Sample, SampleFormat, Stream, StreamError, SupportedStreamConfig};
use device::*;
use std::{
sync::{
atomic::{AtomicBool, Ordering},
mpsc::channel,
Arc,
},
thread,
};
pub fn capture_live_audio_input(
m_audio_device: Option<String>,
terminate_capture: Arc<AtomicBool>,
) -> Result<(BidirAudioChannel, AudioFormat)> {
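    // One end of the duplex channel (audio_to_vis) is moved into the stream
    // callback; the other end (vis_to_audio) is handed back to the caller so
    // the visualizer can receive sample buffers and return them for reuse.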
let (audio_to_vis, vis_to_audio) = BidirAudioChannel::duplex_unbounded();
// spawn new thread as workaround: https://github.com/RustAudio/rodio/issues/214
let (tx, rx) = channel();
let _thread_capture = thread::Builder::new()
.name("capture".to_string())
.spawn(move || -> Result<_> {
let output_device = match m_audio_device {
Some(name) => unwropt!(
get_audio_output_devices()
.into_iter()
.find_map(|device| (device.name().ok().as_deref() == Some(&name)).then_some(device)),
"audio device not found"
),
None => unwropt!(get_default_audio_output_device(), "no default audio device"),
};
// capture system audio
debug!("Try capturing system audio");
let (stream, audio_format) =
capture_output_audio(&output_device, audio_to_vis).context("Could not capture audio")?;
tx.send(audio_format).unwrap();
stream.play().context("stream.play()")?;
println!("stream is playing");
// stream shouldn't be dropped, but it's not Send, so we need to keep this thread alive
while !terminate_capture.load(Ordering::SeqCst) {
std::thread::sleep(std::time::Duration::from_millis(100));
println!("still playing");
}
Ok(())
})
.expect("spawn");
let audio_format = rx.recv()?;
Ok((vis_to_audio, audio_format))
}
/// capture the audio stream from the default audio output device
/// sets up an input stream for the wave_reader in the appropriate format (f32/i16/u16)
fn capture_output_audio(
output_device: &Device,
audio_to_vis: BidirAudioChannel,
) -> Result<(Stream, AudioFormat)> {
info!(
"Capturing audio from: {}",
output_device.name().context("Could not get default audio device name")?
);
let audio_cfg = output_device.default_output_config().context("No default output config found")?;
let audio_format =
AudioFormat { sample_rate: audio_cfg.sample_rate().0, channel_count: audio_cfg.channels() as usize };
info!("Default audio cfg: {:?}", audio_cfg);
fn build_stream<T: Sample>(
output_device: &Device,
audio_cfg: SupportedStreamConfig,
audio_format: AudioFormat,
audio_to_vis: BidirAudioChannel,
) -> Result<(Stream, AudioFormat)> {
println!("build_stream {:?}", T::FORMAT);
fn wave_reader<T: Sample>(samples: &[T], audio_to_vis: &BidirAudioChannel) {
println!("wave_reader called");
let mut f32_samples =
audio_to_vis.rx.try_recv().unwrap_or_else(|_| Vec::with_capacity(samples.len()));
f32_samples.clear();
f32_samples.extend(samples.iter().map(|x| x.to_f32()));
println!("wave_reader {}", f32_samples.len());
let _ = audio_to_vis.tx.send(f32_samples);
}
match output_device.build_input_stream(
&audio_cfg.config(),
move |data, _: &_| wave_reader::<T>(data, &audio_to_vis),
|e| error!("Error on audio input stream: {}", e),
) {
Ok(stream) => Ok((stream, audio_format)),
Err(e) => {
bail!("Error capturing {:?} audio stream: {}", T::FORMAT, e);
}
}
}
match audio_cfg.sample_format() {
SampleFormat::F32 => build_stream::<f32>(output_device, audio_cfg, audio_format, audio_to_vis),
SampleFormat::I16 => build_stream::<i16>(output_device, audio_cfg, audio_format, audio_to_vis),
SampleFormat::U16 => build_stream::<u16>(output_device, audio_cfg, audio_format, audio_to_vis),
}
}
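For context, here is roughly what the consuming side looks like. The `BidirAudioChannel` shown here is only a hypothetical stand-in (two crossed unbounded mpsc channels), since the real type lives elsewhere in my crate; the point is just how frames flow out of `wave_reader` and how buffers are sent back for reuse.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Hypothetical stand-in for BidirAudioChannel: two crossed unbounded mpsc
// channels, so each end can both send and receive Vec<f32> buffers.
pub struct BidirAudioChannel {
    pub tx: Sender<Vec<f32>>,
    pub rx: Receiver<Vec<f32>>,
}

impl BidirAudioChannel {
    pub fn duplex_unbounded() -> (Self, Self) {
        let (tx_a, rx_b) = channel();
        let (tx_b, rx_a) = channel();
        (Self { tx: tx_a, rx: rx_a }, Self { tx: tx_b, rx: rx_b })
    }
}

// Consuming end (visualizer side): block on frames produced by wave_reader
// and hand each buffer back so its allocation can be reused.
fn drain_frames(vis_to_audio: &BidirAudioChannel) {
    while let Ok(samples) = vis_to_audio.rx.recv() {
        println!("received {} samples", samples.len());
        let _ = vis_to_audio.tx.send(samples); // return the buffer for reuse
    }
}
```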
Btw, if you're wondering why I'm spawning a new thread (compared to swyh): if I don't create the stream on a separate thread, I get this:
Os { code: -2147417850, kind: Other, message: "Cannot change thread mode after it is set." }', C:\Users\me\.cargo\registry\src\github.com-1ecc6299db9ec823\cpal-0.13.1\src\host\wasapi\com.rs:13:77
It's this issue: RustAudio/rodio#214
My application uses winit, and apparently this combination causes this issue:
RustAudio/rodio#214 (comment)
rust-windowing/winit#1185
That's why I'm spawning a new thread for the stream (which I'm then keeping alive to prevent the stream from getting dropped).
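In isolation, the workaround looks roughly like this (just a sketch, using cpal 0.13's three-argument `build_input_stream` and assuming the default output config is F32, as in the log above):

```rust
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::thread;

fn spawn_capture_thread(stop: Arc<AtomicBool>) -> thread::JoinHandle<anyhow::Result<()>> {
    thread::spawn(move || -> anyhow::Result<()> {
        // All WASAPI/COM work happens on this dedicated thread, so it can't
        // clash with the COM apartment mode that winit has already set on the
        // main thread.
        let device = cpal::default_host()
            .default_output_device()
            .ok_or_else(|| anyhow::anyhow!("no default output device"))?;
        let config = device.default_output_config()?;
        // Building an *input* stream on an *output* device is how swyh-rs
        // (and my code above) requests WASAPI loopback capture.
        let stream = device.build_input_stream(
            &config.config(),
            |data: &[f32], _: &cpal::InputCallbackInfo| {
                println!("callback got {} samples", data.len());
            },
            |e| eprintln!("stream error: {}", e),
        )?;
        stream.play()?;
        // The Stream is !Send, so it has to stay alive (and be dropped) on
        // this thread; sleep-loop here until asked to stop.
        while !stop.load(Ordering::SeqCst) {
            thread::sleep(std::time::Duration::from_millis(100));
        }
        Ok(())
    })
}
```

The real setup in my code is `capture_output_audio` above; the point is only that the stream is both built and played off the winit thread, and that the thread is kept alive so the stream isn't dropped.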
But that's the only difference from swyh that I see, so I'm wondering why the stream callback is not being called.