Conversation

@Tunglies
Contributor

Adjust CSP nonce buffer size based on target pointer width

Pre-allocate the nonces Vec and reuse a single buffer with `write!`
instead of `format!`, reducing heap allocations when replacing tokens.
@Tunglies Tunglies requested a review from a team as a code owner October 21, 2025 13:04
@github-project-automation github-project-automation bot moved this to 📬Proposal in Roadmap Oct 21, 2025
@github-actions
Contributor

github-actions bot commented Oct 21, 2025

Package Changes Through 282ed06

There is 1 change, which includes tauri with a patch bump

Planned Package Versions

The following package releases are planned based on the changes in this pull request.

package   current   next
tauri     2.9.1     2.9.2

Add another change file through the GitHub UI by following this link.


Read about change files or the docs at github.com/jbolda/covector

This change introduces a performance optimization for Content Security Policy (CSP) nonce generation:

let mut nonces = Vec::with_capacity(asset.matches(token).count());
*asset = replace_with_callback(asset, token, || {
  #[cfg(target_pointer_width = "64")]
  let mut raw = [0u8; 8];
@lucasfernog do you remember why we used `usize` here in the first place (tracing back to cf54dcf)? It doesn't quite make sense to me

@vrmiguel vrmiguel Oct 29, 2025
Maybe I don't get the question, but the code is still using a `usize` there; it's just using its underlying byte representation instead, right?

In any case, seems like this code can be simplified down to:

let mut raw = [0u8; std::mem::size_of::<usize>()];
getrandom::fill(&mut raw).expect("failed to get random bytes");
let nonce = usize::from_ne_bytes(raw);

Or even:

let mut raw = 0_usize.to_ne_bytes();
getrandom::fill(&mut raw).expect("failed to get random bytes");
let nonce = usize::from_ne_bytes(raw);

I meant that it doesn't make sense to make the nonce length depend on the target pointer size

#[cfg(target_pointer_width = "32")]
let mut buf = String::with_capacity(20);
#[cfg(target_pointer_width = "16")]
let mut buf = String::with_capacity(14);

Are these capacities meant to be how many bytes to alloc to store 'nonce-{0..usize::MAX}'? If so, isn't it a bit oversized for 32 and 16 ptr widths? Feels like it should be 18 and 13 respectively, unless I'm missing something

If we want to be a bit fancy-pants (😛) we could use this helper function:

const fn nonce_source_capacity() -> usize {
    // quotes around the source + "nonce-" prefix (8 bytes) + decimal digits of usize::MAX
    8 + usize::MAX.ilog10() as usize + 1
}

(You can test that by replacing usize with u64, u32 and u16 respectively)

  buf.clear();
  write!(&mut buf, "'nonce-{}'", nonce).unwrap();
  sources.push(buf.clone());
}

Reusing `buf` doesn't seem helpful in this scenario if we clone it anyway: it means the code will allocate nonces.len() + 1 times

for nonce in nonces {
    let mut buf = String::with_capacity(nonce_capacity());
    write!(&mut buf, "'nonce-{}'", nonce).unwrap();
    sources.push(buf);
}

Something like that ^ would allocate n times rather than n+1 times

I honestly doubt these optimizations will make any real difference; `format!` itself is already quite good at estimating the buffer size.
