
Conversation

Mico-rstar

  • refactor(Listening): maintain inners with a queue

  • debug(Listening): fix inners not being expanded correctly

  • refactor(poll): change batched notify of multiple packets to per-packet notify

  • feat: add unit test
github-actions bot added the enhancement label on Oct 9, 2025
fslongjin requested review from Samuka007 and Copilot on October 9, 2025 14:04
@fslongjin
Member

> Change batched notify of multiple packets to per-packet notify

Why do it this way?

Contributor

Copilot AI left a comment


Pull Request Overview

This PR refactors the TCP listening backlog implementation to use dynamic expansion instead of pre-allocated sockets. The core changes involve moving from a fixed Vec of bound inners to a queue-based approach that dynamically creates new listening sockets as needed.

  • Refactored TCP listening to use a VecDeque with dynamic socket creation
  • Modified polling to process packets individually with immediate notification
  • Added unit test for backlog functionality
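
A rough sketch of the data layout this refactor implies is shown below. The type name, field names, and queue discipline here are assumptions for illustration only, not the exact DragonOS definitions:

```rust
use std::collections::VecDeque;
use std::sync::RwLock;

// Stand-in for a bound smoltcp TCP socket handle (name assumed).
struct BoundInner;

// Listening state after the refactor: instead of pre-allocating `backlog`
// sockets, a queue of bound sockets is kept. The socket at the back waits
// for the next incoming connection; sockets that finish the three-way
// handshake accumulate in front of it until accept() pops them off.
struct Listening {
    inners: RwLock<VecDeque<BoundInner>>,
    backlog: usize, // upper bound on queued, not-yet-accepted connections
}
```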

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

Files changed:

  • user/apps/c_unitest/test_backlogc.c: new unit test for TCP backlog functionality
  • kernel/src/net/socket/inet/stream/inner.rs: refactored Listening struct to use dynamic socket management with VecDeque
  • kernel/src/driver/net/mod.rs: changed polling strategy from batch to single-packet processing


}
printf("the %dth connection\n", ++cnt);
close(*client_sockfd);


Copilot AI Oct 9, 2025


Memory leak: allocated client_sockfd is never freed after closing the socket. Add free(client_sockfd); after line 80.

Suggested change
free(client_sockfd);


let mut inners_guard = self.inners.write();
let inner = match inners_guard.back() {
    Some(inner) => inner,
    None => return debug!("the tcp socket inners is empty"),

Copilot AI Oct 9, 2025


Using debug! macro in a return statement is unconventional. Consider using a separate debug! call followed by return, or return an appropriate value/error.

Suggested change
None => return debug!("the tcp socket inners is empty"),
None => {
    debug!("the tcp socket inners is empty");
    return;
}


)),
) {
    Ok(inner) => inner,
    Err(e) => return debug!("bind err: {:#?}", e),

Copilot AI Oct 9, 2025


Using debug! macro in a return statement is unconventional. Consider using a separate debug! call followed by return, or return an appropriate value/error.


fslongjin requested a review from sparkzky on October 9, 2025 14:08
@fslongjin
Member

Also, what problem is this PR meant to solve? Are there any test cases or cases reproducing the problem?

@Mico-rstar
Author

> Also, what problem is this PR meant to solve? Are there any test cases or cases reproducing the problem?

The current TCP listening backlog implementation creates backlog sockets waiting for connections as soon as the sys_listen system call is executed, so it occupies a fixed amount of memory even while idle, and how much depends on the backlog size the user program specifies. This PR maintains the sockets with a queue instead: a socket that completes the TCP three-way handshake is pushed onto the queue, and calling accept pops a socket off it, so the queue size varies with the number of concurrent connections and with how quickly connections are processed (see the sketch below).
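
A minimal sketch of the accept side of that scheme, using plain std types and invented names purely for illustration (the kernel code operates on smoltcp socket handles and its own locks):

```rust
use std::collections::VecDeque;

// Invented stand-in for a bound TCP socket; `established` marks a completed
// three-way handshake.
struct Sock {
    established: bool,
}

// accept(): hand out the oldest fully established connection from the front
// of the queue.
fn accept(inners: &mut VecDeque<Sock>) -> Option<Sock> {
    // Only pop a socket that has completed the handshake; the socket at the
    // back that is still waiting for a connection never satisfies this.
    if inners.front().map_or(false, |s| s.established) {
        inners.pop_front()
    } else {
        None // nothing ready yet: block or report EAGAIN to the caller
    }
}
```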

@Mico-rstar
Author

Mico-rstar commented Oct 9, 2025

> Change batched notify of multiple packets to per-packet notify
>
> Why do it this way?

smoltcp does not provide a listen interface, so a new socket cannot be created automatically when a connection arrives; the current implementation therefore always keeps connection count + 1 smoltcp sockets bound. Per-packet processing makes it possible to bind a new socket in update_io_event in time for the next connection, as sketched below.
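
A sketch of what per-packet processing enables on the listening side, under the assumption that update_io_event runs after each processed packet; bind_new_socket and the Sock type are invented for illustration:

```rust
use std::collections::VecDeque;

// Invented stand-in for a bound TCP socket.
struct Sock {
    established: bool,
}

// Invented helper: bind a fresh smoltcp socket to the listening endpoint.
fn bind_new_socket() -> Sock {
    Sock { established: false }
}

// Imagined to run from update_io_event after each ingress packet. If the
// socket at the back of the queue has just completed the handshake, bind and
// push a new one so that the very next SYN still finds a socket to land on.
fn expand_if_needed(inners: &mut VecDeque<Sock>, backlog: usize) {
    let back_ready = inners.back().map_or(false, |s| s.established);
    if back_ready && inners.len() <= backlog {
        inners.push_back(bind_new_socket());
    }
}
```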

@Mico-rstar
Author

Mico-rstar commented Oct 9, 2025

> Also, what problem is this PR meant to solve? Are there any test cases or cases reproducing the problem?

Run test_backlogc on DragonOS.
Use an external TCP connection script with a concurrency of 5 and 20 connections in total.
(Screenshot: 2025-10-09 232743)

@fslongjin
Member

From a first look at this implementation, it feels like it could easily get stuck in poll for a long time if packets keep arriving.

@sparkzky @Samuka007 could you help review this?

Comment on lines +210 to +224
let mut sockets = self.sockets.lock_irqsave();
let result = interface.poll_ingress_single(timestamp, device, &mut sockets);
drop(sockets);
match result {
    smoltcp::iface::PollIngressSingleResult::None => break,
    smoltcp::iface::PollIngressSingleResult::PacketProcessed => {}
    smoltcp::iface::PollIngressSingleResult::SocketStateChanged => {
        self.bounds.read_irqsave().iter().for_each(|bound_socket| {
            bound_socket.notify();
            let _woke = bound_socket
                .wait_queue()
                .wakeup(Some(ProcessState::Blocked(true)));
        });
    }
}
Collaborator


Could you explain why you process packets one by one here?
The previous approach could process several packets in one go and then wake up the bound_sockets, so multiple sockets waiting for data could be woken at the same time.
With the current approach, every arriving packet wakes up the whole bound_sockets set, yet in the end only one socket actually receives data. Isn't that somewhat unnecessary?

Author


The previous approach could process several packets in one go precisely because backlog sockets were bound in advance; this PR aims for a mechanism similar to Linux's dynamically expanding wait_queue. I considered two solutions:

  1. Process packets one at a time, shrinking the processing granularity so that the upper-layer TCP socket can expand the queue in time for new connections.
  2. At the smoltcp level, provide a listen interface and maintain a wait_queue that grows dynamically with the number of connections (see the hypothetical sketch below).

Option 2 is more elegant, and it would also allow optimizing smoltcp's dispatch mechanism, for example by maintaining the sockets set with a hashmap, but it requires modifying smoltcp. Is there a better approach, or are there plans to optimize smoltcp?
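
For option 2, a purely hypothetical outline of what such an interface could look like; this trait is not an existing smoltcp API and only illustrates the proposal:

```rust
// Hypothetical trait, NOT part of smoltcp today: a listening endpoint that
// grows its own set of pending sockets on demand, instead of the caller
// binding connection count + 1 sockets up front.
trait ListenEndpoint {
    type Conn;

    // Start listening on a port, with an upper bound on pending connections.
    fn listen(&mut self, port: u16, backlog: usize);

    // Pop the next fully established connection, if one is ready.
    fn accept(&mut self, port: u16) -> Option<Self::Conn>;
}
```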

Comment on lines +531 to +532
Inner::Listening(listen) => Some(listen.inners.read().front().unwrap().iface().clone()),
Inner::Established(est) => Some(est.inner.iface().clone()),
Collaborator


Are you calling unwrap() directly here? Add a check for whether it is empty, and return None if it is.
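
One possible shape for that change, in the same spirit as the other suggested changes in this review (a sketch only, not checked against the actual types):

```rust
// If the queue is unexpectedly empty, return None instead of panicking.
Inner::Listening(listen) => listen
    .inners
    .read()
    .front()
    .map(|inner| inner.iface().clone()),
Inner::Established(est) => Some(est.inner.iface().clone()),
```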
