feat: multi-url support #59
base: main
Conversation
The authentication tests (AuthBasicSuccess, AuthBasicFailed, AuthTokenSuccess, AuthTokenFailed) were failing due to protocol and port mismatches: the dummy HTTP server was starting on `HttpPort` without proper HTTPS support, but the tests were trying to connect using the HTTPS protocol.

Changes:
- Changed all auth tests from `https://` to `http://` protocol
- Changed `HttpsPort` references to `HttpPort` for consistent port matching
- Removed `tls_verify=unsafe_off` parameters (not needed with plain HTTP)
- Added missing `.StopAsync()` calls for proper server cleanup

All 10 authentication tests now pass successfully.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
> ⚠️ Review failed: an error occurred during the review process. Please try again later.

Walkthrough

Adds multi-address HTTP sending with per-address HttpClient/handler caching and failover rotation, extends buffer/sender APIs with Guid and char column types (and nullable variants), scopes dummy HTTP server state per port for concurrent instances, and introduces Docker-backed QuestDB integration tests and supporting test suites.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Sender Client
    participant HttpSender as HttpSender
    participant AddrProv as AddressProvider
    participant Cache as ClientCache
    participant ServerA as Server A
    participant ServerB as Server B
    Client->>HttpSender: SendAsync(payload)
    HttpSender->>AddrProv: CurrentAddress
    AddrProv-->>HttpSender: hostA:portA
    HttpSender->>Cache: GetClientFor(hostA:portA)
    Cache-->>HttpSender: HttpClient(hostA)
    HttpSender->>ServerA: POST /write
    ServerA-->>HttpSender: 500 RetriableError
    Note over HttpSender,AddrProv: On retriable error -> rotate and retry
    HttpSender->>AddrProv: RotateToNextAddress()
    AddrProv-->>HttpSender: hostB:portB
    HttpSender->>Cache: GetClientFor(hostB:portB)
    Cache-->>HttpSender: HttpClient(hostB)
    HttpSender->>ServerB: POST /write
    ServerB-->>HttpSender: 200 OK
    HttpSender-->>Client: Success
```
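The rotate-and-retry flow in the diagram can be sketched in miniature. This is an illustrative simplification, not the sender's actual code: `AddressProvider` and `RotateToNextAddress` mirror names from this PR, while `SendWithFailover` and `trySend` are hypothetical stand-ins for the HTTP POST.

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the PR's AddressProvider: round-robin over a fixed list.
public class AddressProvider
{
    private readonly List<string> _addresses;
    private int _currentIndex;

    public AddressProvider(List<string> addresses)
    {
        if (addresses == null || addresses.Count == 0)
            throw new ArgumentException("At least one address is required");
        _addresses = new List<string>(addresses); // defensive copy
    }

    public string CurrentAddress => _addresses[_currentIndex];

    public void RotateToNextAddress()
        => _currentIndex = (_currentIndex + 1) % _addresses.Count;
}

public static class FailoverDemo
{
    // Tries the current address; on a retriable failure rotates to the next one.
    // trySend is a stand-in for the HTTP POST; returns the address that accepted.
    public static string SendWithFailover(
        AddressProvider provider, Func<string, bool> trySend, int maxRetries)
    {
        for (var attempt = 0; attempt <= maxRetries; attempt++)
        {
            var address = provider.CurrentAddress;
            if (trySend(address))
                return address; // success
            provider.RotateToNextAddress(); // retriable error: move to next server
        }
        throw new InvalidOperationException("All addresses failed");
    }
}
```

With `hostA` returning retriable errors and `hostB` healthy, one rotation lands the write on `hostB`, matching the diagram's Server A → Server B path.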
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 4
🧹 Nitpick comments (9)
src/dummy-http-server/IlpEndpoint.cs (1)
127-128: Thread safety concern with returned mutable collections.
`GetReceiveBuffer` and `GetReceiveBytes` return mutable `StringBuilder` and `List<byte>` instances without synchronization. Callers modifying these collections concurrently with `HandleAsync` could cause race conditions. This is likely acceptable for test infrastructure, where tests typically wait for server operations to complete before accessing buffers. Verify that tests consistently await server operations before accessing these buffers to avoid data races.
src/net-questdb-client/Utils/AddressProvider.cs (1)
97-130: IPv6 address parsing may fail.
Using `LastIndexOf(':')` will incorrectly parse IPv6 addresses like `[::1]:9000` or `::1:9000`, treating the final colon as the port separator.
Consider handling bracketed IPv6 notation:

```diff
 public static string ParseHost(string address)
 {
     if (string.IsNullOrEmpty(address))
         return address;

+    // Handle bracketed IPv6: [::1]:9000
+    if (address.StartsWith("["))
+    {
+        var closeBracket = address.IndexOf(']');
+        if (closeBracket > 0)
+            return address.Substring(1, closeBracket - 1);
+    }
+
     var index = address.LastIndexOf(':');
     if (index > 0)
     {
         return address.Substring(0, index);
     }
     return address;
 }
```

src/net-questdb-client/Senders/HttpSender.cs (2)
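A self-contained sketch of bracket-aware host/port parsing along the lines suggested in the review. The method names mirror `ParseHost`/`ParsePort` from the PR, but this is an illustrative reimplementation, not the PR's exact code:

```csharp
using System;

public static class AddressParsing
{
    // Returns the host part of "host:port", handling bracketed IPv6 like "[::1]:9000".
    public static string ParseHost(string address)
    {
        if (string.IsNullOrEmpty(address))
            return address;

        // Bracketed IPv6: take everything between '[' and ']'
        if (address.StartsWith("["))
        {
            var closeBracket = address.IndexOf(']');
            if (closeBracket > 0)
                return address.Substring(1, closeBracket - 1);
        }

        var index = address.LastIndexOf(':');
        return index > 0 ? address.Substring(0, index) : address;
    }

    // Returns the port, or defaultPort when none is present.
    public static int ParsePort(string address, int defaultPort)
    {
        if (string.IsNullOrEmpty(address))
            return defaultPort;

        // Only a colon after the closing bracket (IPv6) counts as the port separator.
        var searchFrom = address.StartsWith("[") ? address.IndexOf(']') : 0;
        if (searchFrom < 0)
            return defaultPort;

        var colon = address.IndexOf(':', searchFrom);
        if (colon >= 0 && int.TryParse(address.Substring(colon + 1), out var port))
            return port;
        return defaultPort;
    }
}
```

Note that unbracketed IPv6 like `::1:9000` stays ambiguous (is `9000` a port or a hextet?), which is why the bracketed form is the one worth supporting.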
269-282: Double assignment of `_client` field.
`GetClientForCurrentAddress` assigns `_client` at line 280 and also returns the client. The caller (e.g., line 167, 201) also assigns to `_client`. This redundancy is harmless but adds confusion.
Consider either removing the assignment inside this method or having callers use the return value without reassigning:

```diff
 private HttpClient GetClientForCurrentAddress()
 {
     var address = _addressProvider.CurrentAddress;
     if (!_clientCache.TryGetValue(address, out var client))
     {
         client = CreateClientForAddress(address);
         _clientCache[address] = client;
     }
-    _client = client;
     return client;
 }
```
288-308: CleanupUnusedClients disposes clients that may have in-flight connections.
When a successful response is received, `CleanupUnusedClients` disposes all HttpClient instances except the current one. However, these clients share the same underlying `SocketsHttpHandler`, and disposing them while connections may still be pooled could cause issues. Additionally, if the current address fails on the next request, new clients will need to be recreated for all other addresses.
Consider keeping clients alive for a configurable period or only cleaning up on disposal:

```diff
-private void CleanupUnusedClients()
+// Optional: remove aggressive cleanup to allow faster failover on subsequent failures
+private void CleanupUnusedClients()
 {
-    if (!_addressProvider.HasMultipleAddresses)
-    {
-        return;
-    }
-
-    var currentAddress = _addressProvider.CurrentAddress;
-    var addressesToRemove = _clientCache.Keys
-        .Where(address => address != currentAddress)
-        .ToList();
-
-    foreach (var address in addressesToRemove)
-    {
-        if (_clientCache.TryGetValue(address, out var client))
-        {
-            client.Dispose();
-            _clientCache.Remove(address);
-        }
-    }
+    // Keep all clients alive for potential future failover
+    // Clients will be disposed in Dispose()
 }
```

src/net-questdb-client-tests/QuestDbManager.cs (2)
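The lazy per-address caching pattern these comments discuss can be sketched generically. `ClientCache` and `GetOrCreate` are hypothetical names for illustration, not the PR's actual types; a real sender would hold one `HttpClient` per address this way:

```csharp
using System;
using System.Collections.Generic;

// Lazy per-key cache: create on first use, reuse afterwards, dispose on teardown.
public class ClientCache<TClient> : IDisposable where TClient : IDisposable
{
    private readonly Dictionary<string, TClient> _cache = new();
    private readonly Func<string, TClient> _factory;

    public ClientCache(Func<string, TClient> factory) => _factory = factory;

    public TClient GetOrCreate(string address)
    {
        if (!_cache.TryGetValue(address, out var client))
        {
            client = _factory(address); // e.g. an HttpClient bound to this address
            _cache[address] = client;
        }
        return client;
    }

    public int Count => _cache.Count;

    public void Dispose()
    {
        // Single owner: disposal happens exactly once, here.
        foreach (var client in _cache.Values)
            client.Dispose();
        _cache.Clear();
    }
}
```

Keeping disposal only in `Dispose()` (rather than after every successful send) avoids the in-flight-connection concern raised above, at the cost of holding one client per address for the sender's lifetime.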
33-39: HttpClient is not disposed if constructor throws.
If the constructor throws after creating `_httpClient`, the client won't be disposed. While unlikely, consider wrapping in try-catch or using a factory pattern.
228-232: Null-conditional on `_httpClient` is unnecessary.
The `_httpClient` field is assigned in the constructor and is never null. The `?.` operator on line 231 is defensive but misleading since it suggests the field could be null.

```diff
 public async ValueTask DisposeAsync()
 {
     await StopAsync();
-    _httpClient?.Dispose();
+    _httpClient.Dispose();
 }
```

src/net-questdb-client-tests/MultiUrlHttpTests.cs (1)
131-158: Test name suggests round-robin but tests first-address preference.
The test `MultipleAddresses_RoundRobinRotation` asserts that data goes to server 1, which validates first-address preference rather than round-robin distribution. Consider renaming or adding assertions that verify rotation actually occurs after failures.

```diff
 [Test]
-public async Task MultipleAddresses_RoundRobinRotation()
+public async Task MultipleAddresses_FirstAddressPreferred()
 {
-    // Test round-robin rotation across multiple addresses
+    // Test that first available address is used when all are healthy
```

src/net-questdb-client/Utils/SenderOptions.cs (1)
602-606: Consider limiting split to handle edge cases.
If an address value ever contains an `=` character (though unlikely), the current `Split('=')` would yield more than 2 parts and the address would be silently ignored.
Apply this diff for defensive parsing:

```diff
-var kvp = param.Split('=');
-if (kvp.Length == 2 && kvp[0].Trim() == "addr")
+var kvp = param.Split('=', 2);
+if (kvp.Length == 2 && kvp[0].Trim().Equals("addr", StringComparison.OrdinalIgnoreCase))
```

Note: Also consider case-insensitive key matching for consistency with typical connection string handling.
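The difference between `Split('=')` and `Split('=', 2)` is easy to demonstrate in isolation. A minimal sketch; `ParseParam` and the `token=abc=def` line are hypothetical, used only to show the behavior:

```csharp
public static class ConfigParsing
{
    // Parses "key=value", keeping any '=' characters inside the value intact.
    public static (string Key, string Value)? ParseParam(string param)
    {
        var kvp = param.Split('=', 2); // at most 2 parts: key and the rest
        if (kvp.Length == 2)
            return (kvp[0].Trim(), kvp[1]);
        return null;
    }
}
```

With plain `Split('=')`, `"token=abc=def"` yields three parts and would be dropped by a `Length == 2` check; with `Split('=', 2)` the value survives as `"abc=def"`.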
src/net-questdb-client-tests/QuestDbIntegrationTests.cs (1)
220-221: Consider using `Interlocked.Increment` instead of lock for counter.
For simple counter increments, `Interlocked.Increment` is more concise and slightly more performant than a lock.
Apply this diff:

```diff
-var sendLock = new object();
 ...
-lock (sendLock)
-{
-    batchesSent++;
-}
+Interlocked.Increment(ref batchesSent);
```

Also applies to: 271-274
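A small demonstration that `Interlocked.Increment` keeps a shared counter exact under parallel increments without a lock object (illustrative sketch; the names are not from the PR):

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class CounterDemo
{
    public static int CountConcurrently(int tasks, int incrementsPerTask)
    {
        var counter = 0;
        Parallel.For(0, tasks, _ =>
        {
            for (var i = 0; i < incrementsPerTask; i++)
                Interlocked.Increment(ref counter); // atomic, no lock object needed
        });
        return counter;
    }
}
```

A plain `counter++` in the same loop would lose updates under contention; the atomic increment always totals `tasks * incrementsPerTask`.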
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- src/dummy-http-server/DummyHttpServer.cs (8 hunks)
- src/dummy-http-server/IlpEndpoint.cs (3 hunks)
- src/net-questdb-client-tests/HttpTests.cs (10 hunks)
- src/net-questdb-client-tests/MultiUrlHttpTests.cs (1 hunks)
- src/net-questdb-client-tests/QuestDbIntegrationTests.cs (1 hunks)
- src/net-questdb-client-tests/QuestDbManager.cs (1 hunks)
- src/net-questdb-client/Buffers/BufferV1.cs (1 hunks)
- src/net-questdb-client/Buffers/IBuffer.cs (1 hunks)
- src/net-questdb-client/Senders/AbstractSender.cs (1 hunks)
- src/net-questdb-client/Senders/HttpSender.cs (24 hunks)
- src/net-questdb-client/Senders/ISender.cs (1 hunks)
- src/net-questdb-client/Utils/AddressProvider.cs (1 hunks)
- src/net-questdb-client/Utils/SenderOptions.cs (6 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-10-17T16:38:52.474Z
Learnt from: RaphDal
Repo: questdb/net-questdb-client PR: 52
File: src/net-questdb-client/Senders/ISender.cs:410-418
Timestamp: 2025-10-17T16:38:52.474Z
Learning: In the net-questdb-client project (C#), the NullableColumn pattern (e.g., NullableColumn(ReadOnlySpan<char> name, long? value)) is considered a bad practice and is only kept for backward compatibility reasons. Do not suggest adding NullableColumn overloads for new types.
Applied to files:
- src/net-questdb-client/Buffers/IBuffer.cs
- src/net-questdb-client/Senders/AbstractSender.cs
- src/net-questdb-client/Senders/ISender.cs
- src/net-questdb-client/Buffers/BufferV1.cs
📚 Learning: 2025-10-17T16:38:52.474Z
Learnt from: RaphDal
Repo: questdb/net-questdb-client PR: 52
File: src/net-questdb-client/Senders/ISender.cs:410-418
Timestamp: 2025-10-17T16:38:52.474Z
Learning: In src/net-questdb-client/Senders/ISender.cs, the decimal type intentionally does not have a NullableColumn overload. This is a deliberate design decision because NullableColumn is deprecated in favor of using the Column method directly with nullable types.
Applied to files:
- src/net-questdb-client/Buffers/IBuffer.cs
- src/net-questdb-client/Senders/AbstractSender.cs
- src/net-questdb-client/Senders/ISender.cs
- src/net-questdb-client-tests/HttpTests.cs
- src/net-questdb-client/Buffers/BufferV1.cs
📚 Learning: 2025-10-17T16:37:03.583Z
Learnt from: RaphDal
Repo: questdb/net-questdb-client PR: 52
File: src/dummy-http-server/DummyHttpServer.cs:228-299
Timestamp: 2025-10-17T16:37:03.583Z
Learning: In src/dummy-http-server/DummyHttpServer.cs, the PrintBuffer() method is a test helper that produces human-readable output from received bytes. Decimal tests use binary representation directly via DecimalTestHelpers and do not call PrintBuffer, so DECIMAL type decoding in PrintBuffer is not required.
Applied to files:
src/dummy-http-server/IlpEndpoint.cs
🧬 Code graph analysis (7)
src/net-questdb-client/Senders/HttpSender.cs (1)
src/net-questdb-client/Utils/AddressProvider.cs (5)
`AddressProvider` (30-131), `AddressProvider` (39-48), `ParsePort` (115-130), `ParseHost` (97-109), `RotateToNextAddress` (89-92)
src/net-questdb-client/Senders/AbstractSender.cs (1)
src/net-questdb-client/Senders/ISender.cs (16)
`ISender` (68-68), (109-109), (117-117), (130-130), (138-138), (141-141), (149-149), (157-157), (165-165), (173-173), (181-181), (255-255), (266-266), (274-274), (284-293), (304-312)
src/dummy-http-server/IlpEndpoint.cs (1)
src/dummy-http-server/DummyHttpServer.cs (5)
`StringBuilder` (199-202), `List` (208-211), `Exception` (213-216), `GetCounter` (244-247), `Clear` (147-150)
src/net-questdb-client/Senders/ISender.cs (3)
src/net-questdb-client/Senders/AbstractSender.cs (16)
`ISender` (46-49), (70-74), (77-81), (84-88), (91-95), (103-107), (110-114), (117-121), (124-128), (131-135), (138-142), (144-148), (150-154), (156-160), (276-280), (283-287)
src/net-questdb-client/Buffers/IBuffer.cs (1)
`ReadOnlySpan` (205-205)
src/net-questdb-client/Buffers/BufferV1.cs (1)
`ReadOnlySpan` (195-198)
src/dummy-http-server/DummyHttpServer.cs (1)
src/dummy-http-server/IlpEndpoint.cs (6)
`ToString` (51-55), `IlpEndpoint` (82-283), `ClearPort` (164-175), `SetPortConfig` (177-183), `Exception` (130-136), `GetCounter` (147-153)
src/net-questdb-client-tests/QuestDbIntegrationTests.cs (2)
src/net-questdb-client/Senders/HttpSender.cs (4)
`Task` (415-432), `Task` (645-659), `Task` (662-785), `TimeSpan` (369-373)
src/net-questdb-client/Sender.cs (1)
`Sender` (48-96)
src/net-questdb-client/Buffers/BufferV1.cs (2)
src/net-questdb-client/Buffers/IBuffer.cs (17)
`IBuffer` (66-66), (74-74), (84-84), (97-97), (105-105), (113-113), (121-121), (129-129), (137-137), (145-145), (153-153), (228-228), (235-235), (248-248), (261-261), (273-273), `ReadOnlySpan` (205-205)
src/net-questdb-client/Utils/IngressError.cs (3)
`IngressError` (35-66), `IngressError` (42-46), `IngressError` (54-58)
🔇 Additional comments (39)
src/dummy-http-server/DummyHttpServer.cs (5)
44-48: LGTM - Instance fields for per-server configuration. These fields properly enable concurrent server instances with independent configuration, supporting the multi-URL testing requirements.
71-84: LGTM - Dual storage approach for backwards compatibility. Configuration is stored in both instance fields (for per-server isolation) and static fields (for backwards compatibility with existing tests). This is a reasonable transition strategy.
117-122: LGTM - Middleware for port identification. The middleware correctly sets the `X-Server-Port` header so endpoints can identify which server instance is handling the request. This is essential for the per-port state isolation implemented in `IlpEndpoint`.
168-179: LGTM - Per-port configuration and dynamic URL scheme. The port configuration is properly registered with `IlpEndpoint`, and the URL scheme correctly uses HTTPS when client certificates are required.
149-150: LGTM - Port-scoped accessor usage. All state accessors (`Clear`, `GetReceiveBuffer`, `GetReceivedBytes`, `GetLastError`, `GetCounter`) correctly use the instance's `_port` field to access port-specific data.
Also applies to: 201-201, 210-210, 215-215, 246-246
src/dummy-http-server/IlpEndpoint.cs (4)
86-123: LGTM - Port-keyed storage implementation. The port-keyed dictionaries with proper locking enable multiple concurrent `DummyHttpServer` instances. The `GetPortKey` fallback to `LocalPort` when the header is missing provides good resilience.
130-162: LGTM - Port-specific accessors with proper synchronization. `GetLastError`, `SetLastError`, `GetCounter`, and `SetCounter` all properly acquire locks before accessing shared state. The pattern of re-fetching data via `GetOrCreatePortData` inside the lock ensures consistency.
164-196: LGTM - Port clearing and configuration management. `ClearPort` correctly resets buffer state while preserving the container objects. `SetPortConfig` and `GetPortConfig` provide proper isolation with fallback to global defaults for backwards compatibility.
215-259: LGTM - Port-aware request handling. `HandleAsync` correctly retrieves the port from context, uses port-scoped configuration for behavior control (`RetriableError`, `ErrorMessage`), and stores request data in port-specific buffers with proper locking.
src/net-questdb-client/Utils/AddressProvider.cs (3)
30-34: Thread safety consideration for concurrent access.
`_currentIndex` is accessed without synchronization. If `RotateToNextAddress` or the `CurrentIndex` setter is called from multiple threads (e.g., during concurrent retry logic), this could lead to race conditions.
Verify whether `AddressProvider` is accessed from multiple threads. If so, consider using `Interlocked.Increment` for atomic rotation:

```diff
 public void RotateToNextAddress()
 {
-    _currentIndex = (_currentIndex + 1) % _addresses.Count;
+    Interlocked.Increment(ref _currentIndex);
+    // Use modulo when reading: _currentIndex % _addresses.Count
 }
```
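The Interlocked-rotation idea above can be sketched as a standalone type. This is a hedged illustration, not the PR's `AddressProvider`: the modulo is taken at read time, and the unsigned cast guards against the raw counter wrapping to a negative `int` after ~2 billion rotations:

```csharp
using System.Threading;

public class RotatingIndex
{
    private readonly int _count;
    private int _current;

    public RotatingIndex(int count) => _count = count;

    // Atomic rotation: safe to call from concurrent retry paths.
    public void Rotate() => Interlocked.Increment(ref _current);

    // Modulo at read time; casting to uint keeps the result non-negative
    // even if the underlying counter overflows int.MaxValue.
    public int Current => (int)((uint)Volatile.Read(ref _current) % (uint)_count);
}
```

Note the wraparound point itself (int.MaxValue to int.MinValue) causes one irregular jump in the sequence, which is harmless for load distribution but worth knowing about.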
39-48: LGTM - Constructor with validation and defensive copy. Properly validates that at least one address is provided and creates a defensive copy of the input list.
85-92: LGTM - Round-robin rotation logic. The modulo operation correctly wraps around to the beginning of the address list.
src/net-questdb-client/Buffers/IBuffer.cs (1)
299-314: LGTM - New Column overloads for char and Guid. These interface additions properly extend the IBuffer contract to support additional column types. The documentation is clear and consistent with existing patterns. Based on learnings, using direct `Column` overloads (rather than `NullableColumn`) is the correct approach.
src/net-questdb-client/Buffers/BufferV1.cs (2)
508-513: LGTM - Efficient char column implementation. Using `stackalloc` for a single-character span avoids heap allocation, and delegating to the existing `Column(ReadOnlySpan<char>)` reuses the string column encoding logic.
515-525: LGTM - Guid column implementation with defensive validation. The implementation correctly uses `stackalloc` for the 36-character buffer required by the default GUID format ("D" format: `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). The length validation is good defensive programming, though `TryFormat` should consistently write exactly 36 characters for this format.
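The stackalloc + `TryFormat` pattern described above, as a standalone sketch (illustrative helper names, not the buffer's actual methods):

```csharp
using System;

public static class ColumnFormatting
{
    // Formats a Guid into a stack buffer; the default "D" format is always 36 chars.
    public static string FormatGuid(Guid value)
    {
        Span<char> buffer = stackalloc char[36];
        if (!value.TryFormat(buffer, out var written) || written != 36)
            throw new InvalidOperationException("Unexpected GUID format length");
        return new string(buffer);
    }

    // A single char exposed as a one-element span, avoiding a string allocation.
    public static int CharLength(char value)
    {
        ReadOnlySpan<char> span = stackalloc char[1] { value };
        return span.Length;
    }
}
```

In the real buffer code the span would be handed to the ILP string encoder rather than materialized into a `string`; the `new string(buffer)` here exists only to make the sketch observable.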
282-294: LGTM - New Column overloads for Guid and char. Both implementations correctly follow the established pattern of delegating to `Buffer.Column` and returning `this` for fluent API chaining. The use of `<inheritdoc />` appropriately references the interface documentation.
570-586: Address rotation occurs before retry delay, which is correct. The rotation logic correctly moves to the next address before sleeping and retrying. This ensures the retry attempt uses a different server.
718-735: Async retry path mirrors sync path correctly. The async `SendAsync` method properly implements address rotation on retries, matching the synchronous `SendWithRetries` behavior.
818-831: Dispose properly cleans up all cached clients. The disposal logic correctly iterates through all cached clients and disposes them before clearing the cache and disposing the handler.
src/net-questdb-client-tests/QuestDbManager.cs (1)
197-223: Readiness check is robust with appropriate timeout. The `WaitForQuestDbAsync` method polls `/settings` with a 30-second timeout, which is reasonable for container startup. The implementation correctly handles exceptions during polling.
76-102: Test properly validates primary address routing. The test correctly verifies that when the primary address is available, data is sent only to that server and not to the secondary.
104-129: Failover test validates retry behavior on retriable errors. The test correctly sets up a server returning retriable errors and verifies failover to the secondary address. The 5-second retry timeout is appropriate.
src/net-questdb-client/Senders/ISender.cs (1)
444-492: New Guid and char column support is consistent with existing patterns. The new `Column` and `NullableColumn` overloads for `Guid` and `char` follow the established interface patterns. The `NullableColumn` implementations correctly use `value.Value` instead of the null-coalescing pattern used in other overloads, which is cleaner.
Based on learnings, while `NullableColumn` is considered legacy, it's kept for backward compatibility. These additions maintain consistency with the existing API.
1104-1122: GUID column test validates correct wire format. The test correctly verifies that GUIDs are serialized as quoted strings in the ILP wire format.
1124-1141: Char column test validates correct wire format. The test correctly verifies that char values are serialized as quoted single-character strings.
1168-1195: Nullable GUID test covers both value and null cases. The test properly validates that nullable GUIDs are serialized when present and omitted when null.
1197-1222: Nullable char test covers both value and null cases. The test properly validates that nullable chars are serialized when present and omitted when null.
1860-1870: Test refactored to use pre-configured sender correctly. The test now properly builds the sender before making assertions, which is the correct approach.
src/net-questdb-client/Utils/SenderOptions.cs (4)
src/net-questdb-client/Utils/SenderOptions.cs (4)
60-60: LGTM!The field is correctly initialized to an empty list for collecting multiple addresses.
168-182: LGTM!The read-only exposure of the addresses list and the count property are appropriately designed. The
[JsonIgnore]attributes correctly prevent serialization of these internal properties.
707-714: LGTM!The fallback logic correctly ensures that the addresses list is never empty by using the primary address when no addresses were parsed from the config string.
162-166: Potential inconsistency when settingaddrdirectly.When
addris set via the property setter,_addressesis not updated. This could lead toaddressesreturning stale values if users mix property-based and config-string-based configuration.Verify if this is intentional behavior. If
addrshould be the canonical "first" address, consider either:
- Documenting that
addressesis only populated from config strings, or- Synchronizing
_addresses[0]whenaddris set:public string addr { get => _addr; - set => _addr = value; + set + { + _addr = value; + if (_addresses.Count > 0) + _addresses[0] = value; + } }src/net-questdb-client-tests/QuestDbIntegrationTests.cs (7)
37-56: LGTM! The test fixture lifecycle is properly managed with async setup and teardown. The null check in `TearDownFixture` correctly handles cases where setup might have failed.
58-153: LGTM! The basic integration tests cover HTTP, TCP/ILP, multi-row, and multi-type scenarios effectively. The retry logic in `CanSendDifferentDataTypes` appropriately handles eventual consistency.
155-176: LGTM! Good test for validating the multi-URL fallback feature. The test correctly configures a non-existent endpoint first and expects the client to fall back to the valid endpoint.
Minor note: Port 19001 is assumed to be unused. If this proves flaky in CI environments, consider using a guaranteed-invalid address like `localhost:1` (reserved port).
349-356: Same volume cleanup concern applies here. Both `volume1` and `volume2` are created but cleanup isn't visible. See the previous comment about verifying that `QuestDbManager.DisposeAsync()` handles volume removal.
516-520: Verify expected row count behavior during failover.
The assertion expects exactly `expectedTotalRows` across both databases. This assumes:
- No duplicate rows (each batch sent exactly once)
- No data loss during failover
Is this the expected behavior? If retries could cause duplicate writes to different databases, consider using `Is.GreaterThanOrEqualTo` like the single-database chaos test.
538-580: LGTM! The helper methods provide good retry logic for eventual consistency. The `Assert.Fail` at line 578 will throw, making the `return 0` unreachable, but this pattern satisfies the compiler's return requirement.
1-35: Good integration test coverage for multi-URL support.The test suite provides comprehensive coverage of:
- Basic HTTP and ILP data ingestion
- Multi-URL failover scenarios
- Auto-flush behavior
- Chaos testing with database restarts
- Multi-database failover
The tests are well-structured with proper setup/teardown and will help catch regressions in the new multi-URL feature.
Actionable comments posted: 2
🧹 Nitpick comments (4)
src/net-questdb-client-tests/QuestDbManager.cs (2)
110-156: Handle startup failures by cleaning up the container and normalizing `IsRunning`.
If `WaitForQuestDbAsync` throws (timeout or other failure), the container will stay running until a later cleanup, and `IsRunning` remains `true` even though startup failed. This can leave stray containers and make test behavior surprising.
Consider wrapping the readiness wait in a `try/catch` that attempts best-effort cleanup and restores `IsRunning` to `false`:

```diff
 _containerId = output.Trim();
 Console.WriteLine($"QuestDB container started: {_containerId}");
-IsRunning = true;
-
-// Wait for QuestDB to be ready
-await WaitForQuestDbAsync();
+IsRunning = true;
+
+try
+{
+    // Wait for QuestDB to be ready
+    await WaitForQuestDbAsync();
+}
+catch
+{
+    // Best-effort cleanup if startup fails
+    try
+    {
+        await StopAsync();
+    }
+    catch
+    {
+        // Ignore cleanup failures
+    }
+
+    throw;
+}
```
265-291: Dispose the `Process` to avoid leaking OS handles.
`RunDockerCommandAsync` starts a `Process` but never disposes it, which can leak handles over many test runs. The concurrent reading of stdout/stderr looks good (and fixes the earlier deadlock risk), but the `Process` should be wrapped in a `using` block.
Example refactor:

```diff
-var process = Process.Start(startInfo);
-if (process == null)
-{
-    throw new InvalidOperationException("Failed to start docker command");
-}
-
-var outputTask = process.StandardOutput.ReadToEndAsync();
-var errorTask = process.StandardError.ReadToEndAsync();
-await Task.WhenAll(outputTask, errorTask);
-var output = await outputTask;
-var error = await errorTask;
-await process.WaitForExitAsync();
-
-return (process.ExitCode, output + error);
+var process = Process.Start(startInfo);
+if (process == null)
+{
+    throw new InvalidOperationException("Failed to start docker command");
+}
+
+using (process)
+{
+    var outputTask = process.StandardOutput.ReadToEndAsync();
+    var errorTask = process.StandardError.ReadToEndAsync();
+
+    await Task.WhenAll(outputTask, errorTask);
+    await process.WaitForExitAsync();
+
+    var output = await outputTask;
+    var error = await errorTask;
+
+    return (process.ExitCode, output + error);
+}
```

src/net-questdb-client/Senders/HttpSender.cs (2)
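The using-plus-concurrent-reads pattern can be packaged as a small self-contained helper. A sketch under stated assumptions: `ProcessRunner.RunAsync` is a hypothetical name, and the verification below assumes the `dotnet` CLI is on PATH (reasonable on a machine that builds this project):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class ProcessRunner
{
    public static async Task<(int ExitCode, string Output)> RunAsync(
        string fileName, string arguments)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = fileName,
            Arguments = arguments,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
        };

        var process = Process.Start(startInfo)
            ?? throw new InvalidOperationException($"Failed to start {fileName}");

        using (process) // dispose the Process to release OS handles
        {
            // Read both streams concurrently so neither pipe buffer can fill
            // up and deadlock the child process.
            var outputTask = process.StandardOutput.ReadToEndAsync();
            var errorTask = process.StandardError.ReadToEndAsync();

            await Task.WhenAll(outputTask, errorTask);
            await process.WaitForExitAsync(); // requires .NET 5+

            return (process.ExitCode, await outputTask + await errorTask);
        }
    }
}
```

Reading one stream to completion before the other is the classic deadlock: the child blocks writing to the full, unread pipe while the parent blocks reading the other one. `Task.WhenAll` over both reads avoids it.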
253-274: Clarify ownership of `SocketsHttpHandler` to avoid double-dispose and simplify cleanup.
Right now:
- `CreateClientForAddress` does `new HttpClient(handler)` (default `disposeHandler: true`).
- `CleanupUnusedClients` and `Dispose` both explicitly call `handler.Dispose()` from `_handlerCache` as well.

That means each handler is typically disposed twice (once via `HttpClient.Dispose()`, once directly). This is harmless in practice, but it complicates reasoning about ownership.
Consider simplifying one of these ways:

1. Let `HttpClient` own the handler
   - Keep `new HttpClient(handler)` as-is.
   - In `CleanupUnusedClients` and `Dispose`, only dispose clients and just remove handlers from `_handlerCache` without disposing them explicitly:

```diff
-if (_handlerCache.TryGetValue(address, out var handler))
-{
-    handler.Dispose();
-    _handlerCache.Remove(address);
-}
+if (_handlerCache.ContainsKey(address))
+{
+    _handlerCache.Remove(address);
+}
```

   And in `Dispose()`:

```diff
-foreach (var handler in _handlerCache.Values)
-{
-    handler.Dispose();
-}
-_handlerCache.Clear();
+_handlerCache.Clear();
```

2. Or make `HttpSender` own the handlers explicitly
   - Construct clients with `new HttpClient(handler, disposeHandler: false)` and keep the explicit handler disposal you already have.

Either option makes the lifetime model clearer and avoids redundant disposal logic.
Also applies to: 294-324, 833-851
529-626: Deduplicate sync/async retry & rotation logic to reduce divergence risk
`SendWithRetries` (sync) and `SendAsync` both implement:
- Exponential backoff with jitter,
- Rotation via `_addressProvider.RotateToNextAddress()`,
- Per-address client retrieval via `GetClientForCurrentAddress()`,
- Handling of `HttpRequestException` + `IsRetriableError` status codes.

The logic is nearly identical but duplicated, which makes future changes to retry semantics or address-rotation behavior easy to get out of sync between sync and async paths.
Consider factoring the common pieces into a shared helper (e.g., a small state machine that takes a `Func<HttpRequestMessage>` and a delegate to perform the actual send) and having `Send`/`SendAsync` call the same core implementation. That would keep the multi-URL failover behavior consistent and easier to maintain.
Also applies to: 677-801
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/net-questdb-client-tests/QuestDbManager.cs (1 hunks)
- src/net-questdb-client/Senders/HttpSender.cs (23 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/net-questdb-client-tests/QuestDbManager.cs (2)
src/net-questdb-client/Senders/HttpSender.cs (7)
`HttpClient` (227-274), `HttpClient` (279-292), `TimeSpan` (385-389), `Task` (431-448), `Task` (661-675), `Task` (678-801), `Dispose` (834-853)
src/net-questdb-client-tests/QuestDbIntegrationTests.cs (2)
`Task` (532-536), `Task` (538-580)
src/net-questdb-client/Senders/HttpSender.cs (2)
src/net-questdb-client/Utils/AddressProvider.cs (5)
`AddressProvider` (30-131), `AddressProvider` (39-48), `ParsePort` (115-130), `ParseHost` (97-109), `RotateToNextAddress` (89-92)
src/net-questdb-client/Utils/SenderOptions.cs (4)
`SenderOptions` (92-95), `SenderOptions` (101-128), `SenderOptions` (732-738), `ToString` (649-694)
🔇 Additional comments (1)
src/net-questdb-client/Senders/HttpSender.cs (1)
47-58: Per-address HttpClient/SocketsHttpHandler caching and rotation look solid.
The new `_clientCache`/`_handlerCache` plus `AddressProvider`-driven rotation and `CleanupUnusedClients` give you:
- One `SocketsHttpHandler` per address with a fixed `SslOptions.TargetHost` (no more cross-address mutation).
- A cached `HttpClient` per address, re-used across retries and sends.
- Cleanup of non-current clients/handlers after a successful send to avoid resource bloat.

This addresses the earlier concern about mutating a shared handler's `TargetHost` across different endpoints and aligns well with the documented single-threaded sender usage.
Also applies to: 114-169, 222-274, 294-325
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/net-questdb-client-tests/QuestDbManager.cs (1)
255-272: Trim container names to avoid cleanup failures on Windows.
The code splits container names on `'\n'` but doesn't trim the resulting strings. On Windows, `docker ps` output uses `\r\n` line endings, which will leave a trailing `\r` in each name. This can cause `docker stop` and `docker rm` commands to fail because the container name won't match exactly, potentially leaving containers running and causing port conflicts in subsequent test runs.
Apply this fix to trim names and use ordinal string comparisons:

```diff
 var containerNames = output.Split('\n', StringSplitOptions.RemoveEmptyEntries);

 // Stop and remove any QuestDB test containers
-foreach (var name in containerNames)
+foreach (var rawName in containerNames)
 {
+    var name = rawName.Trim();
+
     // Look for containers with matching port pattern: questdb-test-{port}-{httpPort}-*
-    if (name.Contains("questdb-test-") &&
-        (name.Contains($"-{_port}-{_httpPort}-") || name.Contains($"-{_httpPort}-{_port}-")))
+    if (name.StartsWith(ContainerNamePrefix, StringComparison.Ordinal) &&
+        (name.Contains($"-{_port}-{_httpPort}-", StringComparison.Ordinal) ||
+         name.Contains($"-{_httpPort}-{_port}-", StringComparison.Ordinal)))
     {
         Console.WriteLine($"Cleaning up existing container: {name}");

         // Stop the container
         await RunDockerCommandAsync($"stop -t 5 {name}");

         // Remove the container
         await RunDockerCommandAsync($"rm {name}");
     }
 }
```

Based on past review feedback.
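The stray-`\r` issue is easy to reproduce; a sketch of a cross-platform line splitter (`SplitLines` is a hypothetical helper, not the test suite's code):

```csharp
using System;
using System.Linq;

public static class LineSplitting
{
    // Splits command output into clean lines on both Unix (\n) and Windows (\r\n),
    // trimming each line so no trailing '\r' survives.
    public static string[] SplitLines(string output) =>
        output.Split('\n', StringSplitOptions.RemoveEmptyEntries)
              .Select(line => line.Trim())
              .Where(line => line.Length > 0)
              .ToArray();
}
```

Without the `Trim()`, `"questdb-test-1\r\n".Split('\n')` yields `"questdb-test-1\r"`, which never equals the container name `docker stop` expects.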
🧹 Nitpick comments (1)
src/net-questdb-client-tests/MultiUrlHttpTests.cs (1)
131-158: Test name and comment don't match the implementation.
The test is named `MultipleAddresses_RoundRobinRotation` and the comment states "Test round-robin rotation across multiple addresses," but the test only sends a single request and verifies it goes to server 1. This doesn't actually test rotation behavior. To properly test round-robin rotation, the test would need to either send multiple requests and verify distribution, or demonstrate failover to subsequent servers.
Consider either renaming this test to something like `MultipleAddresses_UsesFirstServerWhenAllAvailable` or expanding it to actually test rotation across multiple send operations.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `src/net-questdb-client-tests/MultiUrlHttpTests.cs` (1 hunks)
- `src/net-questdb-client-tests/QuestDbManager.cs` (1 hunks)
- `src/net-questdb-client/Utils/AddressProvider.cs` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/net-questdb-client-tests/MultiUrlHttpTests.cs (2)
src/net-questdb-client/Sender.cs (1)
- `Sender` (48-96)

src/net-questdb-client/Utils/AddressProvider.cs (3)
- `AddressProvider` (30-166)
- `AddressProvider` (39-48)
- `RotateToNextAddress` (89-92)
src/net-questdb-client-tests/QuestDbManager.cs (1)
src/net-questdb-client-tests/QuestDbIntegrationTests.cs (2)
- `Task` (532-536)
- `Task` (538-580)
🔇 Additional comments (3)
src/net-questdb-client-tests/MultiUrlHttpTests.cs (1)
290-337: IPv6 parsing tests look comprehensive.

The IPv6 address parsing tests properly exercise the `AddressProvider.ParseHost` and `ParsePort` methods with various formats including `[::1]:9000`, `[2001:db8::1]:9000`, and addresses without ports. The implementation in `AddressProvider.cs` correctly handles IPv6 by detecting the opening bracket and finding the closing bracket before parsing the port, which addresses the concern raised in the past review about incorrect parsing using `LastIndexOf(':')`.

src/net-questdb-client-tests/QuestDbManager.cs (1)

293-298: Process stream handling correctly prevents deadlocks.

The concurrent reading of `StandardOutput` and `StandardError` using `Task.WhenAll` properly avoids potential deadlocks that could occur if one stream's buffer fills before the other is read. This implementation correctly addresses the concern from the previous review.

src/net-questdb-client/Utils/AddressProvider.cs (1)
99-165: IPv6 address parsing correctly implemented.

The `ParseHost` and `ParsePort` methods properly handle IPv6 addresses in bracket notation by:

- Detecting the opening `[` bracket
- Finding the closing `]` bracket first
- Returning the entire bracketed section as the host (including both brackets)
- Parsing the port only from the text after the closing bracket

This approach correctly avoids the issue mentioned in past reviews where using `LastIndexOf(':')` directly would split inside the IPv6 address itself. For example, `[::1]:9000` now correctly parses to host `[::1]` and port `9000`, rather than the broken result mentioned in the earlier review.
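The bracket-aware split described above can be sketched in Python (a hypothetical illustration of the same rule, not the C# implementation):

```python
def parse_host_port(address, default_port=9000):
    """Split 'host:port', respecting bracketed IPv6 literals.

    Mirrors the rule above: find the closing ']' first and only look
    for a port after it; a plain rfind(':') would split inside the
    IPv6 address itself.
    """
    if address.startswith("["):
        end = address.index("]")
        host = address[: end + 1]          # keep both brackets
        rest = address[end + 1:]
        port = int(rest[1:]) if rest.startswith(":") else default_port
        return host, port
    if ":" in address:
        host, _, port = address.rpartition(":")
        return host, int(port)
    return address, default_port

print(parse_host_port("[::1]:9000"))         # → ('[::1]', 9000)
print(parse_host_port("[2001:db8::1]:9000")) # → ('[2001:db8::1]', 9000)
print(parse_host_port("localhost:9009"))     # → ('localhost', 9009)
```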
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/net-questdb-client/Senders/HttpSender.cs (1)
223-245: Simplify and harden host parsing in `CreateClientForAddress`

The `Contains("//")` + `Split("//")[1]` path is brittle and can throw for some inputs (e.g., a full URI without an explicit port such as `"https://example.com"` causes `ParseHost` to return `"https"`, and `Split("//")[1]` then throws `IndexOutOfRangeException`). Since `AddressProvider.ParseHost` already handles the supported `host:port` and `[ipv6]:port` formats, this special-case branch isn't really buying anything and makes misconfiguration fail with an opaque runtime exception.

You can avoid the edge cases and simplify the code by always using `ParseHost` directly:

```diff
-        var host = address.Contains("//")
-            ? AddressProvider.ParseHost(address).Split("//")[1]
-            : AddressProvider.ParseHost(address);
+        var host = AddressProvider.ParseHost(address);
```

This keeps behavior predictable for the documented address formats and degrades misconfigurations into cleaner errors rather than crashes.
🧹 Nitpick comments (3)
src/net-questdb-client/Senders/HttpSender.cs (3)
276-324: Cache cleanup strategy is correct but may be a bit aggressive for frequent failover
`GetClientForCurrentAddress` and `CleanupUnusedClients` correctly ensure that each address has at most one `HttpClient`/`SocketsHttpHandler` pair and that non-current entries are disposed once a request finally succeeds. That said, in environments where writers routinely fail over between multiple URLs, eagerly dropping all non-current clients/handlers on every successful write could lead to extra churn (re-creating handlers, redoing TLS handshakes, etc.) on subsequent failbacks.

If you expect frequent oscillation between endpoints, consider softening the cleanup policy (e.g., only trimming on `Dispose`, or trimming least-recently-used entries above some cap) to trade a small amount of memory for fewer handler/client re-creations. Current behavior is still correct, so this is just a perf/operational tuning suggestion.
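The least-recently-used cap suggested here could look roughly like this sketch (Python; `make_client` and the cap value are hypothetical stand-ins for per-address `HttpClient` creation):

```python
from collections import OrderedDict

class ClientCache:
    """Keep up to `cap` recently used clients instead of disposing
    every non-current entry on each successful write."""

    def __init__(self, make_client, cap=2):
        self.make_client = make_client
        self.cap = cap
        self.entries = OrderedDict()

    def get(self, address):
        if address in self.entries:
            self.entries.move_to_end(address)      # mark as most recent
        else:
            self.entries[address] = self.make_client(address)
            while len(self.entries) > self.cap:    # evict least recently used
                self.entries.popitem(last=False)
        return self.entries[address]

cache = ClientCache(lambda addr: f"client({addr})", cap=2)
for addr in ["a:9000", "b:9000", "a:9000", "c:9000"]:
    cache.get(addr)
print(list(cache.entries))
# → ['a:9000', 'c:9000']  ('b' was least recently used and got evicted)
```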
462-527: Sync/async send paths rotate addresses correctly; minor unification opportunity

The updated `Send`/`SendAsync` and `SendWithRetries` paths now:

- Respect the transaction/empty-buffer guards as before.
- Use a per-request CTS with `CalculateRequestTimeout()`.
- On retriable failures, back off with jitter and, when multiple addresses are configured, rotate via `AddressProvider` and call `GetClientForCurrentAddress()` before the next attempt.
- Surface a clearer "Cannot connect to `host:port`" message using `CurrentHost`/`CurrentPort`.
- Call `CleanupUnusedClients()` on success to shed non-current address resources.

This all looks logically consistent for both sync and async flows.

One small unification you might consider (purely for clarity/future-proofing) is to also use `GetClientForCurrentAddress()` for the very first attempt instead of `_client` directly, so every attempt, first and subsequent, goes through the same mechanism to resolve the active client from the current address. Today `_client` and `_addressProvider` are kept in sync so this isn't a correctness issue, but unifying the pattern would reduce the chance of future refactors accidentally desynchronizing them.

Also applies to: 529-626, 677-801
803-831: Re-trying 404/421 might be too aggressive for single-address setups

Marking `404` and `421` as retriable makes sense in multi-URL deployments where one instance might not accept writes or might be misdirected, so rotating to another address is desirable. However, in single-address configurations this will cause the client to resend the same request to the same endpoint until `retry_timeout` elapses, even though a 404/421 from that one server is rarely recoverable and is more likely a hard configuration or routing problem.

To keep the benefits for multi-URL failover while avoiding unnecessary retries in the single-URL case, you could gate the 404/421 behavior on the presence of multiple addresses. For example:

- Keep `IsRetriableError` focused on clearly transient/server-side issues (5xx, 509, 523, 524, 529, 599).
- Treat 404/421 as "conditionally retriable" only when `_addressProvider.HasMultipleAddresses` is true, by checking them at the call site before deciding to enter the retry loop.

This would improve fail-fast behavior for simple single-endpoint deployments without changing semantics for the multi-URL use case you're targeting.
📜 Review details
📒 Files selected for processing (2)
- `src/net-questdb-client-tests/QuestDbManager.cs` (1 hunks)
- `src/net-questdb-client/Senders/HttpSender.cs` (23 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/net-questdb-client-tests/QuestDbManager.cs
🔇 Additional comments (3)
src/net-questdb-client/Senders/HttpSender.cs (3)
47-58: Per-address client/handler caching and Build() initialization look solid

The per-address `_clientCache`/`_handlerCache` plus `CreateHandler`/`Build` wiring correctly avoid the previous shared-handler TLS `TargetHost` issue and cleanly encapsulate per-host TLS settings and pool configuration. AddressProvider integration in `Build()` (including saving/restoring `CurrentIndex` around the `/settings` probe and refreshing `_client`) keeps the active address and client in sync without leaking probe-induced rotations.

Also applies to: 114-220
629-675: JSON error handling is clear and robust

Both sync and async JSON error handlers now consistently deserialize into `JsonErrorResponse` when possible and fall back to streaming the raw body when deserialization fails, composing a single `IngressError` message with `ReasonPhrase` plus details. The async version's explicit `errorStr` local avoids double-reading the stream and keeps disposal semantics correct.
833-853: Disposal of per-address clients and handlers is correct

The new `Dispose` implementation that walks `_clientCache` and `_handlerCache`, disposing and clearing both, ensures all per-address resources are released, on top of the existing buffer cleanup. Given that this sender isn't intended to be reused after disposal, this is an appropriate and complete teardown.
Actionable comments posted: 1
🧹 Nitpick comments (1)
ci/azure-pipelines.yml (1)
90-100: Windows Docker Desktop startup polling looks reasonable, but consider documenting the assumption.

The PowerShell script polls for the named pipe `\\.\pipe\docker_engine` with a 60-second timeout, which is the standard approach for verifying Docker Desktop readiness on Windows. However, the script has no explicit error handling if the timeout expires: the pipeline would continue without Docker being ready, causing cryptic test failures downstream.

Consider adding explicit error handling to fail fast if Docker fails to start:

```powershell
choco install docker-desktop -y
Start-Process "C:\Program Files\Docker\Docker\Docker Desktop.exe"
$dockerReady = $false
for ($i = 1; $i -le 60; $i++) {
    if (Test-Path "\\.\pipe\docker_engine") {
        $dockerReady = $true
        break
    }
    Start-Sleep -Seconds 1
}
if (-not $dockerReady) {
    Write-Error "Docker failed to start within 60 seconds"
    exit 1
}
```
📜 Review details
📒 Files selected for processing (1)
- `ci/azure-pipelines.yml` (1 hunks)
🔇 Additional comments (1)
ci/azure-pipelines.yml (1)
84-88: LGTM!

The macOS Docker setup via Colima is an appropriate lightweight alternative to Docker Desktop. The command syntax and condition gating are correct.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ci/azure-pipelines.yml (1)
15-28: Add Docker setup step for Linux runners.

The pipeline defines a Linux vmImage (ubuntu-latest) in the strategy matrix but lacks a corresponding Docker initialization step. macOS and Windows have dedicated setup steps (lines 76-89 and 91-133), but Linux is missing one. The test task (line 135) runs on all platforms, so Docker must be available on Linux runners before tests execute.
Add a Linux Docker setup step between line 89 (end of macOS setup) and line 91 (start of Windows setup) that:
- Installs Docker
- Starts the Docker daemon
- Verifies Docker is ready before tests begin
Without this, tests will fail on Linux with "docker: command not found" or "Cannot connect to Docker daemon" errors.
🧹 Nitpick comments (4)
src/net-questdb-client-tests/QuestDbManager.cs (4)
39-50: Volume removal semantics in `DisposeAsync` might surprise callers

`DisposeAsync` always removes `_volumeName` via `docker volume rm` if set. That's fine for ephemeral test volumes, but it conflicts a bit with the "persistent storage" wording and could be surprising if a shared volume name is passed in.

If you expect shared or pre-created volumes, consider either:

- Documenting that `SetVolume` implies "ephemeral test volume that will be deleted on dispose", or
- Adding an option/flag to control whether the volume is removed.
123-169: Defer `IsRunning = true` until after readiness, and optionally clean up on failure

`IsRunning` is set immediately after `docker run`, before `WaitForQuestDbAsync` succeeds. If readiness times out, callers see an exception but `IsRunning` remains `true`, and `_containerId` is populated, which can make the state a bit misleading.

Consider:

- Setting `IsRunning = true` only after `WaitForQuestDbAsync` completes successfully.
- Optionally wrapping the readiness wait in a `try`/`catch` that logs and calls `StopAsync()` (or directly stops/removes the container) before rethrowing, so failed startups don't leave stray containers running.

For example:

```diff
-        _containerId = output.Trim();
-        Console.WriteLine($"QuestDB container started: {_containerId}");
-        IsRunning = true;
-
-        // Wait for QuestDB to be ready
-        await WaitForQuestDbAsync();
+        _containerId = output.Trim();
+        Console.WriteLine($"QuestDB container started: {_containerId}");
+
+        try
+        {
+            // Wait for QuestDB to be ready
+            await WaitForQuestDbAsync();
+            IsRunning = true;
+        }
+        catch
+        {
+            // Best-effort cleanup if startup fails
+            await StopAsync();
+            throw;
+        }
```

This keeps the observable state consistent and avoids orphaned containers when readiness fails.
216-247: Readiness wait works; logging and timeout semantics could be clarified

The readiness loop is fine for CI: 120 attempts with 1s delay and a short HttpClient timeout gives a robust wait.

Two small polish points you might consider:

- The log line `QuestDB is ready (after {attempts} seconds)` prints `0` on immediate success and generally under-reports elapsed time (doesn't include HTTP call + delay). Using `attempts + 1` or tracking an elapsed `TimeSpan` would better reflect reality.
- The comment "2 minutes" assumes each iteration is ~1 second; if HTTP timeouts become significant, total wall-clock time can be greater. Either reduce the HttpClient timeout for this probe or adjust the comment to "up to ~2 minutes" to be more accurate.

These are cosmetic/observability only; behavior is fine.
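A readiness poll that reports real elapsed time and fails explicitly on timeout might look like this (Python sketch; `probe()` is a hypothetical stand-in for the HTTP GET against `/settings`):

```python
import time

def wait_until_ready(probe, timeout=5.0, interval=0.05):
    """Poll until probe() returns True; report actual elapsed time and
    raise explicitly on timeout instead of under-reporting attempts."""
    start = time.monotonic()
    while True:
        if probe():
            return time.monotonic() - start
        if time.monotonic() - start > timeout:
            raise TimeoutError(f"not ready after {timeout:.0f}s")
        time.sleep(interval)

remaining = [3]  # pretend the service needs three probes to come up
def probe():
    remaining[0] -= 1
    return remaining[0] <= 0

elapsed = wait_until_ready(probe)
print(f"ready after {elapsed:.2f}s")
```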
These are cosmetic/observability only; behavior is fine.
284-310: Concurrent reading of Docker output/error avoids deadlocks; minor simplification possible

The updated `RunDockerCommandAsync` now reads stdout and stderr concurrently, which fixes the potential deadlock from sequential `ReadToEndAsync` calls. The pattern is sound.

You can simplify slightly by dropping the extra awaits after `Task.WhenAll`:

```diff
-        var outputTask = process.StandardOutput.ReadToEndAsync();
-        var errorTask = process.StandardError.ReadToEndAsync();
-        await Task.WhenAll(outputTask, errorTask);
-        var output = await outputTask;
-        var error = await errorTask;
+        var outputTask = process.StandardOutput.ReadToEndAsync();
+        var errorTask = process.StandardError.ReadToEndAsync();
+        await Task.WhenAll(outputTask, errorTask);
+        var output = outputTask.Result;
+        var error = errorTask.Result;
```

Optionally, you could also `Dispose()` the `process` after `WaitForExitAsync` to release OS handles, but that's a minor concern in a short-lived test process.
📜 Review details
📒 Files selected for processing (2)
- `ci/azure-pipelines.yml` (1 hunks)
- `src/net-questdb-client-tests/QuestDbManager.cs` (1 hunks)
🔇 Additional comments (8)
ci/azure-pipelines.yml (2)
76-89: ✓ macOS Colima setup is well-implemented.

The installation, startup, and health check are sound. The 60-second wait with per-second polling is reasonable for Colima initialization. No issues identified.

91-133: ✓ Windows Docker Desktop setup is comprehensive and well-structured.

The use of `docker pull hello-world:latest` as a readiness check is more robust than socket checks alone, as it verifies the full daemon stack including the WSL2 backend. The 10-minute timeout appropriately accounts for initial Docker Desktop startup on Windows. Progress logging every 60 seconds provides good observability. No issues identified.

src/net-questdb-client-tests/QuestDbManager.cs (6)
21-35: Constructor and basic state tracking look solid

The ctor builds a unique, descriptive container name and scopes HTTP/ILP ports cleanly; `IsRunning` as a simple lifecycle flag is appropriate for the test context. No changes needed here.

63-82: Docker availability check is clear and defensive

The `EnsureDockerAvailableAsync` flow (exit code check, wrapping any exception with a clear message and link) is straightforward and helpful for test diagnostics. Looks good as-is.

87-118: Image existence and pull logic is appropriate for tests

Using `docker images --filter "reference=..." --quiet` and treating any non-empty output as "image exists" is correct for this use case. The pull failure path surfaces a useful error message. No action needed.

174-195: Stop logic and error handling are reasonable for test infrastructure

Best-effort stop with a warning on failure, followed by a forced `rm -f` and state reset, is appropriate for a test-only Docker manager. This looks good.

200-211: Endpoint helpers are simple and correct

`GetHttpEndpoint` and `GetIlpEndpoint` return the expected URIs for localhost-mapped ports and keep HTTP vs ILP concerns separated. No further changes required.

249-282: Container cleanup logic correctly handles Windows line endings and test prefixes

Trimming each container name, using `ContainerNamePrefix`, and doing ordinal substring checks on the `{port}-{httpPort}` patterns address the earlier Windows `\r` issues and make the cleanup more robust. This looks correct for targeting only the test containers on the configured ports.
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/net-questdb-client-tests/QuestDbManager.cs (1)
30-30: Consider modern range syntax for string slicing.

Line 30 uses `Substring(0, 8)`, which could be replaced with the modern range operator `[..8]` for a more concise expression:

```diff
-        _containerName = $"{ContainerNamePrefix}{port}-{httpPort}-{Guid.NewGuid().ToString().Substring(0, 8)}";
+        _containerName = $"{ContainerNamePrefix}{port}-{httpPort}-{Guid.NewGuid().ToString()[..8]}";
```
📜 Review details
📒 Files selected for processing (2)
- `ci/azure-pipelines.yml` (1 hunks)
- `src/net-questdb-client-tests/QuestDbManager.cs` (1 hunks)
🔇 Additional comments (4)
src/net-questdb-client-tests/QuestDbManager.cs (4)
39-252: LGTM!

The lifecycle management methods are well-structured with proper error handling, logging, and separation between Docker and native instance modes. The logic correctly handles cleanup, image pulling, container startup, readiness polling, and graceful shutdown with fallback to force removal.

257-283: LGTM!

The readiness polling logic is appropriate, with a reasonable 30-second timeout and clear error messaging.

285-318: LGTM! Past review issues addressed.

The container cleanup logic now correctly trims container names (line 302) and uses robust string comparisons with `StringComparison.Ordinal` (lines 305-307), addressing the Windows `\r\n` line ending issue flagged in previous reviews.

320-346: LGTM! Past deadlock issue resolved.

The concurrent stream reading (lines 338-342) using `Task.WhenAll` correctly prevents the buffer deadlock that was flagged in previous reviews.
Actionable comments posted: 1
♻️ Duplicate comments (3)
ci/azure-pipelines.yml (3)
76-91: The readiness loop doesn't fail the job when timeout is reached.

The polling loop (lines 82-89) exits after 60 seconds regardless of QuestDB's state. If QuestDB fails to start, subsequent test steps will fail with confusing connection errors. Add an explicit check after the loop to verify readiness and fail the job if QuestDB isn't running.

Apply this diff:

```diff
       for i in {1..60}; do
         if curl -s http://localhost:9000/settings > /dev/null 2>&1; then
           echo "QuestDB is ready"
           break
         fi
         echo "Waiting for QuestDB... ($i/60)"
         sleep 1
       done
+      # Verify QuestDB started successfully
+      if ! curl -s http://localhost:9000/settings > /dev/null 2>&1; then
+        echo "ERROR: QuestDB failed to start within 60 seconds"
+        exit 1
+      fi
   displayName: 'Install QuestDB via systemd (Linux)'
   condition: eq(variables['osName'], 'Linux')
```
93-106: The readiness loop doesn't fail the job when timeout is reached.

The polling loop (lines 97-104) exits after 60 seconds without verifying QuestDB actually started. Add an explicit check after the loop to fail the job if the service isn't ready.

Apply this diff:

```diff
       for i in {1..60}; do
         if curl -s http://localhost:9000/settings > /dev/null 2>&1; then
           echo "QuestDB is ready"
           break
         fi
         echo "Waiting for QuestDB... ($i/60)"
         sleep 1
       done
+      # Verify QuestDB started successfully
+      if ! curl -s http://localhost:9000/settings > /dev/null 2>&1; then
+        echo "ERROR: QuestDB failed to start within 60 seconds"
+        exit 1
+      fi
   displayName: 'Install QuestDB via Homebrew (macOS)'
   condition: eq(variables['osName'], 'macOS')
```
108-148: The readiness loop doesn't fail the job when timeout is reached.

The PowerShell polling loop (lines 132-146) doesn't fail the job if QuestDB never becomes ready within 120 seconds. Add an explicit check after the loop to throw an error if QuestDB isn't running.

Apply this diff:

```diff
       for ($i = 1; $i -le $maxAttempts; $i++) {
         try {
           $response = Invoke-WebRequest -Uri "http://localhost:9000/settings" -UseBasicParsing -ErrorAction SilentlyContinue
           if ($response.StatusCode -eq 200) {
             Write-Host "QuestDB is ready"
             break
           }
         } catch {
           # Silently continue
         }
         if ($i % 30 -eq 0) {
           Write-Host "Still waiting... ($i/$maxAttempts seconds)"
         }
         Start-Sleep -Seconds 1
       }
+      # Verify QuestDB started successfully
+      try {
+        $response = Invoke-WebRequest -Uri "http://localhost:9000/settings" -UseBasicParsing -ErrorAction Stop
+        if ($response.StatusCode -ne 200) {
+          throw "QuestDB health check failed"
+        }
+      } catch {
+        Write-Error "ERROR: QuestDB failed to start within $maxAttempts seconds"
+        exit 1
+      }
   displayName: 'Install and Start QuestDB (Windows)'
   condition: eq(variables['osName'], 'Windows')
```
🧹 Nitpick comments (1)
src/net-questdb-client-tests/QuestDbManager.cs (1)
95-109: Remove unnecessary namespace qualifications for consistency.Lines 100-101 use fully-qualified type names (
System.Threading.CancellationTokenSource,System.Net.Http.HttpCompletionOption) while the rest of the file relies onusingstatements. Remove the namespace prefixes for consistency.Apply this diff:
- using var cts = new System.Threading.CancellationTokenSource(TimeSpan.FromSeconds(30)); - var response = await _httpClient.GetAsync($"{GetHttpEndpoint()}/settings", System.Net.Http.HttpCompletionOption.ResponseHeadersRead, cts.Token); + using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)); + var response = await _httpClient.GetAsync($"{GetHttpEndpoint()}/settings", HttpCompletionOption.ResponseHeadersRead, cts.Token);
📜 Review details
📒 Files selected for processing (2)
- `ci/azure-pipelines.yml` (1 hunks)
- `src/net-questdb-client-tests/QuestDbManager.cs` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/net-questdb-client-tests/QuestDbManager.cs (2)
src/net-questdb-client/Senders/HttpSender.cs (8)
- `HttpClient` (227-274)
- `HttpClient` (279-292)
- `TimeSpan` (385-389)
- `Dispose` (834-853)
- `Task` (431-448)
- `Task` (661-675)
- `Task` (678-801)
- `CancellationTokenSource` (341-346)

src/net-questdb-client-tests/QuestDbIntegrationTests.cs (2)
- `Task` (532-536)
- `Task` (538-580)
🔇 Additional comments (6)
src/net-questdb-client-tests/QuestDbManager.cs (6)
5-19: LGTM!

The class structure is well-organized, with appropriate use of constants and private fields to manage Docker lifecycle state.

150-208: LGTM!

The container startup logic correctly handles native instances, Docker availability, cleanup, image pulling, and readiness polling. The volume argument construction is functional.

213-234: LGTM!

The stop logic correctly distinguishes between Docker and native instances, includes a graceful shutdown timeout, and has a force-remove fallback for error cases.

239-255: LGTM!

The property and endpoint getter methods are straightforward and correctly implemented.

288-321: LGTM!

The cleanup logic correctly addresses the previous review feedback: container names are trimmed to handle Windows line endings, the `ContainerNamePrefix` constant is used, and ordinal string comparisons prevent culture-sensitive matching issues.

323-349: LGTM!

The concurrent reading of stdout and stderr (lines 341-345) correctly addresses the previous review comment about potential deadlocks when streams fill.
Actionable comments posted: 0
♻️ Duplicate comments (3)
ci/azure-pipelines.yml (3)
76-91: Add explicit failure when QuestDB doesn't start within timeout.

The readiness polling loop (lines 82-89) doesn't fail the job if QuestDB doesn't become ready within 60 seconds. This allows downstream tests to run against an unavailable service, causing confusing failures. Add a check after the loop to verify QuestDB is running.

```diff
       for i in {1..60}; do
         if curl -s http://localhost:9000/settings > /dev/null 2>&1; then
           echo "QuestDB is ready"
           break
         fi
         echo "Waiting for QuestDB... ($i/60)"
         sleep 1
       done
+      # Verify QuestDB started successfully
+      if ! curl -s http://localhost:9000/settings > /dev/null 2>&1; then
+        echo "ERROR: QuestDB failed to start within 60 seconds"
+        exit 1
+      fi
```
93-106: Add explicit failure when Docker doesn't start within timeout.

The readiness polling loop (lines 97-104) doesn't fail the job if Docker doesn't become ready within 120 seconds, which will cause confusing test failures downstream. Add a final check after the loop.

```diff
       for i in {1..120}; do
         if docker ps >/dev/null 2>&1; then
           echo "Docker is ready"
           break
         fi
         echo "Waiting for Docker... ($i/120)"
         sleep 1
       done
+      # Verify Docker started successfully
+      if ! docker ps >/dev/null 2>&1; then
+        echo "ERROR: Docker failed to start within 120 seconds"
+        exit 1
+      fi
```
108-136: Add explicit failure when Docker Desktop doesn't start within timeout.

The readiness polling loop (lines 120-134) doesn't fail the job if Docker Desktop doesn't become ready within 300 seconds. Add a check after the loop to verify the Docker daemon is running before proceeding.

```diff
       for ($i = 1; $i -le $maxAttempts; $i++) {
         try {
           $output = docker ps 2>&1
           if ($LASTEXITCODE -eq 0) {
             Write-Host "Docker is ready"
             break
           }
         } catch {
           # Continue
         }
         if ($i % 60 -eq 0) {
           Write-Host "Still waiting for Docker... ($i/$maxAttempts seconds)"
         }
         Start-Sleep -Seconds 1
       }
+      # Verify Docker started successfully
+      try {
+        $output = docker ps 2>&1
+        if ($LASTEXITCODE -ne 0) {
+          throw "Docker daemon is not responding"
+        }
+      } catch {
+        Write-Error "ERROR: Docker Desktop failed to start within $maxAttempts seconds"
+        exit 1
+      }
```
🧹 Nitpick comments (2)
src/net-questdb-client-tests/QuestDbManager.cs (2)
30-31: Minor: Consider modernizing GUID slicing and HttpClient disposal.

Two small refinements:

- Line 30: Modern C# prefers `[..8]` over `Substring(0, 8)`
- Line 49: The null-conditional `?.` is unnecessary since `_httpClient` is always initialized in the constructor

These are cosmetic and don't affect functionality. Apply this diff if desired:

```diff
-        _containerName = $"{ContainerNamePrefix}{port}-{httpPort}-{Guid.NewGuid().ToString().Substring(0, 8)}";
+        _containerName = $"{ContainerNamePrefix}{port}-{httpPort}-{Guid.NewGuid().ToString()[..8]}";
```

```diff
-        _httpClient?.Dispose();
+        _httpClient.Dispose();
```

Also applies to: 49-49
232-235: Optional: Consider adding a comment to the empty catch block.

The empty catch block is intentional for retry logic, which is a common pattern. However, adding a brief comment would make the intent explicit to future readers.

Example:

```diff
         catch
         {
-            // Ignore and retry
+            // Ignore connection errors and retry - QuestDB is still starting up
         }
```
📜 Review details
📒 Files selected for processing (2)
- `ci/azure-pipelines.yml` (1 hunks)
- `src/net-questdb-client-tests/QuestDbManager.cs` (1 hunks)
🔇 Additional comments (1)
src/net-questdb-client-tests/QuestDbManager.cs (1)
1-306: LGTM! Solid Docker lifecycle management for tests.

The implementation is well-structured with proper error handling, cleanup logic, and resource management. All previously identified issues have been addressed:

- ✅ Concurrent stream reading (lines 297-302) correctly uses `Task.WhenAll`
- ✅ Container name trimming (lines 258-261) handles Windows `\r\n` line endings
- ✅ Proper use of constants and ordinal string comparisons for robustness
The test utility provides a clean API for managing QuestDB Docker instances during integration testing.
Actionable comments posted: 1
♻️ Duplicate comments (1)
ci/azure-pipelines.yml (1)
76-91: Linux QuestDB readiness loop missing explicit timeout failure.

The readiness polling loop exits without verifying QuestDB became ready within 60 seconds. If the timeout is reached, the pipeline will continue and tests will fail downstream with confusing errors.

Apply this diff to add explicit failure verification:

```diff
       for i in {1..60}; do
         if curl -s http://localhost:9000/settings > /dev/null 2>&1; then
           echo "QuestDB is ready"
           break
         fi
         echo "Waiting for QuestDB... ($i/60)"
         sleep 1
       done
+      # Verify QuestDB started successfully
+      if ! curl -s http://localhost:9000/settings > /dev/null 2>&1; then
+        echo "ERROR: QuestDB failed to start within 60 seconds"
+        exit 1
+      fi
```
📜 Review details
📒 Files selected for processing (1)
- `ci/azure-pipelines.yml` (1 hunks)
🔇 Additional comments (1)
ci/azure-pipelines.yml (1)
99-113: macOS Docker readiness verification looks good.

The explicit failure check at lines 110-111 correctly exits with status 1 if Docker doesn't become ready within the timeout. This ensures the pipeline fails fast and prevents confusing downstream test failures.
This PR adds support for configuring multiple URLs in the .NET client. In the event that one of the instances is unavailable for writes, the client will retry against the alternate URLs.

There are also some small API changes to add GUID and char support.
Also closes: #50
Summary by CodeRabbit
New Features
Tests