Move orchestrator config into a model #1295
Conversation
There's also some issue with the build in integration tests.
The CodeQL issues will be resolved in #1296, so this is dependent on that one.
# Conflicts:
#   packages/client-proxy/internal/proxy/proxy.go
#   packages/client-proxy/main.go
CodeQL's wrong.
Bug: Config Parameter Omission in Slot Creation

The Config parameter, recently added to network.Pool and network.StorageKV, isn't consistently passed to NewSlot calls. This results in network slots being created without essential configuration, such as the hyperloop IP and port, leading to compilation errors or runtime issues. This affects both initial slot creation and fallback acquisition.
packages/orchestrator/internal/sandbox/network/pool.go, lines 81 to 88 in 6209e35:

}
zap.L().Info("network slot pool populate closed")
}()
return pool, nil
}

packages/orchestrator/internal/sandbox/network/storage_kv.go, lines 67 to 68 in 6209e35:

if status {
return NewSlot(key, slotIdx, s.config)

packages/orchestrator/internal/sandbox/network/storage_kv.go, lines 89 to 96 in 6209e35:

if slot == nil {
// This is a fallback for the case when all slots are taken.
// There is no Consul lock so it's possible that multiple sandboxes will try to acquire the same slot.
// In this case, only one of them will succeed and other will try with different slots.
reservedKeys, _, keysErr := kv.Keys(s.nodeID+"/", "", nil)
if keysErr != nil {
return nil, fmt.Errorf("failed to read Consul KV: %w", keysErr)
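For context, a minimal sketch of the pattern the report asks for, using hypothetical type shapes (the real Pool, StorageKV, and NewSlot signatures live in packages/orchestrator/internal/sandbox/network): the parsed network config is stored once on the storage struct and passed to every NewSlot call, including the Consul fallback path.

// Sketch only: hypothetical shapes for illustration, not the PR's actual code.
package network

// Config stands in for the network config introduced by this PR.
type Config struct {
	HyperloopIP   string
	HyperloopPort uint16
}

// Slot carries the per-sandbox network settings.
type Slot struct {
	Key    string
	Idx    int
	Config Config
}

// NewSlot takes the config explicitly, so a slot is never built without it.
func NewSlot(key string, idx int, config Config) *Slot {
	return &Slot{Key: key, Idx: idx, Config: config}
}

// StorageKV keeps the parsed config and threads it into every slot it hands
// out, including the fallback acquisition path flagged in the report.
type StorageKV struct {
	nodeID string
	config Config
}

func (s *StorageKV) newSlotForIndex(key string, idx int) *Slot {
	return NewSlot(key, idx, s.config)
}

Threading the config through both the normal and the fallback path keeps the hyperloop IP and port attached to every slot, which is exactly what the report says is missing.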
Pre-requisite for running multiple orchestrators on a single node.
Note
Introduce an orchestrator config model and network config, switch port types to uint16 across services, and move Nomad jobs to env-driven ports (drop CLI flags).
- internal/cfg with an env-parsed Config (e.g. GRPC_PORT, PROXY_PORT, REDIS_URL|REDIS_CLUSTER_URL, CLICKHOUSE_CONNECTION_STRING, ALLOW_SANDBOX_INTERNET, ORCHESTRATOR_SERVICES, plus NetworkConfig); see the sketch after this note.
- cfg.Parse() in main, propagating the config (lock path, force stop, services, ports, Redis/ClickHouse, hyperloop port).
- network.Config and ParseConfig, passed through pool/storage/slot; internal/consts.go removed and the hyperloop IP/port read from config.
- uint16 ports in the proxy (shared), orchestrator grpcserver, hyperloop server, sandbox proxy, and client proxy; logs and validation adjusted.
- NewClientProxy signature and calls updated.
- GRPC_PORT/PROXY_PORT read from the environment; the --port/--proxy-port args no longer passed in orchestrator.hcl and template-manager.hcl.
- network.ParseConfig and uint16 ports.
- New github.com/caarlos0/env/v11 dependency.
Written by Cursor Bugbot for commit 6209e35.
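For illustration, a minimal sketch of what an env-parsed config along these lines could look like with github.com/caarlos0/env/v11. The environment variable names come from the note above; the Go field names, the cfg package layout, and the Parse helper are assumptions, not the PR's actual code.

// Sketch only: assumed field names and package layout.
package cfg

import (
	"github.com/caarlos0/env/v11"
)

// Config collects orchestrator settings parsed from the environment.
// Ports are uint16, matching the type switch described in the note.
type Config struct {
	GRPCPort                   uint16   `env:"GRPC_PORT"`
	ProxyPort                  uint16   `env:"PROXY_PORT"`
	RedisURL                   string   `env:"REDIS_URL"`
	RedisClusterURL            string   `env:"REDIS_CLUSTER_URL"`
	ClickhouseConnectionString string   `env:"CLICKHOUSE_CONNECTION_STRING"`
	AllowSandboxInternet       bool     `env:"ALLOW_SANDBOX_INTERNET"`
	OrchestratorServices       []string `env:"ORCHESTRATOR_SERVICES" envSeparator:","`
}

// Parse reads the configuration from the process environment.
func Parse() (Config, error) {
	return env.ParseAs[Config]()
}

main would then call Parse() once and hand the resulting struct (or its network part) to the gRPC server, proxies, and network pool, instead of reading CLI flags or package-level constants.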