Reupload exported editable mapping annotation zip #8969
📝 Walkthrough

Adds end-to-end import/upload support for editable-mapping (proofreading) annotations, a new tracing-store save-from-zip API and IO pipeline, extended NML/upload payloads with editable-mapping fields, a refactor of the chunk cache into a trait with DS/TS implementations, boolean data handling in the datastore, and small config, message, and UI/log fixes.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
…for multi-layer proofreading
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)
Line 31: Clarify comment about boolean fill_value support.

The comment states "Reading boolean datasets is not supported", but the parsing logic (lines 178-189) now reads a boolean fill_value from JSON and stores it as a string. Consider updating the comment to reflect this capability more clearly.

Apply this diff to clarify the comment:

```diff
- fill_value: Either[String, Number], // Reading boolean datasets is not supported. When writing boolean, true and false literals will be stored in the string.
+ fill_value: Either[String, Number], // Boolean fill_value is read from JSON and stored as string ("true" or "false"). When writing boolean data type, fill_value is serialized as JSON boolean.
```

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (1)

Lines 53-61: Handle boolean datasets in createEmpty.

ArrayDataType.bool now exists and the rest of the stack can construct boolean chunks, but createEmpty still has no branch for it. Calling createEmpty(ArrayDataType.bool, …) will hit the default path and throw a MatchError, breaking workflows that expect an empty boolean MultiArray.

```diff
 def createEmpty(dataType: ArrayDataType, rank: Int): MultiArray = {
   val datyTypeMA = dataType match {
     case ArrayDataType.i1 | ArrayDataType.u1 => MADataType.BYTE
     case ArrayDataType.i2 | ArrayDataType.u2 => MADataType.SHORT
     case ArrayDataType.i4 | ArrayDataType.u4 => MADataType.INT
     case ArrayDataType.i8 | ArrayDataType.u8 => MADataType.LONG
     case ArrayDataType.f4 => MADataType.FLOAT
     case ArrayDataType.f8 => MADataType.DOUBLE
+    case ArrayDataType.bool => MADataType.BOOLEAN
   }
   MultiArray.factory(datyTypeMA, Array.fill(rank)(0))
 }
```
🧹 Nitpick comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)
Lines 178-189: Simplify flatMap to map for cleaner code.

The flatMap(value => JsSuccess(...)) pattern is unnecessarily verbose. Since you're already in a for-comprehension that expects JsResult, you can use map to transform the value directly.

Apply this diff to simplify the fill_value parsing:

```diff
-      fill_value <- (fill_value_raw.validate[String],
-                     fill_value_raw.validate[Number],
-                     fill_value_raw.validate[Boolean]) match {
-        case (asStr: JsSuccess[String], _, _) =>
-          asStr.flatMap(value => JsSuccess[Either[String, Number]](Left(value)))
-        case (_, asNum: JsSuccess[Number], _) =>
-          asNum.flatMap(value => JsSuccess[Either[String, Number]](Right(value)))
-        case (_, _, asBool: JsSuccess[Boolean]) =>
-          asBool.flatMap(value => JsSuccess[Either[String, Number]](Left(value.toString)))
-        case _ => JsError("Could not parse fill_value as string, number or boolean value.")
-      }
+      fill_value <- (fill_value_raw.validate[String],
+                     fill_value_raw.validate[Number],
+                     fill_value_raw.validate[Boolean]) match {
+        case (asStr: JsSuccess[String], _, _) => asStr.map(Left(_))
+        case (_, asNum: JsSuccess[Number], _) => asNum.map(Right(_))
+        case (_, _, asBool: JsSuccess[Boolean]) => asBool.map(v => Left(v.toString))
+        case _ => JsError("Could not parse fill_value as string, number or boolean value.")
+      }
```
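The map-over-flatMap point can be illustrated without the play-json dependency: plain Scala's Either has the same map/flatMap relationship as JsResult, and mapping suffices whenever the function cannot fail. The following is a hypothetical stand-in (the function name and shape are invented for illustration, not taken from the PR), where Right plays the role of JsSuccess and Left of JsError:

```scala
// Hypothetical sketch of the fill_value union parsing (string | number | boolean).
// Outer Either: Left = parse error, Right = success.
// Inner Either: Left = string-ish fill_value, Right = numeric fill_value.
def parseFillValue(raw: Any): Either[String, Either[String, Number]] = raw match {
  case s: String  => Right(Left(s))          // string fill_value kept as-is
  case b: Boolean => Right(Left(b.toString)) // boolean stored as "true"/"false"
  case n: Number  => Right(Right(n))         // numeric fill_value
  case _          => Left("Could not parse fill_value as string, number or boolean value.")
}
```

Note how the boolean case needs only a plain transformation of the successful value, which is exactly why map (rather than flatMap plus re-wrapping) is enough in the real parser.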
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
- app/Startup.scala (2 hunks)
- app/controllers/AnnotationIOController.scala (3 hunks)
- app/models/annotation/AnnotationUploadService.scala (1 hunks)
- app/models/annotation/WKRemoteTracingStoreClient.scala (1 hunks)
- app/models/annotation/nml/NmlParser.scala (2 hunks)
- app/models/annotation/nml/NmlVolumeTag.scala (1 hunks)
- conf/application.conf (1 hunks)
- conf/messages (1 hunks)
- frontend/javascripts/viewer/view/version_entry.tsx (2 hunks)
- unreleased_changes/8969.md (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (4 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetHeader.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (3 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3DataType.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (2 hunks)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (2 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
- webknossos-tracingstore/conf/standalone-tracingstore.conf (1 hunks)
- webknossos-tracingstore/conf/tracingstore.latest.routes (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (17)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3DataType.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala: ArrayDataType (5-82)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetHeader.scala (2)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/wkw/WKWHeader.scala: fill_value (95-95)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedHeader.scala: fill_value (54-54)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (2)
  - app/utils/WkConf.scala: Cache (74-80)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreConfig.scala: Cache (32-46), Redis (51-54)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala: DSChunkCacheService (28-30)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala: DSChunkCacheService (28-30)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala: DSChunkCacheService (28-30)
- app/models/annotation/WKRemoteTracingStoreClient.scala (3)
  - util/src/main/scala/com/scalableminds/util/tools/Fox.scala: s (236-240), s (240-250), s (250-259), Fox (30-230), Fox (232-305), successful (53-56), failure (58-62)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala: addQueryString (28-31), postFileWithJsonResponse (135-138)
  - app/controllers/UserTokenController.scala: RpcTokenHolder (31-39)
- app/models/annotation/nml/NmlParser.scala (1)
  - frontend/javascripts/viewer/model/accessors/volumetracing_accessor.ts: hasEditableMapping (625-634)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (2)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala: ArrayDataType (5-82)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala: MultiArrayUtils (11-161), createFilledArray (32-51)
- app/controllers/AnnotationIOController.scala (3)
  - util/src/main/scala/com/scalableminds/util/tools/Fox.scala: map (281-284), map (377-377), Fox (30-230), Fox (232-305), successful (53-56), failure (58-62), serialCombined (95-99), serialCombined (99-111)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/annotation/AnnotationLayer.scala: toProto (19-21), AnnotationLayer (13-21), AnnotationLayer (23-51), AnnotationLayerStatistics (53-70), unknown (69-69)
  - app/models/annotation/WKRemoteTracingStoreClient.scala: saveEditableMappingIfPresent (252-269), saveVolumeTracing (227-250)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreConfig.scala: ImageArrayChunks (36-38)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1)
  - util/src/main/scala/com/scalableminds/util/time/Instant.scala: now (48-48)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala: ArrayDataType (5-82)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (3)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala: webknossos (73-75), validateAccessFromTokenContext (90-97), UserAccessRequest (30-30), UserAccessRequest (31-75)
  - util/src/main/scala/com/scalableminds/util/time/Instant.scala: now (48-48), logSince (72-75)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala: initializeFromUploadedZip (131-159)
- webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (1)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala: DSChunkCacheService (28-30)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala: Tracingstore (14-32), Cache (28-30)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (9)
  - util/src/main/scala/com/scalableminds/util/io/ZipIO.scala: ZipIO (15-309), zip (70-87)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datavault/VaultPath.scala: VaultPath (19-100)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/UPath.scala: UPath (54-96), fromLocalPath (80-80)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TsTempFileService.scala: TsTempFileService (8-11)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/TracingDataStore.scala: TracingDataStore (13-66)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala: create (134-141)
  - webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala: createDirectory (42-49)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3Array.scala: Zarr3Array (16-40), Zarr3Array (42-184), open (18-40)
  - webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetArray.scala: readAsMultiArray (189-192)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-smoketest-push
- GitHub Check: backend-tests
🔇 Additional comments (21)
app/Startup.scala (1)
Line 101: LGTM! Log message standardization.

The Unicode ellipsis (…) has been replaced with the ASCII three-dot ellipsis (...) in both log messages. This standardization improves consistency and avoids potential encoding issues.

Also applies to: line 118.
frontend/javascripts/viewer/view/version_entry.tsx (1)
Line 170: LGTM! Correctness fix for segment identifier display.

The change correctly aligns segment2Description with segmentId2 instead of incorrectly referencing segmentId1. This ensures that version history entries accurately reflect which segments were involved in split and merge agglomerate operations.

Also applies to: line 183.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1)
Lines 135-138: LGTM! Method follows existing patterns.

The implementation correctly mirrors the existing postFile method (lines 130-133) while adding JSON response parsing, consistent with other *WithJsonResponse methods in the class. The type parameter and return type are appropriate.

Note: Like the existing postFile method, this doesn't explicitly set a Content-Type header. Play's WSClient may set this automatically based on the file, but if the server expects a specific content type (e.g., application/zip for the editable mapping uploads mentioned in the PR), you may want to verify the header is correctly propagated.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1)
Line 21: LGTM: DI type update aligns with trait refactoring.

The parameter type update from ChunkCacheService to DSChunkCacheService is consistent with the trait-based cache service refactoring across the datastore module.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (1)

Line 16: LGTM: Consistent DI migration to DSChunkCacheService.

The import and constructor parameter updates align with the trait-based refactoring of the chunk cache service.

Also applies to: line 28.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1)
Line 64: LGTM: DI binding correctly updated.

The binding change from ChunkCacheService to DSChunkCacheService ensures the concrete implementation is wired as an eager singleton, consistent with the trait-based refactoring.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1)

Line 13: LGTM: DI type update consistent with refactoring.

The import and constructor changes align with the DSChunkCacheService migration across the datastore module.

Also applies to: line 62.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (1)
Line 11: LGTM: Consistent constructor dependency update.

The migration to DSChunkCacheService follows the established pattern across the datastore services.

Also applies to: line 45.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (1)
Line 13: LGTM: DI update aligns with trait-based refactoring.

The constructor dependency change to DSChunkCacheService is consistent with the module-wide cache service migration.

Also applies to: line 53.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (2)
Lines 9-26: LGTM: Clean trait extraction improves flexibility.

The refactoring from a concrete class to a trait with protected val maxSizeBytes enables different implementations while preserving the cache initialization logic. This design improves testability and follows the dependency inversion principle.
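The trait/implementation split can be pictured with a deliberately simplified toy model. Everything below is invented for illustration (the real ChunkCacheService builds an actual chunk cache; the flat per-entry byte cost and FIFO eviction here are not from the PR); it only shows the contract where the subclass supplies maxSizeBytes:

```scala
import scala.collection.mutable

// Toy analogue of the ChunkCacheService trait: the cache logic lives in the
// trait, while the size budget is provided by the concrete implementation.
trait ChunkCacheServiceSketch {
  protected def maxSizeBytes: Long
  private val approxEntryBytes = 1024L // invented flat per-entry cost, for illustration
  private val entries = mutable.LinkedHashMap.empty[String, Array[Byte]]

  def put(key: String, value: Array[Byte]): Unit = {
    entries.put(key, value)
    // naive FIFO eviction once the approximated size budget is exceeded
    while (entries.size * approxEntryBytes > maxSizeBytes && entries.nonEmpty)
      entries.remove(entries.head._1)
  }

  def get(key: String): Option[Array[Byte]] = entries.get(key)
}

// Mirrors how DSChunkCacheService sources its budget from the datastore config.
final class DSChunkCacheSketch(configuredMaxBytes: Long) extends ChunkCacheServiceSketch {
  protected def maxSizeBytes: Long = configuredMaxBytes
}
```

This is the dependency-inversion shape the comment praises: callers depend on the trait, and each store (DS/TS) plugs in its own configured size.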
Lines 28-30: LGTM: DSChunkCacheService provides datastore-specific sizing.

The concrete implementation correctly sources maxSizeBytes from the datastore configuration, maintaining the original behavior while enabling the trait-based design.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)
Lines 46-50: No changes required for byteOrder detection.

Option.contains("big") correctly matches the only big-endian value ("big"), and no other variants exist in the codebase.

conf/application.conf (1)

Line 203: LGTM!

The new chunk cache configuration is properly documented, and the 2 GB limit aligns with the datastore's image array cache configuration. This will be consumed by the new TSChunkCacheService.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1)
Line 28: LGTM!

The DI binding for TSChunkCacheService follows the established pattern for service registration and is correctly configured as an eager singleton.
conf/messages (1)
Line 270: LGTM!

The error message is clear, grammatically correct, and provides actionable information about what's missing (file or baseMappingName).
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (1)
Lines 28-31: LGTM!

The new Cache configuration object follows the same pattern as existing configuration objects (Redis, Fossildb, WebKnossos) and correctly defines the chunk cache size parameter. The children list is properly updated to include the new Cache object.
webknossos-tracingstore/conf/standalone-tracingstore.conf (1)
Line 64: LGTM!

The standalone tracingstore configuration correctly mirrors the chunk cache setting from application.conf, ensuring consistent behavior across deployment modes.
webknossos-tracingstore/conf/tracingstore.latest.routes (1)
Line 43: LGTM!

The new route for saving editable mappings from ZIP follows RESTful conventions and is properly positioned within the Editable Mappings section. The parameter types are appropriate for the operation.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (2)
Lines 42-49: LGTM!

The new createDirectory method follows the same pattern as the existing create method, with appropriate use of Files.createDirectory and proper tracking in activeTempFiles.
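A minimal sketch of this create-and-track pattern, using only the JDK (the object, method names other than createDirectory, and the set-based tracking are illustrative; the real service uses Fox, Instant-based naming, and Apache Commons FileUtils for directory deletion):

```scala
import java.nio.file.{Files, Path}
import java.util.Comparator
import scala.collection.mutable

// Toy model of a temp-file service: created entries are tracked so a later
// cleanup pass can remove them, deleting directories recursively because
// Files.delete throws on a non-empty directory.
object TempEntriesSketch {
  private val active = mutable.Set.empty[Path] // stand-in for activeTempFiles

  def createDirectory(prefix: String): Path = {
    val dir = Files.createTempDirectory(prefix)
    active += dir // track for later cleanup
    dir
  }

  def cleanUp(): Unit = {
    active.foreach { path =>
      if (Files.isDirectory(path)) {
        // walk bottom-up so children are deleted before their parent
        val stream = Files.walk(path)
        try stream.sorted(Comparator.reverseOrder[Path]()).forEach(p => Files.delete(p))
        finally stream.close()
      } else if (Files.exists(path)) Files.delete(path)
    }
    active.clear()
  }
}
```

The bottom-up walk is the JDK-only equivalent of the recursive directory deletion discussed in the cleanup comments.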
Lines 56-59: LGTM!

The updated cleanup logic correctly handles both files and directories. Using FileUtils.deleteDirectory for directories ensures recursive deletion of directory contents, while regular files continue using Files.delete. This prevents exceptions that would occur if Files.delete were called on a non-empty directory.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
Lines 6-7: LGTM!

The TSChunkCacheService correctly extends ChunkCacheService and derives maxSizeBytes from the configuration. The DI wiring with @Inject is proper, and the protected visibility of maxSizeBytes matches the contract of the parent class.
Actionable comments posted: 1
♻️ Duplicate comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
Lines 131-157: Return the final persisted version, not just the count (and keep the grouping materialized).

Iterator exhaustion is fixed via toSeq, but returning the count alone can mislead callers. Prefer returning the actual last version written so follow-up uploads can chain correctly.

```diff
-    updatesGrouped = updateActions.grouped(100).toSeq
+    updatesGrouped = updateActions.grouped(100).toSeq
     _ <- Fox.serialCombined(updatesGrouped.zipWithIndex) {
       case (updateGroup: Seq[UpdateAction], updateGroupIndex) =>
         tracingDataStore.annotationUpdates.put(annotationId.toString,
                                                startVersion + updateGroupIndex,
                                                Json.toJson(updateGroup))
     }
-    numberOfSavedVersions = updatesGrouped.length
-  } yield numberOfSavedVersions
+    finalVersion =
+      if (updatesGrouped.isEmpty) startVersion - 1 else startVersion + updatesGrouped.size - 1
+  } yield finalVersion
```

If the API intentionally needs "number of saved versions", please align the controller/clients accordingly and rename variables to avoid confusion.
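The suggested semantics can be pinned down with a tiny helper that makes the off-by-one explicit (a hypothetical standalone function, not the PR's code):

```scala
// n update actions saved in groups of `groupSize`, starting at `startVersion`:
// group i is written at version startVersion + i, so the last version written
// is startVersion + numGroups - 1. With zero updates, nothing is written and
// the "last persisted version" stays at startVersion - 1.
def finalPersistedVersion(numUpdates: Int, startVersion: Long, groupSize: Int = 100): Long = {
  val numGroups = (numUpdates + groupSize - 1) / groupSize // ceiling division
  if (numGroups == 0) startVersion - 1 else startVersion + numGroups - 1
}
```

Note that the count of saved versions (numGroups) and the final version only coincide when startVersion is 1, which is exactly why returning the count alone can mislead chained uploads.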
🧹 Nitpick comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
Lines 186-189: Validate shape consistency between arrays.

Ensure the edgeIsAddition length matches the edges array's first dimension to avoid silent truncation/mismatch.

```diff
-      numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+      numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+      numAdds <- edgeIsAdditionZarrArray.datasetShape.flatMap(_.headOption).toFox
+      _ <- Fox.fromBool(numEdges == numAdds) ?~> "editableMappingFromZip.edgeIsAddition.length.mismatch"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- app/controllers/AnnotationIOController.scala (3 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
- webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
🔇 Additional comments (8)
app/controllers/AnnotationIOController.scala (5)
Lines 135-153: LGTM! Return value properly destructured and propagated.

The changes correctly handle the new tuple return type from mergeAndSaveVolumeLayers, destructuring it into mergedVolumeLayers and earliestAccessibleVersion. The earliestAccessibleVersion is appropriately passed to the AnnotationProto constructor, replacing the previous constant 0L.

Lines 171-173: LGTM! Function signature and empty case correctly updated.

The return type change to Fox[(List[AnnotationLayer], Long)] is consistent with the need to propagate the earliest accessible version. Returning 0L for the empty case is appropriate, likely indicating no version restrictions when no volume layers are present.

Lines 174-181: LGTM! Comprehensive edge-case validations.

The three validation checks effectively prevent problematic scenarios:

- Lines 174-176: Prevents duplicate fallback layers within a single annotation using distinctBy, ensuring each volume layer references a unique segmentation layer.
- Lines 177-178: Blocks merging when multiple annotations each contain multiple volume layers, avoiding complex multi-dimensional merge logic.
- Lines 179-181: Prevents merging annotations with editable mapping edges, as merging proofreading history is complex and unsupported.

These validations are well-reasoned and protect against data corruption.

Lines 183-218: Past version counter issue successfully resolved!

The previous review flagged a critical bug where the version counter would reset when saveEditableMappingIfPresent returned 0L, corrupting FossilDB history by overwriting earlier updates. The current implementation at line 199 correctly uses addition rather than replacement:

```scala
_ = layerUpdatesStartVersionMutable = layerUpdatesStartVersionMutable + numberOfSavedVersions
```

This ensures:

- When numberOfSavedVersions = 0 (no editable mapping): the counter remains unchanged ✓
- When numberOfSavedVersions > 0: the counter advances by the exact number of saved versions ✓

The monotonic version ordering is now preserved across all layers, preventing history corruption.

Example trace:

- Layer 1: saves updates 1-5 → counter becomes 6
- Layer 2: no editable mapping → counter stays 6
- Layer 3: saves updates 6-8 → counter becomes 9

Each layer receives a non-overlapping version range, as intended.
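The bookkeeping in the trace above reduces to a running sum; a hedged one-liner (an illustrative helper, not the PR's code) makes the per-layer start versions explicit:

```scala
// Given the number of versions each layer saves, compute the start version
// each layer's update actions should use. A layer saving 0 versions simply
// reuses the current counter, so no version range is skipped or overwritten.
def startVersionsPerLayer(savedPerLayer: Seq[Long], initial: Long = 1L): Seq[Long] =
  savedPerLayer.scanLeft(initial)(_ + _).init // prefix sums, dropping the final counter
```

For the trace above, savedPerLayer = Seq(5, 0, 3) yields start versions 1, 6, 6, matching the non-overlapping ranges the comment describes.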
Line 218: Verify return value consistency across scenarios.

There's a potential inconsistency in the earliestAccessibleVersion returned:

- Line 218 (single annotation): returns layerUpdatesStartVersionMutable, which is 1L if no editable mappings were saved.
- Line 239 (multiple annotations): returns 0L even though no editable mappings can exist (blocked by the lines 179-181 validation).

Question: when no editable mappings are saved, should both scenarios return 0L to consistently indicate "no version restrictions"?

The difference might be:

- 0L = special sentinel for "all versions accessible, no imported history"
- 1L = "versions start at 1, all accessible"

If both values have the same meaning, this inconsistency is harmless. However, if 0L vs 1L have different semantics when used to restrict version access elsewhere in the codebase, returning 1L for a single annotation without editable mappings could inadvertently restrict access.

Run the following script to understand how earliestAccessibleVersion is used.

Also applies to: lines 232-239.
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (2)

Lines 34-36: Good extension with KeyValueStoreImplicits.

Brings in needed toFox/shiftBox utils; consistent with usage below.

Lines 193-211: Harden upload endpoint: parser, logging, cleanup, and return semantics.

- Use a raw body parser with explicit maxLength and wrap in log() for consistency and observability.
- Ensure the uploaded temp file is deleted after processing to avoid disk leaks.
- Verify whether the API should return the final persisted version (startVersion + groups - 1) vs just the count; caller expectations may break otherwise.
- Optionally assert that annotationId actually owns tracingId and that the tracing has an editableMapping before writing.

Example adjustments:

```diff
-  def saveFromZip(...): Action[AnyContent] =
-    Action.async { implicit request =>
+  def saveFromZip(...): Action[AnyContent] =
+    Action.async(bodyParsers.raw /* consider maxLength */) { implicit request =>
-      accessTokenService.validateAccessFromTokenContext(UserAccessRequest.webknossos) {
-        for {
-          editedEdgesZip <- request.body.asRaw.map(_.asFile).toFox ?~> "zipFile.notFound"
+      log() {
+        accessTokenService.validateAccessFromTokenContext(UserAccessRequest.webknossos) {
+          for {
+            editedEdgesZip <- request.body.asRaw.map(_.asFile).toFox ?~> "zipFile.notFound"
             before = Instant.now
-            numberOfSavedVersions <- editableMappingIOService.initializeFromUploadedZip(tracingId,
+            saved <- editableMappingIOService.initializeFromUploadedZip(tracingId,
                                                                                        annotationId,
                                                                                        startVersion,
                                                                                        baseMappingName,
                                                                                        editedEdgesZip)
             _ = Instant.logSince(before, s"Initializing editable mapping $tracingId from zip")
-          } yield Ok(Json.toJson(numberOfSavedVersions))
-        }
+          } yield {
+            // best-effort cleanup of uploaded temp file
+            try editedEdgesZip.delete() catch { case _: Throwable => () }
+            Ok(Json.toJson(saved))
+          }
+        }
+      }
     }
```

To confirm what callers expect (count vs final version), run:
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
Lines 192-225: UpdateAction construction looks correct.

Correctly maps additions to Merge and removals to Split with segment IDs, unit mag, and timestamp. Good defaults for optional metadata.
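The edge-to-action mapping praised here can be sketched in a few lines. The case-class names below are invented stand-ins (the real code builds the merge/split agglomerate update actions with positions, mag, and timestamps); only the branching logic is the point:

```scala
// Hedged sketch: an added edge between two segments becomes a merge of their
// agglomerates, a removed edge becomes a split.
sealed trait EdgeActionSketch
final case class MergeSketch(segmentId1: Long, segmentId2: Long) extends EdgeActionSketch
final case class SplitSketch(segmentId1: Long, segmentId2: Long) extends EdgeActionSketch

def actionFromEdge(edgeSrc: Long, edgeDst: Long, isAddition: Boolean): EdgeActionSketch =
  if (isAddition) MergeSketch(edgeSrc, edgeDst) else SplitSketch(edgeSrc, edgeDst)
```

This also illustrates the naming question raised further down: a boolean called isAddition only reads clearly once you know an "added" edge means a merge.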
Codewise I didn't find much. A few optional improvements and mainly questions :)
Did not test yet though
```diff
         : (action.value.segmentId1 ?? "unknown");
       const segment2Description =
-        action.value.segmentPosition2 ?? action.value.segmentId1 ?? "unknown";
+        action.value.segmentPosition2 ?? action.value.segmentId2 ?? "unknown";
```
uff xD, thanks for fixing this
```diff
-        Fox.successful(List())
+        Fox.successful(List(), 0L)
       else if (volumeLayersGrouped.exists(layersOfAnnotation =>
         layersOfAnnotation.length != layersOfAnnotation.distinctBy(_.tracing.fallbackLayer).length))
```
I think this assertion here is not correct. What if I upload an annotation with 2 "custom" volume layers where each has no fallback layer? In that case this assertion would fail but the annotation should be correct 🤔
```scala
          startVersion = layerUpdatesStartVersionMutable
        )
        // The next layer’s update actions then need to start after this one
        _ = layerUpdatesStartVersionMutable = layerUpdatesStartVersionMutable + numberOfSavedVersions
```
Why is there an assignment to _? I think this is unnecessary
```diff
- _ = layerUpdatesStartVersionMutable = layerUpdatesStartVersionMutable + numberOfSavedVersions
+ layerUpdatesStartVersionMutable = layerUpdatesStartVersionMutable + numberOfSavedVersions
```
```scala
        )
      }
    } yield (annotationLayers, layerUpdatesStartVersionMutable)
  } else { // Multiple annotations with volume layers (but at most one each) was uploaded, they have no editable mappings. merge those volume layers into one
```
Grammar / punctuation is a little off in the comment. Maybe something like
// Multiple annotations with volume layers (but at most one each) were uploaded. Merge those volume layers into one. None has editable mappings.
Why can't there be more than one volume layer per annotation? Is such an upload not supported in general? In that case, my comment above regarding the assertion is wrong.
```scala
        ),
        basePath.getOrElse("") + v.dataZipPath,
        v.name,
        v.editedMappingEdgesLocation.map(location => basePath.getOrElse("") + location),
```
Shouldn't this preferably use UPath or so, to use a / operator instead of concatenating strings? Overall the result would very likely be a simple string concatenation, but it might feel safer? 🤔
```scala
    for {
      _ <- tracingDataStore.editableMappingsInfo.put(tracingId,
                                                     0L,
                                                     toProtoBytes(editableMappingService.create(baseMappingName)))
```
🤔 might not matter but you are not creating the editable mapping info at the passed startVersionNumber but instead at version 0
```scala
      None,
      None,
      None,
      chunkCacheService.sharedChunkContentsCache)
```
🤔 Here the chunk cache is passed so that the zarr3 array has one, as it needs one. But this means that, when reading the whole array (without me having checked the code), all of the array is loaded into the cache. Imo, after the upload this zarr array is never read from again and thus obsolete, so loading all its content through a cache is as well. I'd say this is unnecessary and could maybe be replaced with a None or so, which could turn off caching in a zarr array, or with a cache that never stores anything 🤔?
```scala
  private def buildUpdateActionFromEdge(edgeSrc: Long,
                                        edgeDst: Long,
                                        isAddition: Boolean,
```
🤔 If "isAddition" means a merging edge, I think it would be more intuitive to call it isMergingEdge or isMerging or so. While reading the previous code up until now, I was unsure what "isAddition" and its zarr array meant.
```scala
      segmentPosition2 = None,
      segmentId1 = Some(edgeSrc),
      segmentId2 = Some(edgeDst),
      mag = Vec3Int.ones,
```
Just a thought: Can this become a problem if the dataset actually does not have mag 1 🤔?
```
    address = "localhost"
    port = 6379
  }
  cache.chunkCacheMaxSizeBytes = 2000000000 # 2 GB
```
As I posted before, I think caching the read results from the uploaded editable mappings doesn't yield any benefit imo. In that case reducing the max cache size could leave more free RAM?! (Or not having a cache at all)
URL of deployed dev instance (used for testing):
Steps to test:
TODOs:
Issues:
$PR_NUMBER.md file in unreleased_changes or use ./tools/create-changelog-entry.py)