
Conversation

@Lagejohn (Collaborator) commented Sep 16, 2025

Adds a new "dry run" mode, invoked with -D or --dry-run on the command line. When enabled, MMTC does not keep any of its output data products; instead it prints the "hypothetical" outputs to the console and records them in the primary log. This includes:
- the latest SCLK kernel triplet (only the latest entry in most cases, or the latest two entries if the new kernel sets a new clock change rate, as in interpolated runs)
- the new row of the time history file
- any new SCLKSCET entries, if enabled
- new raw TLM table lines, if enabled
- uplink command entries, if enabled

Because of JNISpice limitations, the SCLK kernel must be at least temporarily written to disk for the SCLKSCET file to be generated, so in dry runs a temporary SCLK kernel is written to the system /tmp/ dir and deleted after the SCLKSCET file is created.

All output products logged/printed in dry runs match the "actual" outputs written to disk in non-dry runs, with the exception of the SCLKSCET file, which appears to contain duplicate entries -- see the comment below for more details.
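
For orientation, here is a very rough sketch of the dry-run branch described above. The helper names (writeSclkKernel, generateSclkScet, writeAllOutputProducts) are hypothetical stand-ins, not actual MMTC methods:

// Hypothetical control-flow sketch only; the real MMTC flow differs.
if (config.isDryRun()) {
    // JNISpice needs a real kernel file on disk to build the SCLKSCET product,
    // so write the new SCLK kernel to a temporary location first...
    Path tempKernel = writeSclkKernel(Paths.get("/tmp"));
    logger.info("New SCLKSCET entries:\n{}", generateSclkScet(tempKernel));
    // ...then delete it so that no output products survive the dry run.
    Files.deleteIfExists(tempKernel);
} else {
    writeAllOutputProducts();
}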

John Lage and others added 6 commits September 16, 2025 16:43
Add functionality to MMTC for adaptations to declare one or more custom output products, implemented by external jars (compiled against mmtc-core) that implement the OutputProductDefinitionFactory interface (a rough sketch of such a plugin follows below). These output product implementations:
- must be defined as OutputProductDefinitions that extend either AppendedFileOutputProductDefinition or EntireFileOutputProductDefinition
- are written last in a correlation run, and are included in the Run History, rollback, and sandboxing functionality
- are provided a TimeCorrelationContext from which to pull information about the current or past correlations

Also:
- Adds an output product plugin SDK, containing an example Gradle project with functional example implementations to serve as the base for custom plugins
- Allows the default Time History File product to be disabled
- Updates the User Guide with a new section and information on custom output products
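
A rough sketch of what such a plugin might look like; the method names and signatures below are illustrative guesses, not the actual mmtc-core API, and the SDK's example Gradle project is the authoritative reference:

// Illustrative sketch only -- the real OutputProductDefinitionFactory and
// OutputProductDefinition signatures live in mmtc-core and may differ from this.
public class ExampleOutputProductFactory implements OutputProductDefinitionFactory {
    // Assumed factory hook: hand MMTC the adaptation's custom product definition.
    public OutputProductDefinition create() {
        return new ExampleAppendedProduct();
    }
}

class ExampleAppendedProduct extends AppendedFileOutputProductDefinition {
    ExampleAppendedProduct() {
        super("Example Appended Product"); // product name, mirroring super("SCLKSCET File") below
    }

    // Assumed hook: build the record to append for the current correlation,
    // using values pulled from the provided TimeCorrelationContext.
    String buildNewRecord(TimeCorrelationContext ctx) {
        return ctx.appRunTime + ",example-value";
    }
}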

Path newFile = Paths.get(newFilePath);

if (Files.exists(newFile)) {
if (Files.exists(newFile) && !dirname.equals("/tmp")) { // Ignore duplicate products if we're writing to the /tmp/ dir--it means this is likely a dry run and the last run was aborted
Collaborator Author

This is an inelegant solution to the edge case where a SCLK kernel generated during a dry run doesn't get deleted from the /tmp directory and then blocks any further dry runs until a new kernel is generated normally. It happened consistently during debugging when a run was killed before completion, but it could theoretically happen any time an MMTC run aborts prematurely. Let me know if you think it's necessary.

Collaborator

I can see the benefit in adding this condition and it's hard to see any drawbacks. That said, consider using Java's built-in temporary file utility methods for this, a la: https://stackoverflow.com/a/26860215
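
For reference, a minimal sketch of that suggestion using the JDK's temp-file utilities; the prefix/suffix and method name are illustrative, not from the MMTC codebase:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Lets the JDK pick a unique temp path instead of hard-coding /tmp, so an
// aborted earlier dry run cannot leave a colliding kernel behind.
static Path writeTemporarySclkKernel() throws IOException {
    Path tempKernel = Files.createTempFile("mmtc-dry-run-sclk-", ".tsc");
    tempKernel.toFile().deleteOnExit(); // best-effort cleanup if the JVM exits early
    return tempKernel;
}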

public SclkScetProductDefinition() {
    super("SCLKSCET File");
}

private final int ENTRIES_TO_PRINT = 4;
Collaborator Author

This is my low-tech solution to the problem of knowing how many SCLKSCET entries to print after trying and failing to find a better way to dynamically determine how many have changed. If you can think of a better way, PLEASE let me know

Collaborator

I can't think of a simpler way right now. We can mark this for future improvement IMO.
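
If this does get revisited, one possible direction (sketched with hypothetical names, and assuming SCLKSCET data records are only ever appended or have their final record revised) is to diff the previous and new record lists and print only the tail that differs:

import java.util.List;

// Hypothetical helper, not MMTC code: returns the trailing records of newRecords
// that are new or changed relative to oldRecords.
static List<String> newScetEntries(List<String> oldRecords, List<String> newRecords) {
    int firstNew = Math.min(oldRecords.size(), newRecords.size());
    // Walk backwards in case trailing old records were rewritten (e.g. an updated clock change rate).
    while (firstNew > 0 && !newRecords.get(firstNew - 1).equals(oldRecords.get(firstNew - 1))) {
        firstNew--;
    }
    return newRecords.subList(firstNew, newRecords.size());
}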

@Lagejohn (Collaborator, Author)

TODO: Update user guide

@rmseng (Collaborator) left a comment

Great set of changes! Couple of suggestions. Thank you for handling this feature.

logger.warn(String.format("Test mode is enabled! One-way light time will be set to the provided value %f and ancillary positional and velocity calculations will be skipped.", config.getTestModeOwlt()));
}
if (config.isDryRun()) {
logger.warn("Dry run mode is enabled! No data products from this run will be kept and will instead be printed to the console and recorded in the log at {}/log/", System.getenv("MMTC_HOME"));
Collaborator

I like the thought behind showing the log path in this statement, but the log file path is set in log4j2.xml and could be any path on the local filesystem. Recommend either having this logging statement end with 'recorded in the log.' or using e.g. the log4j2 API to find the actual path.
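
For reference, a sketch of the log4j2-API option; the appender name "MainLogFile" is a placeholder for whatever log4j2.xml actually defines:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.FileAppender;

// Looks up the configured file appender's path at runtime instead of assuming $MMTC_HOME/log/.
static String primaryLogFilePath() {
    LoggerContext logCtx = (LoggerContext) LogManager.getContext(false);
    Appender appender = logCtx.getConfiguration().getAppenders().get("MainLogFile");
    return (appender instanceof FileAppender)
            ? ((FileAppender) appender).getFileName()
            : null; // fall back to the plain "recorded in the log." wording
}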

logger.info(String.format("Run at %s recorded to %s", ctx.appRunTime, config.getRunHistoryFilePath().toString()));
} else {
Files.deleteIfExists(ctx.newSclkKernelPath.get());
logger.info(USER_NOTICE, "Deleted temporary SCLK kernel {}. No other output products were written to disk.", ctx.newSclkKernelPath.get());
Collaborator

I'd say this could be at the debug log level; users shouldn't normally need to care about this implementation detail unless something has gone wrong.

Collaborator Author

The thought was that since there's an info level notice that a SCLK kernel got written, it might be good to reassure users that it got removed and there isn't some rogue SCLK kernel floating around on their filesystem somewhere when they were expecting no kept products from a dry run. Will change!

"D",
"dry-run",
false,
"Executes a dry run of MMTC with outputs calculated as otherwise configured " +
Collaborator

Nit, but if you're open to rewording, we might be able to state the impact of this more directly a la: "Enables dry-run mode, resulting in correlation outputs being printed & logged but not written to the filesystem"
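
If this option is registered through an Apache Commons CLI Options.addOption(...) call (the argument shape suggests it, though that is an assumption), the reworded registration could look like:

// 'options' is assumed to be the existing org.apache.commons.cli.Options instance.
options.addOption(
        "D",
        "dry-run",
        false,
        "Enables dry-run mode, resulting in correlation outputs being printed & logged "
                + "but not written to the filesystem");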

TableRecord rawTlmTableRecord = RawTelemetryTable.calculateUpdatedRawTlmTable(ctx);
List<String> rtHeaders = new RawTelemetryTable(ctx.config.getRawTelemetryTablePath()).getHeaders();
Collection<String> rtValues = rawTlmTableRecord.getValues();
String zippedRtRow = IntStream.range(0, rtHeaders.size())
Collaborator

Nice!
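
For readers skimming the thread, the completed form of that header/value "zip" idiom looks roughly like this; the separator and formatting are illustrative, not necessarily what MMTC uses:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Pairs each header with the value at the same index and joins the pairs into one row string.
static String zipRow(List<String> headers, List<String> values) {
    return IntStream.range(0, headers.size())
            .mapToObj(i -> headers.get(i) + "=" + values.get(i))
            .collect(Collectors.joining(", "));
}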

throw new RuntimeException(e);
}
List<String> thHeaders = new TimeHistoryFile(ctx.config.getTimeHistoryFilePath()).getHeaders();
Collection<String> thValues = timeHistRecord.getValues();
Collaborator

Recommend changing TableRecord.getValues to return a List instead of a Collection. The order of the values is meaningful, so there's no reason to hide it from TableRecord's users. That'll also save you from having to create a new ArrayList around them.
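
In sketch form (TableRecord's internals are assumed here; the point is only the return type):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TableRecord {
    private final List<String> values = new ArrayList<>(); // column order is meaningful

    public List<String> getValues() { // previously returned Collection<String>
        return Collections.unmodifiableList(values);
    }
}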


for (FrameSample sample : ctx.correlation.target.get().getSampleSet()) {
    rawTlmTableRecord = sample.toRawTelemetryTableRecord(rawTlmTable.getHeaders());
    rawTlmTableRecord.setValue(RawTelemetryTable.RUN_TIME, ctx.appRunTime.toString());
Collaborator

Probably can pull the statement on line 69 out of the for loop
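
In sketch form (which statement "line 69" refers to isn't visible here, so this hoists both loop-invariant pieces, the header lookup and the run-time string):

List<String> headers = rawTlmTable.getHeaders();
String runTime = ctx.appRunTime.toString();
for (FrameSample sample : ctx.correlation.target.get().getSampleSet()) {
    rawTlmTableRecord = sample.toRawTelemetryTableRecord(headers);
    rawTlmTableRecord.setValue(RawTelemetryTable.RUN_TIME, runTime);
    // ...
}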

private Double clockChgRate;

/* New interpolated clock change reate to ovewrite the predicted rate in the existing SCLK kernel record. */
/* New interpolated clock change rate to overwrite the predicted rate in the existing SCLK kernel record. */
Collaborator

Thank you.
