Conversation

Parv621 commented Nov 27, 2025

This PR implements two FNO (Fourier Neural Operator) example notebooks:

  1. Darcy Flow (based on the Darcy flow example from the neuralop library):
    examples/NOs/Part_1_FNO_DarcyFlow.ipynb
    Inspiration: https://neuraloperator.github.io/dev/auto_examples/models/plot_FNO_darcy.html

  2. Diffusion Equation (the original PINN example adapted to grid-based data):
    examples/NOs/Part_1_FNO_DiffusionEquation.ipynb
    Inspiration: examples/PDEs/Part_1_PINN_DiffusionEquation.ipynb


Features

  1. FNO Wrapper (src/neuromancer/modules/operators.py)
    Provides a thin interface that enables direct use of neuralop modules while attaching the required _metadata for checkpoint compatibility across PyTorch versions. This approach generalizes to any neural operator from the neuralop library. A minimal usage sketch is given after this list.

  2. Custom Losses
    Implementations of LpLoss and H1Loss in src/neuromancer/modules/operators.py.
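
A minimal sketch of how the wrapper and losses might fit together; the FNOWrapper class name, the datadict keys, and the FNO hyperparameters below are illustrative assumptions, not the exact API added in this PR:

from neuralop.models import FNO
from neuromancer.system import Node
# hypothetical names; the wrapper/loss classes in operators.py may be named differently
from neuromancer.modules.operators import FNOWrapper, H1Loss

# wrap a neuralop FNO so it carries the _metadata needed for checkpoint compatibility
fno = FNOWrapper(FNO(n_modes=(16, 16), hidden_channels=64,
                     in_channels=1, out_channels=1))

# expose the operator as a Neuromancer node mapping datadict keys to predictions
fno_node = Node(fno, ['x_grid'], ['y_fno'], name='fno')

# grid-based training loss from the same module (constructor assumed to mirror neuralop's)
h1_loss_fn = H1Loss(d=2)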


Future Task (Non-blocking)

Potential improvement: integrate the L2/H1 loss terms using Neuromancer's built-in L2norm abstraction instead of the custom loss code.
This requires a short review of the H1 loss formulation.
It is not required for this PR and can be revisited later without affecting current functionality.


Pseudocode

import torch
from neuromancer.constraint import Objective, variable

# define L2 loss objective for FNO
y_true = variable("y_grid")          # ground-truth field in your datadict
y_hat_fno = variable("y_hat_fno")    # output from the FNO node

# Approach 1: Using Constraint from Variables
l2_constraint = (y_hat_fno == y_true) ^ 2   # Eq constraint with quadratic penalty
l2_constraint.name = "l2norm"

# Approach 2: Using Objective
residual = y_hat_fno - y_true
l2_obj = Objective(residual**2, metric=torch.mean, name="l2_loss")

from neuromancer.loss import PenaltyLoss
from neuromancer.problem import Problem

# aggregate list of objective terms and constraints
# objectives_fno = [l2_constraint]  # L2 loss for training FNO
objectives_fno = [l2_obj]
constraints_fno = []

loss_fno = PenaltyLoss(objectives_fno, constraints_fno)

# construct the FNO supervised training problem
problem_fno = Problem(nodes=[fno_node], loss=loss_fno, grad_inference=True)
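
For completeness, a hedged sketch of how problem_fno could then be trained. The 'x_grid' input key, the x_train/y_train/x_dev/y_dev placeholders, the batch size, and the optimizer settings are assumptions following standard Neuromancer usage, not code from this PR:

import torch
from neuromancer.dataset import DictDataset
from neuromancer.trainer import Trainer

# assumed grid tensors; x_train, y_train, x_dev, y_dev are placeholders
train_data = DictDataset({'x_grid': x_train, 'y_grid': y_train}, name='train')
dev_data = DictDataset({'x_grid': x_dev, 'y_grid': y_dev}, name='dev')
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True,
                                           collate_fn=train_data.collate_fn)
dev_loader = torch.utils.data.DataLoader(dev_data, batch_size=32,
                                         collate_fn=dev_data.collate_fn)

# standard Neuromancer training loop over the supervised FNO problem
optimizer = torch.optim.Adam(problem_fno.parameters(), lr=1e-3)
trainer = Trainer(problem_fno, train_loader, dev_loader,
                  optimizer=optimizer, epochs=100)
best_model = trainer.train()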

drgona left a comment

This syntax will work for custom loss functions:

h1_obj = Loss(["y_fno", "y_grid"], lambda yhat, y: h1_loss_fn(yhat.squeeze(1), y.squeeze(1)), name="h1_loss_fn")

but it is unnecessarily verbose; any pytorch callable works on neuromancer variables.
You could define yhat and y to be neuromancer variables and then simply call:
h1_var = h1_loss_fn(yhat, y)
to instantiate a new variable h1_var; then you can use either h1_var.minimize() or h1_var == 0 to instantiate an objective or constraint term.

Parv621 commented Dec 1, 2025

> This syntax will work for custom loss functions:
> h1_obj = Loss(["y_fno", "y_grid"], lambda yhat, y: h1_loss_fn(yhat.squeeze(1), y.squeeze(1)), name="h1_loss_fn")
> but it is unnecessarily verbose; any pytorch callable works on neuromancer variables.
> You could define yhat and y to be neuromancer variables and then simply call:
> h1_var = h1_loss_fn(yhat, y)
> to instantiate a new variable h1_var; then you can use either h1_var.minimize() or h1_var == 0 to instantiate an objective or constraint term.

I'm not sure this will work, since h1_loss_fn is an H1Loss instance that expects real tensors rather than Neuromancer variables. I tried this:

y_true = variable("y_grid")  # ground-truth field in your datadict
y_hat_fno = variable("y_fno")  # output from the FNO node

h1_var = h1_loss_fn(y_hat_fno, y_true)
h1_constraint = (h1_var).minimize()

This snippet might just work for the H1 loss, without needing the H1Loss module. It follows the example in https://github.com/pnnl/neuromancer/blob/master/examples/ODEs/Part_2_param_estim_ODE.ipynb:

# Inputs from your datadict
y_true = variable("y_grid")          # B x C x H x W
y_hat_fno = variable("y_fno")        # B x C x H x W

# Finite forward differences
dx_hat = y_hat_fno[..., 1:, :] - y_hat_fno[..., :-1, :]
dx_true = y_true[..., 1:, :] - y_true[..., :-1, :]

dy_hat = y_hat_fno[..., :, 1:] - y_hat_fno[..., :, :-1]
dy_true = y_true[..., :, 1:] - y_true[..., :, :-1]

# Value (L2) and gradient (L2) matching terms; ^2 sets norm=2 (MSE) on the Eq comparator
val_constraint = ((y_hat_fno == y_true) ^ 2)
val_constraint.update_name("h1_value")

x_grad_constraint = ((dx_hat == dx_true) ^ 2)
x_grad_constraint.update_name("h1_dx")

y_grad_constraint = ((dy_hat == dy_true) ^ 2)
y_grad_constraint.update_name("h1_dy")
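
If this formulation is adopted, the three terms can be aggregated the same way as in the pseudocode above, for example:

from neuromancer.loss import PenaltyLoss
from neuromancer.problem import Problem

# combine the value term and the two finite-difference terms into one H1-style loss
constraints_h1 = [val_constraint, x_grad_constraint, y_grad_constraint]
loss_h1 = PenaltyLoss(objectives=[], constraints=constraints_h1)
problem_h1 = Problem(nodes=[fno_node], loss=loss_h1)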

drgona commented Dec 1, 2025

> This snippet might just work for the H1 loss, without needing the H1Loss module. [...]

This would work for a more didactic exposition of the loss construction.

Parv621 commented Dec 1, 2025

I've added three different ways to formulate this loss for your review.

drgona commented Dec 1, 2025

> I've added three different ways to formulate this loss for your review.

Approaches 1 and 2 are both good; we can discuss on Wednesday which one to pick.
