[Editorial illustration: Darcy flow mapping k(x,y) to u(x,y), visualized with color gradients and vector fields.]

NVIDIA PhysicsNeMo: Mapping Darcy Flow with AI Neural Ops



The tutorial walks you through building a Darcy‑flow surrogate with NVIDIA’s PhysicsNeMo library. It stitches together Fourier neural operators (FNOs) and physics‑informed neural networks (PINNs) into a single, reproducible pipeline. You’ll see a concrete class definition whose __init__ signature defaults in_channels and out_channels to 1, modes1 and modes2 to 12, width to 32, n_layers to 4, and padding to 9.

While the code scaffolding is straightforward, the real question is what the model ultimately predicts. Why does mapping a spatially varying permeability field k(x,y) to a pressure field u(x,y) matter for Darcy flow? Understanding that link is the key to evaluating surrogate performance, benchmarking inference speed, and comparing FNO‑based approaches against traditional solvers.
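One concrete equation makes the link explicit: in Darcy flow, the pressure u solves -div(k(x,y) grad u(x,y)) = f(x,y) for a source term f, so a surrogate that has truly learned the mapping should produce fields with a small PDE residual. A minimal NumPy sketch of that residual check (my own illustration, not code from the tutorial; the function name is an assumption):

```python
import numpy as np

def darcy_residual(k, u, f, h):
    """Residual of the Darcy equation -div(k grad u) = f on interior points.

    k, u, f are 2D arrays on a uniform grid with spacing h. Uses centered
    finite differences; an illustrative check, not code from the tutorial.
    """
    kin = k[1:-1, 1:-1]
    ux = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * h)
    uy = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * h)
    uxx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2
    uyy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2
    kx = (k[2:, 1:-1] - k[:-2, 1:-1]) / (2 * h)
    ky = (k[1:-1, 2:] - k[1:-1, :-2]) / (2 * h)
    # -div(k grad u) expanded by the product rule: -(kx*ux + ky*uy + k*(uxx + uyy))
    return -(kx * ux + ky * uy + kin * (uxx + uyy)) - f[1:-1, 1:-1]
```

Feeding a model's predicted u into such a check gives a physics-based error measure that complements plain pixel-wise MSE.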

The next line of the tutorial makes that connection explicit:

“Project back to output space. This learns the mapping: k(x,y) -> u(x,y) for Darcy flow.”

```python
from typing import Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F

# FNOBlock and device are defined earlier in the tutorial (not shown here).

class FourierNeuralOperator2D(nn.Module):
    """2D Fourier Neural Operator.

    Lifts the input, applies spectral convolution blocks, and projects back
    to output space. This learns the mapping: k(x,y) -> u(x,y) for Darcy flow.
    """

    def __init__(
        self,
        in_channels: int = 1,
        out_channels: int = 1,
        modes1: int = 12,
        modes2: int = 12,
        width: int = 32,
        n_layers: int = 4,
        padding: int = 9
    ):
        super().__init__()
        self.modes1 = modes1
        self.modes2 = modes2
        self.width = width
        self.padding = padding
        self.fc0 = nn.Linear(in_channels + 2, width)
        self.fno_blocks = nn.ModuleList([
            FNOBlock(width, modes1, modes2) for _ in range(n_layers)
        ])
        self.fc1 = nn.Linear(width, 128)
        self.fc2 = nn.Linear(128, out_channels)

    def get_grid(self, shape: Tuple, device: torch.device) -> torch.Tensor:
        """Create normalized grid coordinates."""
        batch_size, size_x, size_y = shape[0], shape[2], shape[3]
        gridx = torch.linspace(0, 1, size_x, device=device)
        gridy = torch.linspace(0, 1, size_y, device=device)
        gridx, gridy = torch.meshgrid(gridx, gridy, indexing='ij')
        grid = torch.stack([gridx, gridy], dim=-1)
        grid = grid.unsqueeze(0).repeat(batch_size, 1, 1, 1)
        return grid

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch_size = x.shape[0]
        grid = self.get_grid(x.shape, x.device)
        x = x.permute(0, 2, 3, 1)
        x = torch.cat([x, grid], dim=-1)
        x = self.fc0(x)
        x = x.permute(0, 3, 1, 2)
        if self.padding > 0:
            x = F.pad(x, [0, self.padding, 0, self.padding])
        for block in self.fno_blocks:
            x = block(x)
        if self.padding > 0:
            x = x[..., :-self.padding, :-self.padding]
        x = x.permute(0, 2, 3, 1)
        x = F.gelu(self.fc1(x))
        x = self.fc2(x)
        x = x.permute(0, 3, 1, 2)
        return x


print("\nCreating Fourier Neural Operator model...")
fno_model = FourierNeuralOperator2D(
    in_channels=1,
    out_channels=1,
    modes1=8,
    modes2=8,
    width=32,
    n_layers=4,
    padding=5
).to(device)

n_params = sum(p.numel() for p in fno_model.parameters() if p.requires_grad)
print(f"✓ FNO Model created with {n_params:,} trainable parameters")
```

We first visualize the generated Darcy flow samples to clearly see the relationship between the permeability field and the resulting pressure field.
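The snippet calls an FNOBlock class that is defined earlier in the tutorial and not reproduced above. Its core operation — mixing channels on a truncated set of Fourier modes — can be sketched in NumPy (function and argument names here are my own, not the tutorial's API):

```python
import numpy as np

def spectral_conv2d(x, weights, modes1, modes2):
    """Channel mixing on the lowest Fourier modes, the heart of an FNO block.

    x: real array (channels, H, W); weights: complex array of shape
    (in_channels, out_channels, modes1, modes2). High frequencies are
    discarded, which is what makes the operator resolution-invariant.
    Real FNO blocks also keep a mirrored mode block and add a pointwise
    convolution; this is a simplified sketch.
    """
    c, H, W = x.shape
    x_ft = np.fft.rfft2(x)                               # (c, H, W//2 + 1)
    out_c = weights.shape[1]
    out_ft = np.zeros((out_c, H, W // 2 + 1), dtype=complex)
    # Multiply only the retained low-frequency modes by the learned weights
    out_ft[:, :modes1, :modes2] = np.einsum(
        "ioxy,ixy->oxy", weights, x_ft[:, :modes1, :modes2])
    return np.fft.irfft2(out_ft, s=(H, W))
```

In the PyTorch model above, each FNOBlock applies this kind of spectral convolution with learnable complex weights, so modes1 and modes2 directly cap how much spatial detail each layer can represent.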

The tutorial walks through a complete PhysicsNeMo pipeline on Colab. Starting with environment setup, it generates synthetic 2D Darcy flow data and visualizes permeability k(x,y) alongside pressure u(x,y). From there, the guide implements a Fourier Neural Operator whose defaults set modes1 and modes2 to 12 and width to 32 (the Colab run instantiates it with modes of 8 and padding of 5), and trains it against the generated fields.
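The tutorial's exact data generator is not shown, but a common recipe for Darcy benchmarks — and a reasonable stand-in for experimenting — is to smooth white noise into a Gaussian random field and exponentiate it so the permeability stays positive. A hedged NumPy sketch (my own, assuming this standard setup):

```python
import numpy as np

def sample_permeability(n, length_scale=0.1, seed=0):
    """Sample a smooth random permeability field k(x,y) on an n x n grid.

    Low-pass filters white noise in Fourier space, normalizes, and
    exponentiates so that k > 0 everywhere (a log-normal field). This mirrors
    the usual Gaussian-random-field setup for Darcy flow benchmarks; the
    tutorial's actual generator may differ.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    # Gaussian low-pass filter: larger length_scale keeps more high frequencies
    filt = np.exp(-(kx**2 + ky**2) / (2 * length_scale**2))
    field = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    field /= field.std() + 1e-12
    return np.exp(field)
```

Plotting such a field next to the pressure obtained from a numerical solver is exactly the side-by-side visualization the tutorial uses to build intuition for the k(x,y) -> u(x,y) mapping.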

A convolutional surrogate baseline is also built for comparison, while a brief foray into Physics‑Informed Neural Networks illustrates how loss terms can encode governing equations. The code snippet shows an __init__ signature that exposes channels, modes, layers and padding, hinting at flexibility but leaving hyper‑parameter selection to the user. Training curves appear reasonable, yet the article does not quantify error margins or benchmark inference speed beyond a single statement.
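The PINN idea mentioned above — encoding the governing equation as a loss term — can be made concrete with a small sketch. This is my own illustration of the pattern, not the tutorial's implementation: a data-fit term plus a weighted penalty on the Darcy residual, with derivatives supplied by np.gradient on a uniform grid:

```python
import numpy as np

def physics_informed_loss(u_pred, u_true, k, f, h, lam=0.1):
    """Data-fit loss plus a penalty on the Darcy residual -div(k grad u) - f.

    Illustrative sketch of a PINN-style composite loss (not the tutorial's
    code). np.gradient gives centered differences at interior points; edge
    cells use one-sided differences, so the physics term is averaged over
    interior points only.
    """
    data_term = np.mean((u_pred - u_true) ** 2)
    ux, uy = np.gradient(u_pred, h)
    div = np.gradient(k * ux, h, axis=0) + np.gradient(k * uy, h, axis=1)
    residual = -div - f
    physics_term = np.mean(residual[2:-2, 2:-2] ** 2)
    return data_term + lam * physics_term
```

The weight lam controls the trade-off: too small and the physics term is ignored, too large and it can swamp the data fit early in training.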

Consequently, while the workflow demonstrates that mapping k(x,y) → u(x,y) is feasible with current tools, it remains unclear how the approach scales to more complex geometries or real‑world data. Overall, the tutorial provides a hands‑on entry point, though further validation would be needed to assess robustness. Will it generalize?


Common Questions Answered

How does the PhysicsNeMo tutorial map permeability k(x,y) to pressure u(x,y) using Fourier Neural Operators?

The tutorial demonstrates mapping permeability k(x,y) to pressure u(x,y) by implementing a Fourier Neural Operator (FNO) with specific configurations like modes1 and modes2 set to 12. The implementation uses a neural operator architecture that learns the relationship between input permeability fields and output pressure fields from paired training samples; a separate section shows how physics-informed loss terms can encode the governing equation.

What are the key parameters in the FNO model's __init__ method for the Darcy flow surrogate?

The key parameters are in_channels and out_channels (both defaulting to 1), modes1 and modes2 (defaulting to 12), width (32), n_layers (4), and padding (9); the Colab run overrides modes1 and modes2 to 8 and padding to 5. Together these define the network's architecture, controlling the capacity and structure of the Fourier Neural Operator used to model the Darcy flow.
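To see how these hyper-parameters drive model size, here is a rough back-of-the-envelope count. It assumes each FNO block stores one complex spectral weight tensor of shape (width, width, modes1, modes2) plus a pointwise linear layer; real FNO implementations often keep two such spectral tensors (one per retained corner of the spectrum), so treat this as a lower-bound estimate, not the tutorial's exact printed number:

```python
def estimate_fno_params(in_channels=1, out_channels=1, modes1=12, modes2=12,
                        width=32, n_layers=4):
    """Lower-bound parameter estimate for an FNO like the one in the tutorial.

    Assumes per block: one complex spectral weight tensor
    (width, width, modes1, modes2) and a pointwise linear layer.
    """
    fc0 = (in_channels + 2) * width + width        # lift: Linear(in+2 -> width), +2 for the grid
    spectral = 2 * width * width * modes1 * modes2  # complex weights count as 2 real params
    pointwise = width * width + width               # 1x1 channel mixing
    blocks = n_layers * (spectral + pointwise)
    fc1 = width * 128 + 128                         # projection head
    fc2 = 128 * out_channels + out_channels
    return fc0 + blocks + fc1 + fc2
```

Note that the spectral term dominates and scales with width**2 * modes1 * modes2, which is why trimming modes from 12 to 8 (as the Colab run does) shrinks the model substantially while width stays fixed.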

How does the tutorial approach generating and visualizing Darcy flow data?

The tutorial generates synthetic 2D Darcy flow data and visualizes both the permeability k(x,y) and pressure u(x,y) fields. It implements the data generation process on Google Colab, letting users create and explore the relationship between permeability and pressure before any model is trained.