Interpolate laser from file on GPU #1330
Open
This PR rewrites the laser file reader to achieve a significant performance improvement for a production simulation. Previously, the laser input grid was interpolated onto the full 3D simulation grid by a single CPU core at initialization, which could take a long time and consume a lot of CPU memory even for moderately sized simulations. Now the laser file is read directly into pinned memory and interpolated slice by slice into the main laser array by the GPU during the first time step. The implementation is similar to that of the plasma density file reader.
I also reorganized the profiling regions so that they are less redundant and actually measure the time spent in laser initialization, and I fixed some formatting in the other laser initialization types.
PR:
Dev:
Tested with rt and xyz laser input files