<li><code>cyl_coord</code> activates cylindrical coordinates. The domain is then defined in $x$-$y$-$z$ cylindrical coordinates instead of Cartesian coordinates, and is discretized along the cylindrical coordinate axes. When $p=0$, the domain is defined in $x$-$y$ axisymmetric coordinates. In both cases, mesh stretching can be defined along the $x$- and $y$-axes. The MPI topology is automatically optimized to maximize parallel efficiency for the chosen coordinate system.</li>
<li><code>dt</code> specifies the constant time step size used in the simulation. The value of <code>dt</code> must be small enough that the Courant-Friedrichs-Lewy (CFL) condition is satisfied.</li>
<li><code>t_step_start</code> and <code>t_step_end</code> define the time steps at which the simulation starts and ends, respectively. <code>t_step_save</code> is the time step interval for data output during the simulation. To start a new simulation, set <code>t_step_start = 0</code>. To restart the simulation from the $k$-th time step, set <code>t_step_start = k</code>, do not run <code>pre_process</code>, and run <code>simulation</code> directly (<code>./mfc.sh run [...] -t simulation</code>). Ensure the data for the $k$-th time step is stored in the <code>restart_data/</code> directory within the case repository. These keys, together with <code>cyl_coord</code> and <code>dt</code>, are illustrated in the sketch after this list.</li>
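<p>A minimal, illustrative sketch of how these keys might appear in a case file is shown below. It assumes the common MFC pattern of a Python script that prints its case dictionary as JSON and the <code>'T'</code>/<code>'F'</code> convention for logical parameters; all numerical values are placeholders rather than a working setup.</p>
<div class="fragment"><div class="line">#!/usr/bin/env python3</div>
<div class="line">import json</div>
<div class="line"> </div>
<div class="line">case = {</div>
<div class="line">    # Coordinate system: 'F' for Cartesian, 'T' for cylindrical</div>
<div class="line">    # (x-y axisymmetric when p = 0).</div>
<div class="line">    'cyl_coord'    : 'F',</div>
<div class="line">    # Constant time step; must satisfy the CFL condition,</div>
<div class="line">    # roughly dt &lt;= CFL * dx / (|u| + c) for the fastest wave.</div>
<div class="line">    'dt'           : 1.0e-06,</div>
<div class="line">    # Fresh run: start at step 0. To restart from step k, set</div>
<div class="line">    # 't_step_start': k and rerun only the simulation stage.</div>
<div class="line">    't_step_start' : 0,</div>
<div class="line">    't_step_end'   : 10000,</div>
<div class="line">    't_step_save'  : 100,</div>
<div class="line">    # ... remaining domain, patch, and fluid parameters ...</div>
<div class="line">}</div>
<div class="line"> </div>
<div class="line">print(json.dumps(case))</div>
</div><!-- fragment -->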
<h1>Isentropic vortex problem (2D)</h1>
<p>Reference: Coralic, V., & Colonius, T. (2014). Finite-volume WENO scheme for viscous compressible multicomponent flows. Journal of Computational Physics, 274, 95–121. <a href="https://doi.org/10.1016/j.jcp.2014.06.003">https://doi.org/10.1016/j.jcp.2014.06.003</a></p>
<p>Reference: C. W. Shu, S. Osher, Efficient implementation of essentially non-oscillatory shock-capturing schemes, Journal of Computational Physics 77 (2) (1988) 439–471. doi:10.1016/0021-9991(88)90177-5.</p>
<h1>Titarev-Toro problem (1D)</h1>
<p>Reference: V. A. Titarev, E. F. Toro, Finite-volume WENO schemes for three-dimensional conservation laws, Journal of Computational Physics 201 (1) (2004) 238–260.</p>
<p>The <a href="case.py"><b>3D_weak_scaling</b></a> case depends on two parameters:</p>
<ul>
<li><b>The number of MPI ranks</b> (<em>procs</em>): As <em>procs</em> increases, the global problem size grows while the problem size per rank remains constant. <em>procs</em> is determined from information provided to the case file by <code>mfc.sh run</code>.</li>
<li><b>GPU memory usage per rank</b> (<em>gbpp</em>): As <em>gbpp</em> increases, the problem size per rank increases and the number of time steps decreases so that wall times remain roughly constant. <em>gbpp</em> is a user-defined optional argument to the <a href="case.py">case.py</a> file. It can be specified right after the case filepath when invoking <code>mfc.sh run</code> (a sketch of how a case file could read this argument follows this list).</li>
</ul>
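<p>As noted in the list above, one way a case file could read such an optional trailing argument is sketched below using Python's <code>argparse</code>. Both the argument handling and the scaling formulas are illustrative assumptions, not the actual logic of <a href="case.py">case.py</a>.</p>
<div class="fragment"><div class="line">#!/usr/bin/env python3</div>
<div class="line">import argparse</div>
<div class="line"> </div>
<div class="line"># Hypothetical parsing of the optional per-rank GPU memory target (GB).</div>
<div class="line"># parse_known_args() ignores any extra flags forwarded by mfc.sh run.</div>
<div class="line">parser = argparse.ArgumentParser()</div>
<div class="line">parser.add_argument('gbpp', type=float, nargs='?', default=1.0)</div>
<div class="line">args, _ = parser.parse_known_args()</div>
<div class="line"> </div>
<div class="line"># Illustrative scaling only: grow the per-rank grid with gbpp and</div>
<div class="line"># shrink the step count so the wall time stays roughly constant.</div>
<div class="line">cells_per_rank = int(192 * args.gbpp ** (1.0 / 3.0))</div>
<div class="line">t_step_end     = max(1, int(1000 / args.gbpp))</div>
</div><!-- fragment -->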
<p>Weak scaling benchmarks can be produced by keeping <em>gbpp</em> constant and varying <em>procs</em>.</p>
<p>For example, to run a weak scaling test that uses ~4 GB of GPU memory per rank on eight 2-rank nodes with case optimization, one could:</p>
<div class="fragment"><div class="line">./mfc.sh run examples/3D_weak_scaling/case.py 4 -t pre_process simulation \</div>
</div><!-- fragment --><h1><a class="anchor" id="autotoc_md41"></a>
Lid-Driven Cavity Problem (2D)</h1>
<p>Reference: Bezgin, D. A., Buhendwa, A. B., & Adams, N. A. (2022). JAX-FLUIDS: A fully-differentiable high-order computational fluid dynamics solver for compressible two-phase flows. arXiv:2203.13760</p>
<p>Reference: Ghia, U., Ghia, K. N., & Shin, C. T. (1982). High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. Journal of Computational Physics, 48, 387-411.</p>
<p>Reference: Chamarthi, A., Hoffmann, N., Nishikawa, H., & Frankel, S. (2023). Implicit gradients based conservative numerical scheme for compressible flows. arXiv:2110.05461</p>