- In Maxima:
  - Define governing equations
  - Substitute finite difference expressions for differential expressions in the governing equations
  - Output appropriate difference expressions to Fortran using f90()
- In Fortran: wrap the expressions generated by Maxima in appropriate functions, subroutines and modules
- In Python:
  - Compile modules with f2py, which comes packaged along with numpy
  - Write the high-level application code: parsing input decks, performing optimization, grid convergence studies, or visualizations that use the compiled Fortran modules

Maxima
The governing equation is the three-dimensional Poisson's equation. In Cartesian coordinates,

∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = -f

The Maxima code to define this equation is straightforward:

depends(u,x); depends(u,y); depends(u,z);
/* Poisson's equation: */
p : 'diff(u,x,2) + 'diff(u,y,2) + 'diff(u,z,2) = -f;
Now that we have the continuous differential equation defined, we can decide on what sort of discrete approximation we are going to solve. The simple thing to do with second derivatives is to use central differences of second-order accuracy, which require only information from the closest neighbours in the finite difference grid. Replacing the differential expressions with difference expressions is accomplished in Maxima with the ratsubst() function.

/* substitute difference expressions for differential ones: */
p : ratsubst((u[i-1,j,k] - 2*u[i,j,k] + u[i+1,j,k])/dx**2 , 'diff(u,x,2), p);
p : ratsubst((u[i,j-1,k] - 2*u[i,j,k] + u[i,j+1,k])/dy**2 , 'diff(u,y,2), p);
p : ratsubst((u[i,j,k-1] - 2*u[i,j,k] + u[i,j,k+1])/dz**2 , 'diff(u,z,2), p);
/* substitute the correct array value of f:*/
p : ratsubst(f[i,j,k], f, p);
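As a quick sanity check on the second-order claim, here is a minimal numpy sketch (not part of the Maxima workflow above; the test function sin(x) is just an arbitrary choice) that measures how the truncation error of the one-dimensional central difference stencil shrinks as the grid is refined:

import numpy as np

def second_diff(u, dx):
    # second-order central difference of a 1-D array at its interior points
    return (u[:-2] - 2.0*u[1:-1] + u[2:])/dx**2

errors = []
for n in (16, 32, 64):
    x = np.linspace(0.0, 1.0, n + 1)
    dx = x[1] - x[0]
    u = np.sin(x)
    exact = -np.sin(x[1:-1])    # d2/dx2 of sin(x) is -sin(x)
    errors.append(np.max(np.abs(second_diff(u, dx) - exact)))

# each halving of dx should cut the maximum error by roughly a factor of four
print([errors[i]/errors[i+1] for i in range(len(errors) - 1)])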
The difference expressions above assume that the solution u and the forcing function f are stored in three-dimensional arrays; if not, the indexing in the difference expressions becomes more complicated. If we want to apply a Gauss-Seidel method to solve our elliptic problem, we just need to solve p for u[i,j,k] and then dump that expression to Fortran format.

gs : solve(p, u[i,j,k]);
del : part(gs[1],2) - u[i,j,k];
/* output for fortran: */
with_stdout("gs.f90", f90(d = expand(del)));
This gives the expression for the update to the solution at each iteration of the solver.
Fortran
The Maxima expressions developed above need control flow added so they get applied properly to our solution array.

do k = 1, nz
  do j = 1, ny
    do i = 1, nx
      d = (dy**2*dz**2*u(i+1,j,k)+dx**2*dz**2*u(i,j+1,k)+dx**2*dy**2* &
          u(i,j,k+1)+dx**2*dy**2*dz**2*f(i,j,k)+dx**2*dy**2* &
          u(i,j,k-1)+dx**2*dz**2*u(i,j-1,k)+dy**2*dz**2*u(i-1,j,k))/ &
          ((2*dy**2+2*dx**2)*dz**2+2*dx**2*dy**2)-u(i,j,k)
      u(i,j,k) = u(i,j,k) + w*d
    end do
  end do
end do
The expression for the update was generated from our governing equations in Maxima. One of the things to notice is the w that multiplies our update; this is so we can do successive over-relaxation. Also, we'll go ahead and package several of these iterative schemes into a Fortran module:

module iter_schemes
  implicit none
contains
  ... bunches of related subroutines and functions ...
end module iter_schemes

Python
Now we want to be able to call our Fortran solution schemes from Python. Using f2py makes this quite simple:

$ f2py -m iter_schemes -c iter_schemes.f90

This should result in a Python module iter_schemes.so that can be imported just as any other module.

import numpy
from iter_schemes import *

... do some cool stuff with our new module in Python ...
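Once the module is compiled, the high-level driver stays short. The sketch below is only illustrative: the subroutine name sor and its argument list are assumptions about what was packaged into iter_schemes, not something f2py provides by itself:

import numpy
from iter_schemes import sor   # hypothetical name; use whatever subroutine the module actually exports

nx = ny = nz = 50
dx = dy = dz = 1.0/(nx - 1)
u = numpy.zeros((nx, ny, nz), order='F')   # Fortran memory order avoids copies at the f2py boundary
f = numpy.ones((nx, ny, nz), order='F')

for sweep in range(200):
    # assumed interface: updates u in place with relaxation factor w = 1.5
    sor(u, f, dx, dy, dz, 1.5)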
Conclusion
This approach might seem like overkill for the fairly simple scalar equation we used, but think about the complicated update expressions that can be generated for large vector partial differential equations like the Navier-Stokes or Maxwell's equations. Having these expressions automatically generated and ready to be wrapped in loops can save lots of initial development and subsequent debugging time. It also makes generating unit tests easier, and so encourages good development practice. But does it work? Of course: here's the convergence of the Gauss-Seidel and SOR schemes discussed above, along with a Jacobi scheme and a conjugate gradient scheme, applied to a 50 by 50 by 50 grid with a manufactured solution. The plot was generated with the Python module matplotlib.
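For reference, a manufactured solution for this problem can be set up along the following lines; the particular exact solution below is just an illustrative choice, not necessarily the one behind the plot:

import numpy as np

nx = ny = nz = 50
x = np.linspace(0.0, 1.0, nx)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# choose an exact solution that vanishes on the boundary of the unit cube
u_exact = np.sin(np.pi*X)*np.sin(np.pi*Y)*np.sin(np.pi*Z)

# for u_xx + u_yy + u_zz = -f this choice gives f = 3*pi**2*u_exact
f = 3.0*np.pi**2*u_exact

# run any of the iterative schemes on f with zero boundary values and compare
# the converged result to u_exact for a discretization-error estimate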