A common issue with linear algebra textbooks is the depiction of $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$ as the solution to the linear system of equations $\mathbf{A}\mathbf{x} = \mathbf{b}$. Find the inverse of the matrix $\mathbf{A}$, then multiply that inverse with the right-hand side vector, $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$.
Theoretically correct? Yes. Practical? No. Numerical linear algebraicists? They scream.
The practical approach to find $\mathbf{x}$ is direct solution by Gauss elimination (LU decomposition). In MATLAB speak, this operation is $\mathbf{x} = \mathbf{A} \backslash \mathbf{b}$, a notation that I will adopt in the equation environments of my lecture notes.
Not only is inverting $\mathbf{A}$ and multiplying by $\mathbf{b}$ more expensive than direct solution for $\mathbf{x}$, it is also less accurate. Google something like “reasons to not invert a matrix” and you’ll find plenty of explanations, such as this one.
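To put rough numbers on “more expensive” (standard flop counts, quoted from memory, so treat them as approximate): for a dense $n \times n$ system,

$\text{LU factorization + forward/back substitution} \approx \tfrac{2}{3}n^3 + 2n^2$ flops,

$\text{forming } \mathbf{A}^{-1}\text{, then computing } \mathbf{A}^{-1}\mathbf{b} \approx 2n^3 + 2n^2$ flops,

so the invert-and-multiply route does roughly three times the work before accuracy even enters the picture.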
There are a few element and material models in OpenSees that use the “by the book” inversion-and-multiplication approach. Usually, the offending operations appear in a local Newton loop where the Matrix::Invert() method is called in preparation for multiplication on the next line.
Matrix A(4,4);    // Jacobian
Matrix Ainv(4,4); // Inverse of Jacobian
Vector x(4);      // Unknowns
Vector dx(4);     // Newton increment
Vector b(4);      // Right-hand side

double tol = 1.0e-8; // Convergence tolerance

// Form initial b

// Solve using Newton's method
while (b.Norm() > tol) {
  //
  // Form A based on current x
  //
  A.Invert(Ainv); // DON'T DO THIS
  dx = Ainv*b;
  x -= dx;
  //
  // Update b based on current x
  //
}
The use of the overloaded * operator to get dx is an offense that I’m willing to overlook.
What you should do instead is call the Matrix::Solve() method to get dx directly.
A.Solve(b,dx);
x -= dx;
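If you want to see the accuracy difference for yourself, here is a minimal sketch of a test harness. It is not OpenSees source; the function name compareSolves is my own, and the only API it assumes is the Matrix/Vector interface already shown in this post (Invert, Solve, Norm, and the overloaded operators).

#include <Matrix.h>
#include <Vector.h>
#include <OPS_Globals.h> // opserr, endln

// Compare "invert then multiply" against a direct solve of the same system
void compareSolves(Matrix &A, const Vector &b)
{
  int n = b.Size();

  // "By the book": invert, then multiply
  Matrix Ainv(n,n);
  A.Invert(Ainv);
  Vector xInv = Ainv*b;

  // Direct solution by LU factorization
  Vector xSolve(n);
  A.Solve(b, xSolve);

  // Residual norms ||A*x - b||; the direct solve should be as good or better
  opserr << "invert+multiply residual: " << (A*xInv - b).Norm() << endln;
  opserr << "direct solve residual:    " << (A*xSolve - b).Norm() << endln;
}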
You may say the matrices at the element and material level are pretty small, so it’s not a big deal. But all the unnecessary computations add up.
Also, don’t get me wrong, there are some instances where computing and storing the inverse is necessary within element and material models in OpenSees, e.g., in the mixedBeamColumn element. In this formulation, a matrix inverse is used a few times across multiple operations. Ideally, the Matrix class would store the factors from direct solution so that one could call Matrix::Solve() multiple times, with only the forward and back substitution repeated after the first call; a rough sketch of the idea is below.
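For illustration only, here is roughly what caching the factors could look like. None of this exists in OpenSees; the class name FactoredMatrix, its layout, and the direct LAPACK calls (dgetrf/dgetrs) are my own assumptions about how it might be done.

#include <vector>

extern "C" {
  // Standard Fortran LAPACK routines: LU factorization and factored solve
  void dgetrf_(int *m, int *n, double *a, int *lda, int *ipiv, int *info);
  void dgetrs_(char *trans, int *n, int *nrhs, double *a, int *lda,
               int *ipiv, double *b, int *ldb, int *info);
}

class FactoredMatrix {
public:
  // Factor A (n x n, column-major) once with dgetrf
  FactoredMatrix(const double *A, int n)
    : n_(n), lu_(A, A + n*n), ipiv_(n), info_(0)
  {
    dgetrf_(&n_, &n_, lu_.data(), &n_, ipiv_.data(), &info_);
  }

  bool ok() const { return info_ == 0; }

  // Each call reuses the stored factors, i.e., only forward/back substitution
  int solve(const double *b, double *x)
  {
    std::vector<double> rhs(b, b + n_);
    char trans = 'N';
    int nrhs = 1, info = 0;
    dgetrs_(&trans, &n_, &nrhs, lu_.data(), &n_, ipiv_.data(),
            rhs.data(), &n_, &info);
    for (int i = 0; i < n_; i++)
      x[i] = rhs[i];
    return info;
  }

private:
  int n_;                  // system size
  std::vector<double> lu_; // LU factors (dgetrf overwrites the copy of A)
  std::vector<int> ipiv_;  // pivot indices from dgetrf
  int info_;               // factorization status
};

With something like this, the O(n^3) factorization happens once and each subsequent solve costs only the O(n^2) substitutions.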
But that’s a PR for another day. And Frank doesn’t like it when people mess with the Matrix class.
Not inverting the matrix is at the core of numerical linear algebra. The standard approaches are either direct methods (LU, Cholesky factorisation, etc.) or iterative methods (gradient, conjugate gradient, GMRES). The last is the preferred method for very large systems. An example of conjugate gradient is here: https://chet-aero.com/2018/12/03/numerical-integration-of-a-function-in-a-two-dimensional-space-and-its-solution-with-conjugate-gradient/
Yes, certainly. For large systems at the global level you use a direct or iterative method. Although not stated, the post was more about the small systems solved within a local element or material routine where you may think “Oh, it’s no big deal because it’s only 4×4” or 5×5 or whatever.
I spent several years with CFD types and GMRES was their method of choice. It’s interesting to note, however, that the “leading” book on FEA in geotechnical engineering discouraged using iterative methods because of the nature of the nonlinearity and the need for multiple iterations. I used Cholesky factorisation in my own research.