CGLinearSolver
The `CGLinearSolver` is a linear system solver in the SOFA framework that uses the Conjugate Gradient (CG) iterative algorithm to solve systems of equations represented by matrices and vectors. It is part of the Sofa.Component.LinearSolver.Iterative module and implements an iterative method specifically designed for symmetric positive-definite matrices, which are common in finite element analysis and other simulation scenarios.

This component interacts with other SOFA components through its API by providing methods to initialize (`init`), reinitialize (`reinit`), and solve linear systems (`solve`). The `solve` method iteratively solves the equation Ax=b using CG descent. It requires template parameters for matrix (TMatrix) and vector (TVector) types, which can include full matrices, sparse matrices, or scattered graph structures.

Users configure `CGLinearSolver` via several data fields:

- `d_maxIter`: Maximum number of iterations before the solver stops, regardless of convergence.
- `d_tolerance`: Desired accuracy level based on the ratio of the current residual norm over the initial residual norm (|r|^2 / |b|^2).
- `d_smallDenominatorThreshold`: Minimum value for the denominator in CG calculations, used to avoid division by near-zero values.
- `d_warmStart`: Option to use the previous solution as an initial guess, which can improve performance if the system changes smoothly over time.

The solver also supports logging and visualization of residuals through a graph data field (`d_graph`), providing insight into the convergence process.
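In a SOFA scene file, the component is typically placed next to an implicit ODE solver and configured through the XML attribute names of the data fields above (`iterations`, `tolerance`, `threshold`, `warmStart`). The snippet below is a hedged sketch: the numeric values are illustrative examples, not documented defaults.

```xml
<!-- Illustrative configuration; the numeric values are examples, not defaults -->
<Node name="simulation">
    <EulerImplicitSolver name="odeSolver"/>
    <CGLinearSolver name="linearSolver"
                    iterations="25"
                    tolerance="1e-9"
                    threshold="1e-9"
                    warmStart="false"/>
</Node>
```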
- abstract
- The `CGLinearSolver` is a linear system solver that uses the Conjugate Gradient iterative method to solve linear systems with symmetric positive-definite matrices in SOFA simulations, with configurable parameters for iteration limits, tolerance, and warm start options.
- sheet
- # CGLinearSolver

## Overview
The `CGLinearSolver` component implements an iterative method using the Conjugate Gradient (CG) algorithm to solve linear systems of equations represented by symmetric positive-definite matrices. It is part of the Sofa.Component.LinearSolver.Iterative module and interacts with other SOFA components through its API, providing methods for initialization (`init`), reinitialization (`reinit`), and solving linear systems (`solve`).

## Mathematical Model
The Conjugate Gradient (CG) algorithm minimizes the quadratic function:
\[ f(x) = \frac{1}{2} x^T A x - b^T x + c \]
subject to $x \in \mathbb{R}^n$. The CG algorithm proceeds as follows:

1. **Initialization**: Start with an initial guess for the solution vector, typically $x_0 = 0$ or some other feasible point. Compute the residual:
   \[ r_0 = b - Ax_0 \]
2. **Conjugate Direction Calculation**: The search direction is chosen to be conjugate to the previous search directions.
3. **Step Size Determination**: Calculate the step size $\alpha_k$ using:
   \[ \alpha_k = \frac{r_k^T r_k}{p_k^T A p_k} \]
4. **Solution Update**: The solution vector and residual are updated as follows:
   \[ x_{k+1} = x_k + \alpha_k p_k \]
   \[ r_{k+1} = r_k - \alpha_k A p_k \]
5. **Conjugate Direction Update**: Compute the next conjugate direction using:
   \[ p_{k+1} = r_{k+1} + \beta_k p_k \]
   where $\beta_k$ is given by:
   \[ \beta_k = \frac{r_{k+1}^T r_{k+1}}{r_k^T r_k} \]

### Stopping Criteria
The algorithm stops when one of the following conditions is met:
- The maximum number of iterations (`d_maxIter`) is reached.
- The relative residual error, defined as $\frac{|r|^2}{|b|^2}$, falls below a specified tolerance level (`d_tolerance`).
- The denominator in the computation of $\alpha$ (i.e., $p_k^T A p_k$) becomes smaller than a threshold value (`d_smallDenominatorThreshold`), indicating potential numerical instability.

### Warm Start Option
The `d_warmStart` option allows the solver to use the previous solution as an initial guess for subsequent runs, which can accelerate convergence if the system changes smoothly over time.

## Parameters and Data
- **iterations (`d_maxIter`)**: Maximum number of iterations after which the iterative descent of the Conjugate Gradient must stop. Type: `unsigned int`, Default: Not specified.
- **tolerance (`d_tolerance`)**: Desired accuracy of the Conjugate Gradient solution, evaluating $\frac{|r|^2}{|b|^2}$. Type: `Real`, Default: Not specified.
- **threshold (`d_smallDenominatorThreshold`)**: Minimum value of the denominator $p^T A p$ in the Conjugate Gradient solution. Type: `Real`, Default: Not specified.
- **warmStart (`d_warmStart`)**: Use the previous solution as the initial solution, which may improve the initial guess if your system is evolving smoothly. Type: `bool`, Default: Not specified.
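The update rules and stopping criteria above can be sketched in a few lines of NumPy. This is a minimal illustration, not SOFA's implementation; the parameter names merely mirror the component's data fields.

```python
import numpy as np

def cg_solve(A, b, x0=None, max_iter=25, tolerance=1e-9, threshold=1e-30):
    """Plain Conjugate Gradient for a symmetric positive-definite A.

    max_iter ~ d_maxIter, tolerance ~ d_tolerance (bounds |r|^2/|b|^2),
    threshold ~ d_smallDenominatorThreshold (guards p^T A p).
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                      # r_0 = b - A x_0
    p = r.copy()                       # first search direction
    b_norm2 = b @ b
    for _ in range(max_iter):          # iteration cap
        rr = r @ r
        if b_norm2 > 0 and rr / b_norm2 < tolerance:
            break                      # relative residual small enough
        q = A @ p
        den = p @ q                    # p^T A p
        if den < threshold:
            break                      # near-zero denominator: stop for safety
        alpha = rr / den               # step size
        x += alpha * p                 # x_{k+1} = x_k + alpha p_k
        r -= alpha * q                 # r_{k+1} = r_k - alpha A p_k
        beta = (r @ r) / rr
        p = r + beta * p               # next conjugate direction
    return x

# Tiny SPD example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg_solve(A, b)
```

On this 2x2 system CG reaches the exact solution after two updates, which is the general behavior for an $n \times n$ SPD matrix in at most $n$ steps (in exact arithmetic).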
- description
- The `CGLinearSolver` is a linear system solver in the SOFA framework that uses the Conjugate Gradient (CG) iterative algorithm to solve systems of equations represented by matrices and vectors. It is part of the Sofa.Component.LinearSolver.Iterative module and implements an iterative method specifically designed for symmetric positive-definite matrices, which are common in finite element analysis and other simulation scenarios.

This component interacts with other SOFA components through its API by providing methods to initialize (`init`), reinitialize (`reinit`), and solve linear systems (`solve`). The `solve` method iteratively solves the equation Ax=b using CG descent. It requires template parameters for matrix (TMatrix) and vector (TVector) types, which can include full matrices, sparse matrices, or scattered graph structures.

Users configure `CGLinearSolver` via several data fields:

- `d_maxIter`: Maximum number of iterations before the solver stops, regardless of convergence.
- `d_tolerance`: Desired accuracy level based on the ratio of the current residual norm over the initial residual norm (|r|^2 / |b|^2).
- `d_smallDenominatorThreshold`: Minimum value for the denominator in CG calculations, used to avoid division by near-zero values.
- `d_warmStart`: Option to use the previous solution as an initial guess, which can improve performance if the system changes smoothly over time.

The solver also supports logging and visualization of residuals through a graph data field (`d_graph`), providing insight into the convergence process.
- maths
- # Mathematical Description of the CGLinearSolver Component

## Introduction
The `CGLinearSolver` is an iterative method for solving linear systems of equations, specifically designed for symmetric positive-definite matrices. The Conjugate Gradient (CG) algorithm is used to iteratively find solutions to the system $Ax = b$, where $A$ is a matrix, and $x$ and $b$ are vectors. This component is part of the SOFA framework, which is widely used in simulation scenarios such as finite element analysis.

## Mathematical Background
### System of Equations
The primary goal is to solve the system of linear equations:
\[ Ax = b \]
where:
- $A$ is a symmetric positive-definite matrix,
- $x$ is the vector of unknowns (solution),
- $b$ is the right-hand side vector.

### Conjugate Gradient Algorithm
The CG algorithm minimizes the quadratic function:
\[ f(x) = \frac{1}{2} x^T A x - b^T x + c \]
subject to $x \in \mathbb{R}^n$. The minimization process involves finding a sequence of conjugate directions and updating the solution iteratively.

### Iterative Process
The CG algorithm proceeds as follows:
1. **Initialization**: Start with an initial guess for the solution vector, typically $x_0 = 0$ or some other feasible point. Compute the residual:
   \[ r_0 = b - Ax_0 \]
2. **Conjugate Direction Calculation**: The search direction is chosen to be conjugate to the previous search directions.
3. **Step Size Determination**: Calculate the step size $\alpha_k$ using:
   \[ \alpha_k = \frac{r_k^T r_k}{p_k^T A p_k} \]
   where $p_k$ is the current search direction.
4. **Solution Update**: The solution vector and residual are updated as follows:
   \[ x_{k+1} = x_k + \alpha_k p_k \]
   \[ r_{k+1} = r_k - \alpha_k A p_k \]
5. **Conjugate Direction Update**: Compute the next conjugate direction using:
   \[ p_{k+1} = r_{k+1} + \beta_k p_k \]
   where $\beta_k$ is given by:
   \[ \beta_k = \frac{r_{k+1}^T r_{k+1}}{r_k^T r_k} \]

### Stopping Criteria
The algorithm stops when one of the following conditions is met:
- The maximum number of iterations (`d_maxIter`) is reached.
- The relative residual error, defined as $\frac{|r|^2}{|b|^2}$, falls below a specified tolerance level (`d_tolerance`).
- The denominator in the computation of $\alpha$ (i.e., $p_k^T A p_k$) becomes smaller than a threshold value (`d_smallDenominatorThreshold`), indicating potential numerical instability.

### Warm Start Option
The `d_warmStart` option allows the solver to use previous solutions as an initial guess for subsequent runs, which can accelerate convergence if the system changes smoothly over time.

## Physical Interpretation
In simulation contexts such as finite element analysis, the matrix $A$ often represents a discretized physical operator (e.g., stiffness or mass matrices), and $b$ typically contains force vectors or other boundary conditions. The solution vector $x$ then corresponds to the nodal displacements in a mechanical system, the state variables in a fluid dynamics simulation, etc.

### Visualization and Logging
The `d_graph` data field provides visualization of residuals over iterations, offering insight into the convergence process. This can be particularly useful for diagnosing issues or optimizing solver parameters.

## Summary
In summary, the `CGLinearSolver` implements an efficient iterative method to solve symmetric positive-definite systems, providing configurable stopping criteria and optional warm start capabilities. It is well-suited for simulations where such matrices naturally arise.
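The benefit of the warm start option can be illustrated with a small NumPy experiment (a hypothetical `cg` helper, not SOFA code): after a smooth perturbation of the right-hand side, restarting from the previous solution typically needs fewer iterations than restarting from zero, because the initial residual is already small.

```python
import numpy as np

def cg(A, b, x0, max_iter=100, tol=1e-10):
    """CG returning (solution, iterations used); tol bounds |r|^2 / |b|^2."""
    x = np.array(x0, dtype=float)
    r = b - A @ x
    p = r.copy()
    b2 = b @ b
    for k in range(max_iter):
        rr = r @ r
        if rr / b2 < tol:
            return x, k                  # converged after k updates
        q = A @ p
        alpha = rr / (p @ q)
        x += alpha * p
        r -= alpha * q
        p = r + ((r @ r) / rr) * p
    return x, max_iter

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30.0 * np.eye(30)          # symmetric positive-definite
b = rng.standard_normal(30)

x_prev, _ = cg(A, b, np.zeros(30))       # solve the original system
b_new = b + 1e-3 * rng.standard_normal(30)  # system evolves smoothly
x_warm, it_warm = cg(A, b_new, x_prev)   # warm start: previous solution
x_cold, it_cold = cg(A, b_new, np.zeros(30))  # cold start: zero vector
```

With this setup the warm-started run begins with a residual roughly three orders of magnitude smaller than the cold start, so it converges in noticeably fewer iterations.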
{
"name": "CGLinearSolver",
"main": {
"name": "CGLinearSolver",
"namespace": "sofa::component::linearsolver::iterative",
"module": "Sofa.Component.LinearSolver.Iterative",
"include": "sofa/component/linearsolver/iterative/CGLinearSolver.h",
"doc": "Linear system solver using the conjugate gradient iterative algorithm",
"inherits": [],
"templates": [
"FullMatrix<SReal>, FullVector<SReal>",
"GraphScatteredMatrix, GraphScatteredVector",
"SparseMatrix<SReal>, FullVector<SReal>"
],
"data_fields": [
{
"name": "d_maxIter",
"type": "unsigned int",
"xmlname": "iterations",
"help": "Maximum number of iterations after which the iterative descent of the Conjugate Gradient must stop"
},
{
"name": "d_tolerance",
"type": "Real",
"xmlname": "tolerance",
"help": "Desired accuracy of the Conjugate Gradient solution evaluating: |r|²/|b|² (ratio of current residual norm over initial residual norm)"
},
{
"name": "d_smallDenominatorThreshold",
"type": "Real",
"xmlname": "threshold",
"help": "Minimum value of the denominator (p^T A p) in the Conjugate Gradient solution"
},
{
"name": "d_warmStart",
"type": "bool",
"xmlname": "warmStart",
"help": "Use previous solution as initial solution, which may improve the initial guess if your system is evolving smoothly"
}
],
"links": [],
"methods": [
{
"name": "cgstep_beta",
"return_type": "void",
"params": [
{
"name": "params",
"type": "const core::ExecParams *"
},
{
"name": "p",
"type": "Vector &"
},
{
"name": "r",
"type": "Vector &"
},
{
"name": "beta",
"type": "Real"
}
],
"is_virtual": false,
"is_pure_virtual": false,
"is_static": false,
"access": "protected"
},
{
"name": "cgstep_alpha",
"return_type": "void",
"params": [
{
"name": "params",
"type": "const core::ExecParams *"
},
{
"name": "x",
"type": "Vector &"
},
{
"name": "r",
"type": "Vector &"
},
{
"name": "p",
"type": "Vector &"
},
{
"name": "q",
"type": "Vector &"
},
{
"name": "alpha",
"type": "Real"
}
],
"is_virtual": false,
"is_pure_virtual": false,
"is_static": false,
"access": "protected"
},
{
"name": "init",
"return_type": "void",
"params": [],
"is_virtual": false,
"is_pure_virtual": false,
"is_static": false,
"access": "public"
},
{
"name": "reinit",
"return_type": "void",
"params": [],
"is_virtual": false,
"is_pure_virtual": false,
"is_static": false,
"access": "public"
},
{
"name": "solve",
"return_type": "void",
"params": [
{
"name": "A",
"type": "Matrix &"
},
{
"name": "x",
"type": "Vector &"
},
{
"name": "b",
"type": "Vector &"
}
],
"is_virtual": false,
"is_pure_virtual": false,
"is_static": false,
"access": "public"
}
]
},
"desc": {
"description": "The `CGLinearSolver` is a linear system solver in the SOFA framework that uses the Conjugate Gradient (CG) iterative algorithm to solve systems of equations represented by matrices and vectors. It is part of the Sofa.Component.LinearSolver.Iterative module and implements an iterative method specifically designed for symmetric positive-definite matrices, which are common in finite element analysis and other simulation scenarios.\n\nThis component interacts with other SOFA components through its API by providing methods to initialize (`init`), reinitialize (`reinit`), and solve linear systems (`solve`). The `solve` method iteratively solves the equation Ax=b using CG descent. It requires template parameters for matrix (TMatrix) and vector (TVector) types, which can include full matrices, sparse matrices, or scattered graph structures.\n\nUsers configure `CGLinearSolver` via several data fields:\n- `d_maxIter`: Maximum number of iterations before the solver stops, regardless of convergence.\n- `d_tolerance`: Desired accuracy level based on the ratio of the current residual norm over the initial residual norm (|r|^2 / |b|^2).\n- `d_smallDenominatorThreshold`: Minimum value for the denominator in CG calculations, used to avoid division by near-zero values.\n- `d_warmStart`: Option to use the previous solution as an initial guess, which can improve performance if the system changes smoothly over time.\n\nThe solver also supports logging and visualization of residuals through a graph data field (`d_graph`), providing insight into the convergence process."
},
"maths": {
"maths": "# Mathematical Description of the CGLinearSolver Component\n\n## Introduction\nThe `CGLinearSolver` is an iterative method for solving linear systems of equations, specifically designed for symmetric positive-definite matrices. The Conjugate Gradient (CG) algorithm is used to iteratively find solutions to the system $Ax = b$, where $A$ is a matrix, and $x$ and $b$ are vectors. This component is part of the SOFA framework, which is widely used in simulation scenarios such as finite element analysis.\n\n## Mathematical Background\n### System of Equations\nThe primary goal is to solve the system of linear equations:\n\\[ Ax = b \\]\nwhere:\n- $A$ is a symmetric positive-definite matrix,\n- $x$ is the vector of unknowns (solution),\n- $b$ is the right-hand side vector.\n\n### Conjugate Gradient Algorithm\nThe CG algorithm minimizes the quadratic function:\n\\[ f(x) = \\frac{1}{2} x^T A x - b^T x + c \\]\nsubject to $x \\in \\mathbb{R}^n$. The minimization process involves finding a sequence of conjugate directions and updating the solution iteratively.\n\n### Iterative Process\nThe CG algorithm proceeds as follows:\n1. **Initialization**: Start with an initial guess for the solution vector, typically $x_0 = 0$ or some other feasible point. Compute the residual:\n \\[ r_0 = b - Ax_0 \\]\n2. **Conjugate Direction Calculation**:\n The search direction is chosen to be conjugate to the previous search directions.\n3. **Step Size Determination**: Calculate the step size $\\alpha_k$ using:\n \\[ \\alpha_k = \\frac{r_k^T r_k}{p_k^T A p_k} \\]\n where $p_k$ is the current search direction.\n4. **Solution Update**:\n The solution vector and residual are updated as follows:\n \\[ x_{k+1} = x_k + \\alpha_k p_k \\]\n \\[ r_{k+1} = r_k - \\alpha_k A p_k \\]\n5. **Conjugate Direction Update**: Compute the next conjugate direction using:\n \\[ p_{k+1} = r_{k+1} + \\beta_k p_k \\]\n where $\\beta_k$ is given by:\n \\[ \\beta_k = \\frac{r_{k+1}^T r_{k+1}}{r_k^T r_k} \\]\n\n### Stopping Criteria\nThe algorithm stops when one of the following conditions is met:\n- The maximum number of iterations (`d_maxIter`) is reached.\n- The relative residual error, defined as $\\frac{|r|^2}{|b|^2}$, falls below a specified tolerance level (`d_tolerance`).\n- The denominator in the computation of $\\alpha$ (i.e., $p_k^T A p_k$) becomes smaller than a threshold value (`d_smallDenominatorThreshold`), indicating potential numerical instability.\n\n### Warm Start Option\nThe `d_warmStart` option allows the solver to use previous solutions as an initial guess for subsequent runs, which can accelerate convergence if the system changes smoothly over time.\n\n## Physical Interpretation\nIn simulation contexts such as finite element analysis, the matrix $A$ often represents a discretized physical operator (e.g., stiffness or mass matrices), and $b$ typically contains force vectors or other boundary conditions. The solution vector $x$ then corresponds to the nodal displacements in a mechanical system, the state variables in a fluid dynamics simulation, etc.\n\n### Visualization and Logging\nThe `d_graph` data field provides visualization of residuals over iterations, offering insight into the convergence process. This can be particularly useful for diagnosing issues or optimizing solver parameters.\n\n## Summary\nIn summary, the `CGLinearSolver` implements an efficient iterative method to solve symmetric positive-definite systems, providing configurable stopping criteria and optional warm start capabilities. It is well-suited for simulations where such matrices naturally arise."
},
"summary": {
"abstract": "The `CGLinearSolver` is a linear system solver that uses the Conjugate Gradient iterative method to solve linear systems with symmetric positive-definite matrices in SOFA simulations, with configurable parameters for iteration limits, tolerance, and warm start options.",
"sheet": "# CGLinearSolver\n\n## Overview\nThe `CGLinearSolver` component implements an iterative method using the Conjugate Gradient (CG) algorithm to solve linear systems of equations represented by symmetric positive-definite matrices. It is part of the Sofa.Component.LinearSolver.Iterative module and interacts with other SOFA components through its API, providing methods for initialization (`init`), reinitialization (`reinit`), and solving linear systems (`solve`).\n\n## Mathematical Model\nThe Conjugate Gradient (CG) algorithm minimizes the quadratic function:\n\\[ f(x) = \\frac{1}{2} x^T A x - b^T x + c \\]\nsubject to $x \\in \\mathbb{R}^n$. The CG algorithm proceeds as follows:\n\n1. **Initialization**: Start with an initial guess for the solution vector, typically $x_0 = 0$ or some other feasible point. Compute the residual:\n \\[ r_0 = b - Ax_0 \\]\n2. **Conjugate Direction Calculation**:\n The search direction is chosen to be conjugate to the previous search directions.\n3. **Step Size Determination**: Calculate the step size $\\alpha_k$ using:\n \\[ \\alpha_k = \\frac{r_k^T r_k}{p_k^T A p_k} \\]\n4. **Solution Update**:\n The solution vector and residual are updated as follows:\n \\[ x_{k+1} = x_k + \\alpha_k p_k \\]\n \\[ r_{k+1} = r_k - \\alpha_k A p_k \\]\n5. **Conjugate Direction Update**: Compute the next conjugate direction using:\n \\[ p_{k+1} = r_{k+1} + \\beta_k p_k \\]\n where $\\beta_k$ is given by:\n \\[ \\beta_k = \\frac{r_{k+1}^T r_{k+1}}{r_k^T r_k} \\]\n\n### Stopping Criteria\nThe algorithm stops when one of the following conditions is met:\n- The maximum number of iterations (`d_maxIter`) is reached.\n- The relative residual error, defined as $\\frac{|r|^2}{|b|^2}$, falls below a specified tolerance level (`d_tolerance`).\n- The denominator in the computation of $\\alpha$ (i.e., $p_k^T A p_k$) becomes smaller than a threshold value (`d_smallDenominatorThreshold`), indicating potential numerical instability.\n\n### Warm Start Option\nThe `d_warmStart` option allows the solver to use previous solutions as an initial guess for subsequent runs, which can accelerate convergence if the system changes smoothly over time.\n\n## Parameters and Data\n- **iterations (`d_maxIter`)**: Maximum number of iterations after which the iterative descent of the Conjugate Gradient must stop. Type: `unsigned int`, Default: Not specified.\n- **tolerance (`d_tolerance`)**: Desired accuracy of the Conjugate Gradient solution, evaluating $\\frac{|r|^2}{|b|^2}$. Type: `Real`, Default: Not specified.\n- **threshold (`d_smallDenominatorThreshold`)**: Minimum value of the denominator $(p^T A p)$ in the Conjugate Gradient solution. Type: `Real`, Default: Not specified.\n- **warmStart (`d_warmStart`)**: Use the previous solution as the initial solution, which may improve the initial guess if your system is evolving smoothly. Type: `bool`, Default: Not specified."
}
}