Memory usage of decomposition
Answers (1)
Hi @Mariano,
When dealing with large sparse matrices in numerical computing, especially in environments like MATLAB, it is crucial to understand both the computational efficiency and memory implications of various algorithms. Here’s a detailed breakdown of the situation:
1. Performance Comparison: The two methods being compared are:
Method 1: using `dA = decomposition(A,'chol','upper'); x_i = dA\b_i;`
Method 2: using the classical Cholesky factorization `R = chol(A); x_i = R\(R'\b_i);`
In your experiments, Method 1 is faster for repeated solves against multiple right-hand sides b_i. This is likely because the `decomposition` object dispatches each solve directly to its stored factorization, whereas every `R\(R'\b_i)` call performs two separate triangular backslash solves, each of which must first analyze its operand to detect the triangular structure.
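For illustration, here is a minimal timing sketch of the two methods (the Laplacian test matrix and the 50 right-hand sides are stand-ins for your actual data):

```
% Sample sparse SPD matrix: 2-D Laplacian on a 140x140 interior grid
A = delsq(numgrid('S', 142));      % 19600 x 19600, symmetric positive definite
n = size(A, 1);
B = rand(n, 50);                   % 50 right-hand sides b_i, stored as columns

% Method 1: decomposition object, factored once and reused
tic
dA = decomposition(A, 'chol', 'upper');
for k = 1:size(B, 2)
    x = dA \ B(:, k);
end
toc

% Method 2: classical Cholesky factor, also computed once and reused
tic
R = chol(A);
for k = 1:size(B, 2)
    x = R \ (R' \ B(:, k));
end
toc
```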
2. Memory Usage: You reported that `dA` occupies approximately 1.5e7 bytes while `R` only takes up 5.6e6 bytes. This difference (roughly 2.7 times more memory) can be attributed to several factors:
Internal Representation: The `decomposition` function wraps the factorization in an internal class (`cholmod_`, based on the CHOLMOD sparse Cholesky routines), which may include additional metadata or structures that are not present in the bare Cholesky factor. This wrapper likely maintains a reference to the original matrix, the fill-reducing permutation used by the sparse factorization, and other data needed for efficient repeated solves.
Sparse Matrix Handling: While both methods work with sparse matrices, the internal representation used by `decomposition` might lead to increased overhead due to maintaining extra structures necessary for its optimizations.
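You can inspect the difference directly with `whos`, which reports the bytes used by each variable:

```
dA = decomposition(A, 'chol', 'upper');
R  = chol(A);
whos dA R                          % compare the Bytes column of the two
```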
3. Implications on Large Scale Problems:
As you noted, working with large matrices close to system RAM limits can lead to performance degradation if excessive paging occurs. If MATLAB begins to use virtual memory (swapping data to disk), it can severely impact performance despite any gains from faster algorithms.
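On Windows you can check how close you are to that limit with the `memory` function (it is not available on Linux or macOS):

```
[userview, systemview] = memory;   % Windows only
fprintf('Largest possible array: %.1f GB\n', ...
        userview.MaxPossibleArrayBytes / 2^30);
fprintf('Physical memory available: %.1f GB\n', ...
        systemview.PhysicalMemory.Available / 2^30);
```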
Given your constraints and requirements, here are some strategies you could consider:
Stick with Classical Cholesky: If memory usage is a critical concern and you have confirmed that classical Cholesky is sufficiently fast for your needs, it may be prudent to continue using this method despite its longer execution time per solve.
Optimize Memory Usage: Investigate whether you can reduce the size of your matrices or break the problem into smaller blocks if applicable, for example by solving the right-hand sides block by block (see the sketch after this list). Ensure that other processes consuming RAM are minimized during your computations.
Upgrade Hardware: As suggested by Walter Roberson, installing a fast SSD could help alleviate some issues related to virtual memory usage by improving data access speeds when paging occurs.
Parallel Computing: If applicable, consider using MATLAB's Parallel Computing Toolbox to distribute computations across multiple cores or even machines, which can help mitigate some of the RAM limitations by processing chunks of data independently (a `parfor` sketch follows after this list).
Profiling and Debugging: Use MATLAB's profiling tools (`profile on`, `profile viewer`) to analyze where most of the execution time is spent; together with `whos`, this can also reveal where memory is being consumed and may provide insights into optimizing your code further (see the example after this list).
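A sketch of the block-wise idea mentioned above, reusing `A` and `B` from the timing example (the block size of 100 is arbitrary):

```
% Solve the right-hand sides in column blocks so that only one block of
% solutions is held in memory at a time
R   = chol(A);
blk = 100;
for j = 1:blk:size(B, 2)
    cols = j:min(j + blk - 1, size(B, 2));
    X = R \ (R' \ B(:, cols));     % backslash solves all columns at once
    % ... use or write X to disk before the next block overwrites it ...
end
```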
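And a rough `parfor` sketch (requires the Parallel Computing Toolbox; note that `R` is broadcast to every worker, so this trades extra memory per worker for wall-clock time):

```
R = chol(A);
X = zeros(size(B));
parfor k = 1:size(B, 2)
    X(:, k) = R \ (R' \ B(:, k)); % each iteration solves one column
end
```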
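Finally, a minimal profiling run; the profiler reports per-line execution times, which you can combine with `whos` to see where memory goes:

```
profile on
for k = 1:size(B, 2)
    x = dA \ B(:, k);              % the code under investigation
end
profile viewer                     % opens the per-line timing report
```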
Hope this helps.