Hi Leon,
The phenomenon you're observing, where seemingly identical operations result in tiny precision differences, can indeed stem from the parallel execution of your code in a parfor loop, among other factors. Here are some points to consider that might explain the behavior:
1. Floating-Point Arithmetic and Parallelism
Floating-point arithmetic in computers does not always behave in the way we might intuitively expect, especially under parallel computation scenarios. This is due to several factors:
- Non-Associativity of Floating-Point Operations: The result of floating-point arithmetic (addition in particular) depends on the order in which the operations are performed, because every intermediate result is rounded. In a parallel computing environment that order can change from run to run, for example because partial results are combined in whatever order the workers happen to finish, which leads to slight discrepancies in results (see the short example after this list).
- Differences in Intermediate Precision: Different workers can also take different code paths for the same computation, for instance different vectorization (SSE vs. AVX), fused multiply-add instructions, or multithreaded library routines with their own internal summation order. These paths keep intermediate values at slightly different precision or combine them differently, which again produces tiny differences in the final results.
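As a quick illustration of the ordering effect (plain MATLAB, no toolboxes needed; the values are arbitrary):
a = 0.1; b = 0.2; c = 0.3;
(a + b) + c == a + (b + c)        % logical 0 (false): the two groupings round differently
((a + b) + c) - (a + (b + c))     % about 1.1102e-16, i.e. one unit in the last place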
2. MATLAB's Parallel Computing Toolbox
When using MATLAB's Parallel Computing Toolbox with a parfor loop, each iteration is executed independently across the available workers in your computing pool. Although each worker is supposed to perform the same operations, the non-deterministic nature of parallel execution can lead to the discrepancies you've observed, especially with floating-point computations.
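Here is a minimal sketch of how this can show up, assuming the Parallel Computing Toolbox is installed and a pool is open; x is just placeholder data standing in for your own computation:
x = rand(1e6, 1);
s_serial = sum(x);                        % one fixed (serial) summation order
s_parallel = 0;
parfor i = 1:numel(x)
    s_parallel = s_parallel + x(i);       % reduction variable: per-worker partial sums, combined in an unspecified order
end
abs(s_serial - s_parallel)                % usually a tiny nonzero value rather than exactly 0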
3. Implications for Your Work
The differences you're seeing are on the order of 1e-15, which is down at the level of double-precision round-off and far smaller than the precision most applications require. However, when you compare floating-point numbers for equality or search for a maximum value, even these tiny differences can change which element wins.
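To put that magnitude in context, assuming your accuracy values sit somewhere around 1 (which I'm guessing from the variable name):
eps(1)       % 2.2204e-16: spacing between adjacent doubles near 1
eps(0.95)    % 1.1102e-16: so a 1e-15 gap is only a handful of representable steps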
4. Possible Workarounds
- Rounding: For comparison purposes, you might consider rounding your accuracy values to a fixed number of decimal places that makes sense for your application, so that noise at the 1e-15 level disappears before you compare.
accuracies_rounded = round(accuracies, 10);   % 10 decimal places: coarse enough to absorb 1e-15 noise, fine enough for accuracy values
- Using a Tolerance for Comparisons: Instead of looking for exact matches, compare floating-point numbers within a tolerance. You're already doing something along these lines with find(abs(accuracies - max_val) < 1e-10); choosing a tolerance comfortably larger than the 1e-15 noise, but smaller than any difference you actually care about, keeps the precision issues out of your results.
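For example, sticking with the variable names from your post (tol is a value you would choose; 1e-10 is just the one you already use):
tol = 1e-10;
max_val = max(accuracies);
best_idx = find(abs(accuracies - max_val) < tol);      % all entries "tied" for the maximum within tol
best_idx = find(ismembertol(accuracies, max_val, tol)); % similar effect; note ismembertol uses a relative (scaled) tolerance by default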
- Analyzing Results with Care: When dealing with floating-point arithmetic, especially in parallel computing environments, always consider the possibility of such tiny discrepancies. Design your algorithms and result analysis to be robust against these minor differences.
In summary, what you're experiencing is a common aspect of floating-point arithmetic in parallel computing environments. Adjusting your approach to comparison and result analysis to account for these nuances will be key in managing the impact on your work.