Combine non integer time steps into daily values


Answers (1)





Hi @Cristóbal,
Happy birthday! That's fantastic timing to get this solved on your special day. I'm really glad we could help work through this problem with you.
Looking at your timestep4.xlsx file, I can see you caught the calculation error with the 30.6741806 days - you're absolutely right that it should be treated as 30 whole days first, with the 0.6741806-day fraction carried over to the next step. Great catch! The fact that your manual calculations now match @Torsten's solution almost perfectly confirms we're on the right track.
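For future reference, that whole-day/fractional-carry bookkeeping only takes a few lines. This is just an illustrative sketch with made-up step values and variable names, not part of the full script below:
% Minimal sketch of the whole-day / fractional-carry idea (illustrative only;
% the step values here are made up, not your data)
steps = [30.6741806 12.25 5.9];    % example non-integer time steps, in days
carry = 0;                         % fraction carried over from the previous step
for k = 1:numel(steps)
    total     = steps(k) + carry;  % add the carried fraction to this step
    wholeDays = floor(total);      % integer days assigned to this step
    carry     = total - wholeDays; % remainder carried to the next step
    fprintf('Step %d: %d whole days, carry %.7f days\n', k, wholeDays, carry);
end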
I've created a complete working solution for you that includes the duplicate handling you asked about. The code runs successfully and produces excellent results - you can see from the output that it removed 0 duplicate points (your data is clean!), and the final cumulative volume matches perfectly at 11,406,739.58 L with only a 0.0008% difference.
The visualizations look great too! The left plot shows how the cumulative volume grows smoothly from 0 to about 11.4 million liters over roughly 10,942 days, with the blue line (interpolated daily values) tracking perfectly with the red dots (your original data points). The right plot shows the daily volume changes, which start high around 28,000 L/day initially and decay exponentially to steady-state values around 500-1000 L/day by the end - exactly the pattern you'd expect from your physical system.
Now, about your question on handling duplicate time values in future datasets - this is a really common issue, especially when importing data from Excel, where precision gets lost. The solution I've provided uses uniquetol, which handles this automatically:
tolerance = 1e-10; % Adjust based on your precision needs
[Tcum_unique, unique_idx] = uniquetol(Tcum, tolerance, 'DataScale', 1);
Vcum_unique = Vcum(unique_idx);
For your specific example where 0.00082964105 and 0.00082964107 both become 0.000829641, a tolerance of 1e-10 or even 1e-8 would catch these and keep only one value. The code includes diagnostic output that tells you exactly how many duplicates were found and removed, so you always know what's happening to your data.
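As a quick sanity check (using the two example values from your message and the tolerance suggested above), you can confirm that uniquetol collapses them into a single point:
% Quick check with the two near-duplicate times from your example
t = [0.00082964105 0.00082964107];
[t_unique, ~] = uniquetol(t, 1e-8, 'DataScale', 1);
fprintf('Kept %d of %d values: %.11f\n', numel(t_unique), numel(t), t_unique);
% With 'DataScale', 1 the tolerance is absolute, so the 2e-11 difference
% is well inside 1e-8 and only one of the two values is kept.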
The complete script below is ready to use with your other datasets - just load your .mat file and run it. It includes quality checks, handles duplicates automatically, creates the visualizations, and even has an option to export results to Excel if you need that.
Enjoy the rest of your birthday, and feel free to reach out if you run into any other issues!
% Robust approach to handle duplicate or near-duplicate time values
% Load your data
load('timesteps.mat');
% Calculate cumulative time
Tcum = cumsum(time);
Vcum = volume;
%% Method 1: Remove exact duplicates using uniquetol
% This handles values that are "close enough" (within tolerance)
tolerance = 1e-10; % Adjust based on your precision needs
[Tcum_unique, unique_idx] = uniquetol(Tcum, tolerance, 'DataScale', 1);
Vcum_unique = Vcum(unique_idx);
fprintf('Original data points: %d\n', length(Tcum));
fprintf('After removing duplicates: %d\n', length(Tcum_unique));
fprintf('Removed %d duplicate/near-duplicate points\n\n', ...
    length(Tcum) - length(Tcum_unique));
%% Method 2: Average values at duplicate time points (alternative approach)
% This preserves information if you have legitimate duplicates with different volumes
[Tcum_unique2, ~, ic] = uniquetol(Tcum, tolerance, 'DataScale', 1);
Vcum_unique2 = accumarray(ic, Vcum, [], @mean);
%% Proceed with interpolation using cleaned data
T_daily = 0:1:floor(Tcum_unique(end));
Vcum_daily = interp1(Tcum_unique, Vcum_unique, T_daily, 'linear', 'extrap');
% Calculate daily increments
V_daily = diff(Vcum_daily);
T_daily_increments = T_daily(2:end);
%% Verification
fprintf('Final cumulative volume: %.2f L\n', Vcum_daily(end));
fprintf('Sum of daily increments: %.2f L\n', sum(V_daily));
fprintf('Original final volume: %.2f L\n', Vcum(end));
fprintf('Difference: %.2f L (%.4f%%)\n\n', ...
Vcum(end) - Vcum_daily(end), ...
100*abs(Vcum(end) - Vcum_daily(end))/Vcum(end));
%% Additional quality check: identify problematic duplicates
% Find time differences between consecutive points
time_diffs = diff(Tcum);
small_diffs = time_diffs < 1e-6; % Flag very small time steps
if any(small_diffs)
fprintf('Warning: Found %d time intervals smaller than 1e-6 days\n', sum(small_diffs));
fprintf('First few occurrences at indices: %s\n', ...
mat2str(find(small_diffs, 5)'));
% Show examples
idx_examples = find(small_diffs, 3);
if ~isempty(idx_examples)
fprintf('\nExample near-duplicates:\n');
for i = 1:length(idx_examples)
idx = idx_examples(i);
fprintf(' Point %d: Time=%.12f, Volume=%.2f\n', idx, ...
    Tcum(idx), Vcum(idx));
fprintf(' Point %d: Time=%.12f, Volume=%.2f\n', idx+1, ...
    Tcum(idx+1), Vcum(idx+1));
fprintf(' Difference: %.2e days\n\n', time_diffs(idx));
end
end
end

%% Create results table
results_table = table(T_daily_increments', V_daily', ...
'VariableNames', {'Time_Days', 'Daily_Volume_L'});
% Display first 50 rows
fprintf('First 50 daily volumes:\n');
disp(results_table(1:min(50, height(results_table)), :));
%% Visualization
figure('Position', [100 100 1200 500]);
subplot(1,2,1)
plot(T_daily, Vcum_daily, 'b-', 'LineWidth', 1.5)
hold on
plot(Tcum_unique, Vcum_unique, 'r.', 'MarkerSize', 4)
xlabel('Time (Days)')
ylabel('Cumulative Volume (L)')
title('Cumulative Volume Over Time')
legend('Interpolated Daily', 'Original Data', 'Location', 'northwest')
grid on
subplot(1,2,2)
plot(T_daily_increments, V_daily, 'b-', 'LineWidth', 1)
xlabel('Time (Days)')
ylabel('Daily Volume Increment (L)')
title('Daily Volume Changes')
grid on
ylim([0 max(V_daily)*1.1]) % Better visualization
%% Function to export results
% Uncomment to save results
% writetable(results_table, 'daily_volumes_cleaned.xlsx');
% fprintf('Results exported to daily_volumes_cleaned.xlsx\n');
Note: please see attached results.