Textscan doesn't work on big files?

Oscar Perez on 22 May 2024
Commented: Harald on 24 May 2024
I'm currently using the latest MATLAB version (R2024a) on a Mac with 16 GB of RAM.
I'm trying to split a really big cube file (100 GB) into smaller cube files of only 210151 lines each, using this code:
%% Splitting
% Open the result.cube file
fid = fopen(cube);
if fid == -1
    error('File could not be opened.');
end
m = 1;
while ~feof(fid)
    % Skip the alpha and beta density
    fseek(fid, 16596786, 0);
    % Copy the spin density
    text = textscan(fid, '%s', 210150, 'Delimiter', '\n', 'Whitespace', '');
    % Print the cube snapshot to the subdirectory
    name = string(step_nr(m)) + '.cube';
    full_path = fullfile(name1, name);
    fid_new = fopen(full_path, "w");
    fprintf(fid_new, '%s\n', text{1}{:});
    fclose(fid_new);
    m = m + 1;
end
fclose(fid);
save("steps", "step_nr")
My problem is that textscan apparently isn't suited to files of this kind. I also tried line-by-line copying with fgetl, but that takes ages for a 100 GB file. Is there a more efficient way to split the file?
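Since the file is plain text, one byte-level alternative is to copy fixed-size chunks with fread and fwrite and only count newline characters, without parsing numbers at all. A minimal sketch of that idea (the buffer size and the part_*.cube output names are assumptions, and it ignores the alpha/beta blocks that the loop above skips):
linesPerFile = 210151;
bufSize = 16*1024*1024;                      % 16 MB read buffer (assumption)
fid = fopen('result.cube', 'r');
part = 1;
fout = fopen(sprintf('part_%d.cube', part), 'w');
linesWritten = 0;
while ~feof(fid)
    buf = fread(fid, bufSize, '*uint8');     % raw bytes, no parsing
    nl = find(buf == 10);                    % positions of newline characters
    pos = 1;
    while linesWritten + numel(nl) >= linesPerFile
        cut = nl(linesPerFile - linesWritten);   % newline that completes this part
        fwrite(fout, buf(pos:cut));
        fclose(fout);
        part = part + 1;
        fout = fopen(sprintf('part_%d.cube', part), 'w');
        nl = nl(nl > cut);
        pos = cut + 1;
        linesWritten = 0;
    end
    fwrite(fout, buf(pos:end));              % remainder stays in the current part
    linesWritten = linesWritten + numel(nl);
end
fclose(fout);
fclose(fid);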
I've read about fscanf and tried this:
tic;
fid = fopen('result.cube');
fgetl(fid); fgetl(fid);                            % skip the two comment lines
f = fscanf(fid, '%d %f %f %f', [4 4]);
s = fscanf(fid, '%d %f %f %f %f', [5 192]);
n = fscanf(fid, '%f %f %f %f %f %f', [6 209953]);
fid_new = fopen("new", 'w');
fprintf(fid_new, '%d %.6f %.6f %.6f\n', f);
fprintf(fid_new, '%d %.6f %.6f %.6f %.6f\n', s);
fprintf(fid_new, '%f %f %f %f %f %f\n', n);        % six fields to match the six rows of n
fclose(fid_new);
fclose(fid);
t = toc
But my problems here are: `s` is not aligned in the individual file the way it is in the big file, and `n` is written as plain decimals instead of scientific notation like E-02. I also tried to copy it line by line, but that takes forever. Any suggestions on how to improve this? I want the output to keep the formatting of the original file.
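For the E-notation issue specifically, fprintf can write fixed-width scientific fields directly; a sketch of the idea (the %13.5E width is an assumption about the original cube layout, not read from the file):
% Write n with six scientific-notation fields per line, e.g. 1.23456E-02
fprintf(fid_new, '%13.5E%13.5E%13.5E%13.5E%13.5E%13.5E\n', n);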
2 Comments
Steven Lord on 22 May 2024
Is your goal to split the file or is your goal to work with the data in MATLAB? If the latter, some of the Large File and Big Data functionality available in MATLAB may be of use to you.
Oscar Perez on 22 May 2024
My goal is actually just splitting a really huge file into smaller ones. Afterwards, I want to deal with them individually.


Answers (1)

Harald on 22 May 2024
Hi Oscar,
Please attach a sample data file (1 MB will be plenty) so that we can reproduce any issues.
What problem do you encounter with the textscan approach? One issue I suspect: While textscan usually resumes where the previous textscan command left off, you always use fseek to move to the same point again. It seems you should place the call to fseek outside of the while loop.
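For illustration, a sketch of that change (assuming the alpha and beta density block really occurs only once, at the start of the file; step_nr and name1 come from the surrounding script, as in the original code):
fid = fopen(cube);
fseek(fid, 16596786, 0);                     % skip the density block once
m = 1;
while ~feof(fid)
    blk = textscan(fid, '%s', 210150, 'Delimiter', '\n', 'Whitespace', '');
    if isempty(blk{1}), break; end           % nothing left to write
    fid_new = fopen(fullfile(name1, string(step_nr(m)) + ".cube"), 'w');
    fprintf(fid_new, '%s\n', blk{1}{:});
    fclose(fid_new);
    m = m + 1;
end
fclose(fid);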
For block reading, I would usually resort to datastores. If the data is in tabular format, I would specifically use tabularTextDatastore.
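For instance, a minimal sketch of the datastore route, assuming the numeric body can be treated as whitespace-delimited tabular data (header handling and the block_*.cube names are assumptions):
ds = tabularTextDatastore('result.cube', 'FileExtensions', '.cube', ...
    'ReadVariableNames', false, 'Delimiter', ' ', ...
    'MultipleDelimitersAsOne', true);
ds.ReadSize = 210150;                        % rows per block
k = 1;
while hasdata(ds)
    T = read(ds);                            % next block as a table
    writetable(T, sprintf('block_%d.cube', k), 'FileType', 'text', ...
        'Delimiter', ' ', 'WriteVariableNames', false);
    k = k + 1;
end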
Best wishes,
Harald
9 Comments
Oscar Perez on 24 May 2024
Edited: Oscar Perez on 24 May 2024
MATLAB stops exactly at textscan and displays "out of memory", already in the first loop iteration.
Harald on 24 May 2024
Ok. Can you try with a smaller number of rows (say 20000 or 2000) to see what the memory usage is?
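For example, a quick check along those lines (2000 is just a trial row count; whos reports how many bytes the block occupies):
fid = fopen('result.cube');
blk = textscan(fid, '%s', 2000, 'Delimiter', '\n', 'Whitespace', '');
info = whos('blk');
fprintf('2000 lines occupy %.1f MB in memory\n', info.bytes/1e6);
fclose(fid);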

