Hi guys,
When I run my function below, I get stuck in an infinite loop.
from and to are filenames for txt files that already exist.
The only thing I can think of is that the txt file does not end in a
NULL character. Am I missing something?
//---------------------------------------------------------------------------
//Copy all lines except the first one
//into another file.
// from and to are char arrays of filenames
//---------------------------------------------------------------------------
void copyNotFirstLn (char *from, char *to)
{
    F_FILE *fp1;
    F_FILE *fp2;
    char readbuf[100];
    int i = 0;

    fp1 = f_open(from, "r");    // open for reading
    fp2 = f_open(to, "w+");     // open for writing, delete anything already in file, create file if necessary
    f_fgets(readbuf, 100, fp1); // read through (discard) first line

    // read fp1 one line at a time (up to 100 chars), and write to fp2
    while (f_fgets(readbuf, 100, fp1) != NULL)
    {
        iprintf("%d", i);
        f_write(readbuf, 1, strlen(readbuf), fp2);
        i++;
    }
    f_close(fp1);
    f_close(fp2);
}
do all files end in null?
Re: do all files end in null?
The end of a file is not the same as the end of a string. A file is terminated with an EOF (end of file) marker. Exactly what that is depends on the file type or file system, but it is different from a NULL char:
http://en.wikipedia.org/wiki/End_of_file
Now f_fgets will read strings from a file. This means it will read from the file until it reaches an EOF or a NULL char. To check whether you have reached the EOF after an f_fgets, you can use the function f_eof( filehandle ). For example, you might want to modify your code to be:
readbuf[99] = 0;
while ( !f_eof( fp1 ) )
{
    f_fgets( readbuf, 99, fp1 );
    iprintf("%d", i);
    f_write( readbuf, 1, strlen( readbuf ), fp2 );
    i++;
}
The problem you will run into with your original code is that you call f_fgets with a size of 100 on a buffer that is only 100 bytes big. If it reads all 100 bytes from the file, f_fgets will write a terminating 0 to the next location in RAM, which you never allocated, possibly overwriting another system variable. Basically, make sure your read or gets size is always smaller than the buffer you are reading into.
You should also make your buffer much bigger and make it a global variable so it does not consume space on the task stack. I usually try to use at least a 32K buffer for file system reads and writes. Your performance will increase greatly compared to the 100-byte reads and writes in your current code.
-Larry
Re: do all files end in null?
Thanks a lot Larry.
I changed my buffer size to 4096 bytes because that is the cluster size,
and it seemed to work pretty fast.
Then I tried increasing the buffer to 16K and it didn't work as well.
If the NB has a 64K buffer, would it make sense to use as much of it as possible
when copying, or is there no significant improvement over copying one cluster at a time?