linux - Writing a file is quite quick, overwriting the file takes much longer -


I was having some performance problems with PHP scripts on a Linux box, so I ran a few commands to investigate. One thing I noticed was that writing a file is very fast:

  [root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024k count=1000
  1000+0 records in
  1000+0 records out
  1048576000 bytes (1.0 GB) copied, 1.0817 s, 969 MB/s

but overwriting the same file takes much longer:

  [root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024k count=1000
  1000+0 records in
  1000+0 records out
  1048576000 bytes (1.0 GB) copied, 23.0658 s, 45.5 MB/s

Why is that? (I can reproduce these results.)

The first time you write the file, the data simply lands in the system's buffer cache in memory, so dd finishes at memory speed rather than disk speed.

The second time you write the file, dd truncates it first, and that truncation causes all the dirty pages from the previous write to be flushed to disk. Yes, that seems silly: why write out the file's data when the file is about to be truncated to zero length anyway?
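To see the truncation step in isolation, you can write a file and then truncate it with a separate command instead of letting dd do it. This is a sketch, not the questioner's exact setup: /tmp/myGfile is an illustrative path, and the size is reduced from 1 GB to 100 MB to keep the demo quick.

```shell
# Fill the page cache with 100 MB of dirty data (illustrative path/size).
dd if=/dev/zero of=/tmp/myGfile bs=1024k count=100 2>/dev/null

# Truncating the file on its own is where the time goes:
# the dirty pages from the write above get flushed first.
time truncate -s 0 /tmp/myGfile

ls -l /tmp/myGfile   # the file is now zero bytes
```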

You can demonstrate this by running another dd that writes only, say, 4 KB: it takes just as long.
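A sketch of that demonstration, again with an illustrative /tmp path and a 100 MB file instead of the question's 1 GB: the tiny second write is still slow, because dd truncates the large file before writing anything.

```shell
# First pass: leave 100 MB of dirty pages in the cache.
dd if=/dev/zero of=/tmp/myGfile bs=1024k count=100 2>/dev/null

# Second pass writes only 4 KB, but dd truncates the file first,
# so the elapsed time is dominated by flushing the earlier 100 MB.
time dd if=/dev/zero of=/tmp/myGfile bs=4k count=1 2>/dev/null

ls -l /tmp/myGfile   # only the 4 KB just written remain
```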

You can also stop dd from truncating the file by using conv=notrunc .
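A minimal sketch of the conv=notrunc variant, assuming the same illustrative /tmp path and reduced size: with truncation suppressed, dd just overwrites bytes in place, so the small write does not force the whole file out to disk first.

```shell
# Create a 100 MB file (illustrative path/size).
dd if=/dev/zero of=/tmp/myGfile bs=1024k count=100 2>/dev/null

# conv=notrunc overwrites the first 4 KB in place; the file is
# not truncated, so its full length is preserved.
dd if=/dev/zero of=/tmp/myGfile bs=4k count=1 conv=notrunc 2>/dev/null

ls -l /tmp/myGfile   # still 100 MB
```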

