This is sometimes useful when playing with big data.
Instead of running a dd command and waiting for the file to be created block by block, we can run:
$ fallocate -l 200G /mnt/reallyBigFile.csv
It essentially “allocates” all of the space you’re asking for, but it doesn’t bother to write anything. So, when you use fallocate to reserve 200 GB of space, you really do get a 200 GB file.
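To see what you actually got (reusing the example path above, which is just an illustration), you can compare the apparent size with the on-disk allocation; both should report roughly 200G, since fallocate reserves the blocks up front rather than creating a sparse file:

$ ls -lh /mnt/reallyBigFile.csv
$ du -h /mnt/reallyBigFile.csv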
Just remember that you cannot use this approach if you want to benchmark things. I once used it to create a VM image, and when I attached that image to a guest and ran a random read I/O test, the result was several thousand IOPS, which was impossible since the image was placed on a regular hard drive (as opposed to an SSD).
The reason was that the read requests hit blocks that had never been written, so the filesystem didn’t bother to send them to the disk at all and instead immediately returned zero bytes without performing any actual disk I/O.
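The comment doesn’t say which benchmark tool was used; as a sketch, a random read test of the kind described could be run with fio against the attached image file (all parameters here are illustrative):

$ fio --name=randread --filename=/mnt/reallyBigFile.csv --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based

On a preallocated but never-written file, a test like this can report unrealistically high IOPS for exactly the reason described above.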
So fallocate is great if you want to create a file for other purposes, but if you want to make performance measurements you are indeed better off using something like dd to actually write the blocks to disk.
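As a sketch of that alternative (again reusing the example path, with a block size and count that add up to 200 GiB):

$ dd if=/dev/zero of=/mnt/reallyBigFile.csv bs=1M count=204800 status=progress

If the underlying storage compresses or deduplicates data, writing from /dev/urandom instead of /dev/zero gives a more honest test, at the cost of a much slower copy.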
Thanks for your comments Dennis!
This is very valuable. Good to know!
Thanks again
Hernan