But why would you do that? (The only time I've ever used that is when benchmarking, where you need a clean cache between runs.) Otherwise the kernel will just empty it as needed when you run a program.
First, the point isn't why; it's that this is possible. Please don't say "this is impossible because it doesn't make sense".
Second, I agree with you. Normally people don't need to do this unless they hate their hard drives. :-) But there are some special cases, for example when the disk is "volatile", or when you want a huge chunk of contiguous physical address space and want to tell the OS to stay away and not touch any pages inside it.
PS: An interesting thing is that Windows people are always looking for a way to enlarge the disk cache. It's fine on server versions, but on desktop versions such as Windows 7, if I remember correctly, the disk cache is capped at ~4GB (not sure if globally or per file). Oh well.
I can't answer for everyone, but I can tell you why I have a cron job dropping my cache every several minutes:
The application I develop consumes most of the ram on my machine when running, and takes several minutes to rebuild and start up to begin with. When most of my memory is being used for the cache, this process takes several minutes longer, because I'm making millions upon millions of calls for more memory -- and each one has to get some of that cached memory back for itself. If I simply drop the cache all at once, every minute, it takes a split second. If I shrink it over a million increments, it takes around a minute.
Even typing that I feel I must be doing something wrong; and yet, it worked.
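For what it's worth, the cron side of this is just a one-liner in the root crontab (a sketch; the `3` drops both the page cache and the dentry/inode caches, and the `sync` first writes out dirty pages so there's actually something droppable):

```
# root crontab: flush dirty pages, then drop all caches, once a minute
* * * * * sync && echo 3 > /proc/sys/vm/drop_caches
```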
Is your app making more calls to malloc when memory is being used by the disk cache, or do you mean something else by "millions upon millions of calls for more memory"?
You control how many times you call malloc. Just call it once with what you need.
You can drop the cache by writing a byte to /proc/sys/vm/drop_caches.
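Concretely, something like this (a sketch; the value selects what to drop: 1 = page cache, 2 = dentries and inodes, 3 = both, and writing it needs root, so the guard below just reports instead of failing):

```shell
# Flush dirty pages first, so they don't survive the drop as unwritten data.
sync

# Write a single byte to the sysctl; 3 drops page cache plus dentries/inodes.
if echo 3 > /proc/sys/vm/drop_caches 2>/dev/null; then
    echo "caches dropped"
else
    echo "need root to write drop_caches"
fi
```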