[CentOS] CentOS 5.4 ext3 question...

Les Mikesell lesmikesell at gmail.com
Mon Dec 28 18:41:46 UTC 2009

Ross Walker wrote:
> On Dec 28, 2009, at 12:07 PM, Tom Bishop <bishoptf at gmail.com 
> <mailto:bishoptf at gmail.com>> wrote:
>> On Mon, Dec 28, 2009 at 11:03 AM, < 
>> <mailto:david at pnyet.web.id>david at pnyet.web.id 
>> <mailto:david at pnyet.web.id>> wrote:
>>     I'm using ext3 on my CentOS box, and so far so good; I haven't had
>>     any problems. Sometimes my server shuts down when the power is cut,
>>     but CentOS still runs fine with no corrupted files or anything
>>     after it starts again.
>> Thanks guys for the responses. Can anyone explain what the hoopla is
>> then about ext4, its performance issues, and barriers being enabled?
>> There was also some talk about that being a potential issue with
>> ext3. I've tried to google around but have not found a good
>> explanation of what the issue is....
> Barriers expose the poor performance of cheap hard drives. They provide 
> assurance that all the data leading up to the barrier, and the barrier IO 
> itself, are committed to media. This means the barrier does a disk flush 
> first; if the drive supports FUA (forced unit access, i.e. bypassing the 
> cache), it then issues the IO request with FUA set, and if the drive 
> doesn't support FUA it issues another cache flush instead. It's this 
> double flush that causes the most impact to performance.
> The typical fsync() call only assures that data is flushed from memory, 
> but makes no assurance that the drive itself has flushed it to disk, 
> which is where the concern lies.
> Currently in RHEL/CentOS the LVM (device mapper) layer doesn't know how 
> to propagate barriers to the underlying devices, so it filters them out; 
> barriers are therefore only supported on whole drives or raw 
> partitions. This is fixed in current mainline kernels, but has yet to be 
> backported to the RHEL kernels.
> There are a couple of ways to avoid the barrier penalty. One is to have 
> an nvram-backed write cache, either on the controller or as a separate 
> pass-through device. The other is to use a separate log device, either on 
> an SSD that has an nvram cache (newer ones have capacitor-backed cache) 
> or on a standalone nvram drive.

Did linux ever get a working fsync() or does it still flush the entire 
filesystem buffer?

   Les Mikesell
    lesmikesell at gmail.com

More information about the CentOS mailing list