Sounds like a bug in the program. Maybe it runs a separate instance for each page in that mode and doesn't release any memory until it is all finished. On something smaller or less complex it might not make much difference, but if the memory use pushes into swap it will take much longer.
Yes, that's how it seems to me. As I said before, it starts processing swiftly, but soon each new page takes longer and longer until it slows to a crawl. CPU usage reaches 98% and the memory footprint keeps increasing until the end of the process. This happens even on a standalone Windows workstation, not only over the network. I can report this to Adobe, but I don't have much hope about the attention such a large company will give to an issue like this...
By the way, yet another really contorted workaround would be to run VMware Server or VirtualBox (both free) on the CentOS box with a Windows guest to get a reliable NTFS network drive. If you have resources to spare on this server, you could even run Distiller there, so you could shut down the workstations as soon as the final run starts.
I thought of doing that, but it really isn't realistic in my environment at the moment; it's overkill. It would be much easier to put a small FAT32-formatted partition on the server just for that purpose. The PS files are not kept: after processing they are discarded, and only the resulting PDF is used and archived. For now I will stick with an EXT3 partition with dir_index off and use rundirex like we always did. It works well this way: 3 to 4 minutes to render a complete publication.
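(For reference, a rough sketch of how dir_index can be turned off on an existing ext3 partition; /dev/sdb1 is just a placeholder for whatever device actually holds that partition, and it has to be unmounted first:

  tune2fs -O ^dir_index /dev/sdb1   # clear the dir_index feature flag
  e2fsck -fD /dev/sdb1              # force a check and rebuild/optimize the directories

After that the partition can be remounted and used as before.)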
Thank you for your tips. Even if I don't use them now, the information will still be here. Maybe it will be needed one of these days.