I'm staring at the free CentOS images on AWS, and seeing that whoever set those up elected to put the root filesystem on a partition, /dev/xvda1, rather than taking advantage of Amazon's tendency to present each disk as "/dev/xvda", "/dev/xvdb", etc. and using those whole devices directly as filesystems. The result is that if you elect to allocate a larger base disk image, for example 50 Gig to allow local home directories, space for "mock", or room for bulky logs, and don't spend the time to select and allocate additional disk images, it's awkward to simply expand the "/" partition. And with only 8 Gig allocated in the latest CentOS 6 images I see in AWS, it's possible to get pressed for space pretty quickly.

Now, AWS publishes guidelines on manipulating partition sizes and expanding the matching filesystem, but they're very clear: "unmount the partition before you touch it!!!" That's a bit difficult with the "/" partition, and they understandably don't have the kind of "boot from CD and work from the console" setup I'd normally use for that kind of work.

So: why did the creators of that CentOS AMI elect to use such a small "/" partition? And how dangerous is it, with the system essentially idle, to use "parted" to expand the "/dev/xvda1" partition and then "resize2fs" to expand the "/" filesystem while the system is alive? (The sort of command sequence I'm contemplating is sketched in the P.S. below.)

Note that, because I'm a complete weasel, I know at least one way around this: add a second disk, copy the OS to *that*, set grub to boot from the second disk, reboot from it, partition the first disk as desired, copy the OS back, reset grub to boot from the first disk, and pray. I've had good success with that approach in the past, and have rebuilt roughly 15,000 Linux systems this way. But that work predates CentOS, and I don't want to go through it again.

So, has anyone resized "/" successfully and gracefully on AWS CentOS instances?

Nico Kadel-Garcia
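P.S. For reference, here's the rough shape of what I have in mind. This is only a sketch, and it assumes the "growpart" tool from cloud-utils (EPEL carries a cloud-utils-growpart package for CentOS 6) instead of hand-driving parted, plus an ext3/ext4 root; I haven't verified either assumption against these particular AMIs:

    # Grow partition 1 to fill the disk. growpart rewrites the partition
    # table in place, keeping the same start sector, so the data on the
    # filesystem itself is not moved:
    growpart /dev/xvda 1

    # The kernel will usually refuse to re-read the partition table of a
    # disk holding the mounted "/", so reboot once here to pick up the
    # new partition size:
    reboot

    # After the reboot, grow the filesystem while it's mounted. resize2fs
    # supports online growth of a mounted ext3/ext4 filesystem; with no
    # size argument it expands to fill the partition:
    resize2fs /dev/xvda1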