[CentOS] Rescan /dev/sd* without reboot?
centos at linuxpowered.net
Wed Jul 2 00:57:53 UTC 2008
Rainer Duffner wrote:
> I can't believe that nobody needs that in Linux-land.
> If you enlarge the LUN on the SAN for a Linux-volume, you end-up with
> a 2nd partition behind the first - you'd need to do some nasty,
> dangerous disklabel-manipulations to fix that.
> I end-up just adding another LUN and using LVM to piece them
> together. Of course, having multiple LUNs from a SAN in an LVM makes
> it next to impossible to create a consistent snapshot (via the SAN's
> snapshot functionality) in case the SAN (like all HP EVAs, AFAIK) can
> only do one snapshot of a LUN at exactly the same time.
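As for the question in the subject line, recent 2.6 kernels do expose a rescan interface through sysfs. A rough sketch (the device name sdb and host0 here are just examples, adjust for your setup):

```shell
# Rescan an existing SCSI device so the kernel notices a resized LUN
# (this updates the reported capacity without a reboot):
echo 1 > /sys/block/sdb/device/rescan

# Rescan an entire SCSI host to pick up newly exported LUNs;
# "- - -" is a wildcard for all channels, targets, and LUNs:
echo "- - -" > /sys/class/scsi_host/host0/scan
```

Note that rescanning makes the kernel see the new size; it does nothing about a partition table sitting on the device, which is where the disklabel headaches above come from.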
An increasing number of storage arrays support thin provisioning.
I made extensive use of this technology at my last company. Say you
need 100GB today but may need to grow later. You can create a 1TB
volume on the array, export it to the host, and, depending on disk
usage patterns, optionally carve out a 100GB logical volume with LVM.
Fill up that volume, and while you have a 1TB drive exported to the
system, only 100GB of space is actually utilized on the array.
Increase the size of the logical volume with lvextend and off you
go. No fuss, no muss.
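The workflow above might look roughly like this; the device name /dev/sdb and the vg_san/lv_data names are made up for the example, and I'm assuming ext3, which supports online growth with resize2fs:

```shell
# Put the whole thin-provisioned 1TB LUN under LVM control:
pvcreate /dev/sdb
vgcreate vg_san /dev/sdb

# Carve out only the 100GB you need today:
lvcreate -L 100G -n lv_data vg_san
mkfs.ext3 /dev/vg_san/lv_data
mount /dev/vg_san/lv_data /data

# Later, when /data fills up, grow it in place:
lvextend -L +100G /dev/vg_san/lv_data   # take another 100GB from the VG
resize2fs /dev/vg_san/lv_data           # grow the filesystem online
```

The array only backs the blocks LVM has actually written to, so the unused 900GB costs nothing until you extend into it.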
If your application's space usage characteristics are such that
it doesn't consume large amounts of space and then free it, you
can create that 1TB volume (or a 2TB volume, or bigger) right off
the bat and never have to worry about extending it until you
actually reach that 1TB. Keep in mind that thin provisioning is a
one-way trip: once the space is allocated on the array, it cannot
be "freed". I did read a storage blog where a NetApp engineer
described a utility they have to reclaim space from thin-provisioned
volumes running NTFS, but I haven't seen anything for Linux (and he
warned it's a very I/O-intensive operation). For apps that are
sloppy with space, I just restrict their usage with LVM; that way I
know I can easily extend things, and I can still control growth with
an iron fist if I so desire.
At my last company I achieved 400% oversubscription with thin
provisioning. It did take several months of closely watching
the space utilization characteristics of the various applications
to determine the optimal storage configuration. The vendor
says that on average customers save about 50% of their space using
this technology.
If it turns out you never use more than 100GB, nothing is
lost; the rest of the space is available to be allocated to
other systems. No waste.
I'm optimistic that in the coming years the standard file systems
will include more intelligence with regard to thin provisioning,
that is, being able to mark freed space in such a way that the array
can determine with certainty that it is no longer in use and reclaim
it, and also to intelligently re-use recently deleted blocks before
allocating new ones (to some extent they do this already, but it's
not good enough). Thin provisioning has really started to take off;
in the past year or so the number of storage vendors supporting it
has gone up 10x. How well it actually works depends on the vendor;
some of the architectures out there are better than others.