Just idle curiosity. I've always been fond of the simple hddtemp utility. I tried 'yum install hddtemp' (I have dag in my repos), but it was not found. Google found it on '/pub/dag/dries/packages...'. So I downloaded it, installed it (CentOS 4), and it works just fine.
Just wondering why it's not in the dag el4 repo?
Dries and Dag repo's aren't combined at the current moment, AFAIK. You can set your client with a config as shown here: http://dries.studentenweb.org/apt/
On 4/16/05, Collins Richey crichey@gmail.com wrote:
Just idle curiosity. I've always been fond of the simple hddtemp utility. I tried 'yum install hddtemp' (I have dag in my repos), but it was not found. Google found it on '/pub/dag/dries/packages...'. So I downloaded it, installed (Centos4), and it works just fine.
Just wondering why it's not in the dag el4 repo?
-- Collins When I saw the Iraqi people voting three weeks ago, 8 million of them, it was the start of a new Arab world.... The Berlin Wall has fallen. - Lebanese Druze leader Walid Jumblatt _______________________________________________ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
On 4/15/05, Beau Henderson silentbob@gmail.com wrote:
Dries and Dag repo's aren't combined at the current moment, AFAIK. You can set your client with a config as shown here: http://dries.studentenweb.org/apt/
Thanks, I tried that, but the dries repo doesn't appear to be very reliable. I even tried with some mirrors, and it doesn't work thus far: "no more mirrors to try...". Does anyone have a reliable mirrorlist for dries?
You can use the fedora rpm from here, http://download.fedora.us/fedora/fedora/latest/i386/RPMS.stable/hddtemp-0.3-...
Worked for me.
Craig
On 4/15/05, Beau Henderson silentbob@gmail.com wrote:
Dries and Dag repo's aren't combined at the current moment, AFAIK. You can set your client with a config as shown here: http://dries.studentenweb.org/apt/
Thanks, I tried that, but the dries repo doesn't appear to be very reliable. I even tried with some mirrors, and it doesn't work thus far: "no more mirrors to try...". Does anyone have a reliable mirrorlist for dries?
Hi,
I've just installed CentOS on my new machine and I'm wondering about a thing: the refresh rate is set rather low (it looks like 50Hz), whereas the monitor can at least support 75Hz in 1152x864 resolution (which is the resolution I set the monitor at).
Now... for some reason CentOS decided to use the generic VESA video driver, so perhaps this is the issue, and perhaps a better (i.e. more specific) video driver would give me the additional refresh rates, but then, from Gnome's -> System Settings -> Display menu, I do not see any choices for the refresh rate at all. It does, however, make changes to /etc/X11/xorg.conf.
The motherboard is an ASRock K7S41GX, and it seems the video adapter that is used is a bridged (North Bridge?) SiS 741GX one. Unfortunately the CD-ROM that came with the machine only has W*nd*ws drivers for this chip set. :(
Does anyone know if I can somehow force the generic VESA driver to run at 75Hz, or does anyone know where I can find the proper video driver (I do not see it listed in the list of available video drivers)?
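For reference, refresh rates are controlled from the Monitor section of /etc/X11/xorg.conf. A sketch of what forcing 75Hz might look like (the sync ranges and identifiers below are assumptions; check the monitor's manual, and note that the generic vesa driver is limited to the modes the video BIOS provides, so this may have no effect until a native driver is used):

```
Section "Monitor"
    Identifier  "Monitor0"
    # Assumed ranges -- verify against the monitor's specifications,
    # since wrong values can produce an unusable display.
    HorizSync   30.0 - 70.0
    VertRefresh 50.0 - 75.0
EndSection

Section "Screen"
    Identifier "Screen0"
    Monitor    "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1152x864"
    EndSubSection
EndSection
```

With those ranges declared, the X server should pick the highest refresh rate the monitor advertises for the chosen mode.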
Meanwhile, I'll take a look myself on Google as well, but I hope someone can give me some quick pointers...
Cheers! Olafo
On Mon, 2005-04-18 at 14:32 +0200, Olaf Greve wrote:
Hi,
I've just installed CentOS on my new machine and I'm wondering about a thing: the refresh rate is set rather low (it looks like 50Hz), whereas the monitor can at least support 75Hz in 1152x864 resolution (which is the resolution I set the monitor at).
Now... for some reason CentOS decided to use the generic VESA video driver, so perhaps this is the issue, and perhaps a better (i.e. more specific) video driver would give me the additional refresh rates, but then, from Gnome's -> System Settings -> Display menu, I do not see any choices for the refresh rate at all. It does, however, make changes to /etc/X11/xorg.conf.
The motherboard is an ASRock K7S41GX, and it seems the video adapter that is used is a bridged (North Bridge?) SiS 741GX one. Unfortunately the CD-ROM that came with the machine only has W*nd*ws drivers for this chip set. :(
Does anyone know if I can somehow force the generic VESA driver to run at 75Hz, or does anyone know where I can find the proper video driver (I do not see it listed in the list of available video drivers)?
Meanwhile, I'll take a look myself on Google as well, but I hope someone can give me some quick pointers...
THE definitive web site for SiS video chipsets on Linux is...
http://www.winischhofer.at/linuxsisvga.shtml
Craig
Is it just me, or does anyone see something seriously wrong with the Software RAID example?
http://mirrors.cat.pdx.edu/centos/3/docs/html/rhel-sag-en-3/ch-software-raid.html
Figure 12-4
Isn't RAID supposed to be redundant disks, not redundant partitions? If that hda disk goes bad, the raid and redundancy is nowhere to be found along with your data. Right?
Is there a way to make each disk a single software raid partition, then use slices of that partition for your drive partitions with LVM or something? I basically want to encapsulate the partitions for the operating system with RAID so that if a disk drive fails, it can be repaired.
I have never used this function before, so set me straight if I'm just missing the point.
Thanks, Chuck
On 4/19/05, Chuck Rock carock@epctech.com wrote:
Is it just me, or does anyone see something seriously wrong with the Software RAID example?
http://mirrors.cat.pdx.edu/centos/3/docs/html/rhel-sag-en-3/ch-software-raid.html
Figure 12-4
Isn't RAID supposed to be redundant disks, not redundant partitions? If that hda disk goes bad, the raid and redundancy is nowhere to be found along with your data. Right?
Correct. My guess is that the person who did the screenshots didn't have two drives, or something like that.
Is there a way to make each disk a single software raid partition, then use slices of that partition for your drive partitions with LVM or something? I basically want to encapsulate the partitions for the operating system with RAID so that if a disk drive fails, it can be repaired.
LVM is pretty cool when you want to concat or split drives out. For example, you create a single big RAID across the drives, and then use LVM to "partition" it (create logical volumes) for the various mountpoints. LVM is also pretty cool when it comes to expanding your storage: say you start out with RAID1 on two disks, and at a later stage buy two more disks (which you also put in RAID1); you can then use LVM to grow the "partitions" (volumes).
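As a rough sketch of that layering (device names, volume group name, and sizes here are all assumptions, and these commands need root and real disks):

```shell
# Build one RAID1 array from two whole-disk partitions (assumed names).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Layer LVM on top: the array becomes a physical volume in a volume group.
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# "Partition" the array into logical volumes for the various mountpoints.
lvcreate -L 10G -n root vg0
lvcreate -L 20G -n home vg0

# Later, after adding two more disks as a second RAID1, grow the storage.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md1
vgextend vg0 /dev/md1
lvextend -L +20G /dev/vg0/home
# (then grow the filesystem on the volume with the appropriate resize tool)
```

The point of the split is that mdadm handles the redundancy while LVM handles the carving-up and later resizing, so neither layer has to do the other's job.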
I have never used this function before, so set me straight if I'm just missing the point.
LVM doesn't do RAID (well, you can do striping [RAID0]), and you should combine [hardware/software] raid with LVM. And needless to say, RAID is not a substitute for backups.
Best regards Michael Boman
On 4/18/05, Chuck Rock carock@epctech.com wrote:
Isn't RAID supposed to be redundant disks, not redundant partitions? If that hda disk goes bad, the raid and redundancy is nowhere to be found along with your data. Right?
Software is more flexible than that. It works with partitions.
Of course, for redundancy purposes, you want to mirror partitions on separate physical drives. But you don't have to mirror all the partitions on all your drives.
For example, you don't raid swap partitions. Or you may want to have a small /boot that's not raid. And then a big /home which is mirrored.
Francois
On Mon, 18 Apr 2005 at 10:51pm, Francois Caen wrote
On 4/18/05, Chuck Rock carock@epctech.com wrote:
Isn't RAID supposed to be redundant disks, not redundant partitions? If that hda disk goes bad, the raid and redundancy is nowhere to be found along with your data. Right?
Software is more flexible than that. It works with partitions.
Of course, for redundancy purposes, you want to mirror partitions on separate physical drives. But you don't have to mirror all the partitions on all your drives.
For example, you don't raid swap partitions. Or you may want to have a small /boot that's not raid. And then a big /home which is mirrored.
If you want the system to survive losing a disk (i.e. it stays up until you shut it down to swap the disk (if you don't have hot swap)), you must RAID all partitions, including swap. In a ks.cfg, it looks something like this:
part raid.01 --size 8192 --ondisk sda --asprimary
part raid.11 --size 8192 --ondisk sdb --asprimary
part raid.02 --size 4096 --ondisk sda
part raid.12 --size 4096 --ondisk sdb
part raid.03 --size 4096 --ondisk sda
part raid.13 --size 4096 --ondisk sdb
part raid.04 --size 4096 --ondisk sda
part raid.14 --size 4096 --ondisk sdb
part raid.05 --size 2047 --ondisk sda
part raid.15 --size 2047 --ondisk sdb
part raid.06 --size 1023 --grow --ondisk sda
part raid.16 --size 1023 --grow --ondisk sdb

raid / --level=1 --device=md0 raid.01 raid.11
raid /usr/local --level=1 --device=md1 raid.02 raid.12
raid /tmp --level=1 --device=md2 raid.03 raid.13
raid /var --level=1 --device=md3 raid.04 raid.14
raid swap --level=1 --device=md4 raid.05 raid.15
raid /home --level=1 --device=md5 raid.06 raid.16
On 4/19/05, Joshua Baker-LePain jlb17@duke.edu wrote:
If you want the system to survive losing a disk (i.e. it stays up until you shut it down to swap the disk (if you don't have hot swap)), you must RAID all partitions, including swap. In a ks.cfg, it looks something like this:
The reason I don't softraid1 my swaps is that the default behavior is striping a-la-raid0 if you have multiple swaps. At least that's my understanding of it.
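(That striping behavior comes from swap priorities: swap areas given equal priority are used round-robin by the kernel. A hypothetical /etc/fstab fragment, with assumed device names:)

```
# Two swap partitions with equal priority -- the kernel interleaves them,
# RAID0-style: fast, but pages are lost if either disk dies.
/dev/sda5   swap   swap   pri=1   0 0
/dev/sdb5   swap   swap   pri=1   0 0
```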
You guys bring up a good point in case of drive failure. I never tested it. Do you guys know what happens if the swaps are not softraid1? Does the kernel panic or something of the like? Or does it survive and just operate on less swapspace?
Francois
On 4/19/05, Francois Caen frcaen@gmail.com wrote:
On 4/19/05, Joshua Baker-LePain jlb17@duke.edu wrote:
If you want the system to survive losing a disk (i.e. it stays up until you shut it down to swap the disk (if you don't have hot swap)), you must RAID all partitions, including swap. In a ks.cfg, it looks something like this:
The reason I don't softraid1 my swaps is that the default behavior is striping a-la-raid0 if you have multiple swaps. At least that's my understanding of it.
You guys bring up a good point in case of drive failure. I never tested it. Do you guys know what happens if the swaps are not softraid1? Does the kernel panic or something of the like? Or does it survive and just operate on less swapspace?
I only know that it continues along fine when swap is mirrored, but I have never tested it without raided swaps. We used to do this on Solaris, so when we went to Linux we did the same thing. That said, I would imagine that bad things would happen if you happened to have pages swapped out. What's the kernel going to do when it goes to acquire a page from disk (data or text) and it's not available? I would hope it would panic, because something is very wrong in the universe...james
Hi, I had raid0 on my swap partitions, but in my experience it causes a kernel panic if one disk fails. I assume crashing depends on what is using that swap space at the moment of failure. On the other hand, if swap is on raid1, the failure of one disk doesn't affect the correct functioning of the machine.
Ciao Simone
Francois Caen wrote:
On 4/19/05, Joshua Baker-LePain jlb17@duke.edu wrote:
If you want the system to survive losing a disk (i.e. it stays up until you shut it down to swap the disk (if you don't have hot swap)), you must RAID all partitions, including swap. In a ks.cfg, it looks something like this:
The reason I don't softraid1 my swaps is that the default behavior is striping a-la-raid0 if you have multiple swaps. At least that's my understanding of it.
You guys bring up a good point in case of drive failure. I never tested it. Do you guys know what happens if the swaps are not softraid1? Does the kernel panic or something of the like? Or does it survive and just operate on less swapspace?
Francois
On Tue, 2005-04-19 at 08:15 -0700, Francois Caen wrote:
On 4/19/05, Joshua Baker-LePain jlb17@duke.edu wrote:
If you want the system to survive losing a disk (i.e. it stays up until you shut it down to swap the disk (if you don't have hot swap)), you must RAID all partitions, including swap. In a ks.cfg, it looks something like this:
The reason I don't softraid1 my swaps is that the default behavior is striping a-la-raid0 if you have multiple swaps. At least that's my understanding of it.
You guys bring up a good point in case of drive failure. I never tested it. Do you guys know what happens if the swaps are not softraid1? Does the kernel panic or something of the like? Or does it survive and just operate on less swapspace?
Well, it dies horribly... it out-and-out locked up for me on RH 7.3 when a drive holding un-mirrored swap failed. After that I decided mirrored swap was a good thing.
Paul
On 4/19/05, Francois Caen frcaen@gmail.com wrote:
On 4/18/05, Chuck Rock carock@epctech.com wrote:
Isn't RAID supposed to be redundant disks, not redundant partitions? If that hda disk goes bad, the raid and redundancy is nowhere to be found along with your data. Right?
Software is more flexible than that. It works with partitions.
Of course, for redundancy purposes, you want to mirror partitions on separate physical drives. But you don't have to mirror all the partitions on all your drives.
For example, you don't raid swap partitions.
We do that. The thinking is that if a drive goes your swap is still accessible through the md device. Ultimately we are trying to avoid VM issues because we lost one drive.
Cheers...james
On Mon, 2005-04-18 at 14:19 -0500, Chuck Rock wrote:
Is it just me, or does anyone see something seriously wrong with the Software RAID example?
http://mirrors.cat.pdx.edu/centos/3/docs/html/rhel-sag-en-3/ch-software-raid.html
Figure 12-4
Isn't RAID supposed to be redundant disks, not redundant partitions? If that hda disk goes bad, the raid and redundancy is nowhere to be found along with your data. Right?
Is there a way to make each disk a single software raid partition, then use slices of that partition for your drive partitions with LVM or something? I basically want to encapsulate the partitions for the operating system with RAID so that if a disk drive fails, it can be repaired.
I have never used this function before, so set me straight if I'm just missing the point.
Under CentOS 4, all I did was create 100MB /boot and /boot2 partitions as primary partitions on each drive, make the rest of each disk a raid partition, create md0, specify that it was part of an LVM volume group in Disk Druid, and finally split up the volume group into separate partitions.
Paul