Hello,
I have a server with a 17 TB SCSI storage. In the past, the storage had a JFS filesystem; now I want to create an ext4 filesystem on it. I have updated e2fsprogs from 1.41 to 1.42 (1.41 has a 16 TB limit).
Now I have the 17 TB storage as /dev/sda1 with ext4. I can mount this device as /home/ (/etc/fstab: "/dev/sda1 /home/ ext4 defaults 1 2"). Then I run e2fsck on /dev/sda1 (unmounted); no error messages appear on the screen.
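Roughly, the steps above correspond to the following commands (a sketch, assuming e2fsprogs 1.42 and the device name /dev/sda1; the -O 64bit flag is the feature 1.42 needs for filesystems larger than 16 TB):

  # create an ext4 filesystem larger than 16 TB (requires e2fsprogs >= 1.42)
  mkfs.ext4 -O 64bit /dev/sda1

  # /etc/fstab entry as used above
  /dev/sda1  /home/  ext4  defaults  1 2

  # offline check of the unmounted filesystem
  e2fsck -f /dev/sda1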
If I now reboot the server, it does not start:
<---//
Checking filesystems
/dev/md2: clean, .....
/dev/md0: clean, .....
/dev/sda1 is in use.
e2fsck: Cannot continue, aborting.
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):
//--->
I thought this might be a problem with e2fsprogs 1.42, so I reinstalled the server with the default e2fsprogs 1.41 from CentOS 6.4 and created only a 16 TB /dev/sda1 partition with ext4. But if I run "e2fsck /dev/sda1" and reboot the server, I get the same message on the boot screen and the server does not boot.
Why does the system think that the device is still in use? How can I change this?
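A sketch of standard commands that can show what has claimed a block device at boot time (the device name /dev/sda1 is an assumption):

  # is the device, or a partition on it, already a member of an md RAID set?
  cat /proc/mdstat
  mdadm --detail --scan

  # is it held by the device mapper (LVM, multipath, ...)?
  dmsetup ls

  # is any process using it directly?
  fuser -v /dev/sda1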
Thanks Sebastian
Unbelievable!
After a reboot, sometimes: /dev/sda1 1 267350 2147483647+ ee GPT
sometimes: /dev/sdc1 1 267350 2147483647+ ee GPT
The server changes the device name. I'm confused - I will use the UUID in fstab.
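A sketch of how that could look (the UUID below is a placeholder; blkid prints the real one for the device):

  # read the filesystem UUID of the big storage
  blkid /dev/sda1

  # /etc/fstab entry by UUID instead of device name (placeholder UUID)
  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home/  ext4  defaults  1 2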
On 05/17/2013 11:52 AM, sebastian wrote:
sometimes: /dev/sdc1
Does dmesg say what /dev/sd[ab] is in this case?
Mogens
sd 0:0:0:0: [sda] 586072368 512-byte logical blocks: (300 GB/279 GiB)
sd 1:0:0:0: [sdb] 586072368 512-byte logical blocks: (300 GB/279 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sda: sdb:
sd 6:0:0:0: [sdc] 35158450176 512-byte logical blocks: (18.0 TB/16.3 TiB)
sd 6:0:0:0: [sdc] Write Protect is off
sd 6:0:0:0: [sdc] Mode Sense: 8f 00 00 08
sd 6:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdc: sda1 sda2 sda3
sd 0:0:0:0: [sda] Attached SCSI disk
sdb1 sdb2 sdb3
sd 1:0:0:0: [sdb] Attached SCSI disk
sdc1
sd 6:0:0:0: [sdc] Attached SCSI disk
md: bind<sda1>
md: bind<sda3>
md: bind<sdb3>
md: raid1 personality registered for level 1
bio: create slab <bio-1> at 1
md/raid1:md2: active with 2 out of 2 mirrors
created bitmap (3 pages) for device md2
md2: bitmap initialized from disk: read 1 pages, set 0 of 4342 bits
md2: detected capacity change from 0 to 291335110656
md2: unknown partition table
md: bind<sdb1>
md/raid1:md1: active with 2 out of 2 mirrors
md1: detected capacity change from 0 to 8384348160
md1: unknown partition table
EXT4-fs (md2): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/md2
On 05/17/2013 01:23 PM, sebastian wrote:
sd 0:0:0:0: [sda] 586072368 512-byte logical blocks: (300 GB/279 GiB)
sd 1:0:0:0: [sdb] 586072368 512-byte logical blocks: (300 GB/279 GiB)
...
sd 6:0:0:0: [sdc] 35158450176 512-byte logical blocks: (18.0 TB/16.3 TiB)
sd 6:0:0:0: [sdc] Write Protect is off
sd 6:0:0:0: [sdc] Mode Sense: 8f 00 00 08
sd 6:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdc: sda1 sda2 sda3
sd 0:0:0:0: [sda] Attached SCSI disk
sdb1 sdb2 sdb3
sd 1:0:0:0: [sdb] Attached SCSI disk
sdc1
So sd[ab] are two 300 GB disks, and sdc is the big storage.
When the big storage shows up as sda, do the two disks show up as sd[bc] or are they absent?
Mogens
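One way to see which physical disk got which sd* name on a given boot, independent of the probe order (a sketch; these udev symlinks exist on a stock CentOS 6 install):

  # persistent names pointing at the current sd* devices
  ls -l /dev/disk/by-id/
  ls -l /dev/disk/by-uuid/

  # the kernel's view of the SCSI hosts and targets
  cat /proc/scsi/scsi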
Mogens Kjaer wrote:
So sd[ab] are two 300 GB disks, and sdc is the big storage.
When the big storage shows up as sda, do the two disks show up as sd[bc] or are they absent?
This is odd; I think I've only seen it once. I'd suggest using a label or (bleah!) a UUID. We prefer labels, since a) you can remember them, b) you can look for them, and c) there's no way to remember a UUID....
mark
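A sketch of the label approach (the label name "home" and the device name /dev/sdc1 are assumptions; ext2/3/4 labels are limited to 16 characters):

  # set a filesystem label on the big storage
  e2label /dev/sdc1 home

  # /etc/fstab entry by label
  LABEL=home  /home/  ext4  defaults  1 2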
Hi,
So sd[ab] are two 300 GB disks, and sdc is the big storage.
I think this is not so uncommon. In my experience it has to do with the initialization of PCIe controller cards. I assume you are using a RAID controller, right? I have seen this with Areca RAID controllers in the past, as well as with LSI controllers these days.
I am using UUIDs for that purpose. Not really convenient, but safe. You probably won't have to change it once it is running, so the unwieldy UUID designators are no big issue.
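As an illustration of the controller-ordering point, udev's by-path names encode the PCI address of the controller, so they stay stable even when the sd* names move around (a sketch):

  # shows which sd* device currently sits behind which controller/port
  ls -l /dev/disk/by-path/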
best regards
---
Michael Schumacher
PAMAS Partikelmess- und Analysesysteme GmbH
Dieselstr.10, D-71277 Rutesheim
Tel +49-7152-99630  Fax +49-7152-996333
Geschäftsführer: Gerhard Schreck
Handelsregister B Stuttgart HRB 252024