CentOS boot problem
Garg
Purveyor of Lincoln Nightmares Icrontian
So, I became a server admin when I started my new job and found a server on my desk. We had to power it down to move it, and now on boot, it's complaining about a file system check (see attached).
/dev/sde was the 3TB external USB hard drive, but now it seems to be /dev/sdc. Here's what the devices looked like before powering down:
$ df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_cuuats-lv_root   50G   18G   32G  36% /
tmpfs                           12G  460K   12G   1% /dev/shm
/dev/sda3                      485M  135M  325M  30% /boot
/dev/mapper/vg_cuuats-lv_home   58G  966M   54G   2% /home
/dev/sdb2                      1.8T   19G  1.6T   2% /media/cuuats.data2
/dev/sdb1                      1.9T  195M  1.8T   1% /media/cuuats.data
/dev/sdc                        50G   18G   32G  36% /
/dev/sdd                        50G   18G   32G  36% /
/dev/sde                        50G   18G   32G  36% /
/dev/sde                       2.7T  2.7T     0 100% /media/extBU

Running e2fsck on /dev/sde just repeats the "no such file or directory" error.
Hitting Ctrl-D to boot results in a boot loop during the CentOS loading bar.
Halp?
Comments
Might check its configuration and see if anything is relying on /dev/sde for boot files. If so, update the path to /dev/sdc if that's where the 3TB is being mounted now.
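A quick way to check, sketched here with file locations from a typical CentOS 6-style install (your config paths may differ):

```shell
# Search common config locations for hard-coded references to the
# old device name; any hit is a candidate for updating to /dev/sdc
grep -n '/dev/sde' /etc/fstab /boot/grub/grub.conf 2>/dev/null
```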
First problem: you shouldn't have four separate devices mounted at the same mount point (/).
As for dealing with the fsck error: if that drive doesn't contain anything critical to the boot process, you can boot into single-user mode and comment out the drive's entry in /etc/fstab so it won't be mounted at boot. That should at least get you into the system so you can troubleshoot further.
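Roughly, the steps might look like this (assuming the offending fstab entry starts with /dev/sde; adjust the device name to match what's actually in your fstab):

```shell
# After booting into single-user mode (append "single" to the kernel
# line at the GRUB menu), the root filesystem may come up read-only:
mount -o remount,rw /

# Comment out the external drive's fstab entry so it is skipped at boot
sed -i 's|^/dev/sde|#/dev/sde|' /etc/fstab

reboot
```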
If /dev/sde (or whatever it should or ends up being) is an external drive AND it contains essential boot files, whoever set that up needs to be shot.
Failing that, as Ardi said - Get into single-user mode, zap the drive's entry in /etc/fstab, and reboot. It should then boot cleanly and you should be able to mess around with the external drive manually.
I would also recommend NOT having external drives set to auto in /etc/fstab for this very reason. If the drive drops out or another device somehow steals that drive's assignment (like a USB key that got plugged in while the drive was disconnected, etc), then you're going to run into a similar situation every time you need to boot.
If it's essential data that must be available the entire time the server is running, then the drive needs to be plugged directly into the disk controller and live inside the case.
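If the drive does stay in fstab, one common way to make the entry survive device-name reshuffles is to key it on the filesystem UUID instead of /dev/sdX, and mark it noauto/nofail. A sketch, with a placeholder UUID you'd replace with the real one from blkid:

```
# Look up the filesystem UUID (run as root; your output will differ):
#   blkid /dev/sde
#   /dev/sde: UUID="0123abcd-..." TYPE="ext4"

# /etc/fstab entry -- noauto skips mounting at boot, nofail keeps a
# missing drive from blocking the boot sequence:
UUID=<your-uuid>  /media/extBU  ext4  noauto,nofail  0  0
```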
If I run df -h at the root console after the e2fsck error, the only partitions mounted are those three and the root partition. Absolutely. But that's the weird thing - it's only for backup, doesn't have any boot files, and the SOP is to only have it mounted during a backup. I'm not sure why it would even be in fstab, except I read something saying that non-root users can only mount if the drive is in fstab.
Anyway, for now, I removed the /dev/sde entry from fstab. Since the root login (single-user mode?) mounted the filesystem read-only, I ran mount -o remount,rw /, and then was able to edit fstab and boot.
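On the non-root mounting bit: that behavior comes from the user (or users) mount option in fstab, so the entry can stay listed without auto-mounting at boot. A sketch, assuming the backup drive now shows up as /dev/sdc with an ext4 filesystem:

```
# /etc/fstab -- "noauto" keeps it from mounting at boot; "user" lets a
# non-root user run `mount /media/extBU` before a backup and unmount after
/dev/sdc  /media/extBU  ext4  noauto,user  0  0
```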
Looking deeper, I'm not sure how you were ever able to mount the external drive since it appears to have the same device name as one of the disks in the RAID group, which SHOULD be impossible.
I'm going to guess that the server hasn't gone down in a LONG time, or had the external drive disconnected/unmounted before shutting down/starting up every time.
Everything seems to be good over here, now. Thanks guys!
No trace of those three 50GB drives now, after rebooting.