Hi,
I'm looking to put together a doc for the wiki.c.o on how to secure a remotely hosted machine. It's a situation that many of us find ourselves in, wherein we either lease or colo a server (or many) and there is always the issue of remote hands, other facility users etc. being able to get physical access to the machines. So what are the usual steps that people take in order to secure their remote-hosted servers?
A short list of things that I tend to always do is:
- disable all gettys
- make grub boot immediately, with no user interrupt possible (a rough sketch of the grub and console bits follows this list)
- put sensitive data on a locally encrypted disk
- plumb in a BIOS password
- have all console output redirected to an iLO / DRAC / IPMI 2.0 device if there is one of those - if not, then redirect the output to a non-existent ttySX port (isn't ideal!)
- disable all telnet and http/https access to the iLO / DRAC interfaces, and ensure IPMI is secured.
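To make the grub and console items concrete, a minimal sketch of what I mean (this assumes GRUB legacy and a CentOS-5-style /etc/inittab; the unit number, speed, kernel version and tty device are only examples):

  # /boot/grub/grub.conf -- boot the default entry immediately, no menu
  timeout=0
  hiddenmenu
  # send the GRUB menu and kernel console out the serial port the iLO/DRAC emulates
  serial --unit=0 --speed=115200
  terminal --timeout=0 serial
  # on the kernel line of the boot stanza, append the console= parameter, e.g.:
  kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200

  # /etc/inittab -- comment out the virtual-console gettys
  #1:2345:respawn:/sbin/mingetty tty1
  #2:2345:respawn:/sbin/mingetty tty2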
What other, reasonable, steps should one consider ?
The end result, of course, is to still have the option of handing passwords etc. to the DC ops should there be a need to actually work on the machine remotely, so removing the keyboard and display interfaces might not be desirable.
- KB
I'm looking to put together a doc for the wiki.c.o on how to secure a remotely hosted machine. There is always the issue of remote hands, other facility users etc. being able to get physical access to the machines.
1: Rebuild the kernel to remove local KVM (Keyboard Video Mouse) support and run headless; the only access is via ssh. 2: Log-ins through the firewall allowed only from approved IPs/MACs, regardless of possession of the correct password (a rough iptables sketch follows below).
3: When you first build the system, ghost/image the boot/root/usr (bru) drive onto a spare backup, verify the backup boots the machine the same as the main drive. 4: have the backup bru drive mailed to you, dupe it, and rsync the remote bru to your local copy whenever you make a change to the remote bru. 5: In the event of fire, vandalism, or other urgent cause, your cluster can appear on a new server rapidly. Just FedEx ghosts of your locally stored bru drive rsynced from what were your remote machines, and (on similar hardware) they should turn-key boot and run.
6: Repeat 3-5 with any mission-critical applications (think of the bru drive as "flight critical"; this is the "not flight critical but mission critical" stuff).
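For point 2, one hedged way to do the firewall part with iptables (203.0.113.10 stands in for your approved admin address; note that MAC matching with -m mac --mac-source only works when the admin machine is on the same layer-2 segment as the server, so for truly remote admins you are really filtering on source IP):

  # allow ssh only from the approved address, drop it from everywhere else
  iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -j DROP
  service iptables save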
The end result, of course, is to still have the option of handing passwords etc. to the DC ops should there be a need to actually work on the machine remotely, so removing the keyboard and display interfaces might not be desirable.
Linux install disks in 'rescue' mode have sufficient terminal handling in their kernel that the running system doesn't need more than ssh. You have more to worry about from the competence and trustworthiness of the host's employees than you might think. So: the remote system doesn't need KVM (Keyboard Video Mouse), just ssh.
Headless embedded systems work fine this way... ssh only until ssh fails, then swap out the bru drive (an rsync'd spare is on-site or with the remote support personnel we send out) and mail me the junker; it gets installed as sdb on another system and operated on until 'why did it die' is discovered and corrected. Then it gets mailed back and becomes the on-site hot-spare that gets rsync'd whenever the running system changes.
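For what it's worth, refreshing that on-site hot-spare can be as simple as something along these lines (a sketch only; it assumes everything lives on one filesystem, the spare is already partitioned and formatted as /dev/sdb1, and the boot loader gets reinstalled afterwards):

  mount /dev/sdb1 /mnt/spare
  # -x stays on the root filesystem, so /proc, /sys and the udev /dev are skipped
  rsync -aHx --delete / /mnt/spare/
  grub-install --root-directory=/mnt/spare /dev/sdb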
On 8/20/2010 9:55 AM, Brunner, Brian T. wrote:
3: When you first build the system, ghost/image the boot/root/usr (bru) drive onto a spare backup, verify the backup boots the machine the same as the main drive. 4: have the backup bru drive mailed to you, dupe it, and rsync the remote bru to your local copy whenever you make a change to the remote bru.
This part tends to be problematic when the system is remote and you need hands-on access for the install. It would be much nicer to build locally and ship the initial drives.
5: In the event of fire, vandalism, or other urgent cause, your cluster can appear on a new server rapidly. Just FedEx ghosts of your locally stored bru drive rsynced from what were your remote machines, and (on similar hardware) they should turn-key boot and run.
Try it - you won't like it. If the MAC addresses of the NICs don't match what is configured, the network won't come up. Have fun with that when you've broken the local keyboard/monitor. I ship clonezilla-copied drives around fairly often, but bringing them up always involves local operators who know their way around linux enough to get the right IPs assigned to the right interfaces. I suppose if I had a dhcp server on all the destination networks I could watch for the IP they give out, then connect and change it, but that's not very convenient either, so sometimes I end up shipping whole servers around.
3: When you first build the system, ghost/image the boot/root/usr (bru) drive onto a spare backup, verify the backup boots the machine the same as the main drive. 4: have the backup bru drive mailed to you, dupe it, and rsync the remote bru to your local copy whenever you make a change to the remote bru.
This part tends to be problematic when the system is remote and you need hands-on access for the install. It would be much nicer to build locally and ship the initial drives.
Build-local-and-ship > build-remote! Agreed! I'm not assuming the remote personnel are competent in Linux. I think you are. I suggest that "remote hosting security" howtos address the question of "what can we trust the remote people to do right?" We had one where the only thing we could trust the remote folks to do was cash a check.
My comments were thinking "if you don't own the remote server hardware but can own/update identical disk drives"
5: In the event of fire, vandalism, or other urgent cause, your cluster can appear on a new server rapidly. Just FedEx ghosts of your locally stored bru drive rsynced from what were your remote machines, and (on similar hardware) they should turn-key boot
and run.
Try it - you won't like it. If the MAC addresses of the NICs don't match what is configured, the network won't come up. Have fun with that when you've broken the local keyboard/monitor.
I do try it (as INfrequently as possible), it's to address situations that nobody likes. Phone calls to establish IP/MAC numbers take care of most problems. When remote people can't be bothered to know how to spell IP, like on some offshore oil drilling rigs, we send out a disk-installer.
I didn't point out the details of the unknown.
Fire might or might not move the machines to a new ISP or IP; vandalism ditto; "other urgent causes" has (in my experience) been remote operators who could not be relied on, and therefore a new remote host was necessary. In some cases the old IPs work but new MACs are needed; in other cases both change or neither changes. So editing your rsync'd drives before shipping them to the (new?) ISP/IP site may be necessary.
If you can trust your remote operators (more than I can) then remote KVM is an affordable risk.
On 8/20/2010 10:44 AM, Brunner, Brian T. wrote:
3: When you first build the system, ghost/image the boot/root/usr (bru) drive onto a spare backup, verify the backup boots the machine the same as the main drive. 4: have the backup bru drive mailed to you, dupe it, and rsync the remote bru to your local copy whenever you make a change to the remote bru.
This part tends to be problematic when the system is remote and you need hands-on access for the install. It would be much nicer to build locally and ship the initial drives.
Build-local-and-ship > build-remote! Agreed! I'm not assuming the remote personnel are competent in Linux. I think you are. I suggest that "remote hosting security" howtos address the question of "what can we trust the remote people to do right?" We had one where the only thing we could trust the remote folks to do was cash a check.
The bulk of our machines are in two large data centers of our own. The people there are trusted and competent enough in linux to assign IPs and help troubleshoot. A smaller number are in hosted data centers, but almost none of them are linux and we tend to handle one-off problems there by shipping whole configured servers to swap.
My comments were thinking "if you don't own the remote server hardware but can own/update identical disk drives"
If you have a large number of servers you probably have something identical locally. For a smaller number, it is probably time to be using some flavor of virtual machine or cloud service to eliminate the issues of hardware differences. In that case, copying the images around is even easier than cloning drives.
5: In the event of fire, vandalism, or other urgent cause, your cluster can appear on a new server rapidly. Just FedEx ghosts of your locally stored bru drive rsynced from what were your remote machines, and (on similar hardware) they should turn-key boot
and run.
Try it - you won't like it. If the MAC addresses of the NICs don't match what is configured, the network won't come up. Have fun with that when you've broken the local keyboard/monitor.
I do try it (as INfrequently as possible), it's to address situations that nobody likes. Phone calls to establish IP/MAC numbers take care of most problems. When remote people can't be bothered to know how to spell IP, like on some offshore oil drilling rigs, we send out a disk-installer.
I didn't point out the details of the unknown.
I thought you disabled all access except ssh - which would make it a big problem if the network doesn't start.
Fire might or might not move the machines to a new ISP or IP; Vandalism ditto; "Other urgent causes" has (in my experience) been remote operators who could not be relied on and therefore a new remote host was necessary. In some cases the old IPs work, new MACs are needed, in other cases both change or neither changes. So editing of your rsync'd drives before shipping them to the (new?) ISP/IP site is maybe necessary.
Needing new IPs is a different issue than the one I mentioned. If you have a HWADDR specified in your ifcfg-eth? file, it won't come up if you move the disk to a different machine where it doesn't match. If you remove the HWADDR line and have more than one interface, they are unlikely to be assigned in the right order. In some cases I've tried to edit in the right values, but as the machine came up it renamed all of the ifcfg-eth? files with a .bak extension and created new ones using dhcp.
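For anyone following along, the file in question looks roughly like this (values are placeholders); the HWADDR line is what ties the config to one particular NIC:

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  HWADDR=00:16:3E:12:34:56   # must match the NIC in the new chassis, or the interface stays down
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0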
If you can trust your remote operators (more than I can) then remote KVM is an affordable risk.
So far it hasn't been enough of a problem to be worth a huge effort to solve, but I did prefer the predictable NIC assignment of the 2.4 kernel. And in a lab setting where things change often, I have someone install VMware ESXi and assign it one IP address, which is pretty simple. Then even with the free version I can do everything else remotely, including installs or conversions of guest images.
On Fri, Aug 20, 2010 at 11:21:49AM -0500, Les Mikesell wrote:
In some cases I've tried to edit in the right values but as the machine came up it renamed all of the ifcfg-eth? files with a .bak extension and created new ones using dhcp.
Since you're bringing that up - I've seen it too - what does that and how do we drive a stake through its heart?
Whit
On 08/20/2010 05:26 PM, Whit Blauvelt wrote:
In some cases I've tried to edit in the right values but as the machine came up it renamed all of the ifcfg-eth? files with a .bak extension and created new ones using dhcp.
Since you're bringing that up - I've seen it too - what does that and how do we drive a stake through its heart?
Stop running kudzu on boot, and fix the PCI address order on the kernel boot line.
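Concretely, that would be something like the following (the kernel line is only an example; pci=bfsort is the breadth-first PCI ordering option the RHEL kernels accept, and whether it gives the order you want depends on the hardware):

  chkconfig kudzu off        # stop the boot-time hardware probe that rewrites the ifcfg files
  # and in /boot/grub/grub.conf, append to the kernel line, e.g.:
  kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 pci=bfsort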
On 8/20/2010 11:26 AM, Whit Blauvelt wrote:
On Fri, Aug 20, 2010 at 11:21:49AM -0500, Les Mikesell wrote:
In some cases I've tried to edit in the right values but as the machine came up it renamed all of the ifcfg-eth? files with a .bak extension and created new ones using dhcp.
Since you're bringing that up - I've seen it too - what does that and how do we drive a stake through its heart?
Yes, if anyone knows where this is documented, please point it out. I think it is extremely important to know that you can revive your backups on replacement hardware and so far this seems mostly like black magic.
On 08/20/2010 03:55 PM, Brunner, Brian T. wrote:
1: Rebuild kernel to remove local KVM (Keyboard Video Mouse), run headless; the only access is via ssh.
That isn't going to help if the network card is dead. I don't want the machine shipped back to me just to look at it :)
3: When you first build the system, ghost/image the boot/root/usr (bru) drive onto a spare backup, verify the backup boots the machine the same as the main drive. 4: have the backup bru drive mailed to you, dupe it, and rsync the remote bru to your local copy whenever you make a change to the remote bru. 5: In the event of fire, vandalism, or other urgent cause, your cluster can appear on a new server rapidly. Just FedEx ghosts of your locally stored bru drive rsynced from what were your remote machines, and (on similar hardware) they should turn-key boot and run.
Points 3 - 5 are a bit academic, and very site specific. For my setup, it takes less time to rebuild the machine with the installer and have the config management system and job queue system restore a box's 'role' than to use ghosting policies. E.g. a bare-metal install is ~5 min from a local cobbler setup, which can also trigger a puppet run, which usually does the system state rebuild in about 15 - 18 minutes. Data needs restoring, but that will come from the backup machine.
With rapid provisioning where it is, I don't think ghosting is worth the extra aggro.
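Roughly the kind of workflow described above, heavily abbreviated (the hostname, profile name, MAC and puppet master are made up, and the exact cobbler options vary by version):

  # register the box so it PXE-installs the right kickstart profile
  cobbler system add --name=web01 --profile=CentOS-5-x86_64 --mac=00:16:3E:AA:BB:CC
  cobbler sync
  # once the install finishes, a puppet run pulls the box's 'role' back in
  puppetd --test --server puppet.example.com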
- KB
On Friday 20 August 2010 10:55, Brunner, Brian T. wrote:
2: Log-ins through firewall allowed only from approved IPs/MACs regardless of possession of correct password.
One can never guarantee that they will be at the approved IP/MAC address when issues arise. For this reason I would use SSH keys for access to the machine. I would also move the port to something other than the default port and block 22 at the firewall. After that I would run something like fail2ban and drop any IP address that fails to log in on the new port, should that port be discovered by unauthorized persons.
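The sshd side of that, as a sketch (2222 is just an example port; pick your own, and make sure your key actually works before turning password auth off):

  # /etc/ssh/sshd_config (excerpt) -- key-only logins on a non-default port
  Port 2222
  PermitRootLogin no
  PasswordAuthentication no
  PubkeyAuthentication yes

fail2ban's ssh jail then just needs to be pointed at the new port (and at /var/log/secure on CentOS).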
2010/8/22 Robert Spangler mlists@zoominternet.net:
On Friday 20 August 2010 10:55, Brunner, Brian T. wrote:
2: Log-ins through firewall allowed only from approved IPs/MACs regardless of possession of correct password.
One can never guarantee that they will be at the approved IP/MAC address when issues arise. For this reason I would use SSH keys for access to the machine. I would also move the port to something other than the default port and block 22 at the firewall. After that I would run something like fail2ban and drop any IP address that fails to log in on the new port, should that port be discovered by unauthorized persons.
Read the CIS Red Hat tuning manual; it is really good.
-- Eero
On Fri, Aug 20, 2010 at 10:18 AM, Karanbir Singh mail-lists@karan.org wrote:
A short list of things that I tend to always do is :
Decent list. I might also consider disabling various unused or unnecessary hardware kernel modules like usb-storage or other things that might lend themselves to snooping by remote hands.
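A hedged example of keeping usb-storage out on CentOS 5 (the filename is arbitrary; the 'install' line is the important one, since plain blacklisting only stops auto-loading by alias):

  # /etc/modprobe.d/local-blacklist
  blacklist usb-storage
  install usb-storage /bin/true   # an explicit modprobe runs /bin/true instead of loading the module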
On 08/20/2010 03:59 PM, Jim Perrin wrote:
Decent list. I might also consider disabling various unused or unnecessary hardware kernel modules like usb-storage or other things that might lend themselves to snooping by remote hands.
That's a good point. hald might be worth turning off completely too.
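On CentOS 5 that would be along the lines of (service name assumed to be haldaemon):

  chkconfig haldaemon off
  service haldaemon stop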
- KB
From: Karanbir Singh mail-lists@karan.org
What other, reasonable, steps should one consider ?
I think you can password-protect the interactive editing in grub (a sketch follows below). Did not see:
- Disable USB booting (and CD, if there is a CD/DVD drive).
- Lock the server case (if there is a lock).
- Enable case intrusion detection (if available).
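For the grub part, a sketch assuming GRUB legacy (the hash below is a placeholder; generate your own with grub-md5-crypt):

  # run grub-md5-crypt and paste the result into /boot/grub/grub.conf,
  # above the first "title" line:
  password --md5 $1$examplesalt$xxxxxxxxxxxxxxxxxxxxxx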
JD