[CentOS] One server not showing SSH port, the other is.

Wed Oct 13 13:42:29 UTC 2010
Lamar Owen <lowen at pari.edu>

On Tuesday, October 12, 2010 06:56:27 pm Phil Schaffner wrote:
> Ryan Manikowski wrote on 10/11/2010 07:49 PM:
> > http://dotancohen.com/howto/portknocking.html

> Somehow I suspect the OP may have seen that one. :-D

Yeah, nothing quite like being directed to a howto you wrote yourself (flattered and insulted at the same time!). I can remember a few cases of that back when I was maintaining the PostgreSQL RPM set (and the README for the rpm packaging): I'd post advice to folks about the way the RPMs work, then get private e-mail back pointing me to my own README. And, honestly, there was once when the advice I gave in the post and the advice I gave in the README differed, and I corrected the README for the next release.

As to the ssh business: if a full nmap scan doesn't see things differently, and ssh connections are still working properly from clients, I have no idea how that could be accomplished on the host itself (barring an IDS), other than perhaps a hosts.allow or hosts.deny difference, assuming TCP wrappers are set up for sshd.  The other thing I would check is that the SELinux policy actually allows sshd to bind the port, but if that were the problem, ssh connections wouldn't work at all.
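A quick way to compare those host-side angles on both servers might look like the following sketch (standard CentOS paths; `semanage` comes from the policycoreutils tools and may not be installed everywhere, hence the guards):

```shell
# Compare TCP-wrappers entries for sshd on both boxes (a file being
# absent or empty is itself a difference worth noting).
for f in /etc/hosts.allow /etc/hosts.deny; do
    echo "== $f =="
    if [ -f "$f" ]; then
        grep -i sshd "$f" || echo "(no sshd entries)"
    else
        echo "(file missing)"
    fi
done

# SELinux mode, and the ports sshd is allowed to bind (needs semanage).
command -v getenforce >/dev/null && getenforce
command -v semanage >/dev/null && semanage port -l | grep ssh_port_t
```

If the two servers' output differs anywhere here, that's the first thread to pull on.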

That is, if the ssh client can connect, so can nmap, unless you're doing something proactive and IDS-like (perhaps a PacketFence box inline, or a similar automatic intrusion detection mechanism activating for one server but not the other).  Even a small Cisco router with the IDS feature set can do this sort of thing on a port-by-port basis, so the nmap scan might be getting blocked that way; if so, it should be logged somewhere.

With Dotan's background and experience, and the statement that iptables is turned off, I would make an educated assumption that there is another firewall somewhere, which is what allows iptables to be turned off temporarily for testing; perhaps that firewall is not set up to pass both of the ports?
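Along those lines, it's worth double-checking that "iptables off" really means empty tables on the host (run as root; a leftover nat-table REDIRECT, for instance, can quietly move a port even when the filter table is empty):

```shell
# filter table: with the firewall off you should see three empty
# chains with policy ACCEPT and no rules.
iptables -L -n -v

# nat and mangle tables: REDIRECT/DNAT rules here can rewrite ports
# even when the filter table is clean.
iptables -t nat -L -n
iptables -t mangle -L -n

# CentOS's own view of the firewall service state.
service iptables status
```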

These days one can't assume that the network is transparent, even between switch ports: packet-level ACLs on the data plane have been available in switches for 12 years or more (I have some Cisco Catalyst 8540s that do data-plane ACLs in hardware, and they're that old).  And with VLAN ACLs and deep-inspection firewalling available even in transparent bridging mode (Linux Journal ran a series not long ago about a transparent bridging Linux firewall using, IIRC, OpenWRT on a Linksys), even layer 2 adjacency is no guarantee of transparency.

Then I'd look to make sure the ssh client's outgoing access is allowed on both ports (that sounds obvious, but it sounds like it's time to check the obvious).  Run the client in debug mode and see what it spits out; also run a TCP traceroute equivalent with a configurable destination port against both port numbers to see where blockages might be happening.
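Concretely, something along these lines (the hostname and second port number are made-up examples; `tcptraceroute` is a separate package, and newer traceroute versions offer `traceroute -T -p PORT` instead):

```shell
# Client-side debugging: -vvv shows exactly where the connection
# attempt stalls or gets reset.
ssh -vvv -p 2222 user@server.example.com

# TCP traceroute to each port: a middlebox silently dropping one
# port shows up as the hop where replies stop on that port only.
tcptraceroute server.example.com 22
tcptraceroute server.example.com 2222
```

Comparing the two traceroute runs hop by hop should point at where one port is being eaten.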

When things get this paranoid, my experience has been that fat-fingered port numbers, IP addresses, or access-list numbers or names are the typical culprits in broken connectivity; at least those have typically been my own sources of mysterious network non-transparency.

Dotan, can you share the nmap command-line you're using with the IP range sanitized?
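For reference, a targeted scan of just the two ports in question (hostname and port numbers here are placeholders, not what you're actually running) would look something like:

```shell
# SYN scan of the two ports; -Pn skips the ping step in case ICMP is
# filtered.  -sS needs root; substitute -sT for an unprivileged scan.
nmap -sS -Pn -p 22,2222 server.example.com
```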