Occasional Advisor
Posts: 16
Registered: 10-03-2013
Message 1 of 2

sFTP server setup (Active/Active) mode

Hi All,


I am looking to set up two sFTP servers on Red Hat Enterprise Linux 6.4. The requirements are below.


1) sFTP users would be authenticated via AD

2) sFTP chroot (jail) must be enabled / functional

3) We have to set up 2 sFTP servers; they would be load balanced (round robin) via load balancers, and the requirement is to have BOTH nodes active in the pool.

4) sFTP users will have their own filesystems / mount points where files will be uploaded
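
For reference, the chroot requirement (2) is typically met with OpenSSH's built-in SFTP server and `ChrootDirectory`, with AD users made resolvable locally (e.g. via Winbind or SSSD) for requirement (1). A minimal sketch; the group name "sftpusers" and the `/sftp/%u` path layout are assumptions:

```
# /etc/ssh/sshd_config -- sketch, assuming AD users resolve locally
# (Winbind/SSSD) and belong to a hypothetical AD group "sftpusers"
Subsystem sftp internal-sftp

Match Group sftpusers
    ChrootDirectory /sftp/%u     # must be root-owned and not group/world-writable
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Note that OpenSSH requires the chroot target and every directory above it to be owned by root; the users' writable mount points would sit below it.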



The challenge is that these filesystems are on SAN storage, and we need to be able to present the same SAN filesystems (same mount points) on BOTH nodes, since the two nodes will be active/active behind the load balancer.


Please advise on the best approach to accomplish this.


Appreciate your help.




Honored Contributor
Posts: 6,271
Registered: 12-02-2001
Message 2 of 2

Re: sFTP server setup (Active/Active) mode

One possible solution to the filesystem issue is to make the users' filesystems available on some other host (or a NAS device) as NFS shares, which are then mounted on both sFTP servers (optionally using the automounter).

If there will be a lot of users, this is probably the easiest way to implement the setup.
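
As a sketch of that approach: with the shares exported from a hypothetical NAS host `nas01` under `/export/<user>`, a wildcard automounter map on both sFTP servers could mount each user's share on demand:

```
# /etc/auto.master -- sketch; "nas01" and the paths are assumptions
/sftp   /etc/auto.sftp

# /etc/auto.sftp -- wildcard map: mounts nas01:/export/<user> on first access
*   -fstype=nfs,rw,hard,intr   nas01:/export/&
```

With this map, adding a new user only requires creating the export on the NAS; no per-user configuration is needed on the sFTP servers themselves.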




If the sFTP servers must not depend on any other hosts/NAS devices, then the simplest implementation would be a cluster filesystem (GFS2), which requires setting up a Red Hat Cluster to coordinate cluster filesystem locks. But this will probably be inconvenient if you have more than just a few users: having a large number of GFS2 filesystems may be a waste of system resources.
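
As a rough sketch of the GFS2 route (the cluster name "sftpclu" and the device path are assumptions, and a working two-node Red Hat Cluster with fencing must already be in place):

```
# Create a GFS2 filesystem on a shared SAN LUN, with DLM locking,
# tagged for cluster "sftpclu" and with 2 journals (one per node)
mkfs.gfs2 -p lock_dlm -t sftpclu:users1 -j 2 /dev/mapper/sanvol1

# Mount it on BOTH nodes -- safe only because GFS2 coordinates
# access through the cluster's distributed lock manager
mount -t gfs2 /dev/mapper/sanvol1 /sftp/users1
```

The `-j 2` journal count matters: GFS2 needs one journal per node that will mount the filesystem, so this would need to be increased if more nodes were added.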




An alternative solution would be to set up a Red Hat Cluster on the sFTP servers themselves and configure a highly available NFS service on the cluster. The cluster node currently running the NFS service could be called the "master" node: only it mounts and accesses the SAN disks directly, and it presents a floating IP address for NFS access. Both nodes then use the automounter to mount the users' filesystems over NFS via the floating IP.
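
A rough sketch of what the clustered NFS service might look like in RHEL 6's rgmanager configuration; every name, address, and path here is an assumption for illustration:

```
<!-- Fragment of /etc/cluster/cluster.conf -- sketch only -->
<rm>
  <service name="nfs-sftp" autostart="1" recovery="relocate">
    <!-- Floating IP that both nodes' automounters point at -->
    <ip address="192.0.2.10" monitor_link="1"/>
    <!-- SAN filesystem, mounted only on the current master -->
    <fs name="users1" device="/dev/mapper/sanvol1"
        mountpoint="/export/users1" fstype="ext4"/>
    <nfsexport name="exports">
      <nfsclient name="sftpnodes" target="192.0.2.0/24" options="rw,sync"/>
    </nfsexport>
  </service>
</rm>
```

The automounter maps on both nodes would then reference `192.0.2.10:/export/...` rather than a local device, so neither node cares which one currently holds the master role.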



This is not a particularly elegant solution, but it should be workable: the master role should be switchable from one server to the other without terminating existing sFTP connections, although the switch will cause a delay in NFS file operations. If the switch happens because the previous master node crashed, the new master node will have to run filesystem checks before it can mount the filesystems, which will cause a longer delay.



[If you attempt to mount a regular non-cluster filesystem (ext2/ext3/ext4, etc.) on two or more servers simultaneously, each server will assume it is the only one accessing the filesystem and will cache the filesystem metadata to speed up access. Because of this caching, their views of the filesystem state will diverge as soon as there are any write operations, and the filesystem is then guaranteed to become corrupted.]
