HP9000 Containers and NFS mounts (1611 Views)
Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 1 of 16 (1,611 Views)
Accepted Solution

HP9000 Containers and NFS mounts

I am running HP9000 Containers version A.03.01.  I have the container up and running.  I now need to mount an exported file system from another HPUX server with both read and write permissions.  I can get the mount okay but from the container I cannot read or write properly to the mounted file system.  The NFS server exporting the file system is running HPUX 11.31, here is the dfstab export line:

/usr/sbin/share -F nfs -o nosuid,rw,root=All,window=30000 -d "none" /var/adm/crash -

 

The container /etc/fstab entry is this:

j217uv01.corp.na.jci.com:/var/adm/crash /hpcontainer1 nfs soft,rw,nosuid 0 2

 

When logged into the container as root, I am able to touch a file.  The ownership of this file is odd.  It is:

-rw-r-----   1 -2         sys             21 Jun 22 14:31 stuff4

 

I would have expected the owner to be root.  There are other files in this exported file system owned by root that root, from the container, is unable to read.

 

Does anyone know what I need to change?

Acclaimed Contributor
Dennis Handly
Posts: 25,182
Registered: ‎03-06-2006
Message 2 of 16 (1,590 Views)

Re: HP9000 Containers and NFS mounts

The UID of -2 means that "root is less than dirt".  Do you have this machine in your root=All netgroup?

Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 3 of 16 (1,580 Views)

Re: HP9000 Containers and NFS mounts

Your suggestion helped.  I did some digging and found that the NFS server authenticates root from the NFS client as nobody.  Not sure why.  I added "anon=0" to the /etc/dfs/dfstab on the NFS server.  This has now fixed the problem.  Not sure if it is the best way, but for now this allows root to read and write to the NFS mounted file system.  Thank you Dennis.
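For reference, the anon=0 change is a one-option edit to the export line in /etc/dfs/dfstab, followed by a re-export (the line mirrors the one earlier in this thread; anon=0 maps unauthenticated/anonymous requests to UID 0):

```shell
# /etc/dfs/dfstab on the NFS server, with anon=0 added
/usr/sbin/share -F nfs -o nosuid,anon=0,rw,root=All,window=30000 -d "none" /var/adm/crash -

# re-export everything listed in dfstab
/usr/sbin/shareall -F nfs
```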

Honored Contributor
Matti_Kurkela
Posts: 6,271
Registered: ‎12-02-2001
Message 4 of 16 (1,565 Views)

Re: HP9000 Containers and NFS mounts

If root of the NFS client is not "less than dirt", an intruder who has managed to gain root access on one NFS client can easily get root access on the NFS server (and all the other NFS clients) too, by placing a suitable script or binary on the NFS share, giving it setuid-root permissions, and tricking someone (or something) on the server into running it.

 

Since the intruder is root on the NFS client, he can write an executable to a writeable NFS share, set its ownership to root and permissions to e.g. 4755.

 

This is a very old attack plan for NFS environments. Every NFS server administrator should understand how it works, in order to correctly evaluate the need for countermeasures like the "the NFS client's root is less than dirt on the NFS server" setting.
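The mechanics MK describes can be illustrated locally on a scratch directory (hypothetical filename; run on a throwaway filesystem). On an NFS export that does not squash root, a client-side root could do exactly this and the server would honor both the ownership and the mode bits:

```shell
# Local illustration of the setuid mechanics (hypothetical path/name).
dir=$(mktemp -d)
printf '#!/bin/sh\nid\n' > "$dir/backdoor"
chmod 4755 "$dir/backdoor"     # setuid bit plus world read/execute
ls -l "$dir/backdoor"          # mode reads -rwsr-xr-x
```

The setuid bit only becomes dangerous when the file's owner is root and the filesystem honors it, which is exactly what root squashing and the nosuid mount option prevent.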

MK
Acclaimed Contributor
Dennis Handly
Posts: 25,182
Registered: ‎03-06-2006
Message 5 of 16 (1,549 Views)

Re: HP9000 Containers and NFS mounts

>I added "anon=0"

 

My suggestion was for you to fix the root=, not necessarily the anon=.  Now any client root (or unknown) user can be root on the NFS filesystem.

Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 6 of 16 (1,541 Views)

Re: HP9000 Containers and NFS mounts

I did try the root=hostname alone prior to adding anon=0, and that seemed to have no effect.

 

Currently my dfstab has the following:

 

/usr/sbin/share -F nfs -o nosuid,anon=0,rw,root=j217u019c3,window=30000 -d "none" /var/adm/crash -

 

It is only when I added "anon=0" that I was allowed to read files in the nfs mount file system as root on the nfs client.

Acclaimed Contributor
Dennis Handly
Posts: 25,182
Registered: ‎03-06-2006
Message 7 of 16 (1,536 Views)

Re: HP9000 Containers and NFS mounts

>I did try the root=hostname alone prior to adding anon=0 and that seemed to have no affect.

>root=j217u019c3,

 

You may have to use the FQDN, and you may want to add it to what you already have: root=All,FQDN
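As a sketch (hostnames from this thread; if memory serves, share(1M) access lists are colon-separated, so "add the FQDN to root=All" would look like this, assuming All is an existing netgroup):

```shell
/usr/sbin/share -F nfs -o nosuid,rw,root=All:j217u019c3.corp.na.jci.com,window=30000 -d "none" /var/adm/crash -
```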

Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 8 of 16 (1,515 Views)

Re: HP9000 Containers and NFS mounts

I made the change as follows on the NFS server dfstab:

 

/usr/sbin/share -F nfs -o nosuid,rw,root=j217u019c3.corp.na.jci.com,window=30000 -d "none" /var/adm/crash -

 

I took out "anon=0" and added the domain to hostname j217u019c3.

This has now allowed user root to read already existing files owned by root.  The ability to create a file by root is successful but the owner of the newly created file is again "-2" instead of "root".  User root cannot write to existing files owned by root.

 

 

Acclaimed Contributor
Dennis Handly
Posts: 25,182
Registered: ‎03-06-2006
Message 9 of 16 (1,497 Views)

Re: HP9000 Containers and NFS mounts

>The ability to create a file by root is successful but the owner of the newly created file is again "-2" instead of "root".

 

Then root= isn't working.

Can you do: /usr/sbin/showmount -e -a name-of-server

Or do it on your server without the name.

Perhaps the name of your client isn't exactly what you have in root=?

Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 10 of 16 (1,487 Views)

Re: HP9000 Containers and NFS mounts

You have another BINGO.  The HP Container has two network cards configured as follows:

 

lan2:1    1500 10.11.5.0       10.11.5.63      63538338           0     47660597           0     0   
lan901:1  1500 10.11.16.128    10.11.16.138    277188             0     62253              0     0

 

The primary network for the container is lan901:1 (j217u019c3.corp.na.jci.com), which is an APA configuration; lan2:1 (j217u019b.corp.na.jci.com) is a tape-backup network that is not an APA configuration.  I ran showmount on the NFS server, and it reports that the hostname for lan2:1 is what mounted the share.  In the dfstab, I replaced the primary hostname with the tape-backup hostname, re-exported, re-mounted, and now when I create a file it knows root created it.  I can edit other root-owned files in that NFS file system now as well.  I can leave it this way since it works, but I can't help wondering why it is doing this.
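For anyone hitting the same symptom, a hedged way to check which client name the server sees is to list the active mounts on the server and reverse-resolve both container addresses (IPs and names from this thread; the name that resolves from the source address of the mount is what must appear in root=):

```shell
# On the NFS server: which client hostname is recorded for the mount?
/usr/sbin/showmount -a

# Reverse-resolve both container addresses to see which names the
# server can match against root=
nslookup 10.11.16.138    # primary (lan901:1)
nslookup 10.11.5.63      # backup LAN (lan2:1)
```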

 

 

HP Pro
Doug_Lamoureux
Posts: 11
Registered: ‎11-30-2011
Message 11 of 16 (1,477 Views)

Re: HP9000 Containers and NFS mounts

Check your routing configuration for the container.  If the NFS server is not on the same subnet as your primary interface it may be using the default route which could be set using the secondary interface.   From the global execute:

 

# srp -v -l <container> -s network

 

for example (the ROUTE_DESTINATION="default" entry marks the default route):

 

srp -v -l hp9ksys -s network

Name: hp9ksys Template: hp9000sys Service: network ID: 1
----------------------------------------------------------------------

Compartment Configuration (/etc/cmpt/hp9ksys.rules):
// owns the IP address
interface 192.1.1.111

Netconf Configuration:
INTERFACE_NAME="lan23:7"
INTERFACE_SKIP="true"
IP_ADDRESS="192.1.1.111"
TYPE="ipv4"
SUBNET_MASK="255.255.255.0"
INTERFACE_STATE="up"
BROADCAST_ADDRESS=""
DHCP_ENABLE="0"
INTERFACE_MODULES=""
CMGR_TAG="compartment="hp9ksys" template="hp9000sys" service="network" id="1""
ROUTE_DESTINATION="default"
ROUTE_SKIP="true"
ROUTE_MASK=""
ROUTE_GATEWAY="192.1.1.1"
ROUTE_COUNT="1"
ROUTE_ARGS=""
ROUTE_SOURCE="192.1.1.111"
ROUTE_PARAMS=""

Name: hp9ksys Template: hp9000sys Service: network ID: 4
----------------------------------------------------------------------

Compartment Configuration (/etc/cmpt/hp9ksys.rules):
// owns the IP address
interface 194.1.1.56

Netconf Configuration:
INTERFACE_NAME="lan1"
INTERFACE_SKIP="true"
IP_ADDRESS="194.1.1.56"
TYPE="ipv4"
SUBNET_MASK="255.255.255.0"
INTERFACE_STATE="up"
BROADCAST_ADDRESS=""
DHCP_ENABLE="0"
INTERFACE_MODULES=""
CMGR_TAG="compartment="hp9ksys" template="hp9000sys" service="network" id="4""

 

 

 

Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 12 of 16 (1,473 Views)

Re: HP9000 Containers and NFS mounts

From the container host (global) server I executed the command, here is the output.  IP 10.11.16.138 is the primary container IP.

 

j217u019# srp -v -l hpc1 -s network

Name: hpc1  Template: hp9000sys Service: network ID: 1
----------------------------------------------------------------------

Compartment Configuration (/etc/cmpt/hpc1.rules):
// owns the IP address
interface       10.11.16.138  

Netconf Configuration:
INTERFACE_NAME="lan901:1"
INTERFACE_SKIP="true"
IP_ADDRESS="10.11.16.138"
TYPE="ipv4"
SUBNET_MASK="255.255.255.128"
INTERFACE_STATE="up"
BROADCAST_ADDRESS=""
DHCP_ENABLE="0"
INTERFACE_MODULES=""
CMGR_TAG="compartment="hpc1" template="hp9000sys" service="network" id="1""
ROUTE_DESTINATION="default"
ROUTE_SKIP="true"
ROUTE_MASK=""
ROUTE_GATEWAY="10.11.16.131"
ROUTE_COUNT="1"
ROUTE_ARGS=""
ROUTE_SOURCE="10.11.16.138"
ROUTE_PARAMS=""

Name: hpc1  Template: hp9000sys Service: network ID: 2
----------------------------------------------------------------------

Compartment Configuration (/etc/cmpt/hpc1.rules):
// owns the IP address
interface       10.11.5.63  

Netconf Configuration:
INTERFACE_NAME="lan2:1"
INTERFACE_SKIP="true"
IP_ADDRESS="10.11.5.63"
TYPE="ipv4"
SUBNET_MASK="255.255.255.0"
INTERFACE_STATE="up"
BROADCAST_ADDRESS=""
DHCP_ENABLE="0"
INTERFACE_MODULES=""
CMGR_TAG="compartment="hpc1" template="hp9000sys" service="network" id="2""
ROUTE_DESTINATION="default"
ROUTE_SKIP="true"
ROUTE_MASK=""
ROUTE_GATEWAY="10.11.5.1"
ROUTE_COUNT="1"
ROUTE_ARGS=""
ROUTE_SOURCE="10.11.5.63"
ROUTE_PARAMS=""


HP Pro
Doug_Lamoureux
Posts: 11
Registered: ‎11-30-2011
Message 13 of 16 (1,467 Views)

Re: HP9000 Containers and NFS mounts

You have 2 default gateways set for the container, one on each interface.  Try removing the default gateway for the backup network; I assume you only want to use that network for backups.
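A quick check from inside the container is to list the routing table; with this configuration the output should contain two "default" entries, one per gateway (shape illustrative, addresses from this thread):

```shell
# inside the container
netstat -rn | grep -i default
# two default entries, one via 10.11.16.131 (lan901:1) and one via
# 10.11.5.1 (lan2:1), confirm the duplicate default gateways
```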

Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 14 of 16 (1,464 Views)

Re: HP9000 Containers and NFS mounts

On the container host server (global), I found the following in its netconf file for this particular interface:

 

ROUTE_DESTINATION[2]="default"
ROUTE_SKIP[2]="true"
ROUTE_MASK[2]=""
ROUTE_GATEWAY[2]="10.11.5.1"
ROUTE_COUNT[2]=1
ROUTE_ARGS[2]=""
ROUTE_SOURCE[2]="10.11.5.63"
ROUTE_PARAMS[2]=""

 

I will remove the "10.11.5.1" from the ROUTE_GATEWAY, do I leave the ROUTE_SOURCE alone?

HP Pro
Doug_Lamoureux
Posts: 11
Registered: ‎11-30-2011
Message 15 of 16 (1,433 Views)

Re: HP9000 Containers and NFS mounts

You should use the srp commands to modify the container network configuration, or use the Container Manager GUI found in SMH.

 

This command should work for your configuration:

 

# srp -b -r hpc1 -s network -id 2 iface=lan2:1 ip_address=10.11.5.63 ip_mask=255.255.255.0

 

Then use "srp -v -l hpc1 -s network -id 2" to verify that there is no default route in the container configuration.


Advisor
JCI IT Unix
Posts: 30
Registered: ‎11-21-1999
Message 16 of 16 (1,403 Views)

Re: HP9000 Containers and NFS mounts

Doug,

Thank you for the help.  That was the issue.  I did not even realize that a default gateway was configured for that LAN card.  I removed it and modified the container.  Just to be certain, I rebooted the host system and restarted the container, and all is well.  This issue is resolved.

 

Thanks to all who contributed.
