03-05-2012 04:41 AM
I want to pipe output from a local command run on my (Linux) workstation to a remote VMS host via ssh, then pipe the output from that command to another local command for further processing. However, if I provide any input at all to the remote VMS host over ssh, it seems to break my connection and not execute the command.
Here's a simple test case involving only the Linux workstation:
echo -e "1\n2\n3" | ssh localhost cat | cat
This produces the expected three lines of output:
1
2
3
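For comparison, the stdin forwarding this test relies on can be exercised entirely locally; `cat` here stands in for the ssh hop, which is effectively all that `ssh localhost cat` adds in the working case:

```shell
# cat relays stdin to stdout, so this mirrors the working three-stage
# pipeline without any ssh connection involved.
printf '1\n2\n3\n' | cat | cat
```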
Here's the result I get when involving the remote VMS node instead:
$ echo -e "1\n2\n3" | ssh dyma 'type sys$input' | cat
****** Unauthorized use prohibited ******
$
That is, I get the login banner and nothing more. All input from the pipeline is discarded and my command isn't even executed (which I verified by replacing the command with "copy nla0: test.txt" -- the file isn't created). Here's an excerpt from 'ssh -vv' using the above example:
Authenticated to dyma ([220.127.116.11]:22).
debug2: fd 4 setting O_NONBLOCK
debug2: fd 5 setting O_NONBLOCK
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: fd 3 setting TCP_NODELAY
debug1: Sending environment.
debug1: Sending env LANG = en_CA.UTF-8
debug2: channel 0: request env confirm 0
debug1: Sending command: type sys$input
debug2: channel 0: request exec confirm 1
debug2: callback done
debug2: channel 0: open confirm rwindow 100000 rmax 32768
debug2: channel 0: read<=0 rfd 4 len 0
debug2: channel 0: read failed
debug2: channel 0: close_read
debug2: channel 0: input open -> drain
debug2: channel 0: ibuf empty
debug2: channel 0: send eof
debug2: channel 0: input drain -> closed
debug2: channel_input_status_confirm: type 99 id 0
debug2: exec request accepted on channel 0
debug2: channel 0: rcvd close
debug2: channel 0: output open -> drain
debug2: channel 0: obuf empty
debug2: channel 0: close_write
debug2: channel 0: output drain -> closed
debug2: channel 0: almost dead
debug2: channel 0: gc: notify user
debug2: channel 0: gc: user detached
debug2: channel 0: send close
debug2: channel 0: is dead
debug2: channel 0: garbage collecting
debug1: channel 0: free: client-session, nchannels 1
debug1: fd 0 clearing O_NONBLOCK
debug1: fd 1 clearing O_NONBLOCK
Transferred: sent 3176, received 7576 bytes, in 0.3 seconds
Bytes per second: sent 11499.2, received 27430.0
debug1: Exit status -1
I don't know how to interpret this. Is "read failed" significant?
I have tried this against two remote VMS nodes, an Alpha and an Itanium. Here is the "tcpip show version" from each:
HP TCP/IP Services for OpenVMS Alpha Version V5.6 - ECO 5 on an AlphaServer DS10 617 MHz running OpenVMS V8.3
HP TCP/IP Services for OpenVMS Industry Standard 64 Version V5.7 - ECO 1 on an HP rx2600 (1.50GHz/6.0MB) running OpenVMS V8.3-1H1
This is driving me nuts. Is there any way to resolve this, or is it simply impossible to involve a VMS host in a local pipeline via ssh?
03-05-2012 04:56 AM
By the way, is this relevant to the problem? Or have I just introduced a different problem into the equation? I tried doing a similar test, but just from the VMS host back to itself:
$ pipe write sys$output 1 | ssh localhost "type sys$input"
dsa0:[sys0.syscommon.][sysexe]tcpip$ssh_ssh2.exe: FATAL: ssh_io_register_fd: fd 3 already registered!
%TCPIP-F-SSH_FATAL, non-specific fatal error condition
03-05-2012 06:03 AM
>>> or is it simply impossible to involve a VMS host in a local pipeline via ssh?
I don't have an answer and I don't play an ssh/TCP/IP expert on TV.
Try an ssh dyma "sh log/proc" and an ssh dyma "type [.ssh2]*.com" and you'll see that sys$input is a command file. It looks like a pre-fabricated one, into which your remote command was inserted. I have no idea where the data your local ssh command reads from stdin ends up on the remote side; I would expect it lands in a socket. In a remote Unix shell, stdin would simply be redirected to that socket, but I have no idea how this is done on VMS. Sys$input is not a socket; sys$output is an FT device (a pseudo-terminal), which seems to be connected to the socket.
03-05-2012 07:06 AM
Huh. Well, I see what you are getting at. However, I did not find any such file in my [.ssh2]. Perhaps it gets removed by the time I try to observe it due to some timing difference between my system's login vs. yours?
OK, so I've concluded that I just can't get there from here. It's a shame the VMS ssh implementation sucks so hard. I'll have to resort to moving temporary files around.
03-05-2012 08:58 AM
>>> However, I did not find any such file in my [.ssh2]. Perhaps it gets removed by the time I try to observe it due to some timing difference between my system's login vs. yours?
03-05-2012 06:13 PM
I think there's a culture clash here. PIPE on OpenVMS is a kludge, and quite dissimilar to the Unix implementation. Remember that the command executed by ssh is in the context of the other node, so it won't understand how to read from the previous pipe stage.
I get a different error from you:
$ pipe type login.com | ssh localhost show log/proc
%TCPIP-F-SSH_FATAL, non-specific fatal error condition
I suspect this means the mechanism which PIPE uses to communicate with the pipe stage subprocess is the same as SSH is trying to use and they're crashing into each other.
To simplify, I also tried rsh, this kind of worked:
pipe type mytextfile.txt | rsh localhost type sys$input
This is my text file
Just 2 lines
So, the rsh pipe stage was able to read the output from the first pipe stage, BUT it didn't see an EOF, so the command just hung. I could even catch the output in a 3rd pipe stage back in the context of the initiating process, but no EOF. I'm not sure how to send an EOF from the initial stage. Here's an attempt
$ pipe ( type mytextfile.txt ; write sys$output eof) | -
rsh localhost type sys$input | -
(write sys$output "3rd stage" ; type sys$pipe)
This is my text file
Just 2 lines
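For what it's worth, the hang illustrates a real semantic difference: on the Unix side there is no in-band EOF marker at all; the reader simply sees end-of-file when the writer closes its end of the pipe. A minimal local sketch (no VMS or rsh involved):

```shell
# The reader (cat) exits on its own as soon as the writer (printf)
# finishes and the write end of the pipe closes -- no "eof" sentinel
# line needs to be sent.
printf 'line1\nline2\n' | cat
echo "reader exited with status $?"
```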
03-05-2012 11:00 PM
I think you might have to give up on having this work as a single shell command line.
Instead, do it in a couple of steps:
1. On your Linux system, generate the data required by the VMS system.
2. Use any convenient method to transfer that data to a file on the VMS server. (Maybe SCP?)
3. Use SSH to run a remote command, e.g.
ssh vms-server "@command file"
(this assumes "command" is a .COM procedure on the server that will process "file")
4. Process the output received from the SSH command.
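Sketched as a script, the staged approach might look like this (the host name, file names, and the remote .COM procedure are all placeholders, and the remote steps are commented out since they require a real VMS server):

```shell
#!/bin/sh
set -e

# 1. Generate the data locally.
printf '1\n2\n3\n' > input.dat

# 2. Transfer it to the VMS server (hypothetical host "vms-server").
# scp input.dat vms-server:input.dat

# 3. Run a remote command procedure that processes the file.
# ssh vms-server '@process_input input.dat' > output.dat

# 4. Post-process the result locally (here we just count the lines
#    we staged, as a stand-in for real processing).
echo "staged $(grep -c '' input.dat) lines"
```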
03-06-2012 03:16 AM
Yeah, that's what I concluded: temporary files all the way. We've already written a Ruby tool that implements a DSL with a few commonly needed commands (e.g. file uploads and downloads) and handles contacting multiple VMS hosts in parallel. I just need to make sure its Linux -> VMS support still works, and rewrite my script to use it. It's really quite a nice tool; I was just missing the ability to construct one-liner Unix pipelines in which most of the processing happens locally (in contrast to this tool, which is mostly focused on remote execution).
Thanks, everyone, for your answers.
03-30-2012 06:32 AM
I'm going to expose my ignorance here, but there may be an issue with how you log in and what (if anything) exists in LOGIN.COM and SYLOGIN.COM on the OpenVMS side. I know for a fact that if you tried this on our system, you would have to go through two layers of logic that silently try to decide what kind of terminal is making the connection; they also issue banners and do other things that might change terminal characteristics, which could have the effect of flushing an input buffer. There is also the question of whether you are logging in via password or public-key authentication over SSH, because those differ in the characters consumed during login. I don't know that these count as useful comments, but they might be worth eliminating as issues.
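If login-time chatter does turn out to be the culprit, one common OpenVMS mitigation (assuming you can edit the login files) is to guard interactive-only logic with the F$MODE() lexical function, so that network logins such as SSH remote commands skip it entirely. A sketch for LOGIN.COM or SYLOGIN.COM; the label name is a placeholder:

```
$! Skip banners and terminal setup for non-interactive logins.
$! Network logins, including SSH remote commands, report "NETWORK".
$ IF F$MODE() .NES. "INTERACTIVE" THEN GOTO common
$! ... interactive-only setup (banners, SET TERMINAL, etc.) goes here ...
$common:
$ EXIT
```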