Re: New shell server transition
Posted: Wed May 23, 2018 5:21 pm
Still unchanged as of 5:20.
I can make it honor .no-motd -- will do.

fw wrote:
> A few notes:
>
> I'm definitely running afoul of some sort of idle timeout, and it kicks in after only a few minutes (though I haven't yet measured the exact duration). This is with bash. I haven't yet changed any ssh settings, but according to the documentation, TCP keepalive is enabled by default, which should both keep the TCP connection from dropping and keep the NAT mapping from timing out (as long as the keepalive interval is shorter than the NAT timeout). I haven't tried fiddling with ServerAliveInterval yet.
>
> There doesn't seem to be a way to disable the motd at login without also disabling the "last login" message (and possibly other messages of interest). The old server honored the .no-motd flag file, but the new one ignores it. Using .hushlogin is overly heavy-handed.
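A sketch of how the flag could be honored again, assuming the motd is printed from a shell-level login script rather than by pam_motd (the function name and its arguments are illustrative, not the server's actual code):

```shell
#!/bin/sh
# print_motd MOTD_FILE HOME_DIR:
# print MOTD_FILE unless HOME_DIR/.no-motd exists (per-user opt-out).
print_motd() {
    if [ -e "$2/.no-motd" ]; then
        return 0              # user opted out; print nothing
    fi
    if [ -r "$1" ]; then
        cat "$1"
    fi
}
```

The "last login" line comes from sshd/PAM rather than from the motd, so an opt-out at this level leaves it intact.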
I'm surprised it doesn't do that by default, and can certainly add that.

fw wrote:
> The new server doesn't add ~/bin to PATH globally, which the old one did. Perhaps this was intentional, since binaries built for oldshell won't necessarily work (due to missing libraries), but it means that some tweaking of local startup scripts is needed to get that effect.
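Until that change lands, the usual per-user workaround is a few lines in ~/.profile (or your shell's equivalent); sketched here as a function (add_home_bin is an invented name):

```shell
# Prepend "$1/bin" to PATH if the directory exists and isn't already listed.
add_home_bin() {
    case ":$PATH:" in
        *":$1/bin:"*) ;;                             # already on PATH; do nothing
        *) [ -d "$1/bin" ] && PATH="$1/bin:$PATH" ;;
    esac
}

# Typical use from ~/.profile:
add_home_bin "$HOME"
export PATH
```

This only restores the PATH entry, of course; binaries built for oldshell can still fail for the missing-library reasons mentioned above.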
The chroots & process visibility _are_ per-user -- not per-session. (Unless I'm misunderstanding what the mount option to the /proc fs is doing.) I'm not sure what you're seeing, but I just verified this with a test, and I got the expected behavior: you can see all of your processes anywhere on the system.

fw wrote:
> The chroot containerization unfortunately separates multiple sessions from the same user. On oldshell, when something I'm running goes out to lunch, I can log in on another session and kill the offending process, but that's not possible on the new server. It would be better if the containers could be per-userid rather than per-session, though I don't know how easy that would be to implement. A kludgy workaround might be to have special versions of ps and kill that could reach outside the chroot jail (while still being userid-constrained). And of course, if any real use is made of groups, containers would need to be per-group.
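The thread only says "the mount option to the /proc fs"; on Linux the usual candidate is proc's hidepid option, which restricts /proc visibility per user without any per-session separation. A sketch of the mount invocation (the chroot path is invented, and this needs root):

```
# hidepid=2 makes other users' /proc/<pid> directories invisible entirely,
# while every process owned by you stays visible from any of your sessions.
mount -t proc -o hidepid=2 proc /srv/chroot/newshell/proc
```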
Just saw:

gie wrote:
> Still unchanged as of 5:20.
I investigated this further and determined that the ssh session times out after about five minutes of inactivity. Setting ServerAliveInterval on the client to 240 (four minutes) fixes it. I don't know what the equivalent is in other clients, but this isn't necessary on oldshell.

fw wrote:
> I'm definitely running afoul of some sort of idle timeout, and it kicks in after only a few minutes [...]
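For OpenSSH clients, that fix goes in ~/.ssh/config; a minimal fragment (the Host alias is made up -- substitute the server's actual name):

```
Host newshell
    # Send an application-level keepalive every 4 minutes so the
    # roughly-5-minute server-side idle timeout never fires.
    ServerAliveInterval 240
    # Give up after 3 unanswered keepalives instead of hanging forever.
    ServerAliveCountMax 3
```

Unlike TCP keepalive, these probes are sent inside the encrypted connection, which evidently counts as activity for whatever was killing the idle sessions here.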
My bad. I hadn't given ps the right options.

scott wrote:
> The chroots & process visibility _are_ per-user -- not per-session. [...]
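For anyone else bitten by this: plain ps shows only processes attached to the current terminal, which looks exactly like per-session isolation. Standard procps invocations that look wider:

```shell
# Only processes on the current terminal -- this is what misleads you:
ps
# All processes owned by the current user, across every session:
ps -u "$(id -un)" -f
# Everything visible in this /proc, BSD style:
ps aux
```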
Also, the chroot is shared -- there is no other place for it to be; I wrote the script that starts and stops the chroots.
Those are actually "bind" mounts within the chroots. There are a few other bind mounts, too. Right now they may pile up a bit, as I've no-opped the unmount script until I've satisfied myself that it isn't the cause of various API filesystems getting unmounted (/dev/pts being one of them).

fw wrote:
> I investigated this further and determined that the ssh session times out after about five minutes of inactivity. [...]
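For illustration, the start/stop scripts described above might look roughly like this (all paths are invented and these commands need root, so treat it purely as a sketch, not the server's actual scripts):

```
# Start script: bind the pieces of the host the chroot needs.
mount --bind /dev      /srv/chroot/newshell/dev
mount --bind /dev/pts  /srv/chroot/newshell/dev/pts
mount --bind /home     /srv/chroot/newshell/home

# Stop script (currently no-opped): tear them down in reverse order.
umount /srv/chroot/newshell/home
umount /srv/chroot/newshell/dev/pts
umount /srv/chroot/newshell/dev
```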
Regarding devpts - I've sometimes seen a large proliferation of devpts mounts (I think I counted 124 in one case). Maybe this is what systemd is trying to clean up.
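A quick way to measure that proliferation on a Linux box (the filesystem type is the third field of /proc/mounts):

```shell
# Count mounted devpts instances; a healthy system usually has one or two.
awk '$3 == "devpts"' /proc/mounts | wc -l
```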