
Enter running machine as systemd service #123

Open · MrFoxPro opened this issue Aug 3, 2023 · 16 comments

MrFoxPro (Contributor) commented Aug 3, 2023

Is it possible to connect a terminal's stdin/stdout to a deployed machine, to inspect what's going on there?

luochen1990 commented:

I'm having the same issue. An sshd service might work, but is there an easier way (like a tty directly)?

astro (Owner) commented Aug 4, 2023

If you look into the git history, this existed before as bin/microvm-console, using a pty instance, just for qemu and cloud-hypervisor. I dropped it because I wasn't too happy with it.

I am happy to have consoles/serials configurable, but with sensible defaults. I cannot give an ETA for when I'll have time for that.

Also, I am delighted that @Mic92 has updated https://github.com/Mic92/vmsh -- please play with that!

luochen1990 commented:

Thanks, but vmsh doesn't seem to work for me; not sure why :(

Mic92 (Contributor) commented Aug 6, 2023

I think I would need more time to fix some issues with VMSH.
But here are some thoughts about serial/console support in microvm.nix itself:

  • Serial devices do not set TERM or the terminal size correctly. Here is a serial NixOS module that works around that: https://github.com/numtide/srvos/blob/main/nixos/common/serial.nix
  • A virtio console would be ideal because it knows the terminal size and can also update it dynamically, at least at the protocol level; I don't know what the state-of-the-art hypervisors are doing.
  • Another option is some vsock-based daemon that works like ssh but doesn't require any special network configuration.
  • Or simply use ssh, which is what I am doing right now:

I allocate a tap interface called "management" for each VM (on the host I use mgt-$name) and allow ssh traffic from it:

{
    # Only allow ssh on internal tap devices
    networking.firewall.interfaces.management.allowedTCPPorts = [ 22 ];
    services.openssh.openFirewall = false;
}

Then I set the link-local IPv6 address to "fe80::1" on the host and "fe80::2" in the VM.
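
A minimal sketch of how those addresses could be pinned with systemd-networkd; the "mgt-*" match on the host and the guest-side interface name "management" are assumptions following the naming scheme above:

{
  # Host side: assign the fe80::1 peer address on every management tap.
  # The "mgt-*" glob assumes the mgt-$name scheme described above.
  systemd.network.networks."20-mgt" = {
    matchConfig.Name = "mgt-*";
    address = [ "fe80::1/64" ];
  };
}

And in each VM:

{
  # Guest side: fixed fe80::2 on the management interface
  # ("management" is the assumed guest-side interface name).
  systemd.network.networks."20-management" = {
    matchConfig.Name = "management";
    address = [ "fe80::2/64" ];
  };
}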

I can then use this SSH wrapper to access my machines:

{
  environment.systemPackages = [
    (pkgs.writeScriptBin "ssh-vm" ''
      #!/usr/bin/env bash
      # At least the VM name is required; any further arguments are passed on to ssh.
      if [[ "$#" -lt 1 ]]; then
        echo "Usage: $0 <vm-name> [ssh-args...]"
        exit 1
      fi
      vm=$1
      shift
      # We can disable host key checking because we use IPv6 link-local addresses
      # and no other VM can spoof them on this interface.
      ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "$@" root@fe80::2%mgt-$vm
    '')
  ];
}

This lets me log in using just the VM name:

$ ssh-vm foo

Mic92 (Contributor) commented Aug 6, 2023

systemd now also parses the terminal name and size from the kernel command line, but this is mainly useful for the initial terminal at boot time, not for ad-hoc ones: https://github.com/systemd/systemd/blob/6ac299e3cedb1d9eb8df88010c4994a90aa12a9a/NEWS#L144
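
If I read that NEWS entry right, wiring it up from NixOS could look roughly like this; the tty name (hvc0) and the dimensions are made-up example values:

{
  # systemd.tty.* kernel command line options per the systemd v254 NEWS
  # entry linked above; hvc0, 40 and 120 are hypothetical examples.
  boot.kernelParams = [
    "systemd.tty.term.hvc0=xterm-256color"
    "systemd.tty.rows.hvc0=40"
    "systemd.tty.columns.hvc0=120"
  ];
}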

bouk (Contributor) commented Jan 12, 2024

A future version of systemd will make it easy to connect to a running VM over VSOCK: systemd/systemd#30777, which this project can use!

jim3692 commented Jan 17, 2024

I just came across this issue with my configuration. For me, @Mic92's solution with SSH over IPv6 did not work. I instead changed the qemu parameters to forward /dev/ttyS0 to a Unix socket, which lets me at least access my VMs using socat.

I have these changes in this branch: https://github.com/jim3692/microvm.nix/tree/console-in-unix-sock

I have also implemented the microvm -s <name> command, which runs socat in raw mode inside screen. I could not find an easier way to be able to leave the socat session.
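
In essence it boils down to something like this (a sketch, not the exact code from that branch; the socket path is hypothetical):

# Attach to the VM's serial socket in raw mode; running socat inside screen
# provides a detach key (Ctrl-a d) to leave the session again.
screen socat STDIO,raw,echo=0 UNIX-CONNECT:/var/lib/microvms/myvm/serial.sock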

EDIT: My VM's IPv6 address is fe80::ff:fe00:1, not fe80::2: since I didn't configure a static address, the link-local address is derived from the VM's MAC (02:00:00:00:00:01) via EUI-64. I managed to SSH successfully using that address.

astro (Owner) commented Jan 17, 2024

I prefer waiting for ssh over vsock rather than bringing back what we had before with microvm-console for only a few hypervisors.

BTW, find your machine's link-local addresses by pinging ff02::1%$interface (that's my favourite IPv6 address).
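
For example (mgt-foo stands for whatever your tap interface is called):

$ ping -c 2 ff02::1%mgt-foo     # all-nodes multicast; every host on the link answers
$ ip -6 neigh show dev mgt-foo  # then list the discovered link-local addresses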

tomfitzhenry commented:

> I prefer waiting for ssh over vsock

This is doable today.

In your host:

microvm.my-vm.vsock.cid = 1337;

In your guest:

services.openssh = {
  enable = true;
  startWhenNeeded = true;
};
systemd.sockets.sshd = {
  socketConfig = {
    ListenStream = [
      "vsock:1337:22"
    ];
  };
};

Then, to connect to your guest from your host:

$ ssh -o "ProxyCommand socat - VSOCK-CONNECT:1337:22" root@localhost
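
To avoid retyping the ProxyCommand, a matching ~/.ssh/config entry could look like this (the host alias "my-vm" and the CID 1337 are just the example values from above):

Host my-vm
  User root
  ProxyCommand socat - VSOCK-CONNECT:1337:22
  # There is no meaningful host key identity behind "localhost" here:
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no

$ ssh my-vm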

sydneymeyer commented:

> This is doable today. [quoting the vsock SSH setup above]

This approach works with the default qemu VMM, but not with e.g. cloud-hypervisor, as it terminates the vsock connection differently [1].

Is there a way to use an approach like this with cloud-hypervisor's implementation of vsock?
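
For what it's worth: cloud-hypervisor follows Firecracker's "hybrid vsock" model, where the device is exposed as a Unix socket on the host, and host-initiated connections must begin with a small text handshake before the byte stream is bridged. A sketch of what that looks like, with a hypothetical socket path:

# Connect to the host-side Unix socket created by cloud-hypervisor
# (the path is hypothetical), then type the handshake line yourself:
$ socat - UNIX-CONNECT:/run/my-vm-vsock.sock
CONNECT 22
# the hypervisor replies "OK <port>" and from then on the raw stream is
# bridged to the guest's vsock port 22

So a plain ProxyCommand socat - UNIX-CONNECT:... won't work as-is; something has to inject the CONNECT line before handing the stream to ssh.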

Alfablos commented:

Hi everyone,
I'm pretty interested in this. I'm using macvtap interfaces and I'd like to ssh into the microvm (qemu) from another client host on the LAN.

For example:

  • The host machine (hosting the microvm) has IP 10.0.100.15
  • The microvm has IP 10.0.100.32
  • The client host on the LAN I'm trying to connect from has IP 10.0.100.20

I'm trying to stick to the examples and provided snippets where possible at first.

As expected, ssh from the host machine (10.0.100.15) doesn't work, but what I get when 10.0.100.20 tries to ssh into 10.0.100.32 (hosted on 10.0.100.15) is:

(root@10.0.100.32) Password:
Read from remote host 10.0.100.32: Connection reset by peer
Connection to 10.0.100.32 closed.
client_loop: send disconnect: Broken pipe

However, if I curl the microvm it works.

➜  ~ curl 10.0.100.32
<html><body>It works</body></html>%

I've already tried playing with TCPKeepAlive and other SSH options, but I'm pretty sure this is something else.

This is an excerpt from my flake.nix file (unstable nixpkgs):

nixosConfigurations.myhost = lib.nixosSystem {
  specialArgs = { inherit inputs pkgs-stable microvm; };
  modules = [
    microvm.nixosModules.host
    ./hosts/myhost/configuration.nix
    sops-nix.nixosModules.sops
    { nixpkgs.overlays = [ nur.overlays.default ]; }
    (import ./overlays)
    # ...
  ];
};

My host configuration.nix looks like this:

  microvm.vms = {
    myvm = import ../../vms/myvm.nix {
      inherit config pkgs;
      stateVersion = config.system.stateVersion;
    };
  };
  # ...
  networking.networkmanager.enable = true;  # Easiest to use and most distros use this by default.
  networking.networkmanager.dns = "systemd-resolved";
  networking.networkmanager.unmanaged = [ "eth0" "docker0" "vmnet*" ];
  networking.networkmanager.logLevel = "INFO";
  networking.useDHCP = false;
  networking.useNetworkd = false;

  systemd.network.enable = true;
  systemd.network.wait-online.enable = false;

  networking.usePredictableInterfaceNames = false;

  systemd.network.links = {
    "10-eth0" = {
      matchConfig.PermanentMACAddress = "...";
      linkConfig.Name = "eth0";
    };
    "10-wlan0" = {
      matchConfig.PermanentMACAddress = "...";
      linkConfig.Name = "wlan0";
    };
  };

  systemd.network.networks = {
    "10-eth0" = {
      matchConfig.Name = "eth0";
      networkConfig = {
        DHCP = "yes";
      };
      linkConfig = {
        RequiredForOnline = false;
      };
    };
    # "10-wlan0" = {
    #   matchConfig.Name = "wlan0";
    #   networkConfig = {
    #     DHCP = "ipv4";
    #   };
    #   linkConfig = {
    #     RequiredForOnline = false;
    #   };
    # };
  };

  networking.networkmanager.ensureProfiles = {
    environmentFiles = [ config.sops.secrets."network/networkmanagerEnv".path ];
    profiles = {
      wifi1 = nmWifiConnectionBuilder { ssid = "wifi1"; authProtocol = "sae"; passOrVar = "$wifi1_pass"; };
      wifi2 = nmWifiConnectionBuilder { ssid = "wifi2"; authProtocol = "wpa-psk"; passOrVar = "somepass"; };
      wifi3 = nmWifiConnectionBuilder { ssid = "wifi3"; authProtocol = "sae"; passOrVar = "$wifi3_pass"; };
    };
  };

My vms/myvm.nix looks like this:

{ config, pkgs, ... }:
{
  config = {
    microvm = {
      hypervisor = "qemu";
      graphics.enable = false;
      interfaces = [
        {
          type = "macvtap";
          macvtap.link = "eth0";
          macvtap.mode = "bridge";
          id = "myvm";
          mac = "10:02:03:04:05:06";
        }
      ];
      shares = [
        {
          source = "/nix/store";
          mountPoint = "/nix/.ro-store";
          tag = "ro-store";
          proto = "virtiofs";
        }
      ];

    };
    microvm.qemu.extraArgs = [
      # "-vga" "qxl"
      # "-device" "virtio-keyboard"
      "-usb"
      "-device"
      "usb-tablet,bus=usb-bus.0"
    ];

    networking.hostName = "myvm";
    networking.firewall.enable = false;
    networking.networkmanager.enable = true;
    system.stateVersion = config.system.nixos.version;

    services.openssh = {
      enable = true;
      settings.PermitEmptyPasswords = "yes";
      settings.PermitRootLogin = "yes";
    };
    
    users.users.root.password = "test";
    services.getty.autologinUser = "root";
    users.users.user = {
      uid = 1000;
      password = "test";
      shell = pkgs.bash;
      group = "user";
      isNormalUser = true;
      extraGroups = [ "wheel" ];
    };
    users.groups.user = { };

    security.sudo = {
      enable = true;
      wheelNeedsPassword = false;
    };
    environment.systemPackages = [ ];

    services.nginx = {
      enable = true;
      virtualHosts.localhost = {
        locations."/" = {
          return = "200 '<html><body>It works</body></html>'";
          extraConfig = ''
            default_type text/html;
          '';
        };
      };
    };

  };

}

I hope someone can help me diagnose this because this project is amazing and I want to use it! :D

astro (Owner) commented Dec 27, 2024

@Alfablos I have had the exact same issues with macvtap. I wonder how others like @megheaiulian @pinkisemils @PatrickDaG @pks-t (the macvtap contributors) get the host to communicate with the VMs? I'd like to document that.

Until then, I mostly use tap attached to a host bridge as documented in the handbook, except where the NIC can do SR-IOV, which should really be documented too!

Let's not forget the actual subject of this issue: SSH over Vsock would still be really nice to have!

PatrickDaG (Contributor) commented:

To be able to connect from the host, you also need to give the host its own macvtap and use that as its interface; you can't use the physical interface directly. Something like this:

  # To be able to ping containers from the host, it is necessary
  # to create a macvlan on the host on the VLAN 1 network.
  networking.macvlans.lan = {
    interface = "lan01";
    mode = "bridge";
  };
  systemd.network.networks = {
    "10-lan01" = {
      # matchConfig.MACAddress = "...";
      matchConfig.Name = "lan";
      # ... address/DHCP configuration for the macvlan goes here ...
    };
  };

I'm not quite sure why you can't ssh into the VM from a remote host, but maybe it's some kind of firewall problem, or a port conflict, since your host uses the physical interface directly, which the macvtap also tries to do?

megheaiulian (Contributor) commented:
I just use a separate physical interface plugged into the same network.
I think (I might be wrong) that with macvlan the host and guest can't communicate because it's hard to figure out where packets originate when they share the same interface.
One other option might be to add an additional user-networking interface and communicate from the host via that one; see the sketch below.
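
A minimal sketch of that last option; "user" networking is hypervisor-dependent (qemu supports it), and the id and mac here are made-up values:

{
  microvm.interfaces = [
    # ... the existing macvtap interface for LAN traffic ...
    {
      # Extra user-mode network, reachable from the host without
      # touching the physical NIC (id and mac are hypothetical).
      type = "user";
      id = "usernet";
      mac = "02:00:00:00:00:02";
    }
  ];
}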

Alfablos commented:

> To be able to connect from the host, you also need to give the host its own macvtap and use that as its interface [...]

OK, so I'd get another IP from the router on my host machine, and the microvm would see connections from that IP.

What I don't get is why another, separate host on the network gets a correct HTTP response from the microvm, while SSH disconnects right after I enter the password. ssh -vvv is no more informative than without verbose logging. I tried changing the VM's ssh port and changing its firewall accordingly (also disabling it), and disabling the host's firewall.

It's as if the VM's sshd were not properly responding. Maybe sshd is OK but I'm missing something in the VM's OS configuration, even though I copied and pasted its config. I'll try using a tap device to see if everything is OK on the macvtap side!
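
One way to narrow this down (assuming console access via the autologin getty from the config above) is to watch sshd from inside the VM while the remote client connects:

# Inside the VM: follow sshd's log while reproducing the disconnect;
# a server-side error here would rule out the macvtap layer.
journalctl -f -u sshd.service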

39555 commented Feb 22, 2025

I haven't tried it myself yet, but it seems cloud-hypervisor supports connecting to the serial port via a socket. See cloud-hypervisor/cloud-hypervisor#5708:

cloud-hypervisor \
    .... \
    --console tty \
    --serial socket=/tmp/serial.sock

socat pty,link=/tmp/serial.pty,raw UNIX-CONNECT:/tmp/serial.sock
screen /tmp/serial.pty
