Update machine metadata through etcd #555
This is not out of scope, but the current implementation will not support this without some major refactoring. As you noted, fleet is statically configured with specific metadata; it simply publishes that data to etcd and does not read it back in from etcd. We should really move a subset of the config into etcd to make this kind of reconfiguration simple.
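For context, a minimal sketch of what that static configuration looks like, assuming fleet is launched with a config file (the path, flag, and metadata values here are illustrative, not fleet's documented defaults):

```sh
# Illustrative only: fleet reads metadata once at startup and publishes it
# to etcd; it never reads metadata back out of etcd afterwards.
cat <<'EOF' >/etc/fleet/fleet.conf
metadata="region=us-east-1,disk=ssd"
EOF
# Assumes fleetd is started with --config=/etc/fleet/fleet.conf
```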
This could also apply to the functionality exposed in the HTTP API.
@bcwaldon Is this already on the roadmap, and can you give an ETA?
@gucki @jonboulle and I will spend some time getting together a formal roadmap later this week. We've been focusing a lot of our development effort on etcd, so fleet hasn't gotten a lot of love recently. This specific feature is probably something we want to support.
+1 I'm using Ansible to configure metadata, and being able to do this dynamically would be extremely helpful.
+1
Unfortunately @bcwaldon and I are quite resource-constrained right now, but we would be very open to accepting a patch to implement this if someone in the community is interested in putting one together!
+1
+1. Throwing this gist out (https://gist.github.com/skippy/d539442ada90be06459c) in case it is helpful for folks to see how to modify fleet metadata from a metadata service (in this case, AWS).
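Not a substitute for the gist, but a minimal sketch of the same idea, assuming the standard EC2 instance-metadata endpoint (the chosen metadata keys are illustrative):

```sh
#!/bin/bash
# Derive fleet metadata from the EC2 instance-metadata service.
md=http://169.254.169.254/latest/meta-data
instance_type=$(curl -s "$md/instance-type")
az=$(curl -s "$md/placement/availability-zone")

# Emit a value suitable for FLEET_METADATA; applying it still requires
# restarting fleet, as discussed further down the thread.
echo "FLEET_METADATA=instance_type=${instance_type},az=${az}"
```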
+1
As this issue belongs to milestone 1.0, I have been reading through the discussion here. I can also see there were related discussions about adding schedulers, such as #922. At the moment, though, I don't think adding such schedulers is a smart choice: it would require a lot of work while risking breaking changes. So I'm closing this issue. If I'm missing anything, please let me know.
Being able to update machine metadata would allow for much more powerful scheduling in the absence of true resource-based scheduling. Perhaps this is already possible and just needs proper documentation, but I was unable to find anything.
From what I understand, etcd is used to store the fleet registry, and presumably the metadata as well. So perhaps one can manually tweak the etcd entries? I was unable to find where they were stored, as all I see are update-related files in etcd. And the etcd section in cloud-init only lists basic machine info, with no fleet information.
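For what it's worth, the registry can at least be inspected; a sketch assuming fleet's default etcd key prefix (hand-editing these values is untested and at your own risk):

```sh
# Machine records live under fleet's registry prefix in etcd.
etcdctl ls --recursive /_coreos.com/fleet/machines

# Each machine's object is a JSON blob that includes its Metadata map;
# <machine-id> is a placeholder for one of the IDs listed above.
etcdctl get "/_coreos.com/fleet/machines/<machine-id>/object"
```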
I would imagine restarting the fleet service with a different metadata environment variable would do the trick, but that would be rather brute-force and would interrupt things. I'll try that next and see whether the state of services and such remains. EDIT: It seems to do the trick. Various status checks and list-units/list-machines obviously do not show the machine while fleet is restarting, but running services are not interrupted and show up again once it is back up. Given that my workaround was creating fake services (which unfortunately have to be in fleet and mapped to specific machines) to make up for the lack of metadata updates, this restart approach seems better.
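A minimal sketch of that restart approach using a systemd drop-in (the drop-in name and metadata values are illustrative):

```sh
# Override fleet's metadata, then restart the daemon. Running units keep
# running; only fleet's view of the machine blips during the restart.
mkdir -p /run/systemd/system/fleet.service.d
cat <<'EOF' >/run/systemd/system/fleet.service.d/20-metadata.conf
[Service]
Environment=FLEET_METADATA=can_fit_large=true,can_fit_small=true
EOF
systemctl daemon-reload
systemctl restart fleet
```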
My use case:
I am attempting to simulate resource-based scheduling in a manner specific to my services. I have a few services with varying resource requirements, and I can calculate how many of each will fit on a machine (leaving a margin for error). My thinking was to have a script on each node add metadata indicating the types of services that could still fit on that machine. The services would be defined to look for their related metadata tag, and once a machine was too full for certain services, the corresponding metadata could be removed.
This allows hosting multiple instances of the same service on one machine (perhaps of varying sizes) instead of relying on primitive Conflicts checks and the like; see the sketch below.
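To make that concrete, a sketch of the unit side, assuming a hypothetical can_fit_large key published (and later removed) by the per-node capacity script; the image name is also hypothetical:

```sh
# A fleet unit scheduled only onto machines still advertising capacity
# for a "large" service; key and image name are hypothetical.
cat <<'EOF' >large-service@.service
[Service]
ExecStart=/usr/bin/docker run --rm mycorp/large-service
[X-Fleet]
MachineMetadata=can_fit_large=true
EOF
fleetctl start large-service@1.service
```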
Looking forward to your thoughts.