Trying to run a single server in Kubernetes #95
Comments
Hey Barry, this issue seems almost identical to another one you submitted last year: Issue #36 (Stuck in "Waiting for master server..."). I think my response is still the same. I've never used Kubernetes, but I suspect your "pod" (server?) hostname is changing, which really confuses Cronicle. As explained in #36, Cronicle is extremely sensitive to, and frankly entirely dependent on, the server hostname being static (never changing after the initial install). If your hostname is going to change, you have to do one of two things:

(1) Follow the instructions in #36 (which have been promoted to a Troubleshooting wiki page) to change your server hostname, possibly on boot. You can now do it programmatically, by the way, by exporting, manipulating and resubmitting the server data record. For example:
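Something along these lines should work -- a rough sketch only, assuming the server record lives at `global/servers/0` and that `storage-cli.js`'s `get`/`put` emit and accept plain JSON on stdout/stdin (double-check the path and record layout against the Troubleshooting wiki and your own install first):

```sh
# Stop Cronicle first so the edited record isn't overwritten on shutdown.
/opt/cronicle/bin/control.sh stop

# Export the stored server record (path assumed to be global/servers/0).
/opt/cronicle/bin/storage-cli.js get global/servers/0 > /tmp/servers.json

# Change the "hostname" field to the new hostname -- by hand, or e.g. with sed:
sed -i 's/"hostname"[[:space:]]*:[[:space:]]*"[^"]*"/"hostname": "'"$(hostname)"'"/' /tmp/servers.json

# Resubmit the modified record, then start Cronicle back up.
/opt/cronicle/bin/storage-cli.js put global/servers/0 < /tmp/servers.json
/opt/cronicle/bin/control.sh start
```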
(2) See Issue #4 (Add Docker support), which has lots of information about making Cronicle work with Docker. I assume Kubernetes would be similar. It sounds like several people have succeeded in making this work. One thing I talked about in #4 is forcing the server to become master and ignoring the server hostname data. This is detailed in Comment 268882549.

Good luck.
Thanks, @jhuckaby, that did the trick.
Hi @barryw |
Hi, if it can help somebody else, here's a quick and dirty one-line command:
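For illustration only (this is not the original poster's command, and it assumes the same `global/servers/0` record path discussed above), such a one-liner could look like:

```sh
# Illustrative sketch: rewrite the stored server hostname to the current container hostname.
/opt/cronicle/bin/storage-cli.js get global/servers/0 \
  | sed 's/"hostname"[[:space:]]*:[[:space:]]*"[^"]*"/"hostname": "'"$(hostname)"'"/' \
  | /opt/cronicle/bin/storage-cli.js put global/servers/0
```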
Summary
I'm trying to deploy Cronicle to Kubernetes, and I have it mostly working, but if the Cronicle pod gets recreated, Cronicle comes up in a perpetual "Waiting for master" state. I've set my maingrp regex to match the hostname of any new pod, but it doesn't help. The pod will also have a new IP address, so maybe that's the problem.
I'm using a persistent volume so that a new pod still retains the Cronicle data.
Steps to reproduce the problem
I'm using a private Docker image built specifically for Kubernetes deployment, but the gist of the problem is that I should be able to recreate the Cronicle container using the persistent data and have it declare itself the new master without intervention.
Your Setup
I'm running a custom Cronicle Docker image that I built myself, tailor-made for Kubernetes. In this environment, the Cronicle pod can get recreated (for example, after a failed Kubernetes worker node), and when it does, it will have a different hostname (based on the pod name) and a different IP address. I need a configuration that allows it to come up and declare itself the new master without intervention. The underlying config and data are stored on a persistent volume, so Cronicle keeps its config and data even after the pod is recreated.
Operating system and version?
I'm using the node:6.11-alpine base Docker image.
Node.js version?
6.11
Cronicle software version?
Latest from master branch
Are you using a multi-server setup, or just a single server?
For now I just want to use a single master deployed in Kubernetes. It might even be nice to have a switch or config setting that does away with the whole master election and lets me run as a single server.
Are you using the filesystem as back-end storage, or S3/Couchbase?
Data and configs are stored persistently on Kubernetes persistent volumes (filesystem storage).
Can you reproduce the crash consistently?
Absolutely. I can bring up a new Cronicle with an empty data directory and everything works great. If I delete the Cronicle pod, Kubernetes dutifully brings up a new one, and Cronicle reports that it has already been configured (the persistent disk includes information about the current master). It determines that the current host is not the master and then waits for the master to contact it. My maingrp regex is ".+", which should catch any host. I've also tried "^(cybric-local-cronicle.+)$" (pod names look like this: cybric-local-cronicle-5ddd6fbf66-cbhsl) without success.
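A quick way to see the mismatch (a sketch, assuming the `global/servers/0` record path mentioned in the comments; it may differ per install) is to compare the hostname Cronicle has on record with the hostname the new pod actually reports:

```sh
# Hostname the new pod reports:
hostname

# Hostname/IP Cronicle has stored for its server (assumed record path):
/opt/cronicle/bin/storage-cli.js get global/servers/0

# The maingrp regex only controls group membership; if the stored server
# hostname no longer matches the pod's hostname, Cronicle still waits for
# the "old" master -- which matches the behavior described above.
```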
Log Excerpts
I don't see anything unusual in the logs, except that it says that it's not a master and will wait for the current one:
Dockerfile
setup_and_start_in_debug.sh
$CONFIG_DIR is mounted from a ConfigMap, which is just a way of storing files and configuration in Kubernetes. The data directory is mounted as a persistent volume at /opt/cronicle/data_mount.
API_KEY and ADMIN_PASSWORD are stored as Kubernetes secrets and passed in at pod creation time so that we can default them to known values. We use a custom setup.json in the ConfigMap that contains a placeholder for our API user.
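For context, the startup flow described above looks roughly like the sketch below. This is not the actual setup_and_start_in_debug.sh; the placeholder tokens, file names, and first-boot check are assumptions made for illustration.

```sh
#!/bin/sh
# Illustrative sketch of a Cronicle startup script for Kubernetes.
# Assumptions: config.json and setup.json are mounted at $CONFIG_DIR from a
# ConfigMap, persistent data is mounted at /opt/cronicle/data_mount, and
# API_KEY / ADMIN_PASSWORD are injected as environment variables from secrets.
set -e

CRONICLE=/opt/cronicle

# Copy config and the setup.json template (with placeholders) from the ConfigMap.
cp "$CONFIG_DIR"/config.json "$CRONICLE"/conf/config.json
cp "$CONFIG_DIR"/setup.json  "$CRONICLE"/conf/setup.json

# Substitute the secret values for their placeholders (__API_KEY__ and
# __ADMIN_PASSWORD__ are made-up tokens for this sketch).
sed -i "s|__API_KEY__|$API_KEY|g"               "$CRONICLE"/conf/setup.json
sed -i "s|__ADMIN_PASSWORD__|$ADMIN_PASSWORD|g" "$CRONICLE"/conf/setup.json

# config.json is assumed to point Storage.Filesystem.base_dir at the
# persistent volume mount.
mkdir -p "$CRONICLE"/data_mount

# First boot only: initialize storage from setup.json.
if [ -z "$(ls -A "$CRONICLE"/data_mount 2>/dev/null)" ]; then
    "$CRONICLE"/bin/control.sh setup
fi

# Run in the foreground with debug logging so Kubernetes supervises the process.
exec "$CRONICLE"/bin/debug.sh
```

On pod recreation the hostname changes, so a hostname-reset step along the lines discussed in the comments (exporting and resubmitting the server record with the new hostname) would need to run before the final start for the new pod to recognize itself as master.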
Thanks!
Barry