This repository was archived by the owner on Oct 31, 2019. It is now read-only.

Load balancers go into critical state when the instances are rebooted #185

Open
@sundeepdhall

Description


Terraform Version

# Run this command to get the terraform version:

$ terraform -v

[oracle@OL74-ClientImage terraform-kubernetes-installer]$ terraform -v
Terraform v0.11.3

+ provider.null v1.0.0
+ provider.oci v2.1.0
+ provider.random v1.1.0
+ provider.template v1.0.0
+ provider.tls v1.1.0

OCI Provider Version

# Execute the plugin directly to get the version:

$ <path-to-plugin>/terraform-provider-oci

-rwxr-xr-x 1 oracle oinstall 24145729 Mar 12 15:13 terraform-provider-oci_v2.1.0
[oracle@OL74-ClientImage plugins]$ ./terraform-provider-oci_v2.1.0
2018/03/29 04:11:13 [INFO] terraform-provider-oci 2.1.0
This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically
[oracle@OL74-ClientImage plugins]$ pwd
/home/oracle/.terraform.d/plugins

Terraform Installer for Kubernetes Version

Kubernetes 1.8.5
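
The running cluster version can be double-checked from the client (a minimal check; assumes kubectl is already configured against this cluster):

# Query the client and server versions:

$ kubectl version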

Input Variables

[oracle@OL74-ClientImage terraform-kubernetes-installer]$ cat terraform.tfvars

# BMCS Service

tenancy_ocid = "snipped"
compartment_ocid = "snipped"
fingerprint = "snipped"
private_key_path = "/home/oracle/.oci/oci_api_key.pem"
user_ocid = "snipped"
region="us-ashburn-1"

etcdShape = "VM.Standard1.2"
k8sMasterShape = "VM.Standard1.1"
k8sWorkerShape = "VM.Standard1.1"

etcdAd1Count = "1"
etcdAd2Count = "0"
etcdAd3Count = "0"

k8sMasterAd1Count = "0"
k8sMasterAd2Count = "1"
k8sMasterAd3Count = "0"

k8sWorkerAd1Count = "0"
k8sWorkerAd2Count = "0"
k8sWorkerAd3Count = "1"

etcdLBShape = "100Mbps"
k8sMasterLBShape = "100Mbps"

#etcd_ssh_ingress = "10.0.0.0/16"
#etcd_ssh_ingress = "0.0.0.0/0"
#etcd_cluster_ingress = "10.0.0.0/16"
master_ssh_ingress = "0.0.0.0/0"
worker_ssh_ingress = "0.0.0.0/0"
master_https_ingress = "0.0.0.0/0"
#worker_nodeport_ingress = "0.0.0.0/0"
#worker_nodeport_ingress = "10.0.0.0/16"

#control_plane_subnet_access = "public"
#k8s_master_lb_access = "public"
#natInstanceShape = "VM.Standard1.2"
#nat_instance_ad1_enabled = "true"
#nat_instance_ad2_enabled = "false"
#nat_instance_ad3_enabled = "true"
#nat_ssh_ingress = "0.0.0.0/0"
#public_subnet_http_ingress = "0.0.0.0/0"
#public_subnet_https_ingress = "0.0.0.0/0"

#worker_iscsi_volume_create is a bool not a string
#worker_iscsi_volume_create = true
#worker_iscsi_volume_size = 100

#etcd_iscsi_volume_create = true
#etcd_iscsi_volume_size = 50
ssh_public_key_openssh = "snipped"
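
For reference, this configuration was applied with the standard Terraform 0.11 workflow (a sketch; assumes it is run from the terraform-kubernetes-installer checkout, with the variables above in ./terraform.tfvars so Terraform loads them automatically):

# Initialize the providers and modules, review the plan, then provision:

$ terraform init
$ terraform plan
$ terraform apply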

Description of issue:

The load balancers go into a critical state and remain there for over an hour when the instances are rebooted. Steps to reproduce (see the command sketch after this list):

1. The cluster is set up using the Terraform plan from this installer.
2. The cluster is working: the instances are up, the LBs are up, and an application service can be deployed and accessed.
3. The compute instances are stopped and restarted.
4. The LBs go into a critical state and appear to remain there.
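
A sketch of the commands used to reboot the instances and observe the load balancer state (the OCIDs and backend set name are placeholders; assumes the OCI CLI is configured for the same tenancy):

# Stop and restart a compute instance (placeholder OCID):

$ oci compute instance action --instance-id ocid1.instance.oc1..example --action STOP
$ oci compute instance action --instance-id ocid1.instance.oc1..example --action START

# Overall load balancer health; this reports CRITICAL after the reboot:

$ oci lb load-balancer-health get --load-balancer-id ocid1.loadbalancer.oc1..example

# Per-backend-set health, to see which backends are failing their health checks:

$ oci lb backend-set-health get --load-balancer-id ocid1.loadbalancer.oc1..example --backend-set-name <backend-set-name>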
