diff --git a/load-balancing/elb/README.md b/load-balancing/elb/README.md index 69938c4..bda717d 100644 --- a/load-balancing/elb/README.md +++ b/load-balancing/elb/README.md @@ -1,52 +1,55 @@ # ELB and ASG lifecycle event scripts -Often when running a web service, you'll have your instances behind a load balancer. But when -deploying new code to these instances, you don't want the load balancer to continue sending customer -traffic to an instance while the deployment is in progress. Lifecycle event scripts give you the -ability to integrate your AWS CodeDeploy deployments with instances that are behind an Elastic Load -Balancer or in an Auto Scaling group. Simply set the name (or names) of the Elastic Load Balancer -your instances are a part of, set the scripts in the appropriate lifecycle events, and the scripts -will take care of deregistering the instance, waiting for connection draining, and re-registering -after the deployment finishes. +Often when running a web service, you'll have your instances behind a load balancer. But when deploying new code to these instances, you don't want the load balancer to continue sending customer traffic to an instance while the deployment is in progress. Lifecycle event scripts give you the ability to integrate your AWS CodeDeploy deployments with instances that are behind an Elastic Load Balancer or in an Auto Scaling group. Simply set the name (or names) of the Elastic Load Balancer your instances are a part of, set the scripts in the appropriate lifecycle events, and the scripts will take care of deregistering the instance, waiting for connection draining, and re-registering after the deployment finishes. ## Requirements -The register and deregister scripts have a couple of dependencies in order to properly interact with -Elastic Load Balancing and AutoScaling: - -1. The [AWS CLI](http://aws.amazon.com/cli/). In order to take advantage of -AutoScaling's Standby feature, the CLI must be at least version 1.3.25. 
If you -have Python and PIP already installed, the CLI can simply be installed with `pip -install awscli`. Otherwise, follow the [installation instructions](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) -in the CLI's user guide. -1. An instance profile with a policy that allows, at minimum, the following actions: - -``` - elasticloadbalancing:Describe* - elasticloadbalancing:DeregisterInstancesFromLoadBalancer - elasticloadbalancing:RegisterInstancesWithLoadBalancer - autoscaling:Describe* - autoscaling:EnterStandby - autoscaling:ExitStandby - autoscaling:UpdateAutoScalingGroup -``` - -Note: the AWS CodeDeploy Agent requires that an instance profile be attached to all instances that -are to participate in AWS CodeDeploy deployments. For more information on creating an instance -profile for AWS CodeDeploy, see the [Create an IAM Instance Profile for Your Amazon EC2 Instances]() -topic in the documentation. -1. All instances are assumed to already have the AWS CodeDeploy Agent installed. +The register and deregister scripts have a couple of dependencies in order to properly interact with Elastic Load Balancing and AutoScaling: + +1. The [AWS CLI](http://aws.amazon.com/cli/). In order to take advantage of AutoScaling's Standby feature, the CLI must be at least version 1.3.25. If you have Python and PIP already installed, the CLI can simply be installed with `pip install awscli`. Otherwise, follow the [installation instructions](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) in the CLI's user guide. + +2. 
An instance profile with a policy that allows, at minimum, the following actions: + + elasticloadbalancing:Describe* + elasticloadbalancing:DeregisterInstancesFromLoadBalancer + elasticloadbalancing:RegisterInstancesWithLoadBalancer + autoscaling:Describe* + autoscaling:EnterStandby + autoscaling:ExitStandby + autoscaling:UpdateAutoScalingGroup + autoscaling:SuspendProcesses + autoscaling:ResumeProcesses + + **Note**: the AWS CodeDeploy Agent requires that an instance profile be attached to all instances that are to participate in AWS CodeDeploy deployments. For more information on creating an instance profile for AWS CodeDeploy, see the [Create an IAM Instance Profile for Your Amazon EC2 Instances](http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html) topic in the documentation. + +3. All instances are assumed to already have the AWS CodeDeploy Agent installed. ## Installing the Scripts To use these scripts in your own application: 1. Install the AWS CLI on all your instances. -1. Update the policies on the EC2 instance profile to allow the above actions. -1. Copy the `.sh` files in this directory into your application source. -1. Edit your application's `appspec.yml` to run `deregister_from_elb.sh` on the ApplicationStop event, -and `register_with_elb.sh` on the ApplicationStart event. -1. Edit `common_functions.sh` to set `ELB_LIST` to contain the name(s) of the Elastic Load -Balancer(s) your deployment group is a part of. Make sure the entries in ELB_LIST are separated by space. -1. Deploy! +2. Update the policies on the EC2 instance profile to allow the above actions. +3. Copy the `.sh` files in this directory into your application source. +4. Edit your application's `appspec.yml` to run `deregister_from_elb.sh` on the ApplicationStop event, and `register_with_elb.sh` on the ApplicationStart event. +5. 
If your instance is not in an Auto Scaling Group, edit `common_functions.sh` to set `ELB_LIST` to contain the name(s) of the Elastic Load Balancer(s) your deployment group is a part of. Make sure the entries in ELB_LIST are separated by spaces.
+Alternatively, you can set `ELB_LIST` to `_all_` to automatically use all load balancers the instance is registered to, or `_any_` to get the same behaviour as `_all_` but without failing your deployments if the instance is not part of any ASG or ELB. This is more flexible for heterogeneous tag-based Deployment Groups.
+6. Optionally, set `HANDLE_PROCS=true` in `common_functions.sh`. See the note below.
+7. Deploy!
+
+## Important notice about handling AutoScaling processes
+
+When using AutoScaling with CodeDeploy, you have to consider some edge cases during the deployment time window:
+
+1. If you have a scale-up event, the new instance(s) will get the latest successful *Revision*, not the one you are currently deploying. You will end up with a fleet of mixed revisions.
+2. If you have a scale-down event, instances are going to be terminated, and your deployment will (probably) fail.
+3. If your instances are not balanced across Availability Zones **and you are** using these scripts, AutoScaling may terminate some instances or create new ones to maintain balance (see [this doc](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html#process-types)), interfering with your deployment.
+4. If the health checks of your AutoScaling Group are based on the ELB's ([documentation](http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html)) **and you are not** using these scripts, then instances will be marked as unhealthy and terminated, interfering with your deployment. 
+
+In an effort to handle these cases, the scripts can suspend some AutoScaling processes (AZRebalance, AlarmNotification, ScheduledActions and ReplaceUnhealthy) while deploying, to avoid those events happening in the middle of your deployment. You only have to set `HANDLE_PROCS=true` in `common_functions.sh`.
+
+A record of the processes that were already suspended before the start of the deployment is kept by the scripts (on each instance), so when the deployment finishes, the processes on the AutoScaling Group are returned to the same state as before. For example, if AZRebalance was suspended manually, it will not be resumed. However, if the scripts don't run (failed deployment), you may end up with stale suspended processes.
+
+Disclaimer: There's a small chance that an event is triggered while the deployment is progressing from one instance to another. The only way to avoid that completely would be to monitor the deployment from outside CodeDeploy/AutoScaling and act accordingly. Whether that effort is worthwhile depends on each use case.
+**WARNING**: If you are using this functionality, you should only use the *CodeDeployDefault.OneAtATime* deployment configuration to ensure serial execution of the scripts. Concurrent runs are not supported.
diff --git a/load-balancing/elb/common_functions.sh b/load-balancing/elb/common_functions.sh
index 7ee6dbe..7e3f21a 100644
--- a/load-balancing/elb/common_functions.sh
+++ b/load-balancing/elb/common_functions.sh
@@ -14,7 +14,10 @@
 # permissions and limitations under the License.
 
 # ELB_LIST defines which Elastic Load Balancers this instance should be part of.
-# The elements in ELB_LIST should be seperated by space.
+# The elements in ELB_LIST should be separated by spaces. The safe default is "".
+# Set to "_all_" to automatically find all load balancers the instance is registered to.
+# Setting "_any_" works like "_all_" but will not fail if the instance is not attached
+# to any ASG or ELB, giving extra flexibility. 
 ELB_LIST=""
 
 # Under normal circumstances, you shouldn't need to change anything below this line.
@@ -34,6 +37,12 @@ WAITER_INTERVAL=1
 # AutoScaling Standby features at minimum require this version to work.
 MIN_CLI_VERSION='1.3.25'
 
+# Create a flagfile for each deployment
+FLAGFILE="/tmp/asg_codedeploy_flags-$DEPLOYMENT_GROUP_ID-$DEPLOYMENT_ID"
+
+# Handle ASG processes
+HANDLE_PROCS=false
+
 # Usage: get_instance_region
 #
 # Writes to STDOUT the AWS region as known by the local instance.
@@ -49,6 +58,123 @@ get_instance_region() {
 
 AWS_CLI="aws --region $(get_instance_region)"
 
+# Usage: set_flag <key> <value>
+#
+# Writes <key>=<value> to FLAGFILE
+set_flag() {
+    if echo "$1=$2" >> $FLAGFILE; then
+        return 0
+    else
+        error_exit "Unable to write flag \"$1=$2\" to $FLAGFILE"
+    fi
+}
+
+# Usage: get_flag <key>
+#
+# Checks for <key> in FLAGFILE. Echoes its value and returns 0 on success, or non-zero if it fails to read the file.
+get_flag() {
+    if [ -r $FLAGFILE ]; then
+        local result=$(awk -F= -v flag="$1" '{if ( $1 == flag ) {print $2}}' $FLAGFILE)
+        echo "${result}"
+        return 0
+    else
+        # FLAGFILE doesn't exist
+        return 1
+    fi
+}
+
+# Usage: check_suspended_processes
+#
+# Checks which processes were suspended on the ASG before starting and stores them in
+# the FLAGFILE to avoid resuming them afterwards. Also aborts if the Launch process
+# is suspended.
+check_suspended_processes() {
+    # Get suspended processes in an array
+    local suspended=($($AWS_CLI autoscaling describe-auto-scaling-groups \
+        --auto-scaling-group-name "${asg_name}" \
+        --query 'AutoScalingGroups[].SuspendedProcesses' \
+        --output text | awk '{printf $1" "}'))
+
+    if [ ${#suspended[@]} -eq 0 ]; then
+        msg "No processes were suspended on the ASG before starting."
+    else
+        msg "These processes were suspended on the ASG before starting: ${suspended[*]}"
+    fi
+
+    # If the "Launch" process is suspended, abort because we will not be able to recover from Standby. Note the "[[ ... =~" bashism. 
+    if [[ "${suspended[@]}" =~ "Launch" ]]; then
+        error_exit "'Launch' process of AutoScaling is suspended which will not allow us to recover the instance from Standby. Aborting."
+    fi
+
+    for process in ${suspended[@]}; do
+        set_flag "$process" "true"
+    done
+}
+
+# Usage: suspend_processes
+#
+# Suspends processes known to cause problems during deployments.
+# The API call is idempotent, so it doesn't matter if any were previously suspended.
+suspend_processes() {
+    local -a processes=(AZRebalance AlarmNotification ScheduledActions ReplaceUnhealthy)
+
+    msg "Suspending ${processes[*]} processes"
+    $AWS_CLI autoscaling suspend-processes \
+        --auto-scaling-group-name "${asg_name}" \
+        --scaling-processes ${processes[@]}
+    if [ $? != 0 ]; then
+        error_exit "Failed to suspend ${processes[*]} processes for ASG ${asg_name}. Aborting as this may cause issues."
+    fi
+}
+
+# Usage: resume_processes
+#
+# Resumes suspended processes, except for the ones that were suspended before the deployment.
+resume_processes() {
+    local -a processes=(AZRebalance AlarmNotification ScheduledActions ReplaceUnhealthy)
+    local -a to_resume
+
+    for p in ${processes[@]}; do
+        local tmp_flag_value; if ! tmp_flag_value=$(get_flag "$p"); then  # assign outside "local" so it doesn't mask get_flag's exit status
+            error_exit "$FLAGFILE doesn't exist or is unreadable"
+        elif [ ! "$tmp_flag_value" = "true" ] ; then
+            to_resume=("${to_resume[@]}" "$p")
+        fi
+    done
+
+    msg "Resuming ${to_resume[*]} processes"
+    $AWS_CLI autoscaling resume-processes \
+        --auto-scaling-group-name "${asg_name}" \
+        --scaling-processes ${to_resume[@]}
+    if [ $? != 0 ]; then
+        error_exit "Failed to resume ${to_resume[*]} processes for ASG ${asg_name}. Aborting as this may cause issues."
+    fi
+}
+
+# Usage: remove_flagfile
+#
+# Removes FLAGFILE. Returns non-zero on failure.
+remove_flagfile() {
+    if rm $FLAGFILE; then
+        msg "Successfully removed flagfile $FLAGFILE"
+        return 0
+    else
+        msg "WARNING: Failed to remove flagfile $FLAGFILE." 
+ fi +} + +# Usage: finish_msg +# +# Prints some finishing statistics +finish_msg() { + msg "Finished $(basename $0) at $(/bin/date "+%F %T")" + + end_sec=$(/bin/date +%s.%N) + elapsed_seconds=$(echo "$end_sec" "$start_sec" | awk '{ print $1 - $2 }') + + msg "Elapsed time: $elapsed_seconds" +} + # Usage: autoscaling_group_name # # Prints to STDOUT the name of the AutoScaling group this instance is a part of and returns 0. If @@ -101,6 +227,14 @@ autoscaling_enter_standby() { return 0 fi + if [ "$HANDLE_PROCS" = "true" ]; then + msg "Checking ASG ${asg_name} suspended processes" + check_suspended_processes + + # Suspend troublesome processes while deploying + suspend_processes + fi + msg "Checking to see if ASG ${asg_name} will let us decrease desired capacity" local min_desired=$($AWS_CLI autoscaling describe-auto-scaling-groups \ --auto-scaling-group-name "${asg_name}" \ @@ -113,6 +247,7 @@ autoscaling_enter_standby() { if [ -z "$min_cap" -o -z "$desired_cap" ]; then msg "Unable to determine minimum and desired capacity for ASG ${asg_name}." msg "Attempting to put this instance into standby regardless." + set_flag "asgmindecremented" "false" elif [ $min_cap == $desired_cap -a $min_cap -gt 0 ]; then local new_min=$(($min_cap - 1)) msg "Decrementing ASG ${asg_name}'s minimum size to $new_min" @@ -123,10 +258,13 @@ autoscaling_enter_standby() { msg "Failed to reduce ASG ${asg_name}'s minimum size to $new_min. Cannot put this instance into Standby." 
             return 1
         else
-            msg "ASG ${asg_name}'s minimum size has been decremented, creating flag file /tmp/asgmindecremented"
-            # Create a "flag" file to denote that the ASG min has been decremented
-            touch /tmp/asgmindecremented
+            msg "ASG ${asg_name}'s minimum size has been decremented, creating flag in file $FLAGFILE"
+            # Create a "flag" to denote that the ASG min has been decremented
+            set_flag "asgmindecremented" "true"
         fi
+    else
+        msg "No need to decrement ASG ${asg_name}'s minimum size"
+        set_flag "asgmindecremented" "false"
     fi
 
     msg "Putting instance $instance_id into Standby"
@@ -192,7 +330,9 @@ autoscaling_exit_standby() {
         return 1
     fi
 
-    if [ -a /tmp/asgmindecremented ]; then
+    local tmp_flag_value; if ! tmp_flag_value=$(get_flag "asgmindecremented"); then  # assign outside "local" so it doesn't mask get_flag's exit status
+        error_exit "$FLAGFILE doesn't exist or is unreadable"
+    elif [ "$tmp_flag_value" = "true" ]; then
         local min_desired=$($AWS_CLI autoscaling describe-auto-scaling-groups \
             --auto-scaling-group-name "${asg_name}" \
             --query 'AutoScalingGroups[0].[MinSize, DesiredCapacity]' \
@@ -207,16 +347,22 @@ autoscaling_exit_standby() {
             --min-size $new_min)
         if [ $? != 0 ]; then
             msg "Failed to increase ASG ${asg_name}'s minimum size to $new_min."
+            remove_flagfile
             return 1
         else
             msg "Successfully incremented ASG ${asg_name}'s minimum size"
-            msg "Removing /tmp/asgmindecremented flag file"
-            rm -f /tmp/asgmindecremented
         fi
     else
         msg "Auto scaling group was not decremented previously, not incrementing min value"
     fi
 
+    if [ "$HANDLE_PROCS" = "true" ]; then
+        # Resume processes, except for the ones suspended before deployment
+        resume_processes
+    fi
+
+    # Clean up the FLAGFILE
+    remove_flagfile
     return 0
 }
 
@@ -240,6 +386,9 @@ get_instance_state_asg() {
     fi
 }
 
+# Usage: reset_waiter_timeout
+#
+# Resets the timeout value to account for the ELB timeout and also connection draining. 
reset_waiter_timeout() { local elb=$1 local state_name=$2 @@ -396,30 +545,11 @@ validate_elb() { get_elb_list() { local instance_id=$1 - local asg_name=$($AWS_CLI autoscaling describe-auto-scaling-instances \ - --instance-ids $instance_id \ - --query AutoScalingInstances[*].AutoScalingGroupName \ - --output text | sed -e $'s/\t/ /g') local elb_list="" - if [ -z "${asg_name}" ]; then - msg "Instance is not part of an ASG. Looking up from ELB." - local all_balancers=$($AWS_CLI elb describe-load-balancers \ - --query LoadBalancerDescriptions[*].LoadBalancerName \ - --output text | sed -e $'s/\t/ /g') - for elb in $all_balancers; do - local instance_health - instance_health=$(get_instance_health_elb $instance_id $elb) - if [ $? == 0 ]; then - elb_list="$elb_list $elb" - fi - done - else - elb_list=$($AWS_CLI autoscaling describe-auto-scaling-groups \ - --auto-scaling-group-names "${asg_name}" \ - --query AutoScalingGroups[*].LoadBalancerNames \ - --output text | sed -e $'s/\t/ /g') - fi + elb_list=$($AWS_CLI elb describe-load-balancers \ + --query $'LoadBalancerDescriptions[].[join(`,`,Instances[?InstanceId==`'$instance_id'`].InstanceId),LoadBalancerName]' \ + --output text | grep $instance_id | awk '{ORS=" ";print $2}') if [ -z "$elb_list" ]; then return 1 diff --git a/load-balancing/elb/deregister_from_elb.sh b/load-balancing/elb/deregister_from_elb.sh index c497e57..39d89cb 100755 --- a/load-balancing/elb/deregister_from_elb.sh +++ b/load-balancing/elb/deregister_from_elb.sh @@ -44,15 +44,34 @@ if [ $? == 0 -a -n "${asg}" ]; then error_exit "Failed to move instance into standby" else msg "Instance is in standby" + finish_msg exit 0 fi fi -msg "Instance is not part of an ASG, continuing..." +msg "Instance is not part of an ASG, trying with ELB..." 
-msg "Checking that user set at least one load balancer"
-if test -z "$ELB_LIST"; then
-    error_exit "Must have at least one load balancer to deregister from"
+set_flag "dereg" "true"
+
+if [ -z "$ELB_LIST" ]; then
+    error_exit "ELB_LIST is empty. Must name at least one load balancer to deregister from, or use the \"_all_\" or \"_any_\" values."
+elif [ "${ELB_LIST}" = "_all_" ]; then
+    msg "Automatically finding all the ELBs that this instance is registered to..."
+    get_elb_list $INSTANCE_ID
+    if [ $? != 0 ]; then
+        error_exit "Couldn't find any. Must have at least one load balancer to deregister from."
+    fi
+    set_flag "ELBs" "$ELB_LIST"
+elif [ "${ELB_LIST}" = "_any_" ]; then
+    msg "Automatically finding all the ELBs that this instance is registered to..."
+    get_elb_list $INSTANCE_ID
+    if [ $? != 0 ]; then
+        msg "Couldn't find any, but ELB_LIST=_any_ so finishing successfully without deregistering."
+        set_flag "ELBs" ""
+        finish_msg
+        exit 0
+    fi
+    set_flag "ELBs" "$ELB_LIST"
 fi
 
 # Loop through all LBs the user set, and attempt to deregister this instance from them.
@@ -72,7 +91,7 @@ for elb in $ELB_LIST; do
     fi
 done
 
-# Wait for all Deregistrations to finish
+# Wait for all deregistrations to finish
 msg "Waiting for instance to de-register from its load balancers"
 for elb in $ELB_LIST; do
     wait_for_state "elb" $INSTANCE_ID "OutOfService" $elb
@@ -81,9 +100,4 @@ for elb in $ELB_LIST; do
     fi
 done
 
-msg "Finished $(basename $0) at $(/bin/date "+%F %T")"
-
-end_sec=$(/bin/date +%s.%N)
-elapsed_seconds=$(echo "$end_sec - $start_sec" | /usr/bin/bc)
-
-msg "Elapsed time: $elapsed_seconds"
+finish_msg
diff --git a/load-balancing/elb/register_with_elb.sh b/load-balancing/elb/register_with_elb.sh
index fc4e1f2..2a88c65 100755
--- a/load-balancing/elb/register_with_elb.sh
+++ b/load-balancing/elb/register_with_elb.sh
@@ -44,15 +44,44 @@ if [ $? 
== 0 -a -n "${asg}" ]; then
         error_exit "Failed to move instance out of standby"
     else
         msg "Instance is no longer in Standby"
+        finish_msg
         exit 0
     fi
 fi
 
-msg "Instance is not part of an ASG, continuing..."
-
-msg "Checking that user set at least one load balancer"
-if test -z "$ELB_LIST"; then
-    error_exit "Must have at least one load balancer to register to"
+msg "Instance is not part of an ASG, continuing with ELB"
+
+if [ -z "$ELB_LIST" ]; then
+    error_exit "ELB_LIST is empty. Must name at least one load balancer to register to, or use the \"_all_\" or \"_any_\" values."
+elif [ "${ELB_LIST}" = "_all_" ]; then
+    if [ "$(get_flag "dereg")" = "true" ]; then
+        msg "Finding all the ELBs that this instance was previously registered to"
+        if ! ELB_LIST=$(get_flag "ELBs"); then
+            error_exit "$FLAGFILE doesn't exist or is unreadable"
+        elif [ -z "$ELB_LIST" ]; then
+            error_exit "Couldn't find any. Must have at least one load balancer to register to."
+        fi
+    else
+        msg "Assuming this is the first deployment and ELB_LIST=_all_ so finishing successfully without registering."
+        finish_msg
+        exit 0
+    fi
+elif [ "${ELB_LIST}" = "_any_" ]; then
+    if [ "$(get_flag "dereg")" = "true" ]; then
+        msg "Finding all the ELBs that this instance was previously registered to"
+        if ! ELB_LIST=$(get_flag "ELBs"); then
+            error_exit "$FLAGFILE doesn't exist or is unreadable"
+        elif [ -z "$ELB_LIST" ]; then
+            msg "Couldn't find any, but ELB_LIST=_any_ so finishing successfully without registering."
+            remove_flagfile
+            finish_msg
+            exit 0
+        fi
+    else
+        msg "Assuming this is the first deployment and ELB_LIST=_any_ so finishing successfully without registering."
+        finish_msg
+        exit 0
+    fi
 fi
 
 # Loop through all LBs the user set, and attempt to register this instance to them. 
@@ -72,7 +101,7 @@ for elb in $ELB_LIST; do fi done -# Wait for all Registrations to finish +# Wait for all registrations to finish msg "Waiting for instance to register to its load balancers" for elb in $ELB_LIST; do wait_for_state "elb" $INSTANCE_ID "InService" $elb @@ -81,9 +110,6 @@ for elb in $ELB_LIST; do fi done -msg "Finished $(basename $0) at $(/bin/date "+%F %T")" - -end_sec=$(/bin/date +%s.%N) -elapsed_seconds=$(echo "$end_sec - $start_sec" | /usr/bin/bc) +remove_flagfile -msg "Elapsed time: $elapsed_seconds" +finish_msg
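The README changes above say to run `deregister_from_elb.sh` on ApplicationStop and `register_with_elb.sh` on ApplicationStart. For reference, a minimal `appspec.yml` wiring that up might look like the sketch below; the `destination` path and `timeout` values are illustrative assumptions, not part of this change (the deregister timeout should comfortably exceed your ELB's connection-draining window):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp   # hypothetical install path
hooks:
  ApplicationStop:
    # Deregisters the instance (or puts it into ASG Standby) and waits
    # for connection draining, so give it a generous timeout.
    - location: deregister_from_elb.sh
      timeout: 400
      runas: root
  ApplicationStart:
    - location: register_with_elb.sh
      timeout: 120
      runas: root
```

Remember that both scripts source `common_functions.sh`, so all four `.sh` files must be bundled together in the revision.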