16 changes: 16 additions & 0 deletions src/pages/docs/platform/integrations/queues.mdx
@@ -78,6 +78,22 @@ The following steps explain how to provision an Ably Queue:
A [Dead Letter Queue](#deadletter) is automatically created. It stores messages that fail to be processed or that expire.
</Aside>

### Modifying queues <a id="modify"/>

Queues are immutable: they cannot be modified after creation. Changing any setting requires creating a new queue and migrating to it.

This includes limits such as TTL and max length. Even after [upgrading](/docs/account/billing-and-payments#upgrading) an Ably account, existing queues retain the limits they were created with; to benefit from the higher limits of a new plan, the queue must be replaced.

Steps to switch to a new queue:

1. Create a new queue with the required settings.
2. Update consumers to subscribe to both the old and new queues (see the sketch after this list).
3. Change queue rules to route messages to the new queue.
4. Wait for the old queue to drain completely.
5. Delete the old queue once empty.

This process ensures no message loss during the transition.
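
As an illustration of step 2, the following sketch consumes from both queues over AMQP using the `amqplib` package. The endpoint, vhost, credentials, and queue names are placeholders; copy the real values from the queue's dashboard.

<Code>
```javascript
// Minimal sketch only: consume from the old and new queue with the same
// handler during migration. All credentials and names below are placeholders.
const amqp = require('amqplib');

const ENDPOINT = 'amqps://<API_KEY_NAME>:<API_KEY_SECRET>@<QUEUE_ENDPOINT>/<VHOST>';
const QUEUES = ['<APP_ID>:old-queue', '<APP_ID>:new-queue'];

async function consumeDuringMigration() {
  const connection = await amqp.connect(ENDPOINT);
  const channel = await connection.createChannel();

  for (const queue of QUEUES) {
    // The same handler processes messages from either queue, so behaviour
    // is identical regardless of which queue a message arrives on.
    await channel.consume(queue, (message) => {
      if (message) {
        console.log(`Received from ${queue}:`, message.content.toString());
        channel.ack(message);
      }
    });
  }
}

consumeDuringMigration().catch(console.error);
```
</Code>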

### Configure a Queue rule <a id="config"/>

After you provision a Queue, create one or more Queue rules to republish messages, presence events, or channel events from channels into that queue.
Expand Down
10 changes: 9 additions & 1 deletion src/pages/docs/platform/integrations/webhooks/index.mdx
@@ -92,7 +92,15 @@ The backoff delay follows the formula: `delay = delay * sqrt(2)` where the initi

The backoff for consecutively failing requests increases until it reaches 60s. All subsequent retries for failed requests are then made at 60s intervals until a request succeeds. The queue of events is retained for 5 minutes. If an event cannot be delivered within that time, it is discarded to prevent the queue from growing indefinitely.
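
As a rough illustration of the schedule this formula produces, the delays can be computed as follows. The initial delay of 1 second used in the example call is an assumed value for illustration only.

<Code>
```javascript
// Illustrative only: compute the retry delays produced by
// `delay = delay * sqrt(2)`, capped at a maximum delay. The initial delay
// passed in below is an assumed example value, not the actual configuration.
function retryDelays(initialDelay, maxDelay, attempts) {
  const delays = [];
  let delay = initialDelay;
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(delay, maxDelay));
    delay *= Math.SQRT2;
  }
  return delays;
}

// retryDelays(1, 60, 8) → approximately [1, 1.4, 2, 2.8, 4, 5.7, 8, 11.3] seconds
```
</Code>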

## Message ordering <a id="ordering"/>

Webhooks do not always preserve message order in the same way Ably channels do; whether ordering is preserved depends on the webhook configuration.

Batched webhooks preserve message order for messages from the same publisher on the same channel. If a batch fails and is retried, newer messages are included in the retried batch while correct order is maintained. Messages published in different regions may arrive in separate batches, but per-publisher ordering is still maintained.

Single request webhooks cannot guarantee order: each message triggers its own HTTP request, and arrival order is not predictable. If your endpoint supports HTTP/2, ordering can be restored through request pipelining over a single connection.

Publishing via REST (not realtime) removes ordering guarantees even with batching enabled, as REST uses a connection pool where requests can complete out of order.
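
For instance, a receiving endpoint can take advantage of this per-publisher ordering by processing a batched request body in array order. The sketch below uses Express and assumes a batched, enveloped payload with an `items` array containing `data.messages`; verify the exact shape against the batched event payload reference.

<Code>
```javascript
// Sketch of a webhook receiver that processes a batched payload sequentially.
// The payload field names ('items', 'data.messages') are assumptions here —
// check them against the batched event payload documentation.
const express = require('express');
const app = express();

app.use(express.json());

app.post('/ably-webhook', (req, res) => {
  const items = req.body.items || [];
  for (const item of items) {
    const messages = (item.data && item.data.messages) || [];
    for (const message of messages) {
      // Handling messages in array order keeps the per-publisher,
      // per-channel ordering described above.
      console.log('Processing message', message.id, message.data);
    }
  }
  // Respond promptly; a failed response triggers the retry behaviour
  // described in the previous section.
  res.sendStatus(200);
});

app.listen(3000);
```
</Code>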

Given the various potential combinations of enveloped, batched, and message sources, it's helpful to understand what to expect in different scenarios.

70 changes: 69 additions & 1 deletion src/pages/docs/platform/integrations/webhooks/lambda.mdx
@@ -57,7 +57,6 @@ The following steps show you how to create a policy for an AWS Lambda.

<Code>
```json
{
  "Version": "2012-10-17",
  "Statement": [
@@ -105,3 +104,72 @@ Then ensure the checkbox for the policy is selected.
8. You don't need to add tags, so click **Next: Review**.
9. Enter a suitable name for your role.
10. Click **Create Role**.

## Lambda retry behavior <a id="retry"/>

Ably invokes Lambda functions asynchronously using the `event` invocation type. When a function returns an error, AWS Lambda automatically retries the execution up to two more times with delays between attempts (1 minute, then 2 minutes).

Lambda functions might run multiple times for the same Ably event. Design your functions to handle this by making them idempotent or by checking for duplicate processing, as in the sketch below.
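
A minimal sketch of duplicate detection, assuming an enveloped payload whose `messages` array entries carry a unique `id`. The in-memory set is for illustration only; a real deployment would use a persistent store such as DynamoDB, because Lambda containers do not share memory across all invocations.

<Code>
```javascript
// Illustration only: de-duplicate retried invocations using the message id.
// Replace the in-memory Set with a persistent store for production use.
const processed = new Set();

exports.handler = async (event) => {
  for (const message of event.messages || []) {
    if (processed.has(message.id)) {
      console.log(`Skipping already-processed message ${message.id}`);
      continue;
    }
    processed.add(message.id);
    // ... process the message here
  }
  return 'done';
};
```
</Code>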

You can configure retry behavior in your AWS Lambda console under the function's asynchronous invocation settings. See the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-errors) for details on adjusting retry settings.
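
The same settings can also be applied programmatically. The sketch below uses the AWS SDK for JavaScript (v3) `PutFunctionEventInvokeConfig` operation; the region, function name, and values are placeholders.

<Code>
```javascript
// Example only: adjust asynchronous invocation retry settings with the
// AWS SDK for JavaScript (v3). Region, function name, and values are placeholders.
const { LambdaClient, PutFunctionEventInvokeConfigCommand } = require('@aws-sdk/client-lambda');

const client = new LambdaClient({ region: '<AWS_REGION>' });

async function configureRetries() {
  await client.send(new PutFunctionEventInvokeConfigCommand({
    FunctionName: '<YOUR_FUNCTION_NAME>',
    MaximumRetryAttempts: 0,        // 0-2; 0 disables automatic retries
    MaximumEventAgeInSeconds: 300   // discard events older than 5 minutes
  }));
}

configureRetries().catch(console.error);
```
</Code>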

## Routing messages with integration rules <a id="routing"/>

When an integration rule triggers your Lambda function, the function can process the incoming message and publish a response back to Ably. This enables message routing and transformation patterns across your channels.

### Lambda function setup <a id="lambda-setup"/>

Your Lambda Function must be packaged with the Ably SDK and uploaded to AWS Lambda as a zip file.

The example below uses `Ably.Rest` instead of `Ably.Realtime` because the REST API is more efficient for one-off publishing operations and avoids WebSocket connection overhead in Lambda's stateless environment.

The following example shows an AWS Lambda function that receives Ably events and publishes responses back to an Ably channel:

<Code>
```javascript
'use strict';

const Ably = require('ably');
const inspect = require('util').inspect;

exports.handler = (event, context, callback) => {
  console.log("Received the following event from Ably: ", inspect(event));

  // Parse the incoming event
  // With enveloping enabled: event contains 'source', 'appId', 'channel',
  // 'site', 'ruleId', and 'messages' or 'presence' arrays
  // With enveloping disabled: event is the message data directly
  const details = JSON.parse(event.messages[0].data);

  // Use Ably.Rest for efficient REST-based publishing
  // This avoids the overhead of establishing a WebSocket connection
  const ably = new Ably.Rest({ key: '<YOUR_API_KEY>' });

  // Get the target channel and publish the response
  // Important: Do not publish to a channel that triggers this same rule
  // to avoid infinite loops
  const channel = ably.channels.get('<TARGET_CHANNEL_NAME>');

  channel.publish('lambdaresponse', 'success', (err) => {
    if (err) {
      console.log("Error publishing back to ably:", inspect(err));
      callback(err);
    } else {
      // Only call callback() after publish completes
      // to ensure the HTTP request finishes before function execution ends
      callback(null, 'success');
    }
  });
};
```
</Code>

## Handling high message volumes <a id="high-volume"/>

Rate limiting is necessary when the message rate on source channels exceeds what your Lambda function can process. Without it, unprocessed messages accumulate in a backlog that offers no visibility or management options.

### Using Kinesis for high-volume processing <a id="kinesis-processing"/>

For high-volume message processing, use an intermediary queue such as [AWS Kinesis](https://aws.amazon.com/kinesis/). Configure [Integration Rules](/docs/platform/integrations/webhooks) to send events to Kinesis, then stream from Kinesis into your Lambda function.

See [AWS documentation on streaming from Kinesis to Lambda](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html) for configuration details.
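
As an illustration, a Lambda handler attached to the Kinesis stream receives records in batches and decodes each base64-encoded payload before processing. The payload shape depends on your integration rule settings (for example, whether enveloping is enabled), so treat the parsing below as a sketch.

<Code>
```javascript
// Sketch of a Lambda handler consuming Ably events routed through Kinesis.
// Kinesis delivers records in batches; each record's data is base64-encoded.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf8');
    // The decoded payload format depends on the integration rule's
    // envelope settings — adjust the parsing accordingly.
    const ablyEvent = JSON.parse(payload);
    console.log('Processing Ably event from Kinesis:', ablyEvent);
    // ... process the messages here
  }
};
```
</Code>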