diff --git a/src/pages/docs/platform/integrations/queues.mdx b/src/pages/docs/platform/integrations/queues.mdx
index 2ca26a3f4d..3a3414be72 100644
--- a/src/pages/docs/platform/integrations/queues.mdx
+++ b/src/pages/docs/platform/integrations/queues.mdx
@@ -78,6 +78,22 @@ The following steps explain how to provision an Ably Queue:
A [Dead Letter Queue](#deadletter) is automatically created. It stores messages that fail to be processed, or expire.
+### Modifying queues
+
+Queues are immutable: their settings cannot be changed after creation. Applying different settings requires creating a new queue and migrating to it.
+
+This includes limits such as TTL and max length. Even after [upgrading](/docs/account/billing-and-payments#upgrading) an Ably account, existing queues retain the limits they were created with. To obtain the higher limits of a new plan, the queue must be replaced.
+
+Steps to switch to a new queue:
+
+1. Create a new queue with the required settings.
+2. Update consumers to subscribe to both the old and new queues.
+3. Change queue rules to route messages to the new queue.
+4. Wait for the old queue to drain completely.
+5. Delete the old queue once empty.
+
+This process ensures no message loss during the transition.
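Step 2 above, consuming from both queues during the cutover, can be sketched as follows. This is a sketch only: it assumes an AMQP channel object (for example from the `amqplib` package) connected to Ably's queue endpoint, and the queue names are illustrative:

```javascript
// Sketch: during migration, consume from both the old and the new queue so
// that no messages are missed while queue rules are switched over.
// Queue names are illustrative; Ably queue names are namespaced per app.
const QUEUES = ['myappid:old-queue', 'myappid:new-queue'];

// `channel` is an AMQP channel (e.g. from amqplib) connected to Ably's
// queue endpoint; it is injected so the consumer logic stays testable.
async function consumeAll(channel, handleMessage) {
  for (const queue of QUEUES) {
    await channel.consume(queue, (msg) => {
      handleMessage(queue, msg);
      channel.ack(msg); // acknowledge so the message leaves the queue
    });
  }
}
```

Once the old queue's message count reaches zero it can be deleted, completing step 5.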
+
### Configure a Queue rule
After you provision a Queue, create one or more Queue rules to republish messages, presence events, or channel events from channels into that queue.
diff --git a/src/pages/docs/platform/integrations/webhooks/index.mdx b/src/pages/docs/platform/integrations/webhooks/index.mdx
index d0b7247b48..78d6cd063f 100644
--- a/src/pages/docs/platform/integrations/webhooks/index.mdx
+++ b/src/pages/docs/platform/integrations/webhooks/index.mdx
@@ -92,7 +92,15 @@ The backoff delay follows the formula: `delay = delay * sqrt(2)` where the initi
The backoff for consecutively failing requests increases until it reaches 60s. All subsequent retries for failed requests are then made at 60s intervals until a request succeeds. The queue of events is retained for 5 minutes; if an event cannot be delivered within that time, it is discarded to prevent the queue from growing indefinitely.
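As a sketch, the schedule above can be computed like this (the initial delay is left as a parameter rather than assuming Ably's value):

```javascript
// Compute the webhook retry delay schedule: each consecutive failure
// multiplies the delay by sqrt(2), capped at 60 seconds.
function retryDelays(initialDelaySecs, attempts) {
  const delays = [];
  let delay = initialDelaySecs;
  for (let i = 0; i < attempts; i++) {
    delays.push(delay);
    delay = Math.min(delay * Math.SQRT2, 60);
  }
  return delays;
}

console.log(retryDelays(1, 10)); // grows towards the 60s ceiling
```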
-### Batched event payloads
+## Message ordering
+
+Webhooks do not always preserve message order in the way Ably channels do; whether order is preserved depends on the webhook configuration.
+
+Batched webhooks preserve message order for messages from the same publisher on the same channel. If a batch fails and is retried, newer messages are included in the retry while correct order is maintained. Messages from different regions might arrive in separate batches, but per-publisher ordering is still preserved.
+
+Single-request webhooks cannot guarantee order: each message triggers its own HTTP request, and arrival order is not predictable. If the receiving server supports HTTP/2, ordering can be restored by pipelining requests over a single connection.
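Where order matters for single-request webhooks, one mitigation is to buffer briefly on the receiving side and re-sort by the `timestamp` field Ably assigns to each message. A minimal sketch (the buffering strategy itself is left out; message shapes are illustrative):

```javascript
// Sketch: re-sort received webhook messages by their Ably-assigned
// timestamp (milliseconds since epoch) before processing, to recover
// per-publisher order when requests arrive out of order.
function sortByTimestamp(messages) {
  return [...messages].sort((a, b) => a.timestamp - b.timestamp);
}

// Example: three messages delivered out of order
const received = [
  { name: 'update', timestamp: 1700000000300 },
  { name: 'create', timestamp: 1700000000100 },
  { name: 'delete', timestamp: 1700000000200 },
];
const ordered = sortByTimestamp(received);
```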
+
+Publishing via REST (rather than realtime) removes ordering guarantees even with batching enabled, because REST uses a connection pool in which requests can complete out of order.
+
+### Batched event payloads
Given the various potential combinations of enveloped, batched, and message sources, it's helpful to understand what to expect in different scenarios.
diff --git a/src/pages/docs/platform/integrations/webhooks/lambda.mdx b/src/pages/docs/platform/integrations/webhooks/lambda.mdx
index 401eebfa21..fe822ce882 100644
--- a/src/pages/docs/platform/integrations/webhooks/lambda.mdx
+++ b/src/pages/docs/platform/integrations/webhooks/lambda.mdx
@@ -57,7 +57,6 @@ The following steps show you how to create a policy for an AWS Lambda.
```json
-{
{
"Version": "2012-10-17",
"Statement": [
@@ -105,3 +104,72 @@ Then ensure the checkbox for the policy is selected.
8. You don't need to add tags so click **Next: Review**.
9. Enter a suitable name for your role.
10. Click **Create Role**.
+
+## Lambda retry behavior
+
+Ably invokes Lambda functions asynchronously, using the `Event` invocation type. When a function returns an error, AWS Lambda automatically retries the invocation up to two more times, with a delay before each retry (1 minute, then 2 minutes).
+
+Lambda functions might run multiple times for the same Ably event. Design functions to handle this by making them idempotent or checking for duplicate processing.
+
+You can configure retry behavior in your AWS Lambda console under the function's asynchronous invocation settings. See the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-errors) for details on adjusting retry settings.
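A minimal idempotency sketch: skip any Ably message whose `id` has already been processed. The in-memory `Set` is only illustrative; a real function would need an external store (for example DynamoDB) because Lambda execution environments are ephemeral:

```javascript
// IDs of messages that have already been handled. In production this
// would live in an external store, not in the execution environment.
const processed = new Set();

// Process a message only if its ID has not been seen before.
// Returns true if the handler ran, false if it was a duplicate.
function processOnce(message, handler) {
  if (processed.has(message.id)) {
    return false; // duplicate delivery, e.g. from a Lambda retry
  }
  processed.add(message.id);
  handler(message);
  return true;
}
```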
+
+## Routing messages with integration rules
+
+When an integration rule triggers your Lambda function, it can process the incoming message and publish a response back to Ably. This enables message routing and transformation patterns across your channels.
+
+### Lambda function setup
+
+Your Lambda function must be packaged with the Ably SDK and uploaded to AWS Lambda as a zip file.
+
+The example uses `Ably.Rest` rather than `Ably.Realtime` because the REST API is more efficient for one-off publishing operations and avoids WebSocket connection overhead in Lambda's stateless environment.
+
+The following example shows an AWS Lambda function that receives Ably events and publishes responses back to an Ably channel:
+
+```javascript
+'use strict';
+
+const Ably = require('ably');
+const inspect = require('util').inspect;
+
+exports.handler = (event, context, callback) => {
+ console.log("Received the following event from Ably: ", inspect(event));
+
+ // Parse the incoming event
+ // With enveloping enabled: event contains 'source', 'appId', 'channel',
+ // 'site', 'ruleId', and 'messages' or 'presence' arrays
+ // With enveloping disabled: event is the message data directly
+ const details = JSON.parse(event.messages[0].data);
+
+ // Use Ably.Rest for efficient REST-based publishing
+ // This avoids the overhead of establishing a WebSocket connection
+  const ably = new Ably.Rest({ key: '' }); // add your Ably API key
+
+ // Get the target channel and publish the response
+ // Important: Do not publish to a channel that triggers this same rule
+ // to avoid infinite loops
+  const channel = ably.channels.get(''); // set your target channel name
+
+ channel.publish('lambdaresponse', 'success', (err) => {
+    if (err) {
+ console.log("Error publishing back to ably:", inspect(err));
+ callback(err);
+ } else {
+ // Only call callback() after publish completes
+ // to ensure the HTTP request finishes before function execution ends
+ callback(null, 'success');
+ }
+ });
+};
+```
+
+## Handling high message volumes
+
+Rate limiting is necessary when the message rate on source channels exceeds what your Lambda function can process. Without rate limiting, unprocessed messages accumulate in a backlog with no visibility or management options.
+
+### Using Kinesis for high-volume processing
+
+For high-volume message processing, use an intermediary queue such as [AWS Kinesis](https://aws.amazon.com/kinesis/). Configure [Integration Rules](/docs/platform/integrations/webhooks) to send events to Kinesis, then stream from Kinesis into your Lambda function.
+
+See [AWS documentation on streaming from Kinesis to Lambda](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html) for configuration details.