src/pages/docs/platform/architecture/edge-network.mdx (3 additions, 3 deletions)
@@ -33,7 +33,7 @@ Behind CloudFront, each Ably region employs AWS Network Load Balancers to distri
 Ably uses DNS-based latency routing to direct clients to the nearest available datacenter. The primary endpoint for client connections and HTTP requests is `main.realtime.ably.net`.

-When a client performs a DNS lookup for this endpoint, the DNS service resolves to the closest datacenter, among those that are currently enabled, to the client's location. This latency-based routing ensures that clients connect to the datacenter with the lowest network latency, maximising the responsiveness of the service.
+When a client performs a DNS lookup for this endpoint, the DNS service resolves to the closest datacenter, among those that are currently enabled, to the client's location. This latency-based routing ensures that clients connect to the datacenter with the lowest network latency, maximizing the responsiveness of the service.

 Ably's DNS configuration uses a TTL of 60 seconds, allowing for relatively quick rerouting of traffic if a datacenter becomes unhealthy. The health of each datacenter is continuously monitored, and if issues are detected, Ably can modify the DNS routing to direct traffic away from the affected datacenter within minutes.
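The routing behaviour described in this hunk can be sketched in a few lines: among the datacenters that are currently enabled, the client is directed to the one with the lowest latency. This is a hypothetical illustration, not Ably's implementation; the `pick_datacenter` helper, region names, and latency figures are all made up.

```python
# Hypothetical sketch of latency-based datacenter selection: among the
# datacenters that are currently enabled, the client is routed to the one
# with the lowest measured round-trip latency. Region names and latency
# figures are invented for illustration.

def pick_datacenter(latencies_ms: dict[str, float], enabled: set[str]) -> str:
    """Return the enabled datacenter with the lowest latency to the client."""
    candidates = {dc: ms for dc, ms in latencies_ms.items() if dc in enabled}
    if not candidates:
        raise RuntimeError("no enabled datacenters")
    return min(candidates, key=candidates.get)

latencies = {"us-east-1": 12.0, "eu-west-1": 85.0, "ap-southeast-1": 210.0}

# All regions healthy: the client resolves to the closest region.
print(pick_datacenter(latencies, {"us-east-1", "eu-west-1", "ap-southeast-1"}))
# → us-east-1

# us-east-1 disabled after a health-check failure: traffic is rerouted
# (within the 60-second DNS TTL window) to the next-closest region.
print(pick_datacenter(latencies, {"eu-west-1", "ap-southeast-1"}))
# → eu-west-1
```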
@@ -111,9 +111,9 @@ The geographic distribution of these access points is continuously reviewed and
 #### China connectivity

-Ably's service works in China, with customers successfully using the platform to serve users globally, including within China. The extensive global network with over 700 edge locations provides connectivity to Chinese users.
+Ably's service works in China, with customers successfully using the platform to serve users globally, including within China. The global network with over 700 edge locations provides connectivity to Chinese users.

-However, China operates a national firewall that can block access to foreign websites and services without notice. While Ably is not currently aware of any blocks affecting its services, potential firewall changes could impact service availability. Ably has implemented specific strategies to ensure reliable service in China, including partnerships with local providers, alternative routing arrangements, and region-specific optimizations.
+China operates a national firewall that can block access to foreign websites and services without notice. While Ably is not currently aware of any blocks affecting its services, potential firewall changes could impact service availability. Ably has implemented specific strategies to ensure reliable service in China, including partnerships with local providers, alternative routing arrangements, and region-specific optimizations.
src/pages/docs/platform/architecture/index.mdx (16 additions, 16 deletions)
@@ -14,20 +14,20 @@ Ably's globally distributed infrastructure forms the foundation of the platform,
 Ably characterizes the system across [4 pillars of dependability](https://ably.com/four-pillars-of-dependability):

-* **Performance**: Ably focuses on predictability of latencies to provide certainty in uncertain operating conditions.
+* Performance: Focuses on predictability of latencies to provide certainty in uncertain operating conditions.
   * `<30ms` round trip latency within datacenter (99th percentile)
   * `<65ms` global round trip latency (99th percentile)
-* **Integrity**: Guarantees for message ordering and delivery.
+* Integrity: Guarantees for message ordering and delivery.
   * Exactly-once delivery semantics
   * Guaranteed message ordering from publishers to subscribers
   * Automatic connection recovery with message continuity
-* **Reliability**: Fault tolerant architecture at regional and global levels to survive multiple failures without outages.
+* Reliability: Fault tolerant architecture at regional and global levels to survive multiple failures without outages.
   * 100% message delivery guarantee through multi-region redundancy
   * 99.999999% message survivability
   * 99.99999999% persisted data survivability
   * Edge network failure resolution by the client SDKs within 30s
   * Automated routing of all traffic away from an abrupt failure of datacenter in less than two minutes
-* **Availability**: Meticulously designed to provide continuity of service even in the case of instance or whole datacenter failures.
+* Availability: Designed to provide continuity of service even in the case of instance or whole datacenter failures.
   * 99.999% global service availability (5 minutes 15 seconds of downtime per year)
   * 50% global capacity margin for instant demand surges
@@ -37,12 +37,12 @@ Ably's platform is a global service that supports all realtime messaging and ass
 The platform has been designed with the following primary objectives in mind:

-* **Horizontal scalability**: As more nodes are added, load is automatically redistributed across the cluster so that global capacity increases linearly with the number of instances Ably runs.
+* Horizontal scalability: As more nodes are added, load is automatically redistributed across the cluster so that global capacity increases linearly with the number of instances.
-* **No single point of congestion**: As the system scales, there is no single point of congestion for any data path, and data within the system is routed peer-to-peer, ensuring no single component becomes overloaded as traffic scales for an individual app or across the cluster.
+* No single point of congestion: As the system scales, there is no single point of congestion for any data path, and data within the system is routed peer-to-peer, ensuring no single component becomes overloaded as traffic scales for an individual app or across the cluster.
-* **Fault tolerance**: Faults in the system are expected, and the system must have redundancy at every layer in the stack to ensure availability and reliability.
+* Fault tolerance: Faults in the system are expected, and the system must have redundancy at every layer in the stack to ensure availability and reliability.
-* **Autonomy**: Each component in the system should be able to operate fully without reliance on a global controller. For example, two isolated data centers should continue to service realtime requests while isolated.
+* Autonomy: Each component in the system should be able to operate fully without reliance on a global controller. For example, two isolated data centers should continue to service realtime requests while isolated.
-* **Consistent low latencies**: Within data centers, Ably aims for latencies to be in the low 10s of milliseconds and less than 100ms globally. Consistently achieving low latencies requires careful consideration of the placement of data and services across the system as well as prioritisation of the computation performed by each service.
+* Consistent low latencies: Within data centers, latencies are in the low 10s of milliseconds and less than 100ms globally. Consistently achieving low latencies requires careful consideration of the placement of data and services across the system as well as prioritisation of the computation performed by each service.
-* **Quality of service**: Ably intentionally designs for high QoS targets to enable sophisticated realtime applications that would be impossible on platforms with weaker guarantees.
+* Quality of service: Designed for high QoS targets to enable sophisticated realtime applications that would be impossible on platforms with weaker guarantees.

 ## Cluster architecture
@@ -52,10 +52,10 @@ Each regional deployment operates independently, handling its own subscriber con
 Ably's architecture consists of four primary layers:

-* **Routing Layer**: Provides intelligent, latency optimized routing for robust end client connectivity.
+* Routing Layer: Provides intelligent, latency optimized routing for robust end client connectivity.
-* **Gossip Layer**: Distributes network topology information and facilitates service discovery.
+* Gossip Layer: Distributes network topology information and facilitates service discovery.
-* **Frontend Layer**: Handles REST requests and maintains realtime connections (such as WebSocket, Comet and SSE).
+* Frontend Layer: Handles REST requests and maintains realtime connections (such as WebSocket, Comet and SSE).
-* **Core Layer**: Performs all central message processing for channels.
+* Core Layer: Performs all central message processing for channels.
@@ -144,5 +144,5 @@ Once a message is acknowledged, it is stored in multiple physical locations, pro
 Messages are stored in two ways:

-* **Ephemeral Storage**: Messages are held for 2 minutes in an in-memory database (Redis). This data is distributed according to Ably's consistent hashing mechanism and relocated when channels move between nodes. This short-term storage enables low-latency message delivery and retrieval and supports features like [connection recovery](/docs/connect/states).
+* Ephemeral Storage: Messages are held for 2 minutes in an in-memory database (Redis). This data is distributed according to the consistent hashing mechanism and relocated when channels move between nodes. This short-term storage enables low-latency message delivery and retrieval and supports features like [connection recovery](/docs/connect/states).
-* **Persisted Storage**: Messages can optionally be stored persistently on disk if longer term retention is required. Ably uses a globally distributed and clustered database (Cassandra) for this purpose, deployed across multiple data centers with message data replicated to three regions to ensure integrity and availability even if a region fails.
+* Persisted Storage: Messages can optionally be stored persistently on disk if longer term retention is required. Uses a globally distributed and clustered database (Cassandra) for this purpose, deployed across multiple data centers with message data replicated to three regions to ensure integrity and availability even if a region fails.
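The consistent hashing mechanism mentioned for ephemeral storage can be pictured as a hash ring: channels map to positions on the ring, and each channel is owned by the first node clockwise from its position, so node changes relocate only nearby channels. This is a generic textbook sketch, not Ably's actual placement code; the node and channel names are hypothetical.

```python
# Generic consistent-hash-ring sketch (not Ably's actual placement code).
# Each node contributes several virtual points on the ring; a channel is
# owned by the first point at or after its own hash. Adding or removing a
# node only moves the channels adjacent to its points, which is why channel
# data can be relocated incrementally when cluster membership changes.

import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes: list[str], vnodes: int = 64) -> None:
        # Sorted list of (ring position, owning node) pairs.
        self._ring: list[tuple[int, str]] = sorted(
            (_hash(f"{node}-{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    def owner(self, channel: str) -> str:
        """Node owning the channel: first ring point at/after its hash."""
        idx = bisect.bisect(self._points, _hash(channel)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.owner("chat:lobby"))  # deterministically one of the three nodes
```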
src/pages/docs/platform/architecture/performance.mdx (1 addition, 1 deletion)
@@ -23,7 +23,7 @@ The first objective is minimizing latency and latency variance for message deliv
 The second objective is maximizing single-channel throughput in terms of both message rate and bandwidth. This ensures that even when a channel handles high volumes of traffic, such as during live events or peak usage periods, the system can efficiently process and distribute messages without degradation.

-Ably achieves a [**global mean latency of 37ms**](/docs/platform/architecture/latency) to provide the best possible realtime experience to your users.
+Ably achieves a [global mean latency of 37ms](/docs/platform/architecture/latency).

 Beyond average latency, Ably focuses on the performance of the slowest percentiles of messages (p95, p99, p99.9) to ensure consistent performance for all messages. These tail latencies often reveal performance issues that might be hidden by average measurements.
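The point about tail latencies in this hunk can be made concrete with a small sketch: the latency samples below are hypothetical, and the nearest-rank percentile helper is a generic computation, not part of Ably's tooling.

```python
# Nearest-rank percentile over raw latency samples, illustrating how a
# single slow outlier is invisible in the mean but exposed at the tail.
# The latency samples are hypothetical.

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest sample covering p% of the data."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100)
    return ordered[int(rank) - 1]

# 99 fast messages plus one slow outlier.
samples = [30.0] * 99 + [900.0]

print(sum(samples) / len(samples))  # mean: 38.7 — looks healthy
print(percentile(samples, 95))      # p95: 30.0
print(percentile(samples, 99.9))    # p99.9: 900.0 — the outlier surfaces
```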