This repository was archived by the owner on Jan 9, 2024. It is now read-only.

Commit 187838a

Merge branch 'unstable'
2 parents 038ff1e + 8fe8b71

25 files changed, +283 −73 lines

.travis.yml

Lines changed: 7 additions & 6 deletions
@@ -13,22 +13,23 @@ install:
   - "if [[ $REDIS_VERSION == '3.0' ]]; then REDIS_VERSION=3.0 make redis-install; fi"
   - "if [[ $REDIS_VERSION == '3.2' ]]; then REDIS_VERSION=3.2 make redis-install; fi"
   - "if [[ $REDIS_VERSION == '4.0' ]]; then REDIS_VERSION=4.0 make redis-install; fi"
+  - "if [[ $REDIS_VERSION == '5.0' ]]; then REDIS_VERSION=5.0 make redis-install; fi"
   - pip install -r dev-requirements.txt
   - pip install -e .
   - "if [[ $HIREDIS == '1' ]]; then pip install hiredis; fi"
 env:
-  # Redis 3.0
+  # Redis 3.0 & HIREDIS
   - HIREDIS=0 REDIS_VERSION=3.0
-  # Redis 3.0 and HIREDIS
   - HIREDIS=1 REDIS_VERSION=3.0
-  # Redis 3.2
+  # Redis 3.2 & HIREDIS
   - HIREDIS=0 REDIS_VERSION=3.2
-  # Redis 3.2 and HIREDIS
   - HIREDIS=1 REDIS_VERSION=3.2
-  # Redis 4.0
+  # Redis 4.0 & HIREDIS
   - HIREDIS=0 REDIS_VERSION=4.0
-  # Redis 4.0 and HIREDIS
   - HIREDIS=1 REDIS_VERSION=4.0
+  # Redis 5.0 & HIREDIS
+  - HIREDIS=0 REDIS_VERSION=5.0
+  - HIREDIS=1 REDIS_VERSION=5.0
 script:
   - make start
   - coverage erase

Makefile

Lines changed: 1 addition & 1 deletion
@@ -216,7 +216,7 @@ ifndef REDIS_TRIB_RB
 endif

 ifndef REDIS_VERSION
-	REDIS_VERSION=3.0.7
+	REDIS_VERSION=4.0.10
 endif

 export REDIS_CLUSTER_NODE1_CONF

README.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ True

 ## License & Authors

-Copyright (c) 2013-2017 Johan Andersson
+Copyright (c) 2013-2018 Johan Andersson

 MIT (See docs/License.txt file)
dev-requirements.txt

Lines changed: 3 additions & 3 deletions
@@ -1,9 +1,9 @@
 -r requirements.txt

 coverage>=4.0,<5.0
-pytest>=2.8.3,<3.0.0
-testfixtures>=4.5.0,<5.0.0
-mock>=1.3.0,<2.0.0
+pytest>=2.8.3,<4.0.0
+testfixtures>=4.5.0,<5.5.0
+mock>=1.3.0,<2.1.0
 docopt>=0.6.2,<1.0.0
 tox>=2.2.0,<3.0.0
 python-coveralls>=2.5.0,<3.0.0

docs/authors.rst

Lines changed: 3 additions & 0 deletions
@@ -24,3 +24,6 @@ Authors who contributed code or testing:
 - AngusP - https://github.com/AngusP
 - Doug Kent - https://github.com/dkent
 - VascoVisser - https://github.com/VascoVisser
+- astrohsy - https://github.com/astrohsy
+- Artur Stawiarski - https://github.com/astawiarski
+- Matthew Anderson - https://github.com/mc3ander

docs/license.rst

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 Licensing
 ---------

-Copyright (c) 2013-2016 Johan Andersson
+Copyright (c) 2013-2018 Johan Andersson

 MIT (See docs/License.txt file)

docs/pipelines.rst

Lines changed: 15 additions & 8 deletions
@@ -38,7 +38,7 @@ An `ASK` error means the slot is only partially migrated and that the client can
 The philosophy on pipelines
 ---------------------------

-After playing around with pipelines and thinking about possible solutions that could be used in a cluster setting this document will describe how pipelines work, strengths and weaknesses with the implementation that was chosen.
+After playing around with pipelines and thinking about possible solutions that could be used in a cluster setting this document will describe how pipelines work, strengths and weaknesses of the implementation that was chosen.

 Why can't we reuse the pipeline code in `redis-py`? In short it is almost the same reason why code from the normal redis client can't be reused in a cluster environment and that is because of the slots system. Redis cluster consist of a number of slots that is distributed across a number of servers and each key belongs in one of these slots.

@@ -62,9 +62,16 @@ Consider the following example. Create a pipeline and issue 6 commands `A`, `B`,

 If we look back at the order we executed the commands we get `[A, F]` for the first node and `[B, E, C, D]` for the second node. At first glance this looks like it is out of order because command `E` is executed before `C` & `D`. Why is this not matter? Because no multi key operations can be done in a pipeline, we only have to care the execution order is correct for each slot and in this case it was because `B` & `E` belongs to the same slot and `C` & `D` belongs to the same slot. There should be no possible way to corrupt any data between slots if multi key commands are blocked by the code.

-What is good with this pipeline solution? First we can actually have a pipeline solution that will work in most cases with few commands blocked (only multi key commands). Secondly we can run it in parralel to increase the performance of the pipeline even further, making the benefits even greater.
+What is good with this pipeline solution? First we can actually have a pipeline solution that will work in most cases with few commands blocked (only multi key commands). Secondly we can run it in parallel to increase the performance of the pipeline even further, making the benefits even greater.


+Packing Commands
+----------------
+
+When issuing only a single command, there is only one network round trip to be made. But what if you issue 100 pipelined commands? In a single-instance redis configuration, you still only need to make one network hop. The commands are packed into a single request and the server responds with all the data for those requests in a single response. But with redis cluster, those keys could be spread out over many different nodes.
+
+The client is responsible for figuring out which commands map to which nodes. Let's say for example that your 100 pipelined commands need to route to 3 different nodes? The first thing the client does is break out the commands that go to each node, so it only has 3 network requests to make instead of 100.
+

 Transactions and WATCH
 ----------------------
@@ -135,7 +142,7 @@ This section will describe different types of pipeline solutions. It will list t
 Suggestion one
 **************

-Simple but yet sequential pipeline. This solution acts more like an interface for the already existing pipeline implementation and only provides a simple backwards compatible interface to ensure that code that sexists still will work withouth any major modifications. The good this with this implementation is that because all commands is runned in sequence it will handle `MOVED` or `ASK` redirections very good and withouth any problems. The major downside to this solution is that no commands is ever batched and runned in parralell and thus you do not get any major performance boost from this approach. Other plus is that execution order is preserved across the entire cluster but a major downside is that thte commands is no longer atomic on the cluster scale because they are sent in multiple commands to different nodes.
+Simple but yet sequential pipeline. This solution acts more like an interface for the already existing pipeline implementation and only provides a simple backwards compatible interface to ensure that code that sexists still will work withouth any major modifications. The good this with this implementation is that because all commands is runned in sequence it will handle `MOVED` or `ASK` redirections very good and withouth any problems. The major downside to this solution is that no command is ever batched and ran in parallel and thus you do not get any major performance boost from this approach. Other plus is that execution order is preserved across the entire cluster but a major downside is that thte commands is no longer atomic on the cluster scale because they are sent in multiple commands to different nodes.

 **Good**

@@ -151,24 +158,24 @@ Simple but yet sequential pipeline. This solution acts more like an interface fo
 Suggestion two
 **************

-Current pipeline implementation. This implementation is rather good and works well because it combines the existing pipeline interface and functionality and it also provides a basic handling of `ASK` or `MOVED` errors inside the client. One major downside to this is that execution order is not preserved across the cluster. Altho the execution order is somewhat broken if you look at the entire cluster level becuase commands can be splitted so that cmd1, cmd3, cmd5 get sent to one server and cmd2, cmd4 gets sent to another server. The order is then broken globally but locally for each server it is preserved and maintained correctly. On the other hand i guess that there can't be any commands that can affect different hashslots within the same command so it maybe do not really matter if the execution order is not correct because for each slot/key the order is valid.
-There might be some issues with rebuilding the correct response ordering from the scattered data because each command might be in different sub pipelines. But i think that our current code still handles this correctly. I think i have to figure out some wierd case where the execution order acctually matters. There might be some issues with the nonsupported mget/mset commands that acctually performs different sub commands then it currently supports.
+Current pipeline implementation. This implementation is rather good and works well because it combines the existing pipeline interface and functionality and it also provides a basic handling of `ASK` or `MOVED` errors inside the client. One major downside to this is that execution order is not preserved across the cluster. Although the execution order is somewhat broken if you look at the entire cluster level because commands can be split so that cmd1, cmd3, cmd5 get sent to one server and cmd2, cmd4 gets sent to another server. The order is then broken globally but locally for each server it is preserved and maintained correctly. On the other hand I guess that there can't be any commands that can affect different hashslots within the same command so maybe it really doesn't matter if the execution order is not correct because for each slot/key the order is valid.
+There might be some issues with rebuilding the correct response ordering from the scattered data because each command might be in different sub pipelines. But I think that our current code still handles this correctly. I think I have to figure out some weird case where the execution order actually matters. There might be some issues with the nonsupported mget/mset commands that acctually performs different sub commands then it currently supports.

 **Good**

 - Sequential execution per node

 **Bad**

-- Not sequential execution on the entire pipeline
+- Non sequential execution on the entire pipeline
 - Medium difficult `ASK` or `MOVED` handling



 Suggestion three
 ****************

-There is a even simpler form of pipelines that can be made where all commands is supported as long as they conform to the same hashslot because redis supports that mode of operation. The good thing with this is that sinc all keys must belong to the same slot there can't be very few `ASK` or `MOVED` errors that happens and if they happen they will be very easy to handle because the entire pipeline is kinda atomic because you talk to the same server and only 1 server. There can't be any multiple server communication happening.
+There is a even simpler form of pipelines that can be made where all commands is supported as long as they conform to the same hashslot because REDIS supports that mode of operation. The good thing with this is that since all keys must belong to the same slot there can't be very few `ASK` or `MOVED` errors that happens and if they happen they will be very easy to handle because the entire pipeline is kinda atomic because you talk to the same server and only 1 server. There can't be any multiple server communication happening.

 **Good**

@@ -184,7 +191,7 @@ There is a even simpler form of pipelines that can be made where all commands is
 Suggestion four
 **************

-One other solution is the 2 step commit solution where you send for each server 2 batches of commands. The first command should somehow establish that each keyslot is in the correct state and able to handle the data. After the client have recieved OK from all nodes that all data slots is good to use then it will acctually send the real pipeline with all data and commands. The big problem with this approach is that ther eis a gap between the checking of the slots and the acctual sending of the data where things can happen to the already established slots setup. But at the same time there is no possibility of merging these 2 steps because if step 2 is automatically runned if step 1 is Ok then the pipeline for the first node that will fail will fail but for the other nodes it will suceed but when it should not because if one command gets `ASK` or `MOVED` redirection then all pipeline objects must be rebuilt to match the new specs/setup and then reissued by the client. The major advantage of this solution is that if you have total controll of the redis server and do controlled upgrades when no clients is talking to the server then it can acctually work really well because there is no possibility that `ASK` or `MOVED` will triggered by migrations in between the 2 batches.
+One other solution is the 2 step commit solution where you send for each server 2 batches of commands. The first command should somehow establish that each keyslot is in the correct state and able to handle the data. After the client have recieved OK from all nodes that all data slots is good to use then it will acctually send the real pipeline with all data and commands. The big problem with this approach is that ther eis a gap between the checking of the slots and the acctual sending of the data where things can happen to the already established slots setup. But at the same time there is no possibility of merging these 2 steps because if step 2 is automatically runned if step 1 is Ok then the pipeline for the first node that will fail will fail but for the other nodes it will suceed but when it should not because if one command gets `ASK` or `MOVED` redirection then all pipeline objects must be rebuilt to match the new specs/setup and then reissued by the client. The major advantage of this solution is that if you have total controll of the redis server and do controlled upgrades when no clients is talking to the server then it can actually work really well because there is no possibility that `ASK` or `MOVED` will triggered by migrations in between the 2 batches.

 **Good**
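
To illustrate the "Packing Commands" section added above, here is a minimal, self-contained sketch of splitting pipelined commands into per-node batches. It is not the library's actual implementation: `key_slot()` below stands in for the real CRC16-based hash slot calculation, and the slot-to-node table is fabricated for the example.

```python
from collections import defaultdict
from zlib import crc32  # stand-in hash; Redis Cluster really uses CRC16(key) % 16384


def key_slot(key):
    # Illustrative slot mapper. The real client also honours {hash tags}
    # when deciding which part of the key gets hashed.
    return crc32(key.encode()) % 16384


def pack_by_node(commands, slot_to_node):
    """Group (command, key, ...) tuples by the node that owns each key's slot."""
    batches = defaultdict(list)
    for cmd in commands:
        batches[slot_to_node[key_slot(cmd[1])]].append(cmd)
    return batches


# Fabricated cluster: 16384 slots spread evenly over 3 nodes.
slot_to_node = {slot: "node-{0}".format(slot % 3) for slot in range(16384)}
commands = [("SET", "key-{0}".format(i), i) for i in range(100)]

# 100 pipelined commands collapse into 3 per-node batches, so only
# 3 network requests are needed instead of 100.
print({node: len(batch) for node, batch in pack_by_node(commands, slot_to_node).items()})
```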

docs/release-notes.rst

Lines changed: 21 additions & 1 deletion
@@ -1,6 +1,26 @@
 Release Notes
 =============

+1.3.5 (July 22, 2018)
+--------------
+
+* Add Redis 4 compatability fix to CLUSTER NODES command (See issue #217)
+* Fixed bug with command "CLUSTER GETKEYSINSLOT" that was throwing exceptions
+* Added new methods cluster_get_keys_in_slot() to client
+* Fixed bug with `StrictRedisCluster.from_url` that was ignoring the `readonly_mode` parameter
+* NodeManager will now ignore nodes showing cluster errors when initializing the cluster
+* Fix bug where RedisCluster wouldn't refresh the cluster table when executing commands on specific nodes
+* Add redis 5.0 to travis-ci tests
+* Change default redis version from 3.0.7 to 4.0.10
+* Increase accepted ranges of dependencies specefied in dev-requirements.txt
+* Several major and minor documentation updates and tweaks
+* Add example script "from_url_password_protected.py"
+* command "CLUSTER GETKEYSINSLOT" is now returned as a list and not int
+* Improve support for ssl connections
+* Retry on Timeout errors when doing cluster discovery
+* Added new error class "MasterDownError"
+* Updated requirements for dependency of redis-py to latest version
+
 1.3.4 (Mar 5, 2017)
 -------------------

@@ -79,7 +99,7 @@ Release Notes
 * Implement all "CLUSTER ..." commands as methods in the client class
 * Client now follows the service side setting 'cluster-require-full-coverage=yes/no' (baranbartu)
 * Change the pubsub implementation (PUBLISH/SUBSCRIBE commands) from using one single node to now determine the hashslot for the channel name and use that to connect to
-  a node in the cluster. Other clients that do not use this pattern will not be fully compatible with this client. Known limitations is pattern
+  a node in the cluster. Other clients that do not use this pattern will not be fully compatible with this client. Known limitations is pattern
   subscription that do not work properly because a pattern can't know all the possible channel names in advance.
 * Convert all docs to ReadTheDocs
 * Rework connection pool logic to be more similar to redis-py. This also fixes an issue with pubsub and that connections
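
The 1.3.5 notes above mention cluster_get_keys_in_slot(). A hedged usage sketch follows: the method name is taken from the release notes, while the (slot, count) argument order is an assumption based on the underlying CLUSTER GETKEYSINSLOT <slot> <count> command.

```python
from rediscluster import StrictRedisCluster

rc = StrictRedisCluster(startup_nodes=[{"host": "127.0.0.1", "port": 7000}])

# Per the notes above, the reply is now a list of key names, not an int.
keys = rc.cluster_get_keys_in_slot(1234, 10)  # assumed (slot, count) signature
print(keys)
```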

docs/threads.rst

Lines changed: 0 additions & 9 deletions
@@ -13,15 +13,6 @@ The advantage to this design is that a smart client can communicate with the clu



-Packing Commands
-----------------
-
-When issuing only a single command, there is only one network round trip to be made. But what if you issue 100 pipelined commands? In a single-instance redis configuration, you still only need to make one network hop. The commands are packed into a single request and the server responds with all the data for those requests in a single response. But with redis cluster, those keys could be spread out over many different nodes.
-
-The client is responsible for figuring out which commands map to which nodes. Let's say for example that your 100 pipelined commands need to route to 3 different nodes? The first thing the client does is break out the commands that go to each node, so it only has 3 network requests to make instead of 100.
-
-
-
 Parallel network i/o using threads
 ----------------------------------

docs/upgrading.rst

Lines changed: 6 additions & 1 deletion
@@ -3,11 +3,16 @@ Upgrading redis-py-cluster

 This document describes what must be done when upgrading between different versions to ensure that code still works.

+1.3.2 --> Next Release
+----------------------
+
+If you created the `StrictRedisCluster` (or `RedisCluster`) instance via the `from_url` method and were passing `readonly_mode` to it, the connection pool created will now properly allow selecting read-only slaves from the pool. Previously it always used master nodes only, even in the case of `readonly_mode=True`. Make sure your code don't attempt any write commands over connections with `readonly_mode=True`.
+

 1.3.1 --> 1.3.2
 ---------------

-If your redis instance is configured to not have the `CONFIG ...` comannds enabled due to security reasons you need to pass this into the client object `skip_full_coverage_check=True`. Benefits is that the client class no longer requires the `CONFIG ...` commands to be enabled on the server. Downsides is that you can't use the option in your redis server and still use the same feature in this client.
+If your redis instance is configured to not have the `CONFIG ...` commands enabled due to security reasons you need to pass this into the client object `skip_full_coverage_check=True`. Benefits is that the client class no longer requires the `CONFIG ...` commands to be enabled on the server. Downsides is that you can't use the option in your redis server and still use the same feature in this client.
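
A minimal sketch of the `from_url` plus `readonly_mode` combination described in the new section, assuming a cluster node listening on 127.0.0.1:7000. Since reads may now be served over slave connections, such a client should be kept to read-only commands.

```python
from rediscluster import StrictRedisCluster

# readonly_mode is now honoured when the client is built via from_url.
rc = StrictRedisCluster.from_url("redis://127.0.0.1:7000/0", readonly_mode=True)

print(rc.get("foo"))    # fine: read commands may be routed to slaves
# rc.set("foo", "bar")  # avoid: a write over a readonly_mode client
```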

examples/from_url_password_protected.py

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+from rediscluster import StrictRedisCluster
+
+url = "redis://:password@127.0.0.1:6572/0"
+
+rc = StrictRedisCluster.from_url(url, skip_full_coverage_check=True)
+
+rc.set("foo", "bar")
+
+print(rc.get("foo"))
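
The password and host in the URL above are placeholders. The URL follows redis-py's `from_url` scheme, with an empty username and the password after the first colon:

```python
# redis://:<password>@<host>:<port>/<db>
url = "redis://:mysecret@127.0.0.1:6572/0"
```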

examples/pipeline-incrby.py

Lines changed: 6 additions & 6 deletions
@@ -12,10 +12,10 @@
 pipe.execute()

 pipe = r.pipeline(transaction=False)
-pipe.set("foo-{0}".format(d, d))
-pipe.incrby("foo-{0}".format(d, 1))
-pipe.set("bar-{0}".format(d, d))
-pipe.incrby("bar-{0}".format(d, 1))
-pipe.set("bazz-{0}".format(d, d))
-pipe.incrby("bazz-{0}".format(d, 1))
+pipe.set("foo-{0}".format(d), d)
+pipe.incrby("foo-{0}".format(d), 1)
+pipe.set("bar-{0}".format(d), d)
+pipe.incrby("bar-{0}".format(d), 1)
+pipe.set("bazz-{0}".format(d), d)
+pipe.incrby("bazz-{0}".format(d), 1)
 pipe.execute()
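
The bug fixed above is subtle: str.format() silently discards surplus arguments, so the old calls still produced the right key while the value never left the format call, leaving set() without a value and incrby() without an increment. A quick illustration:

```python
d = 7
print("foo-{0}".format(d, d))  # -> 'foo-7': the second argument is silently
                               # dropped, so the old pipe.set(...) had no value.
print("foo-{0}".format(d), d)  # -> 'foo-7 7': key and value kept separate, as in the fix
```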

rediscluster/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@
 setattr(redis, "StrictClusterPipeline", StrictClusterPipeline)

 # Major, Minor, Fix version
-__version__ = (1, 3, 4)
+__version__ = (1, 3, 5)

 if sys.version_info[0:3] == (3, 4, 0):
     raise RuntimeError("CRITICAL: rediscluster do not work with python 3.4.0. Please use 3.4.1 or higher.")
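
As an aside, a (major, minor, fix) tuple like the one above is conventionally rendered as a dotted version string; this one-liner is illustrative only, not part of the module:

```python
__version__ = (1, 3, 5)
print(".".join(map(str, __version__)))  # -> '1.3.5'
```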
