
Many simultaneous requests create unbounded connections #13

Closed
grant-aterlo opened this issue Jul 20, 2016 · 8 comments

Comments

@grant-aterlo
Contributor

On the pool, maxsize limits only the number of connections that are kept around; if no connection is available in the pool, an unlimited number of new connections will be spawned on demand.

This can cause too many connections to be created for memcache. Is this behavior of maxsize intended? Does it make sense to have a hard limit parameter?
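For illustration, here is a minimal sketch of the behavior being described (a hypothetical NaivePool, not aiomcache's actual code): maxsize bounds only the idle queue, so acquire() spawns a fresh connection whenever the queue is empty, and a burst of concurrent callers drives the total connection count far above maxsize.

import asyncio


class NaivePool:
    """Sketch of the buggy behavior: maxsize bounds only the idle queue."""

    def __init__(self, maxsize=4):
        self.maxsize = maxsize
        self._idle = asyncio.Queue(maxsize=maxsize)  # released connections
        self.created = 0  # total live connections -- unbounded!

    async def _connect(self):
        self.created += 1
        return object()  # stand-in for a real memcached connection

    async def acquire(self):
        if not self._idle.empty():
            return self._idle.get_nowait()
        # No idle connection: spawn a new one, no matter how many exist.
        return await self._connect()

    def release(self, conn):
        if not self._idle.full():
            self._idle.put_nowait(conn)  # extra connections are dropped


async def main():
    pool = NaivePool(maxsize=4)
    # 100 concurrent acquires with nothing released yet -> 100 connections.
    await asyncio.gather(*(pool.acquire() for _ in range(100)))
    print(pool.created)  # 100, far above maxsize


asyncio.run(main())

Running this prints 100: with nothing released yet, every concurrent caller gets its own brand-new connection.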

@fafhrd91
Member

This is a bug. Would you like to make a PR?

@grant-aterlo
Contributor Author

I have a change that fixes the issue. Could I get permissions to create a pull request?

@asvetlov
Member

Everyone has permission to create a pull request, don't they?

After review, @fafhrd91 or I will merge your PR and publish a new aiomcache release.

@argaen
Member

argaen commented Feb 8, 2017

What behavior do we want here? If there are no connections available, should we block until one is released? Or add a hardcoded MAX and only start blocking once MAX is reached?

I would go for the first option because it is more explicit and we don't hide anything from the developer. How is aioredis managing this? @popravich

@popravich
Member

aioredis has a hard limit: no more than maxsize connections will be created in a single pool.
In aioredis we reuse free connections; when all of them are busy we spawn a new one,
and if the maxsize limit is reached, we block.
AFAIR, aiopg and aiomysql act the same.
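A rough sketch of that policy (a hypothetical BoundedPool, not aioredis's actual implementation): reuse an idle connection if one exists, spawn a new one while the total is below maxsize, and otherwise block on the queue until a connection is released.

import asyncio


class BoundedPool:
    """Sketch of the aioredis-style policy: hard cap at maxsize."""

    def __init__(self, maxsize=4):
        self.maxsize = maxsize
        self._idle = asyncio.Queue()
        self._size = 0  # total live connections

    async def _connect(self):
        # No await before the increment here, so the size check in
        # acquire() and this increment cannot interleave.
        self._size += 1
        return object()  # stand-in for a real connection

    async def acquire(self):
        # 1. Reuse an idle connection if available.
        if not self._idle.empty():
            return self._idle.get_nowait()
        # 2. Below the cap: spawn a new connection.
        if self._size < self.maxsize:
            return await self._connect()
        # 3. At the cap: block until someone releases a connection.
        return await self._idle.get()

    def release(self, conn):
        self._idle.put_nowait(conn)


async def main():
    pool = BoundedPool(maxsize=4)
    conns = [await pool.acquire() for _ in range(4)]
    # A fifth acquire now blocks until a connection is released.
    waiter = asyncio.create_task(pool.acquire())
    await asyncio.sleep(0)  # let the waiter start and block
    assert not waiter.done()
    pool.release(conns[0])
    assert (await waiter) is conns[0]
    print("size stays at", pool._size)  # still 4


asyncio.run(main())

In real code, where connecting involves awaiting I/O, the size counter has to be reserved before that first await; otherwise two coroutines can both pass the size check before either has incremented it.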

@argaen
Member

argaen commented Feb 8, 2017

Then I would go with this approach, to keep consistency with the other projects.

@argaen
Member

argaen commented Feb 8, 2017

@grant-aterlo do you have code that reproduces this bug? There is a test case that checks the error you report, and it's passing correctly:

import asyncio

import pytest

from aiomcache.pool import MemcachePool  # assumed imports; the snippet omitted them


@pytest.mark.run_loop
def test_acquire_limit_maxsize(mcache_params, loop):
    pool = MemcachePool(minsize=1, maxsize=1, loop=loop, **mcache_params)
    assert pool.size() == 0

    # Create up to max connections
    _conn = yield from pool.acquire()
    assert pool.size() == 1
    pool.release(_conn)

    @asyncio.coroutine
    def acquire_wait_release():
        conn = yield from pool.acquire()
        assert conn is _conn
        yield from asyncio.sleep(0.01, loop=loop)
        assert len(pool._in_use) == 1
        assert pool.size() == 1
        assert pool._pool.qsize() == 0
        pool.release(conn)

    # Each call creates a fresh coroutine; repeating one coroutine object
    # ([coro] * 10) would be deduplicated by gather and run only once.
    yield from asyncio.gather(*(acquire_wait_release() for _ in range(10)),
                              loop=loop)
    assert pool.size() == 1
    assert len(pool._in_use) == 0

@argaen
Member

argaen commented Feb 8, 2017

Lol, this was already fixed in #14.

@argaen argaen closed this as completed Feb 8, 2017