
Closing Client leaks dict #51

Closed
jkeys089 opened this issue Jul 11, 2017 · 1 comment

Comments

@jkeys089
Contributor

Disclaimer: I'm new to Python and asyncio, so this may just be my own misuse.

I've written some code to integrate with the auto-discovery feature of AWS ElastiCache. Part of this involves connecting to a memcached cluster address every 60 seconds (it is important to re-connect each time so we re-resolve the DNS name and reach a healthy cluster member). Everything is working fine, but it seems this process of frequently connecting / disconnecting is leaking dicts.
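The reconnect-every-60-seconds pattern described above could be sketched roughly like this (a minimal illustration, not the actual integration code; `refresh_client` and `make_client` are hypothetical names, and the only assumption about the client is that its close() is a coroutine, as aiomcache's is):

```python
import asyncio

async def refresh_client(make_client, interval=60):
    # Periodically rebuild the client so each new connection re-resolves
    # DNS and lands on a healthy cluster member. make_client is any
    # zero-argument factory returning a client with an awaitable close().
    client = make_client()
    try:
        while True:
            await asyncio.sleep(interval)
            old, client = client, make_client()
            await old.close()  # close() is a coroutine and must be awaited
    finally:
        await client.close()
```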

Here is a minimal reproducer using pympler to demonstrate the leak:

from pympler import muppy, summary
import asyncio
import aiomcache

loop = asyncio.get_event_loop()

async def hello_aiomcache():
    mc = aiomcache.Client("127.0.0.1", 11211, loop=loop)
    await mc.set(b"some_key", b"Some value")
    value = await mc.get(b"some_key")
    print(value)
    values = await mc.multi_get(b"some_key", b"other_key")
    print(values)
    await mc.delete(b"another_key")
    mc.close()  # note: close() is a coroutine, but it is not awaited here

# establish a baseline (watch the <class 'dict line)
summary.print_(summary.summarize(muppy.get_objects()))

for i in range(50):
    loop.run_until_complete(hello_aiomcache())

# <class 'dict grows
summary.print_(summary.summarize(muppy.get_objects()))

ds = [ao for ao in muppy.get_objects() if isinstance(ao, dict)]

# leaked dict looks like {'_loop': <_UnixSelectorEventLoop running=False closed=False debug=False>, '_paused': False, '_drain_waiter': None, '_connection_lost': False, '_stream_reader': <StreamReader t=<_SelectorSocketTransport fd=34 read=polling write=<idle, bufsize=0>>>, '_stream_writer': None, '_client_connected_cb': None, '_over_ssl': False}
ds[2364]

It looks like these dicts will hang around forever until loop.close() is called, which confuses me: I don't think I ever want to close the loop I borrowed from tornado via tornado.ioloop.IOLoop.current().asyncio_loop. Is there any other way to properly close / clean up these connections without closing the loop?

@jkeys089
Contributor Author

It looks like the issue was caused by not awaiting the mc.close() call -- definitely caused by my own misuse.
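The distinction matters because calling a coroutine function without awaiting it only creates a coroutine object; the function body never runs, so cleanup never happens. A minimal sketch (FakeClient is a hypothetical stand-in for aiomcache.Client, whose close() is likewise a coroutine):

```python
import asyncio

class FakeClient:
    # Hypothetical stand-in for a client whose close() is a coroutine.
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

async def forgot_await():
    c = FakeClient()
    c.close()          # creates a coroutine object; the body never runs
    return c.closed    # still False -> resources leak

async def did_await():
    c = FakeClient()
    await c.close()    # body actually runs
    return c.closed    # True -> cleanup happened

print(asyncio.run(forgot_await()), asyncio.run(did_await()))
```

Python also emits a "coroutine ... was never awaited" RuntimeWarning for the un-awaited call, which is a useful signal when hunting this kind of leak.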
