reconnect #838
Conversation
…d except TimeoutError - raise WebSocketTimeoutException
…ent compatibility): WrappedDispatcher (for use with generic event dispatchers such as pyevent and rel); create_dispatcher() accepts dispatcher kwarg (default None), and if it is specified, returns a WrappedDispatcher; use create_dispatcher() (passing specified dispatcher if any) every time (regardless of dispatcher specification)
…fault False] is True) to setSock() (prevents those lines from running on ConnectionRefusedError)
… mode); added stack frame count to disconnect (warning) log; grossly oversimplified ;)
…edDispatcher.timeout()
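
The commit summaries above describe a create_dispatcher() that wraps a generic event dispatcher when one is passed. As a rough illustration only, the selection logic might look like the following sketch; the stub classes and exact signature here are assumptions, not the PR's actual diff:

```python
# Hedged sketch of the dispatcher-selection logic the commit summaries
# describe; stub classes stand in for the real dispatcher implementations.
class Dispatcher: pass          # built-in synchronous dispatcher (stand-in)
class SSLDispatcher: pass       # synchronous dispatcher for SSL sockets (stand-in)
class WrappedDispatcher: pass   # wraps a generic event dispatcher like rel (stand-in)

def create_dispatcher(app, ping_timeout, dispatcher=None, is_ssl=False):
    # If a generic event dispatcher (rel, pyevent, ...) was supplied, wrap it;
    # otherwise fall back to one of the built-in synchronous dispatchers.
    if dispatcher:
        return WrappedDispatcher()
    return SSLDispatcher() if is_ssl else Dispatcher()
```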
Is there an ETA on when this feature will be released?
@maksimu I'm away from my computer right now, but later this month I plan to catch up on everything.
Great upgrade, thanks @bubbleboy14! My comments:
- I found no issues when testing these changes.
- To err on the cautious side, I will make a new 1.4.0 release for this in case the upgrades have any unexpected side effects that I didn't catch.
- For future reference, the CI linting error exists in the master branch and was not introduced by this PR.
Firstly, thanks for this important upgrade! Here is my client code:

def on_message(self, message):
    print(message)

def on_error(self, error):
    print("### error ###:", error)

def on_close(self, close_status_code, close_msg):
    print("### closed ###:", close_status_code, close_msg)

if __name__ == '__main__':
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp(
        'ws://localhost:8765/',
        on_message=on_message,
        on_error=on_error,
        on_close=on_close)
    scheduler = BackgroundScheduler(daemon=False)
    scheduler.add_job(ws.send, 'interval', seconds=2, args=['message'])
    scheduler.start()
    print("Running forever...")
    ws.run_forever()
    print("Thread terminating...")
Hello @NadavK. Hm, you're talking about this one, right:
On your client, are you using the BackgroundScheduler in PyBackground (https://pypi.org/project/PyBackground/) or apscheduler (https://apscheduler.readthedocs.io/en/stable/userguide.html) or some other project? Could you please show us your full client code, or at least the import lines? Thanks!
I can recreate what @NadavK shared. My opinion is that this is expected behavior, and I updated the README to clarify this detail. I will share my explanation and then the code I used to recreate it. The "client fails" action happens when the …

Server code (lightly modified from websocket/tests/echo-server.py in this repo):

Client code (lightly modified from the Long-Lived Connection code):
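
Neither snippet survived extraction here. For reference, a hedged reconstruction of the server half, in the spirit of that test file, might look like this (assuming an older release of the third-party "websockets" package where handlers take (websocket, path)):

```python
# Hedged reconstruction, not the exact snippet from the comment above.
import asyncio
import websockets

async def echo(websocket, path):
    async for message in websocket:
        await websocket.send(message)  # echo every frame back to the client

start_server = websockets.serve(echo, "localhost", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
```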
@engn33r I agree with your logic here and believe this is the right decision. Do you have a suggestion for an exponential reconnection that doesn't result in a potential stack overflow, like calling …
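
One way to get exponential backoff without any stack growth is to reconnect in a flat loop rather than recursively. A minimal sketch, not part of this PR (the function name and cap are placeholders):

```python
# Reconnect in a loop with exponential backoff, so nothing recurses
# and the stack cannot grow.
import time
import websocket

def run_with_backoff(url, max_delay=60):
    delay = 1
    while True:
        ws = websocket.WebSocketApp(url)
        ws.run_forever()                   # returns once the connection ends
        time.sleep(delay)                  # back off before the next attempt
        delay = min(delay * 2, max_delay)  # double the wait, up to a cap
```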
In my case, I control the server and the client, and my absolute requirement is that no matter what, the client should retry its connection if it goes away for any reason. Because of that, I decided to put that logic outside of websocket-client, in the launchd configuration which runs it, to restart it whenever the process exits. The discussion above reinforces my plan to stick with the older version of websocket-client, and leave the reconnection outside of it. Actually, I can probably continue to update versions of websocket-client, and simply continue to avoid trying to use a dispatcher. The project I'm using it with is a nice controller for my automated blinds, https://github.com/brunchboy/shade
Hello all! @engn33r, @brunchboy, @mkapuza, check out this pull request:
I adjusted the synchronous reconnect logic to avoid growing the stack. Thoughts? BTW here's the super basic client I was using to test this:
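
The snippet itself isn't preserved here. A hypothetical stand-in for such a test client, with placeholder URL and callback, could be:

```python
# Hypothetical stand-in; not the author's original test client.
import websocket

def on_message(ws, message):
    print(message)

ws = websocket.WebSocketApp("ws://localhost:8765/", on_message=on_message)
ws.run_forever(reconnect=5)  # keep retrying while the server is bounced
```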
Hi, we're seeing strange behaviour since updating clients to 1.4.0. On the websocket server (which uses a different package) the number of open websocket clients seems to infinitely increase. That could be due to a server bug, but I'm wondering if the change in the client behaviour could be playing a role in triggering it?
There seems to be an issue with rel. You can check this by running … Is there an alternative to rel?
@milosivanovic the main alternative is to run without an async dispatcher, which is the default. If you want an alternative async dispatcher, pyevent has the same API, but I'm not sure if it runs on Python 3. Bear in mind that the sole purpose of an asynchronous dispatcher is to run multiple WebSocketApps in the same process. If you don't have this requirement, the default (synchronous) dispatcher is all you'll ever need.
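
To make the multi-app case concrete, here is a sketch following the README's rel pattern; both URLs are placeholders:

```python
# Two WebSocketApps driven by one rel event loop in a single process.
import rel
import websocket

ws1 = websocket.WebSocketApp("ws://localhost:8765/")
ws2 = websocket.WebSocketApp("ws://localhost:8766/")
ws1.run_forever(dispatcher=rel)  # registers with rel instead of blocking
ws2.run_forever(dispatcher=rel)
rel.signal(2, rel.abort)  # Ctrl-C aborts the loop
rel.dispatch()            # one event loop drives both connections
```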
@bubbleboy14 I see. The README for this project suggests that if we want automatic reconnection, we should use the rel dispatcher.
@milosivanovic for that, use the "reconnect" kwarg, e.g.:
ws.run_forever(reconnect=5)
In the above example, 5 is the reconnect interval, so if the connection drops, it will be re-established 5 seconds later. And yes, that's a good idea.
Hello dear,
def start(self): …
Regarding the threading/rel conflict, @pouya817, what's the error? The reconnect issue is with your run_forever() kwargs. In particular, you currently have True for reconnect, but that should be an integer - try 1.
If you still have problems, try out this branch I've been working on:
It should work with rel 0.4.9.11, but feel free to try the current working branch:
Hope it helps, LMK how it goes!
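
For instance (assuming a WebSocketApp instance named ws, as in the earlier snippets):

```python
ws.run_forever(dispatcher=rel, reconnect=1)  # integer interval, not True
```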
run_forever() has a new kwarg, "reconnect".
If "reconnect" is True (which is the default) and the connection ends (or can't be established), it waits for 5 seconds and tries again.
This has been tested with run_forever() and run_forever(dispatcher=rel), and seems to work pretty great.
LMK what you think!
NB: I think I merged in upstream weirdly (maybe forgot to pull first), so there are lots of commits listed going back 7 months. However, the only actual changes (with one small exception - see diff) are in run_forever().
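
A minimal usage sketch of the behavior described above (the URL is a placeholder):

```python
import websocket

ws = websocket.WebSocketApp("ws://localhost:8765/")
# With this PR, reconnect defaults to True: if the connection ends or can't
# be established, wait 5 seconds and try again.
ws.run_forever()
```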