- Apr 09, 2024
- Sumiran Pokharel authored
  Co-authored-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
- Mathieu Velten authored
  Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
- Erik Johnston authored
  This should have been in #17045. Whoops.
- dependabot[bot] authored
- dependabot[bot] authored
- Apr 08, 2024
- dependabot[bot] authored
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
- dependabot[bot] authored
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
- dependabot[bot] authored
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
- dependabot[bot] authored
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
- dependabot[bot] authored
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
- Erik Johnston authored
  Forgot a line, and an empty batch is trivially linear. c.f. #17064
- Erik Johnston authored
  PR #16942 removed an invalid optimisation that avoided pulling out state for non-gappy syncs, causing a large increase in DB usage. See #16941 for why that optimisation was wrong. However, we can still optimise in the simple case where the events in the timeline form a linear chain, without any branching or merging of the DAG. cc @richvdh
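
A rough sketch of what such a linearity check looks like (hypothetical `Event` type and `batch_is_linear` helper; Synapse's actual implementation differs): if every event's only prev-event is the preceding event in the batch, the timeline is an unbranched chain and the expensive state lookup can be skipped.

```python
from typing import Sequence

class Event:
    """Minimal stand-in for a Matrix event (hypothetical)."""
    def __init__(self, event_id: str, prev_event_ids: Sequence[str]):
        self.event_id = event_id
        self.prev_event_ids = list(prev_event_ids)

def batch_is_linear(events: Sequence[Event]) -> bool:
    """True if the batch forms an unbranched chain of the DAG.

    An empty batch is trivially linear (the detail fixed in #17064).
    """
    for prev, ev in zip(events, events[1:]):
        # More than one prev event, or a prev event that isn't the
        # preceding batch event, means the DAG branches or merges here.
        if ev.prev_event_ids != [prev.event_id]:
            return False
    return True

# e.g. A -> B is linear; an event with two prev events is not.
a = Event("$a", [])
b = Event("$b", ["$a"])
c = Event("$c", ["$a", "$b"])
assert batch_is_linear([a, b]) and not batch_is_linear([a, b, c])
assert batch_is_linear([])  # the empty-batch case
```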
- Apr 05, 2024
- Erik Johnston authored
  Before, we pulled out *all* read receipts for a user for every event we pushed; instead, let's only pull out the relevant receipts. The old query also pulled out the event rows for each receipt, putting load on the events table.
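
The shape of the change, as an illustrative sketch (simplified schema and query, not Synapse's actual receipts storage): constrain the receipts lookup to the events in the push batch instead of fetching every receipt for the user.

```python
import sqlite3
from typing import Sequence

def fetch_relevant_receipts(
    conn: sqlite3.Connection,
    room_id: str,
    user_id: str,
    event_ids: Sequence[str],
) -> list:
    """Fetch only receipts pointing at the events being pushed.

    The key change is the IN clause, which replaces an unbounded
    "all receipts for this user in this room" query (and means we no
    longer pull out the event row behind each receipt).
    """
    if not event_ids:
        return []
    placeholders = ", ".join("?" for _ in event_ids)
    sql = (
        "SELECT event_id, receipt_type FROM receipts "
        f"WHERE room_id = ? AND user_id = ? AND event_id IN ({placeholders})"
    )
    return conn.execute(sql, (room_id, user_id, *event_ids)).fetchall()
```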
- Apr 04, 2024
- Richard van der Hoff authored
  Unfortunately, the optimisation we applied here for non-gappy syncs is not actually valid. Fixes https://github.com/element-hq/synapse/issues/16941. ~~Based on https://github.com/element-hq/synapse/pull/16930.~~ Requires https://github.com/matrix-org/sytest/pull/1374.
- Richard van der Hoff authored
  Fix a long-standing issue which could cause state to be omitted from the sync response if the last event was filtered out. Fixes https://github.com/element-hq/synapse/issues/16928.
- Richard van der Hoff authored
  This PR fixes a very, very niche edge case, but I've got some more work coming which will otherwise make the problem worse. The bug happens when the syncing user leaves a room and has a sync filter which includes "left" rooms but sets the timeline limit to 0. In that case, the state returned in the `state` section is calculated incorrectly. The fix is to pass a token corresponding to the point at which the user leaves the room through to `compute_state_delta`.
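
A self-contained sketch of the shape of the fix (`compute_state_delta` is the real Synapse method, but everything here, including its signature, is simplified and hypothetical):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Room:
    room_id: str
    membership: str               # e.g. "join" or "leave"
    leave_token: int = 0          # stream position of the leave event
    timeline_events: list = field(default_factory=list)

async def compute_state_delta(room_id: str, batch: list, at_token: int) -> dict:
    # Stand-in for Synapse's real state calculation.
    return {"room_id": room_id, "state_at": at_token, "timeline_len": len(batch)}

async def get_room_state_for_sync(room: Room, now_token: int) -> dict:
    # The fix: for a left room, anchor the state calculation at the
    # point the user left, even when the timeline batch is empty
    # (e.g. a filter with a timeline limit of 0).
    at_token = room.leave_token if room.membership == "leave" else now_token
    return await compute_state_delta(room.room_id, room.timeline_events, at_token)

left_room = Room("!room:example.org", "leave", leave_token=42)
print(asyncio.run(get_room_state_for_sync(left_room, now_token=99)))
```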
- Erik Johnston authored
  This was causing sequential scans when using refresh tokens.
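
The commit doesn't name the exact change, so as a generic illustration of this class of fix (illustrative table and index names): filtering a table on an unindexed column forces a sequential scan, and adding an index turns the lookup into an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE refresh_tokens (id INTEGER PRIMARY KEY, token TEXT, user_id TEXT)"
)

def plan(sql: str) -> str:
    # The last column of EXPLAIN QUERY PLAN output describes the access path.
    return conn.execute("EXPLAIN QUERY PLAN " + sql, ("abc",)).fetchone()[-1]

query = "SELECT * FROM refresh_tokens WHERE token = ?"
print(plan(query))  # SCAN refresh_tokens  (sequential scan)

conn.execute("CREATE INDEX refresh_tokens_token_idx ON refresh_tokens(token)")
print(plan(query))  # SEARCH ... USING INDEX refresh_tokens_token_idx (token=?)
```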
- Apr 02, 2024
- Erik Johnston authored
- Erik Johnston authored
  Since these queries are duplicated in two places.
- Mar 28, 2024
- Erik Johnston authored
  Follows on from #17037.
- Erik Johnston authored
- Mar 26, 2024
- Erik Johnston authored
- Erik Johnston authored
- Erik Johnston authored
  Requests may require a User-Agent header, and the change in #16972 accidentally removed it, resulting in requests being rejected and login failing.
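
A generic illustration of the failure mode (not Synapse's actual HTTP client; the User-Agent value is made up): low-level clients such as `http.client` send no User-Agent by default, so the header has to be set explicitly on each request or picky servers will reject it.

```python
import http.client

conn = http.client.HTTPSConnection("example.org")
conn.request(
    "GET",
    "/.well-known/matrix/client",
    headers={"User-Agent": "Synapse/1.x"},  # hypothetical UA string
)
resp = conn.getresponse()
print(resp.status, resp.reason)
```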
- Erik Johnston authored
- Mar 22, 2024
- Richard van der Hoff authored
  Fixes https://github.com/element-hq/synapse/issues/16680, as well as a related bug where servers to which we had *never* successfully sent an event would not be retried. In order to fix the case of pending to-device messages, we hook into the existing `wake_destinations_needing_catchup` process, extending it to look for destinations that have pending to-device messages. The federation transmission loop then attempts to send the pending to-device messages as normal.
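
A sketch of the extension (simplified table names loosely based on Synapse's schema; the query and wake-up mechanics here are illustrative): when looking for destinations to wake, also include any destination that has queued to-device messages.

```python
import sqlite3

def destinations_needing_wakeup(conn: sqlite3.Connection) -> set:
    # Destinations already flagged for event catch-up...
    catching_up = {
        row[0]
        for row in conn.execute(
            "SELECT destination FROM destinations WHERE catching_up = 1"
        )
    }
    # ...plus, the extension: destinations with pending to-device
    # messages, so the federation transmission loop picks them up
    # and sends as normal.
    pending_to_device = {
        row[0]
        for row in conn.execute(
            "SELECT DISTINCT destination FROM device_federation_outbox"
        )
    }
    return catching_up | pending_to_device
```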
- Mathieu Velten authored
- Mar 21, 2024
- SpiritCroc authored
- Hanadi authored
- Sam Wedgwood authored
- dependabot[bot] authored
- Mathieu Velten authored
- Richard van der Hoff authored
  When running unit tests, we patch the database connection pool so that it runs queries "synchronously". This is fine, except that any queries launched before we do the patching are left in limbo and never complete. To fix this, let's change the way we do the switcheroo by patching out the method which creates the connection pool in the first place.
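
A minimal sketch of the technique (hypothetical `Database`/`SynchronousPool` classes, not Synapse's test harness): patch the factory method that creates the pool, so even queries issued during construction hit the synchronous test double rather than an unpatched pool.

```python
from unittest import mock

class SynchronousPool:
    """Test double that runs queries immediately, 'synchronously'."""
    def run_query(self, sql: str):
        return f"ran {sql!r} synchronously"

class Database:
    def __init__(self):
        # In production this would build a real async connection pool.
        self.pool = self._make_pool()

    def _make_pool(self):
        raise RuntimeError("real pool not available in tests")

# Patch the creation method itself, so even queries issued during
# Database.__init__ go to the synchronous pool.
with mock.patch.object(Database, "_make_pool", return_value=SynchronousPool()):
    db = Database()
    print(db.pool.run_query("SELECT 1"))
```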
- dependabot[bot] authored
- Tadeusz Sośnierz authored
- grahhnt authored
- Eirik authored
- dependabot[bot] authored