- Sep 30, 2020
-
-
Richard van der Hoff authored
Hopefully, N(extremities) * N(state_events) is a more realistic approximation to "how big a problem is this room?".
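As a rough illustration of the metric (not the actual Synapse code; the store helpers are hypothetical):

```python
# Illustrative sketch only: approximate "how big a problem is this room?"
# as N(extremities) * N(state_events). The store helpers are hypothetical.
async def room_state_res_badness(store, room_id: str) -> int:
    n_extremities = await store.get_forward_extremity_count(room_id)
    n_state_events = await store.get_state_event_count(room_id)
    # More extremities *and* more state both make state resolution costlier,
    # so the product gives a crude per-room cost estimate.
    return n_extremities * n_state_events
```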
-
Richard van der Hoff authored
This was a bit unwieldy for what I wanted: in particular, I wanted to assign each measurement straight into a bucket, rather than storing an intermediate Counter which didn't do any bucketing at all. I've replaced it with something that is hopefully a bit easier to use. (I'm not entirely sure what the difference between a HistogramMetricFamily and a GaugeHistogramMetricFamily is, but given our counters can go down as well as up, the latter *sounds* more accurate?)
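A minimal sketch of the pattern, assuming `prometheus_client` >= 0.5; the metric name, bucket boundaries and collector class here are made up for illustration, not Synapse's actual ones:

```python
from prometheus_client.core import REGISTRY, GaugeHistogramMetricFamily

class ExampleBucketCollector:
    """Assigns each measurement straight into a bucket, then exposes the
    buckets as a gauge histogram on each scrape."""

    def __init__(self):
        # cumulative counts per upper bound; the boundaries are invented
        self.bucket_counts = {"100": 0, "1000": 0, "10000": 0, "+Inf": 0}
        self.total = 0.0

    def observe(self, value: float) -> None:
        self.total += value
        for bound in self.bucket_counts:
            if bound == "+Inf" or value <= float(bound):
                self.bucket_counts[bound] += 1

    def collect(self):
        mf = GaugeHistogramMetricFamily(
            "example_room_cost",
            "Example gauge histogram built from pre-bucketed values",
        )
        mf.add_metric([], list(self.bucket_counts.items()), gsum_value=self.total)
        yield mf

collector = ExampleBucketCollector()
REGISTRY.register(collector)
```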
-
Richard van der Hoff authored
Our hacked-up `_exposition.py` was stripping out some samples it shouldn't have been. Put them back in, to more closely match the upstream `exposition.py`.
-
Richard van der Hoff authored
Drop compatibility hacks for prometheus-client pre 0.4.0. Debian stretch and Fedora 31 both have newer versions, so hopefully this will be ok.
-
Richard van der Hoff authored
Report metrics on expensive rooms for state res
-
- Sep 29, 2020
-
-
Erik Johnston authored
-
Aaron authored
-
Richard van der Hoff authored
-
Richard van der Hoff authored
-
Richard van der Hoff authored
-
Richard van der Hoff authored
-
Richard van der Hoff authored
-
Will Hunt authored
-
Andrew Morgan authored
* Don't check whether a 3pid is allowed to register during password reset. This endpoint should only deal with emails that have already been approved and are associated with the user's account; there's no need to re-check them here.
* Changelog
-
Erik Johnston authored
* Fix table scan of events on worker startup. This happened because we assumed "new" writers had an initial stream position of 0, so the replication code tried to fetch all events written by the instance between 0 and the current position. Instead, set the initial position of new writers to the current persisted-up-to position, on the assumption that new writers won't have written anything before that point.
* Consider old writers coming back as "new". Otherwise we'd try to fetch entries between the old stale token and the current position, even though it won't have written any rows.
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
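A rough sketch of the idea, not the actual `MultiWriterIdGenerator` code (function and argument names are illustrative):

```python
# Illustrative only: when an instance has no recorded position (or a stale
# one), start it at the "persisted up to" position instead of 0, so
# replication does not try to backfill every row it might ever have written.
def initial_positions(known: dict, instances: list, persisted_upto: int) -> dict:
    positions = {}
    for instance in instances:
        existing = known.get(instance)
        # Missing or stale writers are treated as "new": assume they wrote
        # nothing before the current persisted-up-to position.
        if existing is None or existing < persisted_upto:
            positions[instance] = persisted_upto
        else:
            positions[instance] = existing
    return positions
```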
-
Richard van der Hoff authored
For some reason, an apparently unrelated PR upset mypy about this module. Here are a number of little fixes.
-
Andrew Morgan authored
Merge branch 'develop' of github.com:matrix-org/synapse into anoa/info-mainline-no-check-password-reset
-
Andrew Morgan authored
This PR adds a script that:
* Builds the local Synapse checkout using our existing `docker/Dockerfile` image.
* Downloads [Complement](https://github.com/matrix-org/complement/)'s source code.
* Builds the [Synapse.Dockerfile](https://github.com/matrix-org/complement/blob/master/dockerfiles/Synapse.Dockerfile) using the above dockerfile as a base.
* Builds and runs Complement against it.
This setup differs slightly from [that of the dendrite repo](https://github.com/matrix-org/dendrite/blob/master/build/scripts/complement.sh) (`complement.sh`, `Complement.Dockerfile`), which stores a separate, slightly modified dockerfile in Dendrite's repo rather than running the one stored in Complement's repo. The Synapse equivalent of that dockerfile (`Synapse.Dockerfile`) in Complement's repo is just based on top of `matrixdotorg/synapse:latest`, which we opt to build locally here. Copying the files from Complement's repo would therefore not change any functionality and would result in two instances of the same files, so we decided to just use the dockerfile in Complement's repo instead.
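For orientation only, the same sequence of steps rendered as a Python sketch; the real script is a shell script, and the image tags, paths and use of `COMPLEMENT_BASE_IMAGE` here are assumptions for illustration:

```python
import io
import os
import subprocess
import tempfile
import urllib.request
import zipfile

def run_complement() -> None:
    # 1. Build the local Synapse checkout into a docker image.
    subprocess.check_call(
        ["docker", "build", "-t", "matrixdotorg/synapse:latest",
         "-f", "docker/Dockerfile", "."]
    )
    with tempfile.TemporaryDirectory() as tmp:
        # 2. Download Complement's source code.
        url = "https://github.com/matrix-org/complement/archive/master.zip"
        with urllib.request.urlopen(url) as resp:
            zipfile.ZipFile(io.BytesIO(resp.read())).extractall(tmp)
        complement_dir = os.path.join(tmp, "complement-master")
        # 3. Build Complement's Synapse.Dockerfile on top of the image above.
        subprocess.check_call(
            ["docker", "build", "-t", "complement-synapse",
             "-f", "dockerfiles/Synapse.Dockerfile", "dockerfiles"],
            cwd=complement_dir,
        )
        # 4. Run Complement against the resulting image.
        subprocess.check_call(
            ["go", "test", "-v", "./tests"],
            cwd=complement_dir,
            env={**os.environ, "COMPLEMENT_BASE_IMAGE": "complement-synapse"},
        )

if __name__ == "__main__":
    run_complement()
```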
-
Will Hunt authored
This is an attempt to fix #8403.
-
Andrew Morgan authored
Broken in https://github.com/matrix-org/synapse/pull/8275 and has yet to be put in a release. Fixes https://github.com/matrix-org/synapse/issues/8418. `next_link` is an optional parameter. However, we were checking whether the `next_link` param was valid, even if it wasn't provided. In that case, `next_link` was `None`, which would clearly not be a valid URL. This would prevent password reset and other operations if `next_link` was not provided, and the `next_link_domain_whitelist` config option was set.
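The fix amounts to only validating `next_link` when the client actually supplied one; a minimal sketch (the helper names are invented, not Synapse's exact ones):

```python
from typing import Callable, Optional

def get_validated_next_link(params: dict, validate_url: Callable[[str], None]) -> Optional[str]:
    """Only validate next_link when the client actually supplied it.

    `validate_url` stands in for the real check (which also enforces
    next_link_domain_whitelist); calling it on None was the bug.
    """
    next_link = params.get("next_link")  # optional parameter
    if next_link is not None:
        validate_url(next_link)
    return next_link
```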
-
Richard van der Hoff authored
One hope is that this might provide some insights into #3365.
-
Richard van der Hoff authored
* Remove `on_timeout_cancel` from `timeout_deferred`. The `on_timeout_cancel` param to `timeout_deferred` wasn't always called on a timeout (in particular if the canceller raised an exception), so it was unreliable. It was also only used in one place, and to be honest it's easier to do what it does a different way.
* Fix handling of connection timeouts in outgoing http requests. Turns out that if we get a timeout during connection, then a different exception is raised, which wasn't always handled correctly. To fix it, catch the exception in SimpleHttpClient and turn it into a RequestTimedOutError (which is already a documented exception). Also add a description to RequestTimedOutError so that we can see which stage it failed at.
* Fix incorrect handling of timeouts reading federation responses. This was trapping the wrong sort of TimeoutError, so was never being hit. The effect was relatively minor, but we should fix this so that it does the expected thing.
* Fix inconsistent handling of the `timeout` param between methods. `get_json`, `put_json` and `delete_json` were applying a different timeout to the response body than `post_json`; bring them in line and test.
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
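Schematically, the timeout handling described above looks something like the following simplified sketch; `RequestTimedOutError` is reduced to a stub and the stage descriptions are illustrative:

```python
class RequestTimedOutError(Exception):
    """Stub of the documented error, carrying a description of which
    stage of the request timed out."""

async def request_with_timeouts(connect, read_body):
    # Map a timeout at either stage onto the one documented exception,
    # annotated with where the request got stuck.
    try:
        connection = await connect()
    except TimeoutError:
        raise RequestTimedOutError("Timed out connecting to remote server")
    try:
        return await read_body(connection)
    except TimeoutError:
        raise RequestTimedOutError("Timed out reading response body")
```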
-
- Sep 28, 2020
-
-
Andrew Morgan authored
-
Andrew Morgan authored
This endpoint should only deal with emails that have already been approved and are associated with the user's account. There's no need to re-check them here.
-
Erik Johnston authored
-
Richard van der Hoff authored
-
Dagfinn Ilmari Mannsåker authored
This table was created in #8034 (1.20.0). It references `ui_auth_sessions`, which is ignored, so this one should be too.
Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
-
Richard van der Hoff authored
-
- Sep 27, 2020
-
-
Matthew Hodgson authored
-
- Sep 25, 2020
-
-
Patrick Cloke authored
-
Richard van der Hoff authored
* Fix test_verify_json_objects_for_server_awaits_previous_requests. It turns out that this wasn't really testing what it thought it was testing (in particular, `check_context` was turning failures into success, which was making the tests pass even though it wasn't clear they should have been). It was also somewhat overcomplex - we can test what it was trying to test without mocking out perspectives servers.
* Fix warnings about finished logcontexts in the keyring. We need to make sure that we finish the key fetching magic before we run the verifying code, to ensure that we don't mess up our logcontexts.
-
Tdxdxoz authored
This adds configuration flags that will match a user to pre-existing users when logging in via OpenID Connect. This is useful when switching to an existing SSO system.
Co-authored-by: Benjamin Koch <bbbsnowball@gmail.com>
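Conceptually, the flag changes the login flow roughly as follows; this is a sketch assuming a flag named `allow_existing_users`, with invented store helpers rather than Synapse's real OIDC handler:

```python
# Sketch only: decide what to do when the OIDC-mapped user already exists.
# The flag is referred to here as `allow_existing_users`; the store helpers
# are invented for illustration.
async def map_oidc_localpart(store, localpart: str, allow_existing_users: bool) -> str:
    user_id = f"@{localpart}:example.com"
    existing = await store.get_user_by_id(user_id)
    if existing is not None:
        if allow_existing_users:
            # Log the OIDC login straight into the pre-existing account,
            # e.g. when migrating from another SSO system.
            return user_id
        raise ValueError("user already exists: %s" % (user_id,))
    await store.register_user(user_id)
    return user_id
```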
-
Erik Johnston authored
Fixes #8395.
-
- Sep 24, 2020
-
-
Andrew Morgan authored
-
Erik Johnston authored
On startup `MultiWriterIdGenerator` fetches the maximum stream ID for each instance from the table and uses that as its initial "current position" for each writer. This is problematic as a) it involves either a scan of the events table or an index (neither of which is ideal), and b) if rows are being persisted out of order elsewhere while the process restarts then using the maximum stream ID is not correct. This could theoretically lead to race conditions where e.g. events that are persisted out of order are not sent down sync streams.
We fix this by creating a new table that tracks the current position of each writer to the stream, and update it each time we finish persisting a new entry. This is a relatively small overhead when persisting events. However, for the cache invalidation stream this is a much bigger relative overhead, so instead we note that for invalidation we don't actually care about reliability over restarts (as there are no caches to invalidate) and simply don't bother reading and writing to the new table in that particular case.
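A simplified in-memory sketch of the bookkeeping (the real implementation writes to a database table; class and method names here are illustrative):

```python
from typing import Dict, Tuple

class StreamPositionTracker:
    """Tracks each writer's current position so that a restarted process can
    read its position back instead of scanning the events table."""

    def __init__(self) -> None:
        # (stream_name, instance_name) -> last persisted stream ID
        self._positions: Dict[Tuple[str, str], int] = {}

    def on_persisted(self, stream_name: str, instance_name: str, stream_id: int) -> None:
        # Advance the writer's recorded position; in the real implementation
        # this is an upsert into a positions table, done as each entry
        # finishes persisting.
        key = (stream_name, instance_name)
        self._positions[key] = max(self._positions.get(key, 0), stream_id)

    def current_position(self, stream_name: str, instance_name: str) -> int:
        return self._positions.get((stream_name, instance_name), 0)
```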
-
Andrew Morgan authored
-
Andrew Morgan authored
-
Andrew Morgan authored
-
Patrick Cloke authored
-