Maunium / synapse · Commit 85155654 (unverified)

Documentation on setting up redis (#7446)

Authored 4 years ago by Neil Johnson, committed via GitHub 4 years ago. Parent: 0ad6d28b
2 changed files, with 108 additions and 60 deletions:

- changelog.d/7446.feature (+1, −0)
- docs/workers.md (+107, −60)
changelog.d/7446.feature (new file):

> Add support for running replication over Redis when using workers.
docs/workers.md (+107, −60):
# Scaling synapse via workers

For small instances it is recommended to run Synapse in monolith mode (the
default). For larger instances where performance is a concern it can be helpful
to split out functionality into multiple separate python processes. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.

Synapse's worker support is under active development and subject to change as
we attempt to rapidly scale ever larger Synapse instances. However we are
documenting it here to help admins needing a highly scalable Synapse instance
similar to the one running `matrix.org`.

All processes continue to share the same database instance, and as such, workers
only work with PostgreSQL-based Synapse deployments. SQLite should only
be used for demo purposes and any admin considering workers should already be
running PostgreSQL.
## Master/worker communication

The workers communicate with the master process via a Synapse-specific protocol
called 'replication' (analogous to MySQL- or Postgres-style database
replication) which feeds a stream of relevant data from the master to the
workers so they can be kept in sync with the master process and database state.

Additionally, workers may make HTTP requests to the master, to send information
in the other direction. Typically this is used for operations which need to
wait for a reply - such as sending an event.
## Configuration
...

the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. See [reverse_proxy.md](reverse_proxy.md)
for information on setting up a reverse proxy.
To enable workers, you need to add *two* replication listeners to the
main Synapse configuration file (`homeserver.yaml`). For example:

```yaml
listeners:
  # The TCP replication port
  - port: 9092
    bind_address: '127.0.0.1'
    type: replication

  # The HTTP replication port
  - port: 9093
    bind_address: '127.0.0.1'
    type: http
    resources:
      - names: [replication]
```
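After restarting the master with these listeners in place, a quick local sanity check is possible; a minimal sketch, assuming the example ports above and that `nc` is installed:

```shell
# Check whether the example replication ports are bound on localhost.
# 9092/9093 are the example values from the listener config above;
# adjust to match your homeserver.yaml. Prints one status line per port.
for port in 9092 9093; do
    if nc -z 127.0.0.1 "$port" 2>/dev/null; then
        echo "port $port: listening"
    else
        echo "port $port: not listening"
    fi
done
```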
Under **no circumstances** should these replication API listeners be exposed to
the public internet; they have no authentication and are unencrypted.

You should then create a set of configs for the various worker processes. Each
worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that
worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.
In the config file for each worker, you must specify the type of worker
application (`worker_app`). The currently available worker applications are
listed below. You must also specify the replication endpoints that it should
talk to on the main synapse process. `worker_replication_host` should specify
the host of the main synapse, `worker_replication_port` should point to the TCP
replication listener port and `worker_replication_http_port` should point to
the HTTP replication port.

For example:
```yaml
worker_app: synapse.app.synchrotron

# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names:
          - client

worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
```
...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP `/sync` endpoint on port 8083 separately from the `/sync` endpoint provided
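The same pattern extends to other worker types. As an illustrative sketch (not taken from this document), a `federation_reader` worker serving the federation API might look like the following; the port and log path are assumptions:

```yaml
worker_app: synapse.app.federation_reader

# Same replication endpoints as above - all workers talk to the same master.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8084            # illustrative port, distinct from other workers
    resources:
      - names:
          - federation

worker_log_config: /home/matrix/synapse/config/federation_reader_log_config.yaml
```

Each worker needs its own listener port (if it has a listener at all); everything else is inherited from the main homeserver configuration.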
...

recommend the use of `systemd` where available: for information on setting up
`systemd` to start synapse workers, see
[systemd-with-workers](systemd-with-workers). To use `synctl`, see below.
### **Experimental** support for replication over redis

As of Synapse v1.13.0, it is possible to configure Synapse to send replication
via a [Redis pub/sub channel](https://redis.io/topics/pubsub). This is an
alternative to direct TCP connections to the master: rather than all the
workers connecting to the master, all the workers and the master connect to
Redis, which relays replication commands between processes. This can give a
significant cpu saving on the master and will be a prerequisite for upcoming
performance improvements.

Note that this support is currently experimental; you may experience lost
messages and similar problems! It is strongly recommended that admins setting
up workers for the first time use direct TCP replication as above.
To configure Synapse to use Redis:

1. Install Redis following the normal procedure for your distribution - for
   example, on Debian, `apt install redis-server`. (It is safe to use an
   existing Redis deployment if you have one: we use a pub/sub stream named
   according to the `server_name` of your synapse server.)
2. Check Redis is running and accessible: you should be able to
   `echo PING | nc -q1 localhost 6379` and get a response of `+PONG`.
3. Install the python prerequisites. If you installed synapse into a
   virtualenv, this can be done with:

   ```sh
   pip install matrix-synapse[redis]
   ```

   The debian packages from matrix.org already include the required
   dependencies.
4. Add config to the shared configuration (`homeserver.yaml`):

   ```yaml
   redis:
     enabled: true
   ```

   Optional parameters which can go alongside `enabled` are `host`, `port`,
   `password`. Normally none of these are required.
5. Restart master and all workers.
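When Redis runs on another host or requires authentication, the optional parameters from step 4 come into play; a sketch, where the host, port, and password values are illustrative and the stated defaults are an assumption:

```yaml
redis:
  enabled: true
  # Optional - only needed when Redis is not on localhost:6379
  # without a password (assumed defaults):
  host: redis.example.com     # illustrative hostname
  port: 6379
  password: your-redis-password
```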
Once redis replication is in use, `worker_replication_port` is redundant and
can be removed from the worker configuration files. Similarly, the
configuration for the `listener` for the TCP replication port can be removed
from the main configuration file. Note that the HTTP replication port is
still required.
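Applying that trimming to the earlier synchrotron example gives roughly the following worker config (a sketch, keeping only what the text above says is still required):

```yaml
worker_app: synapse.app.synchrotron

# With redis replication, the TCP replication port is no longer needed;
# the HTTP replication endpoint on the master is still required.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names:
          - client

worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
```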
### Using synctl

If you want to use `synctl` to manage your synapse processes, you will need to
...