Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 575 additions and 290 deletions
exported_requirements.txt

@@ -4,10 +4,12 @@ Priority: extra
 Maintainer: Synapse Packaging team <packages@matrix.org>
 # keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
 Build-Depends:
- debhelper (>= 10),
+ debhelper-compat (= 12),
  dh-virtualenv (>= 1.1),
  libsystemd-dev,
  libpq-dev,
+ libicu-dev,
+ pkg-config,
  lsb-release,
  python3-dev,
  python3,
@@ -16,7 +18,7 @@ Build-Depends:
  python3-venv,
  tar,
 Standards-Version: 3.9.8
-Homepage: https://github.com/matrix-org/synapse
+Homepage: https://github.com/element-hq/synapse

 Package: matrix-synapse-py3
 Architecture: any
@@ -35,6 +37,7 @@ Depends:
 # so we put perl:Depends in Suggests rather than Depends.
 Recommends:
  ${shlibs1:Recommends},
+ matrix-org-archive-keyring,
 Suggests:
  sqlite3,
  ${perl:Depends},
......
 Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
 Upstream-Name: synapse
-Source: https://github.com/matrix-org/synapse
+Source: https://github.com/element-hq/synapse

 Files: *
 Copyright: 2014-2017, OpenMarket Ltd, 2017-2018 New Vector Ltd
 License: Apache-2.0

+Files: *
+Copyright: 2023 New Vector Ltd
+License: AGPL-3.0-or-later
+
 Files: synapse/config/saml2.py
 Copyright: 2015, Ericsson
 License: Apache-2.0
@@ -22,29 +26,6 @@ Files: synapse/config/repository.py
 Copyright: 2014-2015, matrix.org
 License: Apache-2.0

-Files: contrib/jitsimeetbridge/unjingle/strophe/base64.js
-Copyright: Public Domain (Tyler Akins http://rumkin.com)
-License: public-domain
- This code was written by Tyler Akins and has been placed in the
- public domain. It would be nice if you left this header intact.
- Base64 code from Tyler Akins -- http://rumkin.com
-
-Files: contrib/jitsimeetbridge/unjingle/strophe/md5.js
-Copyright: 1999-2002, Paul Johnston & Contributors
-License: BSD-3-clause
-
-Files: contrib/jitsimeetbridge/unjingle/strophe/strophe.js
-Copyright: 2006-2008, OGG, LLC
-License: Expat
-
-Files: contrib/jitsimeetbridge/unjingle/strophe/XMLHttpRequest.js
-Copyright: 2010 passive.ly LLC
-License: Expat
-
-Files: contrib/jitsimeetbridge/unjingle/*.js
-Copyright: 2014 Jitsi
-License: Apache-2.0
-
 Files: debian/*
 Copyright: 2016-2017, Erik Johnston <erik@matrix.org>
  2017, Rahul De <rahulde@swecha.net>
......
-.\" generated with Ronn-NG/v0.8.0
-.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
-.TH "HASH_PASSWORD" "1" "July 2021" "" ""
+.\" generated with Ronn-NG/v0.10.1
+.\" http://github.com/apjanke/ronn-ng/tree/0.10.1
+.TH "HASH_PASSWORD" "1" "August 2024" ""
 .SH "NAME"
 \fBhash_password\fR \- Calculate the hash of a new password, so that passwords can be reset
 .SH "SYNOPSIS"
-\fBhash_password\fR [\fB\-p\fR|\fB\-\-password\fR [password]] [\fB\-c\fR|\fB\-\-config\fR \fIfile\fR]
+.TS
+allbox;
+\fBhash_password\fR [\fB\-p\fR \fB\-\-password\fR [password]] [\fB\-c\fR \fB\-\-config\fR \fIfile\fR]
+.TE
 .SH "DESCRIPTION"
 \fBhash_password\fR calculates the hash of a supplied password using bcrypt\.
 .P
 \fBhash_password\fR takes a password as an parameter either on the command line or the \fBSTDIN\fR if not supplied\.
 .P
-It accepts an YAML file which can be used to specify parameters like the number of rounds for bcrypt and password_config section having the pepper value used for the hashing\. By default \fBbcrypt_rounds\fR is set to \fB10\fR\.
+It accepts an YAML file which can be used to specify parameters like the number of rounds for bcrypt and password_config section having the pepper value used for the hashing\. By default \fBbcrypt_rounds\fR is set to \fB12\fR\.
 .P
 The hashed password is written on the \fBSTDOUT\fR\.
 .SH "FILES"
@@ -20,7 +23,7 @@ bcrypt_rounds: 17 password_config: pepper: "random hashing pepper"
 .SH "OPTIONS"
 .TP
 \fB\-p\fR, \fB\-\-password\fR
-Read the password form the command line if [password] is supplied\. If not, prompt the user and read the password form the \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
+Read the password form the command line if [password] is supplied, or from \fBSTDIN\fR\. If not, prompt the user and read the password from the tty prompt\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
 .TP
 \fB\-c\fR, \fB\-\-config\fR
 Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
@@ -33,7 +36,17 @@ $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8\.X8fWFpum7SxZ9MFe
 .fi
 .IP "" 0
 .P
-Hash from the STDIN:
+Hash from the stdin:
+.IP "" 4
+.nf
+$ cat password_file | hash_password
+Password:
+Confirm password:
+$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX\.rcuAbM8ErLoUhybG
+.fi
+.IP "" 0
+.P
+Hash from the prompt:
 .IP "" 4
 .nf
 $ hash_password
@@ -53,6 +66,6 @@ $2b$12$CwI\.wBNr\.w3kmiUlV3T5s\.GT2wH7uebDCovDrCOh18dFedlANK99O
 .fi
 .IP "" 0
 .SH "COPYRIGHT"
-This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
+This man page was written by Rahul De «rahulde@swecha\.net» for Debian GNU/Linux distribution\.
 .SH "SEE ALSO"
 synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='content-type' content='text/html;charset=utf-8'>
<meta name='generator' content='Ronn-NG/v0.10.1 (http://github.com/apjanke/ronn-ng/tree/0.10.1)'>
<title>hash_password(1) - Calculate the hash of a new password, so that passwords can be reset</title>
<style type='text/css' media='all'>
/* style: man */
body#manpage {margin:0}
.mp {max-width:100ex;padding:0 9ex 1ex 4ex}
.mp p,.mp pre,.mp ul,.mp ol,.mp dl {margin:0 0 20px 0}
.mp h2 {margin:10px 0 0 0}
.mp > p,.mp > pre,.mp > ul,.mp > ol,.mp > dl {margin-left:8ex}
.mp h3 {margin:0 0 0 4ex}
.mp dt {margin:0;clear:left}
.mp dt.flush {float:left;width:8ex}
.mp dd {margin:0 0 0 9ex}
.mp h1,.mp h2,.mp h3,.mp h4 {clear:left}
.mp pre {margin-bottom:20px}
.mp pre+h2,.mp pre+h3 {margin-top:22px}
.mp h2+pre,.mp h3+pre {margin-top:5px}
.mp img {display:block;margin:auto}
.mp h1.man-title {display:none}
.mp,.mp code,.mp pre,.mp tt,.mp kbd,.mp samp,.mp h3,.mp h4 {font-family:monospace;font-size:14px;line-height:1.42857142857143}
.mp h2 {font-size:16px;line-height:1.25}
.mp h1 {font-size:20px;line-height:2}
.mp {text-align:justify;background:#fff}
.mp,.mp code,.mp pre,.mp pre code,.mp tt,.mp kbd,.mp samp {color:#131211}
.mp h1,.mp h2,.mp h3,.mp h4 {color:#030201}
.mp u {text-decoration:underline}
.mp code,.mp strong,.mp b {font-weight:bold;color:#131211}
.mp em,.mp var {font-style:italic;color:#232221;text-decoration:none}
.mp a,.mp a:link,.mp a:hover,.mp a code,.mp a pre,.mp a tt,.mp a kbd,.mp a samp {color:#0000ff}
.mp b.man-ref {font-weight:normal;color:#434241}
.mp pre {padding:0 4ex}
.mp pre code {font-weight:normal;color:#434241}
.mp h2+pre,h3+pre {padding-left:0}
ol.man-decor,ol.man-decor li {margin:3px 0 10px 0;padding:0;float:left;width:33%;list-style-type:none;text-transform:uppercase;color:#999;letter-spacing:1px}
ol.man-decor {width:100%}
ol.man-decor li.tl {text-align:left}
ol.man-decor li.tc {text-align:center;letter-spacing:4px}
ol.man-decor li.tr {text-align:right;float:right}
</style>
</head>
<!--
The following styles are deprecated and will be removed at some point:
div#man, div#man ol.man, div#man ol.head, div#man ol.man.
The .man-page, .man-decor, .man-head, .man-foot, .man-title, and
.man-navigation should be used instead.
-->
<body id='manpage'>
<div class='mp' id='man'>
<div class='man-navigation' style='display:none'>
<a href="#NAME">NAME</a>
<a href="#SYNOPSIS">SYNOPSIS</a>
<a href="#DESCRIPTION">DESCRIPTION</a>
<a href="#FILES">FILES</a>
<a href="#OPTIONS">OPTIONS</a>
<a href="#EXAMPLES">EXAMPLES</a>
<a href="#COPYRIGHT">COPYRIGHT</a>
<a href="#SEE-ALSO">SEE ALSO</a>
</div>
<ol class='man-decor man-head man head'>
<li class='tl'>hash_password(1)</li>
<li class='tc'></li>
<li class='tr'>hash_password(1)</li>
</ol>
<h2 id="NAME">NAME</h2>
<p class="man-name">
<code>hash_password</code> - <span class="man-whatis">Calculate the hash of a new password, so that passwords can be reset</span>
</p>
<h2 id="SYNOPSIS">SYNOPSIS</h2>
<table>
<tbody>
<tr>
<td>
<code>hash_password</code> [<code>-p</code>
</td>
<td>
<code>--password</code> [password]] [<code>-c</code>
</td>
<td>
<code>--config</code> <var>file</var>]</td>
</tr>
</tbody>
</table>
<h2 id="DESCRIPTION">DESCRIPTION</h2>
<p><strong>hash_password</strong> calculates the hash of a supplied password using bcrypt.</p>
<p><code>hash_password</code> takes a password as an parameter either on the command line
or the <code>STDIN</code> if not supplied.</p>
<p>It accepts an YAML file which can be used to specify parameters like the
number of rounds for bcrypt and password_config section having the pepper
value used for the hashing. By default <code>bcrypt_rounds</code> is set to <strong>12</strong>.</p>
<p>The hashed password is written on the <code>STDOUT</code>.</p>
<h2 id="FILES">FILES</h2>
<p>A sample YAML file accepted by <code>hash_password</code> is described below:</p>
<p>bcrypt_rounds: 17
password_config:
pepper: "random hashing pepper"</p>
<h2 id="OPTIONS">OPTIONS</h2>
<dl>
<dt>
<code>-p</code>, <code>--password</code>
</dt>
<dd>Read the password form the command line if [password] is supplied, or from <code>STDIN</code>.
If not, prompt the user and read the password from the tty prompt.
It is not recommended to type the password on the command line
directly. Use the STDIN instead.</dd>
<dt>
<code>-c</code>, <code>--config</code>
</dt>
<dd>Read the supplied YAML <var>file</var> containing the options <code>bcrypt_rounds</code>
and the <code>password_config</code> section containing the <code>pepper</code> value.</dd>
</dl>
<h2 id="EXAMPLES">EXAMPLES</h2>
<p>Hash from the command line:</p>
<pre><code>$ hash_password -p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
</code></pre>
<p>Hash from the stdin:</p>
<pre><code>$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Hash from the prompt:</p>
<pre><code>$ hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Using a config file:</p>
<pre><code>$ hash_password -c config.yml
Password:
Confirm password:
$2b$12$CwI.wBNr.w3kmiUlV3T5s.GT2wH7uebDCovDrCOh18dFedlANK99O
</code></pre>
<h2 id="COPYRIGHT">COPYRIGHT</h2>
<p>This man page was written by Rahul De «rahulde@swecha.net»
for Debian GNU/Linux distribution.</p>
<h2 id="SEE-ALSO">SEE ALSO</h2>
<p><span class="man-ref">synctl<span class="s">(1)</span></span>, <span class="man-ref">synapse_port_db<span class="s">(1)</span></span>, <span class="man-ref">register_new_matrix_user<span class="s">(1)</span></span>, <span class="man-ref">synapse_review_recent_signups<span class="s">(1)</span></span></p>
<ol class='man-decor man-foot man foot'>
<li class='tl'></li>
<li class='tc'>August 2024</li>
<li class='tr'>hash_password(1)</li>
</ol>
</div>
</body>
</html>
@@ -14,7 +14,7 @@ or the `STDIN` if not supplied.
 It accepts an YAML file which can be used to specify parameters like the
 number of rounds for bcrypt and password_config section having the pepper
-value used for the hashing. By default `bcrypt_rounds` is set to **10**.
+value used for the hashing. By default `bcrypt_rounds` is set to **12**.

 The hashed password is written on the `STDOUT`.
@@ -29,8 +29,8 @@ A sample YAML file accepted by `hash_password` is described below:
 ## OPTIONS

 * `-p`, `--password`:
-  Read the password form the command line if [password] is supplied.
-  If not, prompt the user and read the password form the `STDIN`.
+  Read the password form the command line if [password] is supplied, or from `STDIN`.
+  If not, prompt the user and read the password from the tty prompt.
   It is not recommended to type the password on the command line
   directly. Use the STDIN instead.
@@ -45,7 +45,14 @@ Hash from the command line:
 $ hash_password -p "p@ssw0rd"
 $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe

-Hash from the STDIN:
+Hash from the stdin:
+
+    $ cat password_file | hash_password
+    Password:
+    Confirm password:
+    $2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
+
+Hash from the prompt:

 $ hash_password
 Password:
......
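As a quick illustration of how the options documented in the man page above combine, here is a sketch that stores `bcrypt_rounds` and a `pepper` in a config file and feeds the password on `STDIN`; the file names and the pepper value are made up for the example.

```sh
# Illustrative only: file names and pepper value are invented.
cat > hash_config.yaml <<'EOF'
bcrypt_rounds: 14
password_config:
  pepper: "example-pepper-value"
EOF

# password_file holds the password; hash_password reads it from STDIN
# and writes the bcrypt hash to STDOUT.
cat password_file | hash_password -c hash_config.yaml
```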
@@ -31,7 +31,7 @@ EOF
 # This file is autogenerated, and will be recreated on upgrade if it is deleted.
 # Any changes you make will be preserved.

-# Whether to report anonymized homeserver usage statistics.
+# Whether to report homeserver usage statistics.
 report_stats: false
 EOF
 fi
@@ -40,12 +40,12 @@ EOF
 /opt/venvs/matrix-synapse/lib/manage_debconf.pl update

 if ! getent passwd $USER >/dev/null; then
-  adduser --quiet --system --no-create-home --home /var/lib/matrix-synapse $USER
+  adduser --quiet --system --group --no-create-home --home /var/lib/matrix-synapse $USER
 fi

 for DIR in /var/lib/matrix-synapse /var/log/matrix-synapse /etc/matrix-synapse; do
   if ! dpkg-statoverride --list --quiet $DIR >/dev/null; then
-    dpkg-statoverride --force --quiet --update --add $USER nogroup 0755 $DIR
+    dpkg-statoverride --force-statoverride-add --quiet --update --add $USER "$(id -gn $USER)" 0755 $DIR
   fi
 done
......
#!/bin/sh -e
# Attempt to undo some of the braindamage caused by
# https://github.com/matrix-org/package-synapse-debian/issues/18.
#
# Due to reasons [1], the old python2 matrix-synapse package will not stop the
# service when the package is uninstalled. Our maintainer scripts will do the
# right thing in terms of ensuring the service is enabled and unmasked, but
# then do a `systemctl start matrix-synapse`, which of course does nothing -
# leaving the old (py2) service running.
#
# There should normally be no reason for the service to be running during our
# preinst, so we assume that if it *is* running, it's due to that situation,
# and stop it.
#
# [1] dh_systemd_start doesn't do anything because it sees that there is an
# init.d script with the same name, so leaves it to dh_installinit.
#
# dh_installinit doesn't do anything because somebody gave it a --no-start
# for unknown reasons.
if [ -x /bin/systemctl ]; then
if /bin/systemctl --quiet is-active -- matrix-synapse; then
echo >&2 "stopping existing matrix-synapse service"
/bin/systemctl stop matrix-synapse || true
fi
fi
#DEBHELPER#
exit 0
# Specify environment variables used when running Synapse
# SYNAPSE_CACHE_FACTOR=0.5 (default)
@@ -5,7 +5,6 @@ Description=Synapse Matrix homeserver
 Type=notify
 User=matrix-synapse
 WorkingDirectory=/var/lib/matrix-synapse
-EnvironmentFile=-/etc/default/matrix-synapse
 ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --generate-keys
 ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/
 ExecReload=/bin/kill -HUP $MAINPID
@@ -13,5 +12,10 @@ Restart=always
 RestartSec=3
 SyslogIdentifier=matrix-synapse

+# The environment file is not shipped by default anymore and the below directive
+# is for backwards compatibility only. Please use your homeserver.yaml if
+# possible.
+EnvironmentFile=-/etc/default/matrix-synapse
+
 [Install]
 WantedBy=multi-user.target
@@ -30,14 +30,14 @@ msgid ""
 "The name that this homeserver will appear as, to clients and other servers "
 "via federation. This is normally the public hostname of the server running "
 "synapse, but can be different if you set up delegation. Please refer to the "
-"delegation documentation in this case: https://github.com/matrix-org/synapse/"
+"delegation documentation in this case: https://github.com/element-hq/synapse/"
 "blob/master/docs/delegate.md."
 msgstr ""

 #. Type: boolean
 #. Description
 #: ../templates:2001
-msgid "Report anonymous statistics?"
+msgid "Report homeserver usage statistics?"
 msgstr ""

 #. Type: boolean
@@ -45,11 +45,11 @@ msgstr ""
 #: ../templates:2001
 msgid ""
 "Developers of Matrix and Synapse really appreciate helping the project out "
-"by reporting anonymized usage statistics from this homeserver. Only very "
-"basic aggregate data (e.g. number of users) will be reported, but it helps "
-"track the growth of the Matrix community, and helps in making Matrix a "
-"success, as well as to convince other networks that they should peer with "
-"Matrix."
+"by reporting homeserver usage statistics from this homeserver. Your "
+"homeserver's server name, along with very basic aggregate data (e.g. "
+"number of users) will be reported. But it helps track the growth of the "
+"Matrix community, and helps in making Matrix a success, as well as to "
+"convince other networks that they should peer with Matrix."
 msgstr ""

 #. Type: boolean
......
@@ -31,8 +31,12 @@ A sample YAML file accepted by `register_new_matrix_user` is described below:
   Local part of the new user. Will prompt if omitted.

 * `-p`, `--password`:
-  New password for user. Will prompt if omitted. Supplying the password
-  on the command line is not recommended. Use the STDIN instead.
+  New password for user. Will prompt if this option and `--password-file` are omitted.
+  Supplying the password on the command line is not recommended.
+
+* `--password-file`:
+  File containing the new password for user. If set, overrides `--password`.
+  This is a more secure alternative to specifying the password on the command line.

 * `-a`, `--admin`:
   Register new user as an admin. Will prompt if omitted.
@@ -44,6 +48,9 @@ A sample YAML file accepted by `register_new_matrix_user` is described below:
   Shared secret as defined in server config file. This is an optional
   parameter as it can be also supplied via the YAML file.

+* `--exists-ok`:
+  Do not fail if the user already exists. The user account will be not updated in this case.
+
 * `server_url`:
   URL of the home server. Defaults to 'https://localhost:8448'.
......
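A hedged usage sketch combining the options above: `--password-file`, `--exists-ok`, `-a` and the `server_url` argument are documented in the text, while the `-u` (localpart) and `-k` (shared secret) flags and all file names are assumptions chosen for illustration.

```sh
# Keep the password out of the command line and shell history.
printf '%s\n' 'correct horse battery staple' > alice.pass
chmod 600 alice.pass

# -u and -k are assumed flag names; the other options are taken from the text above.
register_new_matrix_user \
    -u alice \
    --password-file alice.pass \
    -a \
    -k "$REGISTRATION_SHARED_SECRET" \
    --exists-ok \
    https://localhost:8448
```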
@@ -6,15 +6,19 @@
 # assume we only have one package
 PACKAGE_NAME:=`dh_listpackages`

-override_dh_systemd_enable:
-	dh_systemd_enable --name=matrix-synapse
-
-override_dh_installinit:
-	dh_installinit --name=matrix-synapse
+override_dh_installsystemd:
+	dh_installsystemd --name=matrix-synapse

 # we don't really want to strip the symbols from our object files.
 override_dh_strip:

+override_dh_auto_configure:
+
+# many libraries pulled from PyPI have allocatable sections after
+# non-allocatable ones on which dwz errors out. For those without the issue the
+# gains are only marginal
+override_dh_dwz:
+
 # dh_shlibdeps calls dpkg-shlibdeps, which finds all the binary files
 # (executables and shared libs) in the package, and looks for the shared
 # libraries that they depend on. It then adds a dependency on the package that
@@ -36,9 +40,9 @@ override_dh_shlibdeps:
 # to be self-contained, but they have interdependencies and
 # dpkg-shlibdeps doesn't know how to resolve them.
 #
-# As of Pillow 7.1.0, these libraries are in
-# site-packages/Pillow.libs. Previously, they were in
-# site-packages/PIL/.libs.
+# As of Pillow 7.1.0, these libraries are in site-packages/Pillow.libs.
+# Previously, they were in site-packages/PIL/.libs. As of Pillow 10.2.0
+# the package name is lowercased to site-packages/pillow.libs.
 #
 # (we also need to exclude psycopg2, of course, since we've already
 # dealt with that.)
@@ -46,6 +50,7 @@ override_dh_shlibdeps:
 	dh_shlibdeps \
 		-X site-packages/PIL/.libs \
 		-X site-packages/Pillow.libs \
+		-X site-packages/pillow.libs \
 		-X site-packages/psycopg2

 override_dh_virtualenv:
......
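For context, debian/rules is normally driven by a standard debhelper invocation; a rough sketch, assuming the build dependencies from debian/control (including dh-virtualenv) are already installed:

```sh
# Run from the root of the source tree; dh will call the overrides above.
dpkg-buildpackage -us -uc -b
```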
@@ -5,17 +5,18 @@ _Description: Name of the server:
  servers via federation. This is normally the public hostname of the
  server running synapse, but can be different if you set up delegation.
  Please refer to the delegation documentation in this case:
- https://github.com/matrix-org/synapse/blob/master/docs/delegate.md.
+ https://element-hq.github.io/synapse/latest/delegate.html.

 Template: matrix-synapse/report-stats
 Type: boolean
 Default: false
-_Description: Report anonymous statistics?
+_Description: Report homeserver usage statistics?
  Developers of Matrix and Synapse really appreciate helping the
- project out by reporting anonymized usage statistics from this
- homeserver. Only very basic aggregate data (e.g. number of users)
- will be reported, but it helps track the growth of the Matrix
- community, and helps in making Matrix a success, as well as to
- convince other networks that they should peer with Matrix.
+ project out by reporting homeserver usage statistics from this
+ homeserver. Your homeserver's server name, along with very basic
+ aggregate data (e.g. number of users) will be reported. But it
+ helps track the growth of the Matrix community, and helps in
+ making Matrix a success, as well as to convince other networks
+ that they should peer with Matrix.
 .
 Thank you.
@@ -6,12 +6,14 @@ CWD=$(pwd)
 cd "$DIR/.." || exit

-PYTHONPATH=$(readlink -f "$(pwd)")
-export PYTHONPATH
-
-echo "$PYTHONPATH"
-
-# Create servers which listen on HTTP at 808x and HTTPS at 848x.
+# Do not override PYTHONPATH if we are in a virtual env
+if [ "$VIRTUAL_ENV" = "" ]; then
+    PYTHONPATH=$(readlink -f "$(pwd)")
+    export PYTHONPATH
+    echo "$PYTHONPATH"
+fi
+
 for port in 8080 8081 8082; do
     echo "Starting server on port $port... "
@@ -19,10 +21,12 @@ for port in 8080 8081 8082; do
     mkdir -p demo/$port
     pushd demo/$port || exit

-    # Generate the configuration for the homeserver at localhost:848x.
+    # Generate the configuration for the homeserver at localhost:848x, note that
+    # the homeserver name needs to match the HTTPS listening port for federation
+    # to properly work..
     python3 -m synapse.app.homeserver \
         --generate-config \
-        --server-name "localhost:$port" \
+        --server-name "localhost:$https_port" \
         --config-path "$port.config" \
         --report-stats no
@@ -42,7 +46,7 @@ for port in 8080 8081 8082; do
     echo ''

     # Warning, this heredoc depends on the interaction of tabs and spaces.
-    # Please don't accidentaly bork me with your fancy settings.
+    # Please don't accidentally bork me with your fancy settings.
     listeners=$(cat <<-PORTLISTENERS
     # Configure server to listen on both $https_port and $port
     # This overides some of the default settings above
@@ -76,12 +80,8 @@ for port in 8080 8081 8082; do
     echo "tls_certificate_path: \"$DIR/$port/localhost:$port.tls.crt\""
     echo "tls_private_key_path: \"$DIR/$port/localhost:$port.tls.key\""

-    # Ignore keys from the trusted keys server
-    echo '# Ignore keys from the trusted keys server'
-    echo 'trusted_key_servers:'
-    echo '  - server_name: "matrix.org"'
-    echo '    accept_keys_insecurely: true'
-    echo ''
+    # Request keys directly from servers contacted over federation
+    echo 'trusted_key_servers: []'

     # Allow the servers to communicate over localhost.
     allow_list=$(cat <<-ALLOW_LIST
@@ -138,6 +138,13 @@ for port in 8080 8081 8082; do
     per_user:
         per_second: 1000
         burst_count: 1000
+    rc_presence:
+        per_user:
+            per_second: 1000
+            burst_count: 1000
+    rc_delayed_event_mgmt:
+        per_second: 1000
+        burst_count: 1000
     RC
     )
     echo "${ratelimiting}" >> "$port.config"
......
# syntax=docker/dockerfile:1
# Dockerfile to build the matrixdotorg/synapse docker images. # Dockerfile to build the matrixdotorg/synapse docker images.
# #
# Note that it uses features which are only available in BuildKit - see # Note that it uses features which are only available in BuildKit - see
...@@ -16,75 +17,72 @@ ...@@ -16,75 +17,72 @@
# Irritatingly, there is no blessed guide on how to distribute an application with its # Irritatingly, there is no blessed guide on how to distribute an application with its
# poetry-managed environment in a docker image. We have opted for # poetry-managed environment in a docker image. We have opted for
# `poetry export | pip install -r /dev/stdin`, but there are known bugs in # `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs in
# in `poetry export` whose fixes (scheduled for poetry 1.2) have yet to be released. # in `poetry export` in the past.
# In case we get bitten by those bugs in the future, the recommendations here might
# be useful:
# https://github.com/python-poetry/poetry/discussions/1879#discussioncomment-216865
# https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker?answertab=scoredesc
ARG DEBIAN_VERSION=bookworm
ARG PYTHON_VERSION=3.12
ARG PYTHON_VERSION=3.9 ARG POETRY_VERSION=1.8.3
### ###
### Stage 0: generate requirements.txt ### Stage 0: generate requirements.txt
### ###
FROM docker.io/python:${PYTHON_VERSION}-slim as requirements ### This stage is platform-agnostic, so we can use the build platform in case of cross-compilation.
###
# RUN --mount is specific to buildkit and is documented at FROM --platform=$BUILDPLATFORM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS requirements
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt (and below for pip), to improve
# rebuild speeds on slow connections.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y git \
&& rm -rf /var/lib/apt/lists/*
# We install poetry in its own build stage to avoid its dependencies conflicting with
# synapse's dependencies.
# We use a specific commit from poetry's master branch instead of our usual 1.1.12,
# to incorporate fixes to some bugs in `poetry export`. This commit corresponds to
# https://github.com/python-poetry/poetry/pull/5156 and
# https://github.com/python-poetry/poetry/issues/5141 ;
# without it, we generate a requirements.txt with incorrect environment markers,
# which causes necessary packages to be omitted when we `pip install`.
#
# NB: In poetry 1.2 `poetry export` will be moved into a plugin; we'll need to also
# pip install poetry-plugin-export (https://github.com/python-poetry/poetry-plugin-export).
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --user git+https://github.com/python-poetry/poetry.git@fb13b3a676f476177f7937ffa480ee5cff9a90a5
WORKDIR /synapse WORKDIR /synapse
# Copy just what we need to run `poetry export`... # Copy just what we need to run `poetry export`...
COPY pyproject.toml poetry.lock README.rst /synapse/ COPY pyproject.toml poetry.lock /synapse/
RUN /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt
# If specified, we won't verify the hashes of dependencies.
# This is only needed if the hashes of dependencies cannot be checked for some
# reason, such as when a git repository is used directly as a dependency.
ARG TEST_ONLY_SKIP_DEP_HASH_VERIFICATION
# If specified, we won't use the Poetry lockfile.
# Instead, we'll just install what a regular `pip install` would from PyPI.
ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
# This silences a warning as uv isn't able to do hardlinks between its cache
# (mounted as --mount=type=cache) and the target directory.
ENV UV_LINK_MODE=copy
# Export the dependencies, but only if we're actually going to use the Poetry lockfile.
# Otherwise, just create an empty requirements file so that the Dockerfile can
# proceed.
ARG POETRY_VERSION
RUN --mount=type=cache,target=/root/.cache/uv \
if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
uvx --with poetry-plugin-export==1.8.0 \
poetry@${POETRY_VERSION} export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
else \
touch /synapse/requirements.txt; \
fi
### ###
### Stage 1: builder ### Stage 1: builder
### ###
FROM docker.io/python:${PYTHON_VERSION}-slim as builder FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS builder
# install the OS build deps # This silences a warning as uv isn't able to do hardlinks between its cache
RUN \ # (mounted as --mount=type=cache) and the target directory.
--mount=type=cache,target=/var/cache/apt,sharing=locked \ ENV UV_LINK_MODE=copy
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \ # Install rust and ensure its in the PATH
build-essential \ ENV RUSTUP_HOME=/rust
libffi-dev \ ENV CARGO_HOME=/cargo
libjpeg-dev \ ENV PATH=/cargo/bin:/rust/bin:$PATH
libpq-dev \ RUN mkdir /rust /cargo
libssl-dev \
libwebp-dev \ RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable --profile minimal
libxml++2.6-dev \
libxslt1-dev \ # arm64 builds consume a lot of memory if `CARGO_NET_GIT_FETCH_WITH_CLI` is not
openssl \ # set to true, so we expose it as a build-arg.
rustc \ ARG CARGO_NET_GIT_FETCH_WITH_CLI=false
zlib1g-dev \ ENV CARGO_NET_GIT_FETCH_WITH_CLI=$CARGO_NET_GIT_FETCH_WITH_CLI
&& rm -rf /var/lib/apt/lists/*
# To speed up rebuilds, install all of the dependencies before we copy over # To speed up rebuilds, install all of the dependencies before we copy over
# the whole synapse project, so that this layer in the Docker cache can be # the whole synapse project, so that this layer in the Docker cache can be
...@@ -92,45 +90,100 @@ RUN \ ...@@ -92,45 +90,100 @@ RUN \
# #
# This is aiming at installing the `[tool.poetry.depdendencies]` from pyproject.toml. # This is aiming at installing the `[tool.poetry.depdendencies]` from pyproject.toml.
COPY --from=requirements /synapse/requirements.txt /synapse/ COPY --from=requirements /synapse/requirements.txt /synapse/
RUN --mount=type=cache,target=/root/.cache/pip \ RUN --mount=type=cache,target=/root/.cache/uv \
pip install --prefix="/install" --no-deps --no-warn-script-location -r /synapse/requirements.txt uv pip install --prefix="/install" --no-deps -r /synapse/requirements.txt
# Copy over the rest of the synapse source code. # Copy over the rest of the synapse source code.
COPY synapse /synapse/synapse/ COPY synapse /synapse/synapse/
COPY rust /synapse/rust/
# ... and what we need to `pip install`. # ... and what we need to `pip install`.
# TODO: once pyproject.toml declares poetry-core as its build system, we'll need to copy COPY pyproject.toml README.rst build_rust.py Cargo.toml Cargo.lock /synapse/
# pyproject.toml here, ditching setup.py and MANIFEST.in.
COPY setup.py MANIFEST.in README.rst /synapse/ # Repeat of earlier build argument declaration, as this is a new build stage.
ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
# Install the synapse package itself. # Install the synapse package itself.
RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse # If we have populated requirements.txt, we don't install any dependencies
# as we should already have those from the previous `pip install` step.
RUN \
--mount=type=cache,target=/root/.cache/uv \
--mount=type=cache,target=/synapse/target,sharing=locked \
--mount=type=cache,target=${CARGO_HOME}/registry,sharing=locked \
if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
uv pip install --prefix="/install" --no-deps /synapse[all]; \
else \
uv pip install --prefix="/install" /synapse[all]; \
fi
### ###
### Stage 2: runtime ### Stage 2: runtime dependencies download for ARM64 and AMD64
### ###
FROM --platform=$BUILDPLATFORM docker.io/library/debian:${DEBIAN_VERSION} AS runtime-deps
FROM docker.io/python:${PYTHON_VERSION}-slim # Tell apt to keep downloaded package files, as we're using cache mounts.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse' # Add both target architectures
LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md' RUN dpkg --add-architecture arm64
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git' RUN dpkg --add-architecture amd64
LABEL org.opencontainers.image.licenses='Apache-2.0'
# Fetch the runtime dependencies debs for both architectures
# We do that by building a recursive list of packages we need to download with `apt-cache depends`
# and then downloading them with `apt-get download`.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
apt-get install -y --no-install-recommends rsync && \
apt-cache depends --recurse --no-recommends --no-suggests --no-conflicts --no-breaks --no-replaces --no-enhances --no-pre-depends \
curl \
gosu \
libjpeg62-turbo \
libpq5 \
libwebp7 \
xmlsec1 \
libjemalloc2 \
libicu \
| grep '^\w' > /tmp/pkg-list && \
for arch in arm64 amd64; do \
mkdir -p /tmp/debs-${arch} && \
cd /tmp/debs-${arch} && \
apt-get download $(sed "s/$/:${arch}/" /tmp/pkg-list); \
done
# Extract the debs for each architecture
# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the
# libraries to the right place, else the `COPY` won't work.
# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is
# already present in the runtime image.
RUN \ RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \ for arch in arm64 amd64; do \
--mount=type=cache,target=/var/lib/apt,sharing=locked \ mkdir -p /install-${arch}/var/lib/dpkg/status.d/ && \
apt-get update && apt-get install -y \ for deb in /tmp/debs-${arch}/*.deb; do \
curl \ package_name=$(dpkg-deb -I ${deb} | awk '/^ Package: .*$/ {print $2}'); \
gosu \ echo "Extracting: ${package_name}"; \
libjpeg62-turbo \ dpkg --ctrl-tarfile $deb | tar -Ox ./control > /install-${arch}/var/lib/dpkg/status.d/${package_name}; \
libpq5 \ dpkg --extract $deb /install-${arch}; \
libwebp6 \ done; \
xmlsec1 \ rsync -avr /install-${arch}/lib/ /install-${arch}/usr/lib; \
libjemalloc2 \ rm -rf /install-${arch}/lib /install-${arch}/lib64; \
libssl-dev \ done
openssl \
&& rm -rf /var/lib/apt/lists/*
###
### Stage 3: runtime
###
FROM docker.io/library/python:${PYTHON_VERSION}-slim-${DEBIAN_VERSION}
ARG TARGETARCH
LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
LABEL org.opencontainers.image.documentation='https://github.com/element-hq/synapse/blob/master/docker/README.md'
LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later'
COPY --from=runtime-deps /install-${TARGETARCH} /
COPY --from=builder /install /usr/local COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py COPY ./docker/start.py /start.py
COPY ./docker/conf /conf COPY ./docker/conf /conf
...@@ -140,4 +193,4 @@ EXPOSE 8008/tcp 8009/tcp 8448/tcp ...@@ -140,4 +193,4 @@ EXPOSE 8008/tcp 8009/tcp 8448/tcp
ENTRYPOINT ["/start.py"] ENTRYPOINT ["/start.py"]
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \ HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1 CMD curl -fSs http://localhost:8008/health || exit 1
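For orientation, a hedged example of invoking this Dockerfile with the build arguments it declares; the tag and the argument values are illustrative.

```sh
# BuildKit is required for the RUN --mount cache directives used above.
DOCKER_BUILDKIT=1 docker build \
    --build-arg PYTHON_VERSION=3.12 \
    --build-arg CARGO_NET_GIT_FETCH_WITH_CLI=true \
    -t matrixdotorg/synapse -f docker/Dockerfile .
```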
...@@ -24,20 +24,22 @@ ARG distro="" ...@@ -24,20 +24,22 @@ ARG distro=""
# https://launchpad.net/~jyrki-pulliainen/+archive/ubuntu/dh-virtualenv, but # https://launchpad.net/~jyrki-pulliainen/+archive/ubuntu/dh-virtualenv, but
# it's not obviously easier to use that than to build our own.) # it's not obviously easier to use that than to build our own.)
FROM ${distro} as builder FROM docker.io/library/${distro} AS builder
RUN apt-get update -qq -o Acquire::Languages=none RUN apt-get update -qq -o Acquire::Languages=none
RUN env DEBIAN_FRONTEND=noninteractive apt-get install \ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends \ -yqq --no-install-recommends \
build-essential \ build-essential \
ca-certificates \ ca-certificates \
devscripts \ devscripts \
equivs \ equivs \
wget wget
# fetch and unpack the package # fetch and unpack the package
# We are temporarily using a fork of dh-virtualenv due to an incompatibility with Python 3.11, which ships with
# Debian sid. TODO: Switch back to upstream once https://github.com/spotify/dh-virtualenv/pull/354 has merged.
RUN mkdir /dh-virtualenv RUN mkdir /dh-virtualenv
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/refs/tags/1.2.2.tar.gz RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/matrix-org/dh-virtualenv/archive/refs/tags/matrixorg-2023010302.tar.gz
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz
# install its build deps. We do another apt-cache-update here, because we might # install its build deps. We do another apt-cache-update here, because we might
...@@ -53,37 +55,47 @@ RUN cd /dh-virtualenv && DEB_BUILD_OPTIONS=nodoc dpkg-buildpackage -us -uc -b ...@@ -53,37 +55,47 @@ RUN cd /dh-virtualenv && DEB_BUILD_OPTIONS=nodoc dpkg-buildpackage -us -uc -b
### ###
### Stage 1 ### Stage 1
### ###
FROM ${distro} FROM docker.io/library/${distro}
# Get the distro we want to pull from as a dynamic build variable # Get the distro we want to pull from as a dynamic build variable
# (We need to define it in each build stage) # (We need to define it in each build stage)
ARG distro="" ARG distro=""
ENV distro ${distro} ENV distro ${distro}
# Python < 3.7 assumes LANG="C" means ASCII-only and throws on printing unicode
# http://bugs.python.org/issue19846
ENV LANG C.UTF-8
# Install the build dependencies # Install the build dependencies
# #
# NB: keep this list in sync with the list of build-deps in debian/control # NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically. # TODO: it would be nice to do that automatically.
RUN apt-get update -qq -o Acquire::Languages=none \ RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \ && env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \ -yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
build-essential \ build-essential \
debhelper \ curl \
devscripts \ debhelper \
libsystemd-dev \ devscripts \
lsb-release \ # Required for building cffi from source.
pkg-config \ libffi-dev \
python3-dev \ libsystemd-dev \
python3-pip \ lsb-release \
python3-setuptools \ pkg-config \
python3-venv \ python3-dev \
sqlite3 \ python3-pip \
libpq-dev \ python3-setuptools \
xmlsec1 python3-venv \
sqlite3 \
libpq-dev \
libicu-dev \
pkg-config \
xmlsec1
# Install rust and ensure it's in the PATH
ENV RUSTUP_HOME=/rust
ENV CARGO_HOME=/cargo
ENV PATH=/cargo/bin:/rust/bin:$PATH
RUN mkdir /rust /cargo
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable --profile minimal
COPY --from=builder /dh-virtualenv_1.2.2-1_all.deb / COPY --from=builder /dh-virtualenv_1.2.2-1_all.deb /
......
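A sketch of how this builder image might be invoked; the `distro` value and the image tag are assumptions chosen for illustration, matching the `distro` build argument declared above.

```sh
# Build the dh-virtualenv-based package builder for a particular release.
docker build \
    --build-arg distro=debian:bookworm \
    -t synapse-deb-builder:bookworm \
    -f docker/Dockerfile-dhvirtualenv .
```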
# Inherit from the official Synapse docker image # syntax=docker/dockerfile:1
FROM matrixdotorg/synapse
# Install deps ARG SYNAPSE_VERSION=latest
RUN apt-get update ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
RUN apt-get install -y supervisor redis nginx
# Remove the default nginx sites # first of all, we create a base image with an nginx which we can copy into the
RUN rm /etc/nginx/sites-enabled/default # target image. For repeated rebuilds, this is much faster than apt installing
# each time.
# Copy Synapse worker, nginx and supervisord configuration template files FROM docker.io/library/debian:bookworm-slim AS deps_base
COPY ./docker/conf-workers/* /conf/ RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
redis-server nginx-light
# Expose nginx listener port # Similarly, a base to copy the redis server from.
EXPOSE 8080/tcp #
# The redis docker image has fewer dynamic libraries than the debian package,
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc.
FROM docker.io/library/redis:7-bookworm AS redis_base
# A script to read environment variables and create the necessary # now build the final image, based on the the regular Synapse docker image
# files to run the desired worker configuration. Will start supervisord. FROM $FROM
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \ # Install supervisord with pip instead of apt, to avoid installing a second
CMD /bin/sh /healthcheck.sh # copy of python.
RUN --mount=type=cache,target=/root/.cache/pip \
pip install supervisor~=4.2
RUN mkdir -p /etc/supervisor/conf.d
# Copy over redis and nginx
COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
COPY --from=deps_base /usr/sbin/nginx /usr/sbin
COPY --from=deps_base /usr/share/nginx /usr/share/nginx
COPY --from=deps_base /usr/lib/nginx /usr/lib/nginx
COPY --from=deps_base /etc/nginx /etc/nginx
RUN rm /etc/nginx/sites-enabled/default
RUN mkdir /var/log/nginx /var/lib/nginx
RUN chown www-data /var/lib/nginx
# have nginx log to stderr/out
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
# Copy a script to prefix log lines with the supervisor program name
COPY ./docker/prefix-log /usr/local/bin/
# Expose nginx listener port
EXPOSE 8080/tcp
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
# Replace the healthcheck with one which checks *all* the workers. The script
# is generated by configure_workers_and_start.py.
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD /bin/sh /healthcheck.sh
@@ -8,79 +8,54 @@ docker images that can be run inside Complement for testing purposes.
 Note that running Synapse's unit tests from within the docker image is not supported.

-## Testing with SQLite and single-process Synapse
+## Using the Complement launch script

-> Note that `scripts-dev/complement.sh` is a script that will automatically build
-> and run an SQLite-based, single-process of Synapse against Complement.
+`scripts-dev/complement.sh` is a script that will automatically build
+and run Synapse against Complement.
+Consult the [contributing guide][guideComplementSh] for instructions on how to use it.

-The instructions below will set up Complement testing for a single-process,
-SQLite-based Synapse deployment.
+[guideComplementSh]: https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-integration-tests-complement

-Start by building the base Synapse docker image. If you wish to run tests with the latest
-release of Synapse, instead of your current checkout, you can skip this step. From the
-root of the repository:
-
-```sh
-docker build -t matrixdotorg/synapse -f docker/Dockerfile .
-```
-
-This will build an image with the tag `matrixdotorg/synapse`.
-
-Next, build the Synapse image for Complement.
-
-```sh
-docker build -t complement-synapse -f "docker/complement/Dockerfile" docker/complement
-```
-
-This will build an image with the tag `complement-synapse`, which can be handed to
-Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
-[Complement's documentation](https://github.com/matrix-org/complement/#running) for
-how to run the tests, as well as the various available command line flags.
-
-## Testing with PostgreSQL and single or multi-process Synapse
-
-The above docker image only supports running Synapse with SQLite and in a
-single-process topology. The following instructions are used to build a Synapse image for
-Complement that supports either single or multi-process topology with a PostgreSQL
-database backend.
-
-As with the single-process image, build the base Synapse docker image. If you wish to run
-tests with the latest release of Synapse, instead of your current checkout, you can skip
-this step. From the root of the repository:
+## Building and running the images manually
+
+Under some circumstances, you may wish to build the images manually.
+The instructions below will lead you to doing that.
+
+Note that these images can only be built using [BuildKit](https://docs.docker.com/develop/develop-images/build_enhancements/),
+therefore BuildKit needs to be enabled when calling `docker build`. This can be done by
+setting `DOCKER_BUILDKIT=1` in your environment.
+
+Start by building the base Synapse docker image. If you wish to run tests with the latest
+release of Synapse, instead of your current checkout, you can skip this step. From the
+root of the repository:

 ```sh
 docker build -t matrixdotorg/synapse -f docker/Dockerfile .
 ```

 This will build an image with the tag `matrixdotorg/synapse`.

-Next, we build a new image with worker support based on `matrixdotorg/synapse:latest`.
-Again, from the root of the repository:
+Next, build the workerised Synapse docker image, which is a layer over the base
+image.

 ```sh
 docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
 ```

-This will build an image with the tag` matrixdotorg/synapse-workers`.
-
-It's worth noting at this point that this image is fully functional, and
-can be used for testing against locally. See instructions for using the container
-under
-[Running the Dockerfile-worker image standalone](#running-the-dockerfile-worker-image-standalone)
-below.
-
-Finally, build the Synapse image for Complement, which is based on
-`matrixdotorg/synapse-workers`.
+Finally, build the multi-purpose image for Complement, which is a layer over the workers image.

 ```sh
-docker build -t matrixdotorg/complement-synapse-workers -f docker/complement/SynapseWorkers.Dockerfile docker/complement
+docker build -t complement-synapse -f docker/complement/Dockerfile docker/complement
 ```

-This will build an image with the tag `complement-synapse-workers`, which can be handed to
+This will build an image with the tag `complement-synapse`, which can be handed to
 Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
 [Complement's documentation](https://github.com/matrix-org/complement/#running) for
 how to run the tests, as well as the various available command line flags.

+See [the Complement image README](./complement/README.md) for information about the
+expected environment variables.
+
 ## Running the Dockerfile-worker image standalone
For manual testing of a multi-process Synapse instance in Docker, For manual testing of a multi-process Synapse instance in Docker,
@@ -113,6 +88,9 @@ docker run -d --name synapse \
 ...substituting `POSTGRES*` variables for those that match a postgres host you have
 available (usually a running postgres docker container).

+### Workers
+
 The `SYNAPSE_WORKER_TYPES` environment variable is a comma-separated list of workers to
 use when running the container. All possible worker names are defined by the keys of the
 `WORKERS_CONFIG` variable in [this script](configure_workers_and_start.py), which the
@@ -125,8 +103,11 @@ type, simply specify the type multiple times in `SYNAPSE_WORKER_TYPES`
 (e.g `SYNAPSE_WORKER_TYPES=event_creator,event_creator...`).
 Otherwise, `SYNAPSE_WORKER_TYPES` can either be left empty or unset to spawn no workers
-(leaving only the main process). The container is configured to use redis-based worker
-mode.
+(leaving only the main process).
+The container will only be configured to use Redis-based worker mode if there are
+workers enabled.
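For example (a sketch only: the volume, server name and worker list are illustrative; valid worker names are the `WORKERS_CONFIG` keys mentioned above):

```sh
docker run -d --name synapse-workers \
    -v synapse-data:/data \
    -e SYNAPSE_SERVER_NAME=my.matrix.host \
    -e SYNAPSE_WORKER_TYPES="event_creator,event_creator,federation_sender" \
    matrixdotorg/synapse-workers
```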
### Logging
 Logs for workers and the main process are logged to stdout and can be viewed with
 standard `docker logs` tooling. Worker logs contain their worker name
@@ -136,3 +117,21 @@ Setting `SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1` will cause worker logs to be writ
 `<data_dir>/logs/<worker_name>.log`. Logs are kept for 1 week and rotate every day at 00:
 00, according to the container's clock. Logging for the main process must still be
 configured by modifying the homeserver's log config in your Synapse data volume.
### Application Services
Setting the `SYNAPSE_AS_REGISTRATION_DIR` environment variable to the path of
a directory (within the container) will cause the configuration script to scan
that directory for `.yaml`/`.yml` registration files.
Synapse will be configured to load these configuration files.
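A sketch of wiring this up (the host path and container path are illustrative):

```sh
docker run -d --name synapse-workers \
    -v /etc/synapse/appservices:/appservices \
    -e SYNAPSE_AS_REGISTRATION_DIR=/appservices \
    matrixdotorg/synapse-workers
```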
### TLS Termination
Nginx is present in the image to route requests to the appropriate workers,
but it does not serve TLS by default.
You can configure `SYNAPSE_TLS_CERT` and `SYNAPSE_TLS_KEY` to point to a
TLS certificate and key (respectively), both in PEM (textual) format.
In this case, Nginx will additionally serve using HTTPS on port 8448.
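A sketch of enabling this (certificate paths and file names are illustrative):

```sh
docker run -d --name synapse-workers \
    -v /etc/ssl/synapse:/certs \
    -e SYNAPSE_TLS_CERT=/certs/fullchain.pem \
    -e SYNAPSE_TLS_KEY=/certs/privkey.pem \
    -p 8448:8448 \
    matrixdotorg/synapse-workers
```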