### `templates`

The `custom_template_directory` setting determines which directory Synapse will try to
find template files in to use to generate email or HTML page contents.

If not set, or a file is not found within the template directory, a default
template from within the Synapse package will be used.

See [here](../../templates.md) for more
information about using custom templates.
    
    Example configuration:
    ```yaml
    templates:
      custom_template_directory: /path/to/custom/templates/
    ```
    ---
    
    
### `retention`

This option and the associated options determine message retention policy at the
server level.

Room admins and mods can define a retention period for their rooms using the
`m.room.retention` state event, and server admins can cap this period by setting
the `allowed_lifetime_min` and `allowed_lifetime_max` config options.

If this feature is enabled, Synapse will regularly look for and purge events
which are older than the room's maximum retention period. Synapse will also
filter events received over federation so that events that should have been
purged are ignored and not stored again.
    
    The message retention policies feature is disabled by default. You can read more
    about this feature [here](../../message_retention_policies.md).
    
    
This setting has the following sub-options:
* `default_policy`: Default retention policy. If set, Synapse will apply it to rooms that lack the
   `m.room.retention` state event. This option is further specified by the
   `min_lifetime` and `max_lifetime` sub-options associated with it. Note that the
   value of `min_lifetime` doesn't matter much because Synapse doesn't take it into account yet.

* `allowed_lifetime_min` and `allowed_lifetime_max`: Retention policy limits. If
   set, and a room's `m.room.retention` state event contains a `min_lifetime` or
   a `max_lifetime` that's out of these bounds, Synapse will cap the room's
   policy to these limits when running purge jobs.
    
    * `purge_jobs` and the associated `shortest_max_lifetime` and `longest_max_lifetime` sub-options:
       Server admins can define the settings of the background jobs purging the
       events whose lifetime has expired under the `purge_jobs` section.
    
      If no configuration is provided for this option, a single job will be set up to delete
      expired events in every room daily.
    
      Each job's configuration defines which range of message lifetimes the job
      takes care of. For example, if `shortest_max_lifetime` is '2d' and
      `longest_max_lifetime` is '3d', the job will handle purging expired events in
      rooms whose state defines a `max_lifetime` that's both higher than 2 days, and
      lower than or equal to 3 days. Both the minimum and the maximum value of a
      range are optional, e.g. a job with no `shortest_max_lifetime` and a
      `longest_max_lifetime` of '3d' will handle every room with a retention policy
      whose `max_lifetime` is lower than or equal to three days.
    
      The rationale for this per-job configuration is that some rooms might have a
      retention policy with a low `max_lifetime`, where history needs to be purged
      of outdated messages on a more frequent basis than for the rest of the rooms
      (e.g. every 12h), but not want that purge to be performed by a job that's
      iterating over every room it knows, which could be heavy on the server.
    
      If any purge job is configured, it is strongly recommended to have at least
      a single job with neither `shortest_max_lifetime` nor `longest_max_lifetime`
      set, or one job without `shortest_max_lifetime` and one job without
      `longest_max_lifetime` set. Otherwise some rooms might be ignored, even if
      `allowed_lifetime_min` and `allowed_lifetime_max` are set, because capping a
      room's policy to these values is done after the policies are retrieved from
      Synapse's database (which is done using the range specified in a purge job's
      configuration).
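The range semantics above can be sketched as a simple predicate (a hypothetical helper for illustration, not part of Synapse; lifetimes are in seconds):

```python
def job_handles_room(shortest_max_lifetime, longest_max_lifetime, room_max_lifetime):
    """Return True if a purge job with the given bounds covers a room.

    A job covers a room when the room's max_lifetime is strictly greater
    than shortest_max_lifetime and lower than or equal to
    longest_max_lifetime. Either bound may be None, meaning unbounded.
    """
    if shortest_max_lifetime is not None and room_max_lifetime <= shortest_max_lifetime:
        return False
    if longest_max_lifetime is not None and room_max_lifetime > longest_max_lifetime:
        return False
    return True

DAY = 86400
# A job with longest_max_lifetime of 3d covers a room with max_lifetime of 2d...
assert job_handles_room(None, 3 * DAY, 2 * DAY)
# ...but not a room with max_lifetime of 4d.
assert not job_handles_room(None, 3 * DAY, 4 * DAY)
```

This is why complementary unbounded jobs are recommended: any room whose `max_lifetime` falls outside every job's range is simply never purged.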
    
    Example configuration:
    ```yaml
    retention:
      enabled: true
      default_policy:
        min_lifetime: 1d
        max_lifetime: 1y
      allowed_lifetime_min: 1d
      allowed_lifetime_max: 1y
      purge_jobs:
        - longest_max_lifetime: 3d
          interval: 12h
    - shortest_max_lifetime: 3d
      interval: 1d
```
---
### `tls_certificate_path`

This option specifies a PEM-encoded X509 certificate for TLS.
This certificate, as of Synapse 1.0, will need to be a valid and verifiable
certificate, signed by a recognised Certificate Authority. Defaults to none.

Be sure to use a `.pem` file that includes the full certificate chain including
any intermediate certificates (for instance, if using certbot, use
`fullchain.pem` as your certificate, not `cert.pem`).
    
    
    Example configuration:
    ```yaml
    tls_certificate_path: "CONFDIR/SERVERNAME.tls.crt"
    ```
    ---
    
### `tls_private_key_path`

PEM-encoded private key for TLS. Defaults to none.
    
    
    Example configuration:
    ```yaml
    tls_private_key_path: "CONFDIR/SERVERNAME.tls.key"
    ```
    ---
    
### `federation_verify_certificates`

Whether to verify TLS server certificates for outbound federation requests.
Defaults to true. To disable certificate verification, set the option to false.
    
    Example configuration:
    ```yaml
    federation_verify_certificates: false
    ```
    ---
    
    ### `federation_client_minimum_tls_version`
    
    
The minimum TLS version that will be used for outbound federation requests.

Defaults to `"1"`. Configurable to `"1"`, `"1.1"`, `"1.2"`, or `"1.3"`. Note
that setting this value higher than `"1.2"` will prevent federation to most
of the public Matrix network: only configure it to `"1.3"` if you have an
entirely private federation setup and you can ensure TLS 1.3 support.

Example configuration:
```yaml
federation_client_minimum_tls_version: "1.2"
```
---
    ### `federation_certificate_verification_whitelist`
    
    
    Skip federation certificate verification on a given whitelist
    of domains.
    
    This setting should only be used in very specific cases, such as
    federation over Tor hidden services and similar. For private networks
    of homeservers, you likely want to use a private CA instead.
    
    
    Only effective if `federation_verify_certificates` is `true`.
    
    
    Example configuration:
    ```yaml
    federation_certificate_verification_whitelist:
      - lon.example.com
      - "*.domain.com"
      - "*.onion"
    ```
    ---
    
    
### `federation_custom_ca_list`

List of custom certificate authorities for federation traffic.
    
    This setting should only normally be used within a private network of
    homeservers.
    
    Note that this list will replace those that are provided by your
    operating environment. Certificates must be in PEM format.
    
    Example configuration:
    ```yaml
    federation_custom_ca_list:
      - myCA1.pem
      - myCA2.pem
      - myCA3.pem
    ```
    ---
    
    
## Federation

Options related to federation.
    
    ---
    
    
### `federation_domain_whitelist`

Restrict federation to the given whitelist of domains.
    N.B. we recommend also firewalling your federation listener to limit
    inbound federation traffic as early as possible, rather than relying
    purely on this application-layer restriction.  If not specified, the
    default is to whitelist everything.
    
    
    Note: this does not stop a server from joining rooms that servers not on the
    whitelist are in. As such, this option is really only useful to establish a
    "private federation", where a group of servers all whitelist each other and have
    the same whitelist.
    
    
    Example configuration:
    ```yaml
    federation_domain_whitelist:
      - lon.example.com
      - nyc.example.com
      - syd.example.com
    ```
    ---
    
    
### `federation_metrics_domains`

Report prometheus metrics on the age of PDUs being sent to and received from
    the given domains. This can be used to give an idea of "delay" on inbound
    and outbound federation, though be aware that any delay can be due to problems
    at either end or with the intermediate network.
    
    By default, no domains are monitored in this way.
    
    Example configuration:
    ```yaml
    federation_metrics_domains:
      - matrix.org
      - example.com
    ```
    ---
    
    ### `allow_profile_lookup_over_federation`
    
    
    Set to false to disable profile lookup over federation. By default, the
    Federation API allows other homeservers to obtain profile data of any user
    on this homeserver.
    
    Example configuration:
    ```yaml
    allow_profile_lookup_over_federation: false
    ```
    ---
    
    ### `allow_device_name_lookup_over_federation`
    
Set this option to true to allow device display name lookup over federation. By default, the
Federation API prevents other homeservers from obtaining the display names of any user devices
on this homeserver.

Example configuration:
```yaml
allow_device_name_lookup_over_federation: true
```
---
    ### `federation`
    
    The federation section defines some sub-options related to federation.
    
The following options are related to configuring timeout and retry logic for one request,
independently of the others.
The short retry algorithm is used when something or someone will wait for the request to have an
answer, while the long retry algorithm is used for requests that happen in the background,
like sending a federation transaction.

* `client_timeout`: timeout for federation requests. Defaults to 60s.
* `max_short_retry_delay`: maximum delay to be used for the short retry algo. Defaults to 2s.
* `max_long_retry_delay`: maximum delay to be used for the long retry algo. Defaults to 60s.
* `max_short_retries`: maximum number of retries for the short retry algo. Defaults to 3 attempts.
* `max_long_retries`: maximum number of retries for the long retry algo. Defaults to 10 attempts.
    
    
    The following options control the retry logic when communicating with a specific homeserver destination.
    Unlike the previous configuration options, these values apply across all requests
    for a given destination and the state of the backoff is stored in the database.
    
    * `destination_min_retry_interval`: the initial backoff, after the first request fails. Defaults to 10m.
    * `destination_retry_multiplier`: how much we multiply the backoff by after each subsequent fail. Defaults to 2.
    * `destination_max_retry_interval`: a cap on the backoff. Defaults to a week.
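The destination backoff described above amounts to capped exponential growth. A minimal sketch of that arithmetic (illustrative only, not Synapse's actual implementation; intervals in seconds):

```python
def destination_backoff(min_retry_interval, retry_multiplier, max_retry_interval, failures):
    """Backoff applied to a destination after `failures` consecutive
    failed requests: start at the minimum interval, multiply after each
    subsequent failure, and cap at the maximum interval."""
    interval = min_retry_interval * retry_multiplier ** (failures - 1)
    return min(interval, max_retry_interval)

# With the defaults (10m initial, x2 multiplier, one-week cap):
assert destination_backoff(600, 2, 604800, 1) == 600      # 10m after the first failure
assert destination_backoff(600, 2, 604800, 2) == 1200     # 20m after the second
assert destination_backoff(600, 2, 604800, 20) == 604800  # capped at one week
```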
    
    
    Example configuration:
    ```yaml
    federation:
      client_timeout: 180s
      max_short_retry_delay: 7s
      max_long_retry_delay: 100s
      max_short_retries: 5
      max_long_retries: 20
    
      destination_min_retry_interval: 30s
      destination_retry_multiplier: 5
  destination_max_retry_interval: 12h
```
---

## Caching

Options related to caching.
    
### `event_cache_size`

The number of events to cache in memory. Defaults to 10K. Like other caches,
this is affected by `caches.global_factor` (see below).

Note that this option is not part of the `caches` section.
    
    
    Example configuration:
    ```yaml
    event_cache_size: 15K
    ```
    ---
    
    ### `caches` and associated values
    
    
    A cache 'factor' is a multiplier that can be applied to each of
    Synapse's caches in order to increase or decrease the maximum
    number of entries that can be stored.
    
    
    `caches` can be configured through the following sub-options:
    
    
    * `global_factor`: Controls the global cache factor, which is the default cache factor
      for all caches if a specific factor for that cache is not otherwise
      set.
    
      This can also be set by the `SYNAPSE_CACHE_FACTOR` environment
      variable. Setting by environment variable takes priority over
      setting through the config file.
    
      Defaults to 0.5, which will halve the size of all caches.
    
    * `per_cache_factors`: A dictionary of cache name to cache factor for that individual
       cache. Overrides the global cache factor for a given cache.
    
       These can also be set through environment variables comprised
       of `SYNAPSE_CACHE_FACTOR_` + the name of the cache in capital
       letters and underscores. Setting by environment variable
       takes priority over setting through the config file.
       Ex. `SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0`
    
       Some caches have '*' and other characters that are not
       alphanumeric or underscores. These caches can be named with or
       without the special characters stripped. For example, to specify
       the cache factor for `*stateGroupCache*` via an environment
       variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
    
* `expire_caches`: Controls whether cache entries are evicted after a specified time
   period. Defaults to true. Set to false to disable this feature. Note that never expiring
   caches may result in excessive memory usage.
    
    
    * `cache_entry_ttl`: If `expire_caches` is enabled, this flag controls how long an entry can
      be in a cache without having been accessed before being evicted.
    
    
    * `sync_response_cache_duration`: Controls how long the results of a /sync request are
      cached for after a successful response is returned. A higher duration can help clients
      with intermittent connections, at the cost of higher memory usage.
    
      A value of zero means that sync responses are not cached.
      Defaults to 2m.
    
    
      *Changed in Synapse 1.62.0*: The default was changed from 0 to 2m.
    
    
* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
   `min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
   usage and cache entry availability. You must be using [jemalloc](../administration/admin_faq.md#help-synapse-is-slow-and-eats-all-my-ramcpu)
   to utilize this option, and all three of the options must be specified for this feature to work. This option
   defaults to off; enable it by providing values for the sub-options listed below. Please note that the feature will not work
   and may cause unstable behavior (such as excessive emptying of caches or exceptions) if all of the values are not provided.
   Please see the [Config Conventions](#config-conventions) for information on how to specify memory size and cache expiry
   durations.
    
     * `max_cache_memory_usage` sets a ceiling on how much memory the caches can use before they begin to be continuously evicted.
        They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
        the setting below, or until the `min_cache_ttl` is hit. There is no default value for this option.

     * `target_cache_memory_usage` sets a rough target for the desired memory usage of the caches. There is no default value
        for this option.

     * `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
        caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
        from being emptied while Synapse is evicting due to memory. There is no default value for this option.
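The environment-variable naming rule for per-cache factors (strip characters that are not alphanumeric or underscores, then uppercase) can be sketched as follows (an illustrative helper, not Synapse code):

```python
import re

def cache_factor_env_var(cache_name):
    """Derive the SYNAPSE_CACHE_FACTOR_* variable name for a cache:
    drop characters that are not alphanumeric or underscores, then
    uppercase what remains."""
    stripped = re.sub(r"[^A-Za-z0-9_]", "", cache_name)
    return "SYNAPSE_CACHE_FACTOR_" + stripped.upper()

assert cache_factor_env_var("get_users_who_share_room_with_user") == \
    "SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER"
assert cache_factor_env_var("*stateGroupCache*") == "SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE"
```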
    
    
    Example configuration:
    ```yaml
    
    caches:
      global_factor: 1.0
      per_cache_factors:
        get_users_who_share_room_with_user: 2.0
      sync_response_cache_duration: 2m
    
      cache_autotuning:
        max_cache_memory_usage: 1024M
        target_cache_memory_usage: 758M
    min_cache_ttl: 5m
```
    
    ### Reloading cache factors
    
    The cache factors (i.e. `caches.global_factor` and `caches.per_cache_factors`)  may be reloaded at any time by sending a
    [`SIGHUP`](https://en.wikipedia.org/wiki/SIGHUP) signal to Synapse using e.g.
    
    ```commandline
    kill -HUP [PID_OF_SYNAPSE_PROCESS]
    ```
    
    
If you are running multiple workers, you must individually update the worker
config file and send this signal to each worker process.
    
    If you're using the [example systemd service](https://github.com/matrix-org/synapse/blob/develop/contrib/systemd/matrix-synapse.service)
    file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
    `systemctl reload matrix-synapse`.
    
    
## Database

Config options related to database settings.
    
    ---
    
    
### `database`

The `database` setting defines the database that Synapse uses to store all of
its data.
    
    Associated sub-options:
    
* `name`: this option specifies the database engine to use: either `sqlite3` (for SQLite)
  or `psycopg2` (for PostgreSQL). If no name is specified Synapse will default to SQLite.
    
    
    * `txn_limit` gives the maximum number of transactions to run per connection
      before reconnecting. Defaults to 0, which means no limit.
    
    * `allow_unsafe_locale` is an option specific to Postgres. Under the default behavior, Synapse will refuse to
      start if the postgres db is set to a non-C locale. You can override this behavior (which is *not* recommended)
      by setting `allow_unsafe_locale` to true. Note that doing so may corrupt your database. You can find more information
      [here](../../postgres.md#fixing-incorrect-collate-or-ctype) and [here](https://wiki.postgresql.org/wiki/Locale_data_changes).
    
    * `args` gives options which are passed through to the database engine,
      except for options starting with `cp_`, which are used to configure the Twisted
      connection pool. For a reference to valid arguments, see:
        * for [sqlite](https://docs.python.org/3/library/sqlite3.html#sqlite3.connect)
        * for [postgres](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS)
    
        * for [the connection pool](https://docs.twistedmatrix.com/en/stable/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__)
    
    
    For more information on using Synapse with Postgres,
    see [here](../../postgres.md).
    
Example SQLite configuration:
```yaml
database:
      name: sqlite3
      args:
        database: /path/to/homeserver.db
    ```
    
Example Postgres configuration:
```yaml
database:
      name: psycopg2
      txn_limit: 10000
      args:
        user: synapse_user
        password: secretpassword
        database: synapse
        host: localhost
        port: 5432
        cp_min: 5
        cp_max: 10
    ```
    ---
    
    ### `databases`
    
    The `databases` option allows specifying a mapping between certain database tables and
    database host details, spreading the load of a single Synapse instance across multiple
    database backends. This is often referred to as "database sharding". This option is only
    supported for PostgreSQL database backends.
    
    **Important note:** This is a supported option, but is not currently used in production by the
    Matrix.org Foundation. Proceed with caution and always make backups.
    
    `databases` is a dictionary of arbitrarily-named database entries. Each entry is equivalent
    to the value of the `database` homeserver config option (see above), with the addition of
    a `data_stores` key. `data_stores` is an array of strings that specifies the data store(s)
    (a defined label for a set of tables) that should be stored on the associated database
    backend entry.
    
    The currently defined values for `data_stores` are:
    
    * `"state"`: Database that relates to state groups will be stored in this database.
    
      Specifically, that means the following tables:
      * `state_groups`
      * `state_group_edges`
      * `state_groups_state`
    
      And the following sequences:
      * `state_groups_seq_id`
    
    * `"main"`: All other database tables and sequences.
    
    All databases will end up with additional tables used for tracking database schema migrations
    and any pending background updates. Synapse will create these automatically on startup when checking for
    and/or performing database schema migrations.
    
    To migrate an existing database configuration (e.g. all tables on a single database) to a different
    configuration (e.g. the "main" data store on one database, and "state" on another), do the following:
    
    1. Take a backup of your existing database. Things can and do go wrong and database corruption is no joke!
    2. Ensure all pending database migrations have been applied and background updates have run. The simplest
       way to do this is to use the `update_synapse_database` script supplied with your Synapse installation.
    
       ```sh
       update_synapse_database --database-config homeserver.yaml --run-background-updates
       ```
    
    3. Copy over the necessary tables and sequences from one database to the other. Tables relating to database
       migrations, schemas, schema versions and background updates should **not** be copied.
    
       As an example, say that you'd like to split out the "state" data store from an existing database which
       currently contains all data stores.
    
       Simply copy the tables and sequences defined above for the "state" datastore from the existing database
       to the secondary database. As noted above, additional tables will be created in the secondary database
       when Synapse is started.
    
    4. Modify/create the `databases` option in your `homeserver.yaml` to match the desired database configuration.
    5. Start Synapse. Check that it starts up successfully and that things generally seem to be working.
    6. Drop the old tables that were copied in step 3.
    
    Only one of the options `database` or `databases` may be specified in your config, but not both.
    
    Example configuration:
    
    ```yaml
    databases:
      basement_box:
        name: psycopg2
        txn_limit: 10000
        data_stores: ["main"]
        args:
          user: synapse_user
          password: secretpassword
          database: synapse_main
          host: localhost
          port: 5432
          cp_min: 5
          cp_max: 10
    
      my_other_database:
        name: psycopg2
        txn_limit: 10000
        data_stores: ["state"]
        args:
          user: synapse_user
          password: secretpassword
          database: synapse_state
          host: localhost
          port: 5432
          cp_min: 5
          cp_max: 10
    ```
    ---
    
## Logging

Config options related to logging.

---

### `log_config`

This option specifies a YAML Python logging config file as described
[here](https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema).
    
    
    Example configuration:
    ```yaml
    log_config: "CONFDIR/SERVERNAME.log.config"
    ```
    ---
    
## Ratelimiting

Options related to ratelimiting in Synapse.
    
    
    Each ratelimiting configuration is made of two parameters:
       - `per_second`: number of requests a client can send per second.
       - `burst_count`: number of requests a client can send before being throttled.
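The `per_second` / `burst_count` pair behaves like a token bucket: a client may burst up to `burst_count` requests at once, and its allowance refills at `per_second`. A minimal sketch of those semantics (illustrative only, not Synapse's actual ratelimiter):

```python
class TokenBucket:
    """Token-bucket model of a `per_second` / `burst_count` limit."""

    def __init__(self, per_second, burst_count):
        self.per_second = per_second
        self.burst_count = burst_count
        self.tokens = float(burst_count)
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst_count,
                          self.tokens + (now - self.last) * self.per_second)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(per_second=0.2, burst_count=10)
# The first 10 requests in the same instant are allowed; the 11th is throttled.
results = [bucket.allow(now=0.0) for _ in range(11)]
assert results == [True] * 10 + [False]
# Five seconds later, 0.2/s * 5s = 1 token has refilled, so one more request passes.
assert bucket.allow(now=5.0)
```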
    ---
    
    
    
### `rc_message`

Ratelimiting settings for client messaging.

This is a ratelimiting option for messages that ratelimits sending based on the account the client
is using. It defaults to: `per_second: 0.2`, `burst_count: 10`.
    
    Example configuration:
    ```yaml
    rc_message:
      per_second: 0.5
      burst_count: 15
    ```
    ---
    
    
### `rc_registration`

This option ratelimits registration requests based on the client's IP address.
    
    It defaults to `per_second: 0.17`, `burst_count: 3`.
    
    
    Example configuration:
    ```yaml
    rc_registration:
      per_second: 0.15
      burst_count: 2
    ```
    ---
    
### `rc_registration_token_validity`

This option ratelimits requests to check the validity of registration tokens, based on
the client's IP address.
Defaults to `per_second: 0.1`, `burst_count: 5`.

Example configuration:
```yaml
rc_registration_token_validity:
  per_second: 0.3
  burst_count: 6
```
---
    
### `rc_login`

This option specifies several limits for login:
* `address` ratelimits login requests based on the client's IP
  address. Defaults to `per_second: 0.003`, `burst_count: 5`.

* `account` ratelimits login requests based on the account the
  client is attempting to log into. Defaults to `per_second: 0.003`,
  `burst_count: 5`.

* `failed_attempts` ratelimits login requests based on the account the
  client is attempting to log into, based on the amount of failed login
  attempts for this account. Defaults to `per_second: 0.17`, `burst_count: 3`.
    
    Example configuration:
    ```yaml
    rc_login:
      address:
        per_second: 0.15
        burst_count: 5
      account:
        per_second: 0.18
        burst_count: 4
      failed_attempts:
        per_second: 0.19
        burst_count: 7
    ```
    ---
    
### `rc_admin_redaction`

This option sets ratelimiting for redactions by room admins. If this is not explicitly
set then it uses the same ratelimiting as per `rc_message`. This is useful
to allow room admins to deal with abuse quickly.
    
    
    Example configuration:
    ```yaml
    rc_admin_redaction:
      per_second: 1
      burst_count: 50
    ```
    ---
    
    
### `rc_joins`

This option allows for ratelimiting the number of rooms a user can join. This setting has the following sub-options:

* `local`: ratelimits when users are joining rooms the server is already in.
   Defaults to `per_second: 0.1`, `burst_count: 10`.

* `remote`: ratelimits when users are trying to join rooms not on the server (which
  can be more computationally expensive than restricting locally). Defaults to
  `per_second: 0.01`, `burst_count: 10`.
    
    
    Example configuration:
    ```yaml
    rc_joins:
      local:
        per_second: 0.2
        burst_count: 15
      remote:
        per_second: 0.03
        burst_count: 12
    ```
    
    ---
    ### `rc_joins_per_room`
    
    This option allows admins to ratelimit joins to a room based on the number of recent
    joins (local or remote) to that room. It is intended to mitigate mass-join spam
    waves which target multiple homeservers.
    
    By default, one join is permitted to a room every second, with an accumulating
    buffer of up to ten instantaneous joins.
    
    Example configuration (default values):
    ```yaml
    rc_joins_per_room:
      per_second: 1
      burst_count: 10
    ```
    
    _Added in Synapse 1.64.0._
    
    
    
### `rc_3pid_validation`

This option ratelimits how often a user or IP can attempt to validate a 3PID.
    Defaults to `per_second: 0.003`, `burst_count: 5`.
    
    Example configuration:
    ```yaml
    rc_3pid_validation:
      per_second: 0.003
      burst_count: 5
    ```
    ---
    
### `rc_invites`

This option ratelimits how often invites can be sent in a room or to a
specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10`, and
`per_user` defaults to `per_second: 0.003`, `burst_count: 5`.
    
    Client requests that invite user(s) when [creating a
    room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
    will count against the `rc_invites.per_room` limit, whereas
    client requests to [invite a single user to a
    room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidinvite)
    will count against both the `rc_invites.per_user` and `rc_invites.per_room` limits.
    
    Federation requests to invite a user will count against the `rc_invites.per_user`
    limit only, as Synapse presumes ratelimiting by room will be done by the sending server.
    
The `rc_invites.per_user` limit applies to the *receiver* of the invite, rather than the
sender, meaning that a `rc_invites.per_user.burst_count` of 5 mandates that a single user
cannot *receive* more than a burst of 5 invites at a time.
    
    
In contrast, the `rc_invites.per_issuer` limit applies to the *issuer* of the invite, meaning that a `rc_invites.per_issuer.burst_count` of 5 mandates that a single user cannot *send* more than a burst of 5 invites at a time.
    
    
    _Changed in version 1.63:_ added the `per_issuer` limit.
    
    
    Example configuration:
    ```yaml
    rc_invites:
      per_room:
        per_second: 0.5
        burst_count: 5
      per_user:
        per_second: 0.004
        burst_count: 3
    
      per_issuer:
        per_second: 0.5
    burst_count: 5
```
---

### `rc_third_party_invite`

This option ratelimits 3PID invites (i.e. invites sent to a third-party ID
    such as an email address or a phone number) based on the account that's
    sending the invite. Defaults to `per_second: 0.2`, `burst_count: 10`.
    
    Example configuration:
    ```yaml
    rc_third_party_invite:
      per_second: 0.2
      burst_count: 10
    ```
    ---
    
### `rc_federation`

Defines limits on federation requests.
    
    
    The `rc_federation` configuration has the following sub-options:
    * `window_size`: window size in milliseconds. Defaults to 1000.
    * `sleep_limit`: number of federation requests from a single server in
       a window before the server will delay processing the request. Defaults to 10.
    * `sleep_delay`: duration in milliseconds to delay processing events
       from remote servers by if they go over the sleep limit. Defaults to 500.
    * `reject_limit`: maximum number of concurrent federation requests
       allowed from a single server. Defaults to 50.
    * `concurrent`: number of federation requests to concurrently process
       from a single server. Defaults to 3.
    
    Example configuration:
    ```yaml
    rc_federation:
      window_size: 750
      sleep_limit: 15
      sleep_delay: 400
      reject_limit: 40
      concurrent: 5
    ```
    ---
    
    ### `federation_rr_transactions_per_room_per_second`
    
    
Sets outgoing federation transaction frequency for sending read-receipts,
per-room.

If we end up trying to send out more read-receipts, they will get buffered up
into fewer transactions. Defaults to 50.
    
    
    Example configuration:
    ```yaml
    federation_rr_transactions_per_room_per_second: 40
    ```
    ---
    
## Media Store

Config options related to Synapse's media store.

---

### `enable_media_repo`

Enable the media store service in the Synapse master. Defaults to true.
Set to false if you are using a separate media store worker.
    
    Example configuration:
    ```yaml
    enable_media_repo: false
    ```
    ---
    
    
### `media_store_path`

Directory where uploaded images and attachments are stored.
    
    Example configuration:
    ```yaml
    media_store_path: "DATADIR/media_store"
    ```
    ---
    
    
### `media_storage_providers`

Media storage providers allow media to be stored in different
locations. Defaults to none. Associated sub-options are:
* `module`: type of resource, e.g. `file_system`.
* `store_local`: whether to store newly uploaded local files.
* `store_remote`: whether to store newly downloaded remote files.
* `store_synchronous`: whether to wait for successful storage for local uploads.
* `config`: sets a path to the resource through the `directory` option.
    
    
    Example configuration:
    ```yaml
    media_storage_providers:
      - module: file_system
        store_local: false
        store_remote: false
        store_synchronous: false
        config:
           directory: /mnt/some/other/directory
    ```
    ---
    
    
### `max_upload_size`

The largest allowed upload size in bytes.

If you are using a reverse proxy you may also need to set this value in
your reverse proxy's config; notably, Nginx defaults to a maximum request
body size of only 1M (`client_max_body_size`). Defaults to 50M.
    
    See [here](../../reverse_proxy.md) for more on using a reverse proxy with Synapse.
    
    
    Example configuration:
    ```yaml
    max_upload_size: 60M
    ```
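Suffixed size values like `50M` appear throughout this file. A hedged sketch of how such strings could be interpreted (1024-based units are assumed here; consult the Synapse source for the authoritative parsing rules):

```python
# Assumed 1024-based multipliers; the real parser may differ.
SIZE_MULTIPLIERS = {"K": 1024, "M": 1024 * 1024}

def parse_size(value):
    """Return a byte count for an int or a string like '60M' or '128K'."""
    if isinstance(value, int):
        return value
    suffix = value[-1].upper()
    if suffix in SIZE_MULTIPLIERS:
        return int(value[:-1]) * SIZE_MULTIPLIERS[suffix]
    return int(value)
```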
    ---
    
    
### `max_image_pixels`

Maximum number of pixels that will be thumbnailed. Defaults to 32M.
    
    Example configuration:
    ```yaml
    max_image_pixels: 35M
    ```
    ---
    
    ### `prevent_media_downloads_from`
    
    A list of domains to never download media from. Media from these
    domains that is already downloaded will not be deleted, but will be
    inaccessible to users. This option does not affect admin APIs trying
    to download/operate on media.
    
    This will not prevent the listed domains from accessing media themselves.
    It simply prevents users on this server from downloading media originating
    from the listed servers.
    
    This will have no effect on media originating from the local server.
    This only affects media downloaded from other Matrix servers, to
    block domains from URL previews see [`url_preview_url_blacklist`](#url_preview_url_blacklist).
    
    Defaults to an empty list (nothing blocked).
    
    Example configuration:
    ```yaml
    prevent_media_downloads_from:
      - evil.example.org
      - evil2.example.org
    ```
    ---
    
    
### `dynamic_thumbnails`

Whether to generate new thumbnails on the fly to precisely match
the resolution requested by the client. If true, then whenever
a new resolution is requested by the client the server will
generate a new thumbnail. If false, the server will pick a thumbnail
from a precalculated list. Defaults to false.
    
    
    Example configuration:
    ```yaml
    dynamic_thumbnails: true
    ```
    ---
    
    
### `thumbnail_sizes`

List of thumbnails to precalculate when an image is uploaded. Associated sub-options are:
* `width`
* `height`
* `method`: e.g. `crop` or `scale`.
    
    Example configuration:
    ```yaml
    thumbnail_sizes:
      - width: 32
        height: 32
        method: crop
      - width: 96
        height: 96
        method: crop
      - width: 320
        height: 240
        method: scale
      - width: 640
        height: 480
        method: scale
      - width: 800
        height: 600
        method: scale
    ```
    
    
---
### `media_retention`

Controls whether local media and entries in the remote media cache
(media that is downloaded from other homeservers) should be removed
under certain conditions, typically for the purpose of saving space.

Purging media files will be carried out by the media worker
(that is, the worker that has the `enable_media_repo` homeserver config
option set to 'true'). This may be the main process.
    
    The `media_retention.local_media_lifetime` and
    `media_retention.remote_media_lifetime` config options control whether
    media will be purged if it has not been accessed in a given amount of
    time. Note that media is 'accessed' when loaded in a room in a client, or
    otherwise downloaded by a local or remote user. If the media has never
    been accessed, the media's creation time is used instead. Both thumbnails
    and the original media will be removed. If either of these options are unset,
    then media of that type will not be purged.
    
    
    Local or cached remote media that has been
    [quarantined](../../admin_api/media_admin_api.md#quarantining-media-in-a-room)
    will not be deleted. Similarly, local media that has been marked as
    [protected from quarantine](../../admin_api/media_admin_api.md#protecting-media-from-being-quarantined)
    will not be deleted.
    
    
    Example configuration:
    ```yaml
    media_retention:
        local_media_lifetime: 90d
        remote_media_lifetime: 14d
    ```
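The purge-eligibility rule described above can be sketched as follows (a hedged illustration, not Synapse's actual code; the function name is invented):

```python
from datetime import datetime, timedelta

def should_purge(last_access, created, lifetime, now):
    """Media is purge-eligible once it has gone unaccessed for longer
    than `lifetime`; media that was never accessed falls back to its
    creation time (per the behaviour documented above)."""
    reference = last_access if last_access is not None else created
    return now - reference > lifetime
```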
    ---
    
    
### `url_preview_enabled`

This setting determines whether the preview URL API is enabled.
It is disabled by default. Set to true to enable. If enabled you must specify a
`url_preview_ip_range_blacklist` blacklist.
    
    Example configuration:
    ```yaml
    url_preview_enabled: true
    ```
    ---
    
    
### `url_preview_ip_range_blacklist`

List of IP address CIDR ranges that the URL preview spider is denied
    from accessing.  There are no defaults: you must explicitly
    specify a list for URL previewing to work.  You should specify any
    internal services in your network that you do not want synapse to try
    to connect to, otherwise anyone in any Matrix room could cause your
    synapse to issue arbitrary GET requests to your internal services,
    causing serious security issues.
    
    (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
    listed here, since they correspond to unroutable addresses.)
    
    This must be specified if `url_preview_enabled` is set. It is recommended that
    you use the following example list as a starting point.
    
    Note: The value is ignored when an HTTP proxy is in use.
    
    Example configuration:
    ```yaml
    url_preview_ip_range_blacklist:
      - '127.0.0.0/8'
      - '10.0.0.0/8'
      - '172.16.0.0/12'
      - '192.168.0.0/16'
      - '100.64.0.0/10'
      - '192.0.0.0/24'
      - '169.254.0.0/16'
      - '192.88.99.0/24'
      - '198.18.0.0/15'
      - '192.0.2.0/24'
      - '198.51.100.0/24'
      - '203.0.113.0/24'
      - '224.0.0.0/4'
      - '::1/128'
      - 'fe80::/10'
      - 'fc00::/7'
      - '2001:db8::/32'
      - 'ff00::/8'
      - 'fec0::/10'
    ```
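The kind of check the preview spider must apply can be sketched with the standard library's `ipaddress` module (an illustrative sketch, not Synapse's actual implementation): refuse to fetch a URL whose resolved address falls inside a denied range.

```python
import ipaddress

# A small subset of the ranges from the example list above.
DENIED_RANGES = [ipaddress.ip_network(n) for n in (
    "127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "::1/128",
)]

def is_denied(address):
    """True if the resolved IP falls in any blacklisted CIDR range."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in DENIED_RANGES)
```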