# Code Style

We follow the standard Go style using goimports, but with a few extra
considerations.
## Linters

We use `golangci-lint` to run a number of linters; the exact list can be found
under `linters` in [.golangci.yml](.golangci.yml).
[Installation](https://github.com/golangci/golangci-lint#install) and [editor
integration](https://github.com/golangci/golangci-lint#editor-integration)
instructions can be found in the readme of golangci-lint.

For rare cases where a linter gives a spurious warning, it can be disabled
for that line or statement using a [comment
directive](https://github.com/golangci/golangci-lint#nolint), e.g. `var
bad_name int //nolint:golint,unused`. This should be used sparingly and only
when it's clear that the lint warning is spurious.

The linters can be run using [scripts/find-lint.sh](scripts/find-lint.sh)
(see file for docs) or as part of a build/test/lint cycle using
[scripts/build-test-lint.sh](scripts/build-test-lint.sh).
## HTTP Error Handling

Unfortunately, converting errors into HTTP responses with the correct status
code and message can be done in a number of ways in Go:

1. Have functions return `JSONResponse` directly, which can be set either to
   an error response or to a `200 OK`.
2. Have the HTTP handler try to cast error values to types that are handled
   differently.
3. Have the HTTP handler call functions whose errors can only be interpreted
   one way; for example, if a `validate(...)` call returns an error then the
   handler knows to respond with a `400 Bad Request`.

We attempt to always use option #3, as it fits most naturally with the way
that Go generally does error handling. In particular, option #1 effectively
requires reinventing a new error-handling scheme just for HTTP handlers.
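As a sketch of option #3 (illustrative only, with hypothetical names such as
`validate` and `sendHandler` - not Dendrite's actual handlers), the handler
calls a function whose error can only mean one thing, so the handler alone
decides the status code:

```go
// Sketch of option #3: validate's error can only mean "bad request",
// so the HTTP handler maps it to 400 without any custom error types.
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// validate returns a non-nil error only when the request input is invalid.
func validate(roomID string) error {
	if roomID == "" {
		return errors.New("missing room ID")
	}
	return nil
}

// sendHandler interprets any validate error as a 400 Bad Request.
func sendHandler(w http.ResponseWriter, r *http.Request) {
	if err := validate(r.URL.Query().Get("room_id")); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	fmt.Fprintln(w, "OK")
}

func main() {
	http.HandleFunc("/send", sendHandler)
	// http.ListenAndServe(":8008", nil) // start the server as needed
	fmt.Println("invalid request rejected:", validate("") != nil)
}
```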
## Line length

We strive for a line length of roughly 80 characters, though less than 100 is
acceptable if necessary. Longer lines are fine if there is nothing of interest
after the first 80-100 characters (e.g. long string literals).
## TODOs and FIXMEs

The majority of TODOs and FIXMEs should have an associated tracking issue on
GitHub. These can be added just before merging the PR to master, and the
issue number should be added to the comment, e.g. `// TODO(#324): ...`
## Logging

We generally prefer to log with static log messages and include any dynamic
information in fields.

```golang
logger := util.GetLogger(ctx)

// Not recommended
logger.Infof("Finished processing keys for %s, number of keys %d", name, numKeys)

// Recommended
logger.WithFields(logrus.Fields{
	"numberOfKeys": numKeys,
	"entityName":   name,
}).Info("Finished processing keys")
```

This is useful when logging to systems that natively understand log fields, as
it allows people to search and process the fields without having to parse the
log message.
## Visual Studio Code

If you use VSCode then the following is an example of a workspace setting that
sets up linting correctly:

```json
{
    "go.lintTool": "golangci-lint",
    "go.lintFlags": [
        "--fast"
    ]
}
```
# Contributing to Dendrite

Everyone is welcome to contribute to Dendrite! We aim to make it as easy as
possible to get started.

Please ensure that you sign off your contributions! See the [Sign
Off](#sign-off) section below.
## Getting up and running

See [INSTALL.md](INSTALL.md) for instructions on setting up a running dev
instance of Dendrite, and [CODE_STYLE.md](CODE_STYLE.md) for the code style
guide.

As of May 2019 we are no longer using `gb`, the tool we had previously used
for managing our dependencies; we now use Go modules. To build Dendrite, run
the `build.sh` script at the root of this repository (which runs `go install`
under the hood), and to run unit tests, run `go test ./...` (which should pick
up and run every unit test). There are also [scripts](scripts) for
[linting](scripts/find-lint.sh) and doing a [build/test/lint
run](scripts/build-test-lint.sh).

As of February 2020, we are deprecating support for Go 1.11 and Go 1.12 and
are now targeting Go 1.13 or later. Please ensure that you are using at least
Go 1.13 when developing for Dendrite - our CI will lint and run tests against
this version.
## Continuous Integration

When a Pull Request is submitted, continuous integration jobs are run
automatically to ensure the code builds and is relatively well-written. The
jobs are run on [Buildkite](https://buildkite.com/matrix-dot-org/dendrite/),
and the Buildkite pipeline configuration can be found in Matrix.org's
[pipelines repository](https://github.com/matrix-org/pipelines).

If a job fails, click the "details" button and you should be taken to the
job's logs.



Scroll down to the failing step and you should see some log output. Scan the
logs until you find what it's complaining about, fix it, submit a new commit,
then rinse and repeat until CI passes.
### Running CI Tests Locally

To save waiting for CI to finish after every commit, it is best to run the
checks locally before pushing and to fix any errors first. This also saves
other people time, as only so many PRs can be tested at a given time.

To run what Buildkite tests, first run `./scripts/build-test-lint.sh`; this
script will build the code, lint it, and run `go test ./...` with race
condition checking enabled. If something needs to be changed, fix it and then
run the script again until it no longer complains. Be warned that the linting
can take a significant amount of CPU and RAM.

Once the code builds, run [Sytest](https://github.com/matrix-org/sytest)
according to the guide in
[docs/sytest.md](https://github.com/matrix-org/dendrite/blob/master/docs/sytest.md#using-a-sytest-docker-image)
so you can see whether something is being broken and whether there are newly
passing tests.

If these two steps report no problems, the code should be able to pass the CI
tests.
## Picking Things To Do

If you're new then feel free to pick up an issue labelled [good first
issue](https://github.com/matrix-org/dendrite/labels/good%20first%20issue).
These should be well-contained, small pieces of work that can be picked up to
help you get familiar with the code base.

Once you're comfortable with hacking on Dendrite there are issues labelled as
[help wanted](https://github.com/matrix-org/dendrite/labels/help%20wanted);
these are often slightly larger or more complicated pieces of work but are
hopefully nonetheless fairly well-contained.

We ask people who are familiar with Dendrite to leave the [good first
issue](https://github.com/matrix-org/dendrite/labels/good%20first%20issue)
issues so that there is always a way for new people to come and get involved.
## Getting Help

For questions related to developing on Dendrite we have a dedicated room on
Matrix, [#dendrite-dev:matrix.org](https://matrix.to/#/#dendrite-dev:matrix.org),
where we're happy to help.

For more general questions, please use
[#dendrite:matrix.org](https://matrix.to/#/#dendrite:matrix.org).
## Sign off

We ask that everyone who contributes to the project signs off their
contributions, in accordance with the
[DCO](https://github.com/matrix-org/matrix-doc/blob/master/CONTRIBUTING.rst#sign-off).
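In practice, `git commit -s` adds the required `Signed-off-by` trailer for you,
built from your configured `user.name` and `user.email`. The snippet below
demonstrates this in a throwaway repository (the names and message are just
examples):

```shell
# Demonstrate DCO sign-off in a throwaway repository: `git commit -s`
# appends the Signed-off-by trailer from your git user.name/user.email.
cd "$(mktemp -d)"
git init -q .
git config user.name "Your Name"
git config user.email "you@example.org"
echo change > file.txt
git add file.txt
git commit -q -s -m "Describe the change"

# Show the commit message; it ends with the trailer sign-off added:
#   Signed-off-by: Your Name <you@example.org>
git log -1 --format=%B
```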
# Design

## Log Based Architecture

### Decomposition and Decoupling
A Matrix homeserver can be built around append-only event logs built from the
messages, receipts, presence, typing notifications, device messages and other
events sent by users on the homeserver or by other homeservers.

The server then decomposes into two categories: writers that add new
entries to the logs and readers that read those entries.

The event logs serve to decouple the two components; the writers and
readers need only agree on the format of the entries in the event log.
This format could be largely derived from the wire format of the events used
in the client and federation protocols:
```
  C-S API  +---------+   Event Log    +---------+  C-S API
---------> |         |+ (e.g. kafka)  |         |+ --------->
           | Writers || =============>| Readers ||
---------> |         ||               |         || --------->
  S-S API  +---------+|               +---------+|  S-S API
            +---------+                +---------+
```
However the way Matrix handles state events in a room creates a few
complications for this model:

1) Writers require the room state at an event to check if it is allowed.
2) Readers require the room state at an event to determine the users and
   servers that are allowed to see the event.
3) A client can query the current state of the room from a reader.
The writers and readers cannot extract the necessary information directly from
the event logs, because it would take too long to do so: the state is built up
by collecting individual state events from the event history.

The writers and readers therefore need access to something that stores copies
of the event state in a form that can be efficiently queried. One possibility
would be for the readers and writers to maintain copies of the current state
in local databases. A second possibility would be to add a dedicated component
that maintains the state of the room and exposes an API that the readers and
writers can query to get the state. The second has the advantage that the
state is calculated and stored in a single location.
```
  C-S API  +---------+   Log    +--------+   Log   +---------+  C-S API
---------> |         |+ ======> |        | ======> |         |+ --------->
           | Writers ||         |  Room  |         | Readers ||
---------> |         || <------ | Server | ------> |         || --------->
  S-S API  +---------+|  Query  |        |  Query  +---------+|  S-S API
            +---------+         +--------+          +---------+
```
The room server can annotate the events it logs to the readers with room state
so that the readers can avoid querying the room server unnecessarily.

[This architecture can be extended to cover most of the APIs.](WIRING.md)
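The decoupling described above can be sketched in Go. This is a toy model with
assumed names (`Entry`, `write`, `read`), not Dendrite's actual types: both
sides agree only on the log entry format, and the log itself keeps them apart.

```go
// Toy sketch of the writer/reader split: both sides depend only on Entry.
package main

import "fmt"

// Entry is the only thing writers and readers must agree on, mirroring how
// the log entry format is derived from the events' wire format.
type Entry struct {
	RoomID string
	JSON   string // the event's wire-format JSON
}

// write appends an entry to the log; the writer knows nothing about readers.
func write(log chan<- Entry, e Entry) { log <- e }

// read consumes entries from the log; the reader knows nothing about writers.
func read(log <-chan Entry) []Entry {
	var out []Entry
	for e := range log {
		out = append(out, e)
	}
	return out
}

func main() {
	log := make(chan Entry, 16) // stands in for a Kafka topic
	write(log, Entry{RoomID: "!a:example.org", JSON: `{"type":"m.room.message"}`})
	close(log)
	for _, e := range read(log) {
		fmt.Println(e.RoomID, e.JSON)
	}
}
```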
## How things are supposed to work

### Local client sends an event in an existing room

0) The client sends a `PUT /_matrix/client/r0/rooms/{roomId}/send` request
   and an HTTP load balancer routes the request to a ClientAPI.

1) The ClientAPI:
   * Authenticates the local user using the `access_token` sent in the HTTP
     request.
   * Checks if it has already processed or is processing a request with the
     same `txnID`.
   * Calculates which state events are needed to auth the request.
   * Queries the necessary state events and the latest events in the room
     from the RoomServer.
   * Confirms that the room exists and checks whether the event is allowed by
     the auth checks.
   * Builds and signs the event.
   * Writes the event to an "InputRoomEvent" kafka topic.
   * Sends a `200 OK` response to the client.
2) The RoomServer reads the event from the "InputRoomEvent" kafka topic:

   * Checks if it already has a copy of the event.
   * Checks if the event is allowed by the auth checks using the auth events
     at the event.
   * Calculates the room state at the event.
   * Works out what the latest events in the room after processing this event
     are.
   * Calculates how the changes in the latest events affect the current state
     of the room.
   * TODO: Works out what events determine the visibility of this event to
     other users.
   * Writes the event along with the changes in current state to an
     "OutputRoomEvent" kafka topic. It writes all the events for a room to
     the same kafka partition.
3a) The ClientSync reads the event from the "OutputRoomEvent" kafka topic:

   * Updates its copy of the current state for the room.
   * Works out which users need to be notified about the event.
   * Wakes up any pending `/_matrix/client/r0/sync` requests for those users.
   * Adds the event to the recent timeline events for the room.

3b) The FederationSender reads the event from the "OutputRoomEvent" kafka
    topic:

   * Updates its copy of the current state for the room.
   * Works out which remote servers need to be notified about the event.
   * Sends a `/_matrix/federation/v1/send` request to those servers, or if
     there is a request in progress then adds the event to a queue to be
     sent when the previous request finishes.
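One common way to satisfy the "all events for a room go to the same kafka
partition" property from step 2 is to derive the partition number from the
room ID, so per-room ordering is preserved. This is an illustration of the
idea only (the `partitionFor` helper is hypothetical, not Dendrite's code):

```go
// Illustration: hashing the room ID to pick a partition keeps every event
// for a given room on one partition, preserving per-room ordering.
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor deterministically maps a room ID to one of numPartitions.
func partitionFor(roomID string, numPartitions uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(roomID))
	return h.Sum32() % numPartitions
}

func main() {
	// The same room always maps to the same partition.
	a := partitionFor("!abc:example.org", 8)
	b := partitionFor("!abc:example.org", 8)
	fmt.Println("stable partition:", a == b)
}
```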
### Remote server sends an event in an existing room

0) The remote server sends a `PUT /_matrix/federation/v1/send` request and an
   HTTP load balancer routes the request to a FederationReceiver.

1) The FederationReceiver:
   * Authenticates the remote server using the "X-Matrix" authorisation
     header.
   * Checks if it has already processed or is processing a request with the
     same `txnID`.
   * Checks the signatures for the events, fetching the ed25519 keys for the
     event senders if necessary.
   * Queries the RoomServer for a copy of the state of the room at each event.
   * If the RoomServer doesn't know the state of the room at an event, queries
     the state of the room at the event from the remote server using
     `GET /_matrix/federation/v1/state_ids`, falling back to
     `GET /_matrix/federation/v1/state` if necessary.
   * Once the state at each event is known, checks whether the events are
     allowed by the auth checks against the state at each event.
   * For each event that is allowed, writes the event to the "InputRoomEvent"
     kafka topic.
   * Sends a `200 OK` response to the remote server listing which events were
     successfully processed and which events failed.
2) The RoomServer processes the event the same as it would a local event.

3a) The ClientSync processes the event the same as it would a local event.
# Installing Dendrite

Dendrite can be run in one of two configurations:

* **Polylith mode**: A cluster of individual components, dealing with different
  aspects of the Matrix protocol (see [WIRING.md](./WIRING.md)). Components
  communicate with each other using internal HTTP APIs and [Apache
  Kafka](https://kafka.apache.org). This will almost certainly be the preferred
  model for large-scale deployments.

* **Monolith mode**: All components run in the same process. In this mode,
  Kafka is completely optional and can instead be replaced with an in-process
  lightweight implementation called
  [Naffka](https://github.com/matrix-org/naffka). This will usually be the
  preferred model for low-volume, low-user or experimental deployments.

Regardless of whether you are running in polylith or monolith mode, each
Dendrite component that requires storage has its own database. Both Postgres
and SQLite are supported and can be mixed and matched across components as
needed in the configuration file.

Be advised that Dendrite is still in development and is not yet recommended
for use in production environments!
## Requirements

* Go 1.13+
* Postgres 9.5+ (if using Postgres databases)
* Apache Kafka 0.10.2+ (optional if using the monolith server):
  * UNIX-based system ([read more here](https://kafka.apache.org/documentation/#os))
  * JDK 1.8+ / OpenJDK 1.8+
  * See [scripts/install-local-kafka.sh](scripts/install-local-kafka.sh) for up-to-date version numbers
## Building a monolith deployment

Start by cloning the code:

```bash
git clone https://github.com/matrix-org/dendrite
cd dendrite
```

Then build it:

```bash
./build.sh
```
## Building a polylith deployment

Start by cloning the code:

```bash
git clone https://github.com/matrix-org/dendrite
cd dendrite
```

Then build it:

```bash
./build.sh
```

Install and start Kafka (c.f. [scripts/install-local-kafka.sh](scripts/install-local-kafka.sh)):

```bash
KAFKA_URL=http://archive.apache.org/dist/kafka/2.1.0/kafka_2.11-2.1.0.tgz

# Only download Kafka if it isn't already downloaded.
test -f kafka.tgz || wget $KAFKA_URL -O kafka.tgz
# Unpack Kafka over the top of any existing installation.
mkdir -p kafka && tar xzf kafka.tgz -C kafka --strip-components 1

# Start ZooKeeper running in the background.
# By default ZooKeeper listens on localhost:2181.
kafka/bin/zookeeper-server-start.sh -daemon kafka/config/zookeeper.properties

# Start the Kafka server running in the background.
# By default Kafka listens on localhost:9092.
kafka/bin/kafka-server-start.sh -daemon kafka/config/server.properties
```

On macOS, you can use [Homebrew](https://brew.sh/) for easier setup of Kafka:

```bash
brew install kafka
brew services start zookeeper
brew services start kafka
```
## Configuration

### SQLite database setup

Dendrite can use the built-in SQLite database engine for small setups.
The SQLite databases do not need to be preconfigured - Dendrite will
create them automatically at startup.
### Postgres database setup

Assuming that Postgres 9.5 (or later) is installed:

* Create the role, choosing a new password when prompted:

  ```bash
  sudo -u postgres createuser -P dendrite
  ```

* Create the component databases:

  ```bash
  for i in account device mediaapi syncapi roomserver serverkey federationsender publicroomsapi appservice naffka; do
      sudo -u postgres createdb -O dendrite dendrite_$i
  done
  ```

(On macOS, omit `sudo -u postgres` from the above commands.)
### Server key generation

Each Dendrite server requires unique server keys.

Generate the self-signed SSL certificate for federation:

```bash
test -f server.key || openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.crt -days 3650 -nodes -subj /CN=localhost
```

Generate the server signing key:

```bash
test -f matrix_key.pem || ./bin/generate-keys -private-key matrix_key.pem
```
### Configuration file

Create a config file based on `dendrite-config.yaml` and call it
`dendrite.yaml`. Things that will need editing include *at least*:

* The `server_name` entry, to reflect the hostname of your Dendrite server.
* The `database` lines, with an updated connection string based on your
  desired setup, e.g. replacing `component` with the name of the component:
  * For Postgres: `postgres://dendrite:password@localhost/component`
  * For SQLite on disk: `file:component.db` or `file:///path/to/component.db`
  * Postgres and SQLite can be mixed and matched.
* The `use_naffka` option, if using Naffka in a monolith deployment.

There are other options which may be useful, so review them all. In
particular, if you are trying to federate from your Dendrite instance into
public rooms then configuring `key_perspectives` (like `matrix.org` in the
sample) can improve reliability considerably by allowing your homeserver to
fetch public keys for dead homeservers from somewhere else.
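To make the shape of those edits concrete, here is an illustrative fragment
only: the key names (`server_name`, `database`, `use_naffka`) and connection
string formats come from the list above, but the exact nesting is an
assumption, so copy `dendrite-config.yaml` rather than this sketch:

```yaml
# Illustrative sketch only - the real layout is in dendrite-config.yaml.
server_name: "example.com"        # hostname of your Dendrite server
database:
  account: "postgres://dendrite:password@localhost/dendrite_account"
  mediaapi: "file:mediaapi.db"    # Postgres and SQLite can be mixed
use_naffka: true                  # only for monolith deployments
```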
## Starting a monolith server

It is possible to use Naffka as an in-process replacement for Kafka when using
the monolith server. To do this, set `use_naffka: true` in your
`dendrite.yaml` configuration and uncomment the relevant Naffka line in the
`database` section. Be sure to update the database username and password if
needed.

The monolith server can be started as shown below. By default it listens for
HTTP connections on port 8008, so you can configure your Matrix client to use
`http://localhost:8008` as the server. If you set `--tls-cert` and `--tls-key`
as shown below, it will also listen for HTTPS connections on port 8448.

```bash
./bin/dendrite-monolith-server --tls-cert=server.crt --tls-key=server.key
```
## Starting a polylith deployment

The following sections contain the commands which will run all the required
processes in order to point a Matrix client at Dendrite. Conceptually, you are
wiring the components together to form the following diagram:

```
/media +---------------------------+
+----------->+------------->| dendrite-media-api-server |
^ ^ +---------------------------+
| | :7774
| |
| |
| | /directory +----------------------------------+
| | +--------->| dendrite-public-rooms-api-server |<========++
| | | +----------------------------------+ ||
| | | :7775 | ||
| | | +<-----------+ ||
| | | | ||
| | | /sync +--------------------------+ ||
| | +--------->| dendrite-sync-api-server |<================++
| | | | +--------------------------+ ||
| | | | :7773 | ^^ ||
Matrix +------------------+ | | | | || client_data ||
Clients --->| client-api-proxy |-------+ +<-----------+ ++=============++ ||
+------------------+ | | | || ||
:8008 | | CS API +----------------------------+ || ||
| +--------->| dendrite-client-api-server |==++ ||
| | +----------------------------+ ||
| | :7771 | ||
| | | ||
| +<-----------+ ||
| | ||
| | ||
| | +----------------------+ room_event ||
| +---------->| dendrite-room-server |===============++
| | +----------------------+ ||
| | :7770 ||
| | ++==========================++
| +<------------+ ||
| | | VV
| | +-----------------------------------+ Matrix
| | | dendrite-federation-sender-server |------------> Servers
| | +-----------------------------------+
| | :7776
| |
+---------->+ +<-----------+
| |
Matrix +----------------------+ SS API +--------------------------------+
Servers --->| federation-api-proxy |--------->| dendrite-federation-api-server |
+----------------------+ +--------------------------------+
:8448 :7772

A --> B = HTTP requests (A = client, B = server)
A ==> B = Kafka (A = producer, B = consumer)
```
### Client proxy

This is what Matrix clients will talk to. If you use the script below, point
your client at `http://localhost:8008`.

```bash
./bin/client-api-proxy \
    --bind-address ":8008" \
    --client-api-server-url "http://localhost:7771" \
    --sync-api-server-url "http://localhost:7773" \
    --media-api-server-url "http://localhost:7774" \
    --public-rooms-api-server-url "http://localhost:7775"
```
### Federation proxy

This is what Matrix servers will talk to. It is only required if you want
to support federation.

```bash
./bin/federation-api-proxy \
    --bind-address ":8448" \
    --federation-api-url "http://localhost:7772" \
    --media-api-server-url "http://localhost:7774"
```
### Client API server

This is what implements message sending. Clients talk to this via the proxy in
order to send messages.

```bash
./bin/dendrite-client-api-server --config=dendrite.yaml
```
### Room server

This is what implements the room DAG. Clients do not talk to this.

```bash
./bin/dendrite-room-server --config=dendrite.yaml
```
### Sync server

This is what implements `/sync` requests. Clients talk to this via the proxy
in order to receive messages.

```bash
./bin/dendrite-sync-api-server --config dendrite.yaml
```
### Media server

This implements `/media` requests. Clients talk to this via the proxy in
order to upload and retrieve media.

```bash
./bin/dendrite-media-api-server --config dendrite.yaml
```
### Public room server

This implements `/directory` requests. Clients talk to this via the proxy
in order to retrieve room directory listings.

```bash
./bin/dendrite-public-rooms-api-server --config dendrite.yaml
```
### Federation API server

This implements federation requests. Servers talk to this via the proxy in
order to send transactions. It is only required if you want to support
federation.

```bash
./bin/dendrite-federation-api-server --config dendrite.yaml
```
### Federation sender

This sends events from our users to other servers. It is only required if
you want to support federation.

```bash
./bin/dendrite-federation-sender-server --config dendrite.yaml
```
### Appservice server

This sends events from the network to [application
services](https://matrix.org/docs/spec/application_service/unstable.html)
running locally. It is only required if you want to support running
application services on your homeserver.

```bash
./bin/dendrite-appservice-server --config dendrite.yaml
```
### Key server

This manages end-to-end encryption keys (or rather, it will do when it's
finished).

```bash
./bin/dendrite-key-server --config dendrite.yaml
```