This adds a new admin endpoint `/_dendrite/admin/purgeRoom/{roomID}`. It
completely erases all database entries for a given room ID.
The roomserver starts by clearing all data for that room and then
generates an output event to notify downstream components (i.e. the
sync API and federation API) to do the same.
It does not currently clear media, and it is not yet implemented for
SQLite, as the current implementation relies on SQL array operations.
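For illustration, a minimal Go client for the new endpoint might look like the sketch below. The HTTP method (POST) and the admin access token auth are assumptions here, not something stated in this changelog; check the routing code for the exact contract.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	roomID := "!someroom:example.com"
	// Assumption: the endpoint is reachable on the client API listener.
	endpoint := "https://localhost:8448/_dendrite/admin/purgeRoom/" + url.PathEscape(roomID)

	// Assumption: POST, authenticated with a server admin's access token.
	req, err := http.NewRequest(http.MethodPost, endpoint, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer ADMIN_ACCESS_TOKEN")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("purge status:", resp.Status)
}
```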
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
Co-authored-by: Till Faelligen <2353100+S7evinK@users.noreply.github.com>
Since #2849 there has been no limit on the amount of current state we
fetch to calculate history visibility. In large rooms this can cause us
to fetch thousands of membership events we don't care about.
This now fetches only the state event types and senders present in our
timeline, which should significantly reduce the number of events we
fetch from the database.
Also removes `MaxTopologicalPosition`, as it is an unnecessary DB call,
given we use the result in `topological_position < $1` calls.
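A hedged sketch of the filtering idea, using hypothetical types and names (`timelineEvent`, `stateFilterForTimeline`) rather than Dendrite's actual ones: collect only the distinct state event types and senders that appear in the timeline, then query the database for just those.

```go
package main

// timelineEvent is an illustrative stand-in for a timeline event.
type timelineEvent struct {
	Type   string // e.g. "m.room.member"
	Sender string // e.g. "@alice:example.com"
}

// stateFilterForTimeline collects the distinct event types and senders
// seen in the timeline, to be used as a narrow state query filter.
func stateFilterForTimeline(events []timelineEvent) (types, senders []string) {
	seenTypes := make(map[string]struct{})
	seenSenders := make(map[string]struct{})
	for _, ev := range events {
		if _, ok := seenTypes[ev.Type]; !ok {
			seenTypes[ev.Type] = struct{}{}
			types = append(types, ev.Type)
		}
		if _, ok := seenSenders[ev.Sender]; !ok {
			seenSenders[ev.Sender] = struct{}{}
			senders = append(senders, ev.Sender)
		}
	}
	return types, senders
}
```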
Enables us to use `test.WithAllDatabases` when testing internal HTTP
APIs; without this, Prometheus would complain about already-registered
metric names.
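One common pattern for this, shown here as a sketch rather than Dendrite's actual fix: register metrics against a fresh `prometheus.Registry` per component instead of the global default registry, so repeated construction across test runs cannot collide.

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// newTestRegistry returns a registry local to one test/component, so the
// same metric name can be registered again in the next test run.
func newTestRegistry() *prometheus.Registry {
	reg := prometheus.NewRegistry()
	requests := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_requests_total", // illustrative metric name
		Help: "Example counter registered per test.",
	})
	reg.MustRegister(requests) // safe: reg is fresh every time
	return reg
}
```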
Adds wakeup broadcast handling to the pinecone demos.
This resets the peer's blacklist status and interrupts any federation
queue backoffs currently in progress for that peer.
The end result is that queued events are sent to the peer promptly if
it had disconnected while we were attempting to send events to it.
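A rough sketch of that wakeup flow, with illustrative types and no locking (the real queues would need synchronisation):

```go
package main

// destinationQueue is an illustrative stand-in for a per-peer queue.
type destinationQueue struct {
	interrupt chan struct{} // signalled to cut a backoff short
}

type queues struct {
	blacklist map[string]bool              // peers we have given up on
	byServer  map[string]*destinationQueue // one queue per peer
}

// onWakeup handles a wakeup broadcast from a peer: clear its blacklist
// entry and interrupt any backoff in progress so queued events flow again.
func (q *queues) onWakeup(serverName string) {
	delete(q.blacklist, serverName) // reset blacklist status
	if dq, ok := q.byServer[serverName]; ok {
		select {
		case dq.interrupt <- struct{}{}: // wake a queue waiting in backoff
		default: // no backoff in progress; nothing to do
		}
	}
}
```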
Adds support for `PUT
/_matrix/client/v3/directory/list/appservice/{networkId}/{roomId}` and
`DELETE
/_matrix/client/v3/directory/list/appservice/{networkId}/{roomId}`, as
well as the ability to filter `/publicRooms` by network ID or to
include all networks.
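As a usage illustration, based on the shape of this API in the Matrix spec rather than on anything in this changelog: setting the visibility of an appservice-published room might look like the sketch below, with the `{"visibility": ...}` body and bearer auth taken from the spec. For `/publicRooms`, the spec similarly accepts a `third_party_instance_id` to filter by network, or `include_all_networks: true`.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// setAppserviceRoomVisibility publishes a room in an appservice's room
// directory. hs, token, networkID and roomID are caller-supplied.
func setAppserviceRoomVisibility(hs, token, networkID, roomID string) error {
	u := hs + "/_matrix/client/v3/directory/list/appservice/" +
		url.PathEscape(networkID) + "/" + url.PathEscape(roomID)
	body := strings.NewReader(`{"visibility":"public"}`)

	req, err := http.NewRequest(http.MethodPut, u, body)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}
```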
This is a refactor of the federation destination queues.
It fixes a few things, namely:
- actually retry outgoing events with backoff behaviour
- obtain enough events from the database to fill messages as much as
possible
- minimize the amount of running goroutines
- use pure timers for backoff
- don't restart queue unless necessary
- close the background task when backing off
- increase the maximum number of EDUs in a transaction to match the spec
- clean up timers more aggressively to reduce memory usage
- add jitter to backoff timers to reduce resource spikes (see the sketch
after this list)
- add a bunch of tests (with real and fake databases) to ensure
everything is working
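To make the timer-related points concrete, here is a minimal sketch of an interruptible exponential backoff with jitter. The constants, cap and channel wiring are illustrative assumptions, not Dendrite's actual values:

```go
package main

import (
	"math/rand"
	"time"
)

// backoffDuration returns an exponentially growing wait with up to 25%
// jitter added, so many peers don't all retry at the same instant.
func backoffDuration(attempt int) time.Duration {
	if attempt > 12 {
		attempt = 12 // cap the exponent (illustrative limit)
	}
	base := time.Second << attempt // 1s, 2s, 4s, ...
	jitter := time.Duration(rand.Int63n(int64(base)/4 + 1))
	return base + jitter
}

// waitBackoff blocks until the backoff expires or until interrupted,
// e.g. because the peer reconnected. The timer is always stopped so it
// doesn't linger in memory.
func waitBackoff(attempt int, interrupt <-chan struct{}) {
	timer := time.NewTimer(backoffDuration(attempt))
	defer timer.Stop()
	select {
	case <-timer.C: // backoff elapsed, retry sending
	case <-interrupt: // woken early, retry immediately
	}
}
```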
This fixes some edge cases where federation queue backoffs and
blacklisting weren't behaving as expected.
It also adds new tests for the federation queues to ensure their
behaviour continues to work correctly.
This ensures that the joined hosts in the federation API are correct
after the state is rewritten. This might fix some races around the time
of joining federated rooms.
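For context, "joined hosts" here means roughly the set of server names derived from `m.room.member` join events in the room state; a hedged illustration with made-up types:

```go
package main

import "strings"

// memberEvent is an illustrative stand-in for an m.room.member event.
type memberEvent struct {
	StateKey   string // the member's user ID, e.g. "@alice:example.com"
	Membership string // "join", "leave", "ban", ...
}

// joinedHosts recomputes the set of servers with at least one joined
// user from a (rewritten) set of membership state events.
func joinedHosts(state []memberEvent) map[string]struct{} {
	hosts := make(map[string]struct{})
	for _, ev := range state {
		if ev.Membership != "join" {
			continue
		}
		if i := strings.IndexByte(ev.StateKey, ':'); i >= 0 {
			hosts[ev.StateKey[i+1:]] = struct{}{} // server name part
		}
	}
	return hosts
}
```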
If the private key file is lost, it is often possible to retrieve the
public key from another server elsewhere, so we should make it possible
to configure the public key directly in that case.
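A sketch of what consuming such a configured public key could look like; the config field names are hypothetical, but Matrix serves ed25519 keys as unpadded base64, which is what is decoded here:

```go
package main

import (
	"crypto/ed25519"
	"encoding/base64"
)

// oldVerifyKey is a hypothetical config entry for a key whose private
// half is lost but whose public half is known from another server.
type oldVerifyKey struct {
	KeyID     string // e.g. "ed25519:abc123"
	PublicKey string // unpadded base64, as served by /_matrix/key/v2/server
}

// parsePublicKey decodes the configured public key into a usable form.
func parsePublicKey(cfg oldVerifyKey) (ed25519.PublicKey, error) {
	raw, err := base64.RawStdEncoding.DecodeString(cfg.PublicKey)
	if err != nil {
		return nil, err
	}
	return ed25519.PublicKey(raw), nil
}
```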
Some tweaks for the send-to-device consumers/producers:
- use `json.RawMessage` without marshalling it first
- try further devices (if available) if we failed to `PublishMsg` in the
producers
- some logging changes (to better debug E2EE issues)
We were `json.Unmarshal`ing the EDU and then `json.Marshal`ing it again
right before sending it to the stream. Those redundant steps are now
removed; the consumer performs a single `json.Unmarshal`.
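To illustrate the `json.RawMessage` point (with illustrative types, not Dendrite's actual ones): the producer embeds the already-encoded content verbatim, so only the consumer ever unmarshals it.

```go
package main

import "encoding/json"

// sendToDeviceEvent is an illustrative stand-in for the EDU envelope.
type sendToDeviceEvent struct {
	Sender  string          `json:"sender"`
	Type    string          `json:"type"`
	Content json.RawMessage `json:"content"` // passed through untouched
}

// produce wraps raw, already-encoded content without the old
// Unmarshal/Marshal round trip.
func produce(sender, evType string, raw json.RawMessage) ([]byte, error) {
	return json.Marshal(sendToDeviceEvent{
		Sender:  sender,
		Type:    evType,
		Content: raw, // embedded verbatim into the stream message
	})
}
```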