dendrite/syncapi/storage/storage_test.go

package storage_test

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"reflect"
	"testing"

	"github.com/matrix-org/dendrite/setup/config"
	"github.com/matrix-org/dendrite/syncapi/storage"
	"github.com/matrix-org/dendrite/syncapi/types"
	"github.com/matrix-org/dendrite/test"
	"github.com/matrix-org/gomatrixserverlib"
)
var ctx = context.Background()

// MustCreateDatabase opens a sync API database of the requested type and
// returns it along with a cleanup function to call once the test is done.
func MustCreateDatabase(t *testing.T, dbType test.DBType) (storage.Database, func()) {
	connStr, close := test.PrepareDBConnectionString(t, dbType)
	db, err := storage.NewSyncServerDatasource(nil, &config.DatabaseOptions{
		ConnectionString: config.DataSource(connStr),
	})
	if err != nil {
		t.Fatalf("NewSyncServerDatasource returned %s", err)
	}
	return db, close
}

// MustWriteEvents writes the given events to the database, treating any event
// with a state key as a state event, and returns the stream positions the
// events were stored at.
func MustWriteEvents(t *testing.T, db storage.Database, events []*gomatrixserverlib.HeaderedEvent) (positions []types.StreamPosition) {
	for _, ev := range events {
		var addStateEvents []*gomatrixserverlib.HeaderedEvent
		var addStateEventIDs []string
		var removeStateEventIDs []string
		if ev.StateKey() != nil {
			addStateEvents = append(addStateEvents, ev)
			addStateEventIDs = append(addStateEventIDs, ev.EventID())
		}
		pos, err := db.WriteEvent(ctx, ev, addStateEvents, addStateEventIDs, removeStateEventIDs, nil, false, gomatrixserverlib.HistoryVisibilityShared)
		if err != nil {
			t.Fatalf("WriteEvent failed: %s", err)
		}
		t.Logf("Event ID %s spos=%v depth=%v", ev.EventID(), pos, ev.Depth())
		positions = append(positions, pos)
	}
	return
}

func TestWriteEvents(t *testing.T) {
	test.WithAllDatabases(t, func(t *testing.T, dbType test.DBType) {
		alice := test.NewUser(t)
		r := test.NewRoom(t, alice)
		db, close := MustCreateDatabase(t, dbType)
		defer close()
		MustWriteEvents(t, db, r.Events())
	})
}

// These tests assert basic functionality of RecentEvents for PDUs.
func TestRecentEventsPDU(t *testing.T) {
	test.WithAllDatabases(t, func(t *testing.T, dbType test.DBType) {
		db, close := MustCreateDatabase(t, dbType)
		defer close()
		alice := test.NewUser(t)

		// dummy room to make sure SQL queries are filtering on room ID
		MustWriteEvents(t, db, test.NewRoom(t, alice).Events())

		// actual test room
		r := test.NewRoom(t, alice)
		r.CreateAndInsert(t, alice, "m.room.message", map[string]interface{}{"body": "hi"})
		events := r.Events()
		positions := MustWriteEvents(t, db, events)

		// dummy room to make sure SQL queries are filtering on room ID
		MustWriteEvents(t, db, test.NewRoom(t, alice).Events())

		latest, err := db.MaxStreamPositionForPDUs(ctx)
		if err != nil {
			t.Fatalf("failed to get MaxStreamPositionForPDUs: %s", err)
		}

		testCases := []struct {
			Name         string
			From         types.StreamPosition
			To           types.StreamPosition
			Limit        int
			ReverseOrder bool
			WantEvents   []*gomatrixserverlib.HeaderedEvent
			WantLimited  bool
		}{
			// The purpose of this test is to make sure that incremental syncs include up to the latest events.
			// It's a basic sanity test that sync works. It creates a streaming position at the penultimate event
			// and makes sure the response includes the final event.
			{
				Name:        "penultimate",
				From:        positions[len(positions)-2], // pretend we are at the penultimate event
				To:          latest,
				Limit:       100,
				WantEvents:  events[len(events)-1:],
				WantLimited: false,
			},
			// The purpose of this test is to check that limits can be applied and work.
			// This is critical for big rooms, hence the test here.
			{
				Name:        "limited",
				From:        0,
				To:          latest,
				Limit:       1,
				WantEvents:  events[len(events)-1:],
				WantLimited: true,
			},
			// The purpose of this test is to check that we can return every event with a
			// high enough limit.
			{
				Name:        "large limited",
				From:        0,
				To:          latest,
				Limit:       100,
				WantEvents:  events,
				WantLimited: false,
			},
			// The purpose of this test is to check that we can return events in reverse order.
			{
				Name:         "reverse",
				From:         positions[len(positions)-3], // 2 events back
				To:           latest,
				Limit:        100,
				ReverseOrder: true,
				WantEvents:   test.Reversed(events[len(events)-2:]),
				WantLimited:  false,
			},
		}

		for i := range testCases {
			tc := testCases[i]
			t.Run(tc.Name, func(st *testing.T) {
				var filter gomatrixserverlib.RoomEventFilter
				filter.Limit = tc.Limit
				gotEvents, limited, err := db.RecentEvents(ctx, r.ID, types.Range{
					From: tc.From,
					To:   tc.To,
				}, &filter, !tc.ReverseOrder, true)
				if err != nil {
					st.Fatalf("failed to do sync: %s", err)
				}
				if limited != tc.WantLimited {
					st.Errorf("got limited=%v want %v", limited, tc.WantLimited)
				}
				if len(gotEvents) != len(tc.WantEvents) {
					st.Errorf("got %d events, want %d", len(gotEvents), len(tc.WantEvents))
				}
				for j := range gotEvents {
					if !reflect.DeepEqual(gotEvents[j].JSON(), tc.WantEvents[j].JSON()) {
						st.Errorf("event %d got %s want %s", j, string(gotEvents[j].JSON()), string(tc.WantEvents[j].JSON()))
					}
				}
			})
		}
	})
}

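// exampleIncrementalFetch is a minimal sketch (not part of the original tests)
// of the incremental-sync pattern the "penultimate" case above exercises:
// remember the last stream position delivered to a client, then ask
// RecentEvents for everything between it and the current maximum. The function
// name, the limit of 100 and the meaning attached to the two trailing booleans
// (chronological order, sync-visible events only) are illustrative assumptions.
func exampleIncrementalFetch(t *testing.T, db storage.Database, roomID string, since types.StreamPosition) {
	latest, err := db.MaxStreamPositionForPDUs(ctx)
	if err != nil {
		t.Fatalf("failed to get MaxStreamPositionForPDUs: %s", err)
	}
	var filter gomatrixserverlib.RoomEventFilter
	filter.Limit = 100 // cap the page size as a real /sync would
	events, limited, err := db.RecentEvents(ctx, roomID, types.Range{
		From: since,
		To:   latest,
	}, &filter, true, true)
	if err != nil {
		t.Fatalf("failed to do sync: %s", err)
	}
	// If limited is true the client missed events and has to backpaginate.
	t.Logf("got %d events, limited=%v", len(events), limited)
}
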
// The purpose of this test is to ensure that backfill does indeed go backwards, using a topology token.
func TestGetEventsInRangeWithTopologyToken(t *testing.T) {
	test.WithAllDatabases(t, func(t *testing.T, dbType test.DBType) {
		db, close := MustCreateDatabase(t, dbType)
		defer close()
		alice := test.NewUser(t)
		r := test.NewRoom(t, alice)
		for i := 0; i < 10; i++ {
			r.CreateAndInsert(t, alice, "m.room.message", map[string]interface{}{"body": fmt.Sprintf("hi %d", i)})
		}
		events := r.Events()
		_ = MustWriteEvents(t, db, events)

		from, err := db.MaxTopologicalPosition(ctx, r.ID)
		if err != nil {
			t.Fatalf("failed to get MaxTopologicalPosition: %s", err)
		}
		t.Logf("max topo pos = %+v", from)

		// head towards the beginning of time
		to := types.TopologyToken{}

		// backpaginate 5 messages starting at the latest position.
		filter := &gomatrixserverlib.RoomEventFilter{Limit: 5}
		paginatedEvents, err := db.GetEventsInTopologicalRange(ctx, &from, &to, r.ID, filter, true)
		if err != nil {
			t.Fatalf("GetEventsInTopologicalRange returned an error: %s", err)
		}
		gots := db.StreamEventsToEvents(nil, paginatedEvents)
		test.AssertEventsEqual(t, gots, test.Reversed(events[len(events)-5:]))
	})
}

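// examplePaginateBackwards is a minimal sketch (not part of the original
// tests) of a single backwards page over room history, as a /messages handler
// might request it: start from the newest topological position and head
// towards the beginning of time, i.e. the zero-valued TopologyToken. All calls
// mirror the test above; the page size of 10 is an arbitrary choice.
func examplePaginateBackwards(t *testing.T, db storage.Database, roomID string) {
	from, err := db.MaxTopologicalPosition(ctx, roomID)
	if err != nil {
		t.Fatalf("failed to get MaxTopologicalPosition: %s", err)
	}
	to := types.TopologyToken{} // zero token == beginning of the room
	filter := &gomatrixserverlib.RoomEventFilter{Limit: 10}
	paginatedEvents, err := db.GetEventsInTopologicalRange(ctx, &from, &to, roomID, filter, true)
	if err != nil {
		t.Fatalf("GetEventsInTopologicalRange returned an error: %s", err)
	}
	// Events come back newest-first, ready to hand to the client.
	t.Logf("fetched %d events", len(paginatedEvents))
}
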
/*
// The purpose of this test is to make sure that backpagination returns all events, even if some events have the same depth.
// For cases where events have the same depth, the streaming token should be used to tie-break, so events written via WriteEvent
// will appear FIRST when going backwards. This test creates a DAG like:
//
//	                       .-----> Message ---.
//	Create -> Membership --------> Message --------> Message
//	                       `-----> Message ---`
//	depth:    1        2              3                 4
//
// With a total depth of 4. It tests that:
// - Backpagination over the whole fork should include all messages and not leave any out.
// - Backpagination from the middle of the fork should not return duplicates (things later than the token).
func TestGetEventsInRangeWithEventsSameDepth(t *testing.T) {
	t.Parallel()
	db := MustCreateDatabase(t)

	var events []*gomatrixserverlib.HeaderedEvent
	events = append(events, MustCreateEvent(t, testRoomID, nil, &gomatrixserverlib.EventBuilder{
		Content:  []byte(fmt.Sprintf(`{"room_version":"4","creator":"%s"}`, testUserIDA)),
		Type:     "m.room.create",
		StateKey: &emptyStateKey,
		Sender:   testUserIDA,
		Depth:    int64(len(events) + 1),
	}))
	events = append(events, MustCreateEvent(t, testRoomID, []*gomatrixserverlib.HeaderedEvent{events[len(events)-1]}, &gomatrixserverlib.EventBuilder{
		Content:  []byte(`{"membership":"join"}`),
		Type:     "m.room.member",
		StateKey: &testUserIDA,
		Sender:   testUserIDA,
		Depth:    int64(len(events) + 1),
	}))
	// fork the dag into three, same prev_events and depth
	parent := []*gomatrixserverlib.HeaderedEvent{events[len(events)-1]}
	depth := int64(len(events) + 1)
	for i := 0; i < 3; i++ {
		events = append(events, MustCreateEvent(t, testRoomID, parent, &gomatrixserverlib.EventBuilder{
			Content: []byte(fmt.Sprintf(`{"body":"Message A %d"}`, i+1)),
			Type:    "m.room.message",
			Sender:  testUserIDA,
			Depth:   depth,
		}))
	}
	// merge the fork, prev_events are all 3 messages, depth is increased by 1.
	events = append(events, MustCreateEvent(t, testRoomID, events[len(events)-3:], &gomatrixserverlib.EventBuilder{
		Content: []byte(`{"body":"Message merge"}`),
		Type:    "m.room.message",
		Sender:  testUserIDA,
		Depth:   depth + 1,
	}))
	MustWriteEvents(t, db, events)

	fromLatest, err := db.EventPositionInTopology(ctx, events[len(events)-1].EventID())
	if err != nil {
		t.Fatalf("failed to get EventPositionInTopology: %s", err)
	}
	fromFork, err := db.EventPositionInTopology(ctx, events[len(events)-3].EventID()) // Message 2
	if err != nil {
		t.Fatalf("failed to get EventPositionInTopology for event: %s", err)
	}
	// head towards the beginning of time
	to := types.TopologyToken{}

	testCases := []struct {
		Name  string
		From  types.TopologyToken
		Limit int
		Wants []*gomatrixserverlib.HeaderedEvent
	}{
		{
			Name:  "Pagination over the whole fork",
			From:  fromLatest,
			Limit: 5,
			Wants: reversed(events[len(events)-5:]),
		},
		{
			Name:  "Paginating to the middle of the fork",
			From:  fromLatest,
			Limit: 2,
			Wants: reversed(events[len(events)-2:]),
		},
		{
			Name:  "Pagination FROM the middle of the fork",
			From:  fromFork,
			Limit: 3,
			Wants: reversed(events[len(events)-5 : len(events)-2]),
		},
	}

	for _, tc := range testCases {
		// backpaginate messages starting at the latest position.
		paginatedEvents, err := db.GetEventsInTopologicalRange(ctx, &tc.From, &to, testRoomID, tc.Limit, true)
		if err != nil {
			t.Fatalf("%s GetEventsInRange returned an error: %s", tc.Name, err)
		}
		gots := gomatrixserverlib.HeaderedToClientEvents(db.StreamEventsToEvents(&testUserDeviceA, paginatedEvents), gomatrixserverlib.FormatAll)
		assertEventsEqual(t, tc.Name, true, gots, tc.Wants)
	}
}

// The purpose of this test is to make sure that the query to pull out events is honouring the room ID correctly.
// It works by creating two rooms with the same events in them, then selecting events by topological range.
// Specifically, we know that events with the same depth but lower stream positions are selected, and it's possible
// that this check isn't using the room ID if the brackets are wrong in the SQL query.
func TestGetEventsInTopologicalRangeMultiRoom(t *testing.T) {
	t.Parallel()
	db := MustCreateDatabase(t)

	makeEvents := func(roomID string) (events []*gomatrixserverlib.HeaderedEvent) {
		events = append(events, MustCreateEvent(t, roomID, nil, &gomatrixserverlib.EventBuilder{
			Content:  []byte(fmt.Sprintf(`{"room_version":"4","creator":"%s"}`, testUserIDA)),
			Type:     "m.room.create",
			StateKey: &emptyStateKey,
			Sender:   testUserIDA,
			Depth:    int64(len(events) + 1),
		}))
		events = append(events, MustCreateEvent(t, roomID, []*gomatrixserverlib.HeaderedEvent{events[len(events)-1]}, &gomatrixserverlib.EventBuilder{
			Content:  []byte(`{"membership":"join"}`),
			Type:     "m.room.member",
			StateKey: &testUserIDA,
			Sender:   testUserIDA,
			Depth:    int64(len(events) + 1),
		}))
		return
	}

	roomA := "!room_a:" + string(testOrigin)
	roomB := "!room_b:" + string(testOrigin)
	eventsA := makeEvents(roomA)
	eventsB := makeEvents(roomB)
	MustWriteEvents(t, db, eventsA)
	MustWriteEvents(t, db, eventsB)

	from, err := db.MaxTopologicalPosition(ctx, roomB)
	if err != nil {
		t.Fatalf("failed to get MaxTopologicalPosition: %s", err)
	}
	// head towards the beginning of time
	to := types.TopologyToken{}

	// Query using room B as room A was inserted first and hence A will have lower stream positions but identical depths,
	// allowing this bug to surface.
	paginatedEvents, err := db.GetEventsInTopologicalRange(ctx, &from, &to, roomB, 5, true)
	if err != nil {
		t.Fatalf("GetEventsInRange returned an error: %s", err)
	}
	gots := gomatrixserverlib.HeaderedToClientEvents(db.StreamEventsToEvents(&testUserDeviceA, paginatedEvents), gomatrixserverlib.FormatAll)
	assertEventsEqual(t, "", true, gots, reversed(eventsB))
}

// The purpose of this test is to make sure that events are returned in the right *order* when they have been inserted in a manner similar to
// how any kind of backfill operation will insert the events. This test inserts the SimpleRoom events in a manner similar to how backfill over
// federation would:
// - First inserts the join event of test user C.
// - Inserts chunks of history in strata, e.g. (25-30, 20-25, 15-20, 10-15, 5-10, 0-5).
// The test then does a backfill to ensure that the response is ordered correctly according to depth.
func TestGetEventsInRangeWithEventsInsertedLikeBackfill(t *testing.T) {
	t.Parallel()
	db := MustCreateDatabase(t)
	events, _ := SimpleRoom(t, testRoomID, testUserIDA, testUserIDB)

	// "federation" join
	userC := fmt.Sprintf("@radiance:%s", testOrigin)
	joinEvent := MustCreateEvent(t, testRoomID, []*gomatrixserverlib.HeaderedEvent{events[len(events)-1]}, &gomatrixserverlib.EventBuilder{
		Content:  []byte(`{"membership":"join"}`),
		Type:     "m.room.member",
		StateKey: &userC,
		Sender:   userC,
		Depth:    int64(len(events) + 1),
	})
	MustWriteEvents(t, db, []*gomatrixserverlib.HeaderedEvent{joinEvent})

	// Sync will return this for the prev_batch
	from := topologyTokenBefore(t, db, joinEvent.EventID())

	// inject events in batches as if they were from backfill
	// e.g. [1,2,3,4,5,6] => [4,5,6], [1,2,3]
	chunkSize := 5
	for i := len(events); i >= 0; i -= chunkSize {
		start := i - chunkSize
		if start < 0 {
			start = 0
		}
		backfill := events[start:i]
		MustWriteEvents(t, db, backfill)
	}

	// head towards the beginning of time
	to := types.TopologyToken{}

	// starting at `from`, backpaginate to the beginning of time, asserting as we go.
	chunkSize = 3
	events = reversed(events)
	for i := 0; i < len(events); i += chunkSize {
		paginatedEvents, err := db.GetEventsInTopologicalRange(ctx, from, &to, testRoomID, chunkSize, true)
		if err != nil {
			t.Fatalf("GetEventsInRange returned an error: %s", err)
		}
		gots := gomatrixserverlib.HeaderedToClientEvents(db.StreamEventsToEvents(&testUserDeviceA, paginatedEvents), gomatrixserverlib.FormatAll)
		endi := i + chunkSize
		if endi > len(events) {
			endi = len(events)
		}
		assertEventsEqual(t, from.String(), true, gots, events[i:endi])
		from = topologyTokenBefore(t, db, paginatedEvents[len(paginatedEvents)-1].EventID())
	}
}
*/

func TestSendToDeviceBehaviour(t *testing.T) {
	t.Parallel()
	alice := test.NewUser(t)
	bob := test.NewUser(t)
	deviceID := "one"
	test.WithAllDatabases(t, func(t *testing.T, dbType test.DBType) {
		db, close := MustCreateDatabase(t, dbType)
		defer close()

		// At this point there should be no messages. We haven't sent anything yet.
		_, events, err := db.SendToDeviceUpdatesForSync(ctx, alice.ID, deviceID, 0, 100)
		if err != nil {
			t.Fatal(err)
		}
		if len(events) != 0 {
			t.Fatal("first call should have no updates")
		}

		// Try sending a message.
		streamPos, err := db.StoreNewSendForDeviceMessage(ctx, alice.ID, deviceID, gomatrixserverlib.SendToDeviceEvent{
			Sender:  bob.ID,
			Type:    "m.type",
			Content: json.RawMessage("{}"),
		})
		if err != nil {
			t.Fatal(err)
		}

		// At this point we should get exactly one message. We're sending the sync position
		// that we were given from the update and the send-to-device update will be updated
		// in the database to reflect that this was the sync position we sent the message at.
		streamPos, events, err = db.SendToDeviceUpdatesForSync(ctx, alice.ID, deviceID, 0, streamPos)
		if err != nil {
			t.Fatal(err)
		}
		if count := len(events); count != 1 {
			t.Fatalf("second call should have one update, got %d", count)
		}

		// At this point we should still have one message because we haven't progressed the
		// sync position yet. This is equivalent to the client failing to /sync and retrying
		// with the same position.
		streamPos, events, err = db.SendToDeviceUpdatesForSync(ctx, alice.ID, deviceID, 0, streamPos)
		if err != nil {
			t.Fatal(err)
		}
		if len(events) != 1 {
			t.Fatal("third call should have one update still")
		}
		err = db.CleanSendToDeviceUpdates(context.Background(), alice.ID, deviceID, streamPos)
		if err != nil {
			t.Fatal(err) // returning silently here would skip the remaining assertions
		}
		// At this point we should now have no updates, because we've progressed the sync
		// position. Therefore the update from before will not be sent again.
		_, events, err = db.SendToDeviceUpdatesForSync(ctx, alice.ID, deviceID, streamPos, streamPos+10)
		if err != nil {
			t.Fatal(err)
		}
		if len(events) != 0 {
			t.Fatal("fourth call should have no updates")
		}

		// At this point we should still have no updates, because no new updates have been sent.
		_, events, err = db.SendToDeviceUpdatesForSync(ctx, alice.ID, deviceID, streamPos, streamPos+10)
		if err != nil {
			t.Fatal(err)
		}
		if len(events) != 0 {
			t.Fatal("fifth call should have no updates")
		}

		// Send some more messages and verify the ordering is correct ("in order of arrival").
		var lastPos types.StreamPosition = 0
		for i := 0; i < 10; i++ {
			streamPos, err = db.StoreNewSendForDeviceMessage(ctx, alice.ID, deviceID, gomatrixserverlib.SendToDeviceEvent{
				Sender:  bob.ID,
				Type:    "m.type",
				Content: json.RawMessage(fmt.Sprintf(`{"count":%d}`, i)),
			})
			if err != nil {
				t.Fatal(err)
			}
			lastPos = streamPos
		}
		_, events, err = db.SendToDeviceUpdatesForSync(ctx, alice.ID, deviceID, 0, lastPos)
		if err != nil {
			t.Fatalf("unable to get events: %v", err)
		}
		for i := 0; i < 10; i++ {
			want := json.RawMessage(fmt.Sprintf(`{"count":%d}`, i))
			got := events[i].Content
			if !bytes.Equal(got, want) {
				t.Fatalf("messages are out of order\nwant: %s\ngot: %s", string(want), string(got))
			}
		}
	})
}

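// exampleSendToDeviceLifecycle is a minimal sketch (not part of the original
// tests) of the full lifecycle exercised above: store a message, drain it at
// the next sync, then acknowledge the stream position so the same message is
// not delivered again. The "m.example" event type and empty content are
// illustrative only.
func exampleSendToDeviceLifecycle(t *testing.T, db storage.Database, userID, deviceID, senderID string) {
	pos, err := db.StoreNewSendForDeviceMessage(ctx, userID, deviceID, gomatrixserverlib.SendToDeviceEvent{
		Sender:  senderID,
		Type:    "m.example",
		Content: json.RawMessage("{}"),
	})
	if err != nil {
		t.Fatal(err)
	}
	// Deliver everything up to and including pos.
	_, events, err := db.SendToDeviceUpdatesForSync(ctx, userID, deviceID, 0, pos)
	if err != nil {
		t.Fatal(err)
	}
	t.Logf("delivering %d send-to-device messages", len(events))
	// Once the client has synced past pos, clean up so the messages above are
	// not returned by subsequent calls.
	if err := db.CleanSendToDeviceUpdates(ctx, userID, deviceID, pos); err != nil {
		t.Fatal(err)
	}
}
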
/*
func TestInviteBehaviour(t *testing.T) {
	db := MustCreateDatabase(t)
	inviteRoom1 := "!inviteRoom1:somewhere"
	inviteEvent1 := MustCreateEvent(t, inviteRoom1, nil, &gomatrixserverlib.EventBuilder{
		Content:  []byte(`{"membership":"invite"}`),
		Type:     "m.room.member",
		StateKey: &testUserIDA,
		Sender:   "@inviteUser1:somewhere",
	})
	inviteRoom2 := "!inviteRoom2:somewhere"
	inviteEvent2 := MustCreateEvent(t, inviteRoom2, nil, &gomatrixserverlib.EventBuilder{
		Content:  []byte(`{"membership":"invite"}`),
		Type:     "m.room.member",
		StateKey: &testUserIDA,
		Sender:   "@inviteUser2:somewhere",
	})
	for _, ev := range []*gomatrixserverlib.HeaderedEvent{inviteEvent1, inviteEvent2} {
		_, err := db.AddInviteEvent(ctx, ev)
		if err != nil {
			t.Fatalf("Failed to AddInviteEvent: %s", err)
		}
	}
	latest, err := db.SyncPosition(ctx)
	if err != nil {
		t.Fatalf("failed to get SyncPosition: %s", err)
	}

	// both invite events should appear in a new sync
	beforeRetireRes := types.NewResponse()
	beforeRetireRes, err = db.IncrementalSync(ctx, beforeRetireRes, testUserDeviceA, types.StreamingToken{}, latest, 0, false)
	if err != nil {
		t.Fatalf("IncrementalSync failed: %s", err)
	}
	assertInvitedToRooms(t, beforeRetireRes, []string{inviteRoom1, inviteRoom2})

	// retire one event: a fresh sync should just return 1 invite room
	if _, err = db.RetireInviteEvent(ctx, inviteEvent1.EventID()); err != nil {
		t.Fatalf("Failed to RetireInviteEvent: %s", err)
	}
	latest, err = db.SyncPosition(ctx)
	if err != nil {
		t.Fatalf("failed to get SyncPosition: %s", err)
	}
	res := types.NewResponse()
	res, err = db.IncrementalSync(ctx, res, testUserDeviceA, types.StreamingToken{}, latest, 0, false)
	if err != nil {
		t.Fatalf("IncrementalSync failed: %s", err)
	}
	assertInvitedToRooms(t, res, []string{inviteRoom2})

	// a sync after we have received both invites should result in a leave for the retired room
	res = types.NewResponse()
	res, err = db.IncrementalSync(ctx, res, testUserDeviceA, beforeRetireRes.NextBatch, latest, 0, false)
	if err != nil {
		t.Fatalf("IncrementalSync failed: %s", err)
	}
	assertInvitedToRooms(t, res, []string{})
	if _, ok := res.Rooms.Leave[inviteRoom1]; !ok {
		t.Fatalf("IncrementalSync: expected to see room left after it was retired but it wasn't")
	}
}

func assertInvitedToRooms(t *testing.T, res *types.Response, roomIDs []string) {
	t.Helper()
	if len(res.Rooms.Invite) != len(roomIDs) {
		t.Fatalf("got %d invited rooms, want %d", len(res.Rooms.Invite), len(roomIDs))
	}
	for _, roomID := range roomIDs {
		if _, ok := res.Rooms.Invite[roomID]; !ok {
			t.Fatalf("missing room ID %s", roomID)
		}
	}
}

func assertEventsEqual(t *testing.T, msg string, checkRoomID bool, gots []gomatrixserverlib.ClientEvent, wants []*gomatrixserverlib.HeaderedEvent) {
	t.Helper()
	if len(gots) != len(wants) {
		t.Fatalf("%s response returned %d events, want %d", msg, len(gots), len(wants))
	}
	for i := range gots {
		g := gots[i]
		w := wants[i]
		if g.EventID != w.EventID() {
			t.Errorf("%s event[%d] event_id mismatch: got %s want %s", msg, i, g.EventID, w.EventID())
		}
		if g.Sender != w.Sender() {
			t.Errorf("%s event[%d] sender mismatch: got %s want %s", msg, i, g.Sender, w.Sender())
		}
		if checkRoomID && g.RoomID != w.RoomID() {
			t.Errorf("%s event[%d] room_id mismatch: got %s want %s", msg, i, g.RoomID, w.RoomID())
		}
		if g.Type != w.Type() {
			t.Errorf("%s event[%d] event type mismatch: got %s want %s", msg, i, g.Type, w.Type())
		}
		if g.OriginServerTS != w.OriginServerTS() {
			t.Errorf("%s event[%d] origin_server_ts mismatch: got %v want %v", msg, i, g.OriginServerTS, w.OriginServerTS())
		}
		if string(g.Content) != string(w.Content()) {
			t.Errorf("%s event[%d] content mismatch: got %s want %s", msg, i, string(g.Content), string(w.Content()))
		}
		if string(g.Unsigned) != string(w.Unsigned()) {
			t.Errorf("%s event[%d] unsigned mismatch: got %s want %s", msg, i, string(g.Unsigned), string(w.Unsigned()))
		}
		if (g.StateKey == nil && w.StateKey() != nil) || (g.StateKey != nil && w.StateKey() == nil) {
			t.Errorf("%s event[%d] state_key [not] missing: got %v want %v", msg, i, g.StateKey, w.StateKey())
			continue
		}
		if g.StateKey != nil {
			if !w.StateKeyEquals(*g.StateKey) {
				t.Errorf("%s event[%d] state_key mismatch: got %s want %s", msg, i, *g.StateKey, *w.StateKey())
			}
		}
	}
}

func topologyTokenBefore(t *testing.T, db storage.Database, eventID string) *types.TopologyToken {
	tok, err := db.EventPositionInTopology(ctx, eventID)
	if err != nil {
		t.Fatalf("failed to get EventPositionInTopology: %s", err)
	}
	tok.Decrement()
	return &tok
}
*/