dendrite/userapi/storage/interface.go

// Copyright 2020 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package storage
import (
	"context"
	"encoding/json"
"errors"
"github.com/matrix-org/dendrite/clientapi/auth/authtypes"
"github.com/matrix-org/dendrite/userapi/api"
"github.com/matrix-org/dendrite/userapi/storage/tables"
)

type Profile interface {
	GetProfileByLocalpart(ctx context.Context, localpart string) (*authtypes.Profile, error)
	SearchProfiles(ctx context.Context, searchString string, limit int) ([]authtypes.Profile, error)
	SetAvatarURL(ctx context.Context, localpart string, avatarURL string) error
	SetDisplayName(ctx context.Context, localpart string, displayName string) error
}

type Account interface {
	// CreateAccount makes a new account with the given login name and password, and creates an empty profile
	// for this account. If no password is supplied, the account will be a passwordless account. If the
	// account already exists, it will return nil, ErrUserExists.
	CreateAccount(ctx context.Context, localpart string, plaintextPassword string, appserviceID string, accountType api.AccountType) (*api.Account, error)
	GetAccountByPassword(ctx context.Context, localpart, plaintextPassword string) (*api.Account, error)
	GetNewNumericLocalpart(ctx context.Context) (int64, error)
	CheckAccountAvailability(ctx context.Context, localpart string) (bool, error)
	GetAccountByLocalpart(ctx context.Context, localpart string) (*api.Account, error)
	DeactivateAccount(ctx context.Context, localpart string) (err error)
	SetPassword(ctx context.Context, localpart string, plaintextPassword string) error
}
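
// Illustrative caller-side sketch, not part of the original file: registering an
// account and handling the duplicate-localpart case described by CreateAccount above.
// `db` (an Account implementation) and `ctx` are assumed, api.AccountTypeUser is
// assumed to be one of the api.AccountType constants, and ErrUserExists is the
// sentinel error referenced in the comment above.
//
//	acc, err := db.CreateAccount(ctx, "alice", "s3cret", "", api.AccountTypeUser)
//	switch {
//	case errors.Is(err, ErrUserExists):
//		// Localpart already taken: report M_USER_IN_USE to the client.
//	case err != nil:
//		return err
//	default:
//		_ = acc // the freshly created account, with an empty profile alongside it
//	}
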
type AccountData interface {
	SaveAccountData(ctx context.Context, localpart, roomID, dataType string, content json.RawMessage) error
	GetAccountData(ctx context.Context, localpart string) (global map[string]json.RawMessage, rooms map[string]map[string]json.RawMessage, err error)
	// GetAccountDataByType returns account data matching a given
	// localpart, room ID and type.
	// If no account data could be found, returns nil
	// Returns an error if there was an issue with the retrieval
	GetAccountDataByType(ctx context.Context, localpart, roomID, dataType string) (data json.RawMessage, err error)
}
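
// Illustrative sketch, not part of the original file: reading one piece of global
// account data and distinguishing "not found" (nil data, nil error) from a storage
// failure, per the GetAccountDataByType contract above. `db` and `ctx` are assumed;
// an empty room ID is used here for global (non-room) account data.
//
//	data, err := db.GetAccountDataByType(ctx, "alice", "", "m.push_rules")
//	if err != nil {
//		return err // retrieval failed
//	}
//	if data == nil {
//		// no account data of this type is stored for the user
//	}
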
type Device interface {
	GetDeviceByAccessToken(ctx context.Context, token string) (*api.Device, error)
	GetDeviceByID(ctx context.Context, localpart, deviceID string) (*api.Device, error)
	GetDevicesByLocalpart(ctx context.Context, localpart string) ([]api.Device, error)
	GetDevicesByID(ctx context.Context, deviceIDs []string) ([]api.Device, error)
	// CreateDevice makes a new device associated with the given user ID localpart.
	// If there is already a device with the same device ID for this user, that access token will be revoked
	// and replaced with the given accessToken. If the given accessToken is already in use for another device,
	// an error will be returned.
	// If no device ID is given, one is generated.
	// Returns the device on success.
	CreateDevice(ctx context.Context, localpart string, deviceID *string, accessToken string, displayName *string, ipAddr, userAgent string) (dev *api.Device, returnErr error)
	UpdateDevice(ctx context.Context, localpart, deviceID string, displayName *string) error
	UpdateDeviceLastSeen(ctx context.Context, localpart, deviceID, ipAddr string) error
	RemoveDevices(ctx context.Context, localpart string, devices []string) error
	// RemoveAllDevices deletes all devices for this user. Returns the devices deleted.
	RemoveAllDevices(ctx context.Context, localpart, exceptDeviceID string) (devices []api.Device, err error)
}
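
// Illustrative sketch, not part of the original file: creating a device as described
// by the CreateDevice comment above. A nil device ID asks the storage layer to
// generate one; a nil display name leaves it unset. `db`, `ctx` and `accessToken`
// are assumed.
//
//	dev, err := db.CreateDevice(ctx, "alice", nil, accessToken, nil, "203.0.113.1", "curl/7.79")
//	if err != nil {
//		return err // e.g. the access token is already in use by another device
//	}
//	_ = dev // contains the device ID that was generated for us
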
type KeyBackup interface {
	CreateKeyBackup(ctx context.Context, userID, algorithm string, authData json.RawMessage) (version string, err error)
	UpdateKeyBackupAuthData(ctx context.Context, userID, version string, authData json.RawMessage) (err error)
	DeleteKeyBackup(ctx context.Context, userID, version string) (exists bool, err error)
	GetKeyBackup(ctx context.Context, userID, version string) (versionResult, algorithm string, authData json.RawMessage, etag string, deleted bool, err error)
	UpsertBackupKeys(ctx context.Context, version, userID string, uploads []api.InternalKeyBackupSession) (count int64, etag string, err error)
	GetBackupKeys(ctx context.Context, version, userID, filterRoomID, filterSessionID string) (result map[string]map[string]api.KeyBackupSession, err error)
	CountBackupKeys(ctx context.Context, version, userID string) (count int64, err error)
}
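
// Illustrative sketch, not part of the original file: the key backup flow suggested
// by the method set above — create a backup version, upload session keys, and keep
// the returned etag for change detection. `db`, `ctx`, `authData` and `sessions`
// (a []api.InternalKeyBackupSession) are assumed.
//
//	version, err := db.CreateKeyBackup(ctx, "@alice:example.com", "m.megolm_backup.v1.curve25519-aes-sha2", authData)
//	if err != nil {
//		return err
//	}
//	count, etag, err := db.UpsertBackupKeys(ctx, version, "@alice:example.com", sessions)
//	if err != nil {
//		return err
//	}
//	_ = count // number of keys now held in this backup version
//	_ = etag  // changes whenever the backup contents change
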
type LoginToken interface {
	// CreateLoginToken generates a token, stores and returns it. The lifetime is
	// determined by the loginTokenLifetime given to the Database constructor.
	CreateLoginToken(ctx context.Context, data *api.LoginTokenData) (*api.LoginTokenMetadata, error)
	// RemoveLoginToken removes the named token (and may clean up other expired tokens).
	RemoveLoginToken(ctx context.Context, token string) error
	// GetLoginTokenDataByToken returns the data associated with the given token.
	// May return sql.ErrNoRows.
	GetLoginTokenDataByToken(ctx context.Context, token string) (*api.LoginTokenData, error)
}
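
// Illustrative sketch, not part of the original file: redeeming a login token,
// treating sql.ErrNoRows as an unknown or expired token per the comment above, and
// removing it once used. `db`, `ctx` and `token` are assumed, and "database/sql"
// would need to be imported by the caller.
//
//	data, err := db.GetLoginTokenDataByToken(ctx, token)
//	if errors.Is(err, sql.ErrNoRows) {
//		// unknown or expired token: reject the login attempt
//	} else if err != nil {
//		return err
//	}
//	_ = data // identifies the user this token logs in
//	_ = db.RemoveLoginToken(ctx, token) // remove it so it cannot be redeemed again
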
type OpenID interface {
	CreateOpenIDToken(ctx context.Context, token, userID string) (exp int64, err error)
	GetOpenIDTokenAttributes(ctx context.Context, token string) (*api.OpenIDTokenAttributes, error)
}
type Pusher interface {
	UpsertPusher(ctx context.Context, p api.Pusher, localpart string) error
	GetPushers(ctx context.Context, localpart string) ([]api.Pusher, error)
	RemovePusher(ctx context.Context, appid, pushkey, localpart string) error
	RemovePushers(ctx context.Context, appid, pushkey string) error
}

type ThreePID interface {
	SaveThreePIDAssociation(ctx context.Context, threepid, localpart, medium string) (err error)
	RemoveThreePIDAssociation(ctx context.Context, threepid string, medium string) (err error)
	GetLocalpartForThreePID(ctx context.Context, threepid string, medium string) (localpart string, err error)
	GetThreePIDsForLocalpart(ctx context.Context, localpart string) (threepids []authtypes.ThreePID, err error)
}

type Notification interface {
	InsertNotification(ctx context.Context, localpart, eventID string, pos int64, tweaks map[string]interface{}, n *api.Notification) error
	DeleteNotificationsUpTo(ctx context.Context, localpart, roomID string, pos int64) (affected bool, err error)
	SetNotificationsRead(ctx context.Context, localpart, roomID string, pos int64, read bool) (affected bool, err error)
	GetNotifications(ctx context.Context, localpart string, fromID int64, limit int, filter tables.NotificationFilter) ([]*api.Notification, int64, error)
	GetNotificationCount(ctx context.Context, localpart string, filter tables.NotificationFilter) (int64, error)
	GetRoomNotificationCounts(ctx context.Context, localpart, roomID string) (total int64, highlight int64, _ error)
	DeleteOldNotifications(ctx context.Context) error
}
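
// Illustrative sketch, not part of the original file: how a consumer might use the
// notification methods — store a notification when an event matches a push rule, then
// clear notifications up to the stream position of a later read receipt. `db`, `ctx`,
// `eventID`, `roomID`, `streamPos`, `receiptPos`, `tweaks` and `notif` are assumed.
//
//	if err := db.InsertNotification(ctx, "alice", eventID, streamPos, tweaks, notif); err != nil {
//		return err
//	}
//	// Later, when a read receipt arrives at receiptPos:
//	affected, err := db.DeleteNotificationsUpTo(ctx, "alice", roomID, receiptPos)
//	if err != nil {
//		return err
//	}
//	_ = affected // true if any notifications were removed
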
type Database interface {
	Account
	AccountData
	Device
	KeyBackup
	LoginToken
	Notification
	OpenID
	Profile
	Pusher
	ThreePID
}

// Err3PIDInUse is the error returned when trying to save an association involving
// a third-party identifier which is already associated with a local user.
var Err3PIDInUse = errors.New("this third-party identifier is already in use")
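
// Illustrative sketch, not part of the original file: associating a third-party
// identifier with a local user and handling Err3PIDInUse when the 3PID is already
// claimed, as described above. `db` and `ctx` are assumed.
//
//	err := db.SaveThreePIDAssociation(ctx, "bob@example.com", "alice", "email")
//	if errors.Is(err, Err3PIDInUse) {
//		// report M_THREEPID_IN_USE to the client
//	} else if err != nil {
//		return err
//	}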