The database is the source of truth. If we add the metadata to the
database and that succeeds, but the file then fails to be moved into
place, we think we have a file when we actually don't.
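A minimal sketch of the ordering this implies (the `Metadata` type and `insertMetadata` function are hypothetical stand-ins, not the real API): move the file into place first, and only record the metadata once the file definitely exists, cleaning the file up again if the database write fails.

```go
package media

import (
	"context"
	"fmt"
	"os"
)

// Metadata is a stand-in for the media metadata row.
type Metadata struct {
	MediaID string
	Size    int64
}

// insertMetadata is a stand-in for the real database write.
func insertMetadata(ctx context.Context, meta *Metadata) error { return nil }

// storeFile moves the uploaded file into its final location first, then
// records the metadata, so the database never claims a file that is not
// actually on disk.
func storeFile(ctx context.Context, tmpPath, finalPath string, meta *Metadata) error {
	if err := os.Rename(tmpPath, finalPath); err != nil {
		// Nothing has been written to the database, so there is nothing to undo.
		return fmt.Errorf("failed to move file: %w", err)
	}
	if err := insertMetadata(ctx, meta); err != nil {
		// Remove the file again so disk and database stay consistent.
		os.Remove(finalPath)
		return fmt.Errorf("failed to insert metadata: %w", err)
	}
	return nil
}
```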
Spawns a GET request for the same file in 100 parallel goroutines,
prints the body (which is some error JSON) for any response that is not
200 OK, and prints the number of successful requests.
This should, of course, take command line arguments for the URL and number
of requests, but that can be added as soon as it is needed.
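A minimal sketch of such a script, with the URL and request count hard-coded as placeholders:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"sync/atomic"
)

func main() {
	// Hard-coded for now; these are the values that should become
	// command line arguments.
	const url = "http://localhost:8000/_matrix/media/r0/download/example.com/someMediaID"
	const requests = 100

	var successes int64
	var wg sync.WaitGroup
	for i := 0; i < requests; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get(url)
			if err != nil {
				fmt.Println("request error:", err)
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			if resp.StatusCode != http.StatusOK {
				// Non-200 responses carry an error JSON body; print it.
				fmt.Printf("status %d: %s\n", resp.StatusCode, body)
				return
			}
			atomic.AddInt64(&successes, 1)
		}()
	}
	wg.Wait()
	fmt.Println("successful requests:", successes)
}
```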
- `s/Server/OutputRoomEvent/` in `consumers` to accurately reflect what is being consumed.
- `s/set/userIDSet/` in `notifier.go` for clarity.
- Removed lying comments.
The logic required to populate the right bits of `RoomData` tends towards
the complete `/sync` response struct, so just use the actual response struct
and save the hassle of mapping between the two. It may not make much difference
in its current form, but the next PR will make use of this.
This PR has no functional changes.
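For reference, the struct being converged on has roughly the shape of the /sync response from the Matrix client-server spec; a trimmed-down sketch (not Dendrite's exact types, and with most fields omitted):

```go
package types

import "encoding/json"

// SyncResponse is a trimmed-down sketch of the /sync response shape from the
// Matrix client-server spec; the real type carries more fields.
type SyncResponse struct {
	NextBatch string `json:"next_batch"`
	Rooms     struct {
		Join map[string]JoinedRoom `json:"join"`
	} `json:"rooms"`
}

// JoinedRoom holds the state and timeline sections for one joined room.
type JoinedRoom struct {
	State struct {
		Events []json.RawMessage `json:"events"`
	} `json:"state"`
	Timeline struct {
		Events    []json.RawMessage `json:"events"`
		Limited   bool              `json:"limited"`
		PrevBatch string            `json:"prev_batch"`
	} `json:"timeline"`
}
```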
This is only 'mostly' correct currently, because what should be no-op duplicate
joins will actually trigger the entire room state to be re-sent.
Bizarrely, it's significantly easier to just do that than to work out whether we
should, and there are no client-visible effects of doing so, so we just do it for now.
- Test data for the sync server is now in its own file.
- Rejig the sync server tests to support multiple /sync requests and corresponding
  assertions (a rough sketch of the test-case shape follows this list).
- Fixed a minor bug which caused state events to appear twice in /sync
  responses when syncing without a `since` parameter.
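As a rough illustration of the shape the rejigged tests take (the names here are illustrative, not the actual harness): each test case is a sequence of /sync requests, with assertions run against every response.

```go
package syncapi

// syncStep is one /sync request in a test case, plus the assertions to make
// against its response. Field names are illustrative only.
type syncStep struct {
	since        string   // sync token from the previous response; "" means an initial sync
	wantEventIDs []string // event IDs that must appear in this response
}

// syncTestCase is a named sequence of steps: issue each request in order,
// feed the next_batch token forward, and assert on every response.
type syncTestCase struct {
	name  string
	steps []syncStep
}
```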
If multiple requests arrive for the same remote file, we want to
download it once and then serve all the other incoming requests
from the cache.
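One common way to coalesce the downloads in Go is `golang.org/x/sync/singleflight`; this is a sketch of that approach, not necessarily how this code implements it, and `downloadToCache` is a hypothetical stand-in for the real fetch-and-store logic.

```go
package media

import "golang.org/x/sync/singleflight"

// fetcher coalesces concurrent downloads of the same remote file: the first
// request performs the download, all the others wait for its result.
type fetcher struct {
	group singleflight.Group
}

// getFile returns the local cache path for origin/mediaID, downloading the
// file at most once no matter how many requests arrive concurrently.
func (f *fetcher) getFile(origin, mediaID string) (string, error) {
	key := origin + "/" + mediaID
	path, err, _ := f.group.Do(key, func() (interface{}, error) {
		return downloadToCache(origin, mediaID)
	})
	if err != nil {
		return "", err
	}
	return path.(string), nil
}

// downloadToCache stands in for the real logic that fetches the remote file,
// writes it into the local cache and returns the cached file's path.
func downloadToCache(origin, mediaID string) (string, error) {
	return "/var/cache/media/" + origin + "/" + mediaID, nil
}
```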
The main thing missing from the code at this point is a mechanism to
time out database queries. They are made across a network, so we
should be robust to network connectivity issues. This is a general
problem across Dendrite and not limited to just this code.
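A sketch of what such a timeout could look like, using `context.WithTimeout` around a single query (the table and column names are made up):

```go
package storage

import (
	"context"
	"database/sql"
	"time"
)

// selectMediaSize wraps one query in a timeout so that a hung network
// connection to the database fails fast instead of blocking forever.
func selectMediaSize(ctx context.Context, db *sql.DB, mediaID string) (int64, error) {
	// Bound the query to 5 seconds; the parent context can still cancel sooner.
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	var size int64
	err := db.QueryRowContext(ctx,
		"SELECT file_size_bytes FROM media_repository WHERE media_id = $1",
		mediaID,
	).Scan(&size)
	return size, err
}
```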