Appending to bytes in a loop leads to a quadratic slowdown since
Python doesn’t optimize this for bytes like it does for str.
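For illustration only (not the actual patch), the general shape of the
fix is to accumulate into a bytearray or join a list of chunks:

    # Quadratic: each += copies the entire accumulated bytes object.
    def build_slow(chunks: list[bytes]) -> bytes:
        out = b""
        for chunk in chunks:
            out += chunk  # copies all of `out` on every iteration
        return out

    # Linear: accumulate in a mutable bytearray (or b"".join a list).
    def build_fast(chunks: list[bytes]) -> bytes:
        out = bytearray()
        for chunk in chunks:
            out += chunk  # amortized O(len(chunk))
        return bytes(out)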
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While accepting an invitation from a user, there was no check in
place to verify whether the user who sent the invitation had since
been deactivated.
Skip sending notifications about newly-joined users to users who are
now deactivated.
Fixes #18569.
We don't have to go to the database to get the Recipient
fields for `user_profile.recipient`.
See also commit 85ed6f332a from a little
over a year ago; it's very similar.
The bug here probably didn't come up too much in
practice, but if we were adding a user to multiple
streams when they already had used all N available
colors, all the new streams would be assigned the same
color, since the size of used_colors would stay at N,
thwarting our little modulo-len hackery.
It's not a terrible bug, since users can obviously
customize their stream colors as they see fit.
Usually when we are adding a user to multiple streams,
the users are fairly new, and thus don't have many
existing streams, so I have never heard this bug
reported in the field.
Anyway, assigning the colors in bulk seems to make more
sense, and I added some tests.
For the situations where all the colors have already
been used, I didn't put a ton of thought into exactly
which repeated colors we want to choose; instead, I
just ensure they're different modulo 24. It's possible
that we should just have more than 24 canned colors, or
we should just assign the same default color every time
and let users change it themselves (once they've gone
beyond the 24, to be clear). Or maybe we can just do
something smarter here. I don't have enough time for a
deep dive on this issue.
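For illustration, the bulk-assignment idea looks roughly like
this (the names and the palette are stand-ins, not the actual
code):

    # Hypothetical stand-in for the real palette of 24 canned colors
    # (abbreviated here).
    STREAM_ASSIGNMENT_COLORS = ["#76ce90", "#fae589", "#a6c7e5", "#e79ab5"]

    def pick_colors(used_colors: set[str], num_needed: int) -> list[str]:
        # Prefer colors the user has not used yet.
        unused = [c for c in STREAM_ASSIGNMENT_COLORS if c not in used_colors]
        colors = unused[:num_needed]
        # If the palette is exhausted, cycle through it, so the repeated
        # colors are at least distinct modulo the palette size.
        i = 0
        while len(colors) < num_needed:
            colors.append(STREAM_ASSIGNMENT_COLORS[i % len(STREAM_ASSIGNMENT_COLORS)])
            i += 1
        return colors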
Part of our codepath for subscribing users involves
fetching the users' existing subscriptions to make sure
we can do things like properly report to the clients
that the users were already subscribed. This codepath
used to be coupled to code that helped users maintain
unique stream colors.
Suppose you are creating a new stream, and you are
importing users from an older stream with 15k
subscribers, and each of your users is subscribed to
about 20 streams.
The prior code, instead of filtering on recipient_id,
would literally look at every subscription for every
user, which was kind of crazy if you didn't understand
the pick-stream-color complications.
Before this commit, we would fetch 300k rows with 15
columns each (granted, all but one of the columns are
bool/int). That's a total of 4.5 million tiny objects
that we had to glom into Django ORM objects and slice
and dice.
After this commit, we would fetch exactly zero rows
for the are-they-already-subscribed logic.
Yes, ZERO.
If we were to mistakenly try to re-add the same 15k
subscribers to the new stream (under the new code), we
would now fetch 15k Subscription rows instead of 300k.
It is worth looking at the prior commit. We go to
great lengths to ensure that users get new stream colors
when we invite them to a stream, and we still fetch a
bunch of data for that. Instead of 4.5 million cells,
it's more like 600k cells (2 columns per row), and it's
less than that insofar as some users may only
have 24 distinct colors among their many streams.
It's a lot of work.
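In Django ORM terms, the change is roughly the following (a
sketch assuming Zulip's Subscription model, not the literal
patch):

    # Before (sketch): pulls every subscription row for every user
    # being added to the stream.
    existing_subs = Subscription.objects.filter(user_profile_id__in=user_ids)

    # After (sketch): only rows for the stream we are subscribing them
    # to, which is zero rows for a brand-new stream.
    existing_subs = Subscription.objects.filter(
        user_profile_id__in=user_ids,
        recipient_id=stream.recipient_id,
    )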
This commit sets us up for the next commit, which will
save us a very expensive query.
If you are adding 15k users to a stream, and each user
has about 20 existing streams, then we need to retrieve
300k rows from the database to figure out which stream
colors they already have. We don't need all the extra
fields from Subscription, so now we get just the two
values we need for making a color map.
In the next commit we'll eliminate the other use case
for the big query, and I will explain in greater
depth how splitting out the color-picking code can
be a huge win. It is possible that some product decisions
could make this codepath easier. We could also do some
engineering specific to stream colors, such as caching
which colors users have already used.
This does cost us an extra round trip to the database.
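Roughly, the slimmer query looks like this (a sketch assuming
Zulip's Subscription and Recipient models, not the literal patch):

    from collections import defaultdict

    # Fetch only the two columns we need, instead of full
    # Subscription objects.
    rows = Subscription.objects.filter(
        user_profile_id__in=user_ids,
        recipient__type=Recipient.STREAM,
    ).values_list("user_profile_id", "color")

    # Build the per-user color map used by the color-picking code.
    used_colors: dict[int, set[str]] = defaultdict(set)
    for user_profile_id, color in rows:
        used_colors[user_profile_id].add(color)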
Having the `wildcard_mentions_notify` setting turned on does
not necessarily mean that the user will receive a notification
for that message. There is more nuance to this, as explained
in the updated comment.
We recently ran into a payload in production that didn't contain
an event type at all. A payload where we can't figure out the event
type is quite rare. Instead of letting these payloads run amok, we
should raise a more informative exception for such unusual payloads.
If we encounter too many of these, then we can choose to conduct a
deeper investigation on a case-by-case basis.
With some changes by Tim Abbott.
We avoid repeating the same calculations over and
over again for the same stream.
This helps, but the real bottleneck in this function
is that send_event usually takes at least a millisecond,
and that adds up quickly if you're doing something
like subscribing 5k users to a new stream.
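The caching here is just the usual hoisting of per-stream
work out of the per-user loop; a rough sketch with
hypothetical helper names:

    # Hypothetical sketch: compute stream-level data once per stream,
    # not once per (user, stream) pair.
    stream_info = {
        stream.id: build_stream_dict(stream)  # hypothetical helper
        for stream in streams
    }

    for user in users:
        for stream in streams:
            # Reuse the cached per-stream data for every subscriber.
            notify_subscription_added(user, stream_info[stream.id])  # hypothetical helper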
GIF files can have an uppercase `.GIF` extension, and in any case we
determine the file format by inspecting the image data, so there's no
reason to have this assertion.
(The code for serving still images does not rely on the file being a
GIF.)
We keep process_new_human_user out of
the atomic block because it involves many
different operations and also sends events.
We tried wrapping the event send in
on_commit, but that would require many
changes in the tests, so we have skipped
it for now.
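For reference, the on_commit pattern considered looks roughly
like this (a sketch only; the helper names stand in for the
real code):

    from django.db import transaction

    def create_user_and_notify(realm, email, full_name):
        # Sketch: do_create_user, send_event, and process_new_human_user
        # stand in for the real helpers.
        with transaction.atomic():
            user_profile = do_create_user(realm, email, full_name)
            # on_commit defers the callback until the transaction commits,
            # so clients never see an event for a user whose creation was
            # rolled back.
            transaction.on_commit(
                lambda: send_event(realm, {"type": "realm_user", "op": "add"}, [])
            )
        # Kept outside the atomic block: it does many unrelated operations
        # and sends events of its own.
        process_new_human_user(user_profile)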
Updates testing helpers in `event_schema.py` for `do_update_message` so
that all stream message fields are present in any edits / updates to
stream messages. Adds verification tests of events returned from private
message edits and from stream message content-only and topic-only edits.
Updates the `update_message` event type to always include a `stream_id`
field when the message being edited is a stream message. This change
aligns with the current definition of the `/get-events` endpoint
in the OpenAPI documentation.
It is better to press on than to stop halfway through due to a user
whose email no longer works. The exception is already logged, which
is sufficient here, as this is generally run interactively.
We now complain if a test author sends a stream message
that does not result in the sender getting a
UserMessage row for the message.
This is basically 100% equivalent to complaining that
the author failed to subscribe the sender to the stream
as part of the test setup, as far as I can tell, so the
AssertionError instructs the author to subscribe the
sender to the stream.
We exempt bots from this check, although it is
plausible we should only exempt the system bots like
the notification bot.
I considered auto-subscribing the sender to the stream,
but that can be a little more expensive than the
current check, and we generally want test setup to be
explicit.
If there is some legitimate way that a subscribed human
sender can't get a UserMessage, then we probably want
an explicit test for that, or we may want to change the
backend to just write a UserMessage row in that
hypothetical situation.
For most tests, including almost all the ones fixed
here, the author just wants their test setup to
realistically reflect normal operation, and often devs
may not realize that Cordelia is not subscribed to
Denmark or not realize that Hamlet is not subscribed to
Scotland.
Some of us don't remember our Shakespeare from high
school, and our stream subscriptions don't even
necessarily reflect which countries the Bard placed his
characters in.
There may also be some legitimate use case where an
author wants to simulate sending a message to an
unsubscribed stream, but for those edge cases, they can
always set allow_unsubscribed_sender to True.
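Roughly, the new check looks like this (the helper name is
hypothetical; the flag matches the description above):

    def check_sender_got_user_message(sender, message_id, allow_unsubscribed_sender=False):
        # Bots are exempt, though arguably only system bots (like the
        # notification bot) should be.
        if sender.is_bot or allow_unsubscribed_sender:
            return
        if not UserMessage.objects.filter(
            user_profile=sender, message_id=message_id
        ).exists():
            raise AssertionError(
                "Please subscribe the sender to the stream as part of the "
                "test setup (or pass allow_unsubscribed_sender=True)."
            )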
While races here are unlikely, it is most correct to enforce this
invariant at the database layer, and having a database-level
constraint makes the models file a bit more readable.
These are not considered to be "personal"
info, even if you upload them, so we
don't export them.
Generally the only folks who upload
these are admins, who can easily get
them in other ways. In fact, anybody
can get these via the app.
We now ensure that all message ids are sorted BEFORE
we split them into batches.
We now do a few extra "slim" queries to get message
ids up front.
But, now, when we divide them into batches, we no
longer run 2 or 3 different complicated queries in
a loop. We just basically hydrate our message ids,
so `write_message_partials` should be easy to reason
about.
This change also means that for tiny realms with
< 1000 messages you will always have just one
JSON file, since we aggregate the ids from the
queries before batching.
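The batching itself is simple once the ids are collected and
sorted; a rough sketch (names hypothetical, assuming 1000 ids
per batch):

    def chunkify(message_ids: list[int], chunk_size: int = 1000) -> list[list[int]]:
        # Sort once, up front, so each JSON batch covers a contiguous
        # range of message ids.
        sorted_ids = sorted(message_ids)
        return [
            sorted_ids[i : i + chunk_size]
            for i in range(0, len(sorted_ids), chunk_size)
        ]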