We send the user_id of the referrer instead of their email in the
invites dict.
Sending user_ids is more robust, as a user_id is an immutable
reference to a user, rather than something that can change over time.
Updates to the webapp UI to display the inviters for more convenient
inspection will come in a future commit.
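For illustration, a minimal sketch of the intended shape change (the
field names here are assumptions, not necessarily the real keys):
old_invite = {
    "ref": "referrer@example.com",  # email; changes if the user updates it
    "email": "invitee@example.com",
}
new_invite = {
    "invited_by_user_id": 42,  # immutable reference to the referring user
    "email": "invitee@example.com",
}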
We now send a "remove" mobile push notification to users who are
no longer mentioned after the content of a message was edited.
This also corrects the notification count for the mobile apps in the
case where a user was mentioned in a muted stream/topic, the message
was edited, and the user is no longer mentioned; this fixes the case
where the user has read all of their unreads but the notification
badge on the app is still positive.
Fixes #15428.
do_clear_mobile_push_notifications_for_ids can now be used to
clear push notifications for multiple users at once. This method
loops over the users, so no performance optimization is gained.
We've been seeing an exception in server_event_dispatch.js in
production where in large organizations, sometimes when a new user
joined, every other browser in the organization would throw an
exception processing some sort of realm_user/update event.
It turns out the cause was that when a user copies their profile from
an existing user account with a user-uploaded avatar, the code path we
reused to set the avatar sends a realm_user/update event about
the avatar change -- for a user that hadn't been fully created and
certainly hadn't had a realm_user/add event sent for them.
We fix this and add tests and comments to prevent it recurring.
(Removed an incorrect docstring while working on this).
This commit changes the do_get_user_invites function to not return
multiuse invites to non-admin users. We should only return multiuse
invites to admins, as we only allow admins to create them.
Streams can have lots of subscribers, meaning that the archiving process
will be moving tons of UserMessages per message. For that reason, using
a smaller batch size for stream messages is justified.
Some personal messages need to be added in test_scrub_realm to have
coverage of do_delete_messages_by_sender after these changes.
Old: a validator returns None on success and returns an error string
on error.
New: a validator returns the validated value on success and raises
ValidationError on error.
This allows mypy to catch mismatches between the annotated type of a
REQ parameter and the type that the validator actually validates.
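For illustration, a minimal before/after sketch of this convention
(the function bodies here are illustrative, not the real validators):
from typing import Optional
from django.core.exceptions import ValidationError
# Old convention: return None on success, an error string on failure.
def check_int_old(var_name: str, val: object) -> Optional[str]:
    if not isinstance(val, int):
        return f"{var_name} is not an integer"
    return None
# New convention: return the validated value and raise ValidationError
# on failure, so the return type documents what the validator produces
# and mypy can check it against the annotated REQ parameter.
def check_int_new(var_name: str, val: object) -> int:
    if not isinstance(val, int):
        raise ValidationError(f"{var_name} is not an integer")
    return val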
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Two things were broken here:
* we were using name(s) instead of id(s)
* we were always sending lists that only
had one element
Now we just send "stream_id" instead of "subscriptions".
If anything, we should start sending a list of users
instead of a list of streams. For example, see
the code below:
if peer_user_ids:
    for new_user_id in new_user_ids:
        event = dict(type="subscription", op="peer_add",
                     stream_id=stream.id,
                     user_id=new_user_id)
        send_event(realm, event, peer_user_ids)
Note that this only affects the webapp, as mobile/ZT
don't use this.
The loop I added here in 5b49839b08 was
ill-conceived. The critical issue was that despite its name,
do_clear_mobile_push_notifications_for_ids does not immediately clear
push notifications (except in our test suite, where `send_event`
immediately calls into the queue worker code!).
Instead, it queues work to clear those push notifications. This
means that the first user to declare bankruptcy with a large number of
unreads will fill the queue, and then this will just be an infinite
loop adding more work to the queue.
This adds a new client_capability that clients such as the mobile apps
can use to avoid unreasonable network bandwidth consumed sending
avatar URLs in organizations with 10,000s of users.
Clients don't strictly need this data, as they can always use the
/avatar/{user_id} endpoint to fetch the avatar if desired.
This is more efficient, especially for realms with 10,000+ users,
where the avatar URLs would increase the payload size significantly
and cost us more bandwidth.
Fixes #15287.
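As a rough sketch, a client might opt in when registering an event
queue like this (the capability name user_avatar_url_field_optional is
an assumption for illustration):
import json
import requests
response = requests.post(
    "https://chat.example.com/api/v1/register",
    auth=("bot@example.com", "API_KEY"),
    data={"client_capabilities": json.dumps(
        {"user_avatar_url_field_optional": True})},
)
# If avatar_url is omitted from user objects, the client can lazily
# fetch /avatar/{user_id} for the users it actually needs to display.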
This commit adds backend support for setting message_retention_days
while creating streams and updating it for an existing stream. We only
allow organization owners to set/update it for a stream.
The 'message_retention_days' field for a stream existed previously too,
but there was no way to set it while creating a stream or to update it
for an existing stream using any endpoint.
Fixes #14498.
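As a rough sketch, updating retention for an existing stream might
look like this (the endpoint path and credentials are assumptions for
illustration; only organization owners may make this change):
import requests
response = requests.patch(
    "https://chat.example.com/api/v1/streams/42",
    auth=("owner@example.com", "API_KEY"),
    data={"message_retention_days": 30},
)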
When a topic is moved to a different stream, the messages may no
longer be reachable by a guest user, if the user is not subscribed
to the new stream.
We used to send a message update event to the client in these cases,
which was confusing both to the client updating the message and to
the server sending push notifications for it.
Now, we delete the UserMessage rows for these messages for that user
and send a delete message event to the client, which makes both the
push notification code and the event-handling client treat the
messages as deleted, so no confusion arises in the code.
The most important change here is the one in the
maybe_send_to_registration codepath, as the insufficient validation
there could lead to fetching an expired PreregistrationUser that was
invited as an administrator even years ago, with this registration
resulting in the new user being a realm administrator.
Combined with the buggy migration in
0198_preregistrationuser_invited_as.py, this led to users incorrectly
joining as organization administrators by accident. But even without
that bug, this issue could have allowed a user who was invited as an
administrator, but then had that invitation expire and later joined via
social authentication, to incorrectly join as an organization
administrator.
The second change is in ConfirmationEmailWorker, where this wasn't a
security problem, but if the server was stopped for long enough with
invitation emails still waiting in the queue, then after starting it
up again, the queue worker would send out emails for invites that
had already expired.
This commit removes the is_old_stream property from the stream objects
returned by the API. This property was unnecessary and is essentially
equivalent to 'stream_weekly_traffic != null'.
We now compute sub.is_old_stream in stream_data.update_calculated_fields
in the frontend code, where it is used to check whether we have a
non-null stream_weekly_traffic.
Fixes #15181.
This likely fixes a bug that can leak thousands of messages into the
invalid state where:
* user_message.flags.active_mobile_push_notification is True
* user_message.flags.read is True
which is intended to be impossible except during the transient process
between marking messages as read and sending the "remove push
notifications" event.
The bug is that if a user who is declaring bankruptcy with 10,000s of
unreads ends up having the database query to mark all of those as read
take 60s, the Django/uwsgi request will time out and kill the process.
If the postgres transaction still completes, we'll end up with the
second half of this function never being run.
A safer ordering is to do the smaller queries first.
We do this in a loop for correctness in the unlikely event there are
more than 10,000 of these.
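A minimal sketch of the bounded-batch loop described above, with
hypothetical helpers standing in for the real queries:
from typing import Callable, List
BATCH_SIZE = 10000
def clear_in_batches(
    fetch_batch: Callable[[int], List[int]],  # hypothetical query helper
    clear_flags: Callable[[List[int]], None],  # hypothetical update helper
) -> None:
    # Clear a bounded batch at a time, so no single query has to touch
    # an unbounded number of rows before the larger work proceeds.
    while True:
        message_ids = fetch_batch(BATCH_SIZE)
        if not message_ids:
            break
        clear_flags(message_ids)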
This is designed to have no user-facing change unless the client
declares bulk_message_deletion in its client_capabilities.
Clients that do so will receive a single bulk event for bulk deletions
of messages within a single conversation (topic or PM thread).
Backend implementation of #15285.
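A rough sketch of the difference in the event payloads (field names
here are assumptions based on the description above):
# Without the capability: one event per deleted message.
legacy_event = {
    "type": "delete_message",
    "message_id": 101,
    "message_type": "stream",
    "stream_id": 7,
    "topic": "lunch",
}
# With bulk_message_deletion: a single event for the conversation.
bulk_event = {
    "type": "delete_message",
    "message_ids": [101, 102, 103],
    "message_type": "stream",
    "stream_id": 7,
    "topic": "lunch",
}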
Whenever we use API queries to mark messages as read we now increment
two new LoggingCount stats, messages_read::hour and
messages_read_interactions::hour.
We add an early return in do_increment_logging_stat function if there
are no changes (increment is 0), as an optimization to avoid
unnecessary database queries.
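The guard is the trivial early return sketched here (signature
abridged; not the exact code):
def do_increment_logging_stat(model_object, stat, subgroup, event_time,
                              increment=1):
    if increment == 0:
        return  # nothing to record; skip the database queries entirely
    ...  # existing upsert logic runs only for a real increment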
We also log the messages_read_interactions::hour stat
as the number of API queries made to mark messages as read.
We don't include tests for the case where do_update_pointer is called
because do_update_pointer will most likely be removed from the
codebase in the near future.
Use read-only types (List ↦ Sequence, Dict ↦ Mapping, Set ↦
AbstractSet) to guard against accidental mutation of the default
value.
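A small sketch of the hazard this guards against (the example function
is illustrative):
from typing import List, Sequence
# Risky: a List annotation invites in-place mutation, and a mutable
# default would be shared across calls.
def add_flags_bad(flags: List[str] = []) -> List[str]:
    flags.append("read")  # mutates the shared default on every call
    return flags
# Safer: Sequence is read-only as far as mypy is concerned, so any
# accidental mutation of the default is flagged by type checking.
DEFAULT_FLAGS: Sequence[str] = []
def add_flags_good(flags: Sequence[str] = DEFAULT_FLAGS) -> Sequence[str]:
    return [*flags, "read"]  # build a new list instead of mutating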
Signed-off-by: Anders Kaseorg <anders@zulip.com>
There seems to have been a confusion between two different uses of the
word “optional”:
• An optional parameter may be omitted and replaced with a default
value.
• An Optional type has None as a possible value.
Sometimes an optional parameter has a default value of None, or None
is otherwise a meaningful value to provide, in which case it makes
sense for the optional parameter to have an Optional type. But in
other cases, optional parameters should not have Optional type. Fix
them.
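A short sketch of the distinction (the signatures are illustrative):
from typing import Optional
# Optional parameter where None is a meaningful "not specified" value:
# an Optional type is appropriate.
def get_members(realm_id: int, stream_id: Optional[int] = None) -> None:
    ...
# Optional parameter where None is never meaningful: use a concrete
# default and a plain, non-Optional type.
def get_members_paginated(realm_id: int, page_size: int = 25) -> None:
    ...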
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys
last_filename = None
last_row = None
lines = []
for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")
if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit adds three `.pysa` model files: `false_positives.pysa`
for ruling out false positive flows with `Sanitize` annotations,
`req_lib.pysa` for educating pysa about Zulip's `REQ()` pattern for
extracting user input, and `redirects.pysa` for capturing the risk
of open redirects within Zulip code. Additionally, this commit
introduces `mark_sanitized`, an identity function which can be used
to selectively clear taint in cases where `Sanitize` models will not
work. This commit also puts `mark_sanitized` to work removing known
false positive flows.
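For reference, a minimal sketch of what such an identity "sanitizer"
looks like (the body is an assumption based on the description above):
from typing import TypeVar
T = TypeVar("T")
def mark_sanitized(value: T) -> T:
    # Identity at runtime; the .pysa models declare the return value as
    # sanitized, so taint stops propagating through it. Only use this
    # for flows that have been manually verified to be false positives.
    return value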
The only clients that should use the typing
indicators endpoint are our internal clients,
and they should send a JSON-formatted list
of user_ids.
We now enforce this, which removes some
complexity surrounding legacy ways of sending
users, such as emails and comma-delimited
strings of user_ids.
There may be a very tiny number of mobile
clients that still use the old emails API.
This won't have any user-facing effect on
the mobile users themselves, but if you type
a message to your friend on an old mobile
app, the friend will no longer see typing
indicators.
Also, the mobile team may see some errors
in their Sentry logs from the server rejecting
posts from the old mobile clients.
The error messages we report here are a bit
more generic, since we now just use REQ
to do validation with this code:
validator=check_list(check_int)
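A hedged sketch of what the endpoint's parameter handling looks like
with that validator (the view and parameter names are illustrative,
and the import paths are best-effort assumptions):
from typing import List
from django.http import HttpRequest, HttpResponse
from zerver.lib.request import REQ, has_request_variables
from zerver.lib.validator import check_int, check_list
from zerver.models import UserProfile
@has_request_variables
def send_typing_notification_backend(
    request: HttpRequest,
    user_profile: UserProfile,
    op: str = REQ("op"),
    to: List[int] = REQ("to", validator=check_list(check_int)),
) -> HttpResponse:
    ...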
This also allows us to remove a test hack
related to the API documentation. (We changed
the docs to reflect the modern API in an
earlier commit, but the tests couldn't be
fixed while we still had the more complex
semantics for the "to" parameter.)
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The old topic in the message edit event can be used to help the client
calculate useful information, such as whether a change
in the current narrow is required.
This fixes our re-narrow logic after a stream edit of a topic with
no change in the topic name itself; previously, the original topic was
not present in the received event, and hence `orig_topic` was
undefined in this case.
An option to disable breadcrumb messages was added to both the message
edit form and the topic edit stream popover.
The user can now select, via checkboxes in the UI, which streams
receive the notification when a topic is moved to a different stream.
We pipe realm_id through functions where it is available; this
helps us avoid querying for the realm_id in a loop when
multiple messages are being processed.
This reimplements our Zoom video call integration to use an OAuth
application. In addition to providing a cleaner setup experience,
especially on zulipchat.com, where the server administrators can have
already done the app registration, it also fixes the limitation of the
previous integration that it could only have one call active at a time
when set up with typical Zoom API keys.
Fixes#11672.
Co-authored-by: Marco Burstein <marco@marco.how>
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
We change do_create_user and create_user to accept
role as a parameter instead of 'is_realm_admin' and 'is_guest'.
These changes are done to minimize data conversions between
role and boolean fields.
This commit changes the person dict in the event sent by
do_change_user_role to send role instead of is_admin or is_guest.
This makes things much more straightforward for our upcoming primary
owners feature.
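A self-contained sketch of the design choice, with illustrative role
values rather than the actual constants:
ROLE_REALM_ADMINISTRATOR = 200
ROLE_MEMBER = 400
ROLE_GUEST = 600
def create_user_dict(full_name: str, role: int = ROLE_MEMBER) -> dict:
    # A single role field replaces is_realm_admin/is_guest booleans;
    # derived booleans are computed from the role where still needed.
    return {
        "full_name": full_name,
        "role": role,
        "is_admin": role == ROLE_REALM_ADMINISTRATOR,
        "is_guest": role == ROLE_GUEST,
    }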