Earlier, we used to check whether the length of altered_user_ids was 1,
and if so create a dict mapping that user id to the streams they were
added to or removed from, and optimise our event sending that way. But
that was making the code harder to read.
Now, we just key user_streams by a concatenated list of user_ids and
add streams to user_streams accordingly.
Furthermore, we no longer check for peer_user_ids before modifying
user_streams, since it is highly unlikely to be empty, and if it is,
send_event can handle that just fine.
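
A minimal sketch of the new shape (names, the input mapping, and the
event payload are illustrative, not the real code):

    from collections import defaultdict

    def build_peer_events(altered_user_ids_by_stream):
        # Key user_streams by a comma-concatenated string of the altered
        # user ids, so streams affecting the same users share one event.
        user_streams = defaultdict(list)
        for stream_id, user_ids in altered_user_ids_by_stream.items():
            key = ",".join(str(user_id) for user_id in sorted(user_ids))
            user_streams[key].append(stream_id)
        for key, stream_ids in user_streams.items():
            user_ids = [int(part) for part in key.split(",")]
            yield dict(op="peer_add", user_ids=user_ids, stream_ids=stream_ids)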
Earlier, we used to send a single event for all web-public and public
streams. But public streams can have guests, which means the peer user
ids for each of them can be different based on which guests are
subscribed to which channel.
In the previous code, we were using the last stream id from another loop
to get subscribers, which was causing non-deterministic failures in our
tests, since that stream id could keep on changing.
Moreover, it doesn't make much sense to use that id here.
This commit still keeps around the optimisation for public channels with
non-guest users. It will send one event for all public channels with
non-guest users, one for web-public channels, and, for the rest of the
channels, an event per channel, each with a different set of peer user
ids.
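
Sketching the batching (illustrative; `peer_ids_for_channel` stands in
for the real subscriber computation):

    def bucket_channels_by_peers(channel_ids, peer_ids_for_channel):
        # Channels sharing an identical set of peer user ids can share one
        # event: all public channels without guest subscribers end up in a
        # single bucket, web-public channels in another, and each channel
        # with a distinct guest audience gets its own bucket.
        buckets: dict[frozenset[int], list[int]] = {}
        for channel_id in channel_ids:
            peers = frozenset(peer_ids_for_channel(channel_id))
            buckets.setdefault(peers, []).append(channel_id)
        return buckets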
For `check_user_has_permission_by_role`, we were using
`user.is_moderator` by default to check whether the user had those
privileges. But that check returns False if the user is an admin or an
owner, so we now check `is_realm_admin` too in that case.
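
In effect (sketch; the helper name is illustrative):

    def has_moderator_level_permission(user) -> bool:
        # user.is_moderator is False for admins and owners, so also accept
        # is_realm_admin, which is True for both.
        return user.is_moderator or user.is_realm_admin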
"first_message_id" field for subscription objects needs
to be updated when archiving a stream as we send a
notification message, but first_message_id will only
change if the stream did not have any messages previously.
This commit updates the code to update first_message_id
only when required.
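
Roughly (sketch, assuming the notification message's id is available;
names are illustrative):

    def maybe_update_first_message_id(stream, notification_message_id: int) -> None:
        # The archive-notification message is the stream's first message
        # only if the stream had no messages before; skip the write otherwise.
        if stream.first_message_id is None:
            stream.first_message_id = notification_message_id
            stream.save(update_fields=["first_message_id"])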
When checking DM permissions, we now use a set of users instead
of a list to check whether any user is in
direct_message_permission_group, because the sender can also be
one of the recipients.
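
For example (sketch):

    def users_to_check(sender_id: int, recipient_ids: list[int]) -> set[int]:
        # A set deduplicates the sender when they are also a recipient, so
        # membership in direct_message_permission_group is checked once per
        # distinct user.
        return {sender_id, *recipient_ids}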
This commit updates the code to not pre-fetch DM permission
group settings using select_related and to instead just
fetch the required data from the DB when checking permissions.
This will add one extra query but will help in pre-fetching
the settings for all users and for all types of messages.
Fixes part of #33677.
This commit updates is_user_in_group and is_any_user_in_group
to accept a group ID as a parameter instead of a UserGroup object.
This is a prep commit for updating the code to not prefetch the
direct message permission group.
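
The new shape, roughly (sketch; recursive subgroup membership is elided
and the query body is illustrative):

    from zerver.models import UserGroupMembership

    def is_user_in_group(user_group_id: int, user) -> bool:
        # Callers pass the group's id directly, so they no longer need to
        # fetch (or prefetch) the UserGroup object itself.
        return UserGroupMembership.objects.filter(
            user_group_id=user_group_id, user_profile_id=user.id
        ).exists()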
When testing `do_deactivate_user`, we were getting a non-deterministic
failure where the `peer_remove` event was not being sent. We were unable
to figure out exactly why, but this commit subscribes both the user
being deactivated and the user receiving the event to a new channel, in
the hope that this event is always sent regardless of other test
conditions.
We need `corporate_enabled` and some other params to render the
500 error page, but they are not passed when using `server_error`,
which only includes our custom inserted `DEFAULT_PAGE_PARAMS`.
We now render the page with `zulip_default_context` to fix this.
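
A sketch of the handler (assuming it is wired up as Django's
handler500; details simplified):

    from django.http import HttpResponseServerError
    from django.template import loader

    from zerver.context_processors import zulip_default_context

    def server_error(request, template_name="500.html"):
        # Unlike django.views.defaults.server_error, build the template
        # context with zulip_default_context so params like
        # corporate_enabled are available to the 500 page.
        context = zulip_default_context(request)
        return HttpResponseServerError(
            loader.get_template(template_name).render(context, request)
        )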
This commit removes the `/try-zulip` landing page.
The URLs are replaced with `chat.zulip.org/?show_try_zulip_modal`,
which displays a modal for spectators.
Fixes #34181.
This commit adds a modal which will be displayed when
a spectator visits `/?show_try_zulip_modal`.
When a user visits `/?show_try_zulip_modal` and is a spectator,
we set a new `show_try_zulip_modal` field in `page_params` to
`true` (in all other cases, it's `false`).
Based on the `show_try_zulip_modal` page param, the web client
shows the modal.
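
On the server side this amounts to something like the following
(sketch; the real logic lives wherever page_params is computed):

    def show_try_zulip_modal_param(request, user_profile) -> bool:
        # Only spectators (no logged-in user) visiting
        # /?show_try_zulip_modal get the flag; everyone else gets False.
        is_spectator = user_profile is None
        return is_spectator and "show_try_zulip_modal" in request.GET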
Fixes part of #34181.
We're doing this so that the client can keep track of which channels
it might need to request full subscriber data from, and which already
have full subscriber data.
In deploys with `nginx_listen_port` set, tusd would fail to send its hook
requests, as it assumed that nginx would always be listening on
127.0.0.1:80.
Set the `nginx_listen_port` on the hook URL, if necessary.
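
The effect of the fix, in sketch form (the real change is in the
deployment configuration that assembles tusd's hook URL; the path here
is made up):

    def tusd_hook_url(nginx_listen_port: int = 80) -> str:
        # Include the configured port instead of assuming nginx always
        # listens on 127.0.0.1:80. (URL path is illustrative.)
        return f"http://127.0.0.1:{nginx_listen_port}/tusd-hooks"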
As explained in the comments, if in an export with consent there are no
consenting owners, or in a public export there are no owners with email
visibility set to at least ADMINS, the exported data will, upon import,
create an organization without usable owner accounts.
Adds detailed tests for the work in the prior commits fixing the
treatment of private data in various tables in exports with consent and
public exports.
This is private information, as by inspecting the DirectMessageGroup
objects and their associated Subscription objects, you could determine
which users conversed with each other in a DM group.
This did *not* leak any actual message - only the fact that at least one
of the users in the group sent a group DM.
The prior commits significantly restricted what data gets exported from
non-consented users. The last thing we're missing is to fix the logic
to work correctly for public exports.
Prior commits focused on addressing exports with consent. This commit
adapts that work for public exports:
- Do not turn user accounts into mirror dummies in the public export - or
after export->import you'll end up with a realm with no functional
accounts, since every user is non-consented and the original logic added
in the prior commits would turn them all into mirror dummies.
- Some of the custom fetch/process functions were changed without
considering public exports - now they work correctly, by setting
consenting_user_ids to an empty set.
The Subscription Config is constructed in a bit of a strange way that's
not compatible with defining a custom_fetch function.
Instead we have to extend the system to support passing a custom
function for processing just the final list of rows right before it's
returned for writing to export files.
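
Shape of the extension (sketch; the hook's actual name and plumbing in
zerver/lib/export.py may differ):

    def finalize_rows(config, rows: list[dict]) -> list[dict]:
        # Run the Config's post-processing hook, if any, on the final list
        # of rows just before it is written to the export files. (The hook
        # name custom_process_rows is illustrative.)
        if getattr(config, "custom_process_rows", None) is not None:
            rows = config.custom_process_rows(rows)
        return rows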
As explained in the comment, if we turn a non-consented deactivated user
into a mirror dummy, this will violate the rule that a deactivated user
cannot restore their account by themselves after an export->import
cycle.
As explained in the comment added to the function, in terms of privacy
concerns, it is fine to export all data for these accounts. And it is
important to do - so that exporting an organization which was originally
imported e.g. from Slack doesn't result in excessively limited data for
accounts that were mirror dummies and never "activated" themselves.
Now that we have severely limited the way that non-consenting users get
exported, we need to start considering bots as consenting when
appropriate - otherwise the exported bot accounts will be unusable after
importing.
An administrator shouldn't be able to bypass a user's setting to hide
their email address from everyone, including admins.
Therefore, we should overwrite the delivery_email for such users during
export - unless the user consented to have their private data exported.
The notable consequence of this is that such user accounts will become
completely inaccessible after importing this data to a new server, due
to not having a functional email address on record.
These accounts can only be reclaimed via manual intervention by server
administrators to change the email address on the `UserProfile`.
This allows us to get rid of the call to `get_consented_user_ids` in
`fetch_usermessages`. Now it's only called at the beginning of the
export, eliminating the redundant DB query and also resolving the
potential for data consistency issues if some users change their
consent setting after the export starts.
Now the full export process operates with a single snapshot of these
consenting user ids.
These ids need to be plumbed through via a file rather than normal arg
passing, because this is a separate management command, run in
subprocesses during the export.
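
Sketch of the plumbing (file format and helper names are illustrative):

    import json

    def write_consented_user_ids(path: str, user_ids: set[int]) -> None:
        # Snapshot the consenting user ids once, for the
        # export_usermessage_batch subprocesses to read back.
        with open(path, "w") as f:
            json.dump(sorted(user_ids), f)

    def read_consented_user_ids(path: str) -> set[int]:
        with open(path) as f:
            return set(json.load(f))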
These users didn't consent to having their private data exported.
Therefore, correct handling of these users should involve scrubbing
their settings to just match the realm defaults.
Instead of making repeated calls to get_consented_user_ids, we can just
fetch it (mostly) once and put it in
`context["exportable_user_ids"]`. This is essentially what the
(unused until now) exportable_user_ids logic was added for after all.
An added, intended effect of this is that non-consenting users will
now get exported as mirror dummy accounts, due to the handling of
non-exportable users in `custom_fetch_user_profile`.
The remaining additional call to `get_consented_user_ids` is in
`fetch_usermessages`. This one is tricky as this function gets called
in subprocesses via
`zerver/management/commands/export_usermessage_batch.py` management
command invoked by the export process.
It requires passing the `exportable_user_ids` in some other way. This
can be dealt with in upcoming commits.
We shouldn't export the entire Client table - it includes Clients for
all the realms on the server, completely unrelated to the realm we're
exporting. Since these contain parts of the User-Agents used by the
users, we should treat them as private data and only export the Clients
that the specific data we're exporting "knows" about.
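
In sketch form (illustrative; the real code would gather client ids
from every exported table that references Client):

    def referenced_client_ids(exported_messages: list[dict]) -> set[int]:
        # Collect only the Client ids the exported data references (e.g.
        # sending_client_id on messages), rather than dumping the
        # server-wide Client table.
        return {
            row["sending_client_id"]
            for row in exported_messages
            if row.get("sending_client_id") is not None
        }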