This commit updates the code to stop pre-fetching DM permission
group settings using select_related, and to instead fetch the
required data from the database when checking the permission.
This adds one query, but avoids pre-fetching the settings for all
users and for all types of messages.
Fixes part of #33677.
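A rough sketch of the shape of the change (field and helper names are taken from the commit message and from memory, so treat them as illustrative rather than the actual diff):

```python
from zerver.lib.user_groups import is_user_in_group  # real helper; usage here is illustrative

# Before (illustrative): eagerly joining the permission group settings on
# every realm fetch, whether or not a DM permission check ever runs.
#   realm = Realm.objects.select_related(
#       "direct_message_permission_group",
#   ).get(id=realm_id)

# After (illustrative): fetch only what the permission check needs, when it
# runs. This costs one extra query on that path, but avoids prefetching the
# settings for every user and every type of message.
def can_send_direct_message(sender: "UserProfile", realm: "Realm") -> bool:
    return is_user_in_group(realm.direct_message_permission_group_id, sender)
```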
This commit updates is_user_in_group and is_any_user_in_group
to accept a group ID as a parameter instead of a UserGroup object.
This is a prep commit for updating the code to not prefetch the
direct message permission group settings.
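For illustration, the signature change might look like the following sketch; `get_group_member_ids` is a hypothetical helper standing in for the real recursive-membership query:

```python
# Before (illustrative): the caller had to fetch/prefetch the UserGroup row.
def is_user_in_group(user_group: "UserGroup", user: "UserProfile") -> bool:
    return user.id in get_group_member_ids(user_group.id)

# After (illustrative): the caller passes the ID it already has on hand
# (e.g. realm.direct_message_permission_group_id), so no UserGroup object
# needs to be loaded up front.
def is_user_in_group(user_group_id: int, user: "UserProfile") -> bool:
    return user.id in get_group_member_ids(user_group_id)
```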
When testing `do_deactivate_user`, we were getting a non-deterministic
failure where the `peer_remove` event was not being sent. We were unable to
figure out the exact reason why, but this commit subscribes both the
user being deactivated and the user receiving the event to a new channel,
in the hope that this event is always sent regardless of other test
conditions.
We need `corporate_enabled` and some other params to render the
500 error page; these are not passed when using `server_error`,
which only includes our custom-inserted `DEFAULT_PAGE_PARAMS`.
We fix this by rendering the page with `zulip_default_context`.
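A minimal sketch of the fix, assuming the handler shape and the context helper named in the message (the real implementation may differ in details):

```python
from django.shortcuts import render

from zerver.context_processors import zulip_default_context


def custom_server_error(request, template_name="500.html"):
    # Django's stock server_error view renders with an essentially empty
    # context (only our injected DEFAULT_PAGE_PARAMS), so values like
    # corporate_enabled are missing. Build the full default context instead.
    context = zulip_default_context(request)
    return render(request, template_name, context=context, status=500)
```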
This commit removes the `/try-zulip` landing page.
The URLs are replaced with `chat.zulip.org/?show_try_zulip_modal`,
which displays a modal for spectators.
Fixes #34181.
This commit adds a modal which will be displayed when
a spectator visits `/?show_try_zulip_modal`.
When a user visits `/?show_try_zulip_modal` and is a spectator,
we set a new `show_try_zulip_modal` field in `page_params` to
`true` (in all other cases, it's `false`).
Based on the `show_try_zulip_modal` page param, the web client
shows the modal.
Fixes part of #34181.
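The server-side piece could look roughly like this sketch; the function name and the way the flag is threaded into `page_params` are assumptions, not the actual home-view code:

```python
def compute_show_try_zulip_modal(request, user_profile) -> bool:
    # Only spectators (logged-out visitors) ever see the modal; for everyone
    # else the page_params field stays False.
    is_spectator = user_profile is None
    return is_spectator and "show_try_zulip_modal" in request.GET


# Later, while assembling page_params (illustrative):
# page_params["show_try_zulip_modal"] = compute_show_try_zulip_modal(request, user_profile)
```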
We're doing this so that the client can keep track of which channels
it might need to request full subscriber data from, and which already
have full subscriber data.
In deploys with `nginx_listen_port` set, tusd would fail to send its hook
requests, as it assumed that nginx would always be listening on
127.0.0.1:80.
Set the `nginx_listen_port` on the hook URL, if necessary.
As explained in the comments, if in an export with consent there are no
consenting owners, or in a public export there are no owners with email
visibility set to at least ADMINS, the exported data will, upon import,
create an organization without usable owner accounts.
Adds detailed tests for the work in the prior commits fixing the
treatment of private data in various tables in exports with consent and
public exports.
This is private information, as by inspecting the DirectMessageGroup
objects and their associated Subscription objects, you could determine
which users conversed with each other in a DM group.
This did *not* leak any actual message - only the fact that at least one
of the users in the group sent a group DM.
The prior commits significantly restricted what data gets exported from
non-consented users. The last thing we're missing is to fix the logic
to work correctly for public exports.
Prior commits focused on addressing exports with consent. This commit
adapts that logic to work with public exports:
- Do not turn user accounts into mirror dummies in the public export - or
after export->import you'll end up with a realm with no functional
accounts; as every user is non-consented and the original logic added in
the prior commits will turn them into mirror dummies.
- Some of the custom fetch/process functions were changed without
considering public exports - now they work correctly, by setting
consenting_user_ids to an empty set (see the sketch below).
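An illustrative sketch of that idea; the flag, helper, and variable names here are assumptions based on the commit message, not the exact export code:

```python
if public_only:
    # A public export behaves like a consent export in which nobody
    # consented...
    consenting_user_ids: set[int] = set()
    # ...except that accounts must remain usable after import, so we do not
    # turn non-consenting users into mirror dummies in this mode.
    convert_non_consenting_to_mirror_dummies = False
else:
    consenting_user_ids = get_consented_user_ids(realm)
    convert_non_consenting_to_mirror_dummies = True
```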
The Subscription Config is constructed in a bit of a strange way that's
not compatible with defining a custom_fetch function.
Instead we have to extend the system to support passing a custom
function for processing just the final list of rows right before it's
returned for writing to export files.
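A simplified sketch of the extension; the attribute and function names are illustrative stand-ins, not the actual zerver/lib/export.py API:

```python
from typing import Any, Callable, Optional

TableRow = dict[str, Any]


class Config:
    """Simplified stand-in for the export Config class."""

    def __init__(
        self,
        table: str,
        custom_process_rows: Optional[Callable[[list[TableRow]], list[TableRow]]] = None,
    ) -> None:
        self.table = table
        # New hook: post-process the already-fetched rows right before they
        # are written to the export files.
        self.custom_process_rows = custom_process_rows


def write_rows(response: dict[str, list[TableRow]], config: Config, rows: list[TableRow]) -> None:
    if config.custom_process_rows is not None:
        rows = config.custom_process_rows(rows)
    response[config.table] = rows
```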
As explained in the comment, if we turn a non-consented deactivated user
into a mirror dummy, this will violate the rule that a deactivated user
cannot restore their account by themselves after an export->import
cycle.
As explained in the comment added to the function, in terms of privacy
concerns, it is fine to export all data for these accounts. And it is
important to do - so that exporting an organization which was originally
imported e.g. from Slack doesn't result in excessively limited data for
accounts that were mirror dummies and never "activated" themselves.
Now that we have severely limited the way that non-consenting users get
exported, we need to start considering bots as consenting when
appropriate - otherwise the exported bot accounts will be unusable after
importing.
An administrator shouldn't be able to bypass a user's setting to hide
their email address from everyone, including admins.
Therefore, we should overwrite the delivery_email for such users during
export - unless the user consented to have their private data exported.
The notable consequence of this is that such user accounts will become
completely inaccessible after importing this data to a new server, due
to not having a functional email address on record.
It will only be possible to reclaim these accounts via manual
intervention by server administrators to change the email address on
the `UserProfile`.
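Sketched out, the scrubbing step might look like the following; the placeholder address and the exact row handling are illustrative, while `EMAIL_ADDRESS_VISIBILITY_NOBODY` is the real "hide from everyone, including admins" setting:

```python
def maybe_scrub_delivery_email(user_row: dict, consenting_user_ids: set[int]) -> None:
    hidden_from_admins = (
        user_row["email_address_visibility"]
        == UserProfile.EMAIL_ADDRESS_VISIBILITY_NOBODY
    )
    if hidden_from_admins and user_row["id"] not in consenting_user_ids:
        # Export a non-functional placeholder instead of the real address;
        # after import, only a server admin editing UserProfile.delivery_email
        # can make this account reachable again.
        user_row["delivery_email"] = f"user-{user_row['id']}@placeholder.invalid"
```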
This allows us to get rid of the call to `get_consented_user_ids` in
`fetch_usermessages`. Now it's only called at the beginning of the
export, eliminating the redundant db query and also resolving the
potential for data consistency issues, if some users change their
consent setting after the export starts.
Now the full export process operates with a single snapshot of these
consenting user ids.
These ids need to be plumbed through via a file rather than normal arg
passing, because this is a separate management command, run in
subprocesses during the export.
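The plumbing might be sketched like this; the file name and layout are assumptions for illustration:

```python
import json
import os


# In the parent export process: snapshot the consenting user IDs once and
# write them where the batch subprocesses can find them.
def write_consented_user_ids(output_dir: str, consenting_user_ids: set[int]) -> None:
    with open(os.path.join(output_dir, "consented_user_ids.json"), "w") as f:
        json.dump(sorted(consenting_user_ids), f)


# In each export_usermessage_batch subprocess: read the snapshot back
# instead of re-running the database query.
def read_consented_user_ids(output_dir: str) -> set[int]:
    with open(os.path.join(output_dir, "consented_user_ids.json")) as f:
        return set(json.load(f))
```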
These users didn't consent to having their private data exported.
Therefore, correct handling of these users should involve scrubbing
their settings to just match the realm defaults.
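For example, the scrubbing could be sketched as copying the realm-level defaults over the user's own values; the function and row names are illustrative, with `UserProfile.property_types` standing in for the list of personal settings:

```python
def scrub_settings_to_realm_defaults(user_row: dict, realm_default_row: dict) -> None:
    # Personal display/notification settings are private data for a
    # non-consenting user; replace them with the realm's defaults so the
    # export carries no per-user signal.
    for setting in UserProfile.property_types:
        if setting in realm_default_row:
            user_row[setting] = realm_default_row[setting]
```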
Instead of making repeated calls to get_consented_user_ids, we can just
fetch it (mostly) once and put it in
`context["exportable_user_ids"]`. This is essentially what the
(unused until now) exportable_user_ids logic was added for after all.
The added, intended effect of this is that non-consenting users will
now get exported as mirror dummy accounts, due to the handling of
non-exportable users in `custom_fetch_user_profile`.
The remaining additional call to `get_consented_user_ids` is in
`fetch_usermessages`. This one is tricky, as this function gets called
in subprocesses via the
`zerver/management/commands/export_usermessage_batch.py` management
command invoked by the export process.
It requires passing the `exportable_user_ids` in some other way. This
can be dealt with in upcoming commits.
We shouldn't export the entire Client table - it includes Clients for
all the realms on the server, completely unrelated to the realm we're
exporting. Since these contain parts of the UserAgents used by the
users, we should treat these as private data and only export the Clients
that the specific data we're exporting "knows" about.
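The filtering can be sketched roughly as below; `exported_rows_referencing_clients` is a hypothetical variable holding rows already collected for this export (for example from UserActivity), and which tables reference clients is illustrative here:

```python
from zerver.models import Client

# Only export Client rows actually referenced by the data being exported.
referenced_client_ids = {
    row["client"] for row in exported_rows_referencing_clients
}
client_rows = list(Client.objects.filter(id__in=referenced_client_ids).values())
```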
This reduces confusion among users when they download the Intel
version and it runs very slowly. Downloading the arm64
version on an Intel Mac would simply not work.
Users with Intel Macs now have a habit (at least I do) of looking
for the `Intel` version of software when downloading an app. So,
`Intel` was made bold to help with that process.
We already do this in computed_settings.py, but only if the
S3 (secret) key is set. Those aren't required to be set, and tusd
_requires_ a region, so we try again to suss it out here.
Earlier, in 'EmptyTopicNameTest.test_initial_state_data',
we were not passing a short 'fetch_event_types' to 'do_events_register'
resulting in unnecessary work to fetch extra initial data which
isn't important for the test.
This commit updates the test to pass a 'fetch_event_types'
parameter with the event types required for the test.
This commit renames "(no topic)" to "" when used as the
topic name in `POST /typing`.
A message sent to "(no topic)" is treated by the server as being
sent to "", so it makes sense to show
the typing notification in "" while the message is being composed.
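A minimal sketch of the normalization; the actual endpoint code may differ:

```python
def normalize_typing_topic(topic_name: str) -> str:
    # Messages sent to "(no topic)" are stored by the server under the empty
    # string topic, so typing notifications should use "" as well.
    if topic_name == "(no topic)":
        return ""
    return topic_name
```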
To zulip/python-zulip-api, to keep them closer to their source code.
- Renamed generate_zulip_bots_static_files to
generate_bots_integrations_static_files to accommodate the new function.
- Added a new function to
tools/setup/generate_bots_integrations_static_files to copy the
integration docs into static/generated/integrations (see the sketch
after this list).
- Updated integrations.py and computed_settings.py to use the new doc
paths.
- Deleted the affected integration docs.
- Updated the dependency URL.
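A rough sketch of the new copy step; the function name and the layout of the python-zulip-api checkout are assumptions:

```python
import glob
import os
import shutil


def copy_integration_docs(api_repo_checkout: str) -> None:
    # `api_repo_checkout` is a hypothetical path to the python-zulip-api
    # checkout; the real glob pattern depends on that repo's layout.
    target_root = "static/generated/integrations"
    for doc_path in glob.glob(os.path.join(api_repo_checkout, "*", "doc.md")):
        integration_name = os.path.basename(os.path.dirname(doc_path))
        target_dir = os.path.join(target_root, integration_name)
        os.makedirs(target_dir, exist_ok=True)
        shutil.copyfile(doc_path, os.path.join(target_dir, "doc.md"))
```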
The Jira plugin integration doc will be moved in the next commit, to
"static/generated/integrations/jira/doc.md", as the directory name of
the integration is "jira" in zulip/python-zulip-api.
- Jira integration doc (previously) - "zerver/webhooks/jira/doc.md"
- Jira plugin integration doc (next commit) -
"static/generated/integrations/jira/doc.md"
Both of these will use the same path "jira/doc.md" as their
integration.doc value, and the actual file is loaded based on the order
of template directories listed in computed_settings.py.
Hence, use a custom path for the Jira integration doc to avoid this
collision.