This was confusingly asserting that the subscription was active, not
the channel. We could rename it to `require_active_subscription`, but
it was only passed with a non-default value in b2cb443d24, and that
call was removed in 378062cc83.
- Made the branch-filtering checks uniform across all the integrations
by adding a helper function to git.py and reusing it (a sketch of the
helper follows this list).
- Instead of checking whether the name of the branch that generated the
event is a substring of the "branches" parameter, we now check for an
exact match.
For example, if there are two branches named "main" and
"release/v1.0-main", and the user wants to track pushes to only the
"release/v1.0-main" branch, they previously couldn't: pushes to both
branches would always be tracked, since "main" is a substring of
"release/v1.0-main". There was no way to filter out the shorter-named
branch when the names overlapped.
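A minimal sketch of the kind of helper described above; the name,
signature, and comma-separated "branches" format are assumptions, not
necessarily what was actually added to git.py:

```python
def is_branch_name_notifiable(branch: str, branches: str | None) -> bool:
    # No filter configured: notify for all branches.
    if not branches:
        return True
    # Exact match against each configured name, instead of the old
    # substring test, so a push to "main" no longer matches a
    # configured "release/v1.0-main".
    return branch in {b.strip() for b in branches.split(",")}
```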
`deliver_scheduled_emails` tries to deliver the email synchronously,
and if it fails, it retries after 10 seconds. Since it does not track
retries, and always tries the earliest-scheduled-but-due message
first, the worker will not make forward progress if there is a
persistent failure with that message, and will retry indefinitely.
This can result in excessive network or email delivery charges from
the remote SMTP server.
Switch to delivering emails via a new queue worker. The
`deliver_scheduled_emails` job now serves only to pull deferred jobs
out of the table once they are due, insert them into RabbitMQ, and
then delete them. This limits the potential for head-of-queue
failures to failures inserting into RabbitMQ, which is more reasonable
than failures speaking to a complex external system we do not control.
Retries and any connections to the SMTP server are left to the
RabbitMQ consumer.
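A minimal sketch of the mover's loop as described above; the queue
name and exact code shape are assumptions, not Zulip's actual
implementation:

```python
from django.db import transaction
from django.utils.timezone import now

from zerver.lib.queue import queue_json_publish
from zerver.models import ScheduledEmail

def deliver_scheduled_emails() -> None:
    while True:
        with transaction.atomic():
            email = (
                ScheduledEmail.objects.filter(scheduled_timestamp__lte=now())
                .order_by("scheduled_timestamp")
                .select_for_update()
                .first()
            )
            if email is None:
                return
            # The only failure mode left here is the RabbitMQ insert;
            # SMTP delivery and retries live in the queue consumer.
            queue_json_publish("deferred_email_senders", {"id": email.id})
            email.delete()
```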
We build a new RabbitMQ queue, rather than use the existing
`email_senders` queue, because that queue is expected to be reasonably
low-latency, for things like missed message notifications. The
`send_future_email` codepath, which inserts into ScheduledEmails, is
also (ab)used for digest emails, which are extremely bursty in their
frequency -- and a large burst could significantly delay emails behind
it in the queue.
The new queue is explicitly only for messages which were not initiated
by user actions (e.g., invitation reminders, digests, new account
follow-ups) which are thus not latency-sensitive.
Fixes: #32463.
Passing the user group object is fine for
`do_change_stream_group_based_setting` in the case of a named user
group. But for anonymous groups, if the code path calling that function
is not creating a new anonymous user group, it has to modify the user
group itself before calling that function. In that case, if
`old_setting_api_value` is not provided, `old_user_group` is computed
incorrectly, since the group ID has not changed for the stream but the
group membership has; `old_setting_api_value` will then be the same as
`new_setting_api_value`.
It is better to accept the new setting value as either an int or
UserGroupMembersDict, so that `do_change_stream_group_based_setting` can
decide what to do with that argument.
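A runnable sketch of the dispatch that union type enables; the
dataclass here is a stand-in for Zulip's `UserGroupMembersDict`, with
field names assumed:

```python
from dataclasses import dataclass

# Stand-in for Zulip's UserGroupMembersDict (field names assumed).
@dataclass
class UserGroupMembersDict:
    direct_members: list[int]
    direct_subgroups: list[int]

def describe_setting_value(value: int | UserGroupMembersDict) -> str:
    # do_change_stream_group_based_setting can branch the same way,
    # handling the anonymous-group membership itself instead of
    # requiring callers to mutate the group before calling it.
    if isinstance(value, int):
        return f"named group {value}"
    return (
        f"anonymous group with members {value.direct_members} "
        f"and subgroups {value.direct_subgroups}"
    )

print(describe_setting_value(12))
print(describe_setting_value(UserGroupMembersDict([5, 7], [3])))
```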
Currently there are no scenarios where we use bulk_access_messages for
editing, but we might in the future, and it's better to have an
explicit argument called is_modifying_message, so that the person
making that change makes a conscious decision about setting that
property.
This is valuable so that one is forced to explicitly make a decision
on what is correct when adding new callers. Past experience tells us that
not having to explicitly show the decision leads to people introducing
security bugs in PRs that the maintainer has to catch in review, and our
goal for access control code should be that security bugs are hard to write.
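A minimal sketch of the pattern: a required keyword-only flag means new
call sites cannot be written without consciously choosing a value. The
signature details here are assumptions:

```python
from zerver.models import Message, UserProfile

def bulk_access_messages(
    user_profile: UserProfile,
    messages: list[Message],
    *,
    is_modifying_message: bool,
) -> list[Message]:
    # Filter to the messages this user may access; when modifying,
    # apply the stricter checks (e.g., rejecting archived channels).
    ...

# Callers are forced to state their intent explicitly:
# bulk_access_messages(user, msgs, is_modifying_message=False)
```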
Fixes #33688.
This is valuable so that one is forced to explicitly make a decision
on what is correct when adding new callers. Past experience tells us that
not having to explicitly show the decision leads to people introducing
security bugs in PRs that the maintainer has to catch in review, and our
goal for access control code should be that security bugs are hard to write.
Fixes part of #33688.
We no longer fetch all the realm group settings using select_related
for register data, since doing so takes a lot of time in the query
planning phase. This commit updates the code to fetch the members and
subgroups data for all groups in user_groups_in_realm_serialized, so
that we do not need to access each setting group individually.
user_groups_in_realm_serialized is updated to send the required data
accordingly.
Fixes #33656.
The following improvements are made in user_groups_in_realm_serialized:
- Members and subgroups are fetched in a single query using union (a
sketch follows this list).
- The query to fetch groups no longer fetches setting fields, since we
already have the members and subgroups of all the setting groups, and
we can check whether a setting group is a named group directly, without
checking the named_user_group field of the UserGroup object, as we have
the IDs of all named groups in the realm.
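A sketch of the union approach; the model names are Zulip's, but the
exact query shape here is an assumption:

```python
from collections import defaultdict

from django.db.models import CharField, Value

from zerver.models import GroupGroupMembership, UserGroupMembership

# `realm` is the Realm whose groups are being serialized.
member_rows = (
    UserGroupMembership.objects.filter(user_group__realm=realm)
    .annotate(kind=Value("member", output_field=CharField()))
    .values_list("user_group_id", "user_profile_id", "kind")
)
subgroup_rows = (
    GroupGroupMembership.objects.filter(supergroup__realm=realm)
    .annotate(kind=Value("subgroup", output_field=CharField()))
    .values_list("supergroup_id", "subgroup_id", "kind")
)

members: dict[int, list[int]] = defaultdict(list)
subgroups: dict[int, list[int]] = defaultdict(list)
# One round trip (UNION ALL) instead of per-group queries.
for group_id, other_id, kind in member_rows.union(subgroup_rows, all=True):
    (members if kind == "member" else subgroups)[group_id].append(other_id)
```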
We no longer archive private streams when they become vacant, since
users may still have permission to subscribe to them. And streams can
in any case be archived manually if needed.
Fixes #33689.
Fixes #33567.
We use the flag `is_modifying_message` since it's more generic than an
archived-channel-specific flag and makes clearer the condition under
which we do not want to allow archived channels.
We have not added tests for message edit since an existing test already
covers this.
This commit implements the backend of migrating the
`allow_edit_history` setting to
`message_edit_history_visibility_policy`.
This allows organizations to have an intermediate setting that shows
only the "Moves" history of messages.
We still pass `realm_allow_edit_history` in the `/register` response
for older clients, with its value derived from
`realm_message_edit_history_visibility_policy`: `False` if the policy
is "None", and `True` if it is "Moves only" or "All".
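A sketch of that compatibility mapping; the enum name and values are
assumptions:

```python
from enum import Enum

class MessageEditHistoryVisibilityPolicy(Enum):  # assumed name/values
    NONE = 1
    MOVES_ONLY = 2
    ALL = 3

def legacy_allow_edit_history(policy: MessageEditHistoryVisibilityPolicy) -> bool:
    # Older clients only understand a boolean: hide edit history
    # entirely, or show it; "Moves only" maps to True.
    return policy is not MessageEditHistoryVisibilityPolicy.NONE
```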
Fixes part of #21398.
Co-authored-by: Shlok Patel <shlokcpatel2001@gmail.com>
Co-authored-by: Tim Abbott <tabbott@zulip.com>
This parameter is no longer restricted to realm administrators. Any
user can get the streams they have metadata access to by setting this
parameter to true.
In the public_stream_user_ids function, which is used to get the users
who can access public streams, there is no need to fetch the members of
can_add_subscribers_group, since we eventually exclude guests from them
and have already included all non-guest users of the realm.
We were not actually using anything but the IDs here, so it was a
bunch of wasted work to fetch these.
This essentially reverts f48e87cd3c. At
the time, something like that was required, because we needed to check
if the channel was deactivated before exposing it to the API, but more
recent reworking of the system to change the setting if the channel is
deactivated, rather than masking it in fetch_initial_state_data, means
we can do this cleanup.
We keep around the old `include_all_active` parameter for backwards
compatibility.
The web frontend doesn't use this API, and thus no changes were needed
there.
When sending invites and password reminders, ensure that the email
address is not on the AWS SES suppression list. Addresses often end up
on there by mistake and are never removed; automatically removing them,
if necessary, before we reach out to attempt a signup reduces support
overhead and the perception of buggy behaviour.
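A sketch of the check using the SESv2 API; this shows the general
idea, not necessarily Zulip's exact code:

```python
import boto3

ses = boto3.client("sesv2")

def ensure_not_suppressed(email_address: str) -> None:
    try:
        ses.get_suppressed_destination(EmailAddress=email_address)
    except ses.exceptions.NotFoundException:
        return  # Not on the suppression list; nothing to do.
    # The address was suppressed (often mistakenly, e.g. a stale
    # bounce); remove it so our invite/reminder can be delivered.
    ses.delete_suppressed_destination(EmailAddress=email_address)
```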
This adds a Python copy of the web app's `hash_util.parse_narrow`. It
will mainly be used in the import process later on. So, although it has
the same purpose as its frontend twin, there are differences:
- This doesn't convert a user-id-slug into a list of user emails. It
will instead parse it into a list of user IDs, as that is the preferred
form for those kinds of operators (see the sketch after this list). It
will also help in later operations to remap the object IDs during
import.
- To the same effect as the first point, operands can be an actual list
or int, instead of a string encoding of one (e.g., "12,14,15" or "93").
- It has fewer validations than its frontend counterpart: it doesn't
look up the parsed object IDs for validity, partly because of its main
use case in import.
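A hypothetical sketch of the slug parsing described in the first
point; the function name and the slug format handled here are
assumptions:

```python
def parse_user_id_slug(slug: str) -> list[int]:
    # "12,14,15-group" -> [12, 14, 15]; "93-user" -> [93]
    id_part = slug.split("-", 1)[0]
    return [int(user_id) for user_id in id_part.split(",")]

assert parse_user_id_slug("12,14,15-group") == [12, 14, 15]
```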
Previously, `NarrowTerm` was only used in our event-handling paths
and, to a lesser extent, in `detect_narrowed_window` in `view/home.py`,
neither of which yet supports or handles the `negated` flag.
Since we're planning to parse a narrow URL into narrow terms (like
`hash_util.ts` does in the web app), we're going to need a `NarrowTerm`
dataclass with all three fields.
This commit adds the `negated` field to `NarrowTerm`, along with a
`NeverNegatedNarrowTerm` subclass of `NarrowTerm` that always has the
`negated` flag as `False`, so it is functionally the same as the
current `NarrowTerm`.
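A sketch of the shape this describes; the exact definitions in the
codebase may differ:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NarrowTerm:
    operator: str
    operand: str
    negated: bool = False

@dataclass(frozen=True)
class NeverNegatedNarrowTerm(NarrowTerm):
    # Functionally the old NarrowTerm: negation is never allowed.
    def __post_init__(self) -> None:
        assert not self.negated
```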
To get content-access streams for mention.py, we now use
get_content_access_streams. This commit also includes a number of other
refactors around filter_stream_authorization, mainly making that
function used only for adding subscribers and renaming it accordingly.