The context property was previously used in the macro
`git-webhook-url-with-branches`, which has been removed as it is no
longer in use.
After that, the context property was used in only two integration docs,
both of which used it in a single location and directly used their own
names as the channel name, making the context property unnecessary.
Removed the `bot_creation_policy` property, as the permission to create
bot users in the organization is now controlled by two new realm
settings, `can_create_bots_group` and `can_create_write_only_bots_group`.
Added a `can_create_bots_group` setting, which controls who can create
any type of bot in the organization.
Added a `can_create_write_only_bots_group` setting, which controls who
can create incoming webhooks in the organization, in addition to those
who are in `can_create_bots_group`.
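
As a rough sketch, the two settings might compose like this when
checking bot-creation permission (`user_in_group` is a hypothetical
helper, not the actual implementation):

    from zerver.models import UserProfile

    def can_create_bot(user: UserProfile, bot_type: int) -> bool:
        realm = user.realm
        # Members of can_create_bots_group may create any type of bot.
        if user_in_group(user, realm.can_create_bots_group):  # hypothetical helper
            return True
        # Members of can_create_write_only_bots_group may additionally
        # create incoming webhook (write-only) bots.
        if bot_type == UserProfile.INCOMING_WEBHOOK_BOT:
            return user_in_group(user, realm.can_create_write_only_bots_group)
        return False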
This commit addresses the problem of not getting the LaTeX Markdown
when selecting and quoting a message that contains normal text as well
as KaTeX HTML elements.
It works by grabbing the parent of all the KaTeX elements, both display
(math block) and inline expressions, and iterating over each immediate
child to convert the elements into Markdown based on certain conditions.
Support has also been added to convert inline expressions to an
approximate Markdown representation.
To facilitate selection of inline math expressions along with text
nodes (intermediate pieces of text sandwiched between two KaTeX spans),
we transform the paste_html to have spans instead of text nodes, so
that they can be processed by Turndown, since its filter function only
iterates over Elements, not TEXT_NODEs.
The new tests have been added to katex_test_cases.json to avoid
cluttering the node tests in copy_and_paste.test.cjs; these cases are
looped over in the node tests.
Fixes #31608.
Made maybe_negated a required parameter in
check_not_both_channel_and_dm_narrow, so that it doesn't raise an error
when -is:dm is combined with channels:public, or when -channels:public
is combined with is:dm.
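
A minimal sketch of the intended behavior (the parameter and error
names are approximations, not the real signature):

    class BadNarrowOperatorError(Exception):
        # Stand-in for Zulip's actual narrow error class.
        pass

    def check_not_both_channel_and_dm_narrow(
        *,
        has_channel_operand: bool,
        has_dm_operand: bool,
        maybe_negated: bool,  # now required, with no default
    ) -> None:
        if maybe_negated:
            # A negated operator doesn't narrow to both at once, so
            # `-is:dm channels:public` and `-channels:public is:dm` are fine.
            return
        if has_channel_operand and has_dm_operand:
            raise BadNarrowOperatorError("channel and DM narrows are incompatible")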
Fixes #33033.
This commit adds hardening such that if the invariant "no UserMessage
row corresponding to a message exists if the user loses access to the
message" is violated due to some bug, users can't access the attachment
if they are not subscribed.
This commit updates the 'get_mentions_for_message_updates' function to
use the generic 'event_recipient_ids_for_action_on_messages' function
to determine which users have access to the message, and intersects
that set with the mentioned users to filter out users who can no longer
access the message.
This adds hardening such that if the invariant "no UserMessage row
corresponding to a message exists if the user loses access to the
message" is violated due to some bug, the user impact is minimal.
This commit updates the 'do_update_embedded_data' function to use
the generic 'event_recipient_ids_for_action_on_messages' function
while deciding the event's recipients.
This adds hardening such that if the invariant "no UserMessage row
corresponding to a message exists if the user loses access to the
message" is violated due to some bug, the user impact is minimal.
Users most likely to run into this will be the ones who are moving to a
new server, but keeping their original domain and thus just need to
transfer the registration.
If the server controls the registration's hostname, it can reclaim its
registration credentials. This is useful, because self-hosted admins
frequently lose the credentials when moving their Zulip server to a
different machine / deployment method.
The flow is as follows (sketched in code below):
1. The host sends a POST request to
/api/v1/remotes/server/register/takeover.
2. The bouncer responds with a signed token.
3. The host prepares to serve this token at /api/v1/zulip-services/verify
and sends a POST to the /remotes/server/register/verify_challenge
endpoint of the bouncer.
4. Upon receiving the POST request, the bouncer GETs
https://{hostname}/api/v1/zulip-services/verify, verifies the secret,
and responds to the original POST with the registration credentials.
5. The host can now save these credentials to its zulip-secrets.conf
file and thus regains its push notifications registration.
Includes a global rate limit on the usage of the /verify_challenge
endpoint, as it causes us to make outgoing requests.
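
Illustrative host-side sketch of the flow (endpoint paths are from the
steps above; the bouncer URL and payload/response field names are
assumptions):

    import requests

    BOUNCER = "https://push.zulipchat.com"  # assumed bouncer URL
    HOSTNAME = "zulip.example.com"

    # Steps 1-2: request a signed challenge token from the bouncer.
    token = requests.post(
        f"{BOUNCER}/api/v1/remotes/server/register/takeover",
        data={"hostname": HOSTNAME},
    ).json()["token"]  # response field name is an assumption

    # Step 3: arrange for this server to return `token` at
    # https://{HOSTNAME}/api/v1/zulip-services/verify, then notify the bouncer.
    credentials = requests.post(
        f"{BOUNCER}/remotes/server/register/verify_challenge",
        data={"hostname": HOSTNAME},  # payload shape is an assumption
    ).json()

    # Step 5: write the returned credentials into zulip-secrets.conf.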
Fixes: #32934
Context: fetching all users who are members (directly or via
subgroups) of groups mentioned in one message. This reduces O(n)
queries, where n is the number of mentioned groups, to a single query.
Extend "get_recursive_subgroups_for_groups" functionality to
"get_root_id_annotated_recursive_subgroups_for_groups"
which is the same but keeps track of each group root_id and
annotates it to each group.
Then in init_user_group_data(), we only fetch
each group's root_id along with
active direct members.
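
Conceptually, the single annotated query lets us bucket every recursive
subgroup's members under the originally mentioned (root) group in plain
Python (attribute names approximate):

    from collections import defaultdict

    # One query: all recursive subgroups, each annotated with the id of
    # the mentioned group (root) it descends from.
    subgroups = get_root_id_annotated_recursive_subgroups_for_groups(
        mentioned_group_ids, realm
    )

    members_by_root: dict[int, set[int]] = defaultdict(set)
    for group in subgroups:
        # `direct_member_ids` is an assumed prefetched attribute.
        members_by_root[group.root_id].update(group.direct_member_ids)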
If this is called on a user without is_mirror_dummy=True, that seems
certain to be a bug. Therefore, an assert is preferable in order to
catch this, rather than returning early as a noop, as some other
functions such as do_deactivate_user do.
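
That is, the guard is simply (function name hypothetical):

    def do_something_to_mirror_dummy(user_profile: UserProfile) -> None:
        # Calling this on a non-mirror-dummy user is certainly a bug;
        # crash loudly instead of silently returning like do_deactivate_user.
        assert user_profile.is_mirror_dummy
        ...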
- Add a step to update the bot's API key if using a generic bot
for bidirectional bridging.
- Clarify setup options for how Slack channels are mapped.
- Edit for clarity and succinctness.
This commit adds support for displaying the (translated)
`Message.EMPTY_TOPIC_FALLBACK_NAME` value in the missed-message email
body and subject for topics whose actual value is the empty string.
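
For example, the topic shown in the email might be computed along these
lines (helper name hypothetical; `EMPTY_TOPIC_FALLBACK_NAME` is from
this commit):

    from django.utils.translation import gettext as _

    def email_topic_display(topic_name: str) -> str:
        # Topics stored as "" are shown using the translated fallback name.
        if topic_name == "":
            return _(Message.EMPTY_TOPIC_FALLBACK_NAME)
        return topic_name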
Fixes part of #32996.
Previously, 'check_update_message' allowed moving messages to an empty
topic even with `mandatory_topic=true`.
This commit fixes the bug; we now raise an error in that case.
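
The fix amounts to a check along these lines in the edit path (setting
and error names approximate):

    from django.utils.translation import gettext as _

    # Inside check_update_message, when a topic edit is requested:
    if realm.mandatory_topics and topic_name.strip() == "":
        raise JsonableError(_("Topics are required in this organization"))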
Co-authored-by: Prakhar Pratyush <prakhar@zulip.com>
We've added a comment highlighting that the function does not check
whether a user has access to the channel. Adding `accessible` to the
function name further emphasizes that.
Rename `can_subscribe_others_to_all_streams` to
`can_subscribe_others_to_all_accessible_streams` so it's clear that we
are not attempting to check basic access in this function.
value can be None when the property is `jitsi_server_url`,
`message_content_edit_limit_seconds`,
`message_content_delete_limit_seconds`,
`move_messages_between_streams_limit_seconds`, or
`move_messages_within_stream_limit_seconds`.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes #32369.
Migrate the stream delete event to include only stream ids, in the form
"stream_ids": [1,...], because clients only need the ids, while
continuing to send ids in the form "streams": [{stream_id: 1},...] for
compatibility with all clients other than web.
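
The resulting event payload, roughly:

    event = dict(
        type="stream",
        op="delete",
        # New, compact format; all the web client needs:
        stream_ids=[stream.id for stream in streams],
        # Legacy format, kept for compatibility with other clients:
        streams=[dict(stream_id=stream.id) for stream in streams],
    )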
This commit updates embedded bots to mark messages they have processed
as read. Since the service bots have their own `UserMessage` rows, this
change enables us to track whether the bot has in fact processed the
message by adding the `read` flag to their `UserMessage`.
Fixes #28869.
This commit updates outgoing bots to mark messages they process as read.
Since the service bots have their own `UserMessage` rows, this change
enables us to track whether the bot has in fact processed the message by
adding the `read` flag to their `UserMessage`.
Previously, service bots didn't get UserMessage rows for new messages,
to optimize performance. This commit adds UserMessage rows for service
bots so that they behave more similarly to generic bots.
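
A sketch of the flag update once a bot has processed a message (Django
ORM; the exact call site is approximate):

    from django.db.models import F

    UserMessage.objects.filter(
        user_profile_id=bot_profile.id,
        message_id=message.id,
    ).update(flags=F("flags").bitor(UserMessage.flags.read))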
The current `get_migration_by_app` has a rather naive approach to
compiling the migration status of a realm, which has led to issues like
#32826. Specifically, those flaws are:
- it does not report the complete state of the migration status of the
exporting servers, only the applied migrations.
- it shows both the replaced and the squashed migrations. This would be
a problem if we decide to clean up old migration files we've squashed
(replaced) and import a slightly older realm with those still on disk.
`check_migration_status` would complain of incompatibility even though
those migration files don't matter (they are replaced, after all).
- it does not clean up ancient/stale applied migrations (for reference,
see how `check-database-compatibility` cleans those)
This commit attempts to write a better `migration_status.json` by
parsing the output of `showmigrations` instead.
This is because Django's `showmigrations` has a lot more logic and
validation baked into it than previously thought. The ones we care
about are:
- it validates app names
- it doesn't list replaced migrations, only the squashed one
- it takes into account migrations on disk (`MigrationsLoader`) vs.
applied migrations (`MigrationsRecorder`)
This resolves the first two points highlighted above.
This commit adds `parse_migration_status`, which takes the string
output of `showmigrations` and parses it into key-value pairs mapping
each installed app to a list of its migration statuses.
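
A sketch of the parsing, assuming `showmigrations`'s plain-text format
(unindented app labels, indented "[X]"/"[ ]" migration lines):

    def parse_migration_status(output: str) -> dict[str, list[str]]:
        migration_status: dict[str, list[str]] = {}
        current_app = None
        for line in output.splitlines():
            if not line.strip():
                continue
            if not line.startswith(" "):
                # Unindented lines are app labels, e.g. "zerver".
                current_app = line.strip()
                migration_status[current_app] = []
            elif current_app is not None:
                # Indented lines are entries such as "[X] 0001_initial"
                # or "(no migrations)".
                migration_status[current_app].append(line.strip())
        return migration_status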
This is a prep commit to rework the check-migrations function of
import/export, which will parse the output of `showmigrations` to write
the `migration_status.json` file.
This consolidates the list of stale migrations into
`lib/migration_status.py` as `STALE_MIGRATIONS`.
This is prep work to make the migration status tool at
`migration_status.py` able to clean its output of these migrations too.
Currently, if for whatever reason one decides to change how
`migration_status.json` is written, the check-migrations tests in
`test_import_export.py` will happily just use the old and potentially
stale migration status test fixtures in
`fixtures/applied_migrations_fixtures` against other stale fixtures and
run the check-migrations tests in a bubble.
This adds an assertion in `verify_migration_status_json` that makes
sure all the migration status fixtures we use in the tests resemble the
actual `migration_status.json` file our export tool will write.
In `get_migration_status`, we clean up the printed output of any ANSI
codes used to format the output. Currently, the regex only cleans up
the bold ANSI escape code (\x1b[1m) and the style reset code (\x1b[0m),
so it won't be able to clean up other basic ANSI escape codes such as
"\x1b[31;1m", which is used to format `showmigrations` output for apps
with no migrations, e.g., "\x1b[31;1m (no migrations)".
This commit updates the regex to catch a wider range of basic ANSI
codes.
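
One way to broaden the pattern is to match any SGR escape sequence, not
just bold and reset:

    import re

    # Matches CSI ... m sequences: \x1b[0m, \x1b[1m, \x1b[31;1m, etc.
    ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

    def strip_ansi(text: str) -> str:
        return ANSI_SGR.sub("", text)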
The `get_migration_status` function calls `connections.close_all()`
when it's done, and it was previously only called when we need to
rebuild the dev or test database and when running the
`get_migration_status` command.
This commit moves the `connections.close_all()` call out of the
function and into `test_fixtures.py` directly, making sure it will only
be called when we are rebuilding the dev/test database. This is prep
work for refactoring the check-migrations function of import/export
later on, which plans to use `get_migration_status`.
This moves `get_migration_status` to its own file,
zerver/lib/migration_status.py. This is prep work for refactoring the
check-migrations function of import/export later on.
Some of the imports are moved into `get_migration_status` because we're
planning to share this file with `check-database-compatibility`, which
is also called when one runs `production-upgrade`, so we want to avoid
doing file-wide imports of certain types of modules because they would
fail under that scenario.
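
The pattern looks roughly like this:

    import io

    def get_migration_status(**options: object) -> str:
        # Imported inside the function rather than at module scope: this
        # module is also loaded by check-database-compatibility during
        # `production-upgrade`, where file-wide Django imports would fail.
        from django.core.management import call_command

        out = io.StringIO()
        call_command("showmigrations", stdout=out, **options)
        return out.getvalue()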
In `test_fixtures.py`, `get_migration_status` is imported within
`Database.what_to_do_with_migrations` so that it is called after
`cov.start()` in `test-backend`. This is to avoid weird interactions
with coverage; see more details in #33063.
Fixes #33063.
1. Fetching from the `/users.list` endpoint is supposed to use
pagination. Slack will return at most 1000 results in a single request.
This means that our Slack import system hasn't worked properly for
workspaces with more than 1000 users: users after the first 1000 would
be considered by our tool as mirror dummies and thus created with
is_active=False, is_mirror_dummy=True.
Ref https://api.slack.com/methods/users.list
2. Workspaces with a lot of users, which therefore require paginated
requests to fetch them all, might also get us to run into Slack's rate
limits, since we'll be making repeated requests to the endpoint.
Therefore, the API fetch also needs to handle rate limiting errors
correctly.
Per https://api.slack.com/apis/rate-limits#headers, we can just read
the retry-after header from the response and wait the indicated number
of seconds before repeating the request, as sketched below. This is an
easy approach to implement, so that's what we go with here.
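
A sketch of the paginated, rate-limit-aware fetch:

    import time
    from typing import Any

    import requests

    def fetch_all_slack_users(token: str) -> list[dict[str, Any]]:
        members: list[dict[str, Any]] = []
        cursor = ""
        while True:
            response = requests.get(
                "https://slack.com/api/users.list",
                headers={"Authorization": f"Bearer {token}"},
                params={"limit": 200, "cursor": cursor},
            )
            if response.status_code == 429:
                # Rate limited: wait however long Slack tells us to, then retry.
                time.sleep(int(response.headers["Retry-After"]))
                continue
            data = response.json()
            members.extend(data["members"])
            cursor = data.get("response_metadata", {}).get("next_cursor", "")
            if not cursor:
                return members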
Fixes #32199.
We only need a login button, since that will take users to
"/accounts/go" if we are on a non-realm-specific URL. "/accounts/go"
already has a link to the "Find accounts" page.