Email clients tend to sort emails by the "Date" header, which is not
when the email was received -- emails can be arbitrarily delayed
during relaying. Messages without a Date header (as all Zulip
messages previously were) have one inserted by the first mailserver they
encounter. As there are now multiple email-sending queues, we would
like the view of the database, as presented by the emails that are
sent out, to be consistent based on the Date header, which may not be
the same as when the client gets the email in their inbox.
Insert a Date header of when the Zulip system inserted the data into
the local queue, as that encodes when the full information was pulled
from the database. This also opens the door to multiple workers
servicing the email_senders queue, to limit backlogging during large
notifications, without having to worry about inconsistent delivery
order between those workers.
Messages which are sent synchronously via `send_email()` get a Date
header of when we attempt to send the message; this is, in practice,
no different from Django's default behaviour of doing so, but makes
the behaviour slightly more consistent.
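For illustration, a minimal sketch of stamping the Date header at
queue time, assuming a queueing helper along these lines (the function
name is illustrative, not Zulip's actual internals):
```
from email.utils import formatdate

from django.core.mail import EmailMultiAlternatives


def queue_email(message: EmailMultiAlternatives) -> None:
    # Stamp the Date header now, when the content was pulled from the
    # database, rather than letting the first mailserver add one at
    # delivery time; this keeps ordering consistent across workers.
    message.extra_headers.setdefault("Date", formatdate(usegmt=True))
    # ... serialize and enqueue for the email_senders worker ...
```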
An even better way than the current JSON error message recommending the
--registration-transfer option is to return an appropriate error code
and have that get picked up by the register_server command.
The register_server command can then display a more comprehensive,
better-formatted error message with proper whitespace and a pointer to
the documentation.
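As a hedged sketch, the register_server side might look roughly like
this; the error code string and response fields are assumptions, not
the actual API contract:
```
def handle_registration_error(response_json: dict) -> None:
    # "HOSTNAME_ALREADY_IN_USE" is a hypothetical error code name.
    if response_json.get("code") == "HOSTNAME_ALREADY_IN_USE":
        raise SystemExit(
            "This hostname already has a push notification registration.\n"
            "If you are moving your Zulip installation to a new server\n"
            "while keeping its domain, rerun this command with\n"
            "--registration-transfer to transfer the existing registration.\n"
            "See the documentation for details."
        )
```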
This is the final naming that we want, compared to the naming we merged
in #32399.
Includes renaming the API endpoints, but that should be fine as the
original PR was just merged and this isn't deployed anywhere.
Users most likely to run into this will be the ones who are moving to a
new server, but keeping their original domain and thus just need to
transfer the registration.
If the server controls the registration's hostname, it can reclaim its
registration credentials. This is useful, because self-hosted admins
frequently lose the credentials when moving their Zulip server to a
different machine / deployment method.
The flow is the following:
1. The host sends a POST request to
/api/v1/remotes/server/register/takeover.
2. The bouncer responds with a signed token.
3. The host prepares to serve this token at /api/v1/zulip-services/verify and
sends a POST to the /remotes/server/register/verify_challenge endpoint of
the bouncer.
4. Upon receiving the POST request, the bouncer GETs
https://{hostname}/api/v1/zulip-services/verify, verifies the secret and
responds to the original POST with the registration credentials.
5. The host can now save these credentials to its zulip-secrets.conf file
and thus regain its push notification registration.
Includes a global rate limit on the usage of the /verify_challenge
endpoint, as it causes us to make outgoing requests.
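A client-side sketch of the flow above; the endpoint paths follow the
steps listed, but the request/response field names and the
serve_verification_token helper are assumptions:
```
import requests

BOUNCER = "https://push.zulipchat.com"


def reclaim_registration(hostname: str) -> dict:
    # Steps 1-2: ask the bouncer for a signed token for this hostname.
    response = requests.post(
        f"{BOUNCER}/api/v1/remotes/server/register/takeover",
        data={"hostname": hostname},
    )
    response.raise_for_status()
    token = response.json()["verification_secret"]

    # Step 3: serve the token at
    # https://{hostname}/api/v1/zulip-services/verify (hypothetical helper).
    serve_verification_token(token)

    # Steps 3-5: the bouncer GETs the verify URL, checks the token, and
    # returns the registration credentials in response to this POST.
    response = requests.post(
        f"{BOUNCER}/remotes/server/register/verify_challenge",
        data={"hostname": hostname},
    )
    response.raise_for_status()
    return response.json()  # credentials to write to zulip-secrets.conf
```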
If we move a paid plan from a remote server to a remote realm, and
the plan has automated license management, then we create an updated
license ledger entry, based on the remote realm's billing data, when
we move the plan, so that we have an accurate user count for licenses
when the plan is next invoiced.
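Roughly, such a ledger entry might look like the following, assuming
Zulip's billing models; the exact fields and the user count variable
are assumptions:
```
from django.utils import timezone

# remote_realm_user_count: the realm's current billable user count.
LicenseLedger.objects.create(
    plan=moved_plan,
    is_renewal=False,
    event_time=timezone.now(),
    licenses=remote_realm_user_count,
    licenses_at_next_renewal=remote_realm_user_count,
)
```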
In commit ea863bab5b, handle_customer_migration_from_server_to_realm
was updated to only move a remote server's plan in the case that there
is only one remote realm on the server.
This commit adds `durable=True` to the outermost db transactions
created in the following:
* confirm_email_change
* handle_upload_pre_finish_hook
* deliver_scheduled_emails
* restore_data_from_archive
* do_change_realm_subdomain
* do_create_realm
* do_deactivate_realm
* do_reactivate_realm
* do_delete_user
* do_delete_user_preserving_messages
* create_stripe_customer
* process_initial_upgrade
* do_update_plan
* request_sponsorship
* upload_message_attachment
* register_remote_server
* do_soft_deactivate_users
* maybe_send_batched_emails
It helps to avoid creating unintended savepoints in the future.
This is part of our plan to explicitly mark all the
transaction.atomic calls with either 'savepoint=False' or
'durable=True' as required.
* 'savepoint=True' is used in special cases.
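For illustration, the convention looks roughly like this (the function
names are placeholders, not the ones listed above):
```
from django.db import transaction


@transaction.atomic(savepoint=False)  # joins the caller's transaction; no savepoint
def update_related_rows() -> None:
    ...


@transaction.atomic(durable=True)  # outermost; errors if already inside a transaction
def do_create_example_object() -> None:
    # Everything here commits or rolls back as a single unit.
    update_related_rows()
```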
This commit adds 'durable=True' to the outermost transaction
in 'remote_server_post_analytics'.
It also adds 'savepoint=False' to the inner transaction.atomic
decorator to avoid creating a savepoint.
This is part of our plan to explicitly mark all the
transaction.atomic decorators with either 'savepoint=False' or
'durable=True' as required.
* 'savepoint=True' is used in special cases.
This commit adds 'durable=True' to the outermost transactions
of the following functions:
* do_create_multiuse_invite_link
* do_revoke_user_invite
* do_revoke_multi_use_invite
* sync_ldap_user_data
* do_reactivate_remote_server
* do_deactivate_remote_server
* bulk_handle_digest_email
* handle_customer_migration_from_server_to_realm
* add_reaction
* remove_reaction
* deactivate_user_group
It helps to avoid creating unintended savepoints in the future.
This is part of our plan to explicitly mark all the
transaction.atomic decorators with either 'savepoint=False' or
'durable=True' as required.
* 'savepoint=True' is used in special cases.
Because Django does not support returning the inserted row-ids with a
`bulk_create(..., ignore_conflicts=True)`, we previously counted the
total rows before and after insertion. This is rather inefficient,
and can lead to database contention when many servers are reporting
statistics at once.
Switch to reaching into the private `_insert` method, which does
support what we need. While relying on a private method is poor form,
it is mildly preferable to attempting to re-implement all of its
complexities.
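For context, the pattern being replaced looked roughly like this
(model name left generic); the replacement reaches into
`QuerySet._insert()`, whose signature is a private, version-dependent
detail, so it is not reproduced here:
```
def bulk_create_counting_new_rows(model, rows):
    # Two extra COUNT(*) queries bracket the insert; under concurrent
    # reporting from many servers, these counts contend on the table.
    before = model.objects.count()
    model.objects.bulk_create(rows, ignore_conflicts=True)
    return model.objects.count() - before
```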
migrated views:
- `zilencer.views.register_remote_server`
- `zilencer.views.register_remote_push_device`
- `zilencer.views.unregister_remote_push_device`
- `zilencer.views.unregister_all_remote_push_devices`
- `zilencer.views.remote_server_notify_push`
To make sure the previous checks for `remote_server_notify_push` match
the old behavior, the `RemoteServerNotificationPayload` is defined.
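A sketch of what such a payload model might look like; the base class
and field set here are assumptions rather than the actual definition:
```
from pydantic import BaseModel


class RemoteServerNotificationPayload(BaseModel):
    user_id: int | None = None
    user_uuid: str | None = None
    realm_uuid: str | None = None
    gcm_payload: dict = {}
    apns_payload: dict = {}
    gcm_options: dict = {}
```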
The IntegrityError shows up in the database logs, which looks
unnecessarily concerning. Use `ON CONFLICT DO NOTHING` to mark this as
expected, especially since the return value is never used.
As explained in the comment, when we're moving the server plan to the
remote realm's Customer object, the realm Customer may not have
stripe_customer_id set, and therefore that value needs to be moved
from the server Customer.
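A sketch of that hand-off, with the obvious attribute names (treat it
as illustrative rather than the actual implementation):
```
if remote_realm_customer.stripe_customer_id is None:
    # Carry the Stripe linkage over from the server's Customer so that
    # invoicing keeps working after the plan moves to the realm's Customer.
    remote_realm_customer.stripe_customer_id = server_customer.stripe_customer_id
    server_customer.stripe_customer_id = None
    server_customer.save(update_fields=["stripe_customer_id"])
    remote_realm_customer.save(update_fields=["stripe_customer_id"])
```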
When a server doesn't submit remote realm info that was
previously submitted, we mark that realm as locally deleted.
If such a realm has a paid plan attached to it, we should investigate.
This commit adds logic to send an email to sales@zulip.com for
investigation.
We no longer want to migrate Legacy plans from servers to realms,
since Legacy plans have not really been a thing in the original sense
since February 15th.
Now they're just a tool for giving users temporary extensions of
access to the push notification service when needed, and as such, it
makes no sense to migrate them like that.
The remaining code in this function is for migrating (any) plan from the
server object to the realm object, if the server has just a single
realm.
The logic in the case where there's only one realm and the function
tries to migrate the server's plan to it had two main unhandled edge
cases that would throw exceptions:
1.
```
remote_realm = RemoteRealm.objects.get(
uuid=realm_uuids[0], plan_type=RemoteRealm.PLAN_TYPE_SELF_MANAGED
)
```
This could throw an exception if the RemoteRealm exists but has an
active plan (e.g. a Legacy plan). Then there'd be no object matching
the plan_type in the query, raising RemoteRealm.DoesNotExist.
2. If the RemoteRealm had e.g. a Legacy plan in the past that's now
expired, then it'd have a Customer object, meaning that the attempt
to move the server's customer to the realm:
`server_plan.customer = remote_realm_customer`
would trigger an IntegrityError, since a RemoteRealm can't have two
Customer objects.
In simple cases the situation in (2) can still be easily migrated, by
moving the plan from the server's customer to the realm's customer.
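In code terms, that simple migration amounts to something like the
following (variable names follow the quoted line above):
```
# Re-point the server's plan at the realm's existing Customer, instead
# of trying to move the server's Customer onto the realm.
server_plan.customer = remote_realm_customer
server_plan.save(update_fields=["customer"])
```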
Just like deactivated realms should be excluded, so should locally
deleted realms.
In particular, failure to exclude locally deleted realms breaks
handle_customer_migration_from_server_to_realms.
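The kind of filter implied, as a sketch; the field names
realm_deactivated and realm_locally_deleted are assumptions about the
RemoteRealm model:
```
remote_realms = RemoteRealm.objects.filter(
    server=server,
    realm_deactivated=False,
    realm_locally_deleted=False,
)
```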