Using Postfix to handle the incoming email gateway complicates things
a great deal:
- It cannot verify that incoming email addresses exist in Zulip before
accepting them; it thus accepts mail at the `RCPT TO` stage that it
cannot handle, and must then reject it after `DATA`.
- It is built to handle both incoming and outgoing email, which
results in subtle errors (1c17583ad5, 79931051bd, a53092687e,
#18600).
- Rate-limiting happens much too late to avoid denial of
service (#12501).
- Mis-configurations of the HTTP endpoint can break incoming
mail (#18105).
Provide a replacement SMTP server which accepts incoming email on port
25, verifies that Zulip can accept the address and that no rate limits
are being exceeded, and then adds the message directly to the relevant
queue.
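A minimal sketch of that shape, using the `aiosmtpd` library; the
Zulip-side helpers here (`check_address_exists`, `check_rate_limit`,
`enqueue_incoming_email`) are illustrative stand-ins, not the actual
implementation:
```python
from aiosmtpd.controller import Controller

def check_address_exists(address: str) -> bool:
    ...  # stand-in: would validate against Zulip's gateway addresses

def check_rate_limit(address: str) -> bool:
    ...  # stand-in: would consult the channel-address rate limiter

def enqueue_incoming_email(rcpt_to: str, data: bytes) -> None:
    ...  # stand-in: would add the message to the relevant queue

class IncomingEmailHandler:
    async def handle_RCPT(self, server, session, envelope, address, rcpt_options):
        # Unlike Postfix, reject unknown or rate-limited addresses
        # at RCPT TO, before DATA is ever transmitted.
        if not check_address_exists(address):
            return "550 5.1.1 Bad destination mailbox address"
        if not check_rate_limit(address):
            return "450 4.2.1 Rate limit exceeded; try again later"
        envelope.rcpt_tos.append(address)
        return "250 OK"

    async def handle_DATA(self, server, session, envelope):
        for rcpt_to in envelope.rcpt_tos:
            enqueue_incoming_email(rcpt_to, envelope.content)
        return "250 Message accepted for delivery"

controller = Controller(IncomingEmailHandler(), hostname="0.0.0.0", port=25)
controller.start()
```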
Removes an incorrect comment which implied that missed-message
addresses were only usable once. We apply rate-limiting only to
channel email addresses, since missed-message addresses are unlikely
to be placed into automated systems, as channel email addresses are.
Also simplifies #7814 somewhat.
`deliver_scheduled_emails` tries to deliver the email synchronously,
and if it fails, it retries after 10 seconds. Since it does not track
retries, and always tries the earliest-scheduled-but-due message
first, the worker will not make forward progress if there is a
persistent failure with that message, and will retry indefinitely.
This can result in excessive network or email delivery charges from
the remote SMTP server.
Switch to delivering emails via a new queue worker. The
`deliver_scheduled_emails` job now serves only to pull deferred jobs
out of the table once they are due, insert them into RabbitMQ, and
then delete them. This limits the potential for head-of-queue
failures to failures inserting into RabbitMQ, which is more reasonable
than failures speaking to a complex external system we do not control.
Retries and any connections to the SMTP server are left to the
RabbitMQ consumer.
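A sketch of the resulting division of labor (`ScheduledEmail` is the
existing model; `to_dict` and the queue name are illustrative):
```python
from django.db import transaction
from django.utils.timezone import now

def move_due_emails_to_rabbitmq() -> None:
    while True:
        with transaction.atomic():
            email = (
                ScheduledEmail.objects.filter(scheduled_timestamp__lte=now())
                .select_for_update()
                .order_by("scheduled_timestamp")
                .first()
            )
            if email is None:
                break
            # The only head-of-queue failure mode left is this insert;
            # SMTP connections and retries belong to the consumer.
            queue_json_publish("deferred_email_senders", email.to_dict())
            email.delete()
```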
We build a new RabbitMQ queue, rather than use the existing
`email_senders` queue, because that queue is expected to be reasonably
low-latency, for things like missed message notifications. The
`send_future_email` codepath which inserts into ScheduledEmails is
also (ab)used for digest emails, which are extremely bursty in their
frequency -- and a large burst could significantly delay emails behind
it in the queue.
The new queue is explicitly only for messages which were not initiated
by user actions (e.g., invitation reminders, digests, new account
follow-ups) which are thus not latency-sensitive.
Fixes: #32463.
This commit adds a `sender_id` parameter to the
`GET /streams/{stream_id}/email_address` endpoint to specify the
ID of a user or bot which should appear as the sender when messages
are sent to a channel using the channel email address.
Previously, the Email Gateway Bot was always the sender.
Fixes part of #31566.
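An illustrative request with the new parameter (URL, IDs, and
credentials are placeholders):
```python
import requests

response = requests.get(
    "https://example.zulipchat.com/api/v1/streams/42/email_address",
    params={"sender_id": 25},  # user or bot to show as the sender
    auth=("iago-bot@example.com", "API_KEY"),
)
print(response.json()["email"])  # the channel's email address
```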
This commit extracts the logic for creating or retrieving a channel
email token into a dedicated `get_channel_email_token` function.
This improves code clarity by decoupling token generation from
the email encoding process.
This commit renames the 'queue_json_publish' function to
'queue_json_publish_rollback_unsafe' to reflect the fact that it does
not wait for the database transaction within which it may be called
to commit, and sends the event regardless of commit or rollback.
In most cases we don't want to send events on rollback, so the
caller should be aware that calling the function directly is
rollback-unsafe.
Fixes part of #30489.
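A sketch of the hazard the new name warns about (the model and queue
name are illustrative):
```python
from django.db import transaction

with transaction.atomic():
    row = Widget.objects.create(name="example")  # illustrative model
    # Published immediately; RabbitMQ has the event from this point on.
    queue_json_publish_rollback_unsafe("widget_events", {"id": row.id})
    raise RuntimeError("oops")  # rolls the transaction back...
# ...but the event was already sent, so a consumer may look up a row
# that never committed. transaction.on_commit() is the safe pattern
# when an event must only follow a successful commit.
```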
The "invites" worker exists to do two things -- make a Confirmation
object, and send the outgoing email. Making the Confirmation object
in a background process from where the PreregistrationUser is created
temporarily leaves the PreregistrationUser in an invalid state, which
results in 500s and in the user not immediately seeing the sent
invitation. That the "invites" worker also wants to create the
Confirmation object means that "resending" an invite invalidates the
URL in the previous email, which can be confusing to the user.
Moving the Confirmation creation to the same transaction solves both
of these issues, and leaves the "invites" worker with nothing to do
but send the email; as such, we remove it entirely, and use the
existing "email_senders" worker to send the invites. The volume of
invites is small enough that this will not affect other uses of that
worker.
Fixes: #21306
Fixes: #24275
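Roughly, the new shape looks like this (simplified from the actual
code; `invite_email_event` is an illustrative helper):
```python
from django.db import transaction

with transaction.atomic():
    prereg_user = PreregistrationUser.objects.create(
        email=email, referred_by=inviter, realm=realm
    )
    # Created in the same transaction, so the PreregistrationUser is
    # never visible without its Confirmation, and resending an invite
    # can reuse this link instead of invalidating it.
    activation_url = create_confirmation_link(prereg_user, Confirmation.INVITATION)

transaction.on_commit(
    lambda: queue_json_publish(
        "email_senders", invite_email_event(prereg_user, activation_url)
    )
)
```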
This leads to significant speedups. In a test, with 100 random unique
event classes, the old code processed a batch of 100 rows (on average
66-ish unique in the batch) in 0.45 seconds. Doing this in a single
query processes the same batch in 0.0076 seconds.
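The pattern, sketched against an illustrative model (the real schema
differs): collapse the batch to its unique keys in memory, then write
everything back in a single statement.
```python
from collections import Counter

# 100 events, ~66 unique keys, one pass in memory.
counts = Counter((event["user_id"], event["kind"]) for event in batch)

# One round-trip for the whole batch, instead of one query per row
# (requires Django 4.1+ and PostgreSQL for update_conflicts).
RowModel.objects.bulk_create(
    [RowModel(user_id=u, kind=k, count=n) for (u, k), n in counts.items()],
    update_conflicts=True,
    unique_fields=["user_id", "kind"],
    update_fields=["count"],
)
```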
This commit renames the 'PRIVATE_MESSAGE' attribute of the
'NotificationTriggers' class to 'DIRECT_MESSAGE'.
A custom migration updates the existing values in the database.
It includes 'TODO/compatibility' code to support the old
notification trigger value 'private_message' in the
push notification queue during Zulip server upgrades.
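A minimal sketch of such a data migration (app label, model, and
dependency are illustrative):
```python
from django.db import migrations

def rename_trigger_value(apps, schema_editor):
    model = apps.get_model("zerver", "ScheduledMessageNotificationEmail")
    model.objects.filter(trigger="private_message").update(trigger="direct_message")

class Migration(migrations.Migration):
    dependencies = [("zerver", "0001_previous_migration")]
    operations = [migrations.RunPython(rename_trigger_value, elidable=True)]
```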
Previously, 'private_message' was one of the possible values for the
'trigger' property of the `POST /zulip-outgoing-webhook` response;
update the docs to reflect the change in the above-mentioned trigger
value.
For tests that use the dev server, like `test-api` and
`test-js-with-puppeteer`, we don't have consumers for the queues. As
they eventually time out, we get unnecessary error messages. This
adds a new flag, `disable_timeout`, to disable this behavior for the
test cases.
This makes it less likely we will accidentally fail to include a class
if the subclassing of QueueProcessingWorker changes, and lets mypy
more accurately understand the typing.
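The idea, sketched (assuming Zulip's `QueueProcessingWorker` base
class is importable):
```python
def all_worker_classes(
    base: type[QueueProcessingWorker] = QueueProcessingWorker,
) -> list[type[QueueProcessingWorker]]:
    # Walk the subclass tree recursively, so new workers are picked up
    # automatically and mypy sees a precisely-typed list.
    workers: list[type[QueueProcessingWorker]] = []
    for subclass in base.__subclasses__():
        workers.append(subclass)
        workers.extend(all_worker_classes(subclass))
    return workers
```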
The previous implementation leaked database connections, as a new
thread (and thus a new thread-local database connection) was made for
each timer execution. While these connections were relatively
lightweight in Python, they also incurred memory overhead in the
PostgreSQL server itself. The logic for managing the timer was also
unclear, and the unavoidable deadlock in the stopping logic was rather
unfortunate.
Rewrite with one explicit worker thread which handles the delayed
message sending. The RabbitMQ consumer creates the database rows, and
notifies the worker to start its 5s timeout. Because it is controlled
by a condition variable, it does not hold the lock while waiting, and
can be notified to exit.
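A condensed sketch of the pattern (not the exact Zulip code;
`flush_due_messages` stands in for the actual delivery logic):
```python
import threading

def flush_due_messages() -> None:
    ...  # stand-in: deliver rows created by the RabbitMQ consumer

class DelayedSender(threading.Thread):
    def __init__(self) -> None:
        super().__init__(daemon=True)
        self.cond = threading.Condition()
        self.has_work = False
        self.stopping = False

    def run(self) -> None:
        with self.cond:
            while not self.stopping:
                if self.has_work:
                    # Batch for 5s; wait() releases the lock, so a
                    # notify() from the consumer or stop() gets through.
                    self.cond.wait(timeout=5)
                    if not self.stopping:
                        self.has_work = False
                        flush_due_messages()
                else:
                    self.cond.wait()  # idle until notified

    def notify_work(self) -> None:  # called by the RabbitMQ consumer
        with self.cond:
            self.has_work = True
            self.cond.notify()

    def stop(self) -> None:
        with self.cond:
            self.stopping = True
            self.cond.notify()
```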
Both of these compatibility blocks can be deleted, since you can't
upgrade directly to any supported release from the versions where the
old event formats would be used.
This commit changes the invite API to accept the invitation
expiration time in minutes, since we are going to add a
custom option in further commits which will allow a user
to set the expiration time in minutes, hours, and weeks as well.
This extends the invite API endpoints to handle an extra
argument, expiration duration, which states the number of
days before the invitation link expires.
For prereg users, the expiration info is attached to the event
object to pass it to the invite queue processor, in order to
create and send the confirmation link.
In the case of multiuse invites, confirmation links are
created directly inside do_create_multiuse_invite_link().
For filtering valid user invites, the expiration info stored in the
Confirmation object is used, which is accessed from a prereg
user via reverse generic relations.
Fixes #16359.
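An illustrative request after the change (URL and credentials are
placeholders):
```python
import requests

requests.post(
    "https://example.zulipchat.com/api/v1/invites",
    auth=("admin@example.com", "API_KEY"),
    data={
        "invitee_emails": "new-user@example.com",
        "stream_ids": "[42]",
        "invite_expires_in_minutes": 3 * 24 * 60,  # three days
    },
)
```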
This information can be gleaned from the stacktrace, but making it
explicit in the stringification makes it much easier to differentiate
types of errors at a glance, particularly in Sentry.
maybe_send_batched_emails handles batches of emails from different
users at once; as it processes each user's batch, it enqueues messages
onto the `email_senders` queue. If `handle_missedmessage_emails`
raises an exception when processing a single user's email, no events
are marked as handled -- including those that were already handled and
enqueued onto `email_senders`. This results in an increasing number
of users being sent repeated emails about the same missed messages.
Catch and log any exceptions when handling an individual user's
events. This guarantees forward progress, and that notifications are
sent at-most-once, not at-least-once.
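The shape of the fix, roughly (the batch plumbing is simplified):
```python
import logging

logger = logging.getLogger(__name__)

for user_profile_id, events in events_by_user.items():
    try:
        handle_missedmessage_emails(user_profile_id, events)
    except Exception:
        # Log and move on: one user's bad batch must not cause every
        # later user's already-enqueued emails to be sent again.
        logger.exception(
            "Failed to process missed-message emails for user %s",
            user_profile_id,
        )
# All events are marked handled regardless, giving at-most-once sends.
```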
Previously, we stored up to 2 minutes' worth of email events in memory
before processing them. So, if the server were to go down we would lose
those events.
To fix this, we store the events in the database.
This is a prep change for allowing users to set a custom grace period
for email notifications, since the bug noted above would be aggravated
by longer grace periods.
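A sketch of the DB-backed storage (field list abridged and
illustrative, in the spirit of the actual model rather than its exact
schema):
```python
from django.db import models

class ScheduledMessageNotificationEmail(models.Model):
    # One row per pending email event; rows survive a server restart,
    # unlike the previous in-memory buffer.
    user_profile = models.ForeignKey("UserProfile", on_delete=models.CASCADE)
    message = models.ForeignKey("Message", on_delete=models.CASCADE)
    trigger = models.TextField()
    # The batching worker sends and deletes rows once this passes.
    scheduled_timestamp = models.DateTimeField(db_index=True)
```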
The `# nocoverage` was unnecessary apart from the compatibility code,
so add a test for that code and remove the `# nocoverage`.
The `message_id` -> `message_ids` conversion was done in
9869153ae8.
Throwing an exception is excessive in case of this worker, as it's
expected for it to time out sometimes if the URLs take too long to
process.
With a test added by tabbott.