This is already installed on a lot of systems, and is used indirectly
when upgrading Zulip from Git.
We previously removed this in
263212decf, I believe due to a mistaken understanding that only
`makemessages` needed it.
`deliver_scheduled_emails` tries to deliver the email synchronously,
and if it fails, it retries after 10 seconds. Since it does not track
retries, and always tries the earliest-scheduled-but-due message
first, the worker will not make forward progress if there is a
persistent failure with that message, and will retry indefinitely.
This can result in excessive network or email delivery charges from
the remote SMTP server.
Switch to delivering emails via a new queue worker. The
`deliver_scheduled_emails` job now serves only to pull deferred jobs
out of the table once they are due, insert them into RabbitMQ, and
then delete them. This limits the potential for head-of-queue
failures to failures inserting into RabbitMQ, a far more predictable
failure mode than speaking to a complex external system we do not
control.
Retries and any connections to the SMTP server are left to the
RabbitMQ consumer.
We build a new RabbitMQ queue, rather than use the existing
`email_senders` queue, because that queue is expected to be reasonably
low-latency, for things like missed message notifications. The
`send_future_email` codepath which inserts into ScheduledEmails is
also (ab)used to send digest emails, which are extremely bursty in
their frequency -- and a large burst could significantly delay emails
behind it in the queue.
The new queue is explicitly only for messages which were not initiated
by user actions (e.g., invitation reminders, digests, new account
follow-ups) which are thus not latency-sensitive.
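A minimal sketch of the deferral side, assuming a hypothetical queue
name (`deferred_email_senders`) and a hypothetical `to_dict()`
serializer -- `queue_json_publish` and the `ScheduledEmail` model are
Zulip's, but the rest is illustrative:
```python
from django.db import transaction
from django.utils.timezone import now

from zerver.lib.queue import queue_json_publish
from zerver.models import ScheduledEmail


def enqueue_due_scheduled_emails() -> None:
    while True:
        with transaction.atomic():
            email = (
                ScheduledEmail.objects.select_for_update()
                .filter(scheduled_timestamp__lte=now())
                .order_by("scheduled_timestamp")
                .first()
            )
            if email is None:
                break
            # Insert into RabbitMQ, then delete the row; retries and the
            # SMTP connection are now the consumer's problem.
            queue_json_publish("deferred_email_senders", email.to_dict())
            email.delete()
```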
Fixes: #32463.
In some cases, it is not possible to configure the load-balancer to
add an X-Forwarded-Proto header. If Zulip is serving its traffic over
HTTP, it will rightly error out, since it cannot guarantee that its
response will be served over an encrypted connection.
Add a new `loadbalancer.rejects_http_requests` setting, which serves
as a way for the operator to swear that the load-balancer will *never*
serve responses from Zulip over an unencrypted connection. In most
cases, this is because the load-balancer is configured to have port 80
always serve an HTTP 301 redirect to the same URL over HTTPS.
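In that case, the operator can add a stanza along these lines to
`/etc/zulip/zulip.conf` (a sketch inferred from the setting's name;
the exact accepted values may differ):
```
[loadbalancer]
# Only set this if port 80 can never serve Zulip responses in the clear.
rejects_http_requests = true
```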
Properly configuring the proxy to send `X-Forwarded-Proto` is always a
better solution than using this configuration parameter, so use of
this should be viewed as a last resort.
Zulip instances without a database included, like the Docker image,
would fail to use TLS properly, since `TLS_REQCERT` was not set in
`/etc/ldap/ldap.conf`. While there are a few other ways we could fix
this, just installing libldap-common on app frontend instances seems
like a good solution: it has no impact on other Zulip systems, and the
package was already being installed indirectly, via a "Recommends"-tier
apt dependency of the PostgreSQL server package.
Fixes: zulip/docker-zulip#454.
The nginx-to-uwsgi timeout defaults to 60s, which is exactly the same
as the current "harakiri" timeout configured in uwsgi (which limits
how long a request can run before the worker is terminated). This
causes a race: if nginx hits its 60s before uwsgi does, we return a
504; otherwise, we get a 502.
Make the nginx-to-uwsgi timeout explicit, and shorten the "harakiri"
timeout to be explicitly less than it. Document the 60s timeout,
which all outer reverse proxies must be set _longer than_ in order
to have proper "onion" timeouts.
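As a sketch, the two knobs involved (`uwsgi_read_timeout` and
`harakiri` are the real nginx and uwsgi option names; the values here
are illustrative):
```
# nginx, in the uwsgi proxy configuration: make the 60s wait explicit.
uwsgi_read_timeout 60s;
```
...and in `uwsgi.ini`, something like `harakiri = 55`, so that uwsgi
always terminates a stuck worker before nginx gives up on it.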
Currently, it handles two hook types: 'pre-create' (to verify that the
user is authenticated and the file size is within the limit) and
'pre-finish' (which creates an attachment row).
No secret is shared between Django and tusd for authentication of the
hooks endpoints, because none is necessary -- tusd forwards the
end-user's credentials, and the hook checks them like it would any
end-user request. An end-user gaining access to the endpoint would be
able to do no more harm than via tusd or the normal file upload API.
Regardless, the previous commit has restricted access to the endpoint
at the nginx layer.
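A minimal sketch of such a hook handler, assuming tusd 2.x's HTTP hook
protocol (a JSON POST whose `Type` field names the hook, answered with
a `RejectUpload` field to refuse the upload); `MAX_FILE_SIZE` and the
helper logic are hypothetical stand-ins, not Zulip's actual code:
```python
import json

from django.http import HttpRequest, JsonResponse

MAX_FILE_SIZE = 25 * 1024 * 1024  # hypothetical size limit


def user_is_authenticated(request: HttpRequest) -> bool:
    # Stand-in: check the forwarded end-user credentials exactly as a
    # normal end-user request would be checked.
    return request.user.is_authenticated


def tusd_hooks(request: HttpRequest) -> JsonResponse:
    body = json.loads(request.body)
    hook_type = body["Type"]  # "pre-create" or "pre-finish"
    upload = body["Event"]["Upload"]
    if hook_type == "pre-create":
        if not user_is_authenticated(request) or upload["Size"] > MAX_FILE_SIZE:
            return JsonResponse({"RejectUpload": True})
        return JsonResponse({})
    if hook_type == "pre-finish":
        # Here Zulip would create the Attachment row for the upload.
        return JsonResponse({})
    return JsonResponse({}, status=400)
```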
Co-authored-by: Brijmohan Siyag <brijsiyag@gmail.com>
Zopfli[^1] produces very good, but time-intensive, zlib-compatible
compression. It is hence only suitable for pre-compressing objects,
not for on-the-fly compression.
Use a webpack plugin to write pre-compressed versions of JS and CSS
assets using Zopfli, and configure nginx to serve those assets when
`Accept-Encoding: gzip` is provided.
This reduces the size of the JS and CSS assets on initial pageload
from 1422872 bytes to 1108267 bytes, or about a 22% savings.
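Serving them is handled by nginx's stock `gzip_static` directive,
which returns a pre-compressed `.gz` sibling of the requested file
when the client advertises gzip support; roughly:
```
location /static/ {
    gzip_static on;  # serve app.<hash>.js.gz, written by Zopfli at build time
}
```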
[^1]: https://github.com/google/zopfli
The default compression level is 1; increasing this to 3 takes
slightly more CPU time (single-digit milliseconds on multi-MB
transfers), but results in a small but noticeable (4-7%) improvement
in compression of JSON content.
Assuming a 25-megabit connection (the current average data rate for
cell phones in the U.S.), a 2MB file which is shrunk an additional 4%
saves approximately 25 milliseconds of transfer time; the additional
few milliseconds of CPU time are thus well worth it. For faster
connections (e.g. 100 megabit), the tradeoff is more or less a wash.
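Assuming this is nginx's `gzip_comp_level` (whose documented default
is indeed 1), the change is a single directive:
```
gzip_comp_level 3;  # ~4-7% smaller JSON bodies for single-digit ms more CPU
```
The arithmetic behind the 25-millisecond figure: 4% of 2MB is about
80KB, i.e. roughly 0.64 megabits, which takes ~26ms to transfer at
25 megabits per second.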
There's no need for sharding, but this allows one to spend a bit of
extra memory to reduce image-processing latency when bursts of images
are uploaded at once.
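As a hypothetical illustration (the real option name is whatever the
accompanying puppet change defines), such a deployment might set, in
`/etc/zulip/zulip.conf`:
```
[application_server]
# Hypothetical key: each extra worker costs memory, but lets a burst
# of uploads be thumbnailed in parallel.
thumbnail_workers = 4
```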