Since c8ec3dfcf6, the file must contain the version that was
configured, or we run `ALTER EXTENSION pgroonga UPDATE`; if the file
is missing and pgroonga was previously installed, we run `CREATE
EXTENSION pgroonga`, which fails, since the extension already exists.
If the file is present but pgroonga was not configured, a later
attempt to enable pgroonga will incorrectly run `ALTER EXTENSION
pgroonga UPDATE` instead of `CREATE EXTENSION pgroonga`.
If the file existed on the previous version, touch it in the new
PostgreSQL version. This ensures that puppet will *always* run the
pgroonga update, which may be necessary if the pgroonga version also
changed. At worst, if the pgroonga version has not changed, this is a
safe no-op.
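A minimal sketch of the resulting logic, as Python pseudocode rather
than the actual puppet manifests; `run_sql` and the function names
here are hypothetical stand-ins:

```python
import subprocess
from pathlib import Path

def run_sql(statement: str) -> None:
    # Stand-in for executing SQL against the Zulip database.
    subprocess.run(["psql", "zulip", "-c", statement], check=True)

def sync_pgroonga(version_file: Path, configured_version: str) -> None:
    if not version_file.exists():
        # No record of pgroonga: install it fresh.  This is why a
        # missing file is a problem when pgroonga was previously
        # installed -- the extension already exists, and CREATE errors.
        run_sql("CREATE EXTENSION pgroonga")
    elif version_file.read_text().strip() != configured_version:
        # Recorded version differs (including the empty file left by
        # the `touch` below): run the update.  If pgroonga is already
        # at the configured version, this is a safe no-op.
        run_sql("ALTER EXTENSION pgroonga UPDATE")
    version_file.write_text(configured_version)

def carry_version_file_forward(old_file: Path, new_file: Path) -> None:
    # On PostgreSQL upgrade: if the file existed for the old version,
    # touch an empty one for the new version, so the check above
    # mismatches and the pgroonga update always runs.
    if old_file.exists():
        new_file.touch()
```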
‘rabbitmqctl await_startup’ does not retry while waiting for the
Erlang runtime to start; it only waits for the RabbitMQ application to
start once Erlang is running.
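As a sketch of the implied workaround (the attempt count and delay are
illustrative, not the actual change), the whole command has to be
retried, since it fails outright while Erlang is still starting:

```python
import subprocess
import time

def await_rabbitmq(attempts: int = 10, delay: float = 1.0) -> None:
    # `rabbitmqctl await_startup` errors if the Erlang runtime is not
    # up yet, so retry the command rather than relying on it to wait.
    for _ in range(attempts):
        if subprocess.run(["rabbitmqctl", "await_startup"]).returncode == 0:
            return
        time.sleep(delay)
    raise RuntimeError("RabbitMQ failed to start")
```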
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This code was originally added in 608173657d in Zulip Server 2.0;
since we can only directly upgrade from 5.0 or later, this code is
guaranteed to have run already. Remove it.
This code was originally added in 5f3765b872 in Zulip Server 4.0;
since we can only directly upgrade from 5.0 or later, this code is
guaranteed to have run already. Remove it.
This code was originally added in 2b146012e1 in Zulip Server 1.7.0;
since we can only directly upgrade from 5.0 or later, this code is
guaranteed to have run already. Remove it.
The python3-yaml dependency was added at install time in 3314fefaec
in Zulip Server 4.0, and this workaround was added in de41a10d38,
also in 4.0. Since we can only directly upgrade from 5.0 or later,
the dependency is guaranteed to already be installed, one way or the
other. Remove this workaround.
This code was originally added in 382261dc72 in Zulip Server 3.0;
since we can only directly upgrade from 5.0 or later, this code is
guaranteed to have run already. Remove it.
This code was originally added in e705883857 in Zulip Server 5.0;
since we can only directly upgrade from 5.0 or later, this code is
guaranteed to have run already. Remove it.
The current stable branch is on uv, so we no longer need to preserve
the old-style zulip-venv-cache directories from the last 14 days.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Build warnings are unfortunately very common in third-party packages.
They’re difficult to reliably detect since packages don’t always build
from source, and they can’t be whitelisted on a per-package basis
since they’re all attributed to setuptools or an anonymous code
string.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This generalizes from thumbnail_workers to include any other queue.
However, we only additionally document `email_senders_workers`, since
other queues are not guaranteed to work correctly with multiple
consumers.
c5200e8b05 switched `digest_emails` from inserting emails into the
ScheduledEmail table (processed later by `deliver_scheduled_emails`)
to inserting them into the `deferred_email_senders` RabbitMQ queue.
This moved the backlog from an unmonitored table to a monitored
queue.
This slightly improved throughput -- but began paging, since the
backlog was now in a monitored form. Increase the paging thresholds
to not page for expected behaviour.
The cron jobs are potentially wrapped by Sentry, which logs "cron
failures" and sends emails. We would like those failures to occur
only when the cron job itself failed to run successfully -- not when
the underlying metric is outside of its normal range. That is, we
would like to differentiate a failure of the monitoring
infrastructure from a failure of what it is monitoring.
Swap to returning 0 on everything except "unknown" results.
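A minimal sketch of the convention, assuming the usual
OK/WARNING/CRITICAL/UNKNOWN check states; the exact nonzero exit code
is illustrative:

```python
import sys

def report(state: str, message: str) -> None:
    # Print the state for the alerting system to parse, but exit
    # nonzero only for "unknown" -- i.e., when the check itself could
    # not determine the metric.  An out-of-range metric is the
    # monitored system's problem, not the cron job's, so it exits 0
    # and does not trip Sentry's cron-failure reporting.
    print(f"{state}: {message}")
    sys.exit(1 if state == "unknown" else 0)
```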
`deliver_scheduled_emails` tries to deliver the email synchronously,
and if it fails, it retries after 10 seconds. Since it does not track
retries, and always tries the earliest-scheduled-but-due message
first, the worker will not make forward progress if there is a
persistent failure with that message, and will retry indefinitely.
This can result in excessive network or email delivery charges from
the remote SMTP server.
Switch to delivering emails via a new queue worker. The
`deliver_scheduled_emails` job now serves only to pull deferred jobs
out of the table once they are due, insert them into RabbitMQ, and
then delete them. This limits the potential for head-of-queue
failures to failures inserting into RabbitMQ, which is more reasonable
than failures speaking to a complex external system we do not control.
Retries and any connections to the SMTP server are left to the
RabbitMQ consumer.
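A minimal sketch of the split, with the database access and RabbitMQ
publish abstracted into hypothetical callables rather than the actual
models and helpers:

```python
from datetime import datetime, timezone
from typing import Any, Callable

def move_due_emails(
    fetch_due: Callable[[datetime], list[dict[str, Any]]],
    publish: Callable[[str, dict[str, Any]], None],
    delete: Callable[[dict[str, Any]], None],
) -> None:
    # The scheduled-delivery job no longer talks SMTP at all: it only
    # pulls rows that are due out of the table, inserts them into the
    # RabbitMQ queue, and deletes them.  Retries and all SMTP
    # connections are left to the queue consumer.
    now = datetime.now(timezone.utc)
    for row in fetch_due(now):
        publish("deferred_email_senders", row)
        delete(row)
```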
We build a new RabbitMQ queue, rather than using the existing
`email_senders` queue, because that queue is expected to be reasonably
low-latency, for things like missed-message notifications. The
`send_future_email` codepath which inserts into ScheduledEmail is
also (ab)used for digest emails, which are extremely bursty in their
frequency -- and a large burst could significantly delay emails behind
it in the queue.
The new queue is explicitly only for messages which were not initiated
by user actions (e.g., invitation reminders, digests, new-account
follow-ups), and which are thus not latency-sensitive.
Fixes: #32463.
The refactoring in 4e28e1d3ff switched a check for `if args.from_git`
into `if NEW_ZULIP_MERGE_BASE`, which is incorrect -- the merge-base
is always defined; it may merely match the version. This led to
errors when installing from a tarball, without a git repo.
Since the run_hooks command was already set up to take a `--from-git`
argument, but was ignoring it, pass that flag down from
upgrade-zulip-stage-3 when necessary, and swap run_hooks back to
basing its version-resolution logic on that flag.
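Schematically (the surrounding code is elided; this is just the shape
of the fix, not the literal diff):

```python
# Before (broken): the merge-base is always defined, so this branch
# always ran, even for tarball installs with no git repo.
if NEW_ZULIP_MERGE_BASE:
    ...  # resolve the version via git

# After: decide based on the flag passed down from
# upgrade-zulip-stage-3.
if args.from_git:
    ...  # resolve the version via git
```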
When flushing caches, we want to ensure that even processes which may
have a wrong cache-key-prefix know to fetch the latest data from the
database. This is complicated by the cache-key-prefixes being stored
on disk; checking them on every cache delete is not sufficiently
performant.
We store the list of cache-key-prefixes in the cache itself, with no
prefix. This entry is updated when a new cache-key-prefix is created,
and is also allowed to lapse after 24 hours. Updating this global
cache entry on new prefix creation ensures that even a
not-yet-restarted-into deployment will have its caches appropriately
purged if changes are made to the underlying data.
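A minimal sketch of that registry, assuming a Django-style cache
client that applies no key prefix of its own; the key name and helper
names are hypothetical:

```python
from django.core.cache import cache  # assumed to apply no prefix itself

PREFIX_LIST_KEY = "active-cache-key-prefixes"  # hypothetical unprefixed key
DAY_SECONDS = 24 * 60 * 60

def register_prefix(new_prefix: str) -> None:
    # Called when a new cache-key-prefix is created; refreshing the
    # 24-hour timeout here lets long-unused prefixes eventually lapse.
    prefixes = cache.get(PREFIX_LIST_KEY) or []
    if new_prefix not in prefixes:
        prefixes.append(new_prefix)
    cache.set(PREFIX_LIST_KEY, prefixes, timeout=DAY_SECONDS)

def flush_for_all_prefixes(key: str) -> None:
    # One extra cache-get per flush, plus one delete per known prefix
    # -- the costs noted below.
    for prefix in cache.get(PREFIX_LIST_KEY) or []:
        cache.delete(prefix + key)
```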
However, this both adds a cache-get and multiplies the size of all
cache clears; for large bulk clears (e.g., for stream renames, which
clear the cache for all message-ids in them), this may prove
untenable.
The old /srv/zulip-npm-cache system has been unused for two
years (Zulip Server ≥ 7.0). We can just delete this directory.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This consolidates the list of stale migrations into
`lib/migration_status.py` as `STALE_MIGRATIONS`.
This is prep work to let the migration status tool at
`migration_status.py` clean its output of these migrations
too.
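For illustration, the consolidated constant might look something like
this; the entries shown are placeholders, not the actual list:

```python
# lib/migration_status.py
# Migrations that once existed but have since been deleted or renamed;
# tools that compare recorded migrations against the codebase should
# ignore these.  (Placeholder entry, for illustration only.)
STALE_MIGRATIONS: list[tuple[str, str]] = [
    ("app_label", "0001_some_deleted_migration"),
]
```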