Found by semgrep 0.115 more accurately applying the rule added in
commit 0d6c771baf (#15349).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 47c5deeccd)
This includes the change from 28fde2ee27.
Only a minor bump is required because it has no effect on type
checking until django-stubs gets integrated.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
(cherry picked from commit b5f1134172)
These suffixes suppress some checks in the process, but still generate
and upload a tarball, push a tag, and make a GitHub prerelease.
`upload-release` already understands that anything with a suffix never
becomes the "latest" release.
(cherry picked from commit f3fd0b6975)
Since Django factors request.is_secure() into its CSRF check, we need
this to tell it to consider requests forwarded from nginx to Tornado
as secure.
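For context, a minimal sketch of the standard Django mechanism for this,
assuming nginx sets `X-Forwarded-Proto` on the requests it proxies (the
actual change may be wired up differently):
```python
# settings.py (sketch). Trust the proxy's X-Forwarded-Proto header so that
# request.is_secure() -- and therefore Django's CSRF referer check -- treats
# requests that nginx forwarded over HTTPS as secure, even though the
# nginx -> Tornado hop itself is plain HTTP.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```
This is only safe when nginx unconditionally sets or strips that header, so
clients cannot spoof it.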
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit ce9ceb7f9f)
Before Zulip 4.9, the Zulip install process left any already-installed
rabbitmq with whatever nodename it had previously configured. Since
this encodes the name of the host when it was installed, this does not
function well with containers.
Leave rabbitmq-server uninstalled, which lets the Zulip installation
process set the nodename to `localhost`, which ensures that it is
usable across container restarts.
(cherry picked from commit 63d2565467)
The production CI image starts `rabbitmq-server` but does not stop it,
which leaves a stale `/var/run/rabbitmq/pid` file in the image.
`rabbitmqctl wait --timeout 600 /var/run/rabbitmq/pid`, which is run
after starting the rabbitmq node, reads the PID file and waits for the
PID to be running, and for rabbitmq's port to be responding to pings.
If it reads an old PID file before the new PID is written, it
aborts (all but the first and last lines are output from `rabbitmqctl
wait` that is hidden by `/etc/init.d/rabbitmq-server`):
```
* Starting RabbitMQ Messaging Server rabbitmq-server
Waiting for pid file '/var/run/rabbitmq/pid' to appear
pid is 341
Waiting for erlang distribution on node 'rabbit@fc8f64d6acdb' while OS process '341' is running
Error:
process_not_running
* FAILED - check /var/log/rabbitmq/startup_{log, _err}
```
If it failed, the `production-upgrade` script tried to start
`rabbitmq` again, even though the first attempt was still starting it
in the background. These two attempts conflicted, and often one or
both failed.
Stop `rabbitmq-server` when building the image, which removes the
stale PID file.
(cherry picked from commit fb338f22d7)
This fixes a bug in commit 513207523c
(#21284) where handle_global_notification_updates would throw an error
on wildcard_mentions_notify because our API isn’t as symmetric as it
should be.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit d123056000)
The previous commit did this for revoking sessions. send_events should
be handled the same way, so that calling do_deactivate_user inside a
transaction works correctly.
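A minimal sketch of the pattern with Django's transaction.on_commit();
send_deactivation_event here is a hypothetical stand-in for the real
event-sending helper:
```python
from django.db import transaction


def send_deactivation_event(user_id: int) -> None:
    # Hypothetical stand-in for the real send_event plumbing.
    print(f"notify clients: user {user_id} was deactivated")


def do_deactivate_user(user_profile, *, acting_user) -> None:
    with transaction.atomic():
        user_profile.is_active = False
        user_profile.save(update_fields=["is_active"])
        # Session revocation and other bookkeeping happen here.

        # Defer the event until COMMIT: if the surrounding transaction
        # rolls back, on_commit callbacks are discarded, so clients never
        # hear about a deactivation that did not actually happen.
        transaction.on_commit(lambda: send_deactivation_event(user_profile.id))
```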
(cherry picked from commit 470c0458e6)
django.request logs responses with 5xx response codes (our configuration
of the logger prevents it from also logging 4xx responses, which it would
do by default). However, it does so without the traceback, which results
in quite unhelpful log messages that look like
"Bad Gateway:/api/v1/users/me/apns_device_token" - particularly
confusing when sent via email to server admins.
The solution here is to do the logging ourselves, using Django's
log_response() (which is meant for this purpose), and including the
traceback. Django tracks (via the response._has_been_logged attribute)
that the response has already been logged, and knows not to duplicate
that action. See log_response() in Django's codebase for the details.
Fixes #19596.
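A rough sketch of the approach (not the actual middleware code), assuming a
Django version whose log_response() accepts an exception keyword (older
releases spell it exc_info):
```python
from django.utils.log import log_response


def log_5xx_with_traceback(request, response, exception) -> None:
    # log_response() logs 5xx responses at ERROR level, attaches the
    # traceback from `exception`, and sets response._has_been_logged so
    # that django.request does not log the same response a second time.
    log_response(
        "%s: %s",
        response.reason_phrase,
        request.path,
        response=response,
        request=request,
        exception=exception,
    )
```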
It seems helpful for this to get logged with the traceback rather than
just the general
"<exception name> while trying to connect to push notification bouncer."
We have observed infrequent storms of accesses (tens of thousands of
requests per minute) to `/` after an event queue expires. The current
best theory is that the act of reloading the page itself triggers a
focus event, which itself triggers a reload before the prior one had
had time to do anything but send the network request.
Since the `focus` event here is merely a backstop in case the
synchronous reloading and deferred reloading fail, we need only run it
once.
(cherry picked from commit e7ff4afc36)
Prevent a non-immediate reload from being scheduled while an immediate
reload is already in progress. This is highly unlikely in practice,
but is a reasonable safeguard.
(cherry picked from commit f8c9d60d33)
A `reload.initiate({immediate: true, ...})` *should* not return, as it
should trigger a `window.location.reload` and stop execution.
In the event that it continues execution and returns (for instance,
due to being in the background and reloads being suppressed for
power-saving -- see #6821), there is no need to fall through and
potentially schedule a 90-second-later retry.
(cherry picked from commit ffadf82f8c)
Due to mismatches between the URL parsers in Python and browsers, it
was possible to hoodwink rewrite_local_links_to_relative into
generating links that browsers would interpret as absolute.
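A hypothetical illustration of the class of mismatch (not necessarily the
exact input from the report):
```python
from urllib.parse import urlsplit

# Python's parser sees no netloc here, so a naive "is this link local?"
# check built on urlsplit() treats it as a relative path:
candidate = "/\\evil.example/x"
print(urlsplit(candidate).netloc)  # '' -- looks relative to Python
# Browsers, per the WHATWG URL Standard, treat "\" like "/" in http(s)
# URLs, so the same string resolves as "//evil.example/x": a
# scheme-relative link to a different host, i.e. an absolute link.
```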
Signed-off-by: Anders Kaseorg <anders@zulip.com>