This works around the `/usr/bin/pg_dump` failure described in the
previous commit. Since we are now calling the appropriately-versioned
`pg_dump` binary directly, this is no longer strictly necessary, but is
added as defense in depth.
`/usr/bin/pg_dump` on Ubuntu and Debian is actually a wrapper which
attempts to choose which `pg_dump` binary, from all of the installed
`postgresql-client-*` packages, to run. However, its logic is confused
by passing empty `--host` and `--port` options -- instead of inspecting
the locally-running server instance, it assumes some remote host and
chooses the highest-versioned `pg_dump` which is installed.
Because Zulip writes binary database backups, they are sensitive to
the version of the client `pg_dump` binary that is used -- and the output
may not be backwards compatible. Using a PostgreSQL 16 `pg_dump`
writes archive format 1.15, which cannot be read by a PostgreSQL 15
`pg_restore`.
Zulip does not currently support PostgreSQL 16 as a server. This
means that backups on servers with `postgresql-client-16` installed
did not successfully round-trip Zulip backups -- their backups were
written using PostgreSQL 16's client, while the `pg_restore` chosen at
restore time was (correctly) the one whose version matched the server
(PostgreSQL 15 or below), and thus did not understand the new archive
format.
Existing `./manage.py backups` taken since `postgresql-client-16` was
installed are thus not directly usable by the `restore-backup` script.
They are not useless, however, since they can theoretically be
converted into a format readable by PostgreSQL 15 -- by importing them
into a PostgreSQL 16 instance, and re-dumping with a PostgreSQL 15
`pg_dump`.
Fix this issue by hard-coding the path to the binary whose version
matches the version of the server we are connected to. This may
theoretically fail if we are connected to a remote PostgreSQL instance
and do not have a `postgresql-client` package locally installed which
matches the remote PostgreSQL server's version. However, choosing a
matching version is the only way to ensure that the backup can be
imported cleanly -- and it is preferable that we fail the backup
process rather than write backups that we cannot easily restore from.
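A minimal sketch of the idea, assuming Debian's standard
`/usr/lib/postgresql/<version>/bin/` layout (the helper below is
illustrative, not Zulip's actual code):

```python
import os
import subprocess

def matching_pg_dump(server_major_version: int) -> str:
    # Debian/Ubuntu install each client version under its own prefix,
    # so we can bypass the /usr/bin/pg_dump wrapper entirely.
    path = f"/usr/lib/postgresql/{server_major_version}/bin/pg_dump"
    if not os.path.exists(path):
        # Fail the backup rather than write one we cannot restore.
        raise RuntimeError(
            f"No pg_dump found matching server version {server_major_version}"
        )
    return path

# e.g., for a PostgreSQL 15 server:
subprocess.check_call(
    [matching_pg_dump(15), "--format=custom", "--file=/tmp/zulip.dump", "zulip"]
)
```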
Fixes: #27160.
This fixes the explanation of the setting's syntax to be more precise
(which doesn't mean "easily understandable", because the setting is
a bit tricky), and adds an example to illustrate it.
This fixes a regression introduced in
9954db4b59, where the realm's default
language would be ignored for users created via API/LDAP/SAML,
resulting in all such users having English as their default language.
The API/LDAP/SAML account creation code paths don't have a request,
and thus cannot pull the default language from the user's browser.
We have the `realm.default_language` field intended for this use case,
but it was not being passed through the system.
Rather than pass `realm.default_language` through from each caller, we
make the low-level user creation code set this field, as that seems
more robust to the creation of future callers.
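A hedged sketch of the shape of the fix; the function and parameter
names here are illustrative, not Zulip's exact code:

```python
def create_user_profile(realm, email, *, default_language=None, **kwargs):
    # Callers with a request can pass the browser-derived language;
    # API/LDAP/SAML paths pass nothing and get the realm default.
    if default_language is None:
        default_language = realm.default_language
    ...
```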
Making `request` a mandatory keyword argument avoids confusion about
the meaning of parameters, especially with `request` acquiring the
ability to be None in the next commit.
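For illustration, the kind of signature change this implies (the
function name is an assumption):

```python
from typing import Optional
from django.http import HttpRequest

# The bare "*" makes request keyword-only, so every call site must
# spell out request=..., which stays unambiguous once None is allowed.
def do_create_user(email: str, *, request: Optional[HttpRequest]) -> None:
    ...
```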
The comment’s logic doesn’t make sense. Every build gets to write to
the caches; some builds do in fact add new items, and without
clean_unused_caches.py there’s no way for them to remove items.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 124c5d02e5)
This works around a failure in the current postgresql-client-common
and postgresql-client-15 packages; it exists primarily to improve
the signal on our CI builds, as the failure is a real one caused
by the package upgrade process.
All `X-amz-*` headers must be included in the signed request to S3;
since Django did not take those headers into account (it constructed a
request from scratch, while nginx's request inherits them from the
end-user's request), the proxied request was thus not signed correctly.
Strip off the `X-amz-cf-id` header added by CloudFront. While we
would ideally strip off all `X-amz-*` headers, this requires a
third-party module[^1].
[^1]: https://github.com/openresty/headers-more-nginx-module#more_clear_input_headers
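A hedged illustration of the failure mode using botocore directly (not
Zulip's code): a header added after signing is not covered by the
signature's `SignedHeaders` list, and S3 requires every `X-amz-*`
header to be signed.

```python
import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = botocore.session.get_session()
request = AWSRequest(
    method="GET", url="https://example-bucket.s3.amazonaws.com/some-key"
)
SigV4Auth(session.get_credentials(), "s3", "us-east-1").add_auth(request)

# CloudFront then injects a header the signature never covered; S3
# rejects the proxied request as improperly signed:
request.headers["X-amz-cf-id"] = "added-by-cloudfront"
```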
This is common in cases where the reverse proxy itself is making
health-check requests to the Zulip server; these requests have no
X-Forwarded-* headers, so they would normally hit the error case of
"request through the proxy, but no X-Forwarded-Proto header".
Add an additional special-case for when the request's originating IP
address is resolved to be the reverse proxy itself; in these cases,
HTTP requests with no X-Forwarded-Proto are acceptable.
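A rough sketch of the added special case (helper and parameter names
are assumptions, not Zulip's exact code):

```python
def reject_missing_forwarded_proto(request, proxy_ips: set) -> bool:
    # True if this plain-HTTP request should hit the error case.
    if request.headers.get("X-Forwarded-Proto") is not None:
        return False  # the proxy told us the original scheme
    # No X-Forwarded-Proto: acceptable only when the connection comes
    # from the reverse proxy itself (e.g., its own health checks).
    return request.META["REMOTE_ADDR"] not in proxy_ips
```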
Previously (with ERROR_REPORTING = True), we’d stuff the entire
traceback of the initial exception into the subject line of an error
email, and then also send a separate email for the JSON 500 response.
Instead, log one error with the standard Django format.
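A minimal sketch of the replacement pattern, assuming Django's stock
`django.request` logger rather than Zulip's exact logger:

```python
import logging

logger = logging.getLogger("django.request")

def handle_view_exception(request) -> None:
    # One log record in the standard Django format; handlers such as
    # AdminEmailHandler then produce a single, normal error email.
    logger.error("Internal Server Error: %s", request.path, exc_info=True)
```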
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The name and docstring were just wrong: having a UserMessage row isn't
sufficient for having message access, and is actually only relevant in
a private stream with private history. The function is only used in a
single place anyway, in `bulk_access_messages`.
The comment mentioning this function in handle_remove_push_notification
can be tweaked to just not mention any function specifically and just
say why we're not checking message access.
Users who used to be subscribed to a private stream, and have since
been removed from it, retain the ability to edit messages/topics, and
delete messages that they used to have access to, if other relevant
organization permissions allow these actions. For example, a user may be
able to edit or delete their old messages they posted in such a private
stream. An administrator will be able to delete old messages (that they
had access to) from the private stream.
We fix this by changing the logic in `has_message_access` (which lies
at the core of our message access checks -- `access_message()` and
`bulk_access_messages()`) to not rely only on a UserMessage row for
checking access, but to also verify stream type and subscription
status.
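An illustrative sketch of the corrected check (the helpers here are
hypothetical; `invite_only` and `history_public_to_subscribers` are
real Zulip stream fields):

```python
def has_message_access(user_profile, message, *, has_user_message: bool) -> bool:
    stream = stream_for_message(message)  # hypothetical helper
    if not stream.invite_only:
        return True  # public stream: history is accessible
    if not is_subscribed(user_profile, stream):  # hypothetical helper
        return False  # unsubscribed users lose access to private streams
    if stream.history_public_to_subscribers:
        return True
    # Private stream with private history: a UserMessage row is what
    # proves the user received the message while subscribed.
    return has_user_message
```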
Previously, if a user tried to create a webhook using the Webhooks
plugin in Sentry and used the "Test plugin" to test the webhook,
the server would send a 500 error, even though the integration
worked perfectly. This led users to believe that the integration
was not working.
Fixes #26173.
(cherry picked from commit eb8714c9dc)
This is done to clarify where this fixture comes from, as there are
two documented ways to test the integration.
(cherry picked from commit fdc14ee3f0)
Because the third party might not be expecting a 400 from our
webhooks, we now instead use a 200 status code for unknown events,
while sending back the error to Sentry. Because it is no longer an
error response, the response type should now be "success".
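A hedged sketch of the pattern (not Zulip's exact helper): acknowledge
the unknown event with a 200, while still reporting the problem in the
response body.

```python
from django.http import JsonResponse

def unsupported_event_response(event_type: str) -> JsonResponse:
    # 200 keeps third-party retry/alerting logic quiet; the body still
    # tells the sender what went wrong.
    return JsonResponse(
        {"result": "success", "msg": f"Unsupported event type: {event_type}"},
        status=200,
    )
```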
Fixes #24721.
(cherry picked from commit 84723654c8)
This helps reduce the impact on busy uwsgi processes in case there are
slow timeout failures of Sentry servers. The p99 is less than 300ms,
and p99.9 per day peaks at around 1s, so this will not affect more
than 0.1% of requests in normal operation.
This is not a complete solution (see #26229); it is merely stop-gap
mitigation.
(cherry picked from commit a076d49be7)