Compare commits

...

246 Commits

Author SHA1 Message Date
Alex Vandiver
8c31437dd1 Release Zulip Server 4.11. 2022-03-15 20:51:10 +00:00
Alex Vandiver
e6eace307e CVE-2022-24751: Clear sessions outside of the transaction.
Clearing the sessions inside the transaction makes Zulip vulnerable to
a narrow window where the deleted session has not yet been committed,
but has been removed from the memcached cache.  During this window, a
request with the session-id which has just been deleted can
successfully re-fill the memcached cache, as the in-database delete is
not yet committed, and thus not yet visible.  After the delete
transaction commits, the cache will be left with a cached session,
which allows further site access until it expires (after
SESSION_COOKIE_AGE seconds), is ejected from the cache due to memory
pressure, or the server is upgraded.

Move the session deletion outside of the transaction.

Because the testsuite runs inside of a transaction, it is impossible
to test this in CI; the testsuite uses the non-caching
`django.contrib.sessions.backends.db` backend, regardless.  The test
added in this commit thus does not fail before this commit; it is
merely a baseline expectation that the session should be deleted somehow,
and does not exercise the assert added in the previous commit.
2022-03-15 20:29:05 +00:00
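
A minimal sketch of the ordering fix described above, assuming a hypothetical
`delete_user_sessions` helper (this is not Zulip's actual code): the
cache-backed session deletion runs only after the database transaction has
committed.

```
from django.db import transaction

def deactivate_user(user_profile):
    with transaction.atomic():
        user_profile.is_active = False
        user_profile.save(update_fields=["is_active"])

    # Runs only after the transaction above has committed, closing the
    # window in which a concurrent request could repopulate memcached
    # from the not-yet-deleted database row.
    delete_user_sessions(user_profile)  # hypothetical helper: clears DB rows + cache
```
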
Alex Vandiver
c28d1169c3 session: Enforce that changes cannot happen in a transaction. 2022-03-15 20:29:05 +00:00
Alex Vandiver
d525cf8f9d i18n: Remove wrongly-added id_ID locale.
These locale files break `./manage.py compilemessages` provisioning,
and did not reappear on a subsequent Transifex import.
2022-03-12 02:12:15 +00:00
Alex Vandiver
93f13ff2f5 i18n: Update translation data from Transifex. 2022-03-12 00:34:16 +00:00
Anders Kaseorg
4eaf9d7e55 provision: Install non-PGDG PGroonga package in development environment.
The development environment installs PostgreSQL from the OS, not PGDG,
so we should install the non-PGDG PGroonga package to match.  This is
required on Debian 10 where postgresql-12-pgdg-pgroonga does not exist.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 1178e015d1)
2022-03-09 17:31:13 -08:00
Alex Vandiver
aaf0e1d93b version: Update version after 4.10 release. 2022-02-25 14:56:26 -08:00
Tim Abbott
0d72a12ffa lint: Run prettier on changelog. 2022-02-25 13:59:54 -08:00
Alex Vandiver
4bb22d2535 Release Zulip Server 4.10. 2022-02-25 21:19:38 +00:00
Mateusz Mandera
c93cef91e8 create_preregistration_user: Add additional hardening assertion.
TestMaybeSendToRegistration needs tweaking here, because it wasn't
setting the subdomain for the dummy request, so
maybe_send_to_registration was actually running with realm=None, which
is not right for these tests.

Also, test_sso_only_when_preregistration_user_exists was creating
PreregistrationUser without setting the realm, which was also incorrect.
2022-02-22 23:18:34 +00:00
Mateusz Mandera
0c227217b2 registration: Change create_preregistration_user to take realm as arg.
create_preregistration_user is a footgun, because it takes the realm
from the request. The calling code is supposed to validate that
registration for the realm is allowed first, but can sometimes do
that on a "realm" taken from something other than the request - and
later call create_preregistration_user, thus leading to prereg user
creation on an unvalidated request.realm.

It's safer, and makes more sense, for this function to take the intended
realm as argument, instead of taking the entire request. It follows that
the same should be done for prepare_activation_url.
2022-02-22 23:18:34 +00:00
Mateusz Mandera
b5c7a79bdf tests: Fix some instances of logged in session polluting test state.
In these tests, the code ends up with a logged in session when it's
undesired - later on these tests make requests to a different subdomain
- where a logged in session is not supposed to exist. This leads to an
unintended, strange situation where request.user is a user from the old
subdomain but the request itself is to a *different* subdomain. This
throws off get_realm_from_request, which will return the realm from
request.user.realm - which is not what these tests want and can lead to
these tests failing when some of the production code being tested
switches to using get_realm_from_request instead of
get_realm(get_subdomain).
2022-02-22 23:18:34 +00:00
Mateusz Mandera
7e991c8c7e CVE-2022-21706: Prevent use of multiuse invites to join other orgs.
The codepaths for joining an organization via a multi-use invitation
(accounts_home_from_multiuse_invite and maybe_send_to_registration)
weren't validating whether
the organization the invite was generated for matches the organization
the user attempts to join - potentially allowing an attacker with access
to organization A to generate a multi-use invite and use it to join
organization B within the same deployment, which they shouldn't have
access to.
2022-02-22 23:18:31 +00:00
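
A hedged sketch of the kind of check this fix adds (identifiers are
illustrative, not the exact Zulip code): a multi-use invite is rejected unless
its realm matches the realm the registration request is actually for.

```
class InvitationError(Exception):
    pass

def validate_multiuse_invite(invite, request_realm):
    # Reject invites generated for organization A when the user is
    # attempting to join organization B on the same deployment.
    if invite.realm_id != request_realm.id:
        raise InvitationError("This invitation is for a different organization.")
```
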
Mateusz Mandera
974c98a45a CVE-2021-3967: Only regenerate the API key by authing with the old key. 2022-02-22 14:13:56 -08:00
Alex Vandiver
5784bdd0ed puppet: Use goarch for wal-g.
wal-g does not currently provide pre-built binaries for
arm64/aarch64 (see #21070) but if they begin to, it will likely be
with the goarch names.

(cherry picked from commit d7e8733705)
2022-02-15 15:57:00 -08:00
Alex Vandiver
e10ea15aa9 puppet: Use goarch for go-camo.
(cherry picked from commit abdbe4ca83)
2022-02-15 15:57:00 -08:00
Alex Vandiver
d860242220 puppet: Use goarch for golang.
Fixes: #21051.
(cherry picked from commit be2f2a5bde)
2022-02-15 15:57:00 -08:00
Alex Vandiver
b2c3f5e510 puppet: Include go version in go-camo release information. 2022-02-15 15:57:00 -08:00
Alex Vandiver
232fe495be puppet: Factor out $::architecture case statement for golang.
(cherry picked from commit 788daa953b)
2022-02-15 15:57:00 -08:00
Alex Vandiver
c20afad828 puppet: Add aarch64 build hashes to external dependencies.
wal-g does not ship aarch64 binaries, currently; the compilation
process ([1]) is somewhat complicated, so we defer the decision about
how to support wal-g for aarch64 until a later date.

[1]: https://github.com/wal-g/wal-g/blob/master/docs/PostgreSQL.md#installing

(cherry picked from commit c094867a74)
2022-02-15 15:57:00 -08:00
Alex Vandiver
3fad49a9c1 puppet: Centralize versions and sha256 hashes of external dependencies.
This will make it easier to update versions of these dependencies.

(cherry picked from commit f166f9f7d6)
2022-02-15 15:57:00 -08:00
Alex Vandiver
cc95aac176 puppet: Move wal-g to external_dep, in /srv/zulip-wal-g-*. 2022-02-15 15:57:00 -08:00
Alex Vandiver
1b27ec9fae puppet: Stop making resources for external binaries and directories.
In the event that extracting doesn't produce the binary we expected it
to, all this will do is create an _empty_ file where we expect the
binary to be.  This will likely muddle debugging.

Since the only reason the resource was made in the first place was to
make dependencies clear, switch to depending on the External_Dep
itself, when such a dependency is needed.

(cherry picked from commit 1e4e6a09af)
2022-02-15 15:57:00 -08:00
Alex Vandiver
ebd74239a2 puppet: Move slash out of $dir by convention.
(cherry picked from commit 3c163a7d5e)
2022-02-15 15:57:00 -08:00
Alex Vandiver
8dcb1e489d puppet: Adjust wal-g release version and SHA256.
wal-g apparently removed the 1.1.1 release; replace it with the
equivalent rc.

(cherry picked from commit d2a78bac7e)
2022-02-15 15:57:00 -08:00
Anders Kaseorg
4e7419e168 provision: Use apt-get --allow-downgrades.
Needed for commit 9c8d2b7be3 (#21115).

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7213116dd3)
2022-02-14 16:45:29 -08:00
Anders Kaseorg
8567e19fff apt-repos: Downgrade PostgreSQL to dodge PGroonga regression.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 9c8d2b7be3)
2022-02-14 15:05:06 -08:00
Anders Kaseorg
9aa7c19891 apt-repos: Remove groovy.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 43c4672deb)
2022-02-14 15:05:06 -08:00
Anders Kaseorg
4fdbf274ac setup-apt-repo: Support installing an APT preferences file.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit fdc1294993)
2022-02-14 15:05:06 -08:00
Anders Kaseorg
218eca14b8 setup-apt-repo: Move supported release check earlier.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7077a289ae)
2022-02-14 15:05:06 -08:00
Anders Kaseorg
08efebbaff setup-apt-repo: Use /etc/os-release instead of lsb_release.
But still install lsb-release for now since Puppet acts funny without
it.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c8bb98554e)
2022-02-14 15:05:06 -08:00
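
A small Python sketch of reading distribution information from
`/etc/os-release` instead of shelling out to `lsb_release`; the keys shown
(`ID`, `VERSION_CODENAME`) are standard os-release fields, and the parsing here
is illustrative rather than the script's exact logic.

```
import shlex

def parse_os_release(path="/etc/os-release"):
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, value = line.split("=", 1)
            # Values may be quoted; shlex handles the quoting rules.
            info[key] = shlex.split(value)[0] if value else ""
    return info

# e.g. {"ID": "debian", "VERSION_CODENAME": "bullseye", ...}
```
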
Alex Vandiver
1c819208d0 setup: Merge multiple setup-apt-repo scripts into one.
This moves the `.asc` files into subdirectories, and writes out the
corresponding `.list` files into them.  It moves from templates to
written-out `.list` files for clarity and ease of
implementation (Debian and Ubuntu need different templates for
`zulip`), and as a way of making explicit which releases are supported
for each list.  For the special-case of the PGroonga signing key, we
source an additional file within the directory.

This simplifies the process for adding another class of `.list` file.

(cherry picked from commit f3eea72c2a)
2022-02-14 15:05:06 -08:00
Anders Kaseorg
8424649c70 reindex-textual-data: Find psycopg2 in the virtualenv.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit aec6cd4cdb)
2022-01-26 17:15:47 -08:00
Alex Vandiver
33d8c190d8 version: Update version after 4.9 release. 2022-01-24 18:44:09 -08:00
Alex Vandiver
0213b811ec Release Zulip Server 4.9 2022-01-25 01:40:31 +00:00
Alex Vandiver
c27324927e CVE-2021-43799: Set a secure Erlang cookie.
The RabbitMQ docs state ([1]):

    RabbitMQ nodes and CLI tools (e.g. rabbitmqctl) use a cookie to
    determine whether they are allowed to communicate with each
    other. [...] The cookie is just a string of alphanumeric
    characters up to 255 characters in size. It is usually stored in a
    local file.

...and goes on to state (emphasis ours):

    If the file does not exist, Erlang VM will try to create one with
    a randomly generated value when the RabbitMQ server starts
    up. Using such generated cookie files are **appropriate in
    development environments only.**

The auto-generated cookie does not use cryptographic sources of
randomness, and generates 20 characters of `[A-Z]`.  Because of a
semi-predictable seed, the entropy of this password is thus less than
the idealized 26^20 = 94 bits of entropy; in actuality, it is 36 bits
of entropy, or potentially as low as 20 if the performance of the
server is known.

These sizes are well within the scope of remote brute-force attacks.

On provision, install, and upgrade, replace the default insecure
20-character Erlang cookie with a cryptographically secure
255-character string (the max length allowed).

[1] https://www.rabbitmq.com/clustering.html#erlang-cookie
2022-01-25 01:35:31 +00:00
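
A sketch of generating such a cookie from a cryptographic source (the output
path and ownership noted in the comment are the conventional RabbitMQ
location, not something this sketch manages):

```
import secrets
import string

def generate_erlang_cookie(length=255):
    # 255 alphanumeric characters drawn from a CSPRNG, instead of the
    # 20 semi-predictable characters Erlang generates on its own.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# The result would be written to /var/lib/rabbitmq/.erlang.cookie (owned by
# rabbitmq, mode 0600) before the server starts.
```
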
Alex Vandiver
c087ed4c26 configure-rabbitmq: Set -u, and not -x. 2022-01-25 01:34:20 +00:00
Alex Vandiver
ffc1f81cde configure-rabbitmq: Factor out sudo, instead of rabbitmqctl. 2022-01-25 01:34:20 +00:00
Alex Vandiver
90b6fe2c6e upgrade: Show output from (re)starting zulip.
5c450afd2d, in ancient history, switched from `check_call` to
`check_output`, throwing away its result.

Use check_call, so that we show the steps to (re)starting the server.
2022-01-25 01:34:20 +00:00
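
An illustration of the difference, with a placeholder command: `check_output`
captures stdout (which was then discarded), while `check_call` lets the
restart steps print to the console.

```
import subprocess

# Before: output captured and thrown away, so the operator sees nothing.
subprocess.check_output(["./scripts/restart-server"])

# After: output flows to the terminal as the server (re)starts.
subprocess.check_call(["./scripts/restart-server"])
```
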
Alex Vandiver
36cebad4c0 CVE-2021-43799: During upgrades, restart rabbitmq if necessary.
Check if it is listening on a public interface on port 25672, and if
so shut it down so it can pick up the new configuration.
2022-01-25 01:34:20 +00:00
Alex Vandiver
f33fbb527c upgrade: Make calling shutdown_server twice only try once. 2022-01-25 01:34:20 +00:00
Alex Vandiver
134a8d4301 CVE-2021-43799: Write rabbitmq configuration before starting.
Zulip writes a `rabbitmq.config` configuration file which locks down
RabbitMQ to listen only on localhost:5672, as well as the RabbitMQ
distribution port, on localhost:25672.

The "distribution port" is part of Erlang's clustering configuration;
while it is documented that the protocol is fundamentally
insecure ([1], [2]) and can result in remote arbitrary execution of
code, by default the RabbitMQ configuration on Debian and Ubuntu
leaves it publicly accessible, with weak credentials.

The configuration file that Zulip writes, while effective, is only
written _after_ the package has been installed and the service
started, which leaves the port exposed until RabbitMQ or system
restart.

Ensure that rabbitmq's `/etc/rabbitmq/rabbitmq.config` is written
before rabbitmq is installed or starts, and that changes to that file
trigger a restart of the service, such that the ports are only ever
bound to localhost.  This does not mitigate existing installs, since
it does not force a rabbitmq restart.

[1] https://www.erlang.org/doc/apps/erts/erl_dist_protocol.html
[2] https://www.erlang.org/doc/reference_manual/distributed.html#distributed-erlang-system
2022-01-25 01:34:17 +00:00
Alex Vandiver
a07f64a463 puppet: Always set the RabbitMQ nodename to zulip@localhost.
This is required in order to lock down the RabbitMQ port to only
listen on localhost.  If the nodename is `rabbit@hostname`, in most
circumstances the hostname will resolve to an external IP, which the
rabbitmq port will not be bound to.

Installs which used `rabbit@hostname`, due to RabbitMQ having been
installed before Zulip, would not have functioned if the host or
RabbitMQ service was restarted, as the localhost restrictions in the
RabbitMQ configuration would have made rabbitmqctl (and Zulip cron
jobs that call it) unable to find the rabbitmq server.

The previous commit ensures that configure-rabbitmq is re-run after
the nodename has changed.  However, rabbitmq needs to be stopped
before `rabbitmq-env.conf` is changed; we use an `onlyif` on an `exec`
to print the warning about the node change, and let the subsequent
config change and notify of the service and configure-rabbitmq to
complete the re-configuration.
2022-01-25 01:33:27 +00:00
Alex Vandiver
e9af26df6e puppet: Run configure-rabbitmq on nodename change.
`/etc/rabbitmq/rabbitmq-env.conf` sets the nodename; anytime the
nodename changes, the backing database changes, and this requires
re-creating the rabbitmq users and permissions.

Trigger this in puppet by running configure-rabbitmq after the file
changes.
2022-01-24 23:09:02 +00:00
Alex Vandiver
7f6b423532 setup: Remove unused RABBITMQ_NODE.
This reverts commit 889547ff5e.  It is
unused in the Docker container, as the configuration of the `zulip`
user in the rabbitmq node is done via environment variables.  The
Zulip host in that context does not have `rabbitmqctl` installed, and
would have needed to know the Erlang cookie to be able to run these
commands.
2022-01-24 23:09:02 +00:00
Alex Vandiver
d95fb34ba7 puppet: Admit we leave epmd port 4369 open on all interfaces.
The Erlang `epmd` daemon listens on port 4369, and provides
information (without authentication) about which Erlang processes are
listening on what ports.  This information is not itself a
vulnerability, but may provide information for remote attackers about
what local Erlang services (such as `rabbitmq-server`) are running,
and where.

`epmd` supports an `ERL_EPMD_ADDRESS` environment variable to limit
which interfaces it binds on.  While this environment variable is set
in `/etc/default/rabbitmq-server`, Zulip unfortunately attempts to
start `epmd` using an explicit `exec` block, which ignores those
settings.

Regardless, this lack of `ERL_EPMD_ADDRESS` variable only controls
`epmd`'s startup upon first installation.  Upon reboot, there are two
ways in which `epmd` might be started, neither of which respect
`ERL_EPMD_ADDRESS`:

 - On Focal, an `epmd` service exists and is activated, which uses
   systemd's configuration to choose which interfaces to bind on, and
   thus `ERL_EPMD_ADDRESS` is irrelevant.

 - On Bionic (and Focal, due to a broken dependency from
   `rabbitmq-server` to `epmd@` instead of `epmd`, which may lead to
   the explicit `epmd` service losing a race), `epmd` is started by
   `rabbitmq-server` when it does not detect a running instance.
   Unfortunately, only `/etc/init.d/rabbitmq-server` would respect
   `/etc/default/rabbitmq-server` -- and it defers the actual startup
   to using systemd, which does not pass the environment variable
   down.  Thus, `ERL_EPMD_ADDRESS` is also irrelevant here.

We unfortunately cannot limit `epmd` to only listening on localhost,
due to a number of overlapping bugs and limitations:

 - Manually starting `epmd` with `-address 127.0.0.1` silently fails
   to start on hosts with IPv6 disabled, due to an Erlang bug ([1],
   [2]).

 - The dependencies of the systemd `rabbitmq-server` service can be
   fixed to include the `epmd` service, and systemd can be made to
   bind to `127.0.0.1:4369` and pass that socket to `epmd`, bypassing
   the above bug.  However, the startup of this service is not
   guaranteed, because it races with other sources of `epmd` (see
   below).

 - Any process that runs `rabbitmqctl` results in `epmd` being started
   if one is not currently running; these instances do not respect any
   environment variables as to which addresses to bind on.  This is
   also triggered by `service rabbitmq-server status`, as well as
   various Zulip cron jobs which inspect the rabbitmq queues.  As
   such, it is difficult-to-impossible to ensure that some other
   `epmd` process will not win the race and open the port on all
   interfaces.

Since the only known exposure from leaving port 4369 open is
information that rabbitmq is running on the host, and the complexity
of adjusting this to only bind on localhost is high, we remove the
setting which does not address the problem, and document that the port
is left open, and should be protected via system-level or
network-level firewalls.

[1]: https://bugs.launchpad.net/ubuntu/+source/erlang/+bug/1374109
[2]: https://github.com/erlang/otp/issues/4820
2022-01-24 23:09:02 +00:00
Alex Vandiver
5ff759d35c puppet: Remove rabbitmq_mochiweb configuration.
mochiweb was renamed to web_dispatch in RabbitMQ 3.8.0, and the plugin
is not enabled.  Nor does this control the management interface, which
would listen on port 15672.
2022-01-24 23:09:02 +00:00
Alex Vandiver
a0d1074212 ci: Cache with the OS name, not the job name.
The job name is just the constant `production_build`.  Renaming it to
have the OS in the key ensures that it is not shared across OSes (for
instance between `4.x` and `main`, which are now bionic and buster,
respectively), and also allows it to share caches with the install
step, which uses the OS name in that place.
2022-01-24 15:07:50 -08:00
Alex Vandiver
2e1e2b08f1 puppet: Fix standalone certbot configurations.
This addresses the problems mentioned in the previous commit, but for
existing installations which have `authenticator = standalone` in
their configurations.

This reconfigures all hostnames in certbot to use the webroot
authenticator, and attempts to force-renew their certificates.
Force-renewal is necessary because certbot contains no way to merely
update the configuration.  Let's Encrypt allows for multiple extra
renewals per week, so this is a reasonable cost.

Because the certbot configuration is `configobj`, and not
`configparser`, we have no way to easily parse to determine if webroot
is in use; additionally, `certbot certificates` does not provide this
information.  We use `grep`, on the assumption that this will catch
nearly all cases.

It is possible that this will find `authenticator = standalone`
certificates which are managed by Certbot, but not Zulip certificates.
These certificates would also fail to renew while Zulip is running, so
switching them to use the Zulip webroot would still be an improvement.

Fixes #20593.

(cherry picked from commit a3adaf4aa3)
2022-01-24 20:14:23 +00:00
Alex Vandiver
b44a1b68f6 setup: Install a temporary certificate, before certbot runs.
Installing certbot with --method=standalone means that the
configuration file will be written to assume that the standalone
method will be used going forward.  Since nginx will be running,
attempts to renew the certificate will fail.

Install a temporary self-signed certificate, just to allow nginx to
start, and then follow up (after applying puppet to start nginx) with
the call to setup-certbot, which will use the webroot authenticator.

The `setup-certbot --method=standalone` option is left intact, for use
in development environments.

Fixes part of #20593; it does not address installs which were
previously improperly configured with `authenticator = standalone`.

(cherry picked from commit 76ce8631c0)
2022-01-24 20:14:23 +00:00
Alex Vandiver
c3adbcea13 docs: Mention Camo does not use a local Smokescreen in the proxies docs.
This documents the new behaviour in d328d3dd4d.

(cherry picked from commit be1c4c2bd8)
2022-01-21 16:21:15 -08:00
Alex Vandiver
e088b343b3 puppet: Document that upgrades from Git require 3GB.
The step of rebuilding static assets using webpack requires more than
2G of RAM.

(cherry picked from commit 5f237cb34e)
2022-01-19 12:37:55 -08:00
Alex Vandiver
1d559bbffa puppet: Allow routing camo requests through an outgoing proxy.
Because Camo includes logic to deny access to private subnets, routing
its requests through Smokescreen is generally not necessary.  However,
it may be necessary if Zulip has configured a non-Smokescreen exit
proxy.

Default Camo to using the proxy only if it is not Smokescreen, with a
new `proxy.enable_for_camo` setting to override this behaviour if need
be.  Note that that setting is in `zulip.conf` on the host with Camo
installed -- not the Zulip frontend host, if they are different.

Fixes: #20550.
(cherry picked from commit d328d3dd4d)
2022-01-11 15:13:09 -08:00
Alex Vandiver
cb24f93bba puppet: Make zulipconf() understand coercion to bool.
If the default is a bool, coerce the value into a bool as well.  For
backwards compatibility, this does not adjust any existing callsites.

`queue_workers_multiprocess` is the only setting which is passed a
bool default, but it was already documented to take `true` or `false`;
simplify it to no longer add the now-unnecessary Boolean conversion.

(cherry picked from part of commit 2c5fc1827c)
2022-01-11 15:13:09 -08:00
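
A rough Python rendering of the coercion described above (the real logic lives
in a Puppet function; names here are illustrative): when the caller's default
is a bool, the string value from `zulip.conf` is interpreted as a bool too.

```
def zulipconf_bool(raw_value, default):
    if raw_value is None:
        return default
    return raw_value.strip().lower() in ("true", "yes", "1")

# zulipconf_bool("True", False) -> True
# zulipconf_bool(None, True)    -> True
```
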
Tim Abbott
868180a25d upgrade-zulip-from-git: Improve webpack failure error handling.
We've had a number of unhappy reports of upgrades failing due to
webpack requiring too much memory.  While the previous commit will
likely fix this issue for everyone, it's worth improving the error
message for failures here.

We avoid doing the stop+retry ourselves, because that could cause an
outage in a production system if webpack fails for another reason.

Fixes #20105.
2022-01-07 11:47:05 -08:00
Tim Abbott
20fc1f651a upgrade-zulip-from-git: Require more memory to run webpack.
Since the upgrade to Webpack 5, we've been seeing occasional reports
that servers with roughly 4GiB of RAM were getting OOM kills while
running webpack.

Since we can't readily optimize the memory requirements for webpack
itself, we should raise the RAM requirements for doing the
lower-downtime upgrade strategy.

Fixes #20231.
2022-01-07 11:47:05 -08:00
Emilio López
0d79d6735a docs: Clarify use of loadbalancer.ips when using a reverse proxy.
When Zulip is run behind one or more reverse proxies, you must
configure `loadbalancer.ips` so that Zulip respects the client IP
addresses found in the `X-Forwarded-For` header. This is not
immediately clear from the documentation, so this commit makes it more
clear and augments the existing examples to showcase this need.

Fixes: #19073
(cherry picked from commit baea14ee57)
2022-01-07 11:44:41 -08:00
Anders Kaseorg
45568a08c0 reindex-textual-data: Reindex textual functional indexes too.
This catches nine functional indexes that the previous query didn’t:

upper_preregistration_email_idx
upper_stream_name_idx
upper_subject_idx
upper_userprofile_email_idx
zerver_message_recipient_upper_subject
zerver_mutedtopic_stream_topic
zerver_stream_realm_id_name_uniq
zerver_userprofile_realm_id_delivery_email_uniq
zerver_userprofile_realm_id_email_uniq

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 1cc1de82cd)
2022-01-07 10:37:38 -08:00
Alex Vandiver
22152a0662 Revert "puppet: Do not assume amd64 architecture."
This reverts commit 859d88f76c.  It does
not work, since the sha256 hashes are different for different
architectures.

arm64 support exists in `main`.
2022-01-04 15:00:39 -08:00
Alya Abbott
9bbb336441 developer docs: Tweak ToS for push notifications wording. 2021-12-14 14:47:20 -08:00
Sahil Batra
3d966f1af9 message: Check wildcard mention restrictions while editing message.
This commit adds code to check whether a user is allowed to use
wildcard mention in a large stream or not while editing a message
based on the realm settings.

Previously this was only checked while sending message, thus user
was easily able to use wildcard mention by first sending a normal
message and then using a wildcard mention by editing it.

(cherry picked from commit b68ebf5a22)
2021-12-14 11:55:18 -08:00
Alex Vandiver
ab98f3801f setup-certbot: Reinstate nginx reload after installation.
If nginx was already installed, and we're using the webroot method of
initializing certbot, nginx needs to be reloaded.  Hooks in
`/etc/letsencrypt/renewal-hooks/deploy/` do not run during initial
`certbot certonly`, so an explicit reload is required.

(cherry picked from commit f6520a97cd)
2021-12-13 10:30:00 -08:00
Alex Vandiver
ddca8a7f9a puppet: Use certbot package timer, not our own cron job.
The certbot package installs its own systemd timer (and cron job,
which disables itself if systemd is enabled) which updates
certificates.  This process races with the cron job which Zulip
installs -- the only difference being that Zulip respects the
`certbot.auto_renew` setting, and that it passes the deploy hook.
This means that occasionally nginx would not be reloaded, when the
systemd timer caught the expiration first.

Remove the custom cron job and `certbot-maybe-renew` script, and
reconfigure certbot to always reload nginx after deploying, using
certbot directory hooks.

Since `certbot.auto_renew` can't have an effect, remove the setting.
In turn, this removes the need for `--no-zulip-conf` to
`setup-certbot`.  `--deploy-hook` is similarly removed, as running
deploy hooks to restart nginx is now the default; pass
`--no-directory-hooks` in standalone mode to not attempt to reload
nginx.  The other property of `--deploy-hook`, of skipping symlinking
into place, is given its own flag.

(cherry picked from commit 01e8f752a8)
2021-12-09 13:48:20 -08:00
Tim Abbott
c1c3dfced5 scripts: Fix running compare-settings-to-template from any CWD.
This matches the number of dirname() calls for other files in its
directory.

Fixes #20489.
2021-12-07 14:47:27 -08:00
Alex Vandiver
2d3f505505 puppet: Install camo on Docker.
Now that go-camo runs within supervisor, it can be run in Docker
simply.

Fixes #20101.
Fixes zulip/docker-zulip#179.

(cherry picked from commit f31bf3f06c)
2021-12-06 19:33:31 +00:00
Alex Vandiver
d3573af95c puppet: Read camo secret at startup time, not at puppet-apply time.
Writing the secret to the supervisor configuration file means changes
to the secret require a zulip-puppet-apply to take hold.  The Docker
image is constructed to avoid having to run zulip-puppet-apply on
startup, and indeed cannot run zulip-puppet-apply after having
configured secrets, as it has replaced the zulip.conf file with a
symlink, for example.  This means that camo gets the static secret
that was built into the image, and not the one regenerated on first
startup.

Read the camo secret at process startup time.  Because this pattern is
likely common with "12-factor" applications which can read from
environment variables, write a generic tool to map secrets to
environment variables before exec'ing a binary, and use that for Camo.

(cherry picked from commit 358a7fb0c6)
2021-12-06 19:33:31 +00:00
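
A hedged sketch of the "map secrets to environment variables, then exec"
pattern the commit describes; the secret name, environment variable, and
secrets path are illustrative assumptions, not a description of the actual
tool's interface.

```
import configparser
import os
import sys

def exec_with_secret_env(secret_name, env_var, argv,
                         secrets_path="/etc/zulip/zulip-secrets.conf"):
    config = configparser.ConfigParser()
    config.read(secrets_path)
    env = dict(os.environ)
    env[env_var] = config.get("secrets", secret_name)
    # Replace this process with the target binary, which reads the secret
    # from its environment at startup rather than from a config file baked
    # into the image at puppet-apply time.
    os.execvpe(argv[0], argv, env)

if __name__ == "__main__":
    exec_with_secret_env("camo_key", "GOCAMO_HMAC", sys.argv[1:])
```
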
Alex Vandiver
859d88f76c puppet: Do not assume amd64 architecture.
(cherry picked from commit 7db146d0a9)
2021-12-06 11:10:37 -08:00
Alex Vandiver
9a0fb497a4 changelog: Fix lint issues. 2021-12-01 23:39:28 +00:00
Alex Vandiver
7ea4ad75af version: Update version after 4.8 release. 2021-12-01 23:37:49 +00:00
Alex Vandiver
ae000bfdba Release Zulip Server 4.8 2021-12-01 23:17:46 +00:00
Mateusz Mandera
551b387164 CVE-2021-43791: Validate confirmation keys in /accounts/register/ codepath.
A confirmation link takes a user to the check_prereg_key_and_redirect
endpoint, before getting redirected to POST to /accounts/register/. The
problem was that validation was happening in the check_prereg_key_and_redirect
part and not in /accounts/register/ - meaning that one could submit an
expired confirmation key and be able to register.

We fix this by moving validation into /accounts/register/.
2021-12-01 23:13:11 +00:00
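
A minimal sketch of re-validating the key inside the registration endpoint
itself (helper names are hypothetical, not the actual Zulip functions), so an
expired confirmation can no longer be submitted directly to
/accounts/register/:

```
class ConfirmationKeyError(Exception):
    pass

def check_prereg_key(key):
    # Hypothetical helper: look up the confirmation and enforce expiry,
    # raising instead of silently accepting a stale key.
    confirmation = lookup_confirmation(key)  # illustrative
    if confirmation is None or confirmation.is_expired():
        raise ConfirmationKeyError("Confirmation key expired or invalid")
    return confirmation.prereg_user

def accounts_register(request):
    # Validation now happens here as well, not only in the earlier
    # check_prereg_key_and_redirect step.
    prereg_user = check_prereg_key(request.POST.get("key", ""))
    return complete_registration(request, prereg_user)  # illustrative
```
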
Mateusz Mandera
720d16e809 confirmation: Use error status codes for confirmation link error pages. 2021-12-01 20:28:51 +00:00
Alex Vandiver
f338ff64c3 puppet: Use sysv status command, not supervisorctl status.
Since Supervisor 4, which is installed on Ubuntu 20.04 and Debian 11,
`supervisorctl status` returns exit code 3 if any of the
supervisor-controlled processes are not running.

Using `supervisorctl status` as the Puppet `status` command for
Supervisor leads to unnecessarily trying to "start" a Supervisor
process which is already started, but happens to have one or more of
its managed processes stopped.  This is an unnecessary no-op in
production environments, but in docker-init environments, such as in
CI, attempting to start the process a second time is an error.

Switch to checking if supervisor is running by way of sysv init.  This
fixes the potential error in CI, as well as eliminates unnecessary
"starts" of supervisor when it was already running -- a situation
which made zulip-puppet-apply not idempotent:

```
root@alexmv-prod:~# supervisorctl status
process-fts-updates                                             STOPPED   Nov 10 12:33 AM
smokescreen                                                     RUNNING   pid 1287280, uptime 0:35:32
zulip-django                                                    STOPPED   Nov 10 12:33 AM
zulip-tornado                                                   STOPPED   Nov 10 12:33 AM
[...]

root@alexmv-prod:~# ~zulip/deployments/current/scripts/zulip-puppet-apply --force
Notice: Compiled catalog for alexmv-prod.zulipdev.org in environment production in 2.32 seconds
Notice: /Stage[main]/Zulip::Supervisor/Service[supervisor]/ensure: ensure changed 'stopped' to 'running'
Notice: Applied catalog in 0.91 seconds

root@alexmv-prod:~# ~zulip/deployments/current/scripts/zulip-puppet-apply --force
Notice: Compiled catalog for alexmv-prod.zulipdev.org in environment production in 2.35 seconds
Notice: /Stage[main]/Zulip::Supervisor/Service[supervisor]/ensure: ensure changed 'stopped' to 'running'
Notice: Applied catalog in 0.92 seconds
```

(cherry picked from commit 7af2fa2e92)
2021-12-01 12:19:30 -08:00
Tim Abbott
98610c984c i18n: Add Sinhala translation. 2021-11-30 15:09:31 -08:00
Tim Abbott
ab965e5892 i18n: Update translation data from Transifex. 2021-11-30 15:08:05 -08:00
PIG208
7a03827047 integrations: Add V3 support for PagerDuty. 2021-11-30 14:43:03 -08:00
PIG208
5954e622bc doc: Change supported extension type to reflect the change. 2021-11-30 14:42:57 -08:00
PIG208
687db48ea8 integrations: Change format of templates for PagerDuty V3.
Because the payload of V3 will no longer include the description,
we replace the ":" with "." in the message and create a new string
template for trigger messages.
2021-11-30 14:42:31 -08:00
Alex Vandiver
399391c3aa puppet: Default go-camo to listening on localhost for standalone deploys.
The default in the previous commit, inherited from camo, was to bind
to 0.0.0.0:9292.  In standalone deployments, camo is deployed on the
same host as the nginx reverse proxy, and as such there is no need to
open it up to other IPs.

Make `zulip::camo` take an optional parameter, which allows overriding
it in puppet, but skips a `zulip.conf` setting for it, since it is
unlikely to be adjusted by most users.

(cherry picked from commit c514feaa22)
2021-11-19 17:51:08 -08:00
Alex Vandiver
cd5eec5eea camo: Replace with go-camo implementation.
The upstream of the `camo` repository[1] has been unmaintained for
several years, and is now archived by the owner.  Additionally, it has
a number of limitations:
 - It is installed as a sysinit service, which does not run under
   Docker
 - It does not prevent access to internal IPs, like 127.0.0.1
 - It does not respect standard `HTTP_proxy` environment variables,
   making it unable to use Smokescreen to prevent the prior flaw
 - It occasionally just crashes, and thus must have a cron job to
   restart it.

Swap camo out for the drop-in replacement go-camo[2], which has the
same external API, requiring no changes to Django code, but is better
maintained.  Additionally, it resolves all of the above complaints.

go-camo is not configured to use Smokescreen as a proxy, because its
own private-IP filtering prevents using a proxy which lies within that
IP space.  It is also unclear if the addition of Smokescreen would
provide any additional protection over the existing IP address
restrictions in go-camo.

go-camo has a subset of the security headers that our nginx reverse
proxy sets, and which camo set; provide the missing headers with `-H`
to ensure that go-camo, if exposed from behind some other non-nginx
load-balancer, still provides the necessary security headers.

Fixes #18351 by moving to supervisor.
Fixes zulip/docker-zulip#298 also by moving to supervisor.

[1] https://github.com/atmos/camo
[2] https://github.com/cactus/go-camo

(cherry picked from commit b982222e03)
2021-11-19 17:50:47 -08:00
Alex Vandiver
e7d48c0c10 puppet: Default to installing smokescreen on application frontends.
This is an additional security hardening step, to make Zulip default
to preventing SSRF attacks.  The overhead of running Smokescreen is
minimal, and there is no reason to force deployments to take
additional steps in order to secure themselves against SSRF attacks.

Deployments which already have a different external proxy configured
will not gain a local Smokescreen installation, and running without
Smokescreen is supported by explicitly unsetting the `host` or `port`
values in `/etc/zulip/zulip.conf`.

(cherry picked from commit c33562f0a8)
2021-11-19 17:49:37 -08:00
Alex Vandiver
023dfc01ba puppet: Split smokescreen into a non-profile version.
In a subsequent commit, we intend to include it from
`zulip::app_frontend_base`, which is a layering violation if it only
exists in the form of a profile.

(cherry picked from commit 44f1ea6bae)
2021-11-19 17:49:22 -08:00
Alex Vandiver
5d9285fff3 puppet: Remove unused smokescreen symlink.
(cherry picked from commit c2ed3c22b5)
2021-11-19 17:48:38 -08:00
Alex Vandiver
53f353ec26 puppet: Tidy old smokescreen binaries.
(cherry picked from commit 47e16a5d41)
2021-11-19 17:48:38 -08:00
Alex Vandiver
245c87c567 puppet: Embed golang version into binary path, to rebuild on new golang.
This will cause the output binary path to be sensitive to golang
version, causing it to be rebuilt on new golang, and an updated
supervisor config file written out, and thus supervisor also
restarted.

(cherry picked from commit 239ac8413e)
2021-11-19 17:48:38 -08:00
Alex Vandiver
26aa4d57e3 puppet: Factor out smokescreen binary path.
(cherry picked from commit 216eeba2dd)
2021-11-19 17:48:37 -08:00
Alex Vandiver
bee225782a puppet: Switch smokescreen to using zulip::external_dep, so it tidies.
(cherry picked from commit 3a7cef6582)
2021-11-19 17:48:37 -08:00
Alex Vandiver
4a6e69357a puppet: Move /srv/smokescreen-src to /srv/zulip-smokescreen-src.
As with the previous commit for `/srv/golang`, we have the custom of
namespacing things under `/srv` with `zulip-` to help ensure that we
play nice with anything else that happens to be on the host.

(cherry picked from commit ea08111d60)
2021-11-19 17:48:37 -08:00
Anders Kaseorg
3e6d3810d4 puppet: Upgrade Smokescreen v0.0.2-59-gbfca45c to v0.0.2-63-gdc40301.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c64e1adb19)
2021-11-19 17:48:37 -08:00
Alex Vandiver
bc21dde235 puppet: Extract an external-tarball-dependency manifest.
(cherry picked from commit bb9d2df1ae)
2021-11-19 17:48:37 -08:00
Alex Vandiver
182ce488e2 puppet: Tidy old golang directories.
This relies on behavior which is only in Puppet 5.5.1 and above, which
means it must be skipped on Ubuntu 18.04.

(cherry picked from commit 3c8d7e2598)
2021-11-19 17:48:37 -08:00
Alex Vandiver
bd557a9a13 puppet: Move /srv/golang to /srv/zulip-golang.
We have the custom of namespacing things under `/srv` with `zulip-`
to help ensure that we play nice with anything else that happens
to be on the host.

(cherry picked from commit 2fc4acdf81)
2021-11-19 17:48:36 -08:00
Alex Vandiver
7e8ead7325 puppet: Switch dependency to the golang binary we need.
(cherry picked from commit 00a4abb642)
2021-11-19 17:48:36 -08:00
Alex Vandiver
8fa783f13d puppet: Stop making a /srv/golang symlink.
Nothing needs this extra directory.

(cherry picked from commit 2d5f813094)
2021-11-19 17:48:36 -08:00
Alex Vandiver
11924f4b66 puppet: Factor out golang variables.
(cherry picked from commit 93af6c7f06)
2021-11-19 17:48:36 -08:00
Alex Vandiver
f01cbba0ce puppet: Shorten golang version variable name.
(cherry picked from commit 21be36f15f)
2021-11-19 17:48:36 -08:00
Alex Vandiver
31050be173 puppet: Upgrade golang from 1.16.4 to 1.17.3.
(cherry picked from commit 6b9e74adee)
2021-11-19 17:48:35 -08:00
Alex Vandiver
56d857ca89 puppet: Split out golang toolchain into its own manifest.
(cherry picked from commit 514801c509)
2021-11-19 17:48:35 -08:00
Alex Vandiver
d587252ddb tornado: Move SIGTERM shutdown handler into a callback.
A SIGTERM can show up at any point in the ioloop, even in places which
are not prepared to handle it.  This results in the process ignoring
the `sys.exit` which the SIGTERM handler calls, with an uncaught
SystemExit exception:

```
2021-11-09 15:37:49.368 ERR  [tornado.application:9803] Uncaught exception
Traceback (most recent call last):
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/http1connection.py", line 238, in _read_message
    delegate.finish()
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/httpserver.py", line 314, in finish
    self.delegate.finish()
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/routing.py", line 251, in finish
    self.delegate.finish()
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2097, in finish
    self.execute()
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2130, in execute
    **self.path_kwargs)
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/gen.py", line 307, in wrapper
    yielded = next(result)
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 1510, in _execute
    result = method(*self.path_args, **self.path_kwargs)
  File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 150, in get
    request = self.convert_tornado_request_to_django_request()
  File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 113, in convert_tornado_request_to_django_request
    request = WSGIRequest(environ)
  File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/wsgi.py", line 66, in __init__
    script_name = get_script_name(environ)
  File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/event_queue.py", line 611, in <lambda>
    signal.signal(signal.SIGTERM, lambda signum, stack: sys.exit(1))
SystemExit: 1
```

Supervisor then terminates the process with a SIGKILL, which results
in dropping data held in the tornado process, as it does not dump its
queue.

The only command which is safe to run in the signal handler is
`ioloop.add_callback_from_signal`, which schedules the callback to run
during the course of the normal ioloop.  This callback does an
orderly shutdown of the server and the ioloop before exiting.

(cherry picked from commit bc5539d871)
2021-11-12 09:59:58 -08:00
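
A condensed sketch of the safe pattern (the server/loop wiring is
illustrative): the signal handler only schedules a callback, and the orderly
shutdown runs inside the normal ioloop rather than in whatever stack frame the
signal interrupted.

```
import signal

from tornado import ioloop

def install_sigterm_handler(http_server):
    loop = ioloop.IOLoop.current()

    def shutdown():
        # Runs on the next loop iteration, in a well-defined state.
        http_server.stop()   # stop accepting new connections
        loop.stop()          # main() exits after loop.start() returns

    def handler(signum, frame):
        # The only call that is safe to make from a signal handler.
        loop.add_callback_from_signal(shutdown)

    signal.signal(signal.SIGTERM, handler)
```
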
Alex Vandiver
eadefdf2f5 soft_deactivate: Handle multiple SUBSCRIPTION_DEACTIVATEDs.
Race conditions in stream unsubscription may lead to multiple
back-to-back SUBSCRIPTION_DEACTIVATED RealmAuditLog entries for the
same stream.  The current logic constructs duplicate UserMessage
entries for such, which then later fail to insert.

Keep a set of message-ids that have been prep'd to be inserted, so
that we don't duplicate them if there is a duplicated
SUBSCRIPTION_DEACTIVATED row.  This also renames the `message` local
variable, which otherwise overrode the `message` argument of a
different type.

(cherry picked from commit 6b6dcf6ce1)
2021-11-10 12:30:24 -08:00
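
A small sketch of the de-duplication described above (attribute names are
illustrative): message ids already queued for insertion are tracked in a set,
so a duplicated SUBSCRIPTION_DEACTIVATED row cannot produce a second
UserMessage for the same message.

```
def collect_user_message_ids(deactivation_log_rows):
    seen_message_ids = set()
    to_insert = []
    for row in deactivation_log_rows:
        for message_id in row.message_ids:  # illustrative attribute
            if message_id in seen_message_ids:
                continue
            seen_message_ids.add(message_id)
            to_insert.append(message_id)
    return to_insert
```
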
Anders Kaseorg
c05bbd0fd4 requirements: Upgrade Python requirements.
Sync versions from commit 069d6ced69 on
main, excluding django-auth-ldap, Jinja2, mypy, premailer, PyJWT,
semgrep, Sphinx, SQLAlchemy, zulip, and zulip-bots.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-11-03 20:47:32 -07:00
Tim Abbott
deedda2c18 push_notifications: Truncate overly large remove events.
Fixes #19224.
2021-11-03 11:41:57 -07:00
Tim Abbott
9bec6bb5eb docs: Extend Certbot troubleshooting documentation.
This should help folks who have problems with Certbot renewal; we had
a couple reported this week which I think were both caused by firewall
issues.
2021-11-02 21:35:50 -07:00
Alex Vandiver
634b6ea97b markdown: CSS-escape preview links.
This adds `soupsieve` as an explicit dependency, but intentionally
does not adjust the provision version, as it was already an indirect
dependency.

(cherry picked from commit 6a40c17ccf)
2021-10-27 05:23:34 +00:00
Alex Vandiver
10583bdb32 markdown: Run URL preview links through camo.
Not proxying these requests through camo is a security concern.
Furthermore, on the desktop client, any embed image which is hosted on
a server with an expired or otherwise invalid certificate will trigger
a blocking modal window with no clear source and a confusing error
message; see zulip/zulip-desktop#1119.

Rewrite all `message_embed_image` URLs through camo, if it is enabled.

(cherry picked from commit 52f74bbd9b)
2021-10-27 04:36:47 +00:00
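
A sketch of the standard camo URL scheme that go-camo understands (the host
and key are placeholders): the embed image URL is HMAC-SHA1-signed with the
shared camo key, and the digest plus the hex-encoded URL are served from the
camo host instead of the original origin.

```
import hashlib
import hmac

def get_camo_url(url, camo_uri="https://camo.example.com/", camo_key=b"secret"):
    digest = hmac.new(camo_key, url.encode(), hashlib.sha1).hexdigest()
    return f"{camo_uri}{digest}/{url.encode().hex()}"

# Per the commit above, message_embed_image URLs in rendered previews would be
# rewritten through something like this before being sent to clients.
```
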
Mateusz Mandera
ebb6a92f71 saml: Don't raise AssertionError if no name is provided in SAMLResponse.
This is an acceptable edge case for SAML and shouldn't raise any errors.
2021-10-26 16:48:23 -07:00
Alex Vandiver
80b7df1b0d scheduled_email: Consistently lock users table.
Only clear_scheduled_emails previously took a lock on the users before
removing them; make deliver_scheduled_emails do so as well, by using
prefetch_related to ensure that the table appears in the SELECT.  This
is not necessary for correctness, since all accesses of
ScheduledEmailUser first access the ScheduledEmail and lock it; it is
merely for consistency.

Since SELECT ... FOR UPDATE takes an UPDATE lock on all tables
mentioned in the SELECT, merely doing the prefetch is sufficient to
lock both tables; no `on=(...)` is needed to `select_for_update`.

This also does not address the pre-existing potential deadlock from
these two use cases, where both try to lock the same ScheduledEmail
rows in opposite orders.

(cherry picked from commit 4c518c2bba)
2021-10-18 17:06:11 -07:00
Alex Vandiver
7b6cee1164 send_email: Change clear_scheduled_emails to only take one user.
No codepath except tests passes in more than one user_profile -- and
doing so is what makes the deduplication necessary.

Simplify the API by making it only take one user_profile id.

(cherry picked from commit ebaafb32f3)
2021-10-18 17:06:11 -07:00
Alex Vandiver
99cc5598ac send_email: Fix sleep logic.
This was broken in the refactor in 1e67e0f218.

(cherry picked from commit 4ffda1be87)
2021-10-18 17:06:11 -07:00
Alex Vandiver
d23778869f deliver_scheduled_*: SELECT FOR UPDATE the relevant rows.
`deliver_scheduled_emails` and `deliver_scheduled_messages` use their
respective tables like a queue, but do not have guarantees that there
was only one consumer (besides the EMAIL_DELIVERER_DISABLED setting),
and could send duplicate messages if multiple consumers raced in
reading rows.

Use database locking to ensure that the database only feeds a given
ScheduledMessage or ScheduledEmail row to a single consumer.  A second
consumer, if it exists, will block until the first consumer commits
the transaction.

(cherry picked from commit 1e67e0f218)
2021-10-18 17:06:11 -07:00
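
A hedged Django sketch of the locking pattern (model and field names are
illustrative): each worker claims due rows with `select_for_update()` inside a
transaction, so a second worker blocks until the first commits instead of
delivering the same row twice.

```
from django.db import transaction
from django.utils.timezone import now

def deliver_due_emails(ScheduledEmail, send_email):
    while True:
        with transaction.atomic():
            email = (
                ScheduledEmail.objects.select_for_update()
                .filter(scheduled_timestamp__lte=now())
                .order_by("scheduled_timestamp")
                .first()
            )
            if email is None:
                break
            send_email(email)
            email.delete()
```
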
Adam Benesh
6ba333c2ff puppet: Add WSGIApplicationGroup config to Apache SSO example.
Zulip apparently is now affected by a bad interaction between Apache's
WSGI using Python subinterpreters and C extension modules like `re2`
that are not designed for it.

The solution is apparently to set WSGIApplicationGroup to %{GLOBAL},
which disables Apache's use of Python subinterpreters.

See https://serverfault.com/questions/514242/non-responsive-apache-mod-wsgi-after-installing-scipy/514251#514251 for background.

Fixes #19924.
2021-10-08 15:08:14 -07:00
rht
3cf07d1671 Slack import: Use Python ZipFile to unzip.
This should handle the case when non-ASCII Unicode folder names are
created on Windows.

Fixes #19899.
2021-10-07 09:47:20 -07:00
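
A sketch of the portable unzip step (paths are placeholders): Python's
`zipfile` handles non-ASCII member names more predictably than shelling out to
an external unzip binary, particularly for archives created on Windows.

```
import zipfile

def extract_slack_export(zip_path, output_dir):
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(output_dir)
```
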
rht
1b4832a703 slack_import: Remove obsolete SlackImportAttachment placeholder.
This was introduced in f4ad464d82, and
incompletely removed in e037c2f93e649c28a71c02559b5ae7a3333f42a8; here
we finish removing it.
2021-10-07 09:47:20 -07:00
Alex Vandiver
af5958e407 data_import: Protect better against bad Slack tokens.
An invalid token would be treated the same as a token with no scopes;
differentiate these better.
2021-10-07 09:47:20 -07:00
Alex Vandiver
a659944fe3 data_import: Support importing from Slack conversions in a directory.
Sometimes the Slack import zip file we get isn't quite the canonical
form that Slack produces -- often because the user has unzip'd it,
looked at it, and re-zip'd it, resulting in extra nested directories
and the like.

For such cases, support passing in a path to an unpacked Slack export
tree.
2021-10-07 09:47:20 -07:00
Alex Vandiver
19db2fa773 import_data: Do some quick verification of Slack import formats. 2021-10-07 09:47:20 -07:00
Priyansh Garg
b303477e86 data_import: Make slack bot emails unique.
Slack bot emails generated by us can be duplicated between two bots.
If such a case occurs, append a counter to the email to make it
unique.

For maintaining the counter of duplicate emails and the final
email assigned to each bot, a class based approach is used with
static variables and static (class) methods. This keeps all the
data related to slack bot emails at the same place and easily
accessible from anywhere inside the module (without defining any
class object and passing it around).

Fixes: #16793
2021-10-07 09:47:20 -07:00
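
A rough rendering of the class-based counter described above (illustrative,
not the importer's exact code): duplicate generated bot emails get a numeric
suffix, and the final address for each bot is memoized on the class.

```
class SlackBotEmail:
    duplicate_email_count: dict = {}
    assigned_email: dict = {}

    @classmethod
    def get_email(cls, bot_id: str, email: str) -> str:
        if bot_id in cls.assigned_email:
            return cls.assigned_email[bot_id]
        count = cls.duplicate_email_count.get(email, 0)
        cls.duplicate_email_count[email] = count + 1
        if count > 0:
            local, _, domain = email.partition("@")
            email = f"{local}-bot{count + 1}@{domain}"
        cls.assigned_email[bot_id] = email
        return email
```
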
Alex Vandiver
5c01e23776 version: Update version after 4.7 release. 2021-10-04 14:24:43 -07:00
Alex Vandiver
4e724c1ec6 Release Zulip Server 4.7 2021-10-04 17:31:57 +00:00
Alex Vandiver
e2d303c1bb CVE-2021-41115: Use re2 for user-supplied linkifier patterns.
Zulip attempts to validate that the regular expressions that admins
enter for linkifiers are well-formatted, and only contain a specific
subset of regex grammar.  The process of checking these
properties (via a regex!) can cause denial-of-service via
backtracking.

Furthermore, this validation itself does not prevent the creation of
linkifiers which themselves cause denial-of-service when they are
executed.  As the validator accepts literally anything inside of a
`(?P<word>...)` block, any quadratic backtracking expression can be
hidden therein.

Switch user-provided linkifier patterns to be matched in the Markdown
processor by the `re2` library, which is guaranteed constant-time.
This somewhat limits the possible features of the regular
expression (notably, look-ahead and -behind, and back-references);
however, these features had never been advertised as working in the
context of linkifiers.

A migration removes any existing linkifiers which would not function
under re2, after printing them for posterity during the upgrade; they
are unlikely to be common, and are impossible to fix automatically.

The denial-of-service in the linkifier validator was discovered by
@erik-krogh and @yoff, as GHSL-2021-118.
2021-10-04 17:24:37 +00:00
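
A hedged sketch of the swap (using the `re2` module from the `google-re2`
package added in the next commit in this list): user-supplied linkifier
patterns are compiled with re2, which matches in linear time and simply
rejects constructs such as back-references and look-around.

```
import re2  # drop-in replacement for `re`, provided by the google-re2 package

def compile_linkifier(pattern: str):
    try:
        return re2.compile(pattern)
    except re2.error:
        # e.g. look-ahead/behind or back-references, which re2 rejects by design.
        return None
```
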
Alex Vandiver
d3091a6096 requirements: Add google-re2, a drop-in replacement for re using re2.
re2[1] compiles (strictly) regular expressions to deterministic finite
automata, which guarantees linear-time behavior; `google-re2` is a
drop-in replacement for the `re` module which uses re2 under the hood.

[1]: https://github.com/google/re2/
2021-10-02 01:01:14 +00:00
Alex Vandiver
313bcfd02a github: Ignore CodeQL analysis in private repos.
CodeQL only runs in public repos; private forks will otherwise error
their CI runs.

(cherry picked from commit acbe7ae7a8)
2021-10-01 18:00:52 -07:00
Gaurav Pandey
09bfd485e9 ci: Remove unnecessary steps from production upgrade script.
This removes some steps which are no longer necessary to be run
in the production upgrade script. The steps were used due to
errors related to supervisor failing to restart, which were resolved
in commit 08c39a7388.

(cherry picked from commit dc2066c7e8)
2021-10-01 18:00:52 -07:00
Anders Kaseorg
576ae9cc9f ci: Use apt-get -y in production-upgrade test.
We currently configure ‘APT::Get::Assume-Yes’ in our custom Docker
image, but this is the only place we rely on it (outside of the
Dockerfile itself), and it’s better not to.

Also ‘apt-get remove && apt-get purge’ is the same as just ‘apt-get
purge’.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit db476bdc51)
2021-10-01 18:00:52 -07:00
Alex Vandiver
300447ddd9 ci: Use an init process to reap defunct processes.
When Github Actions run in Docker, the default pid 1 entrypoint is
`tail -f /dev/null`.  PID 1 is responsible for propagating signals to
its children, and calling `waitpid()` on defunct processes; `tail`
does not do these things.  This results in zombie processes piling up
inside the container, which is not an issue in most contexts.

However, it affects `start-stop-daemon`, which hangs when stopping
daemon processes, as they are never reaped.  This appears in CI as
`/etc/init.d/supervisor restart` never being able to succeed.

Run the docker container with `--init`, which spawns a
`/sbin/docker-init` PID 1 to handle the job of an init process.

(cherry picked from commit 2daad58afa)
2021-10-01 18:00:52 -07:00
Gaurav Pandey
f8149b0d5a ci: Add prod upgrade step to prod suite.
This adds a check in the current production suite of
CI that upgrades a previous release of zulip server
with a newer one.

Fixes #18346.

(cherry picked from commit e648ad3477)
2021-10-01 18:00:52 -07:00
Priyank Patel
b579dad7d9 github-actions: Upgrade styfle/cancel-workflow-action.
(cherry picked from commit 05510a8c04)
2021-10-01 18:00:52 -07:00
Priyank Patel
fdfabb800d github-actions: Ensure cancel previous run job never fails.
(cherry picked from commit 607110ca33)
2021-10-01 18:00:52 -07:00
Tim Abbott
2c4156678c docs: Inline some upgrade instructions.
It feels like the "Same as" content was unnecessarily requiring the
user to bounce around in these cases.

(I've left the "Same as" text for the Ubuntu ones, where it's two
steps in a row to follow).
2021-10-01 11:10:13 -07:00
Gaurav Pandey
0a87276a27 docs: Document upgrade steps from buster to bullseye.
Fixes #17863.
2021-10-01 11:10:12 -07:00
Tim Abbott
19aed43817 version: Update version after 4.6 release. 2021-09-23 16:14:53 -07:00
Tim Abbott
d370aefe3a Release Zulip Server 4.6. 2021-09-23 16:09:51 -07:00
Anders Kaseorg
0f5657b0ed setup_venv: Skip virtualenv’s automatic download of setuptools.
It recently started failing on Debian 10 (buster).  We immediately
follow this by replacing these packages with our own versions from
pip.txt, anyway.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 902883d818)
2021-09-23 15:06:39 -07:00
Tim Abbott
24277a144e outgoing webhooks: Fix inconsistencies with Slack's API.
Apparently, our slack compatible outgoing webhook format didn't
exactly match Slack, especially in the types used for values.  Fix
this by using a much more consistent format, where we preserve their
pattern of prefixing IDs with letters.

This fixes a bug where Zulip's team_id could be the empty string,
which tripped up using GitLab's slash commands with Zulip.

Fixes #19588.
2021-09-23 14:49:36 -07:00
Tim Abbott
df8b8b9836 i18n: Update translation data from Transifex. 2021-09-23 12:17:05 -07:00
Anders Kaseorg
64fab06adb ci: Remove legacy-os test.
As of yesterday, the GitHub Actions ubuntu-16.04 environment has been
removed.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e96abc3c5a)
2021-09-22 16:00:30 -07:00
Gaurav Pandey
9391840d34 docs: Add documentation for bullseye support.
Support for bullseye was added in #17951,
but it was not documented, as bullseye was
frozen and did not have proper configuration
files at the time.

Now that bullseye is released as a stable
version, its support can be documented.

(cherry picked from commit 502697d239)
2021-09-14 22:02:48 +00:00
Eeshan Garg
658e641d12 docs: Indicate latest Zulip version in installation and upgrade docs.
With copy-editing from tabbott, and also a migration to use
LATEST_RELEASE_VERSION, which will be correct even on the /latest/
paths.

Fixes #19695.

(cherry picked from commit 3b1cb0b25a)
2021-09-10 17:07:53 -07:00
Alex Vandiver
467723145b tools: Switch to download.zulip.com from www.zulip.org.
(cherry picked from commit 7d7d727865)
2021-09-10 17:07:34 -07:00
Anders Kaseorg
4ce37176db docs: Migrate from recommonmark to MyST-Parser.
Recommonmark is no longer maintained, and MyST-Parser is much more
complete.

https://myst-parser.readthedocs.io/

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-09-10 16:12:52 -07:00
Anders Kaseorg
82bf185b1b lint: Add Markdown files to Prettier linter.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c3448370a4)
2021-09-10 16:02:22 -07:00
Anders Kaseorg
d81ce3ba76 docs: Format Markdown with Prettier.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit a4dbc1edd4)
2021-09-10 16:02:22 -07:00
Anders Kaseorg
aa6e70382d docs: Apply sentence single-spacing from Prettier.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 35c1c8d41b)
2021-09-10 16:02:22 -07:00
Anders Kaseorg
0147c6adce docs: Apply bullet style changes from Prettier.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 915884bff7)
2021-09-10 16:02:22 -07:00
Anders Kaseorg
5ae8fe292d docs: Rewrap to avoid line breaks in inline code spans.
This works around https://github.com/prettier/prettier/issues/11372.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 6145fdf678)
2021-09-10 16:02:22 -07:00
Anders Kaseorg
2e8d8ca044 docs: Fix pip compile typo.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit a6e01b35fc)
2021-09-10 16:02:22 -07:00
Shelly
ec0835b947 models: Add setters for is_realm_owner and is_moderator.
This fixes a regression where one could end up deactivating all owners
of a realm when trying to synchronize LDAP with the `is_realm_admin`
flag configured in `AUTH_LDAP_USER_FLAGS_BY_GROUP`.

With tweaks by tabbott to add is_moderator as well.

Fixes #18677.
2021-09-07 17:16:20 -07:00
Anders Kaseorg
e5e7e58c99 docs: Display main branch name as inline code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit f4d2d199e2)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
6a6c6d469b Rename default branch to ‘main’.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 646c04eff2)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
34512727e4 integrations: Document default branch name updates.
53e59c8c09

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit e5a818b869)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
da3396b4d7 docs: Update links for other repository branch renames.
GitHub redirects these, but we should use the canonical URLs.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 1ce12191aa)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
3f1b444a9a prettier: Exclude backend-processed Markdown files.
Our backend processor is not yet sufficiently CommonMark compliant to
accept Prettier formatted Markdown files.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7df2be0965)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
d5a5d0a3e7 prettier: Disable embedded language formatting for Markdown.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 08fb51483b)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
bac90f6a9d editorconfig: Restore indent_size = 2 for Markdown.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 676fc93e1f)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
9fbfdb0aca docs: Avoid [GitHub] as an internal Markdown link reference name.
To avoid confusing the linter later when Prettier lowercases these.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit fdb7ec8c9e)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
7fe1e55483 reading-list: Inline links.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 0e4a30daad)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
cb0d29d845 docs: Escape asterisks for Prettier compatibility.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 7b3d4ff1de)
2021-09-07 13:56:41 -07:00
Anders Kaseorg
1c83ebfc71 docs: Adjust list item indentation for Prettier compatibility.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 0a3e022376)
2021-09-07 13:56:40 -07:00
Anders Kaseorg
8d040d36ed docs: Fix list item indentation mistakes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 4bfffc9f74)
2021-09-07 13:56:40 -07:00
Anders Kaseorg
f4b955f2ee docs: Fix “sinternet” typo.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 962f14995e)
2021-09-07 13:56:40 -07:00
Anders Kaseorg
aa3f9004ba docs: Add missing blockquote.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit fe3db63381)
2021-09-07 13:56:40 -07:00
Anders Kaseorg
90bf44bde0 docs: Add syntax highlighting languages to code blocks.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit b29b6f6526)
2021-09-07 13:56:40 -07:00
Anders Kaseorg
dbb7bc824c docs: Remove trailing newlines from code blocks.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 8f2827a65d)
2021-09-07 13:30:53 -07:00
Anders Kaseorg
3d4071fea7 docs: Fix misaligned Markdown source indentation.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit cb61ea69c6)
2021-09-07 13:30:53 -07:00
Anders Kaseorg
eb7464c68d docs: Fix code span syntax in embedded reST block.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 25c6d5c99c)
2021-09-07 13:30:22 -07:00
Anders Kaseorg
1c2deb0cd3 docs: Move authentication-methods#ldap anchor to appropriate heading.
Commit 30eaed0378 (#15001) incorrectly
inserted a different section between the anchor and the heading.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit c3646ec67f)
2021-09-07 13:18:07 -07:00
Anders Kaseorg
26f4ab9a9d upgrade-zulip-from-git: Run git fetch with --prune.
This prevents upgrading to an obsolete version of a branch that has
been deleted or renamed.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 02582c6956)
2021-09-01 15:56:45 -07:00
Alex Vandiver
5feba78939 upgrade-postgresql: Do not remove other supervisor configs.
We previously used `zulip-puppet-apply` with a custom config file,
with an updated PostgreSQL version but a more limited set of
`puppet_classes`, to pre-create the basic settings for the new cluster
before running `pg_upgradecluster`.

Unfortunately, the supervisor config uses `purge => true` to remove
all SUPERVISOR configuration files that are not included in the puppet
configuration; this leads to it removing all other supervisor
processes during the upgrade, only to add them back and start them
during the second `zulip-puppet-apply`.

It also leads to `process-fts-updates` not being started after the
upgrade completes; this is the one supervisor config file which was
not removed and re-added, and thus the one that is not re-started,
since it was never re-added.  This was not detected in CI because CI added
a `start-server` command which was not in the upgrade documentation.

Set a custom facter fact that prevents the `purge` behaviour of the
supervisor configuration.  We want to preserve that behaviour in
general, and using `zulip-puppet-apply` continues to be the best way
to pre-set-up the PostgreSQL configuration -- but we wish to avoid
that behaviour when we know we are applying a subset of the puppet
classes.

Since supervisor configs are no longer removed and re-added, this
requires an explicit start-server step in the instructions after the
upgrades complete.  This brings the documentation into alignment with
what CI is testing.
2021-08-24 19:02:24 -07:00
Mateusz Mandera
04600acbbb management: Rename clear_auth_rate_limit_history command.
(cherry picked from commit 7ef1a024db)
2021-08-23 11:54:09 -07:00
Mateusz Mandera
6ffbb6081b rate_limit: Add management command to reset auth rate limit.
The auth attempt rate limit is quite low (on purpose), so this can be a
common scenario where a user asks their admin to reset the limit instead
of waiting. We should provide a tool for administrators to handle such
requests without fiddling around with code in manage.py shell.

(cherry picked from commit fdbde59b07)
2021-08-23 11:54:02 -07:00
Iam-VM
1f2767f940 migrations: Fix possible 0257_fix_has_link_attribute.py failure.
While it should be an invariant that message.rendered_content is never
None for a row saved to the database, it is possible for that
invariant to be violated, likely including due to bugs in previous
versions of data import/export tools.

While it'd be ideal for such messages to be rendered to fix the
invariant, it doesn't make sense for this has_link migration to crash
because of such a corrupted row, so we apply the same policy we
already have for rendered_content="".
2021-08-04 12:52:22 -07:00
Tim Abbott
9173ed0fb9 message_edit: Fix live update bug in left sidebar.
We've had for years a subtle bug, where after editing a topic in the
left sidebar that had previously had unread messages (but doesn't
anymore), the old topic might still appear in the sidebar.

The bug was hard to notice except for new organizations or in the
development environment, because the pre-edit topic appeared with a
sort key of -Infinity (that being the max ID in an empty list of
message IDs). But this is an important onboarding bug, since it
reduces faith in Zulip's topic editing just working, so I'm glad to
have it fixed.

Fixes #11901.
2021-07-29 15:01:39 -07:00
Mateusz Mandera
303bde6c55 email-mirror-postfix: Choose scheme based on http_only config.
Fixes #16659.
If the server is behind a reverse proxy with http_only=True, the
requests made by email-mirror-postfix need to use http, as https
doesn't work.
2021-07-29 15:00:39 -07:00
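A minimal sketch of the idea, assuming a boolean `http_only` flag read from the deployment config (the function, parameter, and path names here are illustrative, not the script's actual ones):
```
def build_mirror_url(host: str, path: str, http_only: bool) -> str:
    # When the Zulip server is only reachable over plain HTTP behind a
    # reverse proxy, https:// requests from this script would fail, so
    # pick the scheme from the http_only setting.
    scheme = "http" if http_only else "https"
    return f"{scheme}://{host}{path}"


print(build_mirror_url("zulip.example.com", "/email_mirror_message", http_only=True))
# -> http://zulip.example.com/email_mirror_message
```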
Tim Abbott
bc118496a2 i18n: Update translation data from Transifex. 2021-07-27 16:35:41 -07:00
Tim Abbott
f118da6b86 version: Update version after 4.5 release. 2021-07-25 16:03:39 -07:00
Tim Abbott
1ba708ca96 Release Zulip Server 4.5. 2021-07-25 15:40:46 -07:00
Alex Vandiver
e156db2bc7 reindex-textual-data: Provide a tool to reindex all text indices.
The script is added to upgrade steps for 20.04 and Buster because
those are the upgrades that cross glibc 2.28, which is most
problematic.  It will also be called out in the upgrade notes, to
catch those that have already done that upgrade.
2021-07-25 15:36:11 -07:00
Alex Vandiver
d0235add03 version: Update version after 4.4 release. 2021-07-22 17:10:37 -07:00
Alex Vandiver
a6b06df895 Release Zulip Server 4.4. 2021-07-22 22:32:34 +00:00
Anders Kaseorg
2df2f7eec6 fenced_code: Optimize FENCE_RE to fix cubic worst-case complexity.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-07-22 21:31:36 +00:00
Anders Kaseorg
ad858d2c79 fenced_code: Write FENCE_RE with a raw string.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-07-22 21:31:36 +00:00
Alex Vandiver
5290f17adb puppet: Run the supervisor-restart step only after it is started.
In an initial install, the following is a potential rule ordering:
```
Notice: /Stage[main]/Zulip::Supervisor/File[/etc/supervisor/conf.d/zulip]/ensure: created
Notice: /Stage[main]/Zulip::Supervisor/File[/etc/supervisor/supervisord.conf]/content: content changed '{md5}99dc7e8a1178ede9ae9794aaecbca436' to '{md5}7ef9771d2c476c246a3ebd95fab784cb'
Notice: /Stage[main]/Zulip::Supervisor/Exec[supervisor-restart]: Triggered 'refresh' from 1 event
[...]
Notice: /Stage[main]/Zulip::App_frontend_base/File[/etc/supervisor/conf.d/zulip/zulip.conf]/ensure: defined content as '{md5}d98ac8a974d44efb1d1bb2ef8b9c3dee'
[...]
Notice: /Stage[main]/Zulip::App_frontend_once/File[/etc/supervisor/conf.d/zulip/zulip-once.conf]/ensure: defined content as '{md5}53f56ae4b95413bfd7a117e3113082dc'
[...]
Notice: /Stage[main]/Zulip::Process_fts_updates/File[/etc/supervisor/conf.d/zulip/zulip_db.conf]/ensure: defined content as '{md5}96092d7f27d76f48178a53b51f80b0f0'
Notice: /Stage[main]/Zulip::Supervisor/Service[supervisor]/ensure: ensure changed 'stopped' to 'running'
```

The last line is misleading -- supervisor was already started by the
`supervisor-restart` process on the third line.  As can be shown with
`zulip-puppet-apply --debug`, the last line just installs supervisor
to run on startup, using `systemctl`:
```
Debug: Executing: 'supervisorctl status'
Debug: Executing: '/usr/bin/systemctl unmask supervisor'
Debug: Executing: '/usr/bin/systemctl start supervisor'
```

This means the list of processes started by supervisor depends
entirely on which configuration files were successfully written out by
puppet before the initial `supervisor-restart` ran.  Since
`zulip_db.conf` is written later than the rest, the initial install
often fails to start the `process-fts-updates` process.  In this
state, an explicit `supervisorctl restart` or `supervisorctl reread &&
supervisorctl update` is required for the service to be found and
started.

Reorder the `supervisor-restart` exec to only run after the service is
started.  Because all supervisor configuration files have a `notify`
of the service, this forces the ordering of:

```
(package) -> (config files) -> (service) -> (optional restart)
```

On first startup, this will start and then immediately restart
supervisor, which is unfortunate but unavoidable -- and not terribly
relevant, since the database will not have been created yet, and thus
most processes will be in a restart loop for failing to connect to it.
2021-07-22 14:23:41 -07:00
Alex Vandiver
9824a9d7cf puppet: Work around sysvinit supervisor init bug.
The sysvinit script for supervisor has a long-standing bug where
`/etc/init.d/supervisor restart` stops but does not then start the
supervisor process.

Work around this by making restart then try to start, and return if it
is currently running.
2021-07-22 14:23:41 -07:00
Alex Vandiver
88a2a80d81 ci: Use an init process to reap defunct processes.
When GitHub Actions runs in Docker, the default PID 1 entrypoint is
`tail -f /dev/null`.  PID 1 is responsible for propagating signals to
its children, and calling `waitpid()` on defunct processes; `tail`
does not do these things.  This results in zombie processes piling up
inside the container, which is not an issue in most contexts.

However, it affects `start-stop-daemon`, which hangs when stopping
daemon processes, as they are never reaped.  This appears in CI as
`/etc/init.d/supervisor restart` never being able to succeed.

Run the docker container with `--init`, which spawns a
`/sbin/docker-init` PID 1 to handle the job of an init process.
2021-07-22 14:23:37 -07:00
Erik Tews
5b16ee0c08 auth: Show _OR_ during login only when other methods are available.
There might be good reasons to have other external authentication
methods such as SAML configured, but none of them is available.

This happens, for example, when you have enabled SAML so that Zulip is
able to generate the metadata in XML format, but you haven't
configured an IdP yet. This commit makes sure that the phrase _OR_ is
only shown on the login/account page when there are actually other
authentication methods available. When they are merely configured but
not yet available, the page looks as if no external authentication
methods are configured.

We achieve this by deleting any_social_backend_enabled, which was very
similar to page_params.external_authentication_methods, which
correctly has one entry per configured SAML IdP.
2021-07-20 14:31:54 -07:00
Tim Abbott
17dced26ff i18n: Update translation data from Transifex. 2021-07-15 09:44:04 -07:00
Alex Vandiver
fc9c5b1f43 puppet: Ensure psycopg2 is installed before running process_fts_updates.
Not having the package installed will cause startup failures in
`process_fts_updates`; ensure that we've installed the package before
we potentially start the service.
2021-07-15 00:25:39 +00:00
Alex Vandiver
564873a207 smokescreen: Default to only listening on 127.0.0.1.
This prevents Smokescreen from acting as an open proxy.

Fixes #19214.
2021-07-14 15:41:33 -07:00
Mateusz Mandera
c692263255 management: Add change_password command.
Zulip identifies users by realm + delivery_email, which means that
Django's built-in changepassword command doesn't work well,
since it looks only at the .email field.
Thus we fork its code into our own change_password command.
2021-07-09 12:34:56 -07:00
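For flavor, a stripped-down sketch of such a command (the model lookup and field names below are assumptions used for illustration; the real command differs):
```
from getpass import getpass
from typing import Any

from django.core.management.base import BaseCommand, CommandError


class Command(BaseCommand):
    help = "Change a user's password, looking them up by realm + delivery_email."

    def add_arguments(self, parser: Any) -> None:
        parser.add_argument("--realm", required=True, help="realm subdomain")
        parser.add_argument("email", help="delivery email of the user")

    def handle(self, *args: Any, **options: Any) -> None:
        # Assumed Zulip models/fields; shown only to illustrate the lookup.
        from zerver.models import Realm, UserProfile

        try:
            realm = Realm.objects.get(string_id=options["realm"])
            user = UserProfile.objects.get(
                realm=realm, delivery_email__iexact=options["email"]
            )
        except (Realm.DoesNotExist, UserProfile.DoesNotExist):
            raise CommandError("No such user in that realm.")

        user.set_password(getpass("New password: "))
        user.save(update_fields=["password"])
```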
Mateusz Mandera
bfe428f608 saml: Add setting to skip the "continue to registration" page.
It makes for a smoother Just-In-Time provisioning process to allow
the account to be created and the user signed in on their first
login.
2021-07-08 15:21:40 -07:00
Mateusz Mandera
d200e3547f embed_links: Interrupt consume() function on worker timeout.
This fixes a bug introduced in 95b46549e1
which made the worker simply log a warning about the timeout and then
continue consume()ing the event that should have also been interrupted.

The idea here is to introduce an exception which can be used to
interrupt the consume() process without triggering the regular handling
of exceptions that happens in _handle_consume_exception.
2021-07-07 09:25:13 -07:00
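The pattern can be sketched in isolation with SIGALRM (illustrative names only; the real worker plumbing is more involved):
```
import signal
from types import FrameType
from typing import Optional


class InterruptConsumeException(Exception):
    """Raised inside consume() when the watchdog timer fires."""


def timer_expired(signum: int, frame: Optional[FrameType]) -> None:
    raise InterruptConsumeException


def consume_with_timeout(event: object, timeout_seconds: int = 30) -> None:
    signal.signal(signal.SIGALRM, timer_expired)
    signal.alarm(timeout_seconds)
    try:
        process(event)  # stands in for the worker's real consume() body
    except InterruptConsumeException:
        # The slow event is abandoned rather than left running, and the
        # regular exception-handling path is deliberately bypassed.
        print(f"Timed out while processing {event!r}; skipping")
    finally:
        signal.alarm(0)  # always clear any pending alarm


def process(event: object) -> None:
    ...
```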
Tim Abbott
b6afa4a82b test_queue_worker: Fix order-dependent assertions. 2021-07-06 14:37:28 -07:00
Mateusz Mandera
4db187856d embed_links: Only log warning if worker times out.
Throwing an exception is excessive in the case of this worker, as it's
expected to time out sometimes if the URLs take too long to
process.

With a test added by tabbott.
2021-07-06 14:18:08 -07:00
Mateusz Mandera
36638c95b9 queue_processors: Make timer_expired receive list of events as argument.
This will give queue workers more flexibility when defining their own
override of the method.
2021-07-06 14:18:04 -07:00
Mateusz Mandera
85f14eb4f7 queue_processors: Make timer_expired() a method.
This allows specific queue workers to override the default behavior and
implement their own response to the timer expiring. We will want to use
this for embed_links queue at least.
2021-07-06 14:18:01 -07:00
Steve Howell
0fab79c027 widgets: Add range checks on backend for indexes. 2021-07-01 15:15:11 -07:00
Steve Howell
7d46bed507 widgets: Validate todo data on the backend. 2021-07-01 15:15:11 -07:00
Alex Vandiver
a89ba9c7d6 puppet: Catch when a comma is left out of puppet_classes.
With two space-separated classes in `puppet_classes`, the second one
is silently ignored.  With three of more, puppet generates the
following very opaque error message:

```
Error: Could not parse for environment production: This
Name has no effect. A value was produced and then forgotten (one or
more preceding expressions may have the wrong form)
```

Catch when this has happened, and give an error message to the user.

Fixes #18992.
2021-06-28 17:59:46 -07:00
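The shape of the check is simple (a generic sketch; the real validation lives in Zulip's Python tooling and its wording and class names differ):
```
def check_puppet_classes(raw: str) -> None:
    # puppet_classes must be comma-separated; a space-separated entry is
    # silently ignored or produces puppet's opaque error, so catch it here.
    for entry in raw.split(","):
        if " " in entry.strip():
            raise ValueError(
                f"puppet_classes entry {entry.strip()!r} contains a space; "
                "did you forget a comma?"
            )


check_puppet_classes("zulip::profile::app_frontend,zulip::postgresql")  # fine
# check_puppet_classes("zulip::profile::app_frontend zulip::postgresql")  # raises
```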
Tim Abbott
8f735f4683 install: Use a period at end of root error message. 2021-06-23 09:10:12 -07:00
Gaurav Pandey
e7cfd30d53 upgrade: Modify upgrade scripts to handle failure.
The current `upgrade-zulip` and `upgrade-zulip-from-git`
bash scripts exit with a zero status even if the
upgrade commands exit with a non-zero status.
Hence add `set -e`, which makes the script exit with
the same status as the failing command.

For pipelines, however, the overall status is the status of the
last command, so if an earlier part fails, the result is still
determined only by the last command. This is the case with our main
/lib/upgrade-zulip* invocation in these scripts, whose status is
determined by the trailing `tee` instead. Hence add a small check to
capture the status of the actual upgrade command and exit the script
if it exits with a non-zero status.

We also check whether the script is being run as root, matching the
install script logic.
2021-06-23 09:10:11 -07:00
Mateusz Mandera
10c8c0e071 upload: Use URL manipulation for get_public_upload_url logic.
This is much faster than calling generate_presigned_url each time.

```
In [3]: t = time.time()
   ...: for i in range(250):
   ...:     x = u.get_public_upload_url("foo")
   ...: print(time.time()-t)
0.0010945796966552734
```
2021-06-22 09:36:29 -07:00
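The URL-manipulation idea can be sketched as follows (assuming AWS-style public bucket hosting; the bucket and key are placeholders, and other providers use different host layouts):
```
from urllib.parse import quote


def get_public_upload_url(bucket: str, key: str) -> str:
    # Build the public object URL directly instead of asking boto3 to
    # presign a request that does not actually need signing.
    return f"https://{bucket}.s3.amazonaws.com/{quote(key)}"


print(get_public_upload_url("example-zulip-uploads", "foo"))
# -> https://example-zulip-uploads.s3.amazonaws.com/foo
```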
Mateusz Mandera
9f8b5e225d upload: Cache the boto client to improve performance.
Fixes #18915

This was very slow, causing performance issues. After investigating,
we found that generate_presigned_url is the cheap part of this; the
session.client() call is expensive, so that's what we should cache.

Before the change:
```
In [4]: t = time.time()
   ...: for i in range(250):
   ...:     x = u.get_public_upload_url("foo")
   ...: print(time.time()-t)
6.408717393875122
```

After:
```
In [4]: t = time.time()
   ...: for i in range(250):
   ...:     x = u.get_public_upload_url("foo")
   ...: print(time.time()-t)
0.48990607261657715
```

This is not good enough to avoid doing something ugly like replacing
generate_presigned_url with some manual URL manipulation, but it's a
helpful structure that we may find useful with further refactoring.
2021-06-22 09:36:28 -07:00
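A minimal sketch of the caching step, assuming boto3 (the bucket and key here are placeholders):
```
from functools import lru_cache

import boto3


@lru_cache(maxsize=None)
def get_s3_client():
    # Constructing the session/client is the expensive part, so do it once
    # and reuse it for every URL we generate.
    return boto3.session.Session().client("s3")


def get_public_upload_url(bucket: str, key: str) -> str:
    return get_s3_client().generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}
    )
```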
Steve Howell
62194eb20f poll widget: Add server validation. 2021-06-14 17:57:24 -07:00
Steve Howell
2492f4b60e submessages: Add verify_submessage_sender.
Before this change a rogue actor could try to
widgetize another person's message. (The
rogue actor would already have access to read
the message.)
2021-06-14 17:57:23 -07:00
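The core of the check can be sketched like this (illustrative names; the real function operates on Zulip's model objects and its rules are more nuanced):
```
class JsonableError(Exception):
    pass


def verify_submessage_sender(*, message_sender_id: int, submessage_sender_id: int) -> None:
    # Only the sender of the original message may "widgetize" it by
    # attaching the initial widget submessage.
    if submessage_sender_id != message_sender_id:
        raise JsonableError("You cannot add a submessage to this message.")
```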
Signior-X
1b2967ddb5 login: Remove browser show password in IE, edge.
Microsoft browsers such as IE and Edge have their own
show-password control that is a bit buggy and also conflicts with
the show-password feature added to Zulip in #17305.
This fixes the issue by setting display: none on the
ms-reveal control that appears in the input.

More details can be found at
https://chat.zulip.org/#narrow/stream/101-design/topic/Show.20password/near/1173890
2021-06-14 16:36:15 -07:00
Tim Abbott
42774b101f webhooks: Update link to BuildBot documentation. 2021-06-10 17:16:09 -07:00
Anders Kaseorg
716cba04de zulip_tools: Flush ‘set -x’-style messages in run.
Otherwise they often get buffered until after the command actually
runs.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit d8cb418586)
2021-06-09 16:16:42 -07:00
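The fix amounts to flushing the echoed command line before handing the terminal to the subprocess; roughly (a simplified sketch, not the actual zulip_tools code):
```
import subprocess
import sys
from typing import Sequence


def run(args: Sequence[str]) -> None:
    # Print the command 'set -x' style and flush immediately, so it shows
    # up before the command's own output instead of after it.
    print("+ " + " ".join(args), file=sys.stderr, flush=True)
    subprocess.check_call(args)
```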
Tim Abbott
332add3bb6 import: Fix propagation of subdomain error messages.
The previous logic would provide a very confusing error message if the
subdomain was already in use.
2021-06-09 13:22:23 -07:00
Anders Kaseorg
b596cd7607 webpack: Fix CSS source map generation on 1-CPU systems.
We were passing a SourceMapGenerator as `map`, but it seems that
css-minimizer-webpack-plugin expects a string, and only implicitly
stringifies it when running with parallelism.

Fixes #18727.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit aedc5af351)
2021-06-08 16:26:58 -07:00
Alex Vandiver
21cedabbdf subdomains: Extend "static" to include resources hosted on S3.
This causes avatars and emoji which are hosted by Zulip in S3 (or
compatible) servers to no longer go through camo.  Routing these
requests through camo does not add any privacy benefit (as the request
logs there go to the Zulip admins regardless), and may break emoji
imported from Slack before 1bf385e35f,
which have `application/octet-stream` as their stored Content-Type.
2021-06-08 15:28:32 -07:00
Alex Vandiver
f910d5b8a9 docs: Remove link to 16.04, which can be confusing.
The instructions do not just apply to 16.04; the block below describes
the settings, which are correct for all relevant Ubuntu versions.
2021-06-02 17:18:41 -07:00
Alex Vandiver
daf185705d send_test_email: Capture and show SMTP log on errors. 2021-06-02 13:18:26 -07:00
Tim Abbott
1fa7081a4c version: Update version after 4.3 release. 2021-06-02 12:54:04 -07:00
Tim Abbott
0d17a5e76d Release Zulip Server 4.3. 2021-06-02 11:40:33 -07:00
Tim Abbott
9815581957 i18n: Update translation data from Transifex. 2021-06-02 09:48:12 -07:00
Tim Abbott
33d7aa9d47 i18n: Adjust Transifex sync-translations download mode.
It appears that some server-side change to Transifex resulted in the
"onlytranslated" mode deleting some (all?) strings from django.po files that
were not translated.

Testing determined that the "translator" mode appears to now be the
only mode that works with both our django.po and translations.json
files (We want to avoid both copying the English strings and deleting
strings), so we're switching to that.

Background is available here:
https://chat.zulip.org/#narrow/stream/3-backend/topic/4.2Ex.20branch.20translations.20sync/near/1187324
2021-06-02 09:44:40 -07:00
Alex Vandiver
6c3a6ef6c1 docs: Add a missing close paren. 2021-06-01 16:33:10 -07:00
Alex Vandiver
a63150ca35 docs: Update path to nginx.conf, as it is now a template.
Also provide the right expansion for the one embedded variable
currently in the template.
2021-06-01 16:33:06 -07:00
Anders Kaseorg
7ab8455596 giphy: Load Giphy SDK lazily.
The Giphy SDK sends tracking pings when it loads; we don’t want those
to be sent for visitors who aren’t using Giphy.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-05-28 15:45:07 -07:00
Tim Abbott
43be62c7ef upload: Use get_public_upload_url for export tarballs too.
This deduplicates the code so that we now just have one function for
constructing S3 URLs.
2021-05-27 23:30:00 -07:00
ryanreh99
7b15ce71c2 s3 uploads: Refactor to access objects via get_public_upload_url.
Our current logic only allows S3 block storage providers whose
upload URL matches the format used by AWS. This also allows
other styles, such as the "virtual host" format used by Oracle Cloud.

Fixes #17762.
2021-05-27 23:29:59 -07:00
Sumanth V Rao
96c5a9e303 models: Fix bug in unique_together condition on RealmPlayground.
We don't need to worry about breaking already configured playgrounds
since this tweak makes the condition less strict.
2021-05-26 18:17:24 -07:00
Anders Kaseorg
0b337e0819 actions: Fix incorrect audit logging in bulk_remove_subscriptions.
modified_user=sub_info.user and modified_stream=sub_info.stream, added
by commit 6d1f9de7d3 (#16553), were
always coming from the last entry in the loop above, not from the
enclosing list comprehension.

Found by the Pylint rule undefined-loop-variable.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-05-26 18:17:08 -07:00
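The bug class is easy to reproduce in isolation (a generic illustration, not the actual Zulip code):
```
from dataclasses import dataclass


@dataclass
class SubInfo:
    user: str
    stream: str


subs_to_remove = [SubInfo("alice", "design"), SubInfo("bob", "backend")]

for sub_info in subs_to_remove:
    pass  # ... actually unsubscribe each user here ...

# Buggy: sub_info is not the comprehension's loop variable, so every entry
# reuses whatever the loop above left behind -- the *last* element.
buggy_events = [
    {"modified_user": sub_info.user, "modified_stream": sub_info.stream}
    for _ in subs_to_remove
]

# Fixed: use the comprehension's own loop variable.
fixed_events = [
    {"modified_user": info.user, "modified_stream": info.stream}
    for info in subs_to_remove
]

print(buggy_events)  # both entries refer to bob/backend
print(fixed_events)  # alice/design, then bob/backend
```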
Anders Kaseorg
d4b3c20e48 markdown: Fix Dropbox image previews.
?dl=1 causes Dropbox to send Content-Type: application/binary, which
can’t be interpreted by Camo.  Use ?raw=1 instead.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-05-26 12:17:48 -07:00
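The rewrite is just a query-parameter swap; roughly (a standalone sketch, not the markdown processor's actual code):
```
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit


def dropbox_image_url(url: str) -> str:
    # Replace dl=1 (served as application/binary, which Camo rejects)
    # with raw=1, which serves the image with a usable Content-Type.
    parts = urlsplit(url)
    query = parse_qs(parts.query)
    query.pop("dl", None)
    query["raw"] = ["1"]
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))


print(dropbox_image_url("https://www.dropbox.com/s/abc123/cat.png?dl=1"))
# -> https://www.dropbox.com/s/abc123/cat.png?raw=1
```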
Vishnu KS
31be0f04b9 i18n: Tag strings in status message settings for translation.
Fixes #18609
2021-05-26 11:04:25 -07:00
Vishnu KS
6af0e28e5d user status: Remove data attributes from user status options.
I don't see any good reason why we have to store the status
values in data attributes when they are already stored as
the content of the buttons.
2021-05-26 11:04:24 -07:00
Adam Birds
9cb538b08f integrations: Add label_create_activity to unsupported pivotal events.
Fixes #18580.
2021-05-25 20:57:17 -07:00
AdamVB
bf49f962c0 integrations: Enhance Grafana integration with alert state.
Having the alert state in the message body is useful when alert topics
are not defined by the alert description but encoded in the URL.

E.g. in large environments, having a separate topic per alert state
([alerting] and [ok]) would make it harder to properly track whether an
alert has been resolved.

When each alert lives in a single topic, the alert state has so far been
missing.

This change adds the current alert state and a fitting icon in front
of the alert name (similar to the Prometheus Alertmanager integration).

The test cases have been amended to cover all possible alert states, even
though realistically Grafana only fires the ok and alerting states via
webhook.
2021-05-24 14:25:47 -07:00
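As a rough sketch of composing such a body (illustrative only; the webhook's actual field names, emoji, and formatting differ):
```
STATE_ICONS = {
    "alerting": ":red_circle:",
    "ok": ":green_circle:",
    "no_data": ":question:",
    "paused": ":double_vertical_bar:",
    "pending": ":hourglass:",
}


def format_alert_body(state: str, rule_name: str, message: str) -> str:
    # Prefix the alert name with its current state and a matching icon,
    # so a single per-alert topic still shows whether it has resolved.
    icon = STATE_ICONS.get(state, "")
    return f"{icon} **[{state.upper()}]** {rule_name}\n\n{message}"


print(format_alert_body("alerting", "High CPU", "CPU above 90% for 5 minutes."))
```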
Alex Vandiver
2a69b4f3b7 update-prod-static: Ensure that it is run as the zulip user. 2021-05-21 16:53:02 -07:00
sahil839
540904aa9d giphy: Add a '?' icon beside the "GIPHY integration" label.
We add a '?' icon beside the "GIPHY integration" label of the
giphy settings dropdown.

The icon links to the ReadTheDocs page for setting up a GIPHY API
key when no API key is set, and points to the Help Center
article on GIFs once the API key is added.
2021-05-19 13:21:41 -07:00
sahil839
26bdf79642 css: Change width of upgrade-tip to max-content.
We change the width of upgrade-tip to max-content
so that it matches the other elements in the
settings overlay, like the dropdown, which are not of full
width.
2021-05-19 13:21:23 -07:00
sahil839
2c1ffaceca giphy: Fix live update of giphy icon when API key is empty.
We fix the code to live-update the giphy icon only if the
updated setting is not disabled and an API key has been added.
Though the dropdown is disabled, the setting can still be
changed via the API, so this change is necessary.

Previously, we did not check whether an API key was present, so
the icon appeared on live update even without an API key and
then disappeared on reload.
2021-05-19 13:21:19 -07:00
sahil839
dffff73654 giphy: Disable giphy settings dropdown if API key is not present. 2021-05-19 13:21:15 -07:00
Tim Abbott
2f9d4f5a96 settings: Fix setting JITSI_SERVER_URL to None.
This fixes a bug introduced in
55a23754c3 that resulted in Zulip
crashing on startup if JITSI_SERVER_URL=None.

Fixes #18512.
2021-05-18 19:17:13 -07:00
Tim Abbott
ce96018af4 version: Update version after 4.2 release. 2021-05-13 22:08:45 -07:00
Tim Abbott
a025fab082 Release Zulip Server 4.2. 2021-05-13 22:03:34 -07:00
Anders Kaseorg
812ad52007 install: Run git config commands from a known readable cwd.
Fixes this error when running the installer from a directory that
isn’t world-readable:

+ su zulip -c 'git config --global user.email anders@zulip.com'
fatal: cannot come back to cwd: Permission denied

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-05-13 22:01:01 -07:00
Anders Kaseorg
9066fcac9a postgresql-init-db: Fix installation from world-unreadable directory.
This reverts part of commit 476524c0c1
(#18215), to fix this error when running the installer from a
directory that isn’t world-readable:

+ '[' -e /var/run/supervisor.sock ']'
+++ dirname /root/zulip-server-4.1/scripts/setup/postgresql-init-db
++ dirname /root/zulip-server-4.1/scripts/setup
+ su zulip -c /root/zulip-server-4.1/scripts/stop-server
bash: /root/zulip-server-4.1/scripts/stop-server: Permission denied

Zulip installation failed (exit code 126)!

Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-05-13 22:00:56 -07:00
Anders Kaseorg
a70ebdb005 purge-old-deployments: Check /srv/zulip.git existence before pruning it.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-05-13 20:56:47 -07:00
Tim Abbott
956d4b2568 version: Link blog post from 4.0 release. 2021-05-13 18:59:53 -07:00
Tim Abbott
ea2256da29 version: Update version after 4.1 release. 2021-05-13 18:58:51 -07:00
Tim Abbott
d1bd8f3637 Release Zulip Server 4.1. 2021-05-13 18:35:06 -07:00
Tim Abbott
22d486bbf7 scripts: Fix check for services running when upgrading.
When upgrading from a pre-4.0 release, scripts/stop-server logic would
check whether supervisord configuration files were present to
determine what it needed to restart, but only considered paths to
those files that were introduced in Zulip 4.0.
2021-05-13 18:10:08 -07:00
Aman Agrawal
977ff62fe8 message_edit_form: Fix vertical alignment of bottom elements. 2021-05-13 17:19:22 -07:00
Anders Kaseorg
5bfc162df9 changelog: Fix version number typo.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
2021-05-13 17:19:12 -07:00
Tim Abbott
2aa643502a version: Update version after 4.0 release. 2021-05-13 15:53:02 -07:00
428 changed files with 47591 additions and 16394 deletions

View File

@@ -17,7 +17,7 @@ max_line_length = 100
[*.{py,pyi}]
max_line_length = 110
[*.{svg,rb,pp,yaml,yml}]
[*.{md,svg,rb,pp,yaml,yml}]
indent_size = 2
[package.json]

View File

@@ -1,14 +1,11 @@
<!-- What's this PR for? (Just a link to an issue is fine.) -->
**Testing plan:** <!-- How have you tested? -->
**GIFs or screenshots:** <!-- If a UI change. See:
https://zulip.readthedocs.io/en/latest/tutorials/screenshot-and-gif-software.html
-->
<!-- Also be sure to make clear, coherent commits:
https://zulip.readthedocs.io/en/latest/contributing/version-control.html
-->

View File

@@ -22,6 +22,7 @@ jobs:
# so this is required.
- name: Get workflow IDs.
id: workflow_ids
continue-on-error: true # Don't fail this job on failure
env:
# This is in <owner>/<repo> format e.g. zulip/zulip
REPOSITORY: ${{ github.repository }}
@@ -35,7 +36,8 @@ jobs:
ids=$(node -e "$script")
echo "::set-output name=ids::$ids"
- uses: styfle/cancel-workflow-action@0.4.1
- uses: styfle/cancel-workflow-action@0.9.0
continue-on-error: true # Don't fail this job on failure
with:
workflow_id: ${{ steps.workflow_ids.outputs.ids }}
access_token: ${{ github.token }}

View File

@@ -4,6 +4,7 @@ on: [push, pull_request]
jobs:
CodeQL:
if: ${{!github.event.repository.private}}
runs-on: ubuntu-latest
steps:

View File

@@ -1,24 +0,0 @@
name: Legacy OS
on: [push, pull_request]
jobs:
xenial:
name: Ubuntu 16.04 Xenial (Python 3.5, legacy)
runs-on: ubuntu-16.04
steps:
- uses: actions/checkout@v2
- name: Check tools/provision error message on xenial
run: |
{ { ! tools/provision 2>&1 >&3; } | tee provision.err; } 3>&1 >&2
grep -Fqx 'Error: ubuntu 16.04 is no longer a supported platform for Zulip.' provision.err
- name: Check scripts/lib/upgrade-zulip-stage-2 error message on xenial
run: |
{ { ! sudo scripts/lib/upgrade-zulip-stage-2 2>&1 >&3; } | tee upgrade.err; } 3>&1 >&2
grep -Fq 'upgrade-zulip-stage-2: Unsupported platform: ubuntu 16.04' upgrade.err
- name: Report status
if: failure()
env:
ZULIP_BOT_KEY: ${{ secrets.ZULIP_BOT_KEY }}
run: tools/ci/send-failure-message

View File

@@ -30,6 +30,8 @@ defaults:
jobs:
production_build:
# This job builds a release tarball from the current commit, which
# will be used for all of the following install/upgrade tests.
name: Bionic production build
runs-on: ubuntu-latest
@@ -66,22 +68,22 @@ jobs:
uses: actions/cache@v2
with:
path: /srv/zulip-npm-cache
key: v1-yarn-deps-${{ github.job }}-${{ hashFiles('package.json') }}-${{ hashFiles('yarn.lock') }}
restore-keys: v1-yarn-deps-${{ github.job }}
key: v1-yarn-deps-bionic-${{ hashFiles('package.json') }}-${{ hashFiles('yarn.lock') }}
restore-keys: v1-yarn-deps-bionic
- name: Restore python cache
uses: actions/cache@v2
with:
path: /srv/zulip-venv-cache
key: v1-venv-${{ github.job }}-${{ hashFiles('requirements/dev.txt') }}
restore-keys: v1-venv-${{ github.job }}
key: v1-venv-bionic-${{ hashFiles('requirements/dev.txt') }}
restore-keys: v1-venv-bionic
- name: Restore emoji cache
uses: actions/cache@v2
with:
path: /srv/zulip-emoji-cache
key: v1-emoji-${{ github.job }}-${{ hashFiles('tools/setup/emoji/emoji_map.json') }}-${{ hashFiles('tools/setup/emoji/build_emoji') }}-${{ hashFiles('tools/setup/emoji/emoji_setup_utils.py') }}-${{ hashFiles('tools/setup/emoji/emoji_names.py') }}-${{ hashFiles('package.json') }}
restore-keys: v1-emoji-${{ github.job }}
key: v1-emoji-bionic-${{ hashFiles('tools/setup/emoji/emoji_map.json') }}-${{ hashFiles('tools/setup/emoji/build_emoji') }}-${{ hashFiles('tools/setup/emoji/emoji_setup_utils.py') }}-${{ hashFiles('tools/setup/emoji/emoji_names.py') }}-${{ hashFiles('package.json') }}
restore-keys: v1-emoji-bionic
- name: Do Bionic hack
run: |
@@ -106,6 +108,9 @@ jobs:
run: tools/ci/send-failure-message
production_install:
# This job installs the server release tarball built above on a
# range of platforms, and does some basic health checks on the
# resulting installer Zulip server.
strategy:
fail-fast: false
matrix:
@@ -133,7 +138,9 @@ jobs:
os: bullseye
name: ${{ matrix.name }}
container: ${{ matrix.docker_image }}
container:
image: ${{ matrix.docker_image }}
options: --init
runs-on: ubuntu-latest
needs: production_build
@@ -206,3 +213,63 @@ jobs:
env:
ZULIP_BOT_KEY: ${{ secrets.ZULIP_BOT_KEY }}
run: /tmp/send-failure-message
production_upgrade:
# The production upgrade job starts with a container with a
# previous Zulip release installed, and attempts to upgrade it to
# the release tarball built for the current commit being tested.
#
# This is intended to catch bugs that result in the upgrade
# process failing.
strategy:
fail-fast: false
matrix:
include:
# Base images are built using `tools/ci/Dockerfile.prod.template`.
# The comments at the top explain how to build and upload these images.
- docker_image: zulip/ci:buster-3.4
name: 3.4 Version Upgrade
is_focal: true
os: buster
name: ${{ matrix.name }}
container:
image: ${{ matrix.docker_image }}
options: --init
runs-on: ubuntu-latest
needs: production_build
steps:
- name: Download built production tarball
uses: actions/download-artifact@v2
with:
name: production-tarball
path: /tmp
- name: Add required permissions and setup
run: |
# This is the GitHub Actions specific cache directory the
# the current github user must be able to access for the
# cache action to work. It is owned by root currently.
sudo chmod -R 0777 /__w/_temp/
# Since actions/download-artifact@v2 loses all the permissions
# of the tarball uploaded by the upload artifact fix those.
chmod +x /tmp/production-upgrade
chmod +x /tmp/production-verify
chmod +x /tmp/send-failure-message
- name: Upgrade production
run: sudo /tmp/production-upgrade
# TODO: We should be running production-verify here, but it
# doesn't pass yet.
#
# - name: Verify install
# run: sudo /tmp/production-verify
- name: Report status
if: failure()
env:
ZULIP_BOT_KEY: ${{ secrets.ZULIP_BOT_KEY }}
run: /tmp/send-failure-message

View File

@@ -1,6 +1,8 @@
/corporate/tests/stripe_fixtures
/locale
/static/third
/templates/**/*.md
/tools/setup/emoji/emoji_map.json
/zerver/tests/fixtures
/zerver/webhooks/*/doc.md
/zerver/webhooks/*/fixtures

View File

@@ -18,15 +18,15 @@ all of us and the technical communities in which we participate.
The following behaviors are expected and requested of all community members:
* Participate. In doing so, you contribute to the health and longevity of
- Participate. In doing so, you contribute to the health and longevity of
the community.
* Exercise consideration and respect in your speech and actions.
* Attempt collaboration before conflict. Assume good faith.
* Refrain from demeaning, discriminatory, or harassing behavior and speech.
* Take action or alert community leaders if you notice a dangerous
- Exercise consideration and respect in your speech and actions.
- Attempt collaboration before conflict. Assume good faith.
- Refrain from demeaning, discriminatory, or harassing behavior and speech.
- Take action or alert community leaders if you notice a dangerous
situation, someone in distress, or violations of this code, even if they
seem inconsequential.
* Community event venues may be shared with members of the public; be
- Community event venues may be shared with members of the public; be
respectful to all patrons of these locations.
## Unacceptable behavior
@@ -34,24 +34,24 @@ The following behaviors are expected and requested of all community members:
The following behaviors are considered harassment and are unacceptable
within the Zulip community:
* Jokes or derogatory language that singles out members of any race,
- Jokes or derogatory language that singles out members of any race,
ethnicity, culture, national origin, color, immigration status, social and
economic class, educational level, language proficiency, sex, sexual
orientation, gender identity and expression, age, size, family status,
political belief, religion, and mental and physical ability.
* Violence, threats of violence, or violent language directed against
- Violence, threats of violence, or violent language directed against
another person.
* Disseminating or threatening to disseminate another person's personal
- Disseminating or threatening to disseminate another person's personal
information.
* Personal insults of any sort.
* Posting or displaying sexually explicit or violent material.
* Inappropriate photography or recording.
* Deliberate intimidation, stalking, or following (online or in person).
* Unwelcome sexual attention. This includes sexualized comments or jokes,
- Personal insults of any sort.
- Posting or displaying sexually explicit or violent material.
- Inappropriate photography or recording.
- Deliberate intimidation, stalking, or following (online or in person).
- Unwelcome sexual attention. This includes sexualized comments or jokes,
inappropriate touching or groping, and unwelcomed sexual advances.
* Sustained disruption of community events, including talks and
- Sustained disruption of community events, including talks and
presentations.
* Advocating for, or encouraging, any of the behaviors above.
- Advocating for, or encouraging, any of the behaviors above.
## Reporting and enforcement

View File

@@ -26,29 +26,30 @@ To make a code or documentation contribution, read our
[step-by-step guide](#your-first-codebase-contribution) to getting
started with the Zulip codebase. A small sample of the type of work that
needs doing:
* Bug squashing and feature development on our Python/Django
- Bug squashing and feature development on our Python/Django
[backend](https://github.com/zulip/zulip), web
[frontend](https://github.com/zulip/zulip), React Native
[mobile app](https://github.com/zulip/zulip-mobile), or Electron
[desktop app](https://github.com/zulip/zulip-desktop).
* Building out our
- Building out our
[Python API and bots](https://github.com/zulip/python-zulip-api) framework.
* [Writing an integration](https://zulip.com/api/integrations-overview).
* Improving our [user](https://zulip.com/help/) or
- [Writing an integration](https://zulip.com/api/integrations-overview).
- Improving our [user](https://zulip.com/help/) or
[developer](https://zulip.readthedocs.io/en/latest/) documentation.
* [Reviewing code](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html)
- [Reviewing code](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html)
and manually testing pull requests.
**Non-code contributions**: Some of the most valuable ways to contribute
don't require touching the codebase at all. We list a few of them below:
* [Reporting issues](#reporting-issues), including both feature requests and
- [Reporting issues](#reporting-issues), including both feature requests and
bug reports.
* [Giving feedback](#user-feedback) if you are evaluating or using Zulip.
* [Sponsor Zulip](https://github.com/sponsors/zulip) through the GitHub sponsors program.
* [Translating](https://zulip.readthedocs.io/en/latest/translating/translating.html)
- [Giving feedback](#user-feedback) if you are evaluating or using Zulip.
- [Sponsor Zulip](https://github.com/sponsors/zulip) through the GitHub sponsors program.
- [Translating](https://zulip.readthedocs.io/en/latest/translating/translating.html)
Zulip.
* [Outreach](#zulip-outreach): Star us on GitHub, upvote us
- [Outreach](#zulip-outreach): Star us on GitHub, upvote us
on product comparison sites, or write for [the Zulip blog](https://blog.zulip.org/).
## Your first (codebase) contribution
@@ -57,7 +58,8 @@ This section has a step by step guide to starting as a Zulip codebase
contributor. It's long, but don't worry about doing all the steps perfectly;
no one gets it right the first time, and there are a lot of people available
to help.
* First, make an account on the
- First, make an account on the
[Zulip community server](https://zulip.readthedocs.io/en/latest/contributing/chat-zulip-org.html),
paying special attention to the community norms. If you'd like, introduce
yourself in
@@ -65,17 +67,17 @@ to help.
your name as the topic. Bonus: tell us about your first impressions of
Zulip, and anything that felt confusing/broken as you started using the
product.
* Read [What makes a great Zulip contributor](#what-makes-a-great-zulip-contributor).
* [Install the development environment](https://zulip.readthedocs.io/en/latest/development/overview.html),
- Read [What makes a great Zulip contributor](#what-makes-a-great-zulip-contributor).
- [Install the development environment](https://zulip.readthedocs.io/en/latest/development/overview.html),
getting help in
[#development help](https://chat.zulip.org/#narrow/stream/49-development-help)
if you run into any troubles.
* Read the
- Read the
[Zulip guide to Git](https://zulip.readthedocs.io/en/latest/git/index.html)
and do the Git tutorial (coming soon) if you are unfamiliar with
Git, getting help in
[#git help](https://chat.zulip.org/#narrow/stream/44-git-help) if
you run into any troubles. Be sure to check out the
you run into any troubles. Be sure to check out the
[extremely useful Zulip-specific tools page](https://zulip.readthedocs.io/en/latest/git/zulip-tools.html).
### Picking an issue
@@ -84,7 +86,7 @@ Now, you're ready to pick your first issue! There are hundreds of open issues
in the main codebase alone. This section will help you find an issue to work
on.
* If you're interested in
- If you're interested in
[mobile](https://github.com/zulip/zulip-mobile/issues?q=is%3Aopen+is%3Aissue),
[desktop](https://github.com/zulip/zulip-desktop/issues?q=is%3Aopen+is%3Aissue),
or
@@ -93,18 +95,18 @@ on.
[#mobile](https://chat.zulip.org/#narrow/stream/48-mobile),
[#desktop](https://chat.zulip.org/#narrow/stream/16-desktop), or
[#integration](https://chat.zulip.org/#narrow/stream/127-integrations).
* For the main server and web repository, we recommend browsing
- For the main server and web repository, we recommend browsing
recently opened issues to look for issues you are confident you can
fix correctly in a way that clearly communicates why your changes
are the correct fix. Our GitHub workflow bot, zulipbot, limits
are the correct fix. Our GitHub workflow bot, zulipbot, limits
users who have 0 commits merged to claiming a single issue labeled
with "good first issue" or "help wanted".
* We also partition all of our issues in the main repo into areas like
- We also partition all of our issues in the main repo into areas like
admin, compose, emoji, hotkeys, i18n, onboarding, search, etc. Look
through our [list of labels](https://github.com/zulip/zulip/labels), and
click on some of the `area:` labels to see all the issues related to your
areas of interest.
* If the lists of issues are overwhelming, post in
- If the lists of issues are overwhelming, post in
[#new members](https://chat.zulip.org/#narrow/stream/95-new-members) with a
bit about your background and interests, and we'll help you out. The most
important thing to say is whether you're looking for a backend (Python),
@@ -119,21 +121,22 @@ have a new feature you'd like to add, we recommend you start by posting in
feature idea and the problem that you're hoping to solve.
Other notes:
* For a first pull request, it's better to aim for a smaller contribution
- For a first pull request, it's better to aim for a smaller contribution
than a bigger one. Many first contributions have fewer than 10 lines of
changes (not counting changes to tests).
* The full list of issues explicitly looking for a contributor can be
- The full list of issues explicitly looking for a contributor can be
found with the
[good first issue](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
and
[help wanted](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
labels. Avoid issues with the "difficult" label unless you
labels. Avoid issues with the "difficult" label unless you
understand why it is difficult and are confident you can resolve the
issue correctly and completely. Issues without one of these labels
issue correctly and completely. Issues without one of these labels
are fair game if Tim has written a clear technical design proposal
in the issue, or it is a bug that you can reproduce and you are
confident you can fix the issue correctly.
* For most new contributors, there's a lot to learn while making your first
- For most new contributors, there's a lot to learn while making your first
pull request. It's OK if it takes you a while; that's normal! You'll be
able to work a lot faster as you build experience.
@@ -144,20 +147,20 @@ the issue thread. [Zulipbot](https://github.com/zulip/zulipbot) is a GitHub
workflow bot; it will assign you to the issue and label the issue as "in
progress". Some additional notes:
* You can only claim issues with the
- You can only claim issues with the
[good first issue](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
or
[help wanted](https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
labels. Zulipbot will give you an error if you try to claim an issue
without one of those labels.
* You're encouraged to ask questions on how to best implement or debug your
- You're encouraged to ask questions on how to best implement or debug your
changes -- the Zulip maintainers are excited to answer questions to help
you stay unblocked and working efficiently. You can ask questions on
chat.zulip.org, or on the GitHub issue or pull request.
* We encourage early pull requests for work in progress. Prefix the title of
- We encourage early pull requests for work in progress. Prefix the title of
work in progress pull requests with `[WIP]`, and remove the prefix when
you think it might be mergeable and want it to be reviewed.
* After updating a PR, add a comment to the GitHub thread mentioning that it
- After updating a PR, add a comment to the GitHub thread mentioning that it
is ready for another review. GitHub only notifies maintainers of the
changes when you post a comment, so if you don't, your PR will likely be
neglected by accident!
@@ -172,26 +175,26 @@ labels.
## What makes a great Zulip contributor?
Zulip has a lot of experience working with new contributors. In our
Zulip has a lot of experience working with new contributors. In our
experience, these are the best predictors of success:
* Posting good questions. This generally means explaining your current
- Posting good questions. This generally means explaining your current
understanding, saying what you've done or tried so far, and including
tracebacks or other error messages if appropriate.
* Learning and practicing
- Learning and practicing
[Git commit discipline](https://zulip.readthedocs.io/en/latest/contributing/version-control.html#commit-discipline).
* Submitting carefully tested code. This generally means checking your work
- Submitting carefully tested code. This generally means checking your work
through a combination of automated tests and manually clicking around the
UI trying to find bugs in your work. See
[things to look for](https://zulip.readthedocs.io/en/latest/contributing/code-reviewing.html#things-to-look-for)
for additional ideas.
* Posting
- Posting
[screenshots or GIFs](https://zulip.readthedocs.io/en/latest/tutorials/screenshot-and-gif-software.html)
for frontend changes.
* Being responsive to feedback on pull requests. This means incorporating or
- Being responsive to feedback on pull requests. This means incorporating or
responding to all suggested changes, and leaving a note if you won't be
able to address things within a few days.
* Being helpful and friendly on chat.zulip.org.
- Being helpful and friendly on chat.zulip.org.
These are also the main criteria we use to select candidates for all
of our outreach programs.
@@ -215,9 +218,9 @@ and how to reproduce it if known, your browser/OS if relevant, and a
if appropriate.
**Reporting security issues**. Please do not report security issues
publicly, including on public streams on chat.zulip.org. You can
email security@zulip.com. We create a CVE for every security
issue in our released software.
publicly, including on public streams on chat.zulip.org. You can
email security@zulip.com. We create a CVE for every security
issue in our released software.
## User feedback
@@ -227,17 +230,17 @@ hear about your experience with the product. If you're not sure what to
write, here are some questions we're always very curious to know the answer
to:
* Evaluation: What is the process by which your organization chose or will
- Evaluation: What is the process by which your organization chose or will
choose a group chat product?
* Pros and cons: What are the pros and cons of Zulip for your organization,
- Pros and cons: What are the pros and cons of Zulip for your organization,
and the pros and cons of other products you are evaluating?
* Features: What are the features that are most important for your
- Features: What are the features that are most important for your
organization? In the best-case scenario, what would your chat solution do
for you?
* Onboarding: If you remember it, what was your impression during your first
- Onboarding: If you remember it, what was your impression during your first
few minutes of using Zulip? What did you notice, and how did you feel? Was
there anything that stood out to you as confusing, or broken, or great?
* Organization: What does your organization do? How big is the organization?
- Organization: What does your organization do? How big is the organization?
A link to your organization's website?
## Outreach programs
@@ -252,15 +255,16 @@ summer interns from Harvard, MIT, and Stanford.
While each third-party program has its own rules and requirements, the
Zulip community's approaches all of these programs with these ideas in
mind:
* We try to make the application process as valuable for the applicant as
- We try to make the application process as valuable for the applicant as
possible. Expect high-quality code reviews, a supportive community, and
publicly viewable patches you can link to from your resume, regardless of
whether you are selected.
* To apply, you'll have to submit at least one pull request to a Zulip
repository. Most students accepted to one of our programs have
- To apply, you'll have to submit at least one pull request to a Zulip
repository. Most students accepted to one of our programs have
several merged pull requests (including at least one larger PR) by
the time of the application deadline.
* The main criteria we use is quality of your best contributions, and
- The main criteria we use is quality of your best contributions, and
the bullets listed at
[What makes a great Zulip contributor](#what-makes-a-great-zulip-contributor).
Because we focus on evaluating your best work, it doesn't hurt your
@@ -274,7 +278,7 @@ important parts of the project. We hope you apply!
### Google Summer of Code
The largest outreach program Zulip participates in is GSoC (14
students in 2017; 11 in 2018; 17 in 2019; 18 in 2020). While we don't control how
students in 2017; 11 in 2018; 17 in 2019; 18 in 2020). While we don't control how
many slots Google allocates to Zulip, we hope to mentor a similar
number of students in future summers.
@@ -282,9 +286,9 @@ If you're reading this well before the application deadline and want
to make your application strong, we recommend getting involved in the
community and fixing issues in Zulip now. Having good contributions
and building a reputation for doing good work is the best way to have
a strong application. About half of Zulip's GSoC students for Summer
a strong application. About half of Zulip's GSoC students for Summer
2017 had made significant contributions to the project by February
2017, and about half had not. Our
2017, and about half had not. Our
[GSoC project ideas page][gsoc-guide] has lots more details on how
Zulip does GSoC, as well as project ideas (though the project idea
list is maintained only during the GSoC application period, so if
@@ -293,9 +297,9 @@ out-of-date).
We also have in some past years run a Zulip Summer of Code (ZSoC)
program for students who we didn't have enough slots to accept for
GSoC but were able to find funding for. Student expectations are the
GSoC but were able to find funding for. Student expectations are the
same as with GSoC, and it has no separate application process; your
GSoC application is your ZSoC application. If we'd like to select you
GSoC application is your ZSoC application. If we'd like to select you
for ZSoC, we'll contact you when the GSoC results are announced.
[gsoc-guide]: https://zulip.readthedocs.io/en/latest/contributing/gsoc-ideas.html
@@ -307,23 +311,24 @@ for ZSoC, we'll contact you when the GSoC results are announced.
perception of projects like Zulip. We've collected a few sites below
where we know Zulip has been discussed. Doing everything in the following
list typically takes about 15 minutes.
* Star us on GitHub. There are four main repositories:
- Star us on GitHub. There are four main repositories:
[server/web](https://github.com/zulip/zulip),
[mobile](https://github.com/zulip/zulip-mobile),
[desktop](https://github.com/zulip/zulip-desktop), and
[Python API](https://github.com/zulip/python-zulip-api).
* [Follow us](https://twitter.com/zulip) on Twitter.
- [Follow us](https://twitter.com/zulip) on Twitter.
For both of the following, you'll need to make an account on the site if you
don't already have one.
* [Like Zulip](https://alternativeto.net/software/zulip-chat-server/) on
- [Like Zulip](https://alternativeto.net/software/zulip-chat-server/) on
AlternativeTo. We recommend upvoting a couple of other products you like
as well, both to give back to their community, and since single-upvote
accounts are generally given less weight. You can also
[upvote Zulip](https://alternativeto.net/software/slack/) on their page
for Slack.
* [Add Zulip to your stack](https://stackshare.io/zulip) on StackShare, star
- [Add Zulip to your stack](https://stackshare.io/zulip) on StackShare, star
it, and upvote the reasons why people like Zulip that you find most
compelling. Again, we recommend adding a few other products that you like
as well.

View File

@@ -8,8 +8,8 @@ allows users to easily process hundreds or thousands of messages a day. With
over 700 contributors merging over 500 commits a month, Zulip is also the
largest and fastest growing open source group chat project.
[![GitHub Actions build status](https://github.com/zulip/zulip/actions/workflows/zulip-ci.yml/badge.svg?branch=master)](https://github.com/zulip/zulip/actions/workflows/zulip-ci.yml?query=branch%3Amaster)
[![coverage status](https://img.shields.io/codecov/c/github/zulip/zulip/master.svg)](https://codecov.io/gh/zulip/zulip/branch/master)
[![GitHub Actions build status](https://github.com/zulip/zulip/actions/workflows/zulip-ci.yml/badge.svg)](https://github.com/zulip/zulip/actions/workflows/zulip-ci.yml?query=branch%3Amain)
[![coverage status](https://img.shields.io/codecov/c/github/zulip/zulip/main.svg)](https://codecov.io/gh/zulip/zulip)
[![Mypy coverage](https://img.shields.io/badge/mypy-100%25-green.svg)][mypy-coverage]
[![code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![code style: prettier](https://img.shields.io/badge/code_style-prettier-ff69b4.svg)](https://github.com/prettier/prettier)
@@ -30,13 +30,13 @@ and tell us what's up!
You might be interested in:
* **Contributing code**. Check out our
- **Contributing code**. Check out our
[guide for new contributors](https://zulip.readthedocs.io/en/latest/overview/contributing.html)
to get started. Zulip prides itself on maintaining a clean and
well-tested codebase, and a stock of hundreds of
[beginner-friendly issues][beginner-friendly].
* **Contributing non-code**.
- **Contributing non-code**.
[Report an issue](https://zulip.readthedocs.io/en/latest/overview/contributing.html#reporting-issues),
[translate](https://zulip.readthedocs.io/en/latest/translating/translating.html) Zulip
into your language,
@@ -45,12 +45,12 @@ You might be interested in:
[give us feedback](https://zulip.readthedocs.io/en/latest/overview/contributing.html#user-feedback). We
would love to hear from you, even if you're just trying the product out.
* **Supporting Zulip**. Advocate for your organization to use Zulip, become a [sponsor](https://github.com/sponsors/zulip), write a
- **Supporting Zulip**. Advocate for your organization to use Zulip, become a [sponsor](https://github.com/sponsors/zulip), write a
review in the mobile app stores, or
[upvote Zulip](https://zulip.readthedocs.io/en/latest/overview/contributing.html#zulip-outreach) on
product comparison sites.
* **Checking Zulip out**. The best way to see Zulip in action is to drop by
- **Checking Zulip out**. The best way to see Zulip in action is to drop by
the
[Zulip community server](https://zulip.readthedocs.io/en/latest/contributing/chat-zulip-org.html). We
also recommend reading Zulip for
@@ -58,23 +58,23 @@ You might be interested in:
[companies](https://zulip.com/for/companies/), or Zulip for
[working groups and part time communities](https://zulip.com/for/working-groups-and-communities/).
* **Running a Zulip server**. Use a preconfigured [DigitalOcean droplet](https://marketplace.digitalocean.com/apps/zulip),
- **Running a Zulip server**. Use a preconfigured [DigitalOcean droplet](https://marketplace.digitalocean.com/apps/zulip),
[install Zulip](https://zulip.readthedocs.io/en/stable/production/install.html)
directly, or use Zulip's
experimental [Docker image](https://zulip.readthedocs.io/en/latest/production/deployment.html#zulip-in-docker).
Commercial support is available; see <https://zulip.com/plans> for details.
* **Using Zulip without setting up a server**. <https://zulip.com>
- **Using Zulip without setting up a server**. <https://zulip.com>
offers free and commercial hosting, including providing our paid
plan for free to fellow open source projects.
* **Participating in [outreach
- **Participating in [outreach
programs](https://zulip.readthedocs.io/en/latest/overview/contributing.html#outreach-programs)**
like Google Summer of Code.
You may also be interested in reading our [blog](https://blog.zulip.org/) or
following us on [Twitter](https://twitter.com/zulip).
Zulip is distributed under the
[Apache 2.0](https://github.com/zulip/zulip/blob/master/LICENSE) license.
[Apache 2.0](https://github.com/zulip/zulip/blob/main/LICENSE) license.
[beginner-friendly]: https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22

View File

@@ -8,7 +8,7 @@ so you should subscribe if you are running Zulip in production.
We love responsible reports of (potential) security issues in Zulip,
whether in the latest release or our development branch.
Our security contact is security@zulip.com. Reporters should expect a
response within 24 hours.
Please include details on the issue and how you'd like to be credited

View File

@@ -34,10 +34,10 @@ def render_confirmation_key_error(
request: HttpRequest, exception: ConfirmationKeyException
) -> HttpResponse:
if exception.error_type == ConfirmationKeyException.WRONG_LENGTH:
return render(request, "confirmation/link_malformed.html")
return render(request, "confirmation/link_malformed.html", status=404)
if exception.error_type == ConfirmationKeyException.EXPIRED:
return render(request, "confirmation/link_expired.html")
return render(request, "confirmation/link_does_not_exist.html")
return render(request, "confirmation/link_expired.html", status=404)
return render(request, "confirmation/link_does_not_exist.html", status=404)
def generate_key() -> str:
@@ -143,9 +143,9 @@ class ConfirmationType:
_properties = {
Confirmation.USER_REGISTRATION: ConfirmationType("check_prereg_key_and_redirect"),
Confirmation.USER_REGISTRATION: ConfirmationType("get_prereg_key_and_redirect"),
Confirmation.INVITATION: ConfirmationType(
"check_prereg_key_and_redirect", validity_in_days=settings.INVITATION_LINK_VALIDITY_DAYS
"get_prereg_key_and_redirect", validity_in_days=settings.INVITATION_LINK_VALIDITY_DAYS
),
Confirmation.EMAIL_CHANGE: ConfirmationType("confirm_email_change"),
Confirmation.UNSUBSCRIBE: ConfirmationType(
@@ -155,7 +155,7 @@ _properties = {
Confirmation.MULTIUSE_INVITE: ConfirmationType(
"join", validity_in_days=settings.INVITATION_LINK_VALIDITY_DAYS
),
Confirmation.REALM_CREATION: ConfirmationType("check_prereg_key_and_redirect"),
Confirmation.REALM_CREATION: ConfirmationType("get_prereg_key_and_redirect"),
Confirmation.REALM_REACTIVATION: ConfirmationType("realm_reactivation"),
}

View File

@@ -12,7 +12,7 @@
# serve to show the default.
import os
import sys
from typing import Any, Dict, List, Optional
from typing import Any, Dict, Optional
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
@@ -20,7 +20,7 @@ from typing import Any, Dict, List, Optional
# sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from version import ZULIP_VERSION
from version import LATEST_RELEASE_VERSION, ZULIP_VERSION
# -- General configuration ------------------------------------------------
@@ -30,7 +30,14 @@ from version import ZULIP_VERSION
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions: List[str] = []
extensions = [
"myst_parser",
]
myst_enable_extensions = [
"colon_fence",
"substitution",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
@@ -99,6 +106,10 @@ pygments_style = "sphinx"
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
myst_substitutions = {
"LATEST_RELEASE_VERSION": LATEST_RELEASE_VERSION,
}
# -- Options for HTML output ----------------------------------------------
@@ -293,8 +304,6 @@ texinfo_documents = [
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
from recommonmark.transform import AutoStructify
# The suffix(es) of source filenames. You can specify multiple suffix
# as a dictionary mapping file extensions to file types
# https://www.sphinx-doc.org/en/master/usage/markdown.html
@@ -303,39 +312,11 @@ source_suffix = {
".md": "markdown",
}
# Temporary workaround to remove multiple build warnings caused by upstream bug.
# See https://github.com/zulip/zulip/issues/13263 for details.
from commonmark.node import Node
from recommonmark.parser import CommonMarkParser
class CustomCommonMarkParser(CommonMarkParser):
def visit_document(self, node: Node) -> None:
pass
suppress_warnings = [
"myst.header",
]
def setup(app: Any) -> None:
app.add_source_parser(CustomCommonMarkParser)
app.add_config_value(
"recommonmark_config",
{
"enable_eval_rst": True,
# Turn off recommonmark features we aren't using.
"enable_auto_doc_ref": False,
"auto_toc_tree_section": None,
"enable_auto_toc_tree": False,
"enable_math": False,
"enable_inline_math": False,
"url_resolver": lambda x: x,
},
True,
)
# Enable `eval_rst`, and any other features enabled in recommonmark_config.
# Docs: https://recommonmark.readthedocs.io/en/latest/auto_structify.html
# (But NB those docs are for master, not latest release.)
app.add_transform(AutoStructify)
# overrides for wide tables in RTD theme
app.add_css_file("theme_overrides.css") # path relative to _static

View File

@@ -3,24 +3,24 @@
## Guidelines
In order to accommodate all users, Zulip strives to implement accessibility
best practices in its user interface. There are many aspects to accessibility;
here are some of the more important ones to keep in mind.
* All images should have alternative text attributes for the benefit of users
- All images should have alternative text attributes for the benefit of users
who cannot see them (this includes users who are utilizing a voice interface
to free up their eyes to look at something else instead).
* The entire application should be usable via a keyboard (many users are unable
- The entire application should be usable via a keyboard (many users are unable
to use a mouse, and many accessibility aids emulate a keyboard).
* Text should have good enough contrast against the background to enable
- Text should have good enough contrast against the background to enable
even users with moderate visual impairment to be able to read it.
* [ARIA](https://www.w3.org/WAI/intro/aria) (Accessible Rich Internet
- [ARIA](https://www.w3.org/WAI/intro/aria) (Accessible Rich Internet
Application) attributes should be used appropriately to enable screen
readers and other alternative interfaces to navigate the application
effectively.
There are many different standards for accessibility, but the most relevant
one for Zulip is the W3C's [WCAG](https://www.w3.org/TR/WCAG20/) (Web Content
Accessibility Guidelines), currently at version 2.0. Whenever practical, we
should strive for compliance with the AA level of this specification.
(The W3C itself
[recommends not trying](https://www.w3.org/TR/UNDERSTANDING-WCAG20/conformance.html#uc-conf-req1-head)
@@ -30,26 +30,26 @@ as it is not possible for some content.)
## Tools
There are tools available to automatically audit a web page for compliance
with many of the WCAG guidelines. Here are some of the more useful ones:
* [Accessibility Developer Tools][chrome-webstore]
- [Accessibility Developer Tools][chrome-webstore]
This open source Chrome extension from Google adds an accessibility audit to
the "Audits" tab of the Chrome Developer Tools. The audit is performed
the "Audits" tab of the Chrome Developer Tools. The audit is performed
against the page's DOM via JavaScript, allowing it to identify some issues
that a static HTML inspector would miss.
* [aXe](https://www.deque.com/products/axe/) An open source Chrome and Firefox
- [aXe](https://www.deque.com/products/axe/) An open source Chrome and Firefox
extension which runs a somewhat different set of checks than Google's Chrome
extension.
* [Wave](https://wave.webaim.org/) This web application takes a URL and loads
- [Wave](https://wave.webaim.org/) This web application takes a URL and loads
it in a frame, reporting on all the issues it finds with links to more
information. Has the advantage of not requiring installation, but requires
a URL which can be directly accessed by an external site.
* [Web Developer](https://chrispederick.com/work/web-developer/) This browser
- [Web Developer](https://chrispederick.com/work/web-developer/) This browser
extension has many useful features, including a convenient link for opening
the current URL in Wave to get an accessibility report.
Note that these tools cannot catch all possible accessibility problems, and
sometimes report false positives as well. They are a useful aid in quickly
identifying potential problems and checking for regressions, but their
recommendations should not be blindly obeyed.
@@ -57,10 +57,10 @@ recommendations should not be blindly obeyed.
Problems with Zulip's accessibility should be reported as
[GitHub issues](https://github.com/zulip/zulip/issues) with the "accessibility"
label. This label can be added by entering the following text in a separate
comment on the issue:
@zulipbot add "area: accessibility"
> @zulipbot add "area: accessibility"
If you want to help make Zulip more accessible, here is a list of the
[currently open accessibility issues][accessibility-issues].
@@ -70,15 +70,14 @@ If you want to help make Zulip more accessible, here is a list of the
For more information about making Zulip accessible to as many users as
possible, the following resources may be useful.
* [Font Awesome accessibility guide](https://fontawesome.com/how-to-use/on-the-web/other-topics/accessibility),
- [Font Awesome accessibility guide](https://fontawesome.com/how-to-use/on-the-web/other-topics/accessibility),
which is especially helpful since Zulip uses Font Awesome for its icons.
* [Web Content Accessibility Guidelines (WCAG) 2.0](https://www.w3.org/TR/WCAG/)
* [WAI-ARIA](https://www.w3.org/WAI/intro/aria) - Web Accessibility Initiative
- [Web Content Accessibility Guidelines (WCAG) 2.0](https://www.w3.org/TR/WCAG/)
- [WAI-ARIA](https://www.w3.org/WAI/intro/aria) - Web Accessibility Initiative
Accessible Rich Internet Application Suite
* [WebAIM](https://webaim.org/) - Web Accessibility in Mind
* The [MDN page on accessibility](https://developer.mozilla.org/en-US/docs/Web/Accessibility)
* The [Open edX Accessibility Guidelines][openedx-guidelines] for developers
- [WebAIM](https://webaim.org/) - Web Accessibility in Mind
- The [MDN page on accessibility](https://developer.mozilla.org/en-US/docs/Web/Accessibility)
- The [Open edX Accessibility Guidelines][openedx-guidelines] for developers
[chrome-webstore]: https://chrome.google.com/webstore/detail/accessibility-developer-t/fpkknkljclfencbdbgkenhalefipecmb
[openedx-guidelines]: https://edx.readthedocs.io/projects/edx-developer-guide/en/latest/conventions/accessibility.html

View File

@@ -2,21 +2,22 @@
Please include these elements in your bug report to make it easier for us to help you.
* A brief title
- A brief title
* An explanation of what you were expecting vs. the actual result
- An explanation of what you were expecting vs. the actual result
* Steps to take in order to reproduce the buggy behavior
- Steps to take in order to reproduce the buggy behavior
* Whether you are using Zulip in production or in the development
- Whether you are using Zulip in production or in the development
environment, and whether these are old versions
* Whether you are using the web app, a desktop app or a mobile device
- Whether you are using the web app, a desktop app or a mobile device
to access Zulip
* Any additional information that would help: screenshots, GIFs, a
- Any additional information that would help: screenshots, GIFs, a
pastebin of the error log
Further reading:
* [How to write a bug report that will make your engineers love you](https://testlio.com/blog/the-ideal-bug-report/)
* [How to Report Bugs Effectively](https://www.chiark.greenend.org.uk/~sgtatham/bugs.html)
- [How to write a bug report that will make your engineers love you](https://testlio.com/blog/the-ideal-bug-report/)
- [How to Report Bugs Effectively](https://www.chiark.greenend.org.uk/~sgtatham/bugs.html)

View File

@@ -4,38 +4,38 @@
forum for the Zulip community.
You can go through the simple signup process at that link, and then
you will soon be talking to core Zulip developers and other users. To
get help in real time, you will have the best luck finding core
developers roughly between 17:00 UTC and 6:00 UTC, but the sun never
sets on the Zulip community. Most questions get a reply within
minutes to a few hours, depending on the time of day.
## Community norms
* Send test messages to
- Send test messages to
[#test here](https://chat.zulip.org/#narrow/stream/7-test-here) or
as a PM to yourself to avoid disturbing others.
* When asking for help, provide the details needed for others to help
- When asking for help, provide the details needed for others to help
you. E.g. include the full traceback in a code block (not a
screenshot), a link to the code or a WIP PR you're having trouble
debugging, etc.
* Ask questions on streams rather than PMing core contributors.
- Ask questions on streams rather than PMing core contributors.
You'll get answers faster since other people can help, and it makes
it possible for other developers to learn from reading the discussion.
* Use @-mentions sparingly. Unlike IRC or Slack, in Zulip, it's
- Use @-mentions sparingly. Unlike IRC or Slack, in Zulip, it's
usually easy to see which message you're replying to, so you don't
need to mention your conversation partner in every reply.
Mentioning other users is great for timely questions or making sure
someone who is not online sees your message.
* Converse informally; there's no need to use titles like "Sir" or "Madam".
* Use
- Converse informally; there's no need to use titles like "Sir" or "Madam".
- Use
[gender-neutral language](https://en.wikipedia.org/wiki/Gender-neutral_language).
For example, avoid using a pronoun like her or his in sentences like
"Every developer should clean [their] keyboard at least once a week."
* Follow the community [code of conduct](../code-of-conduct.md).
* Participate! Zulip is a friendly and welcoming community, and we
- Follow the community [code of conduct](../code-of-conduct.md).
- Participate! Zulip is a friendly and welcoming community, and we
love meeting new people, hearing about what brought them to Zulip,
and getting their feedback. If you're not sure where to start,
introduce yourself and your interests in
[#new members](https://chat.zulip.org/#narrow/stream/95-new-members),
using your name as the topic.
@@ -44,7 +44,7 @@ minutes to a few hours, depending on the time of day.
The chat.zulip.org community sends several thousand messages every
single week, about half of them to streams that we have included in
the default streams for new users for discoverability. Keeping up
with **everything** happening in the Zulip project is both difficult
and rarely a useful goal.
@@ -54,9 +54,9 @@ streams that are only of occasional interest.
## This is a bleeding edge development server
The chat.zulip.org server is frequently deployed off of `master` from
The chat.zulip.org server is frequently deployed off of `main` from
the Zulip Git repository, so please point out anything you notice that
seems wrong! We catch many bugs that escape code review this way.
The chat.zulip.org server is a development and testing server, not a
production service, so don't use it for anything mission-critical,
@@ -67,59 +67,59 @@ secret/embarrassing, etc.
There are a few streams worth highlighting that are relevant for
everyone, even non-developers:
* [#announce](https://chat.zulip.org/#narrow/stream/1-announce) is for
- [#announce](https://chat.zulip.org/#narrow/stream/1-announce) is for
announcements and discussions thereof; we try to keep traffic there
to a minimum.
* [#feedback](https://chat.zulip.org/#narrow/stream/137-feedback) is for
- [#feedback](https://chat.zulip.org/#narrow/stream/137-feedback) is for
posting feedback on Zulip.
* [#design](https://chat.zulip.org/#narrow/stream/101-design) is where we
- [#design](https://chat.zulip.org/#narrow/stream/101-design) is where we
discuss UI and feature design and collect feedback on potential design
changes. We love feedback, so don't hesitate to speak up!
* [#user community](https://chat.zulip.org/#narrow/stream/138-user-community) is
- [#user community](https://chat.zulip.org/#narrow/stream/138-user-community) is
for Zulip users to discuss their experiences using and adopting Zulip.
* [#production help](https://chat.zulip.org/#narrow/stream/31-production-help)
- [#production help](https://chat.zulip.org/#narrow/stream/31-production-help)
is for production environment related discussions.
* [#test here](https://chat.zulip.org/#narrow/stream/7-test-here) is
- [#test here](https://chat.zulip.org/#narrow/stream/7-test-here) is
for sending test messages without inconveniencing other users :).
We recommend muting this stream when not using it.
There are dozens of streams for development discussions in the Zulip
community (e.g. one for each app, etc.); check out the
[Streams page](https://chat.zulip.org/#streams/all) to see the
descriptions for all of them. Relevant to almost everyone are these:
* [#checkins](https://chat.zulip.org/#narrow/stream/65-checkins) is for
- [#checkins](https://chat.zulip.org/#narrow/stream/65-checkins) is for
progress updates on what you're working on and its status; usually
folks post with their name as the topic. Everyone is welcome to
participate!
* [#development help](https://chat.zulip.org/#narrow/stream/49-development-help)
- [#development help](https://chat.zulip.org/#narrow/stream/49-development-help)
is for asking for help with any Zulip server/webapp development work
(use the app streams for help working on one of the apps).
* [#code review](https://chat.zulip.org/#narrow/stream/91-code-review)
- [#code review](https://chat.zulip.org/#narrow/stream/91-code-review)
is for getting feedback on your work. We encourage all developers
to comment on work posted here, even if you're new to the Zulip
project; reviewing other PRs is a great way to develop experience,
and even just manually testing a proposed new feature and posting
feedback is super helpful.
* [#documentation](https://chat.zulip.org/#narrow/stream/19-documentation)
- [#documentation](https://chat.zulip.org/#narrow/stream/19-documentation)
is where we discuss improving Zulip's user, sysadmin, and developer
documentation.
* [#translation](https://chat.zulip.org/#narrow/stream/58-translation) is
- [#translation](https://chat.zulip.org/#narrow/stream/58-translation) is
for discussing Zulip's translations.
* [#learning](https://chat.zulip.org/#narrow/stream/92-learning) is for
- [#learning](https://chat.zulip.org/#narrow/stream/92-learning) is for
posting great learning resources one comes across.
There are also official private streams, including large ones for
established community contributors (and for GSoC mentors), and small
streams where Kandra Labs staff can discuss customer support,
production server operations, and security issues. Because our
community values are to work in the open, these private streams are
relatively low traffic.
## Searching for past conversations
When [searching][] for previous discussions of a given topic, we
recommend using the `streams:public keyword` set of operators. This
will search the full history of public streams in the organization for
`keyword` (including messages sent before you joined and on public
streams you're not subscribed to).

View File

@@ -1,8 +1,8 @@
# Reviewing Zulip code
Code review is a key part of how Zulip does development! If you've
been contributing to Zulip's code, we'd love for you to do reviews.
This is a guide to how. (With some thoughts for writing code too.)
## Protocol for authors
@@ -10,8 +10,9 @@ When you send a PR, try to think of a good person to review it --
outside of the handful of people who do a ton of reviews -- and
`@`-mention them with something like "`@person`, would you review
this?". Good choices include
* someone based in your timezone or a nearby timezone
* people working on similar things, or in a loosely related area
- someone based in your timezone or a nearby timezone
- people working on similar things, or in a loosely related area
Alternatively, posting a message in
[#code-review](https://chat.zulip.org/#narrow/stream/91-code-review) on [the Zulip
@@ -26,19 +27,19 @@ to dive right into reviewing the PR's core functionality.
### Responding to review feedback
Once you've received a review and resolved any feedback, it's critical
to update the GitHub thread to reflect that. Best practices are to:
* Make sure that CI passes and the PR is rebased onto recent master.
* Post comments on each feedback thread explaining at least how you
- Make sure that CI passes and the PR is rebased onto recent `main`.
- Post comments on each feedback thread explaining at least how you
resolved the feedback, as well as any other useful information
(problems encountered, reasoning for why you picked one of several
options, a test you added to make sure the bug won't recur, etc.).
* Mark any resolved threads as "resolved" in the GitHub UI, if
- Mark any resolved threads as "resolved" in the GitHub UI, if
appropriate.
* Post a summary comment in the main feed for the PR, explaining that
- Post a summary comment in the main feed for the PR, explaining that
this is ready for another review, and summarizing any changes from
the previous version, details on how you tested the changes, new
screenshots/etc. More detail is better than less, as long as you
take the time to write clearly.
If you resolve the feedback, but the PR has merge conflicts, CI
@@ -48,7 +49,7 @@ will assume it isn't ready for review and move on to other work.
If you need help or think an open discussion topic requires more
feedback or a more complex discussion, move the discussion to a topic
in the Zulip development community server. Be sure to provide links
from the GitHub PR to the conversation (and vice versa) so that it's
convenient to read both conversations together.
@@ -60,10 +61,10 @@ Anyone can do a code review -- you don't have to have a ton of
experience, and you don't have to have the power to ultimately merge
the PR. If you
* read the code, see if you understand what the change is
- read the code, see if you understand what the change is
doing and why, and ask questions if you don't; or
* fetch the code (for Zulip server code,
- fetch the code (for Zulip server code,
[tools/fetch-rebase-pull-request][git tool] is super handy), play around
with it in your dev environment, and say what you think about how
the feature works
@@ -74,7 +75,7 @@ those are really helpful contributions.
Doing code reviews is an important part of making the project grow.
It's also an important skill to develop for participating in
open-source projects and working in the industry in general. If
you're contributing to Zulip and have been working in our code for a
little while, we would love for some of your time contributing to come
in the form of doing code reviews!
@@ -86,7 +87,7 @@ the first couple of weeks as you're getting going) doing code reviews.
### Fast replies are key
For the author of a PR, getting feedback quickly is really important
for making progress quickly and staying productive. That means that
if you get @-mentioned on a PR with a request for you to review it,
it helps the author a lot if you reply promptly.
@@ -99,14 +100,14 @@ review the PR.
People in the Zulip project live and work in many timezones, and code
reviewers also need focused chunks of time to write code and do other
things, so an immediate reply isn't always possible. But a good
benchmark is to try to always reply **within one workday**, at least
with a short initial reply, if you're working regularly on Zulip. And
sooner is better.
## Things to look for
* *The CI build.* The tests need to pass. One can investigate
- _The CI build._ The tests need to pass. One can investigate
any failures and figure out what to fix by clicking on a red X next
to the commit hash or the Detail links on a pull request. (Example:
in [#17584](https://github.com/zulip/zulip/pull/17584),
@@ -120,76 +121,76 @@ sooner is better.
See our docs on [continuous integration](../testing/continuous-integration.md)
to learn more.
* *Technical design.* There are a lot of considerations here:
- _Technical design._ There are a lot of considerations here:
security, migration paths/backwards compatibility, cost of new
dependencies, interactions with features, speed of performance, API
changes. Security is especially important and worth thinking about
carefully with any changes to security-sensitive code like views.
* *User interface and visual design.* If frontend changes are
- _User interface and visual design._ If frontend changes are
involved, the reviewer will check out the code, play with the new
UI, and verify it for both quality and consistency with the rest of
the Zulip UI. We highly encourage posting screenshots to save
reviewers time in getting a feel for what the feature looks like --
you'll get a quicker response that way.
* *Error handling.* The code should always check for invalid user
- _Error handling._ The code should always check for invalid user
input. User-facing error messages should be clear and when possible
be actionable (it should be obvious to the user what they need to do
in order to correct the problem).
* *Testing.* The tests should validate that the feature works
- _Testing._ The tests should validate that the feature works
correctly, and specifically test for common error conditions, bad
user input, and potential bugs that are likely for the type of
change being made. Tests that exclude whole classes of potential
bugs are preferred when possible (e.g., the common test suite
`test_markdown.py` between the Zulip server's [frontend and backend
Markdown processors](../subsystems/markdown.md), or the `GetEventsTest` test for
buggy race condition handling).
* *Translation.* Make sure that the strings are marked for
- _Translation._ Make sure that the strings are marked for
[translation].
* *Clear function, argument, variable, and test names.* Every new
- _Clear function, argument, variable, and test names._ Every new
piece of Zulip code will be read many times by other developers, and
future developers will grep for relevant terms when researching a
problem, so it's important that variable names communicate clearly
the purpose of each piece of the codebase.
* *Duplicated code.* Code duplication is a huge source of bugs in
- _Duplicated code._ Code duplication is a huge source of bugs in
large projects and makes the codebase difficult to understand, so we
avoid significant code duplication wherever possible. Sometimes
avoiding code duplication involves some refactoring of existing
code; if so, that should usually be done as its own series of
commits (not squashed into other changes or left as a thing to do
later). That series of commits can be in the same pull request as
the feature that they support, and we recommend ordering the history
of commits so that the refactoring comes *before* the feature. That
of commits so that the refactoring comes _before_ the feature. That
way, it's easy to merge the refactoring (and minimize risk of merge
conflicts) if there are still user experience issues under
discussion for the feature itself.
* *Completeness.* For refactorings, verify that the changes are
- _Completeness._ For refactorings, verify that the changes are
complete. Usually one can check that efficiently using `git grep`,
and it's worth it, as we very frequently find issues by doing so.
* *Documentation updates.* If this changes how something works, does it
- _Documentation updates._ If this changes how something works, does it
update the documentation in a corresponding way? If it's a new
feature, is it documented, and documented in the right place?
* *Good comments.* It's often worth thinking about whether explanation
- _Good comments._ It's often worth thinking about whether explanation
in a commit message or pull request discussion should be included in
a comment, `/docs`, or other documentation. But it's better yet if
verbose explanation isn't needed. We prefer writing code that is
readable without explanation over a heavily commented codebase using
lots of clever tricks.
* *Coding style.* See the Zulip [code-style] documentation for
- _Coding style._ See the Zulip [code-style] documentation for
details. Our goal is to have as much of this as possible verified
via the linters and tests, but there's always going to be unusual
forms of Python/JavaScript style that our tools don't check for.
* *Clear commit messages.* See the [Zulip version
- _Clear commit messages._ See the [Zulip version
control][commit-messages] documentation for details on what we look
for.
@@ -197,16 +198,16 @@ sooner is better.
Some points specific to the Zulip server codebase:
* *Testing -- Backend.* We are trying to maintain ~100% test coverage
- _Testing -- Backend._ We are trying to maintain ~100% test coverage
on the backend, so backend changes should have negative tests for
the various error conditions.
* *Testing -- Frontend.* If the feature involves frontend changes,
- _Testing -- Frontend._ If the feature involves frontend changes,
there should be frontend tests. See the [test
writing][test-writing] documentation for more details.
* *mypy annotations.* New functions should be annotated using [mypy]
- _mypy annotations._ New functions should be annotated using [mypy]
and existing annotations should be updated. Use of `Any`, `ignore`,
and unparameterized containers should be limited to cases where a
more precise type cannot be specified.
@@ -215,7 +216,7 @@ Some points specific to the Zulip server codebase:
To make it easier to review pull requests, if you're working in the
Zulip server codebase, use our [git tool]
`tools/fetch-rebase-pull-request` to check out a pull request locally
and rebase it against master.
and rebase it onto `main`.
If a pull request just needs a little fixing to make it mergeable,
feel free to do that in a new commit, then push your branch to GitHub
@@ -226,14 +227,14 @@ the maintainer time and get the PR merged quicker.
We also strongly recommend reviewers to go through the following resources.
* [The Gentle Art of Patch Review](https://sage.thesharps.us/2014/09/01/the-gentle-art-of-patch-review/)
- [The Gentle Art of Patch Review](https://sage.thesharps.us/2014/09/01/the-gentle-art-of-patch-review/)
article by Sarah Sharp
* [Zulip & Good Code Review](https://www.harihareswara.net/sumana/2016/05/17/0)
- [Zulip & Good Code Review](https://www.harihareswara.net/sumana/2016/05/17/0)
article by Sumana Harihareswara
* [Code Review - A consolidation of advice and stuff from the
sinternet](https://gist.github.com/porterjamesj/002fb27dd70df003646df46f15e898de)
- [Code Review - A consolidation of advice and stuff from the
internet](https://gist.github.com/porterjamesj/002fb27dd70df003646df46f15e898de)
article by James J. Porter
* [Zulip code of conduct](../code-of-conduct.md)
- [Zulip code of conduct](../code-of-conduct.md)
[code-style]: ../contributing/code-style.md
[commit-messages]: ../contributing/version-control.html#commit-messages

View File

@@ -2,7 +2,7 @@
One can summarize Zulip's coding philosophy as a relentless focus on
making the codebase easy to understand and difficult to make dangerous
mistakes in. The majority of work in any large software development
project is understanding the existing code so one can debug or modify
it, and investments in code readability usually end up paying for
themselves when someone inevitably needs to debug or improve the code.
@@ -15,10 +15,10 @@ comments/docstrings, and commit messages (roughly in order of priority
better than writing a comment explaining how the bad interface works).
This page documents code style policies that every Zulip developer
should understand. We aim for this document to be short and focused
only on details that cannot be easily enforced another way (e.g.
through linters, automated tests, subsystem design that makes classes
of mistakes unlikely, etc.). This approach minimizes the cognitive
load of ensuring a consistent coding style for both contributors and
maintainers.
@@ -34,25 +34,29 @@ When in doubt, ask in [chat.zulip.org](https://chat.zulip.org).
You can run them all at once with
```bash
./tools/lint
```
You can set this up as a local Git commit hook with
```bash
tools/setup-git-repo
```
The Vagrant setup process runs this for you.
`lint` runs many lint checks in parallel, including
- JavaScript ([ESLint](https://eslint.org/),
[Prettier](https://prettier.io/))
- Python ([mypy](http://mypy-lang.org/),
[Pyflakes](https://pypi.python.org/pypi/pyflakes),
[Black](https://github.com/psf/black),
[isort](https://pycqa.github.io/isort/))
- templates
- Puppet configuration
- custom checks (e.g. trailing whitespace and spaces-not-tabs)
## Secrets
@@ -66,28 +70,34 @@ to read secrets from `/etc/zulip/secrets.conf`.
Look out for Django code like this:
```python
bars = Bar.objects.filter(...)
for bar in bars:
foo = bar.foo
# Make use of foo
```
...because it equates to:
```python
bars = Bar.objects.filter(...)
for bar in bars:
foo = Foo.objects.get(id=bar.foo.id)
# Make use of foo
```
...which makes a database query for every Bar. While this may be fast
locally in development, it may be quite slow in production! Instead,
tell Django's [QuerySet
API](https://docs.djangoproject.com/en/dev/ref/models/querysets/) to
_prefetch_ the data in the initial query:
```python
bars = Bar.objects.filter(...).select_related()
for bar in bars:
foo = bar.foo # This doesn't take another query, now!
# Make use of foo
```
If you can't rewrite it as a single query, that's a sign that something
is wrong with the database schema. So don't defer this optimization when
@@ -118,7 +128,7 @@ different database queries:
For example, the following will, surprisingly, fail:
```
```python
# Bad example -- will raise!
obj: UserProfile = get_user_profile_by_id(17)
some_objs = UserProfile.objects.get(id=17)
@@ -127,15 +137,15 @@ assert obj in set([some_objs])
You should work with the IDs instead:
```
```python
obj: UserProfile = get_user_profile_by_id(17)
some_objs = UserProfile.objects.get(id=17)
assert obj.id in set([o.id for o in some_objs])
```
### user\_profile.save()
### user_profile.save()
You should always pass the update\_fields keyword argument to .save()
You should always pass the update_fields keyword argument to .save()
when modifying an existing Django model object. By default, .save() will
overwrite every value in the column, which results in lots of race
conditions where unrelated changes made by one thread can be
@@ -145,7 +155,7 @@ object before the first thread wrote out its change.
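A minimal sketch of the pattern (the lookup and the `full_name` value below are illustrative, not taken from the diff above):
```python
from zerver.models import UserProfile  # sketch: assumes Zulip's Django app context

user_profile = UserProfile.objects.get(id=17)
user_profile.full_name = "New Name"
# Only the full_name column is written back, so concurrent changes made
# by another thread to unrelated columns are not clobbered.
user_profile.save(update_fields=["full_name"])
```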
### Using raw saves to update important model objects
In most cases, we already have a function in zerver/lib/actions.py with
a name like do\_activate\_user that will correctly handle lookups,
a name like do_activate_user that will correctly handle lookups,
caching, and notifying running browsers via the event system about your
change. So please check whether such a function exists before writing
new code to modify a model object, since your new code has a good chance
@@ -158,18 +168,19 @@ cause time-related bugs that are hard to catch with a test suite, or bugs
that only show up during daylight savings time.
Good ways to make timezone-aware datetimes are below. We import timezone
libraries as `from datetime import datetime, timezone` and `from
django.utils.timezone import now as timezone_now`.
libraries as `from datetime import datetime, timezone` and
`from django.utils.timezone import now as timezone_now`.
Use:
* `timezone_now()` to get a datetime when Django is available, such as
- `timezone_now()` to get a datetime when Django is available, such as
in `zerver/`.
* `datetime.now(tz=timezone.utc)` when Django is not available, such as
- `datetime.now(tz=timezone.utc)` when Django is not available, such as
for bots and scripts.
* `datetime.fromtimestamp(timestamp, tz=timezone.utc)` if creating a
- `datetime.fromtimestamp(timestamp, tz=timezone.utc)` if creating a
datetime from a timestamp. This is also available as
`zerver.lib.timestamp.timestamp_to_datetime`.
* `datetime.strptime(date_string, format).replace(tzinfo=timezone.utc)` if
- `datetime.strptime(date_string, format).replace(tzinfo=timezone.utc)` if
creating a datetime from a formatted string that is in UTC.
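A small sketch of the recommended constructors (the timestamp and date string are arbitrary examples; the `timezone_now` import only applies where Django is installed):
```python
from datetime import datetime, timezone

from django.utils.timezone import now as timezone_now  # Django code only

when = timezone_now()                                          # inside zerver/
when = datetime.now(tz=timezone.utc)                           # bots and scripts
when = datetime.fromtimestamp(1_600_000_000, tz=timezone.utc)  # from a timestamp
when = datetime.strptime("2021-04-13", "%Y-%m-%d").replace(tzinfo=timezone.utc)

assert when.tzinfo is not None  # all of the above are timezone-aware
```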
Idioms that result in timezone-naive datetimes, and should be avoided, are
@@ -179,10 +190,11 @@ parameter, `datetime.utcnow()` and `datetime.utcfromtimestamp()`, and
the end.
Additional notes:
* Especially in scripts and puppet configuration where Django is not
- Especially in scripts and puppet configuration where Django is not
available, using `time.time()` to get timestamps can be cleaner than
dealing with datetimes.
* All datetimes on the backend should be in UTC, unless there is a good
- All datetimes on the backend should be in UTC, unless there is a good
reason to do otherwise.
### `x.attr('zid')` vs. `rows.id(x)`
@@ -231,10 +243,8 @@ generally use modern
[ECMAScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Language_Resources)
primitives such as [`for … of`
loops](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...of),
[`Array.prototype.{entries, every, filter, find, indexOf, map,
some}`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array),
[`Object.{assign, entries, keys,
values}`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object),
[`Array.prototype.{entries, every, filter, find, indexOf, map, some}`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array),
[`Object.{assign, entries, keys, values}`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object),
[spread
syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax),
and so on. Our Babel configuration automatically transpiles and
@@ -255,9 +265,9 @@ code a lot uglier, in which case it's fine to go up to 120 or so.
### JavaScript and TypeScript
Our JavaScript and TypeScript code is formatted with
[Prettier](https://prettier.io/). You can ask Prettier to reformat
all code via our [linter tool](../testing/linters.md) with `tools/lint
--only=prettier --fix`. You can also [integrate it with your
[Prettier](https://prettier.io/). You can ask Prettier to reformat
all code via our [linter tool](../testing/linters.md) with
`tools/lint --only=prettier --fix`. You can also [integrate it with your
editor](https://prettier.io/docs/en/editors.html).
Combine adjacent on-ready functions, if they are logically related.
@@ -266,33 +276,39 @@ The best way to build complicated DOM elements is a Mustache template
like `static/templates/message_reactions.hbs`. For simpler things
you can use jQuery DOM building APIs like so:
```js
var new_tr = $('<tr />').attr('id', object.id);
```
Passing a HTML string to jQuery is fine for simple hardcoded things
that don't need internationalization:
```js
foo.append('<p id="selected">/</p>');
```
but avoid programmatically building complicated strings.
We used to favor attaching behaviors in templates like so:
```js
<p onclick="select_zerver({{id}})">
```
but there are some reasons to prefer attaching events using jQuery code:
- Potential huge performance gains by using delegated events where
possible
- When calling a function from an `onclick` attribute, `this` is not
bound to the element like you might think
- jQuery does event normalization
Either way, avoid complicated JavaScript code inside HTML attributes;
call a helper function instead.
### HTML / CSS
Our CSS is formatted with [Prettier](https://prettier.io/). You can
ask Prettier to reformat all code via our [linter
tool](../testing/linters.md) with `tools/lint --only=prettier --fix`.
You can also [integrate it with your
@@ -308,37 +324,40 @@ type changes in the future.
### Python
- Our Python code is formatted with
[Black](https://github.com/psf/black) and
[isort](https://pycqa.github.io/isort/). The [linter
tool](../testing/linters.md) enforces this by running Black and
isort in check mode, or in write mode with `tools/lint
--only=black,isort --fix`. You may find it helpful to [integrate
Black](https://black.readthedocs.io/en/stable/editor_integration.html)
and
[isort](https://pycqa.github.io/isort/#installing-isorts-for-your-preferred-text-editor)
with your editor.
- Our Python code is formatted with
[Black](https://github.com/psf/black) and
[isort](https://pycqa.github.io/isort/). The [linter
tool](../testing/linters.md) enforces this by running Black and
isort in check mode, or in write mode with
`tools/lint --only=black,isort --fix`. You may find it helpful to
[integrate
Black](https://black.readthedocs.io/en/stable/editor_integration.html)
and
[isort](https://pycqa.github.io/isort/#installing-isorts-for-your-preferred-text-editor)
with your editor.
- Don't put a shebang line on a Python file unless it's meaningful to
run it as a script. (Some libraries can also be run as scripts, e.g.
to run a test suite.)
- Scripts should be executed directly (`./script.py`), so that the
interpreter is implicitly found from the shebang line, rather than
explicitly overridden (`python script.py`).
- Put all imports together at the top of the file, absent a compelling
reason to do otherwise.
- Unpacking sequences doesn't require list brackets:
```python
[x, y] = xs # unnecessary
x, y = xs # better
```
- For string formatting, use `x % (y,)` rather than `x % y`, to avoid
ambiguity if `y` happens to be a tuple.
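A tiny illustration of the ambiguity (values made up):
```python
y = ("a", "b")

# "%s" % y unpacks the tuple into two format arguments and raises
# "TypeError: not all arguments converted during string formatting".
print("value: %s" % (y,))  # value: ('a', 'b') -- always formats y itself
```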
### Tests
Clear, readable code is important for [tests](../testing/testing.md);
familiarize yourself with our testing frameworks so that you can write
clean, readable tests. Comments about anything subtle about what is
being verified are appreciated.
### Third party code

View File

@@ -10,7 +10,7 @@ integrations, all open source.
Zulip has gained a considerable amount of traction since it was
[released as open source software][oss-release] in late 2015, with
code contributions from [over 700 people](https://zulip.com/team)
from all around the world. Thousands of people use Zulip every single
day, and your work on Zulip will have impact on the daily experiences
of a large and rapidly growing number of people.
@@ -19,7 +19,7 @@ of a large and rapidly growing number of people.
As an organization, we value high-quality, responsive mentorship and
making sure our product quality is extremely high -- you can expect to
experience disciplined code reviews by highly experienced
engineers. Since Zulip is a team chat product, your GSoC experience
with the Zulip project will be highly interactive.
As part of that commitment, Zulip has over 160,000 words of
@@ -31,17 +31,17 @@ the way that it does.
### Our history with Google Open Source Programs
Zulip has been a GSoC mentoring organization since 2016, and we aim
for 15-20 GSoC students each summer. We have some of the highest
standards of any GSoC organization; successful applications generally
have dozens of commits integrated into Zulip or other open source
projects by the time we review their application. See [our
contributing guide](../overview/contributing.md) for details on
getting involved with GSoC.
Zulip participated in GSoC 2016 and mentored three successful students
officially (plus 4 more who did their proposed projects unofficially).
We had 14 (+3) students in 2017, 10 (+3) students in 2018, 17 (+1) in
2019, and 18 in 2020. We've also mentored five Outreachy interns and
hundreds of Google Code-In participants (several of whom are major
contributors to the project today).
@@ -55,7 +55,7 @@ the summer).
[Our guide for having a great summer with Zulip](../contributing/summer-with-zulip.md)
is focused on what one should know once doing a summer project with
Zulip. But it has a lot of useful advice on how we expect students to
interact, above and beyond what is discussed in Google's materials.
[What makes a great Zulip contributor](../overview/contributing.html#what-makes-a-great-zulip-contributor)
@@ -69,7 +69,7 @@ We also recommend reviewing
Finally, keep your eye on
[the GSoC timeline](https://developers.google.com/open-source/gsoc/timeline). The
student application deadline is April 13, 2021. However, as is
discussed in detail later in this document, we recommend against
working on a proposal until 2 weeks before the deadline.
@@ -100,21 +100,21 @@ make contributions, and make a good proposal.
Your application should include the following:
* Details on any experience you have related to the technologies that
- Details on any experience you have related to the technologies that
Zulip has, or related to our product approach.
* Links to materials to help us evaluate your level of experience and
- Links to materials to help us evaluate your level of experience and
how you work, such as personal projects of yours, including any
existing open source or open culture contributions you've made and
any bug reports you've submitted to open source projects.
* Some notes on what you are hoping to get out of your project.
* A description of the project you'd like to do, and why you're
- Some notes on what you are hoping to get out of your project.
- A description of the project you'd like to do, and why you're
excited about it.
* Some notes on why you're excited about working on Zulip.
* A link to the initial contribution(s) you did.
- Some notes on why you're excited about working on Zulip.
- A link to the initial contribution(s) you did.
We expect applicants to either have experience with the technologies
relevant to their project or have strong general programming
experience. We also expect applicants to be excited about learning
how to do disciplined, professional software engineering, where they
can demonstrate through reasoning and automated tests that their code
is correct.
@@ -138,19 +138,19 @@ Also, you're going to find that people give you links to pages that
answer your questions. Here's how that often works:
1. you [try to solve your problem until you get stuck, including
looking through our code and our documentation, then start formulating
your request for
help](https://blogs.akamai.com/2013/10/you-must-try-and-then-you-must-ask.html)
1. you ask your question
1. someone directs you to a document
1. you go read that document, and try to use it to answer your question
1. you find you are confused about a new thing
1. you ask another question
1. now that you have demonstrated that you have the ability to read,
think, and learn new things, someone has a longer talk with you to
answer your new specific question
1. you and the other person collaborate to improve the document that you
read in step 3 :-)
This helps us make a balance between person-to-person discussion and
documentation that everyone can read, so we save time answering common
@@ -166,19 +166,19 @@ post.](https://www.harihareswara.net/sumana/2016/10/12/0)
## Mentors
Zulip has dozens of longtime contributors who sign up to mentoring
projects. We usually decide who will mentor which projects based in
part on who is a good fit for the needs of each student, technical
expertise, and who has available time during the
summer. You can reach us via
[#GSoC](https://chat.zulip.org/#narrow/stream/14-GSoC) on [the Zulip
development community server](../contributing/chat-zulip-org.md),
(compose a new stream message with your name as the topic).
Zulip operates under group mentorship. That means you should
generally post in public streams on chat.zulip.org, not send private
messages, for assistance. Our preferred approach is to just post in
an appropriate public stream on chat.zulip.org and someone will help
you. We list the Zulip contributors who are experts for various
projects by name below; they will likely be able to provide you with
the best feedback on your proposal (feel free to @-mention them in
your Zulip post). In practice, this allows project leadership to
@@ -188,11 +188,11 @@ However, the first and most important thing to do for building a
strong application is to show your skills by contributing to a large
open source project like Zulip, to show that you can work effectively
in a large codebase (it doesn't matter what part of Zulip, and we're
happy to consider work in other open source projects). The quality of
your best work is more important to us than the quantity; so be sure
to test your work before submitting it for review and follow our
coding guidelines (and don't worry if you make mistakes in your first
few contributions! Everyone makes mistakes getting started. Just
make sure you don't make the same mistakes next time).
Once you have several PRs merged (or at least one significant PR
@@ -206,8 +206,8 @@ online.
These are the seeds of ideas; you will need to do research on the
Zulip codebase, read issues on GitHub, and talk with developers to put
together a complete project proposal. It's also fine for you to come
up with your own project ideas. As you'll see below, you can put
together a great project around one of the
[area labels](https://github.com/zulip/zulip/labels) on GitHub; each
has a cluster of problems in one part of the Zulip project that we'd
@@ -217,13 +217,13 @@ We don't believe in labeling projects by difficulty (e.g. a project
that involves writing a lot of documentation will be hard for some
great programmers, and a UI design project might be hard for a great
backend programmer, while a great writer might have trouble doing
performance work). To help you find a great project, we list the
skills needed, and try to emphasize where strong skills with
particular tools are likely to be important for a given project.
For all of our projects, an important skill to develop is a good
command of Git; read [our Git guide](../git/overview.md) in full to
learn how to use it well. Of particular importance is mastering
Git rebase so that you can construct commits that are clearly correct
and explain why they are correct. We highly recommend investing in
learning a [graphical Git client](../git/setup.md) and learning to
@@ -246,13 +246,13 @@ set of 8 issues may not be the right ones to invest in.
For 2021, we are particularly interested in GSoC students who have
strong skills at visual design, HTML/CSS, mobile development,
performance optimization, or Electron. So if you're a student with
those skills and are looking for an organization to join, we'd love to
talk to you!
The Zulip project has a huge surface area, so even when we're focused
on something, a huge amount of essential work goes into other parts of
the project. Every area of Zulip could benefit from the work of a
student with strong programming skills; so don't feel discouraged if
the areas mentioned above are not your main strength.
@@ -273,12 +273,12 @@ CSS](https://github.com/zulip/zulip/).
Zulip has a [nice framework](../documentation/api.md) for writing
API documentation built by past GSoC students based on the OpenAPI
standard, with built-in automated tests of the data in both the Python
and curl examples. However, the documentation isn't yet what we're
hoping for: there are a few dozen endpoints that are missing,
several of which are quite important, the visual design isn't
perfect (especially for e.g. `GET /events`), many templates could be
deleted with a bit of framework effort, etc. See the [API docs area
label][api-docs-area] for many specific projects in the area. Our
goal for the summer is for 1-2 students to resolve all open issues
related to the REST API documentation.
@@ -288,46 +288,46 @@ CSS](https://github.com/zulip/zulip/).
Zulip, including [default stream
groups](https://github.com/zulip/zulip/issues/13670), [Mute
User](https://github.com/zulip/zulip/issues/168), and [public
access](https://github.com/zulip/zulip/issues/13172). Expert: Tim
Abbott. Many of these issues have open PRs with substantial work
towards the goal, but each of them is likely to have dozens of
adjacent or follow-up tasks.
- Fill in gaps, fix bugs, and improve the framework for Zulip's
library of native integrations. We have about 100 integrations, but
there are a handful of important integrations that are missing. The
[integrations label on
GitHub](https://github.com/zulip/zulip/labels/area%3A%20integrations)
lists some of the priorities here (many of which are great
preparatory projects); once those are cleared, we'll likely have
many more. **Skills required**: Strong Python experience, willingness to
do careful manual testing of third-party products. Fluent English,
usability sense and/or technical writing skills are all pluses.
Expert: Eeshan Garg.
- Optimize performance and scalability, either for the web frontend or
the server. Zulip is already one of the faster webapps out there,
but there are a bunch of ideas for how to make it substantially
faster. This is likely a particularly challenging project to do
well, since there are a lot of subtle interactions to understand.
**Skills recommended**: Strong debugging, communication, and code
reading skills are most important here. JavaScript experience; some
Python/Django experience, some skill with CSS, ideally experience
using the Chrome Timeline profiling tools (but you can pick this up
as you go) can be useful depending on what profiling shows. Our
[backend scalability design doc](../subsystems/performance.md) and
the [production issue label][prod-label] (where
performance/scalability issues tend to be filed) may be helpful
reading for the backend part of this. Expert: Steve Howell.
[prod-label]: https://github.com/zulip/zulip/issues?q=is%3Aopen+is%3Aissue+label%3A%22area%3A+production%22
- Extract JavaScript logic modules from the Zulip webapp that we'd
like to be able to share with the Zulip mobile app. This work can have
big benefits in terms of avoiding code duplication for complex
logic. We have prototyped this for a few modules by migrating them to
`static/shared/`; this project will involve closely collaborating
with the mobile team to prioritize the modules to migrate. **Skills
recommended**: JavaScript experience, careful refactoring, API
design, React.
@@ -338,12 +338,12 @@ CSS](https://github.com/zulip/zulip/).
permissions (and implementing the enforcement logic), adding an
OAuth system for presenting those controls to users, as well as
making the /integrations page UI have buttons to create a bot,
rather than sending users to the administration page. **Skills
recommended**: Strong Python/Django; JavaScript, CSS, and design
sense helpful. Understanding of implementing OAuth providers,
e.g. having built a prototype with
[the Django OAuth toolkit](https://django-oauth-toolkit.readthedocs.io/en/latest/)
would be great to demonstrate as part of an application. The
[Zulip integration writing guide](../documentation/integrations.md)
and
[integration documentation](https://zulip.com/integrations/)
@@ -351,69 +351,69 @@ CSS](https://github.com/zulip/zulip/).
and
[the integrations label on GitHub](https://github.com/zulip/zulip/labels/area%3A%20integrations)
has a bunch of good starter issues to demonstrate your skills if
you're interested in this area. Expert: Eeshan Garg.
- Extend Zulip's meta-integration that converts the Slack incoming webhook
API to post messages into Zulip. Zulip has several dozen native
integrations (https://zulip.com/integrations/), but Slack has a
ton more. We should build an interface to make all of Slack's
numerous third-party integrations work with Zulip as well, by
basically building a Zulip incoming webhook interface that accepts
the Slack API (if you just put in a Zulip server URL as your "Slack
server"). **Skills required**: Strong Python experience; experience
with the Slack API a plus. Work should include documenting the
system and advertising it. Expert: Tim Abbott.
- Visual and user experience design work on the core Zulip web UI.
We're particularly excited about students who are interested in
making our CSS clean and readable as part of working on the UI.
**Skills required**: Design, HTML and CSS skills; JavaScript and
illustration experience are helpful. A great application would
include PRs making small, clean improvements to the Zulip UI
(whether logged-in or logged-out pages). Expert: Aman Agrawal.
- Build support for outgoing webhooks and slash commands into Zulip to
improve its chat-ops capabilities. There's an existing
[pull request](https://github.com/zulip/zulip/pull/1393) with a lot
of work on the outgoing webhooks piece of this feature that would
need to be cleaned up and finished, and then we need to build support for slash
commands, some example integrations, and a full set of
documentation and tests. Recommended reading includes Slack's
documentation for these features, the Zulip message sending code
path, and the linked pull request. **Skills required**: Strong
Python/Django skills. Expert: Steve Howell.
- Build a system for managing Zulip bots entirely on the web.
Right now, there's a somewhat cumbersome process where you download
the API bindings, create a bot with an API key, put it in
configuration files, etc. We'd like to move to a model where a bot
could easily progress from being a quick prototype to being a third-party extension to
being built into Zulip. And then for built-in bots, one should be able to click a few
buttons of configuration on the web to set them up and include them in
your organization. We've developed a number of example bots
in the [`zulip_bots`](https://github.com/zulip/python-zulip-api/tree/main/zulip_bots)
PyPI package.
**Skills recommended**: Python and JavaScript/CSS, plus devops
skills (Linux deployment, Docker, Puppet etc.) are all useful here.
Experience writing tools using various popular APIs is helpful for
being able to make good choices. Expert: Steve Howell.
- Improve the UI and visual design of the existing Zulip settings and
administration pages while fixing bugs and adding new settings. The
pages have improved a great deal during recent GSoCs, but because
they have a ton of surface area, there's a lot to do. You can get a
great sense of what needs to be done by playing with the
settings/administration/streams overlays in a development
environment. You can get experience working on the subsystem by
working on some of [our open settings/admin
issues](https://github.com/zulip/zulip/labels/area%3A%20admin).
**Skills recommended**: JavaScript, HTML, CSS, and an eye for visual
design. Expert: Shubham Dhama.
- Build out the administration pages for Zulip to add new permissions
and other settings and features that will make Zulip better for
larger organizations. We get constant requests for these kinds of
features from Zulip users. The Zulip bug tracker has plentiful open
issues ([settings
(admin/org)](https://github.com/zulip/zulip/labels/area%3A%20settings%20%28admin%2Forg%29),
[settings
@@ -422,32 +422,32 @@ CSS](https://github.com/zulip/zulip/).
(user)](https://github.com/zulip/zulip/labels/area%3A%20settings%20%28user%29),
[stream
settings](https://github.com/zulip/zulip/labels/area%3A%20stream%20settings)
) in the space of improving the Zulip administrative UI. Many are
little bite-size fixes in those pages, which are great for getting a
feel for things, but a solid project here would be implementing 5-10
of the major missing features as full-stack development projects.
The first part of this project will be refactoring the admin UI
interfaces to require writing less semi-duplicate code for each
feature. **Skills recommended**: A good mix of Python/Django and
HTML/CSS/JavaScript skill is ideal. The system for adding new
features is [well documented](../tutorials/new-feature-tutorial.md).
Expert: Shubham Dhama.
- Write cool new features for Zulip. Play around with the software,
browse Zulip's issues for things that seem important, and suggest
something you'd like to build! A great project can combine 3-5
significant features. Experts: Depends on the features!
- Work on Zulip's development and testing infrastructure. Zulip is a
project that takes great pride in building great tools for
development, but there's always more to do to make the experience
delightful. Significantly, a full 10% of Zulip's open issues are
ideas for how to improve the project, and are
[in](https://github.com/zulip/zulip/labels/area%3A%20tooling)
[these](https://github.com/zulip/zulip/labels/area%3A%20testing-coverage)
[four](https://github.com/zulip/zulip/labels/area%3A%20testing-infrastructure)
[labels](https://github.com/zulip/zulip/labels/area%3A%20provision)
for tooling improvements. A good place to start is
[backend test coverage](https://github.com/zulip/zulip/issues/7089).
This is a somewhat unusual project, in that it would likely consist
@@ -459,14 +459,14 @@ CSS](https://github.com/zulip/zulip/).
A possible specific larger project in this space is working on
adding [mypy](../testing/mypy.md) stubs
for Django in mypy to make our type checking more powerful. Read
[our mypy blog post](https://blog.zulip.org/2016/10/13/static-types-in-python-oh-mypy/)
for details on how mypy works and is integrated into Zulip. This
specific project is ideal for a strong contributor interested in
type systems.
**Skills required**: Python, some DevOps, and a passion for checking
your work carefully. A strong applicant for this will have
completed several projects in these areas.
Experts: Anders Kaseorg (provision, testing), Steve Howell (tooling, testing).
@@ -476,25 +476,25 @@ CSS](https://github.com/zulip/zulip/).
[python](https://github.com/zulip/python-zulip-api),
[JavaScript](https://github.com/zulip/zulip-js),
[PHP](https://packagist.org/packages/mrferos/zulip-php), and
[Haskell](https://hackage.haskell.org/package/hzulip)). The
JavaScript bindings are a particularly high priority, since they are
a project that hasn't gotten a lot of attention since being adopted
from its original author, and we'd like to convert them to
TypeScript. **Skills required**: Experience with the target
language and API design. Expert: Depends on language.
- Develop [**@zulipbot**](https://github.com/zulip/zulipbot), the GitHub
workflow bot for the Zulip organization and its repositories. By utilizing the
[GitHub API](https://developer.github.com/v3/),
[**@zulipbot**](https://github.com/zulipbot) improves the experience of Zulip
contributors by managing the issues and pull requests in the Zulip repositories,
such as assigning issues to contributors and appropriately labeling issues with
their current status to help contributors gain a better understanding of which
issues are being worked on. Since the project is in its early stages of
development, there are a variety of possible tasks that can be done, including
adding new features, writing unit tests and creating a testing framework, and
writing documentation. **Skills required**: Node.js, ECMAScript 6, and API
experience. Experts: Cynthia Lin, Joshua Pan.
### React Native mobile app
@@ -505,12 +505,12 @@ Experts: Greg Price, Chris Bobbe.
The highest priority for the Zulip project overall is improving the
Zulip React Native mobile app.
- Work on issues and polish for the app. You can see the open issues
[here](https://github.com/zulip/zulip-mobile/issues). There are a
few hundred open issues across the project, and likely many more
problems that nobody has found yet; in the short term, it needs
polish, bug finding/squashing, and debugging. So browse the open
issues, play with the app, and get involved! Goals include parity
with the webapp (in terms of what you can do), parity with Slack (in
terms of the visuals), world-class scrolling and narrowing
performance, and a great codebase.
@@ -518,15 +518,15 @@ Zulip React Native mobile app.
A good project proposal here will bundle together a few focus areas
that you want to make really great (e.g. the message composing,
editing, and reacting experience), that you can work on over the
summer. We'd love to have multiple students working on this area if
we have enough strong applicants.
**Skills required**: Strong programming experience, especially in
reading the documentation of unfamiliar projects and communicating
what you learned. JavaScript and React experience are great pluses,
as is iOS or Android development/design experience. You'll need to
learn React Native as part of getting involved. There are tons of
good online tutorials, courses, etc.
### Electron desktop app
@@ -535,14 +535,14 @@ Code:
Experts: Anders Kaseorg, Akash Nimare, Abhighyan Khaund.
- Contribute to our [Electron-based desktop client
application](https://github.com/zulip/zulip-desktop). There's
plenty of feature/UI work to do, but focus areas for us include
things to (1) improve the release process for the app, using
automated testing, TypeScript, etc. and (2) polish the UI. Browse
the open issues and get involved!
**Skills required**: JavaScript experience, Electron experience. You
can learn Electron as part of your application!
Good preparation for desktop app projects is to (1) try out the app
and see if you can find bugs or polish problems lacking open issues
@@ -556,10 +556,10 @@ Experts: Aman Agrawal, Neil Pilgrim.
- Work on Zulip Terminal, the official terminal client for Zulip.
zulip-terminal is already a basic usable client, but it needs a lot
of work to approach the webapp's quality level. We would be happy
to accept multiple strong students to work on this project. Our
goal for this summer is to improve its quality enough that we can
upgrade it from an alpha to an advertised feature. **Skills
required**: Python 3 development skills, good communication and
project management skills, good at reading code and testing.
@@ -569,7 +569,7 @@ Code: [zulip-archive](https://github.com/zulip/zulip-archive)
Experts: Rein Zustand, Steve Howell
- Work on zulip-archive, which provides a Google-indexable read-only
archive of Zulip conversations. The issue tracker for the project
has a great set of introductory/small projects; the overall goal is
to make the project super convenient to use for our OSS communities.
**Skills useful**: Python 3, reading feedback from users, CSS,
@@ -584,16 +584,16 @@ two before the application deadline. That way, the whole developer
community -- not just the mentors and administrators -- has a chance
to give you feedback and help you improve your proposal.
Where should you publish your draft? We prefer Dropbox Paper or
Google Docs, since those platforms allow people to look at the text
without having to log in or download a particular app, and you can
update the draft as you improve your idea. In either case, you should
post the draft for feedback in chat.zulip.org.
Rough is fine! The ideal first draft to get feedback from the
community on should include primarily (1) links to your contributions
to Zulip (or other projects) and (2) a paragraph or two explaining
what you plan to work on. Your friends are likely better able to help
you improve the sections of your application explaining who you are,
and this helps the community focus feedback on the areas you can most
improve (e.g. either doing more contributions or adjusting the project


@@ -88,14 +88,14 @@ materials](https://developers.google.com/open-source/gsoc/resources/manual).
- If you work in one of the smaller Zulip projects
(e.g. `zulip-terminal`), follow the project on GitHub so you can
keep track of what's happening there. For folks working in
`zulip/zulip`, doing that will send you too many notifications.
So instead, we recommend that you join Zulip's GitHub teams that
relate to your projects and/or interests, so that you see new
issues and PRs coming in that are relevant to your work. When we
label an issue or PR with one of our area labels, `zulipbot` will
automatically mention the relevant teams for that area,
subscribing you to those issues/PR threads. You can browse the
area teams here: https://github.com/orgs/zulip/teams (You need to
be a member of the Zulip organization to see them;
ask Tim for an invite if needed).
@@ -225,7 +225,6 @@ materials](https://developers.google.com/open-source/gsoc/resources/manual).
your contributions to the open source world this summer will be something you
can be proud of for the rest of your life.
## What makes a successful summer
Success for the student means a few things, in order of importance:
@@ -247,7 +246,6 @@ Success for the student means a few things, in order of importance:
student has implemented. That section of code should be more readable,
better-tested, and have clearer documentation.
## Extra notes for mentors
- You're personally accountable for your student having a successful summer. If
@@ -268,9 +266,9 @@ Success for the student means a few things, in order of importance:
plan it, you can get several round trips in per day even with big timezone
differences like USA + India.
- What exactly you focus on in your mentorship will vary from week to week and
depend somewhat on what the student needs. It might be any combination of
these things:
- Helping the student plan, chunk, and prioritize their work.


@@ -11,68 +11,68 @@ helps a lot in preventing bugs.
Commits must be coherent:
- It should pass tests (so test updates needed by a change should be
in the same commit as the original change, not a separate "fix the
tests that were broken by the last commit" commit).
- It should be safe to deploy individually, or explain in detail in
the commit message as to why it isn't (maybe with a [manual] tag).
So implementing a new API endpoint in one commit and then adding the
security checks in a future commit should be avoided -- the security
checks should be there from the beginning.
- Error handling should generally be included along with the code that
might trigger the error.
- TODO comments should be in the commit that introduces the issue or
the functionality with further work required.
Commits should generally be minimal:
- Significant refactorings should be done in a separate commit from
functional changes.
- Moving code from one file to another should be done in a separate
commit from functional changes or even refactoring within a file.
- 2 different refactorings should be done in different commits.
- 2 different features should be done in different commits.
- If you find yourself writing a commit message that reads like a list
of somewhat dissimilar things that you did, you probably should have
just done multiple commits (see the sketch below).
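For example, if you notice that your uncommitted work mixes two unrelated
changes, one way to split it into separate commits is `git add -p` (a
sketch; the commit messages are placeholders for illustration):

```bash
# Sketch: split mixed, uncommitted work into two commits by interactively
# staging only the hunks that belong to each change.
git add -p
git commit -m "stream settings: Fix sorting of stream names."
git add -p
git commit -m "docs: Fix typo in the integrations guide."
```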
When not to be overly minimal:
- For completely new features, you don't necessarily need to split out
new commits for each little subfeature of the new feature. E.g., if
you're writing a new tool from scratch, it's fine to have the
initial tool have plenty of options/features without doing separate
commits for each one. That said, reviewing a 2000-line giant blob of
new code isn't fun, so please be thoughtful about submitting things
in reviewable units.
- Don't bother to split backend commits from frontend commits, even
though the backend can often be coherent on its own.
Other considerations:
- Overly fine commits are easy to squash later, but not vice versa.
So err toward small commits, and the code reviewer can advise on
squashing.
- If a commit you write doesn't pass tests, you should usually fix
that by amending the commit to fix the bug, not writing a new "fix
tests" commit on top of it.
Zulip expects you to structure the commits in your pull requests to form
a clean history before we will merge them. It's best to write your
commits following these guidelines in the first place, but if you don't,
you can always fix your history using `git rebase -i` (more on that
[here](../git/fixing-commits.md)).
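For example, a minimal sketch of cleaning up the last few commits on your
branch (the `3` below is just an illustration of how far back to edit):

```bash
# Sketch: interactively edit the last 3 commits on the current branch.
# In the editor that opens, reorder commits, or mark them with "reword",
# "squash", or "fixup" to rewrite them into a clean history.
git rebase -i HEAD~3
```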
Never mix multiple changes together in a single commit, but it's great
to include several related changes, each in their own commit, in a
single pull request. If you notice an issue that is only somewhat
related to what you were working on, but you feel that it's too minor
to create a dedicated pull request, feel free to append it as an
additional commit in the pull request for your main project (that
commit should have a clear explanation of the bug in its commit
message). This way, the bug gets fixed, but this independent change
is highlighted for reviewers. Or just create a dedicated pull request
for it. Whatever you do, don't squash unrelated changes together in a
single commit; the reviewer will ask you to split the changes out into
their own commits.
@@ -91,21 +91,22 @@ First, check out
of commits with good commit messages.
The first line of the commit message is the **summary**. The summary:
- is written in the imperative (e.g., "Fix ...", "Add ...")
- is kept short (max 76 characters, ideally less), while concisely
explaining what the commit does
- is clear about what part of the code is affected -- often by prefixing
with the name of the subsystem and a colon, like "zjsunit: ..." or "docs: ..."
- is a complete sentence.
### Good summaries:
Below is an example of a good commit summary line. It starts with the
prefix "provision:", using lowercase "**p**". Next, "Improve performance of
install npm." starts with a capital "**I**", uses imperative tense,
and ends with a period.
> _provision: Improve performance of installing npm._
Here are some more positive examples:
@@ -121,16 +122,15 @@ Here are some more positive examples:
> gather_subscriptions: Fix exception handling bad input.
Compare "_gather_subscriptions: Fix exception handling bad input._" with:
Compare "*gather_subscriptions: Fix exception handling bad input.*" with:
* "*gather_subscriptions was broken*", which doesn't explain how
- "_gather_subscriptions was broken_", which doesn't explain how
it was broken (and isn't in the imperative)
* "*Fix exception when given bad input*", in which it's impossible to
- "_Fix exception when given bad input_", in which it's impossible to
tell from the summary what part of the codebase was changed
* "*gather_subscriptions: Fixing exception when given bad input.*",
- "_gather_subscriptions: Fixing exception when given bad input._",
not in the imperative
* "*gather_subscriptions: Fixed exception when given bad input.*",
- "_gather_subscriptions: Fixed exception when given bad input._",
not in the imperative
The summary is followed by a blank line, and then the body of the
@@ -143,29 +143,29 @@ automatically catch common mistakes in the commit message itself.
### Message body:
- The body is written in prose, with full paragraphs; each paragraph should
be separated from the next by a single blank line.
- The body explains:
- why and how the change was made
- any manual testing you did in addition to running the automated tests
- any aspects of the commit that you think are questionable and
you'd like special attention applied to.
- If the commit makes performance improvements, you should generally
include some rough benchmarks showing that it actually improves the
performance.
- When you fix a GitHub issue, [mark that you've fixed the issue in
your commit
message](https://help.github.com/en/articles/closing-issues-via-commit-messages)
so that the issue is automatically closed when your code is merged.
Zulip's preferred style for this is to have the final paragraph of
the commit message read e.g. "Fixes: \#123.".
- Avoid `Partially fixes #1234`; GitHub's regular expressions ignore
the "partially" and close the issue. `Fixes part of #1234` is a good alternative.
- Any paragraph content in the commit message should be line-wrapped
to about 68 characters per line, but no more than 70, so that your
commit message will be reasonably readable in `git log` in a normal
terminal. You may find it helpful to:
- configure Git to use your preferred editor, with the EDITOR
environment variable or `git config --global core.editor`, and
- configure the editor to automatically wrap text to 70 or fewer
columns per line (all text editors support this); see the sketch below.
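For example, a minimal sketch for Vim users (this assumes Vim; any editor
with a hard-wrap setting works the same way):

```bash
# Sketch: have Git open commit messages in Vim, and have Vim hard-wrap
# commit message text at 70 columns. Adapt for your preferred editor.
git config --global core.editor vim
echo 'autocmd FileType gitcommit setlocal textwidth=70' >> ~/.vimrc
```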


@@ -6,89 +6,89 @@ repositories in order to create a better workflow for Zulip contributors.
Its purpose is to work around various limitations in GitHub's
permissions and notifications systems to make it possible to have a
much more democratic workflow for our contributors. It allows anyone
to self-assign or label an issue, not just the core contributors
trusted with full write access to the repository (which is the only
model GitHub supports).
## Usage
- **Claim an issue** — Comment `@zulipbot claim` on the issue you want
to claim; **@zulipbot** will assign you to the issue and label the issue as
**in progress**.
- If you're a new contributor, **@zulipbot** will give you read-only
collaborator access to the repository and leave a welcome message on the
issue you claimed.
- You can also claim an issue that you've opened by including
`@zulipbot claim` in the body of your issue.
- If you accidentally claim an issue you didn't want to claim, comment
`@zulipbot abandon` to abandon an issue.
- **Label your issues** — Add appropriate labels to issues that you opened by
including `@zulipbot add` in an issue comment or the body of your issue
followed by the desired labels enclosed within double quotes (`""`).
- For example, to add the **bug** and **help wanted** labels to your
issue, comment or include `@zulipbot add "bug" "help wanted"` in the
issue body.
- You'll receive an error message if you try to add any labels to your issue
that don't exist in your repository.
- If you accidentally added the wrong labels, you can remove them by commenting
`@zulipbot remove` followed by the desired labels enclosed with double quotes
(`""`).
- **Find unclaimed issues** — Use the [GitHub search
feature](https://help.github.com/en/articles/using-search-to-filter-issues-and-pull-requests)
to find unclaimed issues by adding one of the following filters to your search:
- `-label: "in progress"` (excludes issues labeled with the **in progress** label)
- `no:assignee` (shows issues without assignees)
Issues labeled with the **in progress** label and/or assigned to other users have
already been claimed.
- **Collaborate in area label teams** — Receive notifications on
issues and pull requests within your fields of expertise on the
[Zulip server repository](https://github.com/zulip/zulip) by joining
the Zulip server
[area label teams](https://github.com/orgs/zulip/teams?utf8=✓&query=Server)
(Note: this link only works for members of the Zulip organization;
we'll happily add you if you're interested). These teams correspond
to the repository's
[area labels](https://github.com/zulip/zulip/labels), although some
teams are associated with multiple labels; for example, the **area:
message-editing** and **area: message view** labels are both related
to the
[Server message view](https://github.com/orgs/zulip/teams/server-message-view)
team. Feel free to join as many area label teams as you'd like!
After your request to join an area label team is approved, you'll receive
notifications for any issues labeled with the team's corresponding area
label as well as any pull requests that reference issues labeled with your
team's area label.
- **Track inactive claimed issues** — If a claimed issue has not been updated
for a week, **@zulipbot** will post a comment on the inactive issue to ask the
assignee(s) if they are still working on the issue.
If you see this comment on an issue you claimed, you should post a comment
on the issue to notify **@zulipbot** that you're still working on it.
If **@zulipbot** does not receive a response from the assignee within 3 days
of an inactive issue prompt, **@zulipbot** will automatically remove the
issue's current assignee(s) and the "in progress" label to allow others to
work on an inactive issue.
### Contributing
If you wish to help develop and contribute to **@zulipbot**, check out the
[zulip/zulipbot](https://github.com/zulip/zulipbot) repository on GitHub and read
the project's [contributing
guidelines](https://github.com/zulip/zulipbot/blob/main/.github/CONTRIBUTING.md#contributing) for
more information.


@@ -14,9 +14,9 @@ through the real flow.
The steps to do this are a variation of the steps discussed in the
production documentation, including the comments in
`zproject/prod_settings_template.py`. The differences here are driven
by the fact that `dev_settings.py` is in Git, so it is inconvenient
for local [settings configuration](../subsystems/settings.md). As a
result, in the development environment, we allow setting certain
settings in the untracked file `zproject/dev-secrets.conf` (which
also serves as `/etc/zulip/zulip-secrets.conf`).
@@ -28,105 +28,106 @@ methods supported by Zulip.
Zulip's default EmailAuthBackend authenticates users by verifying
control over their email address, and then allowing them to set a
password for their account. There are two development environment
details worth understanding:
- All of our authentication flows in the development environment have
special links to the `/emails` page (advertised in `/devtools`),
which shows all emails that the Zulip server has "sent" (emails are
not actually sent by the development environment), to make it
convenient to click through the UI of signup, password reset, etc.
- There's a management command,
`manage.py print_initial_password username@example.com`, that prints
out **default** passwords for the development environment users.
Note that if you change a user's password in the development
environment, those passwords will no longer work. It also prints
out the user's **current** API key, as in the example below.
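A minimal sketch (substitute the address of a user that actually exists
in your development database):

```bash
# Sketch: print the default development password and current API key
# for a development user.
./manage.py print_initial_password username@example.com
```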
### Google
- Visit [the Google developer
console](https://console.developers.google.com) and navigate to "APIs
& services" > "Credentials". Create a "Project", which will correspond
to your dev environment.
* Navigate to "APIs & services" > "Library", and find the "Identity
Toolkit API". Choose "Enable".
- Navigate to "APIs & services" > "Library", and find the "Identity
Toolkit API". Choose "Enable".
* Return to "Credentials", and select "Create credentials". Choose
- Return to "Credentials", and select "Create credentials". Choose
"OAuth client ID", and follow prompts to create a consent screen, etc.
For "Authorized redirect URIs", fill in
`http://zulipdev.com:9991/complete/google/` .
- You should get a client ID and a client secret. Copy them. In
`dev-secrets.conf`, set `social_auth_google_key` to the client ID
and `social_auth_google_secret` to the client secret.
### GitHub
- Register an OAuth2 application with GitHub at one of
<https://github.com/settings/developers> or
<https://github.com/organizations/ORGNAME/settings/developers>.
Specify `http://zulipdev.com:9991/complete/github/` as the callback URL.
- You should get a page with settings for your new application,
showing a client ID and a client secret. In `dev-secrets.conf`, set
`social_auth_github_key` to the client ID and `social_auth_github_secret`
to the client secret, as in the sketch below.
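As a rough sketch, the resulting additions to `zproject/dev-secrets.conf`
would look something like this (the values below are placeholders, not real
credentials; substitute the client ID and secret from your GitHub
application):

```bash
# Sketch: append placeholder GitHub OAuth credentials to the untracked
# dev secrets file. Replace the values with your real client ID and secret.
cat >> zproject/dev-secrets.conf <<'EOF'
social_auth_github_key = your-github-client-id
social_auth_github_secret = your-github-client-secret
EOF
```

The same pattern applies to the other backends on this page (e.g.
`social_auth_google_key` and `social_auth_google_secret` for Google).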
### GitLab
- Register an OAuth application with GitLab at
<https://gitlab.com/oauth/applications>.
Specify `http://zulipdev.com:9991/complete/gitlab` as the callback URL.
- You should get a page containing the Application ID and Secret for
your new application. In `dev-secrets.conf`, enter the Application
ID as `social_auth_gitlab_key` and the Secret as
`social_auth_gitlab_secret`.
### Apple
- Visit <https://developer.apple.com/account/resources/>,
Enable App ID and Create a Services ID with the instructions in
<https://help.apple.com/developer-account/?lang=en#/dev1c0e25352> .
When prompted for a "Return URL", enter
`http://zulipdev.com:9991/complete/apple/` .
- [Create a Sign in with Apple private key](https://help.apple.com/developer-account/?lang=en#/dev77c875b7e)
- In `dev-secrets.conf`, set
- `social_auth_apple_services_id` to your
"Services ID" (eg. com.application.your).
- `social_auth_apple_app_id` to "App ID" or "Bundle ID".
This is only required if you are testing Apple auth on iOS.
- `social_auth_apple_key` to your "Key ID".
- `social_auth_apple_team` to your "Team ID".
- Put the private key file you got from Apple at the path
`zproject/dev_apple.key`.
### SAML
- Sign up for a [developer Okta account](https://developer.okta.com/).
- Set up SAML authentication by following
[Okta's documentation](https://developer.okta.com/docs/guides/saml-application-setup/overview/).
Specify:
- `http://localhost:9991/complete/saml/` for the "Single sign on URL".
- `http://localhost:9991` for the "Audience URI (SP Entity ID)".
- Skip "Default RelayState".
- Skip "Name ID format".
- Set `Email` for "Application username format".
- Provide "Attribute statements" of `email` to `user.email`,
`first_name` to `user.firstName`, and `last_name` to `user.lastName`.
- Assign at least one account in the "Assignments" tab. You'll use it for
signing up / logging in to Zulip.
- Visit the big "Setup instructions" button on the "Sign on" tab.
- Edit `zproject/dev-secrets.conf` to add the two values provided:
- Set `saml_url = http...` from "Identity Provider Single Sign-On
URL".
- Set `saml_entity_id = http://...` from "Identity Provider Issuer".
- Download the certificate and put it at the path `zproject/dev_saml.cert`.
- Now you should have working SAML authentication!
- You can sign up to the target realm with the account that you've "assigned"
in the previous steps (if the account's email address is allowed in the realm,
so you may have to change the realm settings to allow the appropriate email domain)
and then you'll be able to log in freely. Alternatively, you can create an account
@@ -136,7 +137,7 @@ to your dev environment.
Some OAuth providers (such as Facebook) require HTTPS on the callback
URL they post back to, which isn't supported directly by the Zulip
development environment. If you run a
[remote Zulip development server](../development/remote.md), we have
instructions for
[an nginx reverse proxy with SSL](../development/remote.html#using-an-nginx-reverse-proxy)
@@ -147,23 +148,24 @@ that you can use for your development efforts.
Before Zulip 2.0, one of the more common classes of bug reports with
Zulip's authentication was users having trouble getting [LDAP
authentication](../production/authentication-methods.html#ldap-including-active-directory)
working. The root cause was that setting up a local LDAP server
for development was difficult, which meant most developers were unable
to work on fixing even simple issues with it.
We solved this problem for our unit tests long ago by using the
popular [fakeldap](https://github.com/zulip/fakeldap) library. And in
2018, we added convenient support for using fakeldap in the Zulip
development environment as well, so that you can go through all the
actual flows for LDAP configuration.
- To enable fakeldap, set `FAKE_LDAP_MODE` in
`zproject/dev_settings.py` to one of the following options. For more
information on these modes, refer to
[our production docs](../production/authentication-methods.html#ldap-including-active-directory):
- `a`: If users' email addresses are in LDAP and used as username.
- `b`: If LDAP only has usernames but email addresses are of the form
username@example.com
- `c`: If LDAP usernames are completely unrelated to email addresses.
- To disable fakeldap, set `FAKE_LDAP_MODE` back to `None`.
@@ -173,8 +175,8 @@ information on these modes, refer to
`ldapuser1`).
- `FAKE_LDAP_NUM_USERS` in `zproject/dev_settings.py` can be used to
specify the number of LDAP users to be added. The default value for
the number of LDAP users is 8.
### Testing avatar and custom profile field synchronization
@@ -184,14 +186,14 @@ contain data one might want to sync, including avatars and custom
profile fields.
We also have configured `AUTH_LDAP_USER_ATTR_MAP` in
`zproject/dev_settings.py` to sync several of those fields. For
example:
- Modes `a` and `b` will set the user's avatar on account creation and
update it when `manage.py sync_ldap_user_data` is run.
* Mode `b` is configured to automatically have the `birthday` and
- Mode `b` is configured to automatically have the `birthday` and
`Phone number` custom profile fields populated/synced.
* Mode `a` is configured to deactivate/reactivate users whose accounts
- Mode `a` is configured to deactivate/reactivate users whose accounts
are disabled in LDAP when `manage.py sync_ldap_user_data` is run.
(Note that you'll likely need to edit
`zerver/lib/dev_ldap_directory.py` to ensure there are some accounts
@@ -202,7 +204,7 @@ example:
For our automated tests, we generally configure custom LDAP data for
each individual test, because that generally means one can understand
exactly what data is being used in the test without looking at other
resources. It also gives us more freedom to edit the development
environment directory without worrying about tests.
## Two factor authentication
@@ -212,13 +214,13 @@ Zulip uses [django-two-factor-auth][0] as a beta 2FA integration.
To enable 2FA, set `TWO_FACTOR_AUTHENTICATION_ENABLED` in settings to
`True`, then log in to Zulip and add an OTP device from the settings
page. Once the device is added, password based authentication will ask
for a one-time-password. In the development environment, this
one-time-password will be printed to the console when you try to
log in. Just copy-paste it into the form field to continue.
Direct development logins don't prompt for 2FA one-time-passwords, so
to test 2FA in development, make sure that you log in using a
password. You can get the passwords for the default test users using
`./manage.py print_initial_password`.
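For example, to exercise the full password + one-time-password flow, you might first grab a password for one of the seeded development users (the email below is just one of the defaults; any test user works):

```console
$ ./manage.py print_initial_password iago@zulip.com
```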
## Password form implementation
@@ -226,7 +228,7 @@ password. You can get the passwords for the default test users using
By default, Zulip uses `autocomplete=off` for password fields where we
enter the current password, and `autocomplete="new-password"` for
password fields where we create a new account or change the existing
password. This prevents the browser from auto-filling the existing
password.
Visit <https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/autocomplete> for more details.


@@ -28,8 +28,8 @@ performs well.
Zulip also supports a wide range of ways to install the Zulip
development environment:
* On Linux platforms, you can **[install directly][install-direct]**.
* On Windows, you can **[install directly][install-via-wsl]** via WSL 2.
- On Linux platforms, you can **[install directly][install-direct]**.
- On Windows, you can **[install directly][install-via-wsl]** via WSL 2.
## Slow internet connections
@@ -65,8 +65,8 @@ need to.
Once you've installed the Zulip development environment, you'll want
to read these documents to learn how to use it:
* [Using the development environment][using-dev-env]
* [Testing][testing] (and [Configuring CI][ci])
- [Using the development environment][using-dev-env]
- [Testing][testing] (and [Configuring CI][ci])
And if you've set up the Zulip development environment on a remote
machine, take a look at our tips for


@@ -16,12 +16,12 @@ need to.
The best way to connect to your server is using the command line tool `ssh`.
* On macOS and Linux/UNIX, `ssh` is a part of Terminal.
* On Windows, `ssh` comes with [Bash for Git][git-bash].
- On macOS and Linux/UNIX, `ssh` is a part of Terminal.
- On Windows, `ssh` comes with [Bash for Git][git-bash].
Open *Terminal* or *Bash for Git*, and connect with the following:
Open _Terminal_ or _Bash for Git_, and connect with the following:
```
```console
$ ssh username@host
```
@@ -32,19 +32,20 @@ networks.
## Setting up user accounts
You will need a non-root user account with sudo privileges to set up
the Zulip development environment. If you have one already, continue
to the next section.
You can create a new user with sudo privileges by running the
following commands as root:
* You can create a `zulipdev` user by running the command `adduser
zulipdev`. Run through the prompts to assign a password and user
information. (You can pick any username you like for this user
account.)
* You can add the user to the sudo group by running the command
`usermod -aG sudo zulipdev`.
* Finally, you can switch to the user by running the command `su -
zulipdev` (or just log in to that user using `ssh`).
- You can create a `zulipdev` user by running the command
`adduser zulipdev`. Run through the prompts to assign a password and
user information. (You can pick any username you like for this user
account.)
- You can add the user to the sudo group by running the command
`usermod -aG sudo zulipdev`.
- Finally, you can switch to the user by running the command
`su - zulipdev` (or just log in to that user using `ssh`).
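Putting those steps together, the whole sequence run as root looks roughly like this (the `zulipdev` username is just the example used above):

```bash
adduser zulipdev            # create the account and answer the prompts
usermod -aG sudo zulipdev   # add it to the sudo group
su - zulipdev               # switch to the new account (or log in via ssh)
```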
## Setting up the development environment
@@ -75,7 +76,7 @@ Once you have set up the development environment, you can start up the
development server with the following command in the directory where
you cloned Zulip:
```
```bash
./tools/run-dev.py --interface=''
```
@@ -93,12 +94,12 @@ developing on your laptop).
To properly secure your remote development environment, you can
[port forward](https://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding)
using ssh instead of running the development environment on an exposed
interface. For example, if you're running Zulip on a remote server
such as a DigitalOcean Droplet or an AWS EC2 instance, you can set up
port-forwarding to access Zulip by running the following command in
your terminal:
```
```bash
ssh -L 3000:127.0.0.1:9991 <username>@<remote_server_ip> -N
```
@@ -112,10 +113,10 @@ environment][rtd-using-dev-env].
To see changes on your remote development server, you need to do one of the following:
* [Edit locally](#editing-locally): Clone Zulip code to your computer and
- [Edit locally](#editing-locally): Clone Zulip code to your computer and
then use your favorite editor to make changes. When you want to see changes
on your remote Zulip development instance, sync with Git.
* [Edit remotely](#editing-remotely): Edit code directly on your remote
- [Edit remotely](#editing-remotely): Edit code directly on your remote
Zulip development instance using a [Web-based IDE](#web-based-ide) (recommended for
beginners) or a [command line editor](#command-line-editors), or a
[desktop IDE](#desktop-gui-editors) using a plugin to sync your
@@ -126,12 +127,12 @@ To see changes on your remote development server, you need to do one of the foll
If you want to edit code locally install your favorite text editor. If you
don't have a favorite, here are some suggestions:
* [atom](https://atom.io/)
* [emacs](https://www.gnu.org/software/emacs/)
* [vim](https://www.vim.org/)
* [spacemacs](https://github.com/syl20bnr/spacemacs)
* [sublime](https://www.sublimetext.com/)
* [PyCharm](https://www.jetbrains.com/pycharm/)
- [atom](https://atom.io/)
- [emacs](https://www.gnu.org/software/emacs/)
- [vim](https://www.vim.org/)
- [spacemacs](https://github.com/syl20bnr/spacemacs)
- [sublime](https://www.sublimetext.com/)
- [PyCharm](https://www.jetbrains.com/pycharm/)
Next, follow our [Git and GitHub guide](../git/index.md) to clone and configure
your fork of zulip on your local computer.
@@ -149,7 +150,7 @@ guide][rtd-git-guide]. In brief, the steps are as follows.
On your **local computer**:
1. Open *Terminal* (macOS/Linux) or *Git for BASH*.
1. Open _Terminal_ (macOS/Linux) or _Git for BASH_.
2. Change directory to where you cloned Zulip (e.g. `cd zulip`).
3. Use `git add` and `git commit` to stage and commit your changes (if you
haven't already).
@@ -160,7 +161,7 @@ Be sure to replace `branchname` with the name of your actual feature branch.
Once `git push` has completed successfully, you are ready to fetch the commits
from your remote development instance:
1. In *Terminal* or *Git BASH*, connect to your remote development
1. In _Terminal_ or _Git BASH_, connect to your remote development
instance with `ssh user@host`.
2. Change to the zulip directory (e.g., `cd zulip`).
3. Fetch new commits from GitHub with `git fetch origin`.
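As a rough end-to-end sketch (the branch name and commit message are placeholders, and the final checkout is an assumption about what you'd typically do next on the remote instance):

```bash
# On your local computer
cd zulip
git add -A
git commit -m "Describe your changes"
git push origin branchname

# On the remote development instance, after `ssh user@host`
cd zulip
git fetch origin
git checkout branchname
```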
@@ -172,10 +173,10 @@ from your remote development instance:
There are a few good ways to edit code in your remote development
environment:
* With a command-line editor like vim or emacs run over SSH.
* With a desktop GUI editor like VS Code or Atom and a plugin for
- With a command-line editor like vim or emacs run over SSH.
- With a desktop GUI editor like VS Code or Atom and a plugin for
syncing your changes to the remote server.
* With a web-based IDE like CodeAnywhere.
- With a web-based IDE like CodeAnywhere.
We document these options below; we recommend using whatever editor
you prefer for development in general.
@@ -193,16 +194,17 @@ Similar packages/extensions exist for other popular code editors as
well; contributions of precise documentation for them are welcome!
- [VSCode Remote - SSH][vscode-remote-ssh]: Lets you use Visual Studio
Code against a remote repository with a similar user experience to
developing locally.
[vscode-remote-ssh]: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh
- [rmate](https://github.com/textmate/rmate) for TextMate + VS Code:
1. Install the extension
[Remote VSCode](https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode).
2. On your remote machine, run:
```
```console
$ mkdir -p ~/bin
$ curl -Lo ~/bin/rmate https://raw.githubusercontent.com/textmate/rmate/master/bin/rmate
$ chmod a+x ~/bin/rmate
@@ -210,11 +212,11 @@ developing locally.
3. Make sure the remote server is running in VS Code (you can
force-start through the Command Palette).
4. SSH to your remote machine using
```
```console
$ ssh -R 52698:localhost:52698 user@example.org
```
5. On your remote machine, run
```
```console
$ rmate [options] file
```
and the file should open up in VS Code. Any changes you make now will be saved remotely.
@@ -226,20 +228,20 @@ a command line text editor on the remote machine.
Two editors often available by default on Linux systems are:
* **Nano**: A very simple, beginner-friendly editor. However, it lacks a lot of
- **Nano**: A very simple, beginner-friendly editor. However, it lacks a lot of
features useful for programming, such as syntax highlighting, so we only
recommended it for quick edits to things like configuration files. Launch by
running command `nano <filename>`. Exit by pressing *Ctrl-X*.
running command `nano <filename>`. Exit by pressing _Ctrl-X_.
* **[Vim](https://www.vim.org/)**: A very powerful editor that can take a while
to learn. Launch by running `vim <filename>`. Quit Vim by pressing *Esc*,
typing `:q`, and then pressing *Enter*. Vim comes with a program to learn it
- **[Vim](https://www.vim.org/)**: A very powerful editor that can take a while
to learn. Launch by running `vim <filename>`. Quit Vim by pressing _Esc_,
typing `:q`, and then pressing _Enter_. Vim comes with a program to learn it
called `vimtutor` (just run that command to start it).
Other options include:
* [emacs](https://www.gnu.org/software/emacs/)
* [spacemacs](https://github.com/syl20bnr/spacemacs)
- [emacs](https://www.gnu.org/software/emacs/)
- [spacemacs](https://github.com/syl20bnr/spacemacs)
##### Web-based IDE
@@ -250,12 +252,12 @@ started working quickly, we recommend web-based IDE
To set up Codeanywhere for Zulip:
1. Create a [Codeanywhere][codeanywhere] account and log in.
2. Create a new **SFTP-SSH** project. Use *Public key* for authentication.
2. Create a new **SFTP-SSH** project. Use _Public key_ for authentication.
3. Click **GET YOUR PUBLIC KEY** to get the new public key that
Codeanywhere generates when you create a new project. Add this public key to
`~/.ssh/authorized_keys` on your remote development instance.
4. Once you've added the new public key to your remote development instance, click
*CONNECT*.
_CONNECT_.
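For step 3, appending the generated key on your remote development instance is a one-liner (the key string is a placeholder for whatever Codeanywhere actually shows you):

```bash
echo "ssh-rsa AAAA...key-from-codeanywhere" >> ~/.ssh/authorized_keys
```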
Now your workspace should look similar to this:
![Codeanywhere workspace][img-ca-workspace]
@@ -264,9 +266,9 @@ Now your workspace should look similar this:
Next, read the following to learn more about developing for Zulip:
* [Git & GitHub guide][rtd-git-guide]
* [Using the development environment][rtd-using-dev-env]
* [Testing][rtd-testing]
- [Git & GitHub guide][rtd-git-guide]
- [Using the development environment][rtd-using-dev-env]
- [Testing][rtd-testing]
[install-direct]: ../development/setup-advanced.html#installing-directly-on-ubuntu-debian-centos-or-fedora
[install-vagrant]: ../development/setup-vagrant.md
@@ -282,7 +284,7 @@ Next, read the following to learn more about developing for Zulip:
For some applications (e.g. developing an OAuth2 integration for
Facebook), you may need your Zulip development to have a valid SSL
certificate. While `run-dev.py` doesn't support that, you can do this
with an `nginx` reverse proxy sitting in front of `run-dev.py`.
The following instructions assume you have a Zulip Droplet working and
@@ -290,32 +292,33 @@ that the user is `zulipdev`; edit accordingly if the situation is
different.
1. First, get an SSL certificate; you can use
[our certbot wrapper script used for production](../production/ssl-certificates.html#certbot-recommended)
by running the following commands as root:
```
# apt install -y crudini
mkdir -p /var/lib/zulip/certbot-webroot/
# if nginx running this will fail and you need to run `service nginx stop`
/home/zulipdev/zulip/scripts/setup/setup-certbot \
hostname.example.com --no-zulip-conf \
--email=username@example.com --method=standalone
```
[our certbot wrapper script used for production](../production/ssl-certificates.html#certbot-recommended)
by running the following commands as root:
```bash
# apt install -y crudini
mkdir -p /var/lib/zulip/certbot-webroot/
# if nginx is running, this will fail and you need to run `service nginx stop`
/home/zulipdev/zulip/scripts/setup/setup-certbot \
hostname.example.com \
--email=username@example.com --method=standalone
```
1. Install nginx configuration:
```
apt install -y nginx-full
cp -a /home/zulipdev/zulip/tools/droplets/zulipdev /etc/nginx/sites-available/
ln -nsf /etc/nginx/sites-available/zulipdev /etc/nginx/sites-enabled/
nginx -t # Verifies your nginx configuration
service nginx reload # Actually enabled your nginx configuration
```
```bash
apt install -y nginx-full
cp -a /home/zulipdev/zulip/tools/droplets/zulipdev /etc/nginx/sites-available/
ln -nsf /etc/nginx/sites-available/zulipdev /etc/nginx/sites-enabled/
nginx -t # Verifies your nginx configuration
service nginx reload # Actually enables your nginx configuration
```
1. Edit `zproject/dev_settings.py` to set `EXTERNAL_URI_SCHEME =
"https://"`, so that URLs served by the development environment
will be HTTPS.
1. Edit `zproject/dev_settings.py` to set
`EXTERNAL_URI_SCHEME = "https://"`, so that URLs served by the
development environment will be HTTPS.
1. Start the Zulip development environment with the following command:
```
```bash
env EXTERNAL_HOST="hostname.example.com" ./tools/run-dev.py --interface=''
```


@@ -1,4 +1,4 @@
```eval_rst
```{eval-rst}
:orphan:
```
@@ -70,14 +70,14 @@ Once your remote dev instance is ready:
Once you've confirmed you can connect to your remote server, take a look at:
* [developing remotely](../development/remote.md) for tips on using the remote dev
- [developing remotely](../development/remote.md) for tips on using the remote dev
instance, and
* our [Git & GitHub guide](../git/index.md) to learn how to use Git with Zulip.
- our [Git & GitHub guide](../git/index.md) to learn how to use Git with Zulip.
Next, read the following to learn more about developing for Zulip:
* [Using the development environment](../development/using.md)
* [Testing](../testing/testing.md)
- [Using the development environment](../development/using.md)
- [Testing](../testing/testing.md)
[github-join]: https://github.com/join
[github-help-add-ssh-key]: https://help.github.com/en/articles/adding-a-new-ssh-key-to-your-github-account


@@ -2,22 +2,22 @@
Contents:
* [Installing directly on Ubuntu, Debian, CentOS, or Fedora](#installing-directly-on-ubuntu-debian-centos-or-fedora)
* [Installing directly on Windows 10 with WSL 2](#installing-directly-on-windows-10-with-wsl-2)
* [Using the Vagrant Hyper-V provider on Windows](#using-the-vagrant-hyper-v-provider-on-windows-beta)
* [Newer versions of supported platforms](#newer-versions-of-supported-platforms)
* [Installing directly on cloud9](#installing-on-cloud9)
- [Installing directly on Ubuntu, Debian, CentOS, or Fedora](#installing-directly-on-ubuntu-debian-centos-or-fedora)
- [Installing directly on Windows 10 with WSL 2](#installing-directly-on-windows-10-with-wsl-2)
- [Using the Vagrant Hyper-V provider on Windows](#using-the-vagrant-hyper-v-provider-on-windows-beta)
- [Newer versions of supported platforms](#newer-versions-of-supported-platforms)
- [Installing directly on cloud9](#installing-on-cloud9)
## Installing directly on Ubuntu, Debian, CentOS, or Fedora
If you'd like to install a Zulip development environment on a computer
that's running one of:
* Ubuntu 20.04 Focal, 18.04 Bionic
* Debian 10 Buster, 11 Bullseye (beta)
* CentOS 7 (beta)
* Fedora 33 (beta)
* RHEL 7 (beta)
- Ubuntu 20.04 Focal, 18.04 Bionic
- Debian 10 Buster, 11 Bullseye (beta)
- CentOS 7 (beta)
- Fedora 33 (beta)
- RHEL 7 (beta)
You can just run the Zulip provision script on your machine.
@@ -26,23 +26,22 @@ If you are using a [remote server](../development/remote.md), see
the
[section on creating appropriate user accounts](../development/remote.html#setting-up-user-accounts).
```eval_rst
.. warning::
There is no supported uninstallation process with this
method. If you want that, use the Vagrant environment, where you can
just do `vagrant destroy` to clean up the development environment.
```
:::{warning}
There is no supported uninstallation process with this
method. If you want that, use the Vagrant environment, where you can
just do `vagrant destroy` to clean up the development environment.
:::
Start by [cloning your fork of the Zulip repository][zulip-rtd-git-cloning]
and [connecting the Zulip upstream repository][zulip-rtd-git-connect]:
```
```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
```
```
```bash
# On CentOS/RHEL, you must first install epel-release, and then python36,
# and finally you must run `sudo ln -nsf /usr/bin/python36 /usr/bin/python3`
# On Fedora, you must first install python3
@@ -64,27 +63,27 @@ the [WSL 2](https://docs.microsoft.com/en-us/windows/wsl/wsl2-about)
installation method described here.
1. Install WSL 2 by following the instructions provided by Microsoft
[here](https://docs.microsoft.com/en-us/windows/wsl/wsl2-install).
1. Install the `Ubuntu 18.04` Linux distribution from the Microsoft
Store.
1. Launch the `Ubuntu 18.04` shell and run the following commands:
```
```bash
sudo apt update && sudo apt upgrade
sudo apt install rabbitmq-server memcached redis-server postgresql
```
1. Open `/etc/rabbitmq/rabbitmq-env.conf` using e.g.:
```
```bash
sudo vim /etc/rabbitmq/rabbitmq-env.conf
```
Add the following lines at the end of your file and save:
```
```ini
NODE_IP_ADDRESS=127.0.0.1
NODE_PORT=5672
```
@@ -92,14 +91,15 @@ installation method described here.
1. Make sure you are inside the WSL disk and not in a Windows mounted disk.
You will run into permission issues if you run `provision` from `zulip`
in a Windows mounted disk.
```
```bash
cd ~ # or cd /home/USERNAME
```
1. [Clone your fork of the Zulip repository][zulip-rtd-git-cloning]
and [connect the Zulip upstream repository][zulip-rtd-git-connect]:
```
```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git ~/zulip
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
@@ -109,7 +109,7 @@ installation method described here.
start it (click `Allow access` if you get popups for Windows Firewall
blocking some services)
```
```bash
# Start database, cache, and other services
./tools/wsl/start_services
# Install/update the Zulip development environment
@@ -120,11 +120,10 @@ installation method described here.
./tools/run-dev.py
```
```eval_rst
.. note::
If you shut down WSL, after starting it again, you will have to manually start
the services using ``./tools/wsl/start_services``.
```
:::{note}
If you shut down WSL, after starting it again, you will have to manually start
the services using `./tools/wsl/start_services`.
:::
1. If you are facing problems or you see error messages after running `./tools/run-dev.py`,
you can try running `./tools/provision` again.
@@ -132,7 +131,7 @@ installation method described here.
1. [Visual Studio Code Remote - WSL](https://code.visualstudio.com/docs/remote/wsl) is
recommended for editing files when developing with WSL.
1. You're done! You can pick up the [documentation on using the
Zulip development
environment](../development/setup-vagrant.html#step-4-developing),
ignoring the parts about `vagrant` (since you're not using it).
@@ -154,7 +153,7 @@ expected.
1. Start by [cloning your fork of the Zulip repository][zulip-rtd-git-cloning]
and [connecting the Zulip upstream repository][zulip-rtd-git-connect]:
```
```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
@@ -169,7 +168,7 @@ expected.
You should get output like this:
```text
```console
Bringing machine 'default' up with 'hyperv' provider...
==> default: Verifying Hyper-V is enabled...
==> default: Verifying Hyper-V is accessible...
@@ -203,36 +202,35 @@ expected.
1. Set the `EXTERNAL_HOST` environment variable.
```bash
```console
(zulip-py3-venv) vagrant@ubuntu-18:/srv/zulip$ export EXTERNAL_HOST="$(hostname -I | xargs):9991"
(zulip-py3-venv) vagrant@ubuntu-18:/srv/zulip$ echo $EXTERNAL_HOST
```
The output will be like:
```text
```console
172.28.122.156:9991
```
Make sure you note this down. This is where your Zulip development web
server can be accessed.
```eval_rst
.. important::
The output of the above command changes every time you restart the Vagrant
development machine. Thus, it will have to be run every time you bring one up.
This quirk is one reason this method is marked experimental.
```
:::{important}
The output of the above command changes every time you restart the Vagrant
development machine. Thus, it will have to be run every time you bring one up.
This quirk is one reason this method is marked experimental.
:::
1. You should now be able to start the Zulip development server.
```bash
```console
(zulip-py3-venv) vagrant@ubuntu-18:/srv/zulip$ ./tools/run-dev.py
```
The output will look like:
```text
```console
Starting Zulip on:
http://172.30.24.235:9991/
@@ -253,11 +251,11 @@ expected.
1. If you get the error `Hyper-V could not initialize memory`, this is
likely because your system has insufficient free memory to start
the virtual machine. You can generally work around this error by
closing all other running programs and running `vagrant up
--provider=hyperv` again. You can reopen the other programs after
the provisioning is completed. If it still isn't enough, try
restarting your system and running the command again.
the virtual machine. You can generally work around this error by
closing all other running programs and running
`vagrant up --provider=hyperv` again. You can reopen the other
programs after the provisioning is completed. If it still isn't
enough, try restarting your system and running the command again.
2. Be patient the first time you run `./tools/run-dev.py`.
@@ -276,7 +274,7 @@ these platforms reliably and easily, so we no longer maintain manual
installation instructions for these platforms.
If `tools/provision` doesn't yet support a newer release of Debian or
Ubuntu that you're using, we'd love to add support for it. It's
likely only a few lines of changes to `tools/lib/provision.py` and
`scripts/lib/setup-apt-repo` if you'd like to do it yourself and
submit a pull request, or you can ask for help in
@@ -291,20 +289,21 @@ that lets you write, run, and debug your code with just a browser. It
includes a code editor, debugger, and terminal.
This section documents how to set up the Zulip development environment
in a Cloud9 workspace. If you don't have an existing Cloud9 account,
you can sign up [here](https://aws.amazon.com/cloud9/).
* Create a Workspace, and select the blank template.
* Resize the workspace to be 1GB of memory and 4GB of disk
- Create a Workspace, and select the blank template.
- Resize the workspace to be 1GB of memory and 4GB of disk
space. (This is under the free limit for both the old Cloud9 and the AWS
Free Tier.)
* Clone the zulip repo: `git clone --config pull.rebase
https://github.com/<your-username>/zulip.git`
* Restart rabbitmq-server since its broken on Cloud9: `sudo service
rabbitmq-server restart`.
* And run provision `cd zulip && ./tools/provision`, once this is done.
* Activate the Zulip virtual environment by `source
/srv/zulip-py3-venv/bin/activate` or by opening a new terminal.
- Clone the zulip repo:
`git clone --config pull.rebase https://github.com/<your-username>/zulip.git`
- Restart rabbitmq-server since it's broken on Cloud9:
`sudo service rabbitmq-server restart`.
- Run provision with `cd zulip && ./tools/provision` (the individual steps are consolidated in the sketch after this list).
- Activate the Zulip virtual environment by
`source /srv/zulip-py3-venv/bin/activate` or by opening a new
terminal.
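Assuming the defaults above, the whole Cloud9 setup boils down to something like this (replace `<your-username>` with your GitHub username):

```bash
git clone --config pull.rebase https://github.com/<your-username>/zulip.git
sudo service rabbitmq-server restart   # rabbitmq-server is broken on Cloud9 by default
cd zulip && ./tools/provision
source /srv/zulip-py3-venv/bin/activate
```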
#### Install zulip-cloud9
@@ -312,7 +311,7 @@ There's a NPM package, `zulip-cloud9`, that provides a wrapper around
the Zulip development server for use in the Cloud9 environment.
Note: `npm i -g zulip-cloud9` does not work in zulip's virtual
environment. Although by default, any packages installed in workspace
folder (i.e. the top level folder) are added to `$PATH`.
```bash
@@ -325,7 +324,7 @@ If you get error of the form `bash: cannot find command zulip-dev`,
you need to start a new terminal.
Your development server would be running at
`https://<workspace-name>-<username>.c9users.io` on port 8080. You
don't need to add `:8080` to your URL, since the Cloud9 proxy should
automatically forward the connection. You might want to visit
[zulip-cloud9 repo](https://github.com/cPhost/zulip-cloud9) and its


@@ -10,16 +10,17 @@ or a Linux container (for Ubuntu) inside which the Zulip server and
all related services will run.
Contents:
* [Requirements](#requirements)
* [Step 0: Set up Git & GitHub](#step-0-set-up-git-github)
* [Step 1: Install prerequisites](#step-1-install-prerequisites)
* [Step 2: Get Zulip code](#step-2-get-zulip-code)
* [Step 3: Start the development environment](#step-3-start-the-development-environment)
* [Step 4: Developing](#step-4-developing)
* [Troubleshooting and common errors](#troubleshooting-and-common-errors)
* [Specifying an Ubuntu mirror](#specifying-an-ubuntu-mirror)
* [Specifying a proxy](#specifying-a-proxy)
* [Customizing CPU and RAM allocation](#customizing-cpu-and-ram-allocation)
- [Requirements](#requirements)
- [Step 0: Set up Git & GitHub](#step-0-set-up-git-github)
- [Step 1: Install prerequisites](#step-1-install-prerequisites)
- [Step 2: Get Zulip code](#step-2-get-zulip-code)
- [Step 3: Start the development environment](#step-3-start-the-development-environment)
- [Step 4: Developing](#step-4-developing)
- [Troubleshooting and common errors](#troubleshooting-and-common-errors)
- [Specifying an Ubuntu mirror](#specifying-an-ubuntu-mirror)
- [Specifying a proxy](#specifying-a-proxy)
- [Customizing CPU and RAM allocation](#customizing-cpu-and-ram-allocation)
**If you encounter errors installing the Zulip development
environment,** check [troubleshooting and common
@@ -32,10 +33,10 @@ server](../contributing/chat-zulip-org.md) for real-time help or
When reporting your issue, please include the following information:
* host operating system
* installation method (Vagrant or direct)
* whether or not you are using a proxy
* a copy of Zulip's `vagrant` provisioning logs, available in
- host operating system
- installation method (Vagrant or direct)
- whether or not you are using a proxy
- a copy of Zulip's `vagrant` provisioning logs, available in
`/var/log/provision.log` on your virtual machine
### Requirements
@@ -65,7 +66,7 @@ to GitHub working on your machine.
Follow our [Git guide][set-up-git] in order to install Git, set up a
GitHub account, create an SSH key to access code on GitHub
efficiently, etc. Be sure to create an SSH key and add it to your
GitHub account using
[these instructions](https://help.github.com/en/articles/generating-an-ssh-key).
@@ -73,10 +74,10 @@ GitHub account using
Jump to:
* [macOS](#macos)
* [Ubuntu](#ubuntu)
* [Debian](#debian)
* [Windows](#windows-10)
- [macOS](#macos)
- [Ubuntu](#ubuntu)
- [Debian](#debian)
- [Windows](#windows-10)
#### macOS
@@ -94,14 +95,14 @@ Now you are ready for [Step 2: Get Zulip code](#step-2-get-zulip-code).
##### 1. Install Vagrant, Docker, and Git
```
```console
christie@ubuntu-desktop:~
$ sudo apt install vagrant docker.io git
```
##### 2. Add yourself to the `docker` group:
```
```console
christie@ubuntu-desktop:~
$ sudo adduser $USER docker
Adding user `christie' to group `docker' ...
@@ -109,10 +110,10 @@ Adding user christie to group docker
Done.
```
You will need to reboot for this change to take effect. If it worked,
you will see `docker` in your list of groups:
```
```console
christie@ubuntu-desktop:~
$ groups | grep docker
christie adm cdrom sudo dip plugdev lpadmin sambashare docker
@@ -124,9 +125,9 @@ If you had previously installed and removed an older version of
Docker, an [Ubuntu
bug](https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1844894)
may prevent Docker from being automatically enabled and started after
installation. You can check using the following:
```
```console
$ systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
@@ -137,7 +138,7 @@ If the service is not running, you'll see `Active: inactive (dead)` on
the second line, and will need to enable and start the Docker service
using the following:
```
```bash
sudo systemctl unmask docker
sudo systemctl enable docker
sudo systemctl start docker
@@ -154,16 +155,15 @@ Debian](https://docs.docker.com/install/linux/docker-ce/debian/).
#### Windows 10
```eval_rst
.. note::
We recommend using `WSL 2 for Windows development <../development/setup-advanced.html#installing-directly-on-windows-10-with-wsl-2>`_.
```
:::{note}
We recommend using [WSL 2 for Windows development](../development/setup-advanced.html#installing-directly-on-windows-10-with-wsl-2).
:::
1. Install [Git for Windows][git-bash], which installs *Git BASH*.
1. Install [Git for Windows][git-bash], which installs _Git BASH_.
2. Install [VirtualBox][vbox-dl] (latest).
3. Install [Vagrant][vagrant-dl] (latest).
(Note: While *Git BASH* is recommended, you may also use [Cygwin][cygwin-dl].
(Note: While _Git BASH_ is recommended, you may also use [Cygwin][cygwin-dl].
If you do, make sure to **install default required packages** along with
**git**, **curl**, **openssh**, and **rsync** binaries.)
@@ -189,13 +189,13 @@ In **Git for BASH**:
Open **Git BASH as an administrator** and run:
```
```console
$ git config --global core.symlinks true
```
Now confirm the setting:
```
```console
$ git config core.symlinks
true
```
@@ -210,7 +210,7 @@ In **Cygwin**:
Open a Cygwin window **as an administrator** and do this:
```
```console
christie@win10 ~
$ echo 'export "CYGWIN=$CYGWIN winsymlinks:native"' >> ~/.bash_profile
```
@@ -218,7 +218,7 @@ $ echo 'export "CYGWIN=$CYGWIN winsymlinks:native"' >> ~/.bash_profile
Next, close that Cygwin window and open another. If you `echo` $CYGWIN you
should see:
```
```console
christie@win10 ~
$ echo $CYGWIN
winsymlinks:native
@@ -229,7 +229,7 @@ Now you are ready for [Step 2: Get Zulip code](#step-2-get-zulip-code).
(Note: The **GitHub Desktop client** for Windows has a bug where it
will automatically set `git config core.symlink false` on a repository
if you use it to clone a repository, which will break the Zulip
development environment, because we use symbolic links. For that
reason, we recommend avoiding using GitHub Desktop client to clone
projects and to instead follow these instructions exactly.)
@@ -244,7 +244,7 @@ projects and to instead follow these instructions exactly.)
[clone your fork of the Zulip repository](../git/cloning.html#step-1b-clone-to-your-machine) and
[connect the Zulip upstream repository](../git/cloning.html#step-1c-connect-your-fork-to-zulip-upstream):
```
```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
@@ -255,7 +255,7 @@ This will create a 'zulip' directory and download the Zulip code into it.
Don't forget to replace YOURUSERNAME with your Git username. You will see
something like:
```
```console
christie@win10 ~
$ git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
Cloning into 'zulip'...
@@ -276,7 +276,7 @@ environment](#step-3-start-the-development-environment).
Change into the zulip directory and tell vagrant to start the Zulip
development environment with `vagrant up`:
```
```bash
# On Windows or macOS:
cd zulip
vagrant plugin install vagrant-vbguest
@@ -298,56 +298,57 @@ does the following:
- runs the `tools/provision` script inside the virtual machine/container, which
downloads all required dependencies, sets up the python environment for
the Zulip development server, and initializes a default test
database. We call this process "provisioning", and it is documented
in some detail in our [dependencies documentation](../subsystems/dependencies.md).
You will need an active internet connection during the entire
process. (See [Specifying a proxy](#specifying-a-proxy) if you need a
proxy to access the internet.) `vagrant up` can fail while
provisioning if your Internet connection is unreliable. To retry, you
can use `vagrant provision` (`vagrant up` will just boot the guest
without provisioning after the first time). Other common issues are
documented in the
[Troubleshooting and common errors](#troubleshooting-and-common-errors)
section. If that doesn't help, please visit
[#provision help](https://chat.zulip.org/#narrow/stream/21-provision-help)
in the [Zulip development community server](../contributing/chat-zulip-org.md) for
real-time help.
On Windows, you will see the message `The system cannot find the path
specified.` several times. This is normal and is not a problem.
On Windows, you will see the message
`The system cannot find the path specified.` several times. This is
normal and is not a problem.
Once `vagrant up` has completed, connect to the development
environment with `vagrant ssh`:
```
```console
christie@win10 ~/zulip
$ vagrant ssh
```
You should see output that starts like this:
```
```console
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-54-generic x86_64)
```
Congrats, you're now inside the Zulip development environment!
You can confirm this by looking at the command prompt, which starts
with `(zulip-py3-venv)vagrant@`. If it just starts with `vagrant@`, your
provisioning failed and you should look at the
[troubleshooting section](#troubleshooting-and-common-errors).
Next, start the Zulip server:
```
```console
(zulip-py3-venv) vagrant@ubuntu-bionic:/srv/zulip
$ ./tools/run-dev.py
```
You will see several lines of output starting with something like:
```
```console
2016-05-04 22:20:33,895 INFO: process_fts_updates starting
Recompiling templates
2016-05-04 18:20:34,804 INFO: Not in recovery; listening for FTS updates
@@ -362,9 +363,10 @@ Quit the server with CTRL-C.
2016-05-04 18:20:40,722 INFO Tornado 95.5% busy over the past 0.0 seconds
Performing system checks...
```
And ending with something similar to:
```
```console
http://localhost:9994/webpack-dev-server/
webpack result is served from http://localhost:9991/webpack/
content is served from /srv/zulip
@@ -385,7 +387,7 @@ The Zulip server will continue to run and send output to the terminal window.
When you navigate to Zulip in your browser, check your terminal and you
should see something like:
```
```console
2016-05-04 18:21:57,547 INFO 127.0.0.1 GET 302 582ms (+start: 417ms) / (unauth@zulip via ?)
[04/May/2016 18:21:57]"GET / HTTP/1.0" 302 0
2016-05-04 18:21:57,568 INFO 127.0.0.1 GET 301 4ms /login (unauth@zulip via ?)
@@ -409,7 +411,7 @@ development environment on the virtual machine/container.
Each component of the Zulip development server will automatically
restart itself or reload data appropriately when you make changes. So,
to see your changes, all you usually have to do is reload your
browser. More details on how this works are available below.
Zulip's whitespace rules are all enforced by linters, so be sure to
run `tools/lint` often to make sure you're following our coding style
@@ -438,10 +440,10 @@ guide][rtd-git-guide].
If after rebasing onto a new version of the Zulip server, you receive
new errors while starting the Zulip server or running tests, this is
probably not because Zulip's master branch is broken. Instead, this
probably not because Zulip's `main` branch is broken. Instead, this
is likely because we've recently merged changes to the development
environment provisioning process that you need to apply to your
development environment. To update your environment, you'll need to
re-provision your vagrant machine using `vagrant provision` (this just
runs `tools/provision` from your Zulip checkout inside the Vagrant
guest); this should complete in about a minute.
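For instance, after updating your branch, re-provisioning is just the following (the Git commands assume the `upstream` remote added during setup and are only illustrative; the last line is the required step):

```bash
git fetch upstream
git rebase upstream/main
vagrant provision   # re-runs tools/provision inside the guest
```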
@@ -460,16 +462,16 @@ help.
If you ever want to recreate your development environment again from
scratch (e.g. to test a change you've made to the provisioning
process, or because you think something is broken), you can do so
using `vagrant destroy` and then `vagrant up`. This will usually be
much faster than the original `vagrant up` since the base image is
already cached on your machine (it takes about 5 minutes to run with a
fast Internet connection).
Any additional programs (e.g. Zsh, emacs, etc.) or configuration that
you may have installed in the development environment will be lost
when you recreate it. To address this, you can create a script called
`tools/custom_provision` in your Zulip Git checkout, and place any
extra setup commands there. Vagrant will run `tools/custom_provision`
every time you run `vagrant provision` (or create a Vagrant guest via
`vagrant up`).
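As a minimal sketch, a `tools/custom_provision` script along the lines described above could look like this (the packages are just examples of the personal tooling mentioned earlier; you'll presumably also want to `chmod +x` it):

```bash
#!/usr/bin/env bash
# Reinstall personal tools whenever the guest is (re)provisioned.
set -e
sudo apt-get install -y zsh emacs
```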
@@ -484,7 +486,7 @@ can halt vagrant from another Terminal/Git BASH window.
From the window where run-dev.py is running:
```
```console
2016-05-04 18:33:13,330 INFO 127.0.0.1 GET 200 92ms /register/ (unauth@zulip via ?)
^C
KeyboardInterrupt
@@ -493,9 +495,10 @@ logout
Connection to 127.0.0.1 closed.
christie@win10 ~/zulip
```
Now you can suspend the development environment:
```
```console
christie@win10 ~/zulip
$ vagrant suspend
==> default: Saving VM state and suspending execution...
@@ -503,7 +506,7 @@ $ vagrant suspend
If `vagrant suspend` doesn't work, try `vagrant halt`:
```
```console
christie@win10 ~/zulip
$ vagrant halt
==> default: Attempting graceful shutdown of VM...
@@ -520,7 +523,7 @@ pass the `--provider` option required above). You will also need to
connect to the virtual machine with `vagrant ssh` and re-start the
Zulip server:
```
```console
christie@win10 ~/zulip
$ vagrant up
$ vagrant ssh
@@ -533,15 +536,15 @@ $ ./tools/run-dev.py
Next, read the following to learn more about developing for Zulip:
* [Git & GitHub guide][rtd-git-guide]
* [Using the development environment][rtd-using-dev-env]
* [Testing][rtd-testing] (and [Configuring CI][ci] to
run the full test suite against any branches you push to your fork,
which can help you optimize your development workflow).
- [Git & GitHub guide][rtd-git-guide]
- [Using the development environment][rtd-using-dev-env]
- [Testing][rtd-testing] (and [Configuring CI][ci] to
run the full test suite against any branches you push to your fork,
which can help you optimize your development workflow).
### Troubleshooting and common errors
Below you'll find a list of common errors and their solutions. Most
issues are resolved by just provisioning again (by running
`./tools/provision` (from `/srv/zulip`) inside the Vagrant guest or
equivalently `vagrant provision` from outside).
@@ -549,17 +552,17 @@ equivalently `vagrant provision` from outside).
If these solutions aren't working for you or you encounter an issue not
documented below, there are a few ways to get further help:
* Ask in [#provision help](https://chat.zulip.org/#narrow/stream/21-provision-help)
- Ask in [#provision help](https://chat.zulip.org/#narrow/stream/21-provision-help)
in the [Zulip development community server](../contributing/chat-zulip-org.md).
* [File an issue](https://github.com/zulip/zulip/issues).
- [File an issue](https://github.com/zulip/zulip/issues).
When reporting your issue, please include the following information:
* host operating system
* installation method (Vagrant or direct)
* whether or not you are using a proxy
* a copy of Zulip's `vagrant` provisioning logs, available in
`/var/log/provision.log` on your virtual machine. If you choose to
- host operating system
- installation method (Vagrant or direct)
- whether or not you are using a proxy
- a copy of Zulip's `vagrant` provisioning logs, available in
`/var/log/provision.log` on your virtual machine. If you choose to
post just the error output, please include the **beginning of the
error output**, not just the last few lines.
@@ -568,11 +571,11 @@ usually helpful.
#### Vagrant guest doesn't show (zulip-py3-venv) at start of prompt
This is caused by provisioning failing to complete successfully. You
can see the errors in `/var/log/provision.log`; it should end with
something like this:
```
```text
ESC[94mZulip development environment setup succeeded!ESC[0m
```
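To see what actually went wrong, one option (a sketch; any pager or editor works too) is to look at the end of that log from outside the guest:

```console
$ vagrant ssh -- 'tail -n 50 /var/log/provision.log'
```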
@@ -589,7 +592,8 @@ shell and run `vagrant ssh` again to get the virtualenv setup properly.
#### Vagrant was unable to mount VirtualBox shared folders
For the following error:
```
```console
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel
@@ -603,19 +607,19 @@ was:
If this error starts happening unexpectedly, then just run:
```
```bash
vagrant halt
vagrant up
```
to reboot the guest. After this, you can do `vagrant provision` and
`vagrant ssh`.
#### ssl read error
If you receive the following error while running `vagrant up`:
```
```console
SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
```
@@ -627,14 +631,14 @@ better network connection).
When running `vagrant up` or `provision`, if you see the following error:
```
```console
==> default: E:unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
```
It means that your local apt repository has been corrupted, which can
usually be resolved by executing the command:
```
```bash
apt-get -f install
```
@@ -642,12 +646,12 @@ apt-get -f install
On running `vagrant ssh`, if you see the following error:
```
```console
ssh_exchange_identification: Connection closed by remote host
```
It usually means the Vagrant guest is not running, which is usually
solved by rebooting the Vagrant guest via `vagrant halt; vagrant up`. See
[Vagrant was unable to communicate with the guest machine](#vagrant-was-unable-to-communicate-with-the-guest-machine)
for more details.
@@ -655,7 +659,7 @@ for more details.
If you receive the following error while running `vagrant up`:
```
```console
==> default: Traceback (most recent call last):
==> default: File "./emoji_dump.py", line 75, in <module>
==> default:
@@ -669,9 +673,9 @@ Then Vagrant was not able to create a symbolic link.
First, if you are using Windows, **make sure you have run Git BASH (or
Cygwin) as an administrator**. By default, only administrators can
create symbolic links on Windows. Additionally [UAC][windows-uac], a
Windows feature intended to limit the impact of malware, can prevent
even administrator accounts from creating symlinks. [Turning off
UAC][disable-uac] will allow you to create symlinks. You can also try
some of the solutions mentioned
[here](https://superuser.com/questions/124679/how-do-i-create-a-link-in-windows-7-home-premium-as-a-regular-user).
@@ -681,7 +685,7 @@ some of the solutions mentioned
If you ran Git BASH as administrator but you already had VirtualBox
running, you might still get this error because VirtualBox is not
running as administrator. In that case: close the Zulip VM with
`vagrant halt`; close any other VirtualBox VMs that may be running;
exit VirtualBox; and try again with `vagrant up --provision` from a
Git BASH running as administrator.
@@ -697,7 +701,7 @@ Get the name of your virtual machine by running `vboxmanage list vms` and
then print out the custom settings for this virtual machine with
`vboxmanage getextradata YOURVMNAME enumerate`:
```
```console
christie@win10 ~/zulip
$ vboxmanage list vms
"zulip_default_1462498139595_55484" {5a65199d-8afa-4265-b2f6-6b1f162f157d}
@@ -716,7 +720,7 @@ If `vboxmanage enumerate` prints nothing, or shows a value of 0 for
VBoxInternal2/SharedFoldersEnableSymlinksCreate/srv_zulip, then enable
symbolic links by running this command in Terminal/Git BASH/Cygwin:
```
```bash
vboxmanage setextradata YOURVMNAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/srv_zulip 1
```
@@ -727,10 +731,10 @@ The virtual machine needs to be shut down when you run this command.
If you get an error message on Windows about lack of Windows Home
support for Hyper-V when running `vagrant up`, the problem is that
Windows is incorrectly attempting to use Hyper-V rather than
Virtualbox as the virtualization provider. You can fix this by
explicitly passing the virtualbox provider to `vagrant up`:
```
```console
christie@win10 ~/zulip
$ vagrant up --provider=virtualbox
```
@@ -739,15 +743,15 @@ $ vagrant up --provide=virtualbox
If you see the following error after running `vagrant up`:
```
```console
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Error: Connection timeout. Retrying...
default: Error: Connection timeout. Retrying...
default: Error: Connection timeout. Retrying...
```
A likely cause is that hardware virtualization is not enabled for your
computer. This must be done via your computer's BIOS settings. Look for a
setting called VT-x (Intel) or AMD-V (AMD).
@@ -762,7 +766,7 @@ this post](https://stackoverflow.com/questions/22575261/vagrant-stuck-connection
If you see the following error when you run `vagrant up`:
```
```console
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
@@ -782,26 +786,26 @@ the timeout ("config.vm.boot_timeout") value.
```
This has a range of possible causes, that usually amount to a bug in
Virtualbox or Vagrant. If you see this error, you usually can fix it
by rebooting the guest via `vagrant halt; vagrant up`.
#### Vagrant up fails with subprocess.CalledProcessError
The `vagrant up` command basically does the following:
* Downloads an Ubuntu image and starts it using a Vagrant provider.
* Uses `vagrant ssh` to connect to that Ubuntu guest, and then runs
- Downloads an Ubuntu image and starts it using a Vagrant provider.
- Uses `vagrant ssh` to connect to that Ubuntu guest, and then runs
`tools/provision`, which has a lot of subcommands that are
executed via Python's `subprocess` module. These errors mean that
one of those subcommands failed.
To debug such errors, you can log in to the Vagrant guest machine by
running `vagrant ssh`, which should present you with a standard shell
prompt. You can debug interactively by using e.g. `cd zulip &&
./tools/provision`, and then running the individual subcommands
that failed. Once you've resolved the problem, you can rerun
`tools/provision` to proceed; the provisioning system is designed
to recover well from failures.
prompt. You can debug interactively by using e.g.
`cd zulip && ./tools/provision`, and then running the individual
subcommands that failed. Once you've resolved the problem, you can
rerun `tools/provision` to proceed; the provisioning system is
designed to recover well from failures.
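Concretely, such a debugging session might start like this (a sketch; which subcommand you rerun depends on where provisioning failed):

```console
$ vagrant ssh
vagrant@ubuntu-bionic:~$ cd zulip && ./tools/provision
```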
The Zulip provisioning system is generally highly reliable; the most common
cause of issues here is a poor network connection (or one where you need a
@@ -809,18 +813,19 @@ proxy to access the Internet and haven't [configured the development
environment to use it](#specifying-a-proxy).
Once you've provisioned successfully, you'll get output like this:
```
```console
Zulip development environment setup succeeded!
(zulip-py3-venv) vagrant@vagrant-base-trusty-amd64:~/zulip$
```
If the `(zulip-py3-venv)` part is missing, this is because your
installation failed the first time before the Zulip virtualenv was
created. You can fix this by just closing the shell and running
`vagrant ssh` again, or using `source /srv/zulip-py3-venv/bin/activate`.
Finally, if you encounter any issues that weren't caused by your
Internet connection, please report them! We try hard to keep Zulip
development environment provisioning free of bugs.
##### `pip install` fails during `vagrant up` on Ubuntu
@@ -829,14 +834,14 @@ Likely causes are:
1. Networking issues
2. Insufficient RAM. Check whether you've allotted at least two
gigabytes of RAM, which is the minimum Zulip
[requires](../development/setup-vagrant.html#requirements). If
not, go to your VM settings and increase the RAM, then restart
the VM.
gigabytes of RAM, which is the minimum Zulip
[requires](../development/setup-vagrant.html#requirements). If
not, go to your VM settings and increase the RAM, then restart
the VM.
##### yarn install warnings
```
```console
$ yarn install
yarn install v0.24.5
[1/4] Resolving packages...
@@ -853,7 +858,7 @@ It is okay to proceed and start the Zulip server.
#### VBoxManage errors related to VT-x or WHvSetupPartition
```
```console
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
@@ -866,7 +871,7 @@ VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap,
or
```
```console
Stderr: VBoxManage.exe: error: Call to WHvSetupPartition failed: ERROR_SUCCESS (Last=0xc000000d/87) (VERR_NEM_VM_CREATE_FAILED)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap, interface IConsole
```
@@ -875,14 +880,15 @@ First, ensure that hardware virtualization support (VT-x or AMD-V) is
enabled in your BIOS.
If the error persists, you may have run into an incompatibility
between VirtualBox and Hyper-V on Windows. To disable Hyper-V, open
command prompt as administrator, run `bcdedit /set
hypervisorlaunchtype off`, and reboot. If you need to enable it
later, run `bcdedit /deletevalue hypervisorlaunchtype`, and reboot.
between VirtualBox and Hyper-V on Windows. To disable Hyper-V, open
command prompt as administrator, run
`bcdedit /set hypervisorlaunchtype off`, and reboot. If you need to
enable it later, run `bcdedit /deletevalue hypervisorlaunchtype`, and
reboot.
#### OSError: [Errno 26] Text file busy
```
```console
default: Traceback (most recent call last):
default: File "/srv/zulip-py3-venv/lib/python3.6/shutil.py", line 426, in _rmtree_safe_fd
@@ -892,26 +898,26 @@ default: OSError: [Errno 26] Text file busy: 'baremetrics'
This error is caused by a
[bug](https://www.virtualbox.org/ticket/19004) in recent versions of
the VirtualBox Guest Additions for Linux on Windows hosts. You can
check the running version of VirtualBox Guest Additions with this
command:
```
```bash
vagrant ssh -- 'modinfo -F version vboxsf'
```
The bug has not been fixed upstream as of this writing, but you may be
able to work around it by downgrading VirtualBox Guest Additions to
6.0.4. To do this, create a `~/.zulip-vagrant-config` file and add
this line:
```
```text
VBOXADD_VERSION 6.0.4
```
Then run these commands (yes, reload is needed twice):
```
```bash
vagrant plugin install vagrant-vbguest
vagrant reload
vagrant reload --provision
@@ -920,14 +926,14 @@ vagrant reload --provision
### Specifying an Ubuntu mirror
Bringing up a development environment for the first time involves
downloading many packages from the Ubuntu archive. The Ubuntu cloud
images use the global mirror `http://archive.ubuntu.com/ubuntu/` by
default, but you may find that you can speed up the download by using
a local mirror closer to your location. To do this, create
`~/.zulip-vagrant-config` and add a line like this, replacing the URL
as appropriate:
```
```text
UBUNTU_MIRROR http://us.archive.ubuntu.com/ubuntu/
```
@@ -937,14 +943,14 @@ If you need to use a proxy server to access the Internet, you will
need to specify the proxy settings before running `vagrant up`.
First, install the Vagrant plugin `vagrant-proxyconf`:
```
```bash
vagrant plugin install vagrant-proxyconf
```
Then create `~/.zulip-vagrant-config` and add the following lines to
it (with the appropriate values in it for your proxy):
```
```text
HTTP_PROXY http://proxy_host:port
HTTPS_PROXY http://proxy_host:port
NO_PROXY localhost,127.0.0.1,.example.com,.zulipdev.com
@@ -953,14 +959,14 @@ NO_PROXY localhost,127.0.0.1,.example.com,.zulipdev.com
For proxies that require authentication, the config will be a bit more
complex, e.g.:
```
```text
HTTP_PROXY http://userName:userPassword@192.168.1.1:8080
HTTPS_PROXY http://userName:userPassword@192.168.1.1:8080
NO_PROXY localhost,127.0.0.1,.example.com,.zulipdev.com
```
You'll want to **double-check** your work for mistakes (a common one
is using `https://` when your proxy expects `http://`). Invalid proxy
configuration can cause confusing/weird exceptions; if you're using a
proxy and get an error, the first thing you should investigate is
whether you entered your proxy configuration correctly.
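For example, a quick way to re-read exactly what Vagrant will see (a trivial check, but it catches most typos):
```bash
cat ~/.zulip-vagrant-config   # confirm the *_PROXY lines match your proxy exactly
```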
@@ -976,9 +982,9 @@ then do a `vagrant reload`.
### Using a different port for Vagrant
You can also change the port on the host machine that Vagrant uses by
adding to your `~/.zulip-vagrant-config` file. E.g. if you set:
```text
HOST_PORT 9971
```
@@ -989,7 +995,7 @@ If you'd like to be able to connect to your development environment from other
machines than the VM host, you can manually set the host IP address in the
`~/.zulip-vagrant-config` file as well. For example, if you set:
```text
HOST_IP_ADDR 0.0.0.0
```
@@ -1007,7 +1013,7 @@ described here are ignored).
Our default Vagrant settings allocate 2 cpus with 2GiB of memory for
the guest, which is sufficient to run everything in the development
environment. If your host system has more CPUs, or you have enough
RAM that you'd like to allocate more than 2GiB to the guest, you can
improve performance of the Zulip development environment by allocating
more resources.
@@ -1015,14 +1021,14 @@ more resources.
To do so, create a `~/.zulip-vagrant-config` file containing the
following lines:
```text
GUEST_CPUS <number of cpus>
GUEST_MEMORY_MB <system memory (in MB)>
```
For example:
```text
GUEST_CPUS 4
GUEST_MEMORY_MB 8192
```

View File

@@ -1,23 +1,24 @@
# Testing the installer
Zulip's install process is tested as part of [its continuous
integration suite][ci], but that only tests the most common
configurations; when making changes to more complicated [installation
options][installer-docs], Zulip provides tooling to repeatedly test
the installation process in a clean environment each time.
[ci]: https://github.com/zulip/zulip/actions/workflows/production-suite.yml?query=branch%3Amain
[installer-docs]: ../production/install.md
## Configuring
Using the test installer framework requires a Linux operating system;
it will not work on WSL, for instance. It requires at least 3G of
RAM, in order to accommodate the VMs and the steps which build the
release assets.
To begin, install the LXC toolchain:
```bash
sudo apt-get install lxc lxc-utils
```
@@ -32,7 +33,8 @@ You only need to do this step once per time you work on a set of
changes, to refresh the package that the installer uses. The installer
doesn't work cleanly out of a source checkout; it wants a release
checkout, so we build a tarball of one of those first:
```bash
./tools/build-release-tarball test-installer
```
@@ -43,10 +45,11 @@ as the last step; for example,
Next, unpack that file into a local directory; we will make any
changes we want in our source checkout and copy them into this
directory. The test installer needs the release directory to be named
`zulip-server`, so we rename it and move it appropriately. In the
first line, you'll need to substitute the actual path that you got for
the tarball, above:
```bash
tar xzf /tmp/tmp.fepqqNBWxp/zulip-server-test-installer.tar.gz
mkdir zulip-test-installer
mv zulip-server-test-installer zulip-test-installer/zulip-server
@@ -65,7 +68,8 @@ into the installer.
For example, to test an install onto Ubuntu 20.04 "Focal", we might
call:
```bash
sudo ./tools/test-install/install \
-r focal \
./zulip-test-installer/ \
@@ -82,7 +86,8 @@ take a while.
Regardless of whether the install succeeds or fails, it will stay running
so you can inspect it. You can see all of the containers which are
running, and their randomly-generated names, by running:
```bash
sudo lxc-ls -f
```
@@ -90,7 +95,8 @@ sudo lxc-ls -f
After using `lxc-ls` to list containers, you can choose one of them
and connect to its terminal:
```bash
sudo lxc-attach --clear-env -n zulip-install-focal-PUvff
```
@@ -98,24 +104,25 @@ sudo lxc-attach --clear-env -n zulip-install-focal-PUvff
To destroy all containers (but leave the base containers, which speed
up the initial install):
```bash
sudo ./tools/test-install/destroy-all -f
```
To destroy just one container:
```bash
sudo lxc-destroy -f -n zulip-install-focal-PUvff
```
### Iterating on the installer
Iterate on the installer by making changes to your source tree,
copying them into the release directory, and re-running the installer,
which will start up a new container. Here, we update just the
`scripts` and `puppet` directories of the release directory:
```bash
rsync -az scripts puppet zulip-test-installer/zulip-server/
sudo ./tools/test-install/install \
@@ -124,4 +131,3 @@ sudo ./tools/test-install/install \
--hostname=zulip.example.net \
--email=username@example.net
```

View File

@@ -1,10 +1,9 @@
# Using the development environment
This page describes the basic edit/refresh workflows for working with
the Zulip development environment. Generally, the development
environment will automatically update as soon as you save changes
using your editor. Details for work on the [server](#server),
[webapp](#web), and [mobile apps](#mobile) are below.
If you're working on authentication methods or need to use the [Zulip
@@ -13,79 +12,80 @@ the development environment][authentication-dev-server].
## Common
- Zulip's `main` branch moves quickly, and you should rebase
constantly with e.g.
`git fetch upstream; git rebase upstream/main` to avoid developing
on an old version of the Zulip codebase (leading to unnecessary
merge conflicts). See the example at the end of this list.
- Remember to run `tools/provision` to update your development
environment after switching branches; it will run in under a second
if no changes are required.
- After making changes, you'll often want to run the
[linters](../testing/linters.md) and relevant [test
suites](../testing/testing.md). Consider using our [Git pre-commit
hook](../git/zulip-tools.html#set-up-git-repo-script) to
automatically lint whenever you make a commit.
- All of our test suites are designed to support quickly testing just
a single file or test case, which you should take advantage of to
save time.
- Many useful development tools, including tools for rebuilding the
database with different test data, are documented in-app at
`https://localhost:9991/devtools`.
- If you want to restore your development environment's database to a
pristine state, you can use `./tools/rebuild-dev-database`.
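For example, a typical cycle when resuming work, combining the rebase and provisioning steps above (a sketch; it assumes the usual `upstream` remote):
```bash
git fetch upstream
git rebase upstream/main
tools/provision   # near-instant no-op if nothing changed
```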
## Server
- For changes that don't affect the database model, the Zulip
development environment will automatically detect changes and
restart:
- The main Django/Tornado server processes are run on top of
Django's [manage.py runserver][django-runserver], which will
automatically restart them when you save changes to Python code
they use. You can watch this happen in the `run-dev.py` console
to make sure the backend has reloaded.
- The Python queue workers will also automatically restart when you
save changes, as long as they haven't crashed (which can happen if
they reloaded into a version with a syntax error).
- If you change the database schema (`zerver/models.py`), you'll need
to use the [Django migrations
process](../subsystems/schema-migrations.md); see also the [new
feature tutorial][new-feature-tutorial] for an example.
- While testing server changes, it's helpful to watch the `run-dev.py`
console output, which will show tracebacks for any 500 errors your
Zulip development server encounters (which are probably caused by
bugs in your code).
- To manually query Zulip's database interactively, use
`./manage.py shell` or `manage.py dbshell` (see the example after this list).
- The database(s) used for the automated tests are independent from
the one you use for manual testing in the UI, so changes you make to
the database manually will never affect the automated tests.
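As a rough sketch of the schema-change and manual-query commands mentioned above (see the linked migrations guide for Zulip's full process):
```bash
./manage.py makemigrations   # after editing zerver/models.py
./manage.py migrate          # apply the new migration to your dev database
./manage.py shell            # interactive ORM session for manual queries
```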
## Web
- Once the development server (`run-dev.py`) is running, you can visit
<http://localhost:9991/> in your browser.
- By default, the development server homepage just shows a list of the
users that exist on the server and you can log in as any of them by
just clicking on a user.
- This setup saves time for the common case where you want to test
something other than the login process.
- You can test the login or registration process by clicking the
links for the normal login page.
- Most changes will take effect automatically. Details:
- If you change CSS files, your changes will appear immediately via
webpack hot module replacement.
- If you change JavaScript code (`static/js`) or Handlebars
templates (`static/templates`), the browser window will be
reloaded automatically.
- For Jinja2 backend templates (`templates/*`), you'll need to reload
the browser window to see your changes.
- Any JavaScript exceptions encountered while using the webapp in a
development environment will be displayed as a large notice, so you
don't need to watch the JavaScript console for exceptions.
- Both Chrome and Firefox have great debuggers, inspectors, and
profilers in their built-in developer tools.
- `debug.js` has some occasionally useful JavaScript profiling code.
## Mobile

View File

@@ -4,7 +4,7 @@ This document explains the system for documenting [Zulip's REST
API](https://zulip.com/api/rest).
Zulip's API documentation is an essential resource both for users and
for the developers of Zulip's mobile and terminal apps. Our vision is
for the documentation to be sufficiently good that developers of
Zulip's apps should never need to look at the server's implementation
to answer questions about the API's semantics.
@@ -15,44 +15,44 @@ and remains so as Zulip's API evolves.
In particular, the top goal for this system is that all mistakes in
verifiable content (i.e. not the English explanations) should cause
the Zulip test suite to fail. This is incredibly important, because
once you notice one error in API documentation, you no longer trust it
to be correct, which ends up wasting the time of its users.
Since it's very difficult to not make little mistakes when writing any
untested code, the only good solution to this is a way to test
the documentation. We found dozens of errors in the process of adding
the validation Zulip has today.
Our API documentation is defined by a few sets of files:
- Most data describing API endpoints and examples is stored in our
[OpenAPI configuration](../documentation/openapi.md) at
`zerver/openapi/zulip.yaml`.
- The top-level templates live under `templates/zerver/api/*`, and are
written using the Markdown framework that powers our [user
docs](../documentation/user.md), with some special extensions for
rendering nice code blocks and example responses. We expect to
eventually remove most of these files where it is possible to
fully generate the documentation from the OpenAPI files.
- The text for the Python examples comes from a test suite for the
Python API documentation (`zerver/openapi/python_examples.py`; run via
`tools/test-api`). The `generate_code_example` macro will magically
read content from that test suite and render it as the code example.
This structure ensures that Zulip's API documentation is robust to a
wide range of possible typos and other bugs in the API
documentation.
- The JavaScript examples are similarly generated and tested using
`zerver/openapi/javascript_examples.js`.
- The cURL examples are generated and tested using
`zerver/openapi/curl_param_value_generators.py`.
- The REST API index
(`templates/zerver/help/include/rest-endpoints.md`) in the broader
/api left sidebar (`templates/zerver/api/sidebar_index.md`).
- We have an extensive set of tests designed to validate that the data
in this file is correct, `zerver/tests/test_openapi.py` compares
every endpoint's accepted parameters in `views` code with those
declared in `zulip.yaml`. And the [backend test
suite](../testing/testing-with-django.md) checks that every API
response served during our extensive backend test suite matches
the declared OpenAPI schema for that endpoint.
@@ -74,10 +74,10 @@ We highly recommend looking at those resources while reading this page.
If you look at the documentation for existing endpoints, you'll notice
that a typical endpoint's documentation is divided into four sections:
- The top-level **Description**
- **Usage examples**
- **Arguments**
- **Responses**
The rest of this guide describes how each of these sections works.
@@ -94,14 +94,14 @@ relevant feature in `/help/`.
### Usage examples
We display usage examples in three languages: Python, JavaScript and
`curl`; we may add more in the future. Every endpoint should have
Python and `curl` documentation; `JavaScript` is optional as we don't
consider that API library to be fully supported. The examples are
defined using a special Markdown extension
(`zerver/openapi/markdown_extension.py`). To use this extension, one
writes a Markdown file block that looks something like this:
```md
{start_tabs}
{tab|python}
@@ -121,10 +121,10 @@ writes a Markdown file block that looks something like this:
For the Python examples, you'll write the example in
`zerver/openapi/python_examples.py`, and it'll be run and verified
automatically in Zulip's automated test suite. The code there will
look something like this:
```python
@openapi_test_function('/messages/render:post')
def render_message(client: Client) -> None:
# {code_example|start}
@@ -139,14 +139,14 @@ def render_message(client: Client) -> None:
```
This is an actual Python function which will be run as part of the
`tools/test-api` test suite. The `validate_against_openapi_schema`
function will verify that the result of that request is as defined in
the examples in `zerver/openapi/zulip.yaml`.
To run as part of the testsuite, the `render_message` function needs
to be called from `test_messages` (or one of the other functions at
the bottom of the file). The final function, `test_the_api`, is what
actually runs the tests. Tests with the `openapi_test_function`
decorator that are not called will fail tests, as will new endpoints
that are not covered by an `openapi_test_function`-decorated test.
@@ -164,12 +164,12 @@ wherever that string appears in the API documentation.
### Parameters
We have a separate Markdown extension to document the parameters that
an API endpoint supports. You'll see this in files like
`templates/zerver/api/render-message.md` via the following Markdown
directive (implemented in
`zerver/lib/markdown/api_arguments_table_generator.py`):
```md
{generate_api_arguments_table|zulip.yaml|/messages/render:post}
```
@@ -186,24 +186,24 @@ You can use the following Markdown directive to render the fixtures
defined in the OpenAPI `zulip.yaml` for a given endpoint and status
code:
```md
{generate_code_example|/messages/render:post|fixture(200)}
```
## Step by step guide
This section offers a step-by-step process for adding documentation
for a new API endpoint. It assumes you've read and understood the
above.
1. Start by adding [OpenAPI format](../documentation/openapi.md)
data to `zerver/openapi/zulip.yaml` for the endpoint. If you
copy-paste (which is helpful to get the indentation structure
right), be sure to update all the content that you copied to
correctly describe your endpoint!
In order to do this, you need to figure out how the endpoint in
question works by reading the code! To understand how arguments
are specified in Zulip backend endpoints, read our [REST API
tutorial][rest-api-tutorial], paying special attention to the
details of `REQ` and `has_request_variables`.
@@ -215,13 +215,14 @@ above.
declared using `REQ`.
You can check your formatting using these helpful tools; a combined example follows this list.
- `tools/check-openapi` will verify the syntax of `zerver/openapi/zulip.yaml`.
- `tools/test-backend zerver/tests/test_openapi.py`; this test compares
your documentation against the code and can find many common
mistakes in how arguments are declared.
- `test-backend`: The full Zulip backend test suite will fail if
any actual API responses generated by the tests don't match your
defined OpenAPI schema. Use `test-backend --rerun` for a fast
edit/refresh cycle when debugging.
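A combined formatting check might look like this (a sketch; the commands are exactly those named above):
```bash
./tools/check-openapi                               # verify zulip.yaml syntax
./tools/test-backend zerver/tests/test_openapi.py   # compare the docs against the views code
./tools/test-backend --rerun                        # fast edit/refresh cycle while debugging
```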
[rest-api-tutorial]: ../tutorials/writing-views.html#writing-api-rest-endpoints
@@ -229,7 +230,7 @@ above.
1. Add a function for the endpoint you'd like to document to
`zerver/openapi/python_examples.py`, decorated with
`@openapi_test_function`. `render_message` is a good example to
follow. There are generally two key pieces to your test: (1) doing
an API query and (2) verifying its result has the expected format
using `validate_against_openapi_schema`.
@@ -237,7 +238,7 @@ above.
bindings don't have a dedicated method for a specific API call,
you may either use `client.call_endpoint` or add a dedicated
function to the [zulip PyPI
package](https://github.com/zulip/python-zulip-api/tree/main/zulip).
Ultimately, the goal is for every endpoint to be documented the
latter way, but it's useful to be able to write working
documentation for an endpoint that isn't supported by
@@ -248,11 +249,11 @@ above.
function will be called when running `test-api`.
1. Capture the JSON response returned by the API call (the test
"fixture"). The easiest way to do this is add an appropriate print
"fixture"). The easiest way to do this is add an appropriate print
statement (usually `json.dumps(result, indent=4, sort_keys=True)`),
and then run `tools/test-api`. You can also use
<https://jsonformatter.curiousconcept.com/> to format the JSON
fixtures. Add the fixture to the `example` subsection of the
`responses` section for the endpoint in
`zerver/openapi/zulip.yaml`.
@@ -266,7 +267,7 @@ above.
code example on our `/api` page.
1. Finally, write the Markdown file for your API endpoint under
`templates/zerver/api/`. This is usually pretty easy to template
off existing endpoints; but refer to the system explanations above
for details.
@@ -275,7 +276,7 @@ above.
1. Test your endpoint, pretending to be a new user in a hurry, by
visiting it via the links on `http://localhost:9991/api` (the API
docs are rendered from the Markdown source files on page load, so
just reload to see an updated version as you edit). You should
make sure that copy-pasting the code in your examples works, and
post an example of the output in the pull request.
@@ -285,22 +286,22 @@ above.
in `zerver/openapi/zulip.yaml`, which mentions the API feature level
at which they were added.
[javascript-examples]: https://github.com/zulip/zulip-js/tree/main/examples
## Why a custom system?
Given that our documentation is written in large part using the
OpenAPI format, why maintain a custom Markdown system for displaying
it? There are several major benefits to this system:
- It is extremely common for API documentation to become out of date
as an API evolves; this automated testing system helps make it
possible for Zulip to maintain accurate documentation without a lot
of manual management.
- Every Zulip server can host correct API documentation for its
version, with the key variables (like the Zulip server URL) already
pre-substituted for the user.
- We're able to share implementation language and visual styling with
our Help Center, which is especially useful for the extensive
non-REST API documentation pages (e.g. our bot framework).

View File

@@ -10,23 +10,23 @@ integrations).
Usually, this involves a few steps:
- Add text explaining all of the steps required to set up the
integration, including what URLs to use, etc. See
[Writing guidelines](#writing-guidelines) for detailed writing guidelines.
Zulip's pre-defined Markdown macros can be used for some of these steps.
See [Markdown macros](#markdown-macros) for further details.
- Make sure you've added your integration to
`zerver/lib/integrations.py` in both the `WEBHOOK_INTEGRATIONS`
section (or `INTEGRATIONS` if not a webhook), and the
`DOC_SCREENSHOT_CONFIG` sections. These registries configure your
integration to appear on the `/integrations` page and make it
possible to automatically generate the screenshot of a sample
message (which is important for the screenshots to be updated as
Zulip's design changes).
- You'll need to add an SVG graphic
of your integration's logo under the
`static/images/integrations/logos/<name>.svg`, where `<name>` is the
name of the integration, all in lower case; you can usually find them in the
@@ -40,13 +40,13 @@ Usually, this involves a few steps:
If you cannot find an SVG graphic of the logo, please find and include a PNG
image of the logo instead.
- Finally, generate a message sent by the integration and take a screenshot of
the message to provide an example message in the documentation.
If your new integration is an incoming webhook integration, you can generate
the screenshot using `tools/generate-integration-docs-screenshot`:
```bash
./tools/generate-integration-docs-screenshot --integration integrationname
```
@@ -71,82 +71,82 @@ always create a new macro by adding a new file to that folder.
Here are a few common macros used to document Zulip's integrations:
- `{!create-stream.md!}` macro - Recommends that users create a dedicated
stream for a given integration. Usually the first step is setting up an
integration or incoming webhook. For an example rendering, see **Step 1** of
[the docs for Zulip's GitHub integration][github-integration].
- `{!create-bot-construct-url-indented.md!}` macro - Instructs users to create a bot
for a given integration and construct a webhook URL using the bot API key
and stream name. The URL is generated automatically for every incoming webhook
by using attributes in the `WebhookIntegration` class in
[zerver/lib/integrations.py][integrations-file].
This macro is usually used right after `{!create-stream!}`. For an example
rendering, see **Step 2** of [the docs for Zulip's GitHub integration][github-integration].
**Note:** If special configuration is
required to set up the URL and you can't use this macro, be sure to use the
`{{ api_url }}` template variable, so that your integration
documentation will provide the correct URL for whatever server it is
deployed on. If special configuration is required to set the `SITE`
variable, you should document that too.
- `{!append-stream-name.md!}` macro - Recommends appending `&stream=stream_name`
to a URL in cases where supplying a stream name in the URL is optional.
Supplying a stream name is optional for most Zulip integrations. If you use
`{!create-bot-construct-url-indented.md!}`, this macro need not be used.
- `{!append-topic.md!}` macro - Recommends appending `&topic=my_topic` to a URL
to supply a custom topic for webhook notification messages. Supplying a custom
topic is optional for most Zulip integrations. If you use
`{!create-bot-construct-url-indented.md!}`, this macro need not be used.
- `{!congrats.md!}` macro - Inserts congratulatory lines signifying the
successful setup of a given integration. This macro is usually used at
the end of the documentation, right before the sample message screenshot.
For an example rendering, see the end of
[the docs for Zulip's GitHub integration][github-integration].
- `{!download-python-bindings.md!}` macro - Links to Zulip's
[API page](https://zulip.com/api/) to download and install Zulip's
API bindings. This macro is usually used in non-webhook integration docs under
`templates/zerver/integrations/<integration_name>.md`. For an example
rendering, see **Step 2** of
[the docs for Zulip's Codebase integration][codebase].
- `{!change-zulip-config-file.md!}` macro - Instructs users to create a bot and
specify said bot's credentials in the config file for a given non-webhook
integration. This macro is usually used in non-webhook integration docs under
`templates/zerver/integrations/<integration_name>.md`. For an example
rendering, see **Step 4** of
[the docs for Zulip's Codebase integration][codebase].
- `{!git-append-branches.md!}` and `{!git-webhook-url-with-branches.md!}` -
These two macros explain how to specify a list of branches in the webhook URL
to filter notifications in our Git-related webhooks. For an example rendering,
see the last paragraph of **Step 2** in
[the docs for Zulip's GitHub integration][github-integration].
- `{!webhook-url.md!}` - Used internally by `{!create-bot-construct-url-indented.md!}`
to generate the webhook URL.
- `{!zulip-config.md!}` - Used internally by `{!change-zulip-config-file.md!}`
to specify the lines in the config file for a non-webhook integration.
- `{!webhook-url-with-bot-email.md!}` - Used in certain non-webhook integrations
to generate URLs of the form:
```text
https://bot_email:bot_api_key@yourZulipDomain.zulipchat.com/api/v1/external/beanstalk
```
For an example rendering, see
[Zulip's Beanstalk integration](https://zulip.com/integrations/doc/beanstalk).
[github-integration]: https://zulip.com/integrations/doc/github
[codebase]: https://zulip.com/integrations/doc/codebase
[beanstalk]: https://zulip.com/integrations/doc/beanstalk
[integrations-file]: https://github.com/zulip/zulip/blob/main/zerver/lib/integrations.py
## Writing guidelines
@@ -189,7 +189,6 @@ concrete guidelines.
- Follow the organization and wording of existing docs as much as possible.
### Guidelines for specific steps
Most doc files should start with a generic sentence about the

View File

@@ -1,14 +1,14 @@
# OpenAPI configuration
[OpenAPI][openapi-spec] is a popular format for describing an API. An
OpenAPI file can be used by various tools to generate documentation
for the API or even basic client-side bindings for dozens of
programming languages.
Zulip's API is described in `zerver/openapi/zulip.yaml`. Our aim is
for that file to fully describe every endpoint in the Zulip API, and
for the Zulip test suite to fail should the API ever change without a
corresponding adjustment to the documentation. In particular,
essentially all content in Zulip's [REST API
documentation](../documentation/api.md) is generated from our OpenAPI
file.
@@ -40,7 +40,8 @@ types of authentication, and configure other settings. Once defined,
information in this section rarely changes.
For example, the `swagger` and `info` objects look like this:
```yaml
# Basic Swagger UI info
openapi: 3.0.1
info:
@@ -79,7 +80,7 @@ expects a GET request with one
Basic authentication, and returns a JSON response containing `msg`,
`result`, and `presence` values.
```yaml
/users/{user}/presence:
get:
description: Get presence data for another user.
@@ -116,10 +117,10 @@ The
[Definitions Object](https://swagger.io/specification/#definitionsObject)
contains schemas referenced by other objects. For example,
`MessageResponse`, the response from the `/messages` endpoint,
contains three required parameters. Two are strings, and one is an
integer.
```yaml
MessageResponse:
type: object
required:
@@ -144,14 +145,14 @@ You can find more examples, including GET requests and nested objects, in
We're collecting decisions we've made on how our Swagger YAML files
should be organized here:
- Use shared definitions and YAML anchors to avoid duplicating content
where possible.
## Tips for working with YAML:
You can edit YAML files in any text editor. Indentation defines
blocks, so whitespace is important (as it is in Python.) TAB
characters are not permitted. If your editor has an option to replace
tabs with spaces, this is helpful.
You can also use the
@@ -169,21 +170,21 @@ correct.
### Formatting help:
- Comments begin with a # character.
- Descriptions do not need to be in quotes, and may use common
Markdown format options like inline code \` (backtick) and `#`
headings.
- A single `|` (pipe) character begins a multi-line description on the
next line. Single spaced lines (one newline at the end of each) are
joined. Use an extra blank line for a paragraph break. We prefer
to use this format for all descriptions because it doesn't require
extra effort to expand.
### Examples:
```yaml
Description: |
This description has multiple lines.
Sometimes descriptions can go on for

View File

@@ -2,27 +2,27 @@
Zulip has three major documentation systems:
- Developer and sysadmin documentation: Documentation for people
actually interacting with the Zulip codebase (either by developing
it or installing it), and written in Markdown.
- Core website documentation: Complete webpages for complex topics,
written in HTML, JavaScript, and CSS (using the Django templating
system). These roughly correspond to the documentation someone
might look at when deciding whether to use Zulip. We don't expect
to ever have more than about 10 pages written using this system.
- User-facing documentation: Our scalable system for documenting
Zulip's huge collection of specific features without a lot of
overhead or duplicated code/syntax, written in Markdown. We have
several hundred pages written using this system. There are 3
branches of this documentation:
- User documentation (with a target audience of individual Zulip
users),
- Integrations documentation (with a target audience of IT folks
setting up integrations), and
- API documentation (with a target audience of developers writing
code to extend Zulip).
These three systems are documented in detail.
@@ -30,10 +30,10 @@ These three systems are documented in detail.
What you are reading right now is part of the collection of
documentation targeted at developers and people running their own
Zulip servers. These docs are written in
[CommonMark Markdown](https://commonmark.org/) with a small bit of rST.
We've chosen Markdown because it is
[easy to write](https://commonmark.org/help/). The source for Zulip's
developer documentation is at `docs/` in the Zulip Git repository, and
they are served in production at
[zulip.readthedocs.io](https://zulip.readthedocs.io/en/latest/).
@@ -43,12 +43,12 @@ your changes), the dependencies are automatically installed as part of
Zulip development environment provisioning, and you can build the
documentation using:
```bash
./tools/build-docs
```
and then opening `http://127.0.0.1:9991/docs/index.html` in your
browser. The raw files are available at
`file:///path/to/zulip/docs/_build/html/index.html` in your browser
(so you can also use e.g. `firefox docs/_build/html/index.html` from
the root of your Zulip checkout).
@@ -73,10 +73,10 @@ dependencies).
Zulip has around 10 HTML documentation pages under `templates/zerver`
for specific major topics, like the features list, client apps,
integrations, hotkeys, API bindings, etc. These documents often have
somewhat complex HTML and JavaScript, without a great deal of common
patterns between them other than inheriting from the `portico.html`
template. We generally avoid adding new pages to this collection
unless there's a good reason, but we don't intend to migrate them,
either, since this system gives us the flexibility to express these
important elements of the product clearly.
@@ -91,16 +91,16 @@ to do the things one does a lot in each type of documentation.
### General user documentation
Zulip's [help center](https://zulip.com/help/) documentation is
designed to explain how the product works to end users. We aim for
this to be clear, concise, correct, and readable to nontechnical
audiences where possible. See our guide on [writing user
documentation](user.md).
### Integrations documentation
Zulip's [integrations documentation](https://zulip.com/integrations)
is user-facing documentation explaining to end users how to setup each
of Zulip's more than 100 integrations. There is a detailed [guide on
documenting integrations](integrations.md), including style guidelines
to ensure that the documentation is high quality and consistent.
@@ -111,7 +111,7 @@ guide](https://zulip.com/api/integrations-overview).
Zulip's [API documentation](https://zulip.com/api/) is intended to make
it easy for a technical user to write automation tools that interact
with Zulip. This documentation also serves as our main mechanism for
Zulip server developers to communicate with client developers about
how the Zulip API works.
@@ -123,31 +123,31 @@ details on how to contribute to this documentation.
Zulip has several automated test suites that we run in CI and
recommend running locally when making significant edits (a combined example follows this list):
- `tools/lint` catches a number of common mistakes, and we highly
recommend
[using our linter pre-commit hook](../git/zulip-tools.html#set-up-git-repo-script).
See the [main linter doc](../testing/linters.md) for more details.
- The ReadTheDocs docs are built and the links tested by
`tools/test-documentation`, which runs `build-docs` and then checks
all the links.
There's an exclude list for the link testing at this horrible path:
`tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py`,
which is relevant for flaky links.
- The API docs are tested by `tools/test-api`, which does some basic
payload verification. Note that this test does not check for broken
links (those are checked by `test-help-documentation`).
- `tools/test-help-documentation` checks `/help/`, `/api/`,
`/integrations/`, and the core website ("portico") documentation for
broken links. Note that the "portico" documentation check has a
manually maintained whitelist of pages, so if you add a new page to
this site, you will need to edit `PorticoDocumentationSpider` to add it.
- `tools/test-backend test_docs.py` tests various internal details of
the variable substitution logic, as well as rendering. It's
essential when editing the documentation framework, but not
something you'll usually need to interact with when editing
documentation.
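For instance, a full local pass over these suites might look like this (a sketch; the commands are those named above):
```bash
./tools/lint                        # linters and common-mistake checks
./tools/test-documentation          # build ReadTheDocs docs and check links
./tools/test-api                    # basic API payload verification
./tools/test-help-documentation     # /help, /api, /integrations, portico links
./tools/test-backend test_docs.py   # variable substitution and rendering details
```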

View File

@@ -8,12 +8,13 @@ There are two types of documents: articles about specific features, and a
handful of longer guides.
The feature articles serve a few different purposes:
- Feature discovery, for someone browsing the `/help` page, and looking at
the set of titles.
- Public documentation of our featureset, for someone googling "can zulip do .."
- Canned responses to support questions; if someone emails a Zulip admin
asking "how do I change my name", they can reply with a link to the doc.
- Feature explanations for new Zulip users and admins, especially for
organization settings.
This system is designed to make writing and maintaining such documentation
@@ -29,7 +30,7 @@ ReadTheDocs, since Zulip supports running a server completely disconnected
from the Internet, and we'd like the documentation to be available in that
environment.
The source for this user documentation is the Markdown files under
`templates/zerver/help/` in the
[main Zulip server repository](https://github.com/zulip/zulip). The file
`foo.md` is automatically rendered by the `render_markdown_path` function in
@@ -40,7 +41,7 @@ are usually linked from `static/images/help/`.
This means that you can contribute to the Zulip user documentation by just
adding to or editing the collection of Markdown files under
`templates/zerver/help`. If you have the Zulip development environment
set up, you simply need to reload your browser on
`http://localhost:9991/help/foo` to see the latest version of `foo.md`
rendered.
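For example (a sketch; `my-new-feature` is a hypothetical article name):
```bash
$EDITOR templates/zerver/help/my-new-feature.md
# then reload http://localhost:9991/help/my-new-feature in your browser
```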
@@ -52,22 +53,22 @@ experience with.
Tips for adding a new article:
- Find an existing article in the same section of the help documentation,
and copy the format, wording, style, etc as closely as you can.
- If the feature exists in other team chat products, check out their
documentation for inspiration.
- Fewer words is better than more. Many Zulip users have English as a second
language.
- Try to put yourself in the shoes of a new Zulip user. What would you want
to know?
- The goal of user-facing documentation is not to be comprehensive. The goal
is to give the right bits of information for the intended audience.
- Real estate in the left sidebar is somewhat precious. Minor features
should rarely get their own article.
An anti-pattern is trying to make up for bad UX by adding user
@@ -98,24 +99,24 @@ allows .." rather than "we also allow ..". `You` is ok and used liberally.
Zulip's Markdown processor allows you to include several special features in
your documentation to help improve its readability:
- Since raw HTML is supported in Markdown, you can include arbitrary
HTML/CSS in your documentation as needed.
- Code blocks allow you to highlight syntax, similar to Zulip's own Markdown.
- Anchor tags can be used to link to headers in other documents.
- [Images](#images) of Zulip UI can be added to documentation.
- Inline [icons](#icons) used to refer to features in the Zulip UI.
- You can utilize [macros](#macros) to limit repeated content in the
documentation.
- You can create special highlight warning blocks using
[tips and warnings](#tips-and-warnings).
- You can create tabs using [Markdown tab switcher](#tab-switcher).
### Images
Images and screenshots should be included in user documentation only
if they will help guide the user in how to do something (e.g. if the
image will make it much clearer which element on the page the user
should interact with). For instance, an image of an element should
not be included if the element the user needs to interact with is the
only thing on the page, but images can be included to show the end
result of an interaction with the UI.
@@ -126,8 +127,8 @@ instructions for something simple look long and complicated.
When taking screenshots, the image should never include the whole
Zulip browser window in a screenshot; instead, it should only show
relevant parts of the app. In addition, the screenshot should always
come _after_ the text that describes it, never before.
Images are often a part of a numbered step and must be indented four
spaces to be formatted correctly.
@@ -141,40 +142,40 @@ base class `icon-vector` and have dropped support for it. We now only support
icons from [FontAwesome](https://fontawesome.com/v4.7.0/) (version 4.7.0) which
make use of `fa` as a base class.
- cog (<i class="fa fa-cog"></i>) icon —
`cog (<i class="fa fa-cog"></i>) icon`
- down chevron (<i class="fa fa-chevron-down"></i>) icon —
`down chevron (<i class="fa fa-chevron-down"></i>) icon`
- eye (<i class="fa fa-eye"></i>) icon —
`eye (<i class="fa fa-eye"></i>) icon`
- file (<i class="fa fa-file-code-o"></i>) icon —
`file (<i class="fa fa-file-code-o"></i>) icon`
- filled star (<i class="fa fa-star"></i>) icon —
`filled star (<i class="fa fa-star"></i>) icon`
- formatting (<i class="fa fa-font"></i>) icon —
`formatting (<i class="fa fa-font"></i>) icon`
- menu (<i class="fa fa-bars"></i>) icon —
`menu (<i class="fa fa-bars"></i>) icon`
- overflow ( <i class="fa fa-ellipsis-v"></i> ) icon —
`overflow ( <i class="fa fa-ellipsis-v"></i> ) icon`
- paperclip (<i class="fa fa-paperclip"></i>) icon —
`paperclip (<i class="fa fa-paperclip"></i>) icon`
- pencil (<i class="fa fa-pencil"></i>) icon —
`pencil (<i class="fa fa-pencil"></i>) icon`
- pencil and paper (<i class="fa fa-pencil-square-o"></i>) icon —
`pencil and paper (<i class="fa fa-pencil-square-o"></i>) icon`
- plus (<i class="fa fa-plus"></i>) icon —
`plus (<i class="fa fa-plus"></i>) icon`
- smiley face (<i class="fa fa-smile-o"></i>) icon —
`smiley face (<i class="fa fa-smile-o"></i>) icon`
- star (<i class="fa fa-star-o"></i>) icon —
`star (<i class="fa fa-star-o"></i>) icon`
- trash (<i class="fa fa-trash-o"></i>) icon —
`trash (<i class="fa fa-trash-o"></i>) icon`
- video-camera (<i class="fa fa-video-camera"></i>) icon —
`video-camera (<i class="fa fa-video-camera"></i>) icon`
- x (<i class="fa fa-times"></i>) icon —
`x (<i class="fa fa-times"></i>) icon`
### Macros
@@ -186,22 +187,22 @@ The source for macros is the Markdown files under
`templates/zerver/help/include` in the
[main Zulip server repository](https://github.com/zulip/zulip).
- **Administrator only feature** `{!admin-only.md!}`: Notes that the feature
is only available to organization administrators.
- **Message actions** `{!message-actions.md!}`: First step to navigating to
the on-hover message actions.
- **Message actions menu** `{!message-actions-menu.md!}`: Navigate to the
message actions menu.
- **Save changes** `{!save-changes.md!}`: Save changes after modifying
organization settings.
- **Stream actions** `{!stream-actions.md!}`: Navigate to the stream actions
menu from the left sidebar.
- **Start composing** `{!start-composing.md!}`: Open the compose box.
### Tips and warnings
@@ -210,7 +211,7 @@ instructions. For instance, it may address a common problem users may
encounter while following the instructions, or point to an option for power
users.
```md
!!! tip ""
If you've forgotten your password, see the
[Change your password](/help/change-your-password) page for
@@ -220,7 +221,7 @@ users.
A **warning** is a note on what happens when there is some kind of problem.
Tips are more common than warnings.
```md
!!! warn ""
**Note:** If you attempt to input a nonexistent stream name, an error
message will appear.
@@ -230,27 +231,29 @@ All tips/warnings should appear inside tip/warning blocks. There
should be only one tip/warning inside each block, and they usually
should be formatted as a continuation of a numbered step.
### Tab switcher
Our Markdown processor supports easily creating a tab switcher widget
designed to easily show the instructions for different
[platforms](https://zulip.com/help/logging-out) in user docs,
languages in API docs, etc. To create a tab switcher, write:
```md
{start_tabs}
{tab|desktop-web}
# First tab's content
{tab|ios}
# Second tab's content
{tab|android}
# Third tab's content
{end_tabs}
```
The tab identifiers (e.g. `desktop-web` above) and their mappings to
the tabs' labels are declared in
[zerver/lib/markdown/tabbed_sections.py][tabbed-sections-code].
[tabbed-sections-code]: https://github.com/zulip/zulip/blob/main/zerver/lib/markdown/tabbed_sections.py
This widget can also be used just to create a nice box around a set of
instructions

View File

@@ -5,111 +5,111 @@ See also [fixing commits][fix-commit]
## Common commands
- add
- `git add foo.py`
- checkout
- `git checkout -b new-branch-name`
- `git checkout main`
- `git checkout old-branch-name`
- commit
- `git commit -m "topic: Commit message title."`
- `git commit --amend`: Modify the previous commit.
- `git commit -m "topic: Commit message title."`
- `git commit --amend`: Modify the previous commit.
- config
- `git config --global core.editor nano`
- `git config --global core.symlinks true`
- diff
- `git diff`
- `git diff --cached`
- `git diff HEAD~2..`
- fetch
- `git fetch origin`
- `git fetch upstream`
- grep
- `git grep update_unread_counts`
- log
- `git log`
- pull
- `git pull --rebase`: **Use this**. Zulip uses a [rebase oriented workflow][git-overview].
- `git pull` (with no options): Will either create a merge commit
(which you don't want) or do the same thing as `git pull --rebase`,
depending on [whether you've configured Git properly][git-config-clone]
- push
- `git push origin +branch-name`
- `git push origin +branch-name`
- rebase
- `git rebase -i HEAD~3`
- `git rebase -i master`
- `git rebase upstream/master`
- `git rebase -i HEAD~3`
- `git rebase -i main`
- `git rebase upstream/main`
- reflog
- `git reflog | head -10`
- `git reflog | head -10`
- remote
- `git remote -v`
- `git remote -v`
- reset
- `git reset HEAD~2`
- `git reset HEAD~2`
- rm
- `git rm oops.txt`
- `git rm oops.txt`
- show
- `git show HEAD`
- `git show HEAD~~~`
- `git show master`
- `git show HEAD`
- `git show HEAD~~~`
- `git show main`
- status
- `git status`
- `git status`
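For orientation, here is a minimal sketch of how a few of these commands typically fit together when updating a Zulip feature branch (the branch name `issue-123` is only a placeholder):

```console
$ git fetch upstream
$ git checkout issue-123
$ git rebase upstream/main
$ git push origin +issue-123
```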
## Detailed cheat sheet
- add
  - `git add foo.py`: add `foo.py` to the staging area
  - `git add foo.py bar.py`: add `foo.py` AND `bar.py` to the staging area
  - `git add -u`: Adds all tracked files to the staging area.
- checkout
  - `git checkout -b new-branch-name`: create branch `new-branch-name` and switch to/check out that new branch
  - `git checkout main`: switch to your `main` branch
  - `git checkout old-branch-name`: switch to an existing branch `old-branch-name`
- commit
  - `git commit -m "commit message"`: commit with a one-line message (a
    multiline commit message is recommended, however)
  - `git commit`: Opens your default text editor to write a commit message.
  - `git commit --amend`: change the last commit. Read more [here][fix-commit]
- config
  - `git config --global core.editor nano`: set core editor to `nano` (you can set this to `vim` or others)
  - `git config --global core.symlinks true`: allow symbolic links
- diff
  - `git diff`: display the changes you have made to all files
  - `git diff --cached`: display the changes you have made to staged files
  - `git diff HEAD~2..`: display the changes made in the 2 most recent commits
- fetch
  - `git fetch origin`: fetch the origin repository
  - `git fetch upstream`: fetch the upstream repository
- grep
  - `git grep update_unread_counts static/js`: Search our JS for references to update_unread_counts.
- log
  - `git log`: show commit logs
  - `git log --oneline | head`: To quickly see the latest ten commits on a branch.
- pull
  - `git pull --rebase`: rebase your changes on top of `main`.
  - `git pull` (with no options): Will either create a merge commit
    (which you don't want) or do the same thing as `git pull --rebase`,
    depending on [whether you've configured Git properly][git-config-clone]
- push
  - `git push origin branch-name`: push your commits to the origin repository _only if_ there are no conflicts.
    Use this when collaborating with others to prevent overwriting their work.
  - `git push origin +branch-name`: force push your commits to your origin repository.
- rebase
  - `git rebase -i HEAD~3`: interactively rebase the three most recent commits on the current branch
  - `git rebase -i main`: interactively rebase the current branch onto the `main` branch
  - `git rebase upstream/main`: rebase the current branch onto `main` from the upstream repository
- reflog
  - `git reflog | head -10`: show the 10 most recent entries in the reference log
- remote
  - `git remote -v`: display your origin and upstream repositories
- reset
  - `git reset HEAD~2`: undo the two most recent commits, keeping their changes in the working tree
- rm
  - `git rm oops.txt`: remove `oops.txt`
- show
  - `git show HEAD`: display the most recent commit
  - `git show HEAD~~~`: display the commit three before the most recent one
  - `git show main`: display the most recent commit on `main`
- status
  - `git status`: show the working tree status, unstaged and staged files
[fix-commit]: fixing-commits.md
[git-config-clone]: cloning.html#step-1b-clone-to-your-machine

View File

@@ -20,7 +20,7 @@ the main server app, this is [zulip/zulip][github-zulip-zulip].
Next, clone your fork to your local machine:
```console
$ git clone --config pull.rebase https://github.com/YOUR_USERNAME/zulip.git
Cloning into 'zulip'
remote: Counting objects: 86768, done.
@@ -32,12 +32,12 @@ Checking connectivity... done.
```
(The `--config pull.rebase` option configures Git so that `git pull`
will behave like `git pull --rebase` by default. Using
`git pull --rebase` to update your changes to resolve merge conflicts
is expected by essentially all open source projects, including
Zulip. You can also set that option after cloning using
`git config --add pull.rebase true`, or just be careful to always run
`git pull --rebase`, never `git pull`).
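If you forgot the `--config` flag when cloning, a minimal sketch of applying and verifying the same setting afterwards (run inside your clone) is:

```console
$ git config --add pull.rebase true
$ git config pull.rebase
true
```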
Note: If you receive an error while cloning, you may not have [added your ssh
key to GitHub][github-help-add-ssh-key].
@@ -56,7 +56,7 @@ your fork.
First, show the currently configured remote repository:
```console
$ git remote -v
origin git@github.com:YOUR_USERNAME/zulip.git (fetch)
origin git@github.com:YOUR_USERNAME/zulip.git (push)
@@ -65,10 +65,10 @@ origin git@github.com:YOUR_USERNAME/zulip.git (push)
Note: If you've cloned the repository using a graphical client, you may already
have the upstream remote repository configured. For example, when you clone
[zulip/zulip][github-zulip-zulip] with the GitHub desktop client it configures
the remote repository `zulip` and you see the following output from
`git remote -v`:
```console
origin git@github.com:YOUR_USERNAME/zulip.git (fetch)
origin git@github.com:YOUR_USERNAME/zulip.git (push)
zulip https://github.com/zulip/zulip.git (fetch)
@@ -78,13 +78,13 @@ zulip https://github.com/zulip/zulip.git (push)
If your client hasn't automatically configured a remote for zulip/zulip, you'll
need to add one with:
```console
$ git remote add -f upstream https://github.com/zulip/zulip.git
```
Finally, confirm that the new remote repository, upstream, has been configured:
```console
$ git remote -v
origin git@github.com:YOUR_USERNAME/zulip.git (fetch)
origin git@github.com:YOUR_USERNAME/zulip.git (push)
@@ -115,13 +115,13 @@ will run tests for new refs you push to GitHub and email you the outcome
Running CI against your fork can help save time for both you and the
Zulip maintainers by making it easy to test a change fully before
submitting a pull request. We generally recommend a workflow where as
you make changes, you use a fast edit-refresh cycle running individual
tests locally until your changes work. But then once you've gotten
the tests you'd expect to be relevant to your changes working, push a
branch to run the full test suite in GitHub Actions before
you create a pull request. While you wait for GitHub Actions jobs
to run, you can start working on your next task. When the tests finish,
you can create a pull request that you already know passes the tests.
GitHub Actions will run all the jobs by default on your forked repository.

View File

@@ -6,7 +6,7 @@ What happens when you would like to collaborate with another contributor and
they have work-in-progress on their own fork of Zulip? No problem! Just add
their fork as a remote and pull their changes.
```console
$ git remote add <username> https://github.com/<username>/zulip.git
$ git fetch <username>
```
@@ -15,12 +15,13 @@ Now you can check out their branch just like you would any other. You can name
the branch anything you want, but using both the username and branch name will
help you keep things organized.
```console
$ git checkout -b <username>/<branchname>
```
You can choose to rename the branch if you prefer:
```bash
git checkout -b <custombranchname> <username>/<branchname>
```
@@ -31,27 +32,28 @@ pull request locally. GitHub provides a special syntax
([details][github-help-co-pr-locally]) for this since pull requests are
specific to GitHub rather than Git.
First, fetch and create a branch for the pull request, replacing _ID_ and
_BRANCHNAME_ with the ID of the pull request and your desired branch name:
```console
$ git fetch upstream pull/ID/head:BRANCHNAME
```
Now switch to the branch:
```console
$ git checkout BRANCHNAME
```
Now you work on this branch as you would any other.
Note: you can use the scripts provided in the tools/ directory to fetch pull
requests. You can read more about what they do [here][tools-pr].
```bash
tools/fetch-rebase-pull-request <PR-number>
tools/fetch-pull-request <PR-number>
```
[github-help-co-pr-locally]: https://help.github.com/en/articles/checking-out-pull-requests-locally
[tools-pr]: ../git/zulip-tools.html#fetch-a-pull-request-and-rebase

View File

@@ -1,35 +1,45 @@
# Fixing commits
This is mostly from
[here](https://help.github.com/en/articles/changing-a-commit-message#rewriting-the-most-recent-commit-message).
## Fixing the last commit
### Changing the last commit message
1. `git commit --amend -m "New message"`
### Changing the last commit
1. Make your changes to the files
2. Run `git add <filename>` to add one file or `git add <filename1> <filename2> ...` to add multiple files
3. `git commit --amend`
## Fixing older commits
### Changing commit messages
1. `git rebase -i HEAD~5` (if, for example, you are editing some of the last five commits)
2. For each commit that you want to change the message, change `pick` to `reword`, and save
3. Change the commit messages
### Deleting old commits
1. `git rebase -i HEAD~n` where `n` is the number of commits you are looking at
2. For each commit that you want to delete, change `pick` to `drop`, and save
## Squashing commits
Sometimes, you want to make one commit out of a bunch of commits. To do this,
1. `git rebase -i HEAD~n` where `n` is the number of commits you are interested in
2. Change `pick` to `squash` on the lines containing the commits you want to squash and save
## Reordering commits
1. `git rebase -i HEAD~n` where `n` is the number of commits you are interested in
2. Reorder the lines containing the commits and save
## Pushing commits after tidying them
1. `git push origin +my-feature-branch` (Note the `+` there and substitute your actual branch name.)
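Putting these steps together, a typical tidy-and-republish cycle might look roughly like this (a sketch; the file and branch names are placeholders):

```console
$ git add zerver/views/example.py
$ git commit --amend
$ git push origin +my-feature-branch
```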

View File

@@ -10,7 +10,7 @@ with these details in mind:
[repository][github-zulip], if you are working on something else besides
Zulip server) to your own account and then create feature/issue branches.
When you're ready to get feedback, submit a work-in-progress (WIP) pull
request. _We encourage you to submit WIP pull requests early and often._
- We use a **[rebase][gitbook-rebase]-oriented workflow.** We do not use merge
commits. This means you should use `git fetch` followed by `git rebase`
@@ -22,15 +22,15 @@ with these details in mind:
We use this strategy in order to avoid the extra commits that appear
when another branch is merged, which clutter the commit history (it's
popular with other large projects such as Django). This makes
Zulip's commit history more readable, but a side effect is that many
pull requests we merge will be reported by GitHub's UI as _closed_
instead of _merged_, since GitHub has poor support for
rebase-oriented workflows.
- We have a **[code style guide][zulip-rtd-code-style]**, a **[commit message
guide][zulip-rtd-commit-messages]**, and strive for each commit to be _a
minimal coherent idea_ (see **[commit
discipline][zulip-rtd-commit-discipline]** for details).
- We provide **many tools to help you submit quality code.** These include
@@ -48,7 +48,7 @@ with these details in mind:
Finally, install the [Zulip developer environment][zulip-rtd-dev-overview], and then
[configure continuous integration for your fork][zulip-git-guide-fork-ci].
---
The following sections will help you be awesome with Zulip and Git/GitHub in a
rebase-based workflow. Read through it if you're new to Git, to a rebase-based

View File

@@ -3,7 +3,7 @@
When you're ready for feedback, submit a pull request. Pull requests
are a feature specific to GitHub. They provide a simple, web-based way
to submit your work (often called "patches") to a project. It's called
a _pull request_ because you're asking the project to _pull changes_
from your fork.
If you're unfamiliar with how to create a pull request, you can check
@@ -20,22 +20,22 @@ requests early and often. This allows you to share your code to make
it easier to get feedback and help with your changes. Prefix the
titles of work-in-progress pull requests with **[WIP]**, which in our
project means that you don't think your pull request is ready to be
merged (e.g. it might not work or pass tests). This sets expectations
correctly for any feedback from other developers, and prevents your
work from being merged before you're confident in it.
## Create a pull request
### Step 0: Make sure you're on a feature branch (not `main`)
It is important to [work on a feature
branch](using.html#work-on-a-feature-branch) when creating a pull
request. Your new pull request will be inextricably linked with your
branch while it is open, so you will need to reserve your branch only
for changes related to your issue, and avoid introducing extraneous
changes for other issues or from upstream.
If you are working on a branch named `main`, you need to create and
switch to a feature branch before proceeding.
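For example, a minimal way to move onto a feature branch (using `issue-123` as a placeholder name) is:

```console
$ git checkout -b issue-123
Switched to a new branch 'issue-123'
```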
### Step 1: Update your branch with git rebase
@@ -44,9 +44,9 @@ The best way to update your branch is with `git fetch` and `git rebase`. Do not
use `git pull` or `git merge` as this will create merge commits. See [keep your
fork up to date][keep-up-to-date] for details.
Here's an example (you would replace _issue-123_ with the name of your feature branch):
```console
$ git checkout issue-123
Switched to branch 'issue-123'
@@ -56,9 +56,9 @@ remote: Compressing objects: 100% (23/23), done.
remote: Total 69 (delta 49), reused 39 (delta 39), pack-reused 7
Unpacking objects: 100% (69/69), done.
From https://github.com/zulip/zulip
69fa600..43e21f6 main -> upstream/main
$ git rebase upstream/main
First, rewinding head to replay your work on top of it...
Applying: troubleshooting tip about provisioning
@@ -68,7 +68,7 @@ Applying: troubleshooting tip about provisioning
Once you've updated your local feature branch, push the changes to GitHub:
```console
$ git push origin issue-123
Counting objects: 6, done.
Delta compression using up to 4 threads.
@@ -83,7 +83,7 @@ To git@github.com:christi3k/zulip.git
If your push is rejected with error **failed to push some refs** then you need
to prefix the name of your branch with a `+`:
```console
$ git push origin +issue-123
Counting objects: 6, done.
Delta compression using up to 4 threads.
@@ -117,7 +117,7 @@ pull request** button.
Alternatively, if you've recently pushed to your fork, you will see a green
**Compare & pull request** button.
You'll see the _Open a pull request_ page:
![images-create-pr]

View File

@@ -8,20 +8,20 @@ on reviewing changes by other contributors.
Display changes between index and working tree (what is not yet staged for commit):
```console
$ git diff
```
Display changes between index and last commit (what you have staged for commit):
```console
$ git diff --cached
```
Display changes in working tree since last commit (changes that are staged as
well as ones that are not):
```console
$ git diff HEAD
```
@@ -31,34 +31,34 @@ Use any git-ref to compare changes between two commits on the current branch.
Display changes between commit before last and last commit:
```console
$ git diff HEAD^ HEAD
```
Display changes between two commits using their hashes:
```console
$ git diff e2f404c 7977169
```
## Changes between branches
Display changes between tip of `topic` branch and tip of `main` branch:
```console
$ git diff topic main
```
Display changes that have occurred on `main` branch since `topic` branch was created:
```console
$ git diff topic...main
```
Display changes you've committed so far since creating a branch from `upstream/main`:
```console
$ git diff upstream/main...HEAD
```
[zulip-rtd-review]: ../contributing/code-reviewing.md

View File

@@ -15,14 +15,14 @@ You'll also need a GitHub account, which you can sign up for
[here][github-join].
We highly recommend you create an SSH key if you don't already have
one and [add it to your GitHub account][github-help-add-ssh-key]. If
you don't, you'll have to type your GitHub username and password every
time you interact with GitHub, which is usually several times a day.
We also highly recommend the following:
- [Configure Git][gitbook-config] with your name and email and
[aliases][gitbook-aliases] for commands you'll use often. We
recommend using your full name (not just your first name), since
that's what we'll use to give credit to your work in places like the
Zulip release notes.

View File

@@ -2,9 +2,10 @@
When you install Git, it adds a manual entry for `gitglossary`. You can view
this glossary by running `man gitglossary`. Below we've included the Git terms
you'll encounter most often along with their definitions from _gitglossary_.
## branch
A "branch" is an active line of development. The most recent commit
on a branch is referred to as the tip of that branch. The tip of
the branch is referenced by a branch head, which moves forward as
@@ -14,14 +15,17 @@ working tree is associated with just one of them (the "current" or
"checked out" branch), and HEAD points to that branch.
## cache
Obsolete for: index
## checkout
The action of updating all or part of the working tree with a tree
object or blob from the object database, and updating the index and
HEAD if the whole working tree has been pointed at a new branch.
## commit
As a noun: A single point in the Git history; the entire history of
a project is represented as a set of interrelated commits. The word
"commit" is often used by Git in the same places other revision
@@ -33,6 +37,7 @@ state in the Git history, by creating a new commit representing the
current state of the index and advancing HEAD to point at the new
## fast-forward
A fast-forward is a special type of merge where you have a revision
and you are "merging" another branch's changes that happen to be a
descendant of what you have. In such cases, you do not make a
@@ -41,19 +46,23 @@ happen frequently on a remote-tracking branch of a remote
repository.
## fetch
Fetching a branch means to get the branch's head ref from a remote
repository, to find out which objects are missing from the local
object database, and to get them, too. See also [git-fetch(1)](https://git-scm.com/docs/git-fetch)
## hash
In Git's context, synonym for object name.
## head
A named reference to the commit at the tip of a branch. Heads are
stored in a file in $GIT_DIR/refs/heads/ directory, except when
using packed refs. See also [git-pack-refs(1)](https://git-scm.com/docs/git-pack-refs).
## HEAD
The current branch. In more detail: Your working tree is normally
derived from the state of the tree referred to by HEAD. HEAD is a
reference to one of the heads in your repository, except when using
@@ -61,15 +70,18 @@ a detached HEAD, in which case it directly references an arbitrary
commit.
## index
A collection of files with stat information, whose contents are
stored as objects. The index is a stored version of your working
tree. Truth be told, it can also contain a second, and even a third
version of a working tree, which are used when merging.
## pull
Pulling a branch means to fetch it and merge it. See also [git-pull(1)](https://git-scm.com/docs/git-pull)
## push
Pushing a branch means to get the branch's head ref from a remote
repository, find out if it is a direct ancestor to the branch's
local head ref, and in that case, putting all objects, which are
@@ -79,5 +91,6 @@ the remote head ref. If the remote head is not an ancestor to the
local head, the push fails.
## rebase
To reapply a series of changes from a branch to a different base,
and reset the head of that branch to the result.

View File

@@ -2,22 +2,22 @@
Whether you're new to Git or have experience with another version control
system (VCS), it's a good idea to learn a bit about how Git works. We recommend
this excellent presentation _[Understanding Git][understanding-git]_ from
Nelson Elhage and Anders Kaseorg and the [Git Basics][gitbook-basics] chapter
from _Pro Git_ by Scott Chacon and Ben Straub.
Here are the top things to know:
- **Git works on snapshots.** Unlike other version control systems (e.g.,
Subversion, Perforce, Bazaar), which track files and changes to those files
made over time, Git tracks _snapshots_ of your project. Each time you commit
or otherwise make a change to your repository, Git takes a snapshot of your
project and stores a reference to that snapshot. If a file hasn't changed,
Git creates a link to the identical file rather than storing it again.
- **Most Git operations are local.** Git is a distributed version control
system, so once you've cloned a repository, you have a complete copy of that
repository's _entire history_. Staging, committing, branching, and browsing
history are all things you can do locally without network access and without
immediately affecting any remote repositories. To make or receive changes
from remote repositories, you need to `git fetch`, `git pull`, or `git push`.
@@ -45,9 +45,9 @@ Here are the top things to know:
- **Cloning a repository creates a working copy.** Every working copy has a
`.git` subdirectory, which contains its own Git repository. The `.git`
subdirectory also tracks the _index_, a staging area for changes that will
become part of the next commit. All files outside of `.git` are the _working
tree_.
- **Files tracked with Git have three possible states: committed, modified, and
staged.** Committed files are those safely stored in your local `.git`
@@ -56,8 +56,8 @@ Here are the top things to know:
changes but have not yet been marked for inclusion in the next commit; they
have not been added to the index.
- **Git commit workflow is as follows.** Edit files in your _working tree_. Add
to the _index_ (that is _stage_) with `git add`. _Commit_ to the HEAD of the
current branch with `git commit`.
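As a concrete sketch of that workflow (the file name and commit message are placeholders):

```console
$ git add zerver/views/example.py
$ git commit -m "example: Describe the change."
```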
[gitbook-basics]: https://git-scm.com/book/en/v2/Getting-Started-Git-Basics

View File

@@ -26,8 +26,8 @@ A merge commit is usually created when you've run `git pull` or `git merge`.
You'll know you're creating a merge commit if you're prompted for a commit
message and the default is something like this:
```text
Merge branch 'main' of https://github.com/zulip/zulip
# Please enter a commit message to explain why this merge is necessary,
# especially if it merges an updated upstream into a topic branch.
@@ -38,13 +38,13 @@ Merge branch 'master' of https://github.com/zulip/zulip
And the first entry for `git log` will show something like:
```console
commit e5f8211a565a5a5448b93e98ed56415255546f94
Merge: 13bea0e e0c10ed
Author: Christie Koehler <ck@christi3k.net>
Date: Mon Oct 10 13:25:51 2016 -0700
Merge branch 'main' of https://github.com/zulip/zulip
```
Some graphical Git clients may also create merge commits.
@@ -52,10 +52,10 @@ Some graphical Git clients may also create merge commits.
To undo a merge commit, first run `git reflog` to identify the commit you want
to roll back to:
```console
$ git reflog
e5f8211 HEAD@{0}: pull upstream main: Merge made by the 'recursive' strategy.
13bea0e HEAD@{1}: commit: test commit for docs.
```
@@ -67,19 +67,18 @@ by `git pull` and `13bea0e HEAD@{1}:` is the last commit I made before running
Once you've identified the ref you want to revert to, you can do so with [git
reset][gitbook-reset]:
```console
$ git reset --hard 13bea0e
HEAD is now at 13bea0e test commit for docs.
```
:::{important}
`git reset --hard <commit>` will discard all changes in your
working directory and index since the commit you're resetting to with
`<commit>`. _This is the main way you can lose work in Git_. If you need
to keep any changes that are in your working directory or that you have
committed, use `git reset --merge <commit>` instead.
:::
You can also use the relative reflog `HEAD@{1}` instead of the commit hash,
just keep in mind that this changes as you run git commands.
@@ -87,17 +86,17 @@ just keep in mind that this changes as you run git commands.
Now when you look at the output of `git reflog`, you should see that the tip of your branch points to your
last commit `13bea0e` before the merge:
```console
$ git reflog
13bea0e HEAD@{2}: reset: moving to HEAD@{1}
e5f8211 HEAD@{3}: pull upstream main: Merge made by the 'recursive' strategy.
13bea0e HEAD@{4}: commit: test commit for docs.
```
And the first entry `git log` shows is this:
```console
commit 13bea0e40197b1670e927a9eb05aaf50df9e8277
Author: Christie Koehler <ck@christi3k.net>
Date: Mon Oct 10 13:25:38 2016 -0700
@@ -115,32 +114,32 @@ with `git cherry-pick` ([docs][gitbook-git-cherry-pick]).
For example, let's say you just committed "some work" and your `git log` looks
like this:
```console
* 67aea58 (HEAD -> main) some work
* 13bea0e test commit for docs.
```
You then mistakenly run `git reset --hard 13bea0e`:
```console
$ git reset --hard 13bea0e
HEAD is now at 13bea0e test commit for docs.
$ git log
* 13bea0e (HEAD -> main) test commit for docs.
```
And then realize you actually needed to keep commit 67aea58. First, use
`git reflog` to confirm the commit you want to restore and then run
`git cherry-pick <commit>`:
```console
$ git reflog
13bea0e HEAD@{0}: reset: moving to 13bea0e
67aea58 HEAD@{1}: commit: some work
$ git cherry-pick 67aea58
[main 67aea58] some work
Date: Thu Oct 13 11:51:19 2016 -0700
1 file changed, 1 insertion(+)
create mode 100644 test4.txt
@@ -154,13 +153,13 @@ which ever branch you are rebasing on top of, is to code that has been changed
by those new commits.
For example, while I'm working on a file, another contributor makes a change to
that file, submits a pull request and has their code merged into `main`.
Usually this is not a problem, but in this case the other contributor made a
change to a part of the file I also want to change. When I try to bring my
branch up to date with `git fetch` and then `git rebase upstream/main`, I see
the following:
```console
First, rewinding head to replay your work on top of it...
Applying: test change for docs
Using index info to reconstruct a base tree...
@@ -178,11 +177,11 @@ To check out the original branch and stop rebasing, run "git rebase --abort".
```
This message tells me that Git was not able to apply my changes to README.md
after bringing in the new commits from `upstream/main`.
Running `git status` also gives me some information:
```console
rebase in progress; onto 5ae56e6
You are currently rebasing branch 'docs-test' on '5ae56e6'.
(fix conflicts and then run "git rebase --continue")
@@ -204,10 +203,12 @@ and `>>>>>>>`) markers to indicate where in files there are conflicts.
Tip: You can see recent changes made to a file by running the following
commands:
```bash
git fetch upstream
git log -p upstream/main -- /path/to/file
```
You can use this to compare the changes that you have made to a file with the
ones in upstream, helping you avoid undoing changes from a previous commit when
you are rebasing.
@@ -215,7 +216,7 @@ you are rebasing.
Once you've done that, save the file(s), stage them with `git add` and then
continue the rebase with `git rebase --continue`:
```console
$ git add README.md
$ git rebase --continue
@@ -234,14 +235,14 @@ pay attention and do a bit of work to ensure all of your work is readily
available.
Recall that most Git operations are local. When you commit your changes with
`git commit` they are safely stored in your _local_ Git database only. That is,
until you _push_ the commits to GitHub, they are only available on the computer
where you committed them.
So, before you stop working for the day, or before you switch computers, push
all of your commits to GitHub with `git push`:
```console
$ git push origin <branchname>
```
@@ -254,7 +255,7 @@ But if you're switching to another computer on which you have already cloned
Zulip, you need to update your local Git database with new refs from your
GitHub fork. You do this with `git fetch`:
```console
$ git fetch <username>
```
@@ -262,11 +263,11 @@ Ideally you should do this before you have made any commits on the same branch
on the second computer. Then you can `git merge` on whichever branch you need
to update:
```console
$ git checkout <my-branch>
Switched to branch '<my-branch>'
$ git merge origin/main
```
**If you have already made commits on the second computer that you need to

View File

@@ -8,7 +8,7 @@ determine the currently checked out branch several ways.
One way is with [git status][gitbook-git-status]:
```console
$ git status
On branch issue-demo
nothing to commit, working directory clean
@@ -17,23 +17,23 @@ nothing to commit, working directory clean
Another is with [git branch][gitbook-git-branch] which will display all local
branches, with a star next to the current branch:
```console
$ git branch
* issue-demo
main
```
To see even more information about your branches, including remote branches,
use `git branch -vva`:
```console
$ git branch -vva
* issue-123 517468b troubleshooting tip about provisioning
main f0eaee6 [origin/main] bug: Fix traceback in get_missed_message_token_from_address().
remotes/origin/HEAD -> origin/main
remotes/origin/issue-1234 4aeccb7 Another test commit, with longer message.
remotes/origin/main f0eaee6 bug: Fix traceback in get_missed_message_token_from_address().
remotes/upstream/main dbeab6a Optimize checks of test database state by moving into Python.
```
You can also configure [Bash][gitbook-other-envs-bash] and
@@ -46,48 +46,48 @@ from Zulip's main repositories.
**Note about git pull**: You might be used to using `git pull` on other
projects. With Zulip, because we don't use merge commits, you'll want to avoid
it. Rather than using `git pull`, which by default is a shortcut for
`git fetch && git merge FETCH_HEAD` ([docs][gitbook-git-pull]), you
should use `git fetch` and then `git rebase`.
First, [fetch][gitbook-fetch] changes from Zulip's upstream repository you
configured in the step above:
```console
$ git fetch upstream
```
Next, check out your `main` branch and [rebase][gitbook-git-rebase] it on top
of `upstream/main`:
```console
$ git checkout main
Switched to branch 'main'
$ git rebase upstream/main
```
This will roll back any changes you've made to `main`, update it from
`upstream/main`, and then re-apply your changes. Rebasing keeps the commit
history clean and readable.
When you're ready, [push your changes][github-help-push] to your remote fork.
Make sure you're in branch `main` and then run `git push`:
```console
$ git checkout main
$ git push origin main
```
You can keep any branch up to date using this method. If you're working on a
feature branch (see next section), which we recommend, you would change the
command slightly, using the name of your `feature-branch` rather than `main`:
```console
$ git checkout feature-branch
Switched to branch 'feature-branch'
$ git rebase upstream/main
$ git push origin feature-branch
```
@@ -99,25 +99,25 @@ feature. Recall from [how Git is different][how-git-is-different] that
**Git is designed for lightweight branching and merging.** You can and should
create as many branches as you'd like.
First, make sure your `main` branch is up-to-date with Zulip upstream ([see
how][zulip-git-guide-up-to-date]).
Next, from your `main` branch, create a new tracking branch, providing a
descriptive name for your feature branch:
```console
$ git checkout main
Switched to branch 'main'
$ git checkout -b issue-1755-fail2ban
Switched to a new branch 'issue-1755-fail2ban'
```
Alternatively, you can create a new branch explicitly based off
`upstream/main`:
```console
$ git checkout -b issue-1755-fail2ban upstream/main
Switched to a new branch 'issue-1755-fail2ban'
```
@@ -135,7 +135,7 @@ Recall that files tracked with Git have three possible states:
committed, modified, and staged.
To prepare a commit, first add the files with changes that you want
to include in your commit to your staging area. You _add_ both new files and
existing ones. You can also remove files from staging when necessary.
### Get status of working directory
@@ -146,7 +146,7 @@ staged, use `git status`.
If you have no changes in the working directory, you'll see something like
this:
```console
$ git status
On branch issue-123
nothing to commit, working directory clean
@@ -154,7 +154,7 @@ nothing to commit, working directory clean
If you have unstaged changes, you'll see something like this:
```console
On branch issue-123
Untracked files:
(use "git add <file>..." to include in what will be committed)
@@ -166,14 +166,15 @@ nothing added to commit but untracked files present (use "git add" to track)
### Stage additions with git add
To add changes to your staging area, use `git add <filename>`. Because
`git add` is all about staging the changes you want to commit, you use
it to add _new files_ as well as _files with changes_ to your staging
area.
Continuing our example from above, after we run `git add newfile.py`, we'll see
the following from `git status`:
```console
On branch issue-123
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
@@ -187,13 +188,12 @@ view changes to files you haven't yet staged, just use `git diff`.
If you want to add all changes in the working directory, use `git add -A`
([documentation][gitbook-add]).
You can also stage changes using your graphical Git client.
If you stage a file, you can undo it with `git reset HEAD <filename>`. Here's
an example where we stage a file `test3.txt` and then unstage it:
```console
$ git add test3.txt
On branch issue-1234
Changes to be committed:
@@ -222,7 +222,7 @@ stage the file for deletion and leave it in your working directory.
To stage a file for deletion and **remove** it from your working directory, use
`git rm <filename>`:
```console
$ git rm test.txt
rm 'test.txt'
@@ -240,7 +240,7 @@ ls: No such file or directory
To stage a file for deletion and **keep** it in your working directory, use
`git rm --cached <filename>`:
```console
$ git rm --cached test2.txt
rm 'test2.txt'
@@ -258,7 +258,7 @@ test2.txt
If you stage a file for deletion with the `--cached` option, and haven't yet
run `git commit`, you can undo it with `git reset HEAD <filename>`:
```console
$ git reset HEAD test2.txt
```
@@ -273,7 +273,7 @@ with `git commit -m "My commit message."` to include a commit message.
Here's an example of committing with the `-m` for a one-line commit message:
```console
$ git commit -m "Add a test commit for docs."
[issue-123 173e17a] Add a test commit for docs.
1 file changed, 1 insertion(+)
@@ -295,7 +295,7 @@ messages][zulip-rtd-commit-messages] for details.
Here's an example of a longer commit message that will be used for a pull request:
```text
Integrate Fail2Ban.
Updates Zulip logging to put an unambiguous entry into the logs such
@@ -317,13 +317,13 @@ testing in a more production-like environment.
The final paragraph indicates that this commit addresses and fixes issue #1755.
When you submit your pull request, GitHub will detect and link this reference
to the appropriate issue. Once your commit is merged into `upstream/main`, GitHub
will automatically close the referenced issue. See [Closing issues via commit
messages][github-help-closing-issues] for details.
Note in particular that GitHub's regular expressions for this feature
are sloppy, so phrases like `Partially fixes #1234` will automatically
close the issue. Phrases like `Fixes part of #1234` are a good
alternative.
Make as many commits as you need to address the issue or implement your feature.
@@ -335,9 +335,9 @@ This ensures your work is backed up should something happen to your local
machine and allows others to follow your progress. It also allows you to
[work from multiple computers][self-multiple-computers] without losing work.
Pushing to a feature branch is just like pushing to `main`:
```console
$ git push origin <branch-name>
Counting objects: 6, done.
Delta compression using up to 4 threads.
@@ -367,7 +367,7 @@ your commit history be able to clearly understand your progression of work?
On the command line, you can use the `git log` command to display an easy to
read list of your commits:
```console
$ git log --all --graph --oneline --decorate
* 4f8d75d (HEAD -> 1754-docs-add-git-workflow) docs: Add details about configuring Travis CI.
@@ -376,7 +376,7 @@ $ git log --all --graph --oneline --decorate
* 985116b docs: Add graphic client recs to Git Guide.
* 3c40103 docs: Add stubs for remaining Git Guide sections.
* fc2c01e docs: Add git guide quickstart.
| * f0eaee6 (upstream/main) bug: Fix traceback in get_missed_message_token_from_address().
```
Alternatively, use your graphical client to view the history for your feature branch.
@@ -404,7 +404,7 @@ Any time you alter history for commits you have already pushed to GitHub,
you'll need to prefix the name of your branch with a `+`. Without this, your
updates will be rejected with a message such as:
```console
$ git push origin 1754-docs-add-git-workflow
To git@github.com:christi3k/zulip.git
! [rejected] 1754-docs-add-git-workflow -> 1754-docs-add-git-workflow (non-fast-forward)
@@ -413,13 +413,12 @@ hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
```
Re-running the command with `+<branch>` allows the push to continue by
re-writing the history for the remote repository:
```console
$ git push origin +1754-docs-add-git-workflow
Counting objects: 12, done.
Delta compression using up to 4 threads.
@@ -429,7 +428,6 @@ Total 12 (delta 8), reused 0 (delta 0)
remote: Resolving deltas: 100% (8/8), completed with 2 local objects.
To git@github.com:christi3k/zulip.git
+ 2d49e2d...bfb2433 1754-docs-add-git-workflow -> 1754-docs-add-git-workflow (forced update)
```
This is perfectly okay to do on your own feature branches, especially if you're

View File

@@ -3,8 +3,8 @@
When you work on Zulip code, there are three copies of the Zulip Git
repository that you are generally concerned with:
- The `upstream` remote. This is the [official Zulip
repository](https://github.com/zulip/zulip) on GitHub. You probably
don't have write access to this repository.
- The **origin** remote: Your personal remote repository on GitHub.
You'll use this to share your code and create [pull requests](../git/pull-requests.md).
@@ -31,7 +31,7 @@ Sometimes you want to publish commits. Here are some scenarios:
Finally, the Zulip core team will occasionally want your changes!
- The Zulip core team can accept your changes and add them to
the official repo, usually on the `main` branch.
## Relevant Git commands
@@ -44,7 +44,7 @@ working copies:
- `git push`: This pushes code from your local repository to one of the remotes.
- `git remote`: This helps you configure short names for remotes.
- `git pull`: This pulls code, but by default creates a merge commit
(which you definitely don't want). However, if you've followed our
[cloning documentation](../git/cloning.md), this will do
`git pull --rebase` instead, which is the only mode you'll want to
use when working on Zulip.
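Putting these together, the usual flow between the three copies looks roughly like this (a sketch; `issue-123` is a placeholder branch name):

```console
$ git fetch upstream           # update your local repository from the official repo
$ git rebase upstream/main     # replay your branch on top of the latest main
$ git push origin issue-123    # publish the branch to your fork on GitHub
```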

View File

@@ -5,7 +5,7 @@ time when working with Git on the Zulip project.
## Set up Git repo script
**Extremely useful**. In the `tools` directory of
**Extremely useful**. In the `tools` directory of
`setup-git-repo`. This script installs a pre-commit hook, which will
run each time you `git commit` to automatically run
@@ -16,7 +16,7 @@ notices or warnings it displays.
It's simple to use. Make sure you're in the clone of zulip and run the following:
```console
$ ./tools/setup-git-repo
```
@@ -24,7 +24,7 @@ The script doesn't produce any output if successful. To check that the hook has
been installed, print a directory listing for `.git/hooks` and you should see
something similar to:
```console
$ ls -l .git/hooks
pre-commit -> ../../tools/pre-commit
```
@@ -41,16 +41,16 @@ described above in that it does not create a branch for the pull request
checkout.
**This tool checks for uncommitted changes, but it will move the
current branch using `git reset --hard`. Use with caution.**
First, make sure you are working in a branch you want to move (in this
example, we'll use the local `main` branch). Then run the script
with the ID number of the pull request as the first argument.
```console
$ git checkout main
Switched to branch 'main'
Your branch is up-to-date with 'origin/main'.
$ ./tools/reset-to-pull-request 1900
+ request_id=1900
@@ -70,11 +70,11 @@ HEAD is now at 2bcd1d8 troubleshooting tip about provisioning
`tools/fetch-rebase-pull-request` is a short-cut for [checking out a pull
request locally][zulip-git-guide-fetch-pr] in its own branch and then updating it with any
changes from `upstream/main` with `git rebase`.
Run the script with the ID number of the pull request as the first argument.
```console
$ tools/fetch-rebase-pull-request 1913
+ request_id=1913
+ git fetch upstream pull/1913/head
@@ -84,8 +84,8 @@ remote: Total 4 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (4/4), done.
From https://github.com/zulip/zulip
* branch refs/pull/1913/head -> FETCH_HEAD
+ git checkout upstream/main -b review-1913
Branch review-1913 set up to track remote branch main from upstream.
Switched to a new branch 'review-1913'
+ git reset --hard FETCH_HEAD
HEAD is now at 99aa2bf Add provision.py fails issue in common errors
@@ -96,12 +96,12 @@ Current branch review-1913 is up to date.
## Fetch a pull request without rebasing
`tools/fetch-pull-request` is a similar to `tools/fetch-rebase-pull-request`, but
it does not rebase the pull request against `upstream/main`, thereby getting
exactly the same repository state as the commit author had.
Run the script with the ID number of the pull request as the first argument.
```console
$ tools/fetch-pull-request 5156
+ git diff-index --quiet HEAD
+ request_id=5156
@@ -118,18 +118,18 @@ HEAD is now at 5a1e982 tools: Update clean-branches to clean review branches.
## Push to a pull request
`tools/push-to-pull-request` is primarily useful for maintainers who
are merging other users' commits into a Zulip repository. After doing
`reset-to-pull-request` or `fetch-pull-request` and making some
changes, you can push a branch back to a pull request with e.g.
`tools/push-to-pull-request 1234`. This is useful for a few things:
- Getting CI to run and enabling you to use the GitHub "Merge" buttons
to merge a PR after you make some corrections to a PR, without
waiting for an extra round trip with the PR author.
- For commits that aren't ready to merge yet, communicating clearly
any changes you'd like to see happen that are easier for you to
explain by just editing the code than in words.
- Saving a contributor from needing to duplicate any rebase work that
you did as part of integrating parts of the PR.
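For instance, a maintainer's session might look roughly like this (a sketch; `1234` and the file path are placeholders):

```console
$ tools/fetch-pull-request 1234
$ git add path/to/corrected-file
$ git commit --amend
$ tools/push-to-pull-request 1234
```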
You'll likely want to comment on the PR after doing so, to ensure that
@@ -139,23 +139,23 @@ next batch of changes.
Note that in order to do this you need permission to do such a push,
which GitHub offers by default to users with write access to the
repository. For multiple developers collaborating on a PR, you can
achieve this by granting other users permission to write to your fork.
## Delete unimportant branches
`tools/clean-branches` is a shell script that removes branches that are either:
1. Local branches that are ancestors of `origin/main`.
2. Branches in origin that are ancestors of `origin/main` and named like `$USER-*`.
3. Review branches created by `tools/fetch-rebase-pull-request` and `tools/fetch-pull-request`.
First, make sure you are working in branch `main`. Then run the script without any
arguments for default behavior. Since removing review branches can inadvertently remove any
feature branches whose names are like `review-*`, it is not done by default. To
use it, run `tools/clean-branches --reviews`.
```console
$ tools/clean-branches --reviews
Deleting local branch review-original-5156 (was 5a1e982)
```
@@ -163,12 +163,13 @@ Deleting local branch review-original-5156 (was 5a1e982)
## Merge conflict on yarn.lock file
If there is a merge conflict on yarn.lock, yarn should be run to
regenerate the file. _Important_: don't delete the yarn.lock file. Check out the
latest one from `origin/main` so that yarn knows the previous asset versions.
Run the following commands:
```bash
git checkout origin/main -- yarn.lock
yarn install
git add yarn.lock
git rebase --continue

View File

@@ -1,10 +1,8 @@
# Zulip architectural overview
## Key codebases
The main Zulip codebase is at <https://github.com/zulip/zulip>. It
contains the Zulip backend (written in Python 3.x and Django), the
webapp (written in JavaScript and TypeScript) and our library of
incoming webhook [integrations](https://zulip.com/integrations)
@@ -37,22 +35,21 @@ translations.
In this overview, we'll mainly discuss the core Zulip server and web
application.
## Usage assumptions and concepts
Zulip is a real-time team chat application meant to provide a great
experience for a wide range of organizations, from companies to
volunteer projects to groups of friends, ranging in size from a small
team to 10,000s of users. It has [hundreds of
features](https://zulip.com/features) both large and small, and
supports dedicated apps for iOS, Android, Linux, Windows, and macOS,
all modern web browsers, several cross-protocol chat clients, and
numerous dedicated [Zulip API](https://zulip.com/api) clients
(e.g. bots).
A server can host multiple Zulip _realms_ (organizations), each on its
own (sub)domain. While most installations host only one organization, some
such as zulip.com host thousands. Each organization is a private
chamber with its own users, streams, customizations, and so on. This
means that one person might be a user of multiple Zulip realms. The
administrators of an organization have a great deal of control over
@@ -61,15 +58,14 @@ more on security considerations and options, see [the security model
section](../production/security-model.md) and the [Zulip Help
Center](https://zulip.com/help).
## Components
![architecture-simple](../images/architecture_simple.png)
### Django and Tornado
Zulip is primarily implemented in the
[Django](https://www.djangoproject.com/) Python web framework. We
also make use of [Tornado](https://www.tornadoweb.org) for the
real-time push system.
@@ -86,10 +82,10 @@ connection from every running client. For this reason, it's
responsible for event (message) delivery, but not much else. We try to
avoid any blocking calls in Tornado because we don't want to delay
delivery to thousands of other connections (as this would make Zulip
very much not real-time). For instance, we avoid doing cache or
database queries inside the Tornado code paths, since those blocking
requests carry a very high performance penalty for a single-threaded,
asynchronous server system. (In principle, we could do non-blocking
requests to those services, but the Django-based database libraries we
use in most of our codebase don't support that, and in any case,
our architecture doesn't require Tornado to do that).
@@ -116,8 +112,8 @@ For more details on the frontend, see our documentation on
[directory structure](../overview/directory-structure.md), and
[the static asset pipeline](../subsystems/html-css.html#static-asset-pipeline).
[jinja2]: http://jinja.pocoo.org/
[handlebars]: https://handlebarsjs.com/
### nginx
@@ -130,19 +126,19 @@ according to the rules laid down in the many config files found in
important of these files. It explains what happens when requests come in
from outside.
- In production, all requests to URLs beginning with `/static/` are
served from the corresponding files in `/home/zulip/prod-static/`,
and the production build process (`tools/build-release-tarball`)
compiles, minifies, and installs the static assets into the
`prod-static/` tree form. In development, files are served directly
from `/static/` in the Git repository.
- Requests to `/json/events` and `/api/v1/events`, i.e. the
real-time push system, are sent to the Tornado server.
- Requests to all other paths are sent to the Django app running via
`uWSGI` via `unix:/home/zulip/deployments/uwsgi-socket`.
- By default (i.e. if `LOCAL_UPLOADS_DIR` is set), nginx will serve
user-uploaded content like avatars, custom emoji, and uploaded
files. However, one can configure Zulip to store these in a cloud
storage service like Amazon S3 instead.
Note that we do not use `nginx` in the development environment, opting
@@ -168,7 +164,7 @@ memcached is used to cache database model
objects. `zerver/lib/cache.py` and `zerver/lib/cache_helpers.py`
manage putting things into memcached, and invalidating the cache when
values change. The memcached configuration is in
`puppet/zulip/files/memcached.conf`. See our
[caching guide](../subsystems/caching.md) to learn how this works in
detail.
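The pattern those helpers implement is the standard cache-aside one. A generic sketch using Django's cache API (illustrative only: the key format and the fallback value below are made up, and Zulip's real helpers add key versioning and automatic invalidation):
```python
from django.core.cache import cache  # backed by memcached in production

def cached_display_name(user_id: int) -> str:
    key = f"example:display_name:{user_id}"  # hypothetical key format
    value = cache.get(key)
    if value is None:
        # Stand-in for the real database lookup this cache would protect.
        value = f"User {user_id}"
        cache.set(key, value, timeout=3600)
    return value
```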
@@ -181,25 +177,27 @@ Redis is configured in `zulip/puppet/zulip/files/redis` and it's a
pretty standard configuration except for the last line, which turns off
persistence:
```text
# Zulip-specific configuration: disable saving to disk.
save ""
```
People often wonder if we could replace memcached with Redis (or
replace RabbitMQ with Redis, with some loss of functionality).
The answer is likely yes, but it wouldn't improve Zulip.
Operationally, our current setup is likely easier to develop and run
in production than a pure Redis system would be. Meanwhile, the
perceived benefit for using Redis is usually to reduce memory
consumption by running fewer services, and no such benefit would
materialize:
- Our cache uses significant memory, but that memory usage would be
  essentially the same with Redis as it is with memcached.
- All of these services have low minimum memory requirements, and in
  fact our applications for Redis and RabbitMQ do not use significant
  memory even at scale.
- We would likely need to run multiple Redis services (with different
  configurations) in order to ensure the pure LRU use case (memcached)
  doesn't push out data that we want to persist until expiry
  (Redis-based rate limiting) or until consumed (RabbitMQ-based
@@ -219,7 +217,7 @@ and the Tornado push system.
Two simple wrappers around `pika` (the Python RabbitMQ client) are in
`zulip/zerver/lib/queue.py`. There's an asynchronous client for use in
Tornado and a more general client for use elsewhere. Most of the
processes started by Supervisor are queue processors that continually
pull things out of a RabbitMQ queue and handle them; they are defined
in `zerver/worker/queue_processors.py`.
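As a rough illustration of that pull-and-handle pattern (this is not Zulip's wrapper API; `zerver/lib/queue.py` adds reconnection, error handling, and the Tornado-friendly client on top), a bare `pika` consumer for a hypothetical `example_events` queue might look like:
```python
import json

import pika

def handle_event(channel, method, properties, body):
    # A queue processor pulls one message at a time off its queue and handles it.
    event = json.loads(body)
    print("processing", event)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="example_events", durable=True)
channel.basic_consume(queue="example_events", on_message_callback=handle_event)
channel.start_consuming()
```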
@@ -241,7 +239,7 @@ list of stopwords used by a PostgreSQL extension.
In a development environment, configuration of that PostgreSQL
extension is handled by `tools/postgresql-init-dev-db` (invoked by
`tools/provision`). That file also manages setting up the
development PostgreSQL user.
`tools/provision` also invokes `tools/rebuild-dev-database`
@@ -265,53 +263,53 @@ component of the Zulip server (e.g.
## Glossary
This section gives names for some of the elements in the Zulip UI used
in Zulip development conversations. In general, our goal is to
minimize the set of terminology listed here by giving elements
self-explanatory names.
- **bankruptcy**: When a user has been off Zulip for several days and
has hundreds of unread messages, they are prompted for whether
they want to mark all their unread messages as read. This is
called "declaring bankruptcy" (in reference to the concept in
finance).
- **chevron**: A small downward-facing arrow next to a message's
timestamp, offering contextual options, e.g., "Reply", "Mute [this
topic]", or "Link to this conversation". To avoid visual clutter,
the chevron only appears in the web UI upon hover.
- **ellipsis**: A small vertical three dot icon (technically called
as ellipsis-v), present in sidebars as a menu icon.
It offers contextual options for global filters (All messages
and Starred messages), stream filters and topics in left
sidebar and users in right sidebar. To avoid visual clutter
ellipsis only appears in the web UI upon hover.
- **huddle**: What the codebase calls a "group private message".
- **message editing**: If the realm admin allows it, then after a user
posts a message, the user has a few minutes to click "Edit" and
change the content of their message. If they do, Zulip adds a
marker such as "(EDITED)" at the top of the message, visible to
anyone who can see the message.
- **realm**: What the codebase calls an "organization" in the UI.
- **recipient bar**: A visual indication of the context of a message
or group of messages, displaying the stream and topic or private
message recipient list, at the top of a group of messages. A
typical 1-line message to a new recipient shows to the user as
three lines of content: first the recipient bar, second the
sender's name and avatar alongside the timestamp (and, on hover,
the star and the chevron), and third the message content. The
recipient bar is or contains hyperlinks to help the user narrow.
- **star**: Zulip allows a user to mark any message they can see,
public or private, as "starred". A user can easily access messages
they've starred through the "Starred messages" link in the
left sidebar, or use "is:starred" as a narrow or a search
constraint. Whether a user has or has not starred a particular
message is private; other users and realm admins don't know
whether a message has been starred, or by whom.
- **subject**: What the codebase calls a "topic" in many places.


@@ -13,173 +13,172 @@ Zulip uses the [Django web
framework](https://docs.djangoproject.com/en/1.8/), so a lot of these
paths will be familiar to Django developers.
- `zproject/urls.py` Main
  [Django routes file](https://docs.djangoproject.com/en/1.8/topics/http/urls/).
  Defines which URLs are handled by which view functions or templates.
- `zerver/models.py` Main
  [Django models](https://docs.djangoproject.com/en/1.8/topics/db/models/)
  file. Defines Zulip's database tables.
- `zerver/lib/*.py` Most library code.
- `zerver/lib/actions.py` Most code doing writes to user-facing
  database tables lives here. In particular, we have a policy that
  all code calling `send_event` to trigger [pushing data to
  clients](../subsystems/events-system.md) must live here.
- `zerver/views/*.py` Most [Django views](https://docs.djangoproject.com/en/1.8/topics/http/views/).
- `zerver/webhooks/` Webhook views and tests for [Zulip's incoming webhook integrations](https://zulip.com/api/incoming-webhooks-overview).
- `zerver/tornado/views.py` Tornado views.
- `zerver/worker/queue_processors.py` [Queue workers](../subsystems/queuing.md).
- `zerver/lib/markdown/` [Backend Markdown processor](../subsystems/markdown.md).
- `zproject/backends.py` [Authentication backends](https://docs.djangoproject.com/en/1.8/topics/auth/customizing/).
---
### HTML templates
See [our docs](../subsystems/html-css.md) for details on Zulip's
templating systems.
- `templates/zerver/` For [Jinja2](http://jinja.pocoo.org/) templates
  for the backend (for zerver app; logged-in content is in `templates/zerver/app`).
- `static/templates/` [Handlebars](https://handlebarsjs.com/) templates for the frontend.
---
### JavaScript, TypeScript, and other static assets
- `static/js/` Zulip's own JavaScript and TypeScript sources.
- `static/styles/` Zulip's own CSS.
- `static/images/` Zulip's images.
- `static/third/` Third-party JavaScript and CSS that has been vendored.
- `node_modules/` Third-party JavaScript installed via `yarn`.
- `static/assets/` For assets not to be served to the web (e.g. the system to
  generate our favicons).
---
### Tests
- `zerver/tests/` Backend tests.
- `frontend_tests/node_tests/` Node Frontend unit tests.
- `frontend_tests/puppeteer_tests/` Puppeteer frontend integration tests.
- `tools/test-*` Developer-facing test runner scripts.
---
### Management commands
These are distinguished from scripts, below, by needing to run a
Django context (i.e. with database access).
- `zerver/management/commands/`
  [Management commands](../subsystems/management-commands.md) one might run at a
  production deployment site (e.g. scripts to change a value or
  deactivate a user properly).
- `zilencer/management/commands/` includes some dev-specific
  commands such as `populate_db`, which are not included in
  the production distribution.
---
### Scripts
- `scripts/` Scripts that production deployments might run manually
  (e.g., `restart-server`).
- `scripts/lib/` Scripts that are needed on production deployments but
  humans should never run directly.
- `scripts/setup/` Scripts that production deployments will only run
  once, during installation.
- `tools/` Scripts used only in a Zulip development environment.
  These are not included in production release tarballs for Zulip, so
  that we can include scripts here one wouldn't want someone to run in
  production accidentally (e.g. things that delete the Zulip database
  without prompting).
- `tools/setup/` Subdirectory of `tools/` for things only used during
  the development environment setup process.
- `tools/ci/` Subdirectory of `tools/` for things only used to
  set up and run our tests in CI. Actual test suites should
  go in `tools/`.
---
### API and bots
- See the [Zulip API repository](https://github.com/zulip/python-zulip-api).
  Zulip's Python API bindings, a number of Zulip integrations and
  bots, and a framework for running and testing Zulip bots, used to be
  developed in the main Zulip server repo but are now in their own repo.
- `templates/zerver/integrations/` (within `templates/zerver/`, above).
  Documentation for these integrations.
---
### Production Puppet configuration
This is used to deploy essentially all configuration in production.
- `puppet/zulip/` For configuration for production deployments.
- `puppet/zulip/manifests/profile/standalone.pp` Main manifest for Zulip standalone deployments.
---
### Additional Django apps
- `confirmation` Email confirmation system.
- `analytics` Analytics for the Zulip server administrator (needs work to
  be useful to normal Zulip sites).
- `corporate` The old Zulip.com website. Not included in production
  distribution.
- `zilencer` Primarily used to hold management commands that aren't
  used in production. Not included in production distribution.
---
### Jinja2 compatibility files
- `zproject/jinja2/__init__.py` Jinja2 environment.
---
### Translation files
- `locale/` Backend (Django) and frontend translation data files.
---
### Documentation
- `docs/` Source for this documentation.
---
You can consult the repository's `.gitattributes` file to see exactly
which components are excluded from production releases (release


@@ -4,20 +4,20 @@ This page details the release lifecycle for the Zulip server and
client apps, as well as our policies around backwards-compatibility and
security support policies. In short:
- We recommend always running the latest releases of the Zulip clients
and servers. Server upgrades are designed to Just Work; mobile and
desktop client apps update automatically.
- The server and client apps are backwards and forwards compatible
across a wide range of versions. So while it's important to upgrade
the server to get security updates, bug fixes, and new features, the
mobile and desktop apps will continue working for at least 18 months
if you don't do so.
- New server releases are announced via the low-traffic
[zulip-announce email
list](https://groups.google.com/forum/#!forum/zulip-announce). We
highly recommend subscribing so that you are notified about new
security releases.
- Zulip Cloud runs the branch that will become the next major
server/webapp release, so it is always "newer" than the latest
stable release.
@@ -28,16 +28,16 @@ server repository][zulip-server].
### Stable releases
- Zulip Server **stable releases**, such as Zulip 4.5.
Organizations self-hosting Zulip primarily use stable releases.
- The numbering scheme is simple: the first digit indicates the major
release series (which we'll refer to as "4.x"). (Before Zulip 3.0,
Zulip versions had another digit, e.g. 1.9.2 was a bug fix release
in the Zulip 1.9.x major release series).
- [New major releases][blog-major-releases], like Zulip 4.0, are
published every 3-6 months, and contain hundreds of features, bug
fixes, and improvements to Zulip's internals.
- New maintenance releases, like 4.3, are published roughly once a
month. Maintenance releases are designed to have no risky changes
and be easy to reverse, to minimize stress for administrators. When
upgrading to a new major release series, we recommend always
@@ -45,7 +45,7 @@ server repository][zulip-server].
you use the latest version of the upgrade code.
Starting with Zulip 4.0, the Zulip webapp displays the current server
version in the gear menu. With older releases, the server version is
available [via the API](https://zulip.com/api/get-server-settings).
This ReadTheDocs documentation has a widget in the lower-left corner
@@ -60,25 +60,25 @@ the Zulip server itself (E.g. `https://zulip.example.com/help/`).
Many Zulip servers run versions from Git that have not been published
in a stable release.
- [Zulip Cloud](https://zulip.com) essentially runs the `main`
branch. It is usually a few days behind `main` (with some
cherry-picked bug fixes), but can fall up to 2 weeks behind when
major UI or internals changes mean we'd like to bake changes longer
on chat.zulip.org before exposing them to the full Zulip Cloud
userbase.
- [chat.zulip.org][chat-zulip-org], the bleeding-edge server for the
Zulip development community, is upgraded to `main` several times
every week. We also often "test deploy" changes not yet in `main`
to chat.zulip.org to facilitate design feedback.
- We maintain Git branches with names like `4.x` containing backported
commits from `main` that we plan to include in the next maintenance
release. Self-hosters can [upgrade][upgrade-from-git] to these
stable release branches to get bug fixes staged for the next stable
release (which is very useful when you reported a bug whose fix we
choose to backport). We support these branches as though they were a
stable release; see the sketch after this list for an example upgrade command.
- Self-hosters who want new features not yet present in a major
release can [upgrade to `main`][upgrading-to-main] or run [a fork
of Zulip][fork-zulip].
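For reference, upgrading a self-hosted server to one of these branches (or to `main`) uses the upgrade-from-Git tooling; roughly, assuming a default installation layout:
```bash
# Run as root on the Zulip server; the argument can be a branch, tag, or commit.
/home/zulip/deployments/current/scripts/upgrade-zulip-from-git 4.x
```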
### Compatibility and upgrading
@@ -89,15 +89,15 @@ for self-hosters, has no regressions, and that the [Zulip upgrade
process](../production/upgrade-or-modify.md) Just Works.
The Zulip server and client apps are all carefully engineered to
ensure compatibility with old versions. In particular:
- The Zulip mobile and desktop apps maintain backwards-compatibility
code to support any Zulip server since 2.1.0. (They may also work
with older versions, with a degraded experience).
- Zulip maintains an [API changelog](https://zulip.com/api/changelog)
detailing all changes to the API to make it easy for client
developers to do this correctly.
- The Zulip server preserves backwards-compatibility in its API to
support versions of the mobile and desktop apps released in roughly
the last year. Because these clients auto-update, generally there
are only a handful of active clients left by the time we desupport a
@@ -117,7 +117,7 @@ bug fix release, transparently documenting the issue(s) using the
industry-standard [CVE advisory process](https://cve.mitre.org/).
When new security releases are published, we simultaneously publish
the fixes to the `main` and stable release branches (E.g. `4.x`), so
that anyone using those branches can immediately upgrade as well.
See also our [security model][security-model] documentation.
@@ -130,9 +130,9 @@ Starting with Zulip 4.0, the Zulip webapp will display a banner
warning users of a server running a Zulip release that is more than 18
months old. We do this for a few reasons:
- A server that old is likely vulnerable to
a security bug in Zulip or one of its dependencies.
- The Zulip mobile and desktop apps are only guaranteed to support
server versions less than 18 months old.
The nag will appear only to organization administrators starting a
@@ -147,7 +147,7 @@ You can adjust the deadline for your installation by setting e.g.
For platforms we support, like Debian and Ubuntu, Zulip aims to
support all versions of the upstream operating systems that are fully
supported by the vendor. We document how to correctly [upgrade the
operating system][os-upgrade] for a Zulip server, including how to
correctly chain upgrades when the latest Zulip release no longer
supports your OS.
@@ -163,11 +163,11 @@ releases, and do not support them in production.
The Zulip server project uses several GitHub labels to structure
communication within the project about priorities:
- The [high priority][label-high] label tags issues that we consider
important. This label is meant to be a determination of importance
that can be done quickly and then used as an input to planning
processes.
- The [release goal][label-release-goal] label is used for work that
we hope to include in the next major release. The related [post
release][label-post-release] label is used to track work we want to
focus on shortly after the next major release.
@@ -177,7 +177,7 @@ aggregate, just as important as the big things. Most resolved issues
do not have any of these priority labels.
We welcome participation from our user community in influencing the
Zulip roadmap. If a bug or missing feature is causing significant
pain for you, we'd love to hear from you, either in
[chat.zulip.org](../contributing/chat-zulip-org.md) or on the relevant
GitHub issue. Please include an explanation of your use case: such
@@ -192,12 +192,12 @@ Zulip's client apps officially support all Zulip server versions (and
Git commits) released in the previous 18 months, matching the behavior
of our [upgrade nag](#upgrade-nag).
- The Zulip mobile apps release new versions from the development
branch frequently (usually every couple weeks). Except when fixing a
critical bug, releases are first published to our [beta
channels][mobile-beta].
- The Zulip desktop apps are implemented in [Electron][electron], the
browser-based desktop application framework used by essentially all
modern chat applications. The Zulip UI in these apps is served from
the Zulip server (and thus can vary between tabs when it is
@@ -227,19 +227,13 @@ core community, like the Python and JavaScript bindings, are released
independently as needed.
[electron]: https://www.electronjs.org/
[upgrading-to-main]: ../production/upgrade-or-modify.html#upgrading-to-main
[os-upgrade]: ../production/upgrade-or-modify.html#upgrading-the-operating-system
[chat-zulip-org]: ../contributing/chat-zulip-org.md
[fork-zulip]: ../production/upgrade-or-modify.html#modifying-zulip
[zulip-server]: https://github.com/zulip/zulip
[mobile-beta]: https://github.com/zulip/zulip-mobile#using-the-beta
[label-blocker]: https://github.com/zulip/zulip/issues?q=is%3Aissue+is%3Aopen+label%3A%22priority%3A+blocker%22
[label-high]: https://github.com/zulip/zulip/issues?q=is%3Aissue+is%3Aopen+label%3A%22priority%3A+high%22
[label-release-goal]: https://github.com/zulip/zulip/issues?q=is%3Aissue+is%3Aopen+label%3A%22release+goal%22
[label-post-release]: https://github.com/zulip/zulip/issues?q=is%3Aissue+is%3Aopen+label%3A%22post+release%22


@@ -1,6 +1,6 @@
# Authentication methods
Zulip supports a wide variety of authentication methods. Some of them
Zulip supports a wide variety of authentication methods. Some of them
To configure or disable authentication methods on your Zulip server,
@@ -19,22 +19,25 @@ Users set a password with the Zulip server, and log in with their
email and password.
When first setting up your Zulip server, this method must be used for
creating the initial realm and user. You can disable it after that.
## Plug-and-play SSO (Google, GitHub, GitLab)
With just a few lines of configuration, your Zulip server can
authenticate users with any of several single-sign-on (SSO)
authentication providers:
- Google accounts, with `GoogleAuthBackend`
- GitHub accounts, with `GitHubAuthBackend`
- GitLab accounts, with `GitLabAuthBackend`
- Microsoft Azure Active Directory, with `AzureADAuthBackend`
Each of these requires one to a handful of lines of configuration in
`settings.py`, as well as a secret in `zulip-secrets.conf`. Details
are documented in your `settings.py`.
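For example, enabling GitHub authentication amounts to uncommenting the backend and adding the OAuth credentials. A rough sketch (the `SOCIAL_AUTH_GITHUB_KEY` and `social_auth_github_secret` names are assumptions here; the authoritative names are in the comments in your `settings.py`):
```python
# /etc/zulip/settings.py (illustrative sketch)
SOCIAL_AUTH_GITHUB_KEY = "<client ID of your GitHub OAuth app>"  # assumed setting name
AUTHENTICATION_BACKENDS = (
    "zproject.backends.GitHubAuthBackend",
    "zproject.backends.EmailAuthBackend",
)
# The matching client secret goes in /etc/zulip/zulip-secrets.conf, e.g.:
#   social_auth_github_secret = <client secret>   # assumed key name
```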
(ldap)=
## LDAP (including Active Directory)
Zulip supports retrieving information about users via LDAP, and
@@ -43,26 +46,28 @@ optionally using LDAP as an authentication mechanism.
In either configuration, you will need to do the following:
1. These instructions assume you have an installed Zulip server and
are logged into a shell there. You may have already created an
organization using EmailAuthBackend, or you may plan to create the
organization using LDAP authentication.
1. Tell Zulip how to connect to your LDAP server:
- Fill out the section of your `/etc/zulip/settings.py` headed "LDAP
integration, part 1: Connecting to the LDAP server".
- If a password is required, put it in
`/etc/zulip/zulip-secrets.conf` by setting
`auth_ldap_bind_password`. For example:
`auth_ldap_bind_password = abcd1234`.
1. Decide how you want to map the information in your LDAP database to
users' account data in Zulip. For each Zulip user, two closely
related concepts are:
- their **email address**. Zulip needs this in order to send, for
example, a notification when they're offline and another user
sends a PM.
- their **Zulip username**. This means the name the user types into the
Zulip login form. You might choose for this to be the user's
email address (`sam@example.com`), or look like a traditional
"username" (`sam`), or be something else entirely, depending on
your environment.
@@ -71,78 +76,82 @@ In either configuration, you will need to do the following:
in your LDAP database.
1. Tell Zulip how to map the user information in your LDAP database to
the form it needs for authentication. There are three supported
ways to set up the username and/or email mapping:
(A) Using email addresses as Zulip usernames, if LDAP has each
user's email address:
- Make `AUTH_LDAP_USER_SEARCH` a query by email address.
- Set `AUTH_LDAP_REVERSE_EMAIL_SEARCH` to the same query with
`%(email)s` rather than `%(user)s` as the search parameter.
- Set `AUTH_LDAP_USERNAME_ATTR` to the name of the LDAP
attribute for the user's LDAP username in the search result
for `AUTH_LDAP_REVERSE_EMAIL_SEARCH`.
(B) Using LDAP usernames as Zulip usernames, with email addresses
formed consistently like `sam` -> `sam@example.com`:
- Set `AUTH_LDAP_USER_SEARCH` to query by LDAP username
- Set `LDAP_APPEND_DOMAIN = "example.com"`.
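For example, a minimal `settings.py` sketch of configuration (B), assuming users live under `ou=users,dc=example,dc=com` and log in with their `uid`:
```python
import ldap
from django_auth_ldap.config import LDAPSearch

AUTH_LDAP_USER_SEARCH = LDAPSearch(
    "ou=users,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=%(user)s)"
)
LDAP_APPEND_DOMAIN = "example.com"  # "sam" logs in; Zulip uses sam@example.com
```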
(C) Using LDAP usernames as Zulip usernames, with email addresses
taken from some other attribute in LDAP (for example, `mail`):
- Set `AUTH_LDAP_USER_SEARCH` to query by LDAP username
- Set `LDAP_EMAIL_ATTR = "mail"`.
- Set `AUTH_LDAP_REVERSE_EMAIL_SEARCH` to a query that will find
an LDAP user given their email address (i.e. a search by
`LDAP_EMAIL_ATTR`). For example:
```python
AUTH_LDAP_REVERSE_EMAIL_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
    ldap.SCOPE_SUBTREE, "(mail=%(email)s)")
```
- Set `AUTH_LDAP_USERNAME_ATTR` to the name of the LDAP
attribute for the user's LDAP username in that search result.
You can quickly test whether your configuration works by running:
```bash
/home/zulip/deployments/current/manage.py query_ldap username
```
from the root of your Zulip installation. If your configuration is
working, that will output the full name for your user (and that user's
email address, if it isn't the same as the "Zulip username").
**Active Directory**: Most Active Directory installations will use one
of the following configurations:
- To access by Active Directory username:
```python
AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
    ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)")
AUTH_LDAP_REVERSE_EMAIL_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
    ldap.SCOPE_SUBTREE, "(mail=%(email)s)")
AUTH_LDAP_USERNAME_ATTR = "sAMAccountName"
```
- To access by Active Directory email address:
```python
AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
    ldap.SCOPE_SUBTREE, "(mail=%(user)s)")
AUTH_LDAP_REVERSE_EMAIL_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
    ldap.SCOPE_SUBTREE, "(mail=%(email)s)")
AUTH_LDAP_USERNAME_ATTR = "mail"
```
**If you are using LDAP for authentication**: you will need to enable
the `zproject.backends.ZulipLDAPAuthBackend` auth backend, in
`AUTHENTICATION_BACKENDS` in `/etc/zulip/settings.py`. After doing so
(and as always [restarting the Zulip server](settings.md) to ensure
your settings changes take effect), you should be able to log in to
Zulip by entering your email address and LDAP password on the Zulip
login form.
You may also want to configure Zulip's settings for [inviting new
users](https://zulip.com/help/invite-new-users). If LDAP is the
only enabled authentication method, the main use case for Zulip's
invitation feature is selecting the initial streams for invited users
(invited users will still need to use their LDAP password to create an
@@ -154,7 +163,7 @@ Zulip can automatically synchronize data declared in
`AUTH_LDAP_USER_ATTR_MAP` from LDAP into Zulip, via the following
management command:
```bash
/home/zulip/deployments/current/manage.py sync_ldap_user_data
```
@@ -165,11 +174,12 @@ We recommend running this command in a **regular cron job**, to pick
up changes made on your LDAP server.
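One way to do that is a system cron entry along these lines (a sketch; the file path and schedule are assumptions to adapt to your installation):
```text
# /etc/cron.d/zulip-sync-ldap (hypothetical file): run hourly as the zulip user
0 * * * * zulip /home/zulip/deployments/current/manage.py sync_ldap_user_data
```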
All of these data synchronization options have the same model:
- New users will be populated automatically with the
name/avatar/etc. from LDAP (as configured) on account creation.
- The `manage.py sync_ldap_user_data` cron job will automatically
update existing users with any changes that were made in LDAP.
- You can easily test your configuration using `manage.py query_ldap`.
Once you're happy with the configuration, remember to restart the
Zulip server with
`/home/zulip/deployments/current/scripts/restart-server` so that
@@ -193,10 +203,10 @@ or `jpegPhoto` attribute in LDAP) by configuring the `avatar` key in
Starting with Zulip 2.0, Zulip supports syncing
[custom profile fields][custom-profile-fields] from LDAP / Active
Directory. To configure this, you first need to
[configure some custom profile fields][custom-profile-fields] for your
Zulip organization. Then, define a mapping from the fields you'd like
to sync from LDAP to the corresponding LDAP attributes. For example,
if you have a custom profile field `LinkedIn Profile` and the
corresponding LDAP attribute is `linkedinProfile` then you just need
to add `'custom_profile_field__linkedin_profile': 'linkedinProfile'`
@@ -207,22 +217,22 @@ to the `AUTH_LDAP_USER_ATTR_MAP`.
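Continuing that example, the relevant fragment of `AUTH_LDAP_USER_ATTR_MAP` might look like the following sketch (the `cn` and `linkedinProfile` attribute names are placeholders for whatever your LDAP schema uses):
```python
AUTH_LDAP_USER_ATTR_MAP = {
    "full_name": "cn",  # placeholder for wherever your directory stores names
    # Sync the "LinkedIn Profile" custom profile field from LDAP:
    "custom_profile_field__linkedin_profile": "linkedinProfile",
}
```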
#### Automatically deactivating users with Active Directory
Starting with Zulip 2.0, Zulip supports synchronizing the
disabled/deactivated status of users from Active Directory. You can
configure this by uncommenting the sample line
`"userAccountControl": "userAccountControl",` in
`AUTH_LDAP_USER_ATTR_MAP` (and restarting the Zulip server). Zulip
will then treat users that are disabled via the "Disable Account"
feature in Active Directory as deactivated in Zulip.
Users disabled in active directory will be immediately unable to log in
to Zulip, since Zulip queries the LDAP/Active Directory server on
every login attempt. The user will be fully deactivated the next time
your `manage.py sync_ldap_user_data` cron job runs (at which point
they will be forcefully logged out from all active browser sessions,
appear as deactivated in the Zulip UI, etc.).
This feature works by checking for the `ACCOUNTDISABLE` flag on the
`userAccountControl` field in Active Directory. See
[this handy resource](https://jackstromberg.com/2013/01/useraccountcontrol-attributeflag-values/)
for details on the various `userAccountControl` flags.
@@ -231,28 +241,28 @@ for details on the various `userAccountControl` flags.
Starting with Zulip 2.0, Zulip supports automatically deactivating
users if they are not found by the `AUTH_LDAP_USER_SEARCH` query
(either because the user is no longer in LDAP/Active Directory, or
because the user no longer matches the query). This feature is
enabled by default if LDAP is the only authentication backend
configured on the Zulip server. Otherwise, you can enable this
feature by setting `LDAP_DEACTIVATE_NON_MATCHING_USERS` to `True` in
`/etc/zulip/settings.py`. Nonmatching users will be fully deactivated
the next time your `manage.py sync_ldap_user_data` cron job runs.
#### Other fields
Other fields you may want to sync from LDAP include:
- Boolean flags; `is_realm_admin` (the organization's administrator
permission) is the main one. You can use the
[AUTH_LDAP_USER_FLAGS_BY_GROUP][django-auth-booleans] feature of
`django-auth-ldap` to configure a group to get this permission;
a sketch follows this list.
(We don't recommend using this flags feature for managing
`is_active` because deactivating a user this way would not disable
any active sessions the user might have; see the above discussion of
automatic deactivation for how to do that properly).
- String fields like `default_language` (e.g. `en`) or `timezone`, if
you have that data in the right format in your LDAP database.
- [Coming soon][custom-profile-fields-ldap]: Support for syncing
[custom profile fields](https://zulip.com/help/add-custom-profile-fields)
from your LDAP database.
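As a sketch of the boolean-flag case above (the group DN is a placeholder, and you would also need the usual `django-auth-ldap` group-search configuration described in its documentation):
```python
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
    # Members of this (placeholder) LDAP group become organization administrators.
    "is_realm_admin": "cn=zulip-admins,ou=groups,dc=example,dc=com",
}
```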
@@ -260,14 +270,15 @@ You can look at the [full list of fields][models-py] in the Zulip user
model; search for `class UserProfile`, but the above should cover all
the fields that would be useful to sync from your LDAP databases.
[models-py]: https://github.com/zulip/zulip/blob/main/zerver/models.py
[django-auth-booleans]: https://django-auth-ldap.readthedocs.io/en/latest/users.html#easy-attributes
[custom-profile-fields-ldap]: https://github.com/zulip/zulip/issues/10976
### Multiple LDAP searches
To do the union of multiple LDAP searches, use `LDAPSearchUnion`. For example:
```python
AUTH_LDAP_USER_SEARCH = LDAPSearchUnion(
LDAPSearch("ou=users,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=%(user)s)"),
LDAPSearch("ou=otherusers,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=%(user)s)"),
@@ -278,7 +289,7 @@ AUTH_LDAP_USER_SEARCH = LDAPSearchUnion(
You can restrict access to your Zulip server to a set of LDAP groups
using the `AUTH_LDAP_REQUIRE_GROUP` and `AUTH_LDAP_DENY_GROUP`
settings in `/etc/zulip/settings.py`. See the
[upstream django-auth-ldap documentation][upstream-ldap-groups] for
details.
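For illustration, both settings take LDAP group DNs (placeholder values below); the group-search setup they rely on is covered in the django-auth-ldap documentation:
```python
AUTH_LDAP_REQUIRE_GROUP = "cn=zulip-users,ou=groups,dc=example,dc=com"
AUTH_LDAP_DENY_GROUP = "cn=zulip-denied,ou=groups,dc=example,dc=com"
```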
@@ -289,7 +300,7 @@ details.
If you're hosting multiple Zulip organizations, you can restrict which
users have access to which organizations.
This is done by setting `org_membership` in `AUTH_LDAP_USER_ATTR_MAP` to the name of
the LDAP attribute which will contain a list of subdomains that the
user should be allowed to access.
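In `settings.py` terms, and using the `department` attribute from the example below, that is just one more entry in the attribute map (a sketch):
```python
AUTH_LDAP_USER_ATTR_MAP = {
    # ... your other mappings ...
    "org_membership": "department",
}
```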
For the root subdomain, `www` in the list will work, or any other of
@@ -297,7 +308,8 @@ For the root subdomain, `www` in the list will work, or any other of
For example, with `org_membership` set to `department`, a user with
the following attributes will have access to the root and `engineering` subdomains:
```text
...
department: engineering
department: www
@@ -305,7 +317,7 @@ department: www
```
More complex access control rules are possible via the
`AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL` setting. Note that
`org_membership` takes precedence over
`AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL`:
@@ -313,34 +325,33 @@ More complex access control rules are possible via the
2. If `org_membership` is not set or does not allow access,
`AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL` will control access.
This setting contains a map keyed by the organization's subdomain. Each
subdomain maps to a list of attribute maps, each pairing an LDAP attribute
with a required value. If, for any of those attribute maps, all of the
user's LDAP attributes match what is configured, access is granted.
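Based on that description, such a configuration might look roughly like the following sketch (the subdomain and attribute names are placeholders); a user matching every pair in any one of the inner maps is granted access to that organization:
```python
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL = {
    # Organization on the "engineering" subdomain:
    "engineering": [
        # Allowed if department == "engineering" ...
        {"department": "engineering"},
        # ... or if both of these attributes match.
        {"department": "contractors", "team": "platform"},
    ],
}
```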
:::{warning}
Restricting access using these mechanisms only affects authentication via LDAP,
and won't prevent users from accessing the organization using any other
authentication backends that are enabled for the organization.
:::
### Troubleshooting
Most issues with LDAP authentication are caused by misconfigurations of
the user and email search settings. Some things you can try to get to
the bottom of the problem:
- Review the instructions for the LDAP configuration type you're
using: (A), (B) or (C) (described above), and that you have
configured all of the required settings documented in the
instructions for that configuration type.
- Use the `manage.py query_ldap` tool to verify your configuration.
The output of the command will usually indicate the cause of any
configuration problem. For the LDAP integration to work, this
command should be able to successfully fetch a complete, correct set
of data for the queried user.
- You can find LDAP-specific logs in `/var/log/zulip/ldap.log`. If
you're asking for help with your setup, please provide logs from
this file (feel free to anonymize any email addresses to
`username@example.com`) in your report.
@@ -348,7 +359,7 @@ the bottom of the problem:
## SAML
Zulip 2.1 and later supports SAML authentication, used by Okta,
OneLogin, and many other IdPs (identity providers). You can configure
it as follows:
1. These instructions assume you have an installed Zulip server; if
@@ -361,39 +372,39 @@ it as follows:
1. Tell your IdP how to find your Zulip server:
- **SP Entity ID**: `https://yourzulipdomain.example.com`.
The `Entity ID` should match the value of
`SOCIAL_AUTH_SAML_SP_ENTITY_ID` computed in the Zulip settings.
You can get the correct value by running the following:
`/home/zulip/deployments/current/scripts/get-django-setting SOCIAL_AUTH_SAML_SP_ENTITY_ID`.
- **SSO URL**:
`https://yourzulipdomain.example.com/complete/saml/`. This is
the "SAML ACS url" in SAML terminology.
If you're
[hosting multiple organizations](../production/multiple-organizations.html#authentication),
you need to use `SOCIAL_AUTH_SUBDOMAIN`. For example,
if `SOCIAL_AUTH_SUBDOMAIN="auth"` and `EXTERNAL_HOST=zulip.example.com`,
this should be `https://auth.zulip.example.com/complete/saml/`.
1. Tell Zulip how to connect to your SAML provider(s) by filling
out the section of `/etc/zulip/settings.py` on your Zulip server
with the heading "SAML Authentication".
- You will need to update `SOCIAL_AUTH_SAML_ORG_INFO` with your
organization name (`displayname` may appear in the IdP's
authentication flow; `name` won't be displayed to humans).
- Fill out `SOCIAL_AUTH_SAML_ENABLED_IDPS` with data provided by
your identity provider. You may find [the python-social-auth
SAML
docs](https://python-social-auth.readthedocs.io/en/latest/backends/saml.html)
helpful. You'll need to obtain several values from your IdP's
metadata and enter them on the right-hand side of this
Python dictionary:
1. Set the outer `idp_name` key to be an identifier for your IdP,
e.g. `testshib` or `okta`. This field appears in URLs for
parts of your Zulip server's SAML authentication flow.
2. The IdP should provide the `url` and `entity_id` values.
3. Save the `x509cert` value to a file; you'll use it in the
@@ -401,55 +412,57 @@ it as follows:
4. The values needed in the `attr_` fields are often configurable
in your IdP's interface when setting up SAML authentication
(referred to as "Attribute Statements" with Okta, or
"Attribute Mapping" with GSuite). You'll want to connect
"Attribute Mapping" with GSuite). You'll want to connect
these so that Zulip gets the email address (used as a unique
user ID) and name for the user.
5. The `display_name` and `display_icon` fields are used to
display the login/registration buttons for the IdP.
6. The `auto_signup` field determines how Zulip should handle
login attempts by users who don't have an account yet.
1. Install the certificate(s) required for SAML authentication. You
will definitely need the public certificate of your IdP. Some IdP
providers also support the Zulip server (Service Provider) having
a certificate used for encryption and signing. We detail these
steps as optional below, because they aren't required for basic
setup, and some IdPs like Okta don't fully support Service
Provider certificates. You should install them as follows:
1. On your Zulip server, `mkdir -p /etc/zulip/saml/idps/`
2. Put the IDP public certificate in `/etc/zulip/saml/idps/{idp_name}.crt`
3. (Optional) Put the Zulip server public certificate in `/etc/zulip/saml/zulip-cert.crt`
and the corresponding private key in `/etc/zulip/saml/zulip-private-key.key`. Note that
the certificate should be the single X.509 certificate for the server, not a full chain of
trust, which consists of multiple certificates.
4. Set the proper permissions on these files and directories:
```
chown -R zulip.zulip /etc/zulip/saml/
find /etc/zulip/saml/ -type f -exec chmod 644 -- {} +
chmod 640 /etc/zulip/saml/zulip-private-key.key
```
```bash
chown -R zulip.zulip /etc/zulip/saml/
find /etc/zulip/saml/ -type f -exec chmod 644 -- {} +
chmod 640 /etc/zulip/saml/zulip-private-key.key
```
1. (Optional) If you configured the optional public and private server
certificates above, you can enable the additional setting
`"authnRequestsSigned": True` in `SOCIAL_AUTH_SAML_SECURITY_CONFIG`
to have the SAMLRequests the server will be issuing to the IdP
signed using those certificates. Additionally, if the IdP supports
it, you can upload the public certificate to enable encryption of
assertions in the SAMLResponses the IdP will send about
authenticated users.
1. Enable the `zproject.backends.SAMLAuthBackend` auth backend, in
`AUTHENTICATION_BACKENDS` in `/etc/zulip/settings.py`.
1. [Restart the Zulip server](../production/settings.md) to ensure
your settings changes take effect. The Zulip login page should now
have a button for SAML authentication that you can use to log in or
create an account (including when creating a new organization).
1. If the configuration was successful, the server's metadata can be
found at `https://yourzulipdomain.example.com/saml/metadata.xml`. You
can use this for verifying your configuration or provide it to your
IdP.
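Putting the pieces above together, here is a hedged sketch of what the SAML portion of `/etc/zulip/settings.py` might look like. The IdP name (`okta`), URLs, and attribute names are placeholders for a hypothetical IdP; check the comments in `settings.py` for the exact keys supported by your Zulip version.
```python
# Hypothetical sketch of the SAML settings in /etc/zulip/settings.py.
SOCIAL_AUTH_SAML_ORG_INFO = {
    "en-US": {
        "displayname": "Example Corp Zulip",  # may appear in the IdP's flow
        "name": "example-zulip",              # not displayed to humans
        "url": "https://zulip.example.com",
    },
}

SOCIAL_AUTH_SAML_ENABLED_IDPS = {
    # Outer key: identifier for your IdP; appears in Zulip's SAML URLs.
    "okta": {
        "entity_id": "http://www.okta.com/abcdefghijKLMNOPQRST",
        "url": "https://example.okta.com/app/example/abcdefghij/sso/saml",
        # Attribute names as configured in your IdP's attribute statements:
        "attr_user_permanent_id": "email",
        "attr_first_name": "first_name",
        "attr_last_name": "last_name",
        "attr_username": "email",
        "attr_email": "email",
        # Shown on Zulip's login/registration buttons:
        "display_name": "Okta",
        "display_icon": None,
        # How to handle login attempts by users without an account yet:
        "auto_signup": False,
    },
}

AUTHENTICATION_BACKENDS = (
    "zproject.backends.SAMLAuthBackend",
    # ... any other backends you have enabled ...
)
```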
[saml-help-center]: https://zulip.com/help/saml-authentication
@@ -457,7 +470,7 @@ IdP.
The above configuration is sufficient for Service Provider initiated
SSO, i.e. you can visit the Zulip webapp and click "Sign in with
{IdP}" and it'll correctly start the authentication flow. If you are
not hosting multiple organizations, with Zulip 3.0+, the above
configuration is also sufficient for Identity Provider initiated SSO,
i.e. clicking a "Sign in to Zulip" button on the IdP's website can
@@ -465,15 +478,11 @@ correctly authenticate the user to Zulip.
If you're hosting multiple organizations and thus using the
`SOCIAL_AUTH_SUBDOMAIN` setting, you'll need to configure a custom
`RelayState` in your IdP of the form
`{"subdomain": "yourzuliporganization"}` to let Zulip know which
organization to authenticate the user to when they visit your SSO URL
from the IdP. (If the organization is on the root domain, use the
empty string: `{"subdomain": ""}`.)
### Restricting access to specific organizations
@@ -491,7 +500,7 @@ For example, with `attr_org_membership` set to `member`, a user with
the following attribute in their `AttributeStatement` will have access
to the root and `engineering` subdomains:
```xml
<saml2:Attribute Name="member" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">
www
@@ -502,7 +511,6 @@ to the root and `engineering` subdomains:
</saml2:Attribute>
```
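On the Zulip side, the attribute name is configured per IdP. A minimal, hypothetical fragment, assuming the `okta` entry from the sketch above:
```python
# Hypothetical fragment of SOCIAL_AUTH_SAML_ENABLED_IDPS in /etc/zulip/settings.py.
SOCIAL_AUTH_SAML_ENABLED_IDPS = {
    "okta": {
        # ... url, entity_id, attr_* fields as in the earlier sketch ...
        # SAML attribute whose values list the subdomains this user may access:
        "attr_org_membership": "member",
    },
}
```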
## Apache-based SSO with `REMOTE_USER`
If you have any existing SSO solution where a preferred way to deploy
@@ -514,17 +522,17 @@ straightforward way to deploy that SSO solution with Zulip.
1. In `/etc/zulip/settings.py`, configure two settings:
- `AUTHENTICATION_BACKENDS`: `'zproject.backends.ZulipRemoteUserBackend'`,
and no other entries.
- `SSO_APPEND_DOMAIN`: see documentation in `settings.py`. (A minimal
sketch of these settings appears after these steps.)
Make sure that you've restarted the Zulip server since making this
configuration change.
2. Edit `/etc/zulip/zulip.conf` and change the `puppet_classes` line to read:
```ini
puppet_classes = zulip::profile::standalone, zulip::apache_sso
```
@@ -533,7 +541,7 @@ straightforward way to deploy that SSO solution with Zulip.
4. To configure our SSO integration, edit a copy of
`/etc/apache2/sites-available/zulip-sso.example`, saving the result
as `/etc/apache2/sites-available/zulip-sso.conf`. The example sets
up HTTP basic auth, with an `htpasswd` file; you'll want to replace
that with configuration for your SSO solution to authenticate the
user and set `REMOTE_USER`.
@@ -541,8 +549,9 @@ straightforward way to deploy that SSO solution with Zulip.
For testing, you may want to move ahead with the rest of the setup
using the `htpasswd` example configuration and demonstrate that
working end-to-end, before returning later to configure your SSO
solution. You can do that with the following steps:
```bash
/home/zulip/deployments/current/scripts/restart-server
cd /etc/apache2/sites-available/
cp zulip-sso.example zulip-sso.conf
@@ -551,9 +560,9 @@ straightforward way to deploy that SSO solution with Zulip.
5. Run `a2ensite zulip-sso` to enable the SSO integration within Apache.
6. Run `service apache2 reload` to use your new configuration. If
Apache isn't already running, you may need to run
`service apache2 start` instead.
Now you should be able to visit your Zulip server in a browser (e.g.,
at `https://zulip.example.com/`) and log in via the SSO solution.
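For reference, the `/etc/zulip/settings.py` part of the setup in step 1 might look like the following sketch; the appended domain is a placeholder, and `SSO_APPEND_DOMAIN` can be left unset if your SSO system already supplies full email addresses in `REMOTE_USER`.
```python
# Hypothetical sketch of the settings.py changes for Apache-based SSO.
AUTHENTICATION_BACKENDS = (
    "zproject.backends.ZulipRemoteUserBackend",  # and no other entries
)

# Optional: if REMOTE_USER carries a bare username rather than a full
# email address, Zulip appends this domain to form the email address.
SSO_APPEND_DOMAIN = "example.com"
```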
@@ -561,28 +570,28 @@ at `https://zulip.example.com/`) and log in via the SSO solution.
### Troubleshooting Apache-based SSO
Most issues with this setup tend to be subtle issues with the
hostname/DNS side of the configuration. Suggestions for how to
improve this SSO setup documentation are very welcome!
- For example, common issues have to do with `/etc/hosts` not mapping
`settings.EXTERNAL_HOST` to the Apache listening on
`127.0.0.1`/`localhost`.
- While debugging, it can often help to temporarily change the Apache
config in `/etc/apache2/sites-available/zulip-sso` to listen on all
interfaces rather than just `127.0.0.1`.
- While debugging, it can also be helpful to change `proxy_pass` in
`/etc/nginx/zulip-include/app.d/external-sso.conf` to point to a
more explicit URL, possibly not over HTTPS.
- The following log files can be helpful when debugging this setup:
  - `/var/log/zulip/{errors.log,server.log}` (the usual places)
  - `/var/log/nginx/access.log` (nginx access logs)
  - `/var/log/apache2/zulip_auth_access.log` (from the
    `zulip-sso.conf` Apache config file; you may want to change
    `LogLevel` in that file to "debug" to make this more verbose)
### Life of an Apache-based SSO login attempt
@@ -591,31 +600,31 @@ assuming you're using the example configuration with HTTP basic auth.
This summary should help with understanding what's going on as you try
to debug.
- Since you've configured `/etc/zulip/settings.py` to only define the
`zproject.backends.ZulipRemoteUserBackend`,
`zproject/computed_settings.py` configures `/accounts/login/sso/` as
`HOME_NOT_LOGGED_IN`. This makes `https://zulip.example.com/`
(a.k.a. the homepage for the main Zulip Django app running behind
nginx) redirect to `/accounts/login/sso/` for a user that isn't
logged in.
- nginx proxies requests to `/accounts/login/sso/` to an Apache
instance listening on `localhost:8888`, via the config in
`/etc/nginx/zulip-include/app.d/external-sso.conf` (using the
upstream `localhost_sso`, defined in `/etc/nginx/zulip-include/upstreams`).
- The Apache `zulip-sso` site which you've enabled listens on
`localhost:8888` and (in the example config) presents the `htpasswd`
dialogue. (In a real configuration, it takes the user through
whatever more complex interaction your SSO solution performs.) The
user provides correct login information, and the request reaches a
second Zulip Django app instance, running behind Apache, with
`REMOTE_USER` set. That request is served by
`zerver.views.remote_user_sso`, which just checks the `REMOTE_USER`
variable and either logs the user in or, if they don't have an
account already, registers them. The login sets a cookie.
- After succeeding, that redirects the user back to `/` on port 443.
This request is sent by nginx to the main Zulip Django app, which
sees the cookie, treats them as logged in, and proceeds to serve
them the main app page normally.
@@ -623,43 +632,44 @@ to debug.
## Sign in with Apple
Zulip supports using the web flow for Sign in with Apple on
self-hosted servers. To do so, you'll need to do the following:
1. Visit [the Apple Developer site][apple-developer] and [Create a
Services ID.][apple-create-services-id]. When prompted for a "Return
URL", enter `https://zulip.example.com/complete/apple/` (using the
domain for your server).
1. Create a [Sign in with Apple private key][apple-create-private-key].
1. Store the resulting private key at
`/etc/zulip/apple-auth-key.p8`. Be sure to set
permissions correctly:
```bash
chown zulip:zulip /etc/zulip/apple-auth-key.p8
chmod 640 /etc/zulip/apple-auth-key.p8
```
1. Configure Apple authentication in `/etc/zulip/settings.py`:
- `SOCIAL_AUTH_APPLE_TEAM`: Your Team ID from Apple, which is a
string like "A1B2C3D4E5".
- `SOCIAL_AUTH_APPLE_SERVICES_ID`: The Services ID you created in
step 1, which might look like "com.example.services".
- `SOCIAL_AUTH_APPLE_APP_ID`: The App ID, or Bundle ID, of your
app that you used in step 1 to configure your Services ID.
This might look like "com.example.app".
- `SOCIAL_AUTH_APPLE_KEY`: Despite the name this is not a key, but
rather the Key ID of the key you created in step 2. This looks
like "F6G7H8I9J0".
- `AUTHENTICATION_BACKENDS`: Uncomment (or add) a line like
`'zproject.backends.AppleAuthBackend',` to enable Apple auth
using the created configuration.
1. Register with Apple the email addresses or domains your Zulip
server sends email to users from. For instructions and background,
see the "Email Relay Service" subsection of
[this page][apple-get-started]. For details on what email
addresses Zulip sends from, see our
[outgoing email documentation][outgoing-email].
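Taken together, the Apple-related settings in `/etc/zulip/settings.py` might look like the following sketch; every ID below is a placeholder matching the example values above.
```python
# Hypothetical example values; substitute the IDs from your Apple Developer account.
SOCIAL_AUTH_APPLE_TEAM = "A1B2C3D4E5"                   # Team ID
SOCIAL_AUTH_APPLE_SERVICES_ID = "com.example.services"  # Services ID from step 1
SOCIAL_AUTH_APPLE_APP_ID = "com.example.app"            # App (Bundle) ID
SOCIAL_AUTH_APPLE_KEY = "F6G7H8I9J0"                    # Key ID of the private key

AUTHENTICATION_BACKENDS = (
    "zproject.backends.AppleAuthBackend",
    # ... any other backends you have enabled ...
)
```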
@@ -679,7 +689,7 @@ bit of code, and pull requests to add new backends are welcome.
For example, the
[Azure Active Directory integration](https://github.com/zulip/zulip/commit/49dbd85a8985b12666087f9ea36acb6f7da0aa4f)
was about 30 lines of code, plus some documentation and an
[automatically generated migration][schema-migrations]. We also have
helpful developer documentation on
[testing auth backends](../development/authentication.md).
@@ -689,5 +699,5 @@ helpful developer documentation on
## Development only
The `DevAuthBackend` method is used only in development, to allow
passwordless login as any user in a development environment. It's
mentioned on this page only for completeness.

View File

@@ -4,14 +4,14 @@ The default Zulip installation instructions will install a complete
Zulip server, with all of the services it needs, on a single machine.
For production deployment, however, it's common to want to do
something more complicated. This page documents the options for doing so.
## Installing Zulip from Git
To install a development version of Zulip from Git, just clone the Git
repository from GitHub:
```bash
# First, install Git if you don't have it installed already
sudo apt install git
git clone https://github.com/zulip/zulip.git zulip-server-git
@@ -21,25 +21,25 @@ and then
[continue the normal installation instructions](../production/install.html#step-2-install-zulip).
You can also [upgrade Zulip from Git](../production/upgrade-or-modify.html#upgrading-from-a-git-repository).
The most common use case for this is upgrading to `main` to get a
feature that hasn't made it into an official release yet (often
support for a new base OS release). See [upgrading to
main][upgrade-to-main] for notes on how `main` works and the
support story for it, and [upgrading to future
releases][upgrade-to-future-release] for notes on upgrading Zulip
afterwards.
In particular, we are always very glad to investigate problems with
installing Zulip from `main`; they are rare and help us ensure that
our next major release has a reliable install experience.
[upgrade-to-main]: ../production/upgrade-or-modify.html#upgrading-to-main
[upgrade-to-future-release]: ../production/upgrade-or-modify.html#upgrading-to-future-releases
## Zulip in Docker
Zulip has an officially supported, experimental
[docker image](https://github.com/zulip/docker-zulip). Please note
that Zulip's [normal installer](../production/install.md) has been
extremely reliable for years, whereas the Docker image is new and has
rough edges, so we recommend the normal installer unless you have a
@@ -51,21 +51,21 @@ The Zulip installer supports the following advanced installer options
as well as those mentioned in the
[install](../production/install.html#installer-options) documentation:
- `--postgresql-version`: Sets the version of PostgreSQL that will be
installed. We currently support PostgreSQL 10, 11, 12, and 13.
- `--postgresql-missing-dictionaries`: Set
`postgresql.missing_dictionaries` ([docs][doc-settings]) in the
Zulip settings, which omits some configuration needed for full-text
indexing. This should be used with [cloud managed databases like
RDS](#using-zulip-with-amazon-rds-as-the-database). This option
conflicts with `--no-overwrite-settings`.
- `--no-init-db`: This option instructs the installer to not do any
database initialization. This should be used when you already have a
Zulip database.
- `--no-overwrite-settings`: This option preserves existing
`/etc/zulip` configuration files.
## Running Zulip's service dependencies on different machines
@@ -86,16 +86,16 @@ configuration to be completely modular.
For example, to install a Zulip Redis server on a machine, you can run
the following after unpacking a Zulip production release tarball:
```bash
env PUPPET_CLASSES=zulip::profile::redis ./scripts/setup/install
```
All puppet modules under `zulip::profile` are allowed to be configured
stand-alone on a host. You can see the manifests you are most likely to
want in the list of includes in [the main manifest for the
default all-in-one Zulip server][standalone.pp], though it's also
possible to subclass some of the lower-level manifests defined in that
directory if you want to customize. A good example of doing this is
in the [zulip_ops Puppet configuration][zulipchat-puppet] that we use
as part of managing chat.zulip.org and zulip.com.
@@ -116,10 +116,10 @@ below.
#### Step 1: Set up Zulip
Follow the [standard instructions](../production/install.md), with one
change. When running the installer, pass the `--no-init-db`
flag, e.g.:
```bash
sudo -s # If not already root
./zulip-server-*/scripts/setup/install --certbot \
--email=YOUR_EMAIL --hostname=YOUR_HOSTNAME \
@@ -130,7 +130,7 @@ The script also installs and starts PostgreSQL on the server by
default. We don't need it, so run the following command to
stop and disable the local PostgreSQL server.
```bash
sudo service postgresql stop
sudo update-rc.d postgresql disable
```
@@ -142,9 +142,9 @@ This complication will be removed in a future version.
Access an administrative `psql` shell on your PostgreSQL database, and
run the commands in `scripts/setup/create-db.sql` to:
- Create a database called `zulip`.
- Create a user called `zulip`.
- Now log in with the `zulip` user to create a schema called
`zulip` in the `zulip` database. You might have to grant `create`
privileges first for the `zulip` user to do this.
@@ -157,23 +157,23 @@ database provider for the available options.
In `/etc/zulip/settings.py` on your Zulip server, configure the
following settings with details for how to connect to your PostgreSQL
server. Your database provider should provide these details.
- `REMOTE_POSTGRES_HOST`: Name or IP address of the PostgreSQL server.
- `REMOTE_POSTGRES_PORT`: Port on the PostgreSQL server.
- `REMOTE_POSTGRES_SSLMODE`: SSL Mode used to connect to the server.
If you're using password authentication, you should specify the
password of the `zulip` user in /etc/zulip/zulip-secrets.conf as
follows:
```ini
postgres_password = abcd1234
```
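Putting these together, the relevant `/etc/zulip/settings.py` entries might look like the following sketch; the hostname, port, and SSL mode are placeholders that your database provider supplies.
```python
# Hypothetical connection details for a remote PostgreSQL server.
REMOTE_POSTGRES_HOST = "db.example.com"  # name or IP of the PostgreSQL server
REMOTE_POSTGRES_PORT = "5432"            # port on the PostgreSQL server
REMOTE_POSTGRES_SSLMODE = "require"      # SSL mode used to connect
```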
Now complete the installation by running the following commands.
```bash
# Ask Zulip installer to initialize the PostgreSQL database.
su zulip -c '/home/zulip/deployments/current/scripts/setup/initialize-database'
@@ -191,61 +191,68 @@ configure that as follows:
with `/home/zulip/deployments/current/scripts/restart-server`.
1. Add the following block to `/etc/zulip/zulip.conf`:
```ini
[application_server]
nginx_listen_port = 12345
```
1. As root, run
`/home/zulip/deployments/current/scripts/zulip-puppet-apply`. This
will convert Zulip's main `nginx` configuration file to use your new
port.
We also have documentation for a Zulip server [using HTTP][using-http] for use
behind reverse proxies.
[using-http]: ../production/deployment.html#configuring-zulip-to-allow-http
## Customizing the outgoing HTTP proxy
To protect against [SSRF][ssrf], Zulip 4.8 and above default to
routing all outgoing HTTP and HTTPS traffic through
[Smokescreen][smokescreen], an HTTP `CONNECT` proxy; this includes
outgoing webhooks, website previews, and mobile push notifications.
By default, the Camo image proxy will be automatically configured to
use a custom outgoing proxy, but does not use Smokescreen by default
because Camo includes similar logic to deny access to private
subnets. You can [override][proxy.enable_for_camo] this default
configuration if desired.
To use a custom outgoing proxy:
1. Add the following block to `/etc/zulip/zulip.conf`, substituting in
your proxy's hostname/IP and port:
```ini
[http_proxy]
host = 127.0.0.1
port = 4750
```
1. As root, run
`/home/zulip/deployments/current/scripts/zulip-puppet-apply`. This
will reconfigure and restart Zulip.
If you would like to use an already-installed HTTP proxy, omit the
first step, and adjust the IP address and port in the second step
accordingly.
If you have a deployment with multiple frontend servers, or wish to
install Smokescreen on a separate host, you can apply the
`zulip::profile::smokescreen` Puppet class on that host, and follow
the above steps, setting the `[http_proxy]` block to point to that
host.
If you wish to disable the outgoing proxy entirely, follow the above
steps, configuring an empty `host` value.
Optionally, you can also configure the [Smokescreen ACL
list][smokescreen-acls]. By default, Smokescreen denies access to all
[non-public IP
addresses](https://en.wikipedia.org/wiki/Private_network), including
127.0.0.1, but allows traffic to all public Internet hosts.
In Zulip 4.7 and older, to enable SSRF protection via Smokescreen, you
will need to explicitly add the `zulip::profile::smokescreen` Puppet
class, and configure the `[http_proxy]` block as above.
[proxy.enable_for_camo]: #enable-for-camo
[smokescreen]: https://github.com/stripe/smokescreen
[smokescreen-acls]: https://github.com/stripe/smokescreen#acls
[ssrf]: https://owasp.org/www-community/attacks/Server_Side_Request_Forgery
@@ -275,31 +282,59 @@ HTTP as follows:
1. Add the following block to `/etc/zulip/zulip.conf`:
```ini
[application_server]
http_only = true
```
1. As root, run
`/home/zulip/deployments/current/scripts/zulip-puppet-apply`. This
will convert Zulip's main `nginx` configuration file to allow HTTP
instead of HTTPS.
1. Finally, restart the Zulip server, using
`/home/zulip/deployments/current/scripts/restart-server`.
#### Configuring Zulip to trust proxies
Before placing Zulip behind a reverse proxy, it needs to be configured to trust
the client IP addresses that the proxy reports. This is important to have
accurate IP addresses in server logs, as well as in notification emails which
are sent to end users.
1. Determine the IP addresses of all reverse proxies you are setting up, as seen
from the Zulip host. Depending on your network setup, these may not be the
same as the public IP addresses of the reverse proxies.
1. Add the following block to `/etc/zulip/zulip.conf`.
```ini
[loadbalancer]
# Use the IP addresses you determined above, separated by commas.
ips = 192.168.0.100
```
1. Reconfigure Zulip with these settings. As root, run
`/home/zulip/deployments/current/scripts/zulip-puppet-apply`. This will
adjust Zulip's `nginx` configuration file to accept the `X-Forwarded-For`
header when it is sent from one of the reverse proxy IPs.
1. Finally, restart the Zulip server, using
`/home/zulip/deployments/current/scripts/restart-server`.
### nginx configuration
For `nginx` configuration, there are two things you need to set up:
- The root `nginx.conf` file. We recommend using
`/etc/nginx/nginx.conf` from your Zulip server for our recommended
settings. E.g. if you don't set `client_max_body_size`, it won't be
possible to upload large files to your Zulip server.
- The `nginx` site-specific configuration (in
`/etc/nginx/sites-available`) for the Zulip app. The following
example is a good starting point:
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
@@ -323,9 +358,13 @@ Don't forget to update `server_name`, `ssl_certificate`,
`ssl_certificate_key` and `proxy_pass` with the appropriate values for
your installation.
On the Zulip side, you will need to add the `nginx` server IP as a trusted
reverse proxy. Follow the instructions to [configure Zulip to trust
proxies](#configuring-zulip-to-trust-proxies).
[nginx-proxy-longpolling-config]: https://github.com/zulip/zulip/blob/main/puppet/zulip/files/nginx/zulip-include-common/proxy_longpolling
[standalone.pp]: https://github.com/zulip/zulip/blob/main/puppet/zulip/manifests/profile/standalone.pp
[zulipchat-puppet]: https://github.com/zulip/zulip/tree/main/puppet/zulip_ops/manifests
### Apache2 configuration
@@ -336,64 +375,68 @@ make the following changes in two configuration files.
1. Follow the instructions for [Configure Zulip to allow HTTP](#configuring-zulip-to-allow-http).
2. Add the following to `/etc/zulip/settings.py`:
```python
EXTERNAL_HOST = 'zulip.example.com'
ALLOWED_HOSTS = ['zulip.example.com', '127.0.0.1']
USE_X_FORWARDED_HOST = True
```
3. Restart your Zulip server with `/home/zulip/deployments/current/scripts/restart-server`.
4. Follow the instructions to [configure Zulip to trust
proxies](#configuring-zulip-to-trust-proxies). For this example, the reverse
proxy IP would be `127.0.0.1`.
5. Create an Apache2 virtual host configuration file, similar to the
following. Place it in the appropriate path for your Apache2
installation and enable it (E.g. if you use Debian or Ubuntu, then
place it in `/etc/apache2/sites-available/zulip.example.com.conf`
and then run
`a2ensite zulip.example.com && systemctl reload apache2`):
```apache
<VirtualHost *:80>
    ServerName zulip.example.com
    RewriteEngine On
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
<VirtualHost *:443>
    ServerName zulip.example.com

    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
    RequestHeader set "X-Forwarded-SSL" expr=%{HTTPS}

    RewriteEngine On
    RewriteRule /(.*) http://localhost:5080/$1 [P,L]

    <Location />
        Require all granted
        ProxyPass http://localhost:5080/ timeout=300
        ProxyPassReverse http://localhost:5080/
        ProxyPassReverseCookieDomain 127.0.0.1 zulip.example.com
    </Location>

    SSLEngine on
    SSLProxyEngine on
    SSLCertificateFile /etc/letsencrypt/live/zulip.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/zulip.example.com/privkey.pem
    SSLOpenSSLConfCmd DHParameters "/etc/nginx/dhparam.pem"
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    SSLHonorCipherOrder off
    SSLSessionTickets off
    Header set Strict-Transport-Security "max-age=31536000"
</VirtualHost>
```
### HAProxy configuration
If you want to use HAProxy with Zulip, this `backend` config is a good
place to start.
```text
backend zulip
mode http
balance leastconn
@@ -403,7 +446,9 @@ backend zulip
Since this configuration uses the `http` mode, you will also need to
[configure Zulip to allow HTTP](#configuring-zulip-to-allow-http) as
described above. Additionally, you will need to [add the HAProxy server IP
address as a trusted load balancer](#configuring-zulip-to-trust-proxies)
to have Zulip respect the addresses in `X-Forwarded-For` headers.
### Other proxies
@@ -411,40 +456,41 @@ If you're using another reverse proxy implementation, there are few
things you need to be careful about when configuring it:
1. Configure your reverse proxy (or proxies) to correctly maintain the
`X-Forwarded-For` HTTP header, which is supposed to contain the series
of IP addresses the request was forwarded through. Additionally,
[configure Zulip to respect the addresses sent by your reverse
proxies](#configuring-zulip-to-trust-proxies). You can verify
your work by looking at `/var/log/zulip/server.log` and checking it
has the actual IP addresses of clients, not the IP address of the
proxy server.
2. Ensure your proxy doesn't interfere with Zulip's use of
long-polling for real-time push from the server to your users'
browsers. This [nginx code snippet][nginx-proxy-longpolling-config]
does this.
The key configuration options are, for the `/json/events` and
`/api/1/events` endpoints:
- `proxy_read_timeout 1200;`. It's critical that this be
significantly above 60s, but the precise value isn't important.
- `proxy_buffering off`. If you don't do this, your `nginx` proxy may
return occasional 502 errors to clients using Zulip's events API.
3. The other tricky failure mode we've seen with `nginx` reverse
proxies is that they can load-balance between the IPv4 and IPv6
addresses for a given hostname. This can result in mysterious errors
that can be quite difficult to debug. Be sure to declare your
`upstreams` equivalent in a way that won't do load-balancing
unexpectedly (e.g. pointing to a DNS name that you haven't configured
with multiple IPs for your Zulip machine; sometimes this happens with
IPv6 configuration).
## System and deployment configuration
The file `/etc/zulip/zulip.conf` is used to configure properties of
the system and deployment; `/etc/zulip/settings.py` is used to
configure the application itself. The `zulip.conf` sections and
settings are described below.
### `[machine]`
@@ -456,11 +502,12 @@ The most common is **`zulip::profile::standalone`**, used for a
stand-alone single-host deployment.
[Components](../overview/architecture-overview.html#components) of
that include:
- **`zulip::profile::app_frontend`**
- **`zulip::profile::memcached`**
- **`zulip::profile::postgresql`**
- **`zulip::profile::redis`**
- **`zulip::profile::rabbitmq`**
If you are using [Apache as a single-sign-on
authenticator](../production/authentication-methods.html#apache-based-sso-with-remote-user),
@@ -472,21 +519,19 @@ Set to the string `enabled` if enabling the [multi-language PGroonga
search
extension](../subsystems/full-text-search.html#multi-language-full-text-search).
### `[deployment]`
#### `deploy_options`
Options passed by `upgrade-zulip` and `upgrade-zulip-from-git` into
`upgrade-zulip-stage-2`. These might be any of:
- **`--skip-puppet`** skips doing Puppet/apt upgrades. The user will need
to run `zulip-puppet-apply` manually after the upgrade.
- **`--skip-migrations`** skips running database migrations. The
user will need to run `./manage.py migrate` manually after the upgrade.
- **`--skip-purge-old-deployments`** skips purging old deployments;
without it, only deployments from the last two weeks are kept.
Generally installations will not want to set any of these options; the
`--skip-*` options are primarily useful for reducing upgrade downtime
@@ -497,8 +542,6 @@ for servers that are upgraded frequently by core Zulip developers.
Default repository URL used when [upgrading from a Git
repository](../production/upgrade-or-modify.html#upgrading-from-a-git-repository).
### `[application_server]`
#### `http_only`
@@ -530,7 +573,7 @@ mode). The calculation is based on whether the system has enough
memory (currently 3.5GiB) to run a single-server Zulip installation in
the multiprocess mode.
Set to `true` or `false` to override the automatic calculation. This
override is useful both for Docker systems (where the above algorithm
might see the host's memory, not the container's) and when using
remote servers for postgres, memcached, redis, and RabbitMQ.
@@ -548,18 +591,6 @@ Override the default uwsgi backlog of 128 connections.
Override the default `uwsgi` (Django) process count of 6 on hosts with
more than 3.5GiB of RAM, 4 on hosts with less.
### `[certbot]`
#### `auto_renew`
If set to the string `yes`, [Certbot will attempt to automatically
renew its certificate](../production/ssl-certificates.html#certbot-recommended). Do
not set by hand; use `scripts/setup/setup-certbot` to configure this.
### `[postfix]`
#### `mailname`
@@ -607,19 +638,9 @@ connections.
#### `version`
The version of PostgreSQL that is in use. Do not set by hand; use the
[PostgreSQL upgrade tool](../production/upgrade-or-modify.html#upgrading-postgresql).
### `[rabbitmq]`
#### `nodename`
The name used to identify the local RabbitMQ server; do not modify.
### `[memcached]`
#### `memory`
@@ -627,8 +648,6 @@ The name used to identify the local RabbitMQ server; do not modify.
Override the number of megabytes of memory that memcached should be
configured to consume; defaults to 1/8th of the total server memory.
### `[loadbalancer]`
#### `ips`
@@ -636,15 +655,27 @@ configured to consume; defaults to 1/8th of the total server memory.
Comma-separated list of IP addresses or netmasks of external
load balancers whose `X-Forwarded-For` should be respected.
### `[http_proxy]`
#### `host`
The hostname or IP address of an [outgoing HTTP `CONNECT`
proxy](#customizing-the-outgoing-http-proxy). Defaults to `localhost`
if unspecified.
#### `port`
The TCP port of the HTTP `CONNECT` proxy on the host specified above.
Defaults to `4750` if unspecified.
#### `listen_address`
The IP address that Smokescreen should bind to and listen on.
Defaults to `127.0.0.1`.
#### `enable_for_camo`
Because Camo includes logic to deny access to private subnets, routing
its requests through Smokescreen is generally not necessary. Set to
'true' or 'false' to override the default, which uses the proxy only if
it is not the default of Smokescreen on a local host.

View File

@@ -1,42 +1,42 @@
# Incoming email integration
Zulip's incoming email gateway integration makes it possible to send
messages into Zulip by sending an email. It's highly recommended
because it enables:
- When users reply to one of Zulip's message notification emails
from their email client, the reply can go directly
into Zulip.
- Integrating third-party services that can send email notifications
into Zulip. See the [integration
documentation](https://zulip.com/integrations/doc/email) for
details.
Once this integration is configured, each stream will have a special
email address displayed on the stream settings page. Emails sent to
that address will be delivered into the stream.
There are two ways to configure Zulip's email gateway:
1. Local delivery (recommended): A postfix server runs on the Zulip
server and passes the emails directly to Zulip.
1. Polling: A cron job running on the Zulip server checks an IMAP
inbox (`username@example.com`) every minute for new emails.
The local delivery configuration is preferred for production because
it supports nicer looking email addresses and has no cron delay. The
polling option is convenient for testing/developing this feature
because it doesn't require a public IP address or setting up MX
records in DNS.
:::{note}
Incoming emails are rate-limited, with the following limits:
- 50 emails per minute.
- 120 emails per 5 minutes.
- 600 emails per hour.
:::
## Local delivery setup
@@ -45,7 +45,7 @@ integration; you just need to enable and configure it as follows.
The main decision you need to make is what email domain you want to
use for the gateway; for this discussion we'll use
`emaildomain.example.com`. The email addresses used by the gateway
will look like `foo@emaildomain.example.com`, so we recommend using
`EXTERNAL_HOST` here.
@@ -55,33 +55,34 @@ using an [HTTP reverse proxy][reverse-proxy]).
1. Using your DNS provider, create a DNS MX (mail exchange) record
configuring email for `emaildomain.example.com` to be processed by
`hostname.example.com`. You can check your work using this command:
```console
$ dig +short emaildomain.example.com -t MX
1 hostname.example.com
```
1. Log in to your Zulip server; the remaining steps all happen there.
1. Add `, zulip::postfix_localmail` to `puppet_classes` in
`/etc/zulip/zulip.conf`. A typical value after this change is:
```ini
puppet_classes = zulip::profile::standalone, zulip::postfix_localmail
```
1. If `hostname.example.com` is different from
`emaildomain.example.com`, add a section to `/etc/zulip/zulip.conf`
on your Zulip server like this:
```ini
[postfix]
mailname = emaildomain.example.com
```
This tells postfix to expect to receive emails at addresses ending
with `@emaildomain.example.com`, overriding the default of
`@hostname.example.com`.
1. Run `/home/zulip/deployments/current/scripts/zulip-puppet-apply`
(and answer `y`) to apply your new `/etc/zulip/zulip.conf`
@@ -93,33 +94,34 @@ using an [HTTP reverse proxy][reverse-proxy]).
1. Restart your Zulip server with
`/home/zulip/deployments/current/scripts/restart-server`.
Congratulations! The integration should be fully operational.
[reverse-proxy]: ../production/deployment.html#putting-the-zulip-application-behind-a-reverse-proxy
## Polling setup
1. Create an email account dedicated to Zulip's email gateway
messages. We assume the address is of the form
`username@example.com`. The email provider needs to support the
standard model of delivering emails sent to
`username+stuff@example.com` to the `username@example.com` inbox.
1. Edit `/etc/zulip/settings.py`, and set `EMAIL_GATEWAY_PATTERN` to
`"username+%s@example.com"`.
1. Set up IMAP for your email account and obtain the authentication details.
([Here's how it works with Gmail](https://support.google.com/mail/answer/7126229?hl=en))
1. Configure IMAP access in the appropriate Zulip settings:
- Login and server connection details in `/etc/zulip/settings.py`
in the email gateway integration section (`EMAIL_GATEWAY_LOGIN` and others).
- Password in `/etc/zulip/zulip-secrets.conf` as `email_gateway_password`.
1. Install a cron job to poll the inbox every minute for new messages:
```bash
cd /home/zulip/deployments/current/
sudo cp puppet/zulip/files/cron.d/email-mirror /etc/cron.d/
```
Congratulations! The integration should be fully operational.
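For reference, a hedged sketch of the polling-related settings described above; the addresses and IMAP server are placeholders, and the exact list of `EMAIL_GATEWAY_*` settings to fill in is documented in `settings.py` itself.
```python
# Hypothetical /etc/zulip/settings.py values for the polling setup.
EMAIL_GATEWAY_PATTERN = "username+%s@example.com"
EMAIL_GATEWAY_LOGIN = "username@example.com"
EMAIL_GATEWAY_IMAP_SERVER = "imap.example.com"
EMAIL_GATEWAY_IMAP_PORT = 993
EMAIL_GATEWAY_IMAP_FOLDER = "INBOX"
# The IMAP password goes in /etc/zulip/zulip-secrets.conf as:
#   email_gateway_password = abcd1234
```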

View File

@@ -6,13 +6,13 @@ email addresses and send notifications.
## How to configure
1. Identify an outgoing email (SMTP) account where you can have Zulip
send mail. If you don't already have one you want to use, see
[Email services](#email-services) below.
1. Fill out the section of `/etc/zulip/settings.py` headed "Outgoing
email (SMTP) settings". This includes the hostname and typically
the port to reach your SMTP provider, and the username to log in to
it. You'll also want to fill out the noreply email section.
1. Put the password for the SMTP user account in
`/etc/zulip/zulip-secrets.conf` by setting `email_password`. For
@@ -57,18 +57,18 @@ the best documentation).
If you don't have an existing outgoing SMTP provider, don't worry!
Each of the options we recommend above (as well as dozens of other
services) has free options. Once you've signed up, you'll want to
find the service's provided "SMTP credentials", and configure Zulip as
follows:
- The hostname like `EMAIL_HOST = 'smtp.mailgun.org'` in `/etc/zulip/settings.py`
- The username like `EMAIL_HOST_USER = 'username@example.com'` in
`/etc/zulip/settings.py`.
- The TLS setting as `EMAIL_USE_TLS = True` in
`/etc/zulip/settings.py`, for most providers
- The port as `EMAIL_PORT = 587` in `/etc/zulip/settings.py`, for most
providers
- The password like `email_password = abcd1234` in `/etc/zulip/zulip-secrets.conf`.
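Assembled in one place, those settings might look like the following sketch; the hostname and username are placeholders for your provider's SMTP credentials.
```python
# Hypothetical /etc/zulip/settings.py values for a transactional email provider.
EMAIL_HOST = "smtp.mailgun.org"
EMAIL_HOST_USER = "username@example.com"
EMAIL_USE_TLS = True  # for most providers
EMAIL_PORT = 587      # for most providers
# The password goes in /etc/zulip/zulip-secrets.conf:
#   email_password = abcd1234
```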
### Using system email
@@ -78,7 +78,7 @@ configuration on the system that forwards email sent locally into your
corporate email system), you will likely need to use something like
these setting values:
```python
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
EMAIL_USE_TLS = False
@@ -88,7 +88,7 @@ EMAIL_HOST_USER = ""
We should emphasize that because modern spam filtering is very
aggressive, you should make sure your downstream email system is
configured to properly sign outgoing email sent by your Zulip server
(or check your spam folder) when using this configuration. See
[documentation on using Django with a local postfix server][postfix-email]
for additional advice.
@@ -97,32 +97,33 @@ for additional advice.
### Using Gmail for outgoing email
We don't recommend using an inbox product like Gmail for outgoing
email, because Gmail's anti-spam measures make this annoying. But if
you want to use a Gmail account to send outgoing email anyway, here's
how to make it work:
- Create a totally new Gmail account for your Zulip server; you don't
want Zulip's automated emails to come from your personal email address.
- If you're using 2-factor authentication on the Gmail account, you'll
need to use an
[app-specific password](https://support.google.com/accounts/answer/185833).
- If you're not using 2-factor authentication, read this Google
support answer and configure that account as
["less secure"](https://support.google.com/accounts/answer/6010255);
Gmail doesn't allow servers to send outgoing email by default.
- Note also that the rate limits for Gmail are quite low
(e.g. 100 / day), so it's easy to get rate-limited if your server
has significant traffic. For more active servers, we recommend
moving to a free account on a transactional email service.
### Logging outgoing email to a file for prototyping
For prototyping, you might want to proceed without setting up an email
provider. If you want to see the emails Zulip would have sent, you
can log them to a file instead.
To do so, add these lines to `/etc/zulip/settings.py`:
```python
EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
EMAIL_FILE_PATH = '/var/log/zulip/emails'
```
@@ -137,22 +138,22 @@ later set up a real SMTP provider!
You can quickly test your outgoing email configuration using:
```bash
su zulip -c '/home/zulip/deployments/current/manage.py send_test_email user@example.com'
```
If it doesn't throw an error, it probably worked; you can confirm by
checking your email. You should get two emails: One sent by the
default From address for your Zulip server, and one sent by the
"noreply" From address.
If it doesn't work, check these common failure causes:
- Your hosting provider may block outgoing SMTP traffic in its default
firewall rules. Check whether the port `EMAIL_PORT` is blocked in
your hosting provider's firewall.
- Your SMTP server's permissions might not allow the email account
you're using to send email from the `noreply` email addresses used
by Zulip when sending confirmation emails.
@@ -162,19 +163,19 @@ If it doesn't work, check these common failure causes:
If necessary, you can set `ADD_TOKENS_TO_NOREPLY_ADDRESS` to `False`
in `/etc/zulip/settings.py` (which will cause these confirmation
emails to be sent from a consistent `noreply@` address). Disabling
`ADD_TOKENS_TO_NOREPLY_ADDRESS` is generally safe if you are not
using Zulip's feature that allows anyone to create an account in
your Zulip organization if they have access to an email address in a
certain domain. See [this article][helpdesk-attack] for details on
the security issue with helpdesk software that
`ADD_TOKENS_TO_NOREPLY_ADDRESS` helps protect against.
- Make sure you set the password in `/etc/zulip/zulip-secrets.conf`.
- Check the username and password for typos.
- Be sure to restart your Zulip server after editing either
`settings.py` or `zulip-secrets.conf`, using
`/home/zulip/deployments/current/scripts/restart-server`.
Note that the `manage.py` command above will read the latest
@@ -186,33 +187,33 @@ If it doesn't work, check these common failure causes:
Here are a few final notes on what to look at when debugging why you
aren't receiving emails from Zulip:
- Most transactional email services have an "outgoing email" log where
you can inspect the emails that reached the service, whether an
email was flagged as spam, etc.
- Starting with Zulip 1.7, Zulip logs an entry in
`/var/log/zulip/send_email.log` whenever it attempts to send an
email. The log entry includes whether the request succeeded or failed.
- If attempting to send an email throws an exception, a traceback
should be in `/var/log/zulip/errors.log`, along with any other
exceptions Zulip encounters.
- If your SMTP provider uses SSL on port 465 (and not TLS on port
587), you need to set `EMAIL_PORT = 465` as well as replacing
`EMAIL_USE_TLS = True` with `EMAIL_USE_SSL = True`; otherwise, Zulip
will try to use the TLS protocol on port 465, which won't work.
- Zulip's email sending configuration is based on the standard Django
[SMTP backend](https://docs.djangoproject.com/en/2.0/topics/email/#smtp-backend)
configuration. So if you're having trouble getting your email
provider working, you may want to search for documentation related
to using your email provider with Django.
The one thing we've changed from the Django defaults is that we read
the email password from the `email_password` entry in the Zulip
secrets file, as part of our policy of not having any secret
information in the `/etc/zulip/settings.py` file. In other words,
if Django documentation references setting `EMAIL_HOST_PASSWORD`,
you should instead set `email_password` in
`/etc/zulip/zulip-secrets.conf`.
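As a concrete illustration of the SSL note above, a provider that only offers SSL on port 465 would be configured with something like this sketch (hostname and username are placeholders):
```python
# Hypothetical settings for an SMTP provider using SSL on port 465.
EMAIL_HOST = "smtp.example.com"
EMAIL_HOST_USER = "username@example.com"
EMAIL_PORT = 465
EMAIL_USE_SSL = True  # instead of EMAIL_USE_TLS = True
```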

View File

@@ -1,4 +1,4 @@
```{eval-rst}
:orphan:
```
@@ -8,61 +8,65 @@ Zulip 1.7 and 1.9 each contain some significant database migrations
that can take several minutes to run.
The upgrade process automatically minimizes disruption by running
these first, before beginning the user-facing downtime. However, if
you'd like to watch the downtime phase of the upgrade closely, you
can run them manually before starting the upgrade:
1. Log in to your Zulip server as the `zulip` user (or as `root` and
then run `su zulip` to drop privileges), and
`cd /home/zulip/deployments/current`
2. Run `./manage.py dbshell`. This will open a shell connected to the
PostgreSQL database.
3. In the PostgreSQL shell, run the following commands:
```postgresql
CREATE INDEX CONCURRENTLY
zerver_usermessage_is_private_message_id
ON zerver_usermessage (user_profile_id, message_id)
WHERE (flags & 2048) != 0;
CREATE INDEX CONCURRENTLY
zerver_usermessage_active_mobile_push_notification_id
ON zerver_usermessage (user_profile_id, message_id)
WHERE (flags & 4096) != 0;
```
(These first migrations are the only new ones in Zulip 1.9).
```postgresql
CREATE INDEX CONCURRENTLY
zerver_usermessage_mentioned_message_id
ON zerver_usermessage (user_profile_id, message_id)
WHERE (flags & 8) != 0;
CREATE INDEX CONCURRENTLY
zerver_usermessage_starred_message_id
ON zerver_usermessage (user_profile_id, message_id)
WHERE (flags & 2) != 0;
CREATE INDEX CONCURRENTLY
zerver_usermessage_has_alert_word_message_id
ON zerver_usermessage (user_profile_id, message_id)
WHERE (flags & 512) != 0;
CREATE INDEX CONCURRENTLY
zerver_usermessage_wildcard_mentioned_message_id
ON zerver_usermessage (user_profile_id, message_id)
WHERE (flags & 8) != 0 OR (flags & 16) != 0;
CREATE INDEX CONCURRENTLY
zerver_usermessage_unread_message_id
ON zerver_usermessage (user_profile_id, message_id)
WHERE (flags & 1) = 0;
```
These will take some time to run, during which the server will
continue to serve user traffic as usual with no disruption. Once they
finish, you can proceed with installing Zulip 1.7.
To help you estimate how long these will take on your server: count
the number of UserMessage rows, with `select COUNT(*) from zerver_usermessage;`
at the `./manage.py dbshell` prompt. At the time these migrations
were run on chat.zulip.org, it had 75M UserMessage rows; the first 5
indexes took about 1 minute each to create, and the final,
"unread_message" index took more like 10 minutes.

View File

@@ -5,28 +5,28 @@ move data from one Zulip server to another, do backups, compliance
work, or migrate from your own servers to the hosted Zulip Cloud
service (or back):
- The [Backup](#backups) tool is designed for exact restoration of a
Zulip server's state, for disaster recovery, testing with production
data, or hardware migration. This tool has a few limitations:
- Backups must be restored on a server running the same Zulip
version (most precisely, one where `manage.py showmigrations` has
the same output).
- Backups must be restored on a server running the same PostgreSQL
version.
- Backups aren't useful for migrating organizations between
self-hosting and Zulip Cloud (which may require renumbering all
the users/messages/etc.).
We highly recommend this tool in situations where it is applicable,
because it is highly optimized and highly stable, since the hard
work is done by the built-in backup feature of PostgreSQL. We also
document [backup details](#backup-details) for users managing
backups manually.
- The logical [Data export](#data-export) tool is designed for
migrating data between Zulip Cloud and other Zulip servers, as well
as various auditing purposes. The logical export tool produces a
`.tar.gz` archive with most of the Zulip database data encoded in
JSON files, a format shared by our [data
import](#import-into-a-new-zulip-server) tools for third-party
@@ -34,20 +34,20 @@ service (or back):
[Slack](https://zulip.com/help/import-from-slack).
Like the backup tool, logical data exports must be imported on a
Zulip server running the same version. However, logical data
exports can be imported on Zulip servers running a different
PostgreSQL version or hosting a different set of Zulip
organizations. We recommend this tool in cases where the backup
tool isn't applicable, including situations where an easily
machine-parsable export format is desired.
- Zulip also has an [HTML archive
tool](https://github.com/zulip/zulip-archive), which is primarily
intended for public archives, but can also be useful to
inexpensively preserve public stream conversations when
decommissioning a Zulip organization.
- It's possible to set up [PostgreSQL streaming
replication](#postgresql-streaming-replication) and the [S3 file
upload
backend](../production/upload-backends.html#s3-backend-configuration)
@@ -57,7 +57,7 @@ service (or back):
The Zulip server has a built-in backup tool:
```bash
# As the zulip user
/home/zulip/deployments/current/manage.py backup
# Or as root
@@ -65,10 +65,11 @@ su zulip -c '/home/zulip/deployments/current/manage.py backup'
```
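
For example, here is a sketch of an invocation that writes the tarball
to a chosen path and skips the database backup; both options are
described just below.

```bash
# As the zulip user: back up only non-database state to a chosen path
/home/zulip/deployments/current/manage.py backup \
    --output=/tmp/backup.tar.gz --skip-db
```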
The backup tool provides the following options:
- `--output=/tmp/backup.tar.gz`: Filename to write the backup tarball
to (default: write to a file in `/tmp`). On success, the
console output will show the path to the output tarball.
- `--skip-db`: Skip backup of the database. Useful if you're using a
remote PostgreSQL host with its own backup system and just need to
back up non-database state.
- `--skip-uploads`: If `LOCAL_UPLOADS_DIR` is set, user-uploaded files
@@ -82,9 +83,9 @@ server's state on another machine perfectly.
First, [install a new Zulip server through Step 3][install-server]
with the same version of both the base OS and Zulip from your previous
installation. Then, run as root:
```bash
/home/zulip/deployments/current/scripts/setup/restore-backup /path/to/backup
```
@@ -109,11 +110,11 @@ errors when trying to access it via `zuliptest.example.com`.
If you're not sure what versions were in use when a given backup was
created, you can get that information via the files in the backup
tarball: `postgres-version`, `os-version`, and `zulip-version`. The
following command may be useful for viewing these files without
extracting the entire archive.
```bash
tar -Oaxf /path/to/archive/zulip-backup-rest.tar.gz zulip-backup/zulip-version
```
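
If you want all three version files at once, a small loop over the same
archive works (a sketch, using the example path above):

```bash
# Print the Zulip, OS, and PostgreSQL versions recorded in the backup
for f in zulip-version os-version postgres-version; do
    tar -Oaxf /path/to/archive/zulip-backup-rest.tar.gz "zulip-backup/$f"
done
```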
@@ -128,27 +129,27 @@ server, including the database, settings, secrets from
The following data is not included in these backup archives,
and you may want to back up separately:
- The server access/error logs from `/var/log/zulip`. The Zulip
server only appends to logs, and they can be very large compared to
the rest of the data for a Zulip server.
- Files uploaded with the Zulip
[S3 file upload backend](../production/upload-backends.md). We
don't include these for two reasons. First, the uploaded file data
in S3 can easily be many times larger than the rest of the backup,
and downloading it all to a server doing a backup could easily
exceed its disk capacity. Additionally, S3 is a reliable persistent
storage system with its own high-quality tools for doing backups.
- SSL certificates. These are not included because they are
particularly security-sensitive and are either trivially replaced
(if generated via Certbot) or provided by the system administrator.
For completeness, Zulip's backups do not include certain highly
transient state that Zulip doesn't store in a database. For example,
typing status data, API rate-limiting counters, and RabbitMQ queues
that are essentially always empty in a healthy server (like outgoing
emails to send). You can check whether these queues are empty using
`rabbitmqctl list_queues`.
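
For example, to list each queue with its current message count (a
sketch; run as root):

```bash
# Queues should be empty or nearly empty on a healthy, idle server
rabbitmqctl list_queues name messages
```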
#### Backup details
@@ -156,41 +157,41 @@ emails to send). You can check whether these queues are empty using
This section is primarily for users managing backups themselves
(E.g. if they're using a remote PostgreSQL database with an existing
backup strategy), and also serves as documentation for what is
included in the backups generated by Zulip's standard tools. The
data includes:
- The PostgreSQL database. You can back this up with any standard
database export or backup tool. Zulip has built-in support for taking
daily incremental backups using
[wal-g](https://github.com/wal-g/wal-g); these backups are stored for
30 days in S3. If you have an Amazon S3 bucket you wish to use for
storing the backups, edit `/etc/zulip/zulip-secrets.conf` on the
PostgreSQL server to add:
```ini
s3_backups_key = # aws public key
s3_backups_secret_key = # aws secret key
s3_backups_bucket = # name of S3 backup
```
After adding the secrets, run
`/home/zulip/deployments/current/scripts/zulip-puppet-apply`. You
can (and should) monitor that backups are running regularly via
the Nagios plugin installed into
`/usr/lib/nagios/plugins/zulip_postgresql_backups/check_postgresql_backup`.
- Any user-uploaded files. If you're using S3 as storage for file
uploads, this is backed up in S3. But if you have instead set
`LOCAL_UPLOADS_DIR`, any files uploaded by users (including avatars)
will be stored in that directory and you'll want to back it up.
- Your Zulip configuration including secrets from `/etc/zulip/`.
E.g. if you lose the value of `secret_key`, all users will need to
log in again when you set up a replacement server since you won't be
able to verify their cookies. If you lose `avatar_salt`, any
user-uploaded avatars will need to be re-uploaded (since avatar
filenames are computed using a hash of `avatar_salt` and the user's
email), etc.
[export-import]: ../production/export-and-import.md
@@ -198,27 +199,27 @@ email), etc.
To restore from a manual backup, the process is basically the reverse of the above:
- Install new server as normal by downloading a Zulip release tarball
and then using `scripts/setup/install`. You don't need
to run the `initialize-database` second stage which puts default
data into the database.
- Unpack to `/etc/zulip` the `settings.py` and `zulip-secrets.conf` files
from your backups.
- If you ran `initialize-database` anyway above, you'll want to run
`scripts/setup/postgresql-init-db` to drop the initial database first.
- Restore your database from the backup.
- Reconfigure rabbitmq to use the password from `secrets.conf`
by running, as root, `scripts/setup/configure-rabbitmq`.
- If you're using local file uploads, restore those files to the path
specified by `settings.LOCAL_UPLOADS_DIR` and (if appropriate) any
logs.
- Start the server using `scripts/restart-server`.
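
Put together, the sequence looks roughly like this (a sketch; the
settings archive path and the database-restore step depend on how you
made your manual backups):

```bash
# Run as root on the new server, after scripts/setup/install has finished
tar -C /etc/zulip -xzf /path/to/your/zulip-etc-backup.tar.gz  # settings.py, zulip-secrets.conf
/home/zulip/deployments/current/scripts/setup/postgresql-init-db  # only if initialize-database was run
# ... restore your PostgreSQL dump here, using your own backup tooling ...
/home/zulip/deployments/current/scripts/setup/configure-rabbitmq
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```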
This restoration process can also be used to migrate a Zulip
installation from one server to another.
@@ -233,11 +234,11 @@ that they are up to date using the Nagios plugin at:
Zulip has database configuration for using PostgreSQL streaming
replication. You can see the configuration in these files:
- `puppet/zulip_ops/manifests/profile/postgresql.pp`
- `puppet/zulip_ops/files/postgresql/*`
We use this configuration for Zulip Cloud, and it works well in
production, but it's not fully generic. Contributions to make it a
supported and documented option for other installations are
appreciated.
@@ -254,24 +255,24 @@ backups.
### Preventing changes during the export
For best results, you'll want to shut down access to the organization
before exporting, so that nobody can send new messages (etc.) while
you're exporting data. There are two ways to do this:
1. `./scripts/stop-server`, which stops the whole server. This is
preferred if you're not hosting multiple organizations, because it has
no side effects other than disabling the Zulip server for the
duration.
1. Pass `--deactivate` to `./manage.py export`, which first deactivates
the target organization, logging out all active login sessions and
preventing all accounts from logging in or accessing the API. This is
preferred for environments like Zulip Cloud where you might want to
export a single organization without disrupting any other users, and
the intent is to move hosting of the organization (and forcing users
to re-log in would be required as part of the hosting migration
anyway).
We include both options in the instructions below, commented out so
that neither runs (using the `# ` at the start of the lines). If
you'd like to use one of these options, remove the `# ` at the start
of the lines for the appropriate option.
@@ -280,7 +281,7 @@ of the lines for the appropriate option.
Log in to a shell on your Zulip server as the `zulip` user. Run the
following commands:
```bash
cd /home/zulip/deployments/current
# ./scripts/stop-server
# export DEACTIVATE_FLAG="--deactivate" # Deactivates the organization
@@ -291,80 +292,81 @@ cd /home/zulip/deployments/current
the default organization hosted at the Zulip server's root domain.)
This will generate a tarred archive with a name like
`/tmp/zulip-export-zcmpxfm6.tar.gz`. The archive contains several
JSON files (containing the Zulip organization's data) as well as an
archive of all the organization's uploaded files.
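
If you'd like a quick look at what the export contains before importing
it, you can list the archive (a sketch, using the example filename
above):

```bash
# Show the first few entries of the export archive
tar -tzf /tmp/zulip-export-zcmpxfm6.tar.gz | head -n 20
```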
## Import into a new Zulip server
1. [Install a new Zulip server](../production/install.md),
**skipping Step 3** (you'll create your Zulip organization via the data
import tool instead).
- Ensure that the Zulip server you're importing into is running the same
version of Zulip as the server you're exporting from.
- For exports from Zulip Cloud (zulip.com), you need to [upgrade to
`main`][upgrade-zulip-from-git], since we run `main` on
Zulip Cloud:
```bash
/home/zulip/deployments/current/scripts/upgrade-zulip-from-git main
```
It is not sufficient to be on the latest stable release, as
zulip.com runs pre-release versions of Zulip that are often
several months of development ahead of the latest release.
- Note that if your server has limited free RAM, you'll want to
shut down the Zulip server with `./scripts/stop-server` while
you run the import, since our minimal system requirements do not
budget extra RAM for running the data import tool.
2. If your new Zulip server is meant to fully replace a previous Zulip
server, you may want to copy some settings from `/etc/zulip` to your
new server to reuse the server-level configuration and secret keys
from your old server. There are a few important details to understand
about doing so:
- Copying `/etc/zulip/settings.py` and `/etc/zulip/zulip.conf` is
safe and recommended. Care is required when copying secrets from
`/etc/zulip/zulip-secrets.conf` (details below).
- If you copy `zulip_org_id` and `zulip_org_key` (the credentials
for the [mobile push notifications
service](../production/mobile-push-notifications.md)), you should
be very careful to make sure that no users had their IDs
renumbered during the import process (this can be checked using
`manage.py shell` with some care). The push notifications
service has a mapping of which `user_id` values are associated
with which devices for a given Zulip server (represented by the
`zulip_org_id` registration). This means that if any `user_id`
values were renumbered during the import and you don't register a
new `zulip_org_id`, push notifications meant for the user who now
has ID 15 may be sent to devices registered by the user who had
user ID 15 before the data export (yikes!). The solution is
simply to not copy these settings and re-register your server for
mobile push notifications if any users had their IDs renumbered
during the logical export/import process.
- If you copy the `rabbitmq_password` secret from
`zulip-secrets.conf`, you'll need to run
`scripts/setup/configure-rabbitmq` to update your local RabbitMQ
installation to use the password in your Zulip secrets file.
- You will likely want to copy `camo_key` (required to avoid
breaking certain links) and any settings you added related to
authentication and email delivery so that those work on your new
server.
- Copying `avatar_salt` is not recommended, due to similar issues
to the mobile push notifications service. Zulip will
automatically rewrite avatars at URLs appropriate for the new
user IDs, and using the same avatar salt (and same server URL)
post import could result in issues with browsers caching the
avatar image improperly for users whose ID was renumbered.
3. Log in to a shell on your Zulip server as the `zulip` user. Run the
following commands, replacing the filename with the path to your data
export tarball:
```bash
cd ~
tar -xf /path/to/export/file/zulip-export-zcmpxfm6.tar.gz
cd /home/zulip/deployments/current
@@ -386,7 +388,7 @@ custom subdomain, e.g. if you already have an existing organization on the
root domain. Replace the last three lines above with the following, after replacing
`<subdomain>` with the desired subdomain.
```bash
./manage.py import <subdomain> ~/zulip-export-zcmpxfm6
./manage.py reactivate_realm -r <subdomain> # Reactivates the organization
```
@@ -400,15 +402,16 @@ Your users will need to either authenticate using something like
Google auth or start by resetting their passwords.
You can use the `./manage.py send_password_reset_email` command to
send password reset emails to your users. We
recommend starting with sending one to yourself for testing:
```bash
./manage.py send_password_reset_email -u username@example.com
```
and then once you're ready, you can email them to everyone using e.g.
```bash
./manage.py send_password_reset_email -r '' --all-users
```
@@ -418,15 +421,15 @@ and then once you're ready, you can email them to everyone using e.g.
If you did a test import of a Zulip organization, you may want to
delete the test import data from your Zulip server before doing a
final import. You can **permanently delete** all data from a Zulip
organization using the following procedure:
- Start a [Zulip management shell](../production/management-commands.html#manage-py-shell)
- In the management shell, run the following commands, replacing `""`
with the subdomain if [you are hosting the organization on a
subdomain](../production/multiple-organizations.md):
```python
realm = Realm.objects.get(string_id="")
realm.delete()
```
@@ -434,7 +437,8 @@ realm.delete()
The output contains details on the objects deleted from the database.
Now, exit the management shell and run this to clear Zulip's cache:
```bash
/home/zulip/deployments/current/scripts/setup/flush-memcached
```
@@ -444,7 +448,7 @@ can additionally delete all file uploads, avatars, and custom emoji on
a Zulip server (across **all organizations**) with the following
command:
```bash
rm -rf /home/zulip/uploads/*/*
```
@@ -454,7 +458,7 @@ in the management shell before deleting the organization from the
database (this will be `2` for the first organization created on a
Zulip server, shown in the example below), e.g.:
```bash
rm -rf /home/zulip/uploads/*/2/
```

View File

@@ -17,7 +17,7 @@ To enable this integration, you need to get a production API key from
1. Choose **SDK** as product type and click **Next Step**.
1. Enter a name and a description for your app and click on **Create
New App**. The hostname for your Zulip server is a fine name.
1. You will receive a beta API key. Apply for a production API key
by following the steps mentioned by GIPHY on the same page.
@@ -36,12 +36,10 @@ follows:
1. Restart the Zulip server with
`/home/zulip/deployments/current/scripts/restart-server`.
Congratulations! You've configured the GIPHY integration for your
Zulip server. Your users can now use the integration as described in
[the help center article][help-center-giphy]. (A browser reload may
be required).
[help-center-giphy]: https://zulip.com/help/animated-gifs-from-giphy
[giphy-dashboard]: https://developers.giphy.com/dashboard/

View File

@@ -1,4 +1,4 @@
```{eval-rst}
:orphan:
```
@@ -11,38 +11,45 @@ configuration files in /etc, so we recommend against installing it on
a server running other nginx or django apps.
But if you do, here are some things you can do that may make it
possible to retain your existing site. However, this is _NOT_
recommended, and you may break your server. Make sure you have backups
and a provisioning script ready to go to wipe and restore your
existing services if (when) your server goes down.
These instructions are only for experts. If you're not an experienced
Linux sysadmin, you will have a much better experience if you get a
dedicated VM to install Zulip on instead (or [use
zulip.com](https://zulip.com)).
### Nginx
Copy your existing nginx configuration to a backup and then merge the
one created by Zulip into it:
```bash
sudo cp /etc/nginx/nginx.conf /etc/nginx.conf.before-zulip-install
sudo wget -O /etc/nginx/nginx.conf.zulip \
https://raw.githubusercontent.com/zulip/zulip/main/puppet/zulip/templates/nginx.conf.template.erb
sudo meld /etc/nginx/nginx.conf /etc/nginx/nginx.conf.zulip # be sure to merge to the right
```
Since the file in Zulip is an [ERB Puppet
template](https://puppet.com/docs/puppet/7/lang_template_erb.html),
you will also need to replace any `<%= ... %>` sections with
appropriate content. For instance `<%= @ca_crt %>` should be replaced
with `/etc/ssl/certs/ca-certificates.crt` on Debian and Ubuntu
installs.
After the Zulip installation completes, then you can overwrite (or
merge) your new nginx.conf with the installed one:
```console
$ sudo meld /etc/nginx/nginx.conf.zulip /etc/nginx/nginx.conf # be sure to merge to the right
$ sudo service nginx restart
```
Zulip's Puppet configuration will change the ownership of
`/var/log/nginx` so that the `zulip` user can access it. Depending on
your configuration, this may or may not cause problems.
### Puppet
@@ -51,13 +58,13 @@ If you have a Puppet server running on your server, you will get an
error message about not being able to connect to the client during the
install process:
```console
puppet-agent[29873]: Could not request certificate: Failed to open TCP connection to puppet:8140
```
So you'll need to shut down any Puppet servers.
```console
$ sudo service puppet-agent stop
$ sudo service puppet stop
```
@@ -66,9 +73,9 @@ $ sudo service puppet stop
Zulip expects to install PostgreSQL 12, and find that listening on
port 5432; any other version of PostgreSQL that is detected at install
time will cause the install to abort. If you already have PostgreSQL
installed, you can pass `--postgresql-version=` to the installer to
have it use that version. It will replace the package with the latest
from the PostgreSQL apt repository, but existing data will be
retained.
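
For example, a sketch of an install invocation that keeps an existing
PostgreSQL 13 installation (adjust the version number to whatever you
have; the other flags are the ones shown in the main install guide):

```bash
sudo -s  # if not already root
./zulip-server-*/scripts/setup/install --certbot \
    --email=YOUR_EMAIL --hostname=YOUR_HOSTNAME \
    --postgresql-version=13
```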
@@ -78,7 +85,7 @@ that.
### Memcached, Redis, and RabbitMQ
Zulip will, by default, configure these services for its use. The
configuration we use is pretty basic, but if you're using them for
something else, you'll want to make sure the configurations are
compatible.
@@ -90,6 +97,6 @@ We don't provide a convenient way to uninstall a Zulip server.
## No support, but contributions welcome!
Most of the limitations are things we'd accept a pull request to fix;
we welcome contributions to shrink this list of gotchas. Chat with us
in the [chat.zulip.org community](../contributing/chat-zulip-org.md) if you're
interested in helping!

View File

@@ -14,28 +14,28 @@ you can create a test organization at <https://zulip.com/new>.
## Step 1: Download the latest release
Download and unpack [the latest server
release](https://download.zulip.com/server/zulip-server-latest.tar.gz)
(**Zulip Server {{ LATEST_RELEASE_VERSION }}**) with the following commands:
```bash
cd $(mktemp -d)
wget https://download.zulip.com/server/zulip-server-latest.tar.gz
tar -xf zulip-server-latest.tar.gz
```
- If you'd like to verify the download, we
[publish the sha256sums of our release tarballs](https://download.zulip.com/server/SHA256SUMS.txt);
see the example after this list.
- You can also
[install a pre-release version of Zulip](../production/deployment.html#installing-zulip-from-git)
using code from our [repository on GitHub](https://github.com/zulip/zulip/).
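
For example, a minimal verification sketch (compare the printed digest
by hand against the entry for this release in the published checksum
list):

```bash
sha256sum zulip-server-latest.tar.gz
# Then compare the output against
# https://download.zulip.com/server/SHA256SUMS.txt
```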
## Step 2: Install Zulip
To set up Zulip with the most common configuration, you can run the
installer as follows:
```bash
sudo -s # If not already root
./zulip-server-*/scripts/setup/install --certbot \
--email=YOUR_EMAIL --hostname=YOUR_HOSTNAME
@@ -48,24 +48,26 @@ If the script gives an error, consult [Troubleshooting](#troubleshooting) below.
#### Installer options
- `--email=you@example.com`: The email address of the person or team
who should get support and error emails from this Zulip server.
This becomes `ZULIP_ADMINISTRATOR` ([docs][doc-settings]) in the
Zulip settings.
- `--hostname=zulip.example.com`: The user-accessible domain name for
this Zulip server, i.e., what users will type in their web browser.
This becomes `EXTERNAL_HOST` ([docs][doc-settings]) in the Zulip
settings.
- `--self-signed-cert`: With this option, the Zulip installer
generates a self-signed SSL certificate for the server. This isn't
suitable for production use, but may be convenient for testing.
- `--certbot`: With this option, the Zulip installer automatically
obtains an SSL certificate for the server [using
Certbot][doc-certbot], and configures a cron job to renew the
certificate automatically. If you'd prefer to acquire an SSL
certificate yourself in any other way, it's easy to [provide it to
Zulip][doc-ssl-manual].
You can see the more advanced installer options in our [deployment options][doc-deployment-options]
documentation.
@@ -77,7 +79,7 @@ documentation.
## Step 3: Create a Zulip organization, and log in
On success, the install script prints a link. If you're [restoring a
On success, the install script prints a link. If you're [restoring a
or another Zulip server, you should stop here
and return to the import instructions.
@@ -85,59 +87,61 @@ and return to the import instructions.
[slack-import]: https://zulip.com/help/import-from-slack
[zulip-backups]: ../production/export-and-import.html#backups
Otherwise, open the link in a browser. Follow the prompts to set up
your organization, and your own user account as an administrator.
Then, log in!
The link is a secure one-time-use link. If you need another
later, you can generate a new one by running
`manage.py generate_realm_creation_link` on the server. See also our
doc on running [multiple organizations on the same
server](multiple-organizations.md) if that's what you're planning to
do.
## Step 4: Configure and use
To really see Zulip in action, you'll need to get the people you work
together with using it with you.
- [Set up outgoing email](email.md) so Zulip can confirm new users'
email addresses and send notifications.
- Learn how to [get your organization started][realm-admin-docs] using
Zulip at its best.
Learning more:
- Subscribe to the [Zulip announcements email
list](https://groups.google.com/forum/#!forum/zulip-announce) for
server administrators. This extremely low-traffic list is for
important announcements, including [new
releases](../overview/release-lifecycle.md) and security issues. You
can also use the [RSS
feed](https://groups.google.com/forum/#!aboutgroup/zulip-announce).
- Follow [Zulip on Twitter](https://twitter.com/zulip).
- Learn how to [configure your Zulip server settings](settings.md).
- Learn about [Backups, export and import](../production/export-and-import.md)
and [upgrading](../production/upgrade-or-modify.md) a production Zulip
server.
[realm-admin-docs]: https://zulip.com/help/getting-your-organization-started-with-zulip
(installer-details)=
## Details: What the installer does
The install script does several things:
- Creates the `zulip` user, which the various Zulip servers will run as.
- Creates `/home/zulip/deployments/`, which the Zulip code for this
deployment (and future deployments when you upgrade) goes into. At the
very end of the install process, the script moves the Zulip code tree
it's running from (which you unpacked from a tarball above) to a
directory there, and makes `/home/zulip/deployments/current` a
symbolic link to it.
- Installs Zulip's various dependencies.
- Configures the various third-party services Zulip uses, including
PostgreSQL, RabbitMQ, Memcached and Redis.
- Initializes Zulip's database.
If you'd like to deploy Zulip with these services on different
machines, check out our [deployment options documentation](deployment.md).
@@ -145,20 +149,20 @@ machines, check out our [deployment options documentation](deployment.md).
## Troubleshooting
**Install script.**
The Zulip install script is designed to be idempotent. This means
that if it fails, then once you've corrected the cause of the failure,
you can just rerun the script.
The install script automatically logs a transcript to
`/var/log/zulip/install.log`. In case of failure, you might find the
log handy for resolving the issue. Please include a copy of this log
file in any bug reports.
**The `zulip` user's password.**
By default, the `zulip` user doesn't
have a password, and is intended to be accessed by `su zulip` from the
`root` user (or via SSH keys or a password, if you want to set those
up, but that's up to you as the system administrator). Most people
who are prompted for a password when running `su zulip` turn out to
already have switched to the `zulip` user earlier in their session,
and can just skip that step.
@@ -172,7 +176,7 @@ how to debug.
**Community.** If the tips above don't help, please visit [#production
help][production-help] in the [Zulip development community
server][chat-zulip-org] for realtime help, and we'll try to help you
out! Please provide details like the full traceback from the bottom
of `/var/log/zulip/errors.log` in your report (ideally in a [code
block][code-block]).

View File

@@ -1,4 +1,4 @@
```{eval-rst}
:orphan:
```
@@ -8,6 +8,7 @@ This was once a long page covering a bunch of topics; those topics
have since all moved to dedicated pages:
### Monitoring
Moved to [Troubleshooting](../production/troubleshooting.html#monitoring).
### Securing your Zulip server

View File

@@ -1,16 +1,16 @@
# Management commands
Sometimes, you need to modify or inspect Zulip data from the command
line. To help with this, Zulip ships with over 100 command-line tools
implemented using the [Django management commands
framework][django-management].
## Running management commands
Start by logging in as the `zulip` user on the Zulip server. Then run
them as follows:
```bash
cd /home/zulip/deployments/current
# Start by reading the help
@@ -25,7 +25,7 @@ primarily want to use those in the `[zerver]` section as those are the
ones specifically built for Zulip.
As a warning, some of them are designed for specific use cases and may
cause problems if run in other situations. If you're not sure, it's
worth reading the documentation (or the code, usually available at
`zerver/management/commands/`; they're generally very simple programs).
@@ -39,7 +39,7 @@ string ID (usually the subdomain).
You can see all the organizations on your Zulip server using
`./manage.py list_realms`.
```console
zulip@zulip:~$ /home/zulip/deployments/current/manage.py list_realms
id string_id name
-- --------- ----
@@ -54,14 +54,14 @@ unlikely to ever need to interact with that realm.)
Unless you are
[hosting multiple organizations on your Zulip server](../production/multiple-organizations.md),
your single Zulip organization on the root domain will have the empty
string (`''`) as its `string_id`. So you can run e.g.:
```console
zulip@zulip:~$ /home/zulip/deployments/current/manage.py show_admins -r ''
```
Otherwise, the `string_id` will correspond to the organization's
subdomain. E.g. on `it.zulip.example.com`, use
`/home/zulip/deployments/current/manage.py show_admins -r it`.
## manage.py shell
@@ -73,7 +73,7 @@ You can get an IPython shell with full access to code within the Zulip
project using `manage.py shell`, e.g., you can do the following to
change a user's email address:
```console
$ cd /home/zulip/deployments/current/
$ ./manage.py shell
In [1]: user_profile = get_user_profile_by_email("email@example.com")
@@ -86,11 +86,11 @@ formatting data from Zulip's tables for inspection; Zulip's own
you understand how the codebase is organized.
We recommend against directly editing objects and saving them using
Django's `object.save()`. While this will save your changes to the
database, for most objects, in addition to saving the changes to the
database, one may also need to flush caches, notify the apps and open
browser windows, and record the change in Zulip's `RealmAuditLog`
audit history table. For almost any data change you want to do, there
is already a function in `zerver.lib.actions.py` with a name like
`do_change_full_name` that updates that field and notifies clients
correctly.
@@ -102,36 +102,39 @@ access other functions, you'll need to import them yourself.
## Other useful manage.py commands
There are dozens of useful management commands under
`zerver/management/commands/`. We detail a few here:
- `./manage.py help`: Lists all available management commands.
- `./manage.py dbshell`: If you're more comfortable with raw SQL than
Python, this will open a PostgreSQL SQL shell connected to the Zulip
server's database. Beware of changing data; editing data directly
with SQL will often not behave correctly because PostgreSQL doesn't
know to flush Zulip's caches or notify browsers of changes.
- `./manage.py send_custom_email`: Can be used to send an email to a set
of users. The `--help` documents how to run it from a
`manage.py shell` for use with more complex programmatically
computed sets of users.
- `./manage.py send_password_reset_email`: Sends password reset email(s)
to one or more users.
- `./manage.py change_realm_subdomain`: Change subdomain of a realm.
- `./manage.py change_user_email`: Change a user's email address.
- `./manage.py change_user_role`: Can change a user's role
(easier done [via the
UI](https://zulip.com/help/change-a-users-role)) or give bots the
`can_forge_sender` permission, which is needed for certain special API features.
- `./manage.py export_single_user`: Does a limited version of the [main
export tools](../production/export-and-import.md) containing just
the messages accessible by a single user.
- `./manage.py reactivate_realm`: Reactivates a realm.
- `./manage.py deactivate_user`: Deactivates a user. This can be done
more easily in Zulip's organization administrator UI.
- `./manage.py delete_user`: Completely delete a user from the database.
For most purposes, deactivating users is preferred, since that does not
alter message history for other users.
See the `./manage.py delete_user --help` documentation for details.
- `./manage.py clear_auth_rate_limit_history`: If a user failed authentication
attempts too many times and further attempts are disallowed by the rate limiter,
this can be used to reset the limit.
All of our management commands have internal documentation available
via `manage.py command_name --help`.
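
For example (a sketch):

```bash
cd /home/zulip/deployments/current
./manage.py help                      # list every available command
./manage.py send_custom_email --help  # built-in documentation for one command
```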
@@ -141,17 +144,17 @@ via `manage.py command_name --help`.
Zulip supports several mechanisms for running custom code on a
self-hosted Zulip server:
- Using an existing [integration][integrations] or writing your own
[webhook integration][webhook-integrations] or [bot][writing-bots].
- Writing a program using the [Zulip API][zulip-api].
- [Modifying the Zulip server][modifying-zulip].
- Using the interactive [management shell](#manage-py-shell),
documented above, for one-time work or prototyping.
- Writing a custom management command, detailed here.
Custom management commands are Python 3 programs that run inside
Zulip's context, so that they can access its libraries, database, and
code freely. They can be the best choice when you want to run custom
code that is not permitted by Zulip's security model (and thus can't
be done more easily using the [REST API][zulip-api]) and that you
might want to run often (and so the interactive `manage.py shell` is

View File

@@ -2,47 +2,48 @@
Zulip's iOS and Android mobile apps support receiving push
notifications from Zulip servers to let users know when new messages
have arrived. This is an important feature to having a great
experience using the Zulip mobile apps.
For technical reasons (explained below), in order to deliver mobile
push notifications in the app store versions of our mobile apps, you
will need to register your Zulip server with the Zulip mobile push
notification service. This service will forward push notifications
generated by your server to the Zulip mobile app automatically.
## How to sign up
Starting with Zulip 1.6 for both Android and iOS, Zulip servers
support forwarding push notifications to a central push notification
forwarding service. Accessing this service requires outgoing HTTPS
access to the public Internet; if that is restricted by a proxy, you
will need to [configure Zulip to use your outgoing HTTP
proxy](../production/deployment.html#customizing-the-outgoing-http-proxy)
first.
You can enable this for your Zulip server as follows:
1. Uncomment the
`PUSH_NOTIFICATION_BOUNCER_URL = 'https://push.zulipchat.com'` line
in your `/etc/zulip/settings.py` file (i.e. remove the `#` at the
start of the line), and [restart your Zulip
server](../production/settings.html#making-changes). If you
installed your Zulip server with a version older than 1.6, you'll
need to add the line (it won't be there to uncomment).
1. If you're running Zulip 1.8.1 or newer, you can run the
registration command:
```bash
# As root:
su zulip -c '/home/zulip/deployments/current/manage.py register_server'
# Or as the zulip user, you can skip the `su zulip -c`:
/home/zulip/deployments/current/manage.py register_server
# docker-zulip users can run this inside the container with `docker exec`:
docker exec -it -u zulip <container_name> /home/zulip/deployments/current/manage.py register_server
```
This command will print the registration data it would send to the
mobile push notifications service, ask you to accept the terms of
service, and if you accept, register your server. If you have trouble,
@@ -52,24 +53,24 @@ You can enable this for your Zulip server as follows:
you'll each need to log out and log back in again in order to start
getting push notifications.
Congratulations! You've successfully set up the service.
If you'd like to verify that everything is working, you can do the
following. Please follow the instructions carefully:
- [Configure mobile push notifications to always be sent][mobile-notifications-always]
(normally they're only sent if you're idle, which isn't ideal for
this sort of testing).
- On an Android device, download and log in to the
[Zulip Android app](https://play.google.com/store/apps/details?id=com.zulipmobile).
If you were already logged in before configuring the server, you'll
need to log out first, since the app only registers for push
notifications on login.
- Hit the home button, so Zulip is running in the background, and then
have **another user** send you a **private message** (By default,
Zulip only sends push notifications for private messages sent by other
users and messages mentioning you). A push notification should appear
in the Android notification area.
[mobile-notifications-always]: https://zulip.com/help/test-mobile-notifications
@@ -78,23 +79,23 @@ in the Android notification area.
Your server's registration includes the server's hostname and contact
email address (from `EXTERNAL_HOST` and `ZULIP_ADMINISTRATOR` in
`/etc/zulip/settings.py`, aka the `--hostname` and `--email` options
in the installer). You can update your server's registration data by
running `manage.py register_server` again.
If you'd like to rotate your server's API key for this service
(`zulip_org_key`), you need to use the
`manage.py register_server --rotate-key` option; it will automatically
generate a new `zulip_org_key` and store that new key in
`/etc/zulip/zulip-secrets.conf`.
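
For example (a sketch; run as the `zulip` user):

```bash
/home/zulip/deployments/current/manage.py register_server --rotate-key
```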
## Why this is necessary
Both Google's and Apple's push notification services have a security
model that does not support mutually untrusted self-hosted servers
sending push notifications to the same app. In particular, when an
app is published to their respective app stores, one must compile into
the app a secret corresponding to the server that will be able to
publish push notifications for the app. This means that it is
impossible for a single app in their stores to receive push
notifications from multiple, mutually untrusted, servers.
@@ -105,39 +106,42 @@ forwarding service).
## Security and privacy
Use of the push notification bouncer is subject to the Zulip Cloud [Terms of
Service](https://zulip.com/policies/terms), [Privacy
Policy](https://zulip.com/policies/privacy) and [Rules of
Use](https://zulip.com/policies/rules). By using push notifications, you agree
to these terms.
We've designed this push notification bouncer service with security
and privacy in mind:
- A central design goal of the Push Notification Service is to
avoid any message content being stored or logged by the service,
even in error cases.
- The Push Notification Service only stores the necessary metadata for
delivering the notifications to the appropriate devices, and nothing
else:
- The APNS/FCM tokens needed to securely send mobile push
notifications to iOS and Android devices, one per device
registered to be notified by your Zulip server.
- User ID numbers generated by your Zulip server, needed to route
a given notification to the appropriate set of mobile devices.
These user ID numbers are opaque to the Push Notification
Service and Kandra Labs.
- The Push Notification Service receives (but does not store) the
contents of individual mobile push notifications:
  - The numeric message ID generated by your Zulip server.
  - Metadata on the message's sender (name and avatar URL).
  - Metadata on the message's recipient (stream name + ID, topic,
    private message recipients, etc.).
  - A timestamp.
  - The message's content.
There's a `PUSH_NOTIFICATION_REDACT_CONTENT` setting available to
disable any message content being sent via the push notification
bouncer (i.e. message content will be replaced with
`***REDACTED***`). Note that this setting makes push notifications
significantly less usable.
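A minimal sketch of enabling that setting, assuming the standard
`/etc/zulip/settings.py` location and the restart command shown elsewhere in
these docs:

```bash
# Replace message content with ***REDACTED*** in notifications sent via the bouncer.
echo 'PUSH_NOTIFICATION_REDACT_CONTENT = True' | sudo tee -a /etc/zulip/settings.py
# Restart so the new setting takes effect.
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```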
We plan to
@@ -145,14 +149,15 @@ and privacy in mind:
which would eliminate that usability tradeoff and additionally allow
us to not have any access to the other details mentioned in this
section.
- All of the network requests (both from Zulip servers to the Push
Notification Service and from the Push Notification Service to the
relevant Google and Apple services) are encrypted over the wire with
SSL/TLS.
- The code for the push notification forwarding service is 100% open
source and available as part of the
[Zulip server project on GitHub](https://github.com/zulip/zulip).
- The push notification forwarding servers are professionally managed
by a small team of security expert engineers.
If you have any questions about the security model, contact
@@ -167,10 +172,11 @@ Zulip open source project understand how many people are using Zulip,
and help us allocate resources towards supporting self-hosted
installations.
Our use of these statistics is governed by the same [Terms of
Service](https://zulip.com/policies/terms) and [Privacy
Policy](https://zulip.com/policies/privacy) that cover the Mobile Push
Notifications Service itself. If your organization does not want to submit these
statistics, you can disable this feature at any time by setting
`SUBMIT_USAGE_STATISTICS=False` in `/etc/zulip/settings.py`.
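For example, a sketch using the same pattern as the other settings on this page:

```bash
# Opt out of submitting aggregate usage statistics to the service.
echo 'SUBMIT_USAGE_STATISTICS = False' | sudo tee -a /etc/zulip/settings.py
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```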
## Sending push notifications directly from your server
@@ -182,18 +188,18 @@ Zulip mobile apps.
We don't recommend this path -- patching and shipping a production
mobile app can take dozens of hours to set up even for an experienced
developer, and even more time to maintain. And it doesn't provide
material privacy benefits -- your organization's push notification
data would still go through Apple/Google's servers, just not Kandra
Labs'. Our view is that the correct way to optimize for privacy is
end-to-end encryption of push notifications. But in the interest of
transparency, we document in this section roughly what's involved in
doing so.
As we discussed above, it is impossible for a single app in their
stores to receive push notifications from multiple, mutually
untrusted, servers. The Mobile Push Notification Service is one of
the possible solutions to this problem. The other possible solution
is for an individual Zulip server's administrators to build and
distribute their own copy of the Zulip mobile apps, hardcoding a key
that they possess.
@@ -204,8 +210,8 @@ the Zulip mobile apps (and there's nothing the Zulip team can do to
eliminate this onerous requirement).
The main work is distributing your own copies of the Zulip mobile apps
configured to use APNS/FCM keys that you generate. This is not for
the faint of heart! If you haven't done this before, be warned that
one can easily spend hundreds of dollars (on things like a DUNS number
registration) and a week struggling through the hoops Apple requires
to build and distribute an app through the Apple app store, even if
@@ -216,17 +222,18 @@ the app stores yourself.
If you've done that work, the Zulip server configuration for sending
push notifications through the new app is quite straightforward:
- Create a
[FCM push notifications](https://firebase.google.com/docs/cloud-messaging)
key in the Google Developer console and set `android_gcm_api_key` in
`/etc/zulip/zulip-secrets.conf` to that key.
- Register for a
[mobile push notification certificate][apple-docs]
from Apple's developer console. Set `APNS_SANDBOX=False` and
`APNS_CERT_FILE` to be the path of your APNS certificate file in
`/etc/zulip/settings.py`.
- Set the `APNS_TOPIC` and `ZULIP_IOS_APP_ID` settings to the ID for
your app (for the official Zulip apps, they are both `org.zulip.Zulip`).
- Restart the Zulip server.
[apple-docs]: https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html
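Putting those steps together, a hypothetical configuration sketch (the key,
certificate path, and app ID values are placeholders you would replace with
your own):

```bash
# The FCM server key is a secret, so it goes in the secrets file.
echo 'android_gcm_api_key = YOUR_FCM_SERVER_KEY' | sudo tee -a /etc/zulip/zulip-secrets.conf

# The APNs certificate path and app IDs go in settings.py.
sudo tee -a /etc/zulip/settings.py <<'EOF'
APNS_SANDBOX = False
APNS_CERT_FILE = "/etc/zulip/apns-cert.pem"
APNS_TOPIC = "com.example.YourZulipApp"
ZULIP_IOS_APP_ID = "com.example.YourZulipApp"
EOF

su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```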


@@ -1,16 +1,16 @@
```{eval-rst}
:orphan:
```
# Hosting multiple organizations
The vast majority of Zulip servers host just a single organization (or
"realm", as the Zulip code calls organizations). This article
documents what's involved in hosting multiple Zulip organizations on a
single server.
Throughout this article, we'll assume you're working on a Zulip server
with hostname `zulip.example.com`. You may also find the more
[technically focused article on realms](../subsystems/realms.md) to be useful
reading.
@@ -18,7 +18,7 @@ reading.
Zulip's approach for supporting multiple organizations on a single
Zulip server is for each organization to be hosted on its own
subdomain. E.g. you'd have `org1.zulip.example.com` and
`org2.zulip.example.com`.
Web security standards mean that one subdomain per organization is
@@ -28,36 +28,36 @@ server at the same time.
When you want to create a new organization, you need to do a few
things:
- If you're using Zulip older than 1.7, you'll need to set
  `REALMS_HAVE_SUBDOMAINS=True` in your `/etc/zulip/settings.py`
  file. That setting is the default in 1.7 and later.
- Make sure you have SSL certificates for all of the subdomains you're
  going to use. If you're using
  [our Let's Encrypt instructions](ssl-certificates.md), it's easy to
  just specify multiple subdomains in your certificate request.
- If necessary, modify your `nginx` configuration to use your new
  certificates.
- Use `./manage.py generate_realm_creation_link` again to create your
  new organization. Review
  [the install instructions](install.md) if you need a
  refresher on how this works (see the example commands after this list).
- If you're planning on using GitHub auth or another social
  authentication method, review
  [the notes on `SOCIAL_AUTH_SUBDOMAIN` below](#authentication).
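As a concrete sketch of the certificate and realm-creation steps (hostnames
and the email address are examples; `setup-certbot` usage follows the SSL
documentation linked above):

```bash
# Request (or extend) a certificate covering each organization's subdomain.
sudo /home/zulip/deployments/current/scripts/setup/setup-certbot \
    --email=admin@example.com zulip.example.com org1.zulip.example.com org2.zulip.example.com

# Generate a one-time link for creating the new organization.
su zulip -c '/home/zulip/deployments/current/manage.py generate_realm_creation_link'
```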
For servers hosting a large number of organizations, like
[zulip.com](https://zulip.com), one can set
`ROOT_DOMAIN_LANDING_PAGE = True` in `/etc/zulip/settings.py` so that
the homepage for the server is a copy of the Zulip homepage.
### SSL certificates
You'll need to install an SSL certificate valid for all the
(sub)domains you're using your Zulip server with. You can get an SSL
certificate covering several domains for free by using
[our Certbot wrapper tool](../production/ssl-certificates.html#after-zulip-is-already-installed),
though if you're going to host a large number of organizations, you
may want to get a wildcard certificate. You can also get a wildcard
certificate for
[free using Certbot](https://community.letsencrypt.org/t/getting-wildcard-certificates-with-certbot/56285),
but because of the stricter security checks for acquiring a wildcard
@@ -71,7 +71,7 @@ If you'd like to use hostnames that are not subdomains of each other,
you can set the `REALM_HOSTS` setting in `/etc/zulip/settings.py` to a
Python dictionary, like this:
```python
REALM_HOSTS = {
'mysubdomain': 'hostname.example.com',
}
@@ -86,7 +86,7 @@ into the database.
### The root domain
Most Zulip servers host a single Zulip organization on the root domain
(e.g. `zulip.example.com`). The way this is implemented internally
involves the organization having the empty string (`''`) as its
"subdomain".
@@ -95,7 +95,7 @@ on subdomains (e.g. `subdivision.zulip.example.com`), but this only
works well if there are no users in common between the two
organizations, because the auth cookies for the root domain are
visible to the subdomain (so it's not possible for a single
browser/client to be logged into both). So we don't recommend that
configuration.
### Authentication
@@ -103,7 +103,7 @@ configuration.
Many of Zulip's supported authentication methods (Google, GitHub,
SAML, etc.) can require providing the third-party authentication
provider with a whitelist of callback URLs to your Zulip server (or
even a single URL). For those vendors that support a whitelist, you
can provide the callback URLs for each of your Zulip organizations.
The cleaner solution is to register a special subdomain, e.g.
@@ -118,10 +118,10 @@ avoid confusion as to why there's an extra realm when inspecting the
Zulip database.
Every Zulip server comes with 1 realm that isn't created by users: the
`zulipinternal` realm. By default, this realm only contains the Zulip "system
bots". You can get a list of these on your system via
`./scripts/get-django-setting INTERNAL_BOTS`, but this is where bots
like "Notification Bot", "Welcome Bot", etc. exist. In the future,
we're considering moving these bots to exist in every realm, so that
we wouldn't need the system realm anymore.


@@ -1,4 +1,4 @@
```{eval-rst}
:orphan:
```
@@ -8,18 +8,18 @@ When a user tries to set a password, we use [zxcvbn][zxcvbn] to check
that it isn't a weak one.
See discussion in [our main docs for server
admins](../production/security-model.html#passwords). This doc explains in more
detail how we set the default threshold (`PASSWORD_MIN_GUESSES`) we use.
First, read the doc section there. (It's short.)
Then, the CACM article ["Passwords and the Evolution of Imperfect
Authentication"][bhos15] is comprehensive, educational, and readable,
and is especially recommended.
The CACM article is convincing that password requirements should be
set to make passwords withstand an online attack, but not an offline
one. Offline attacks are much less common, and there is a wide gap in
the level of password strength required to beat them vs that for
online attacks -- and therefore in the level of user frustration that
such a requirement would cause.
@@ -36,9 +36,9 @@ overestimation (allowing a weak password) sharply degrades at 100k
guesses, while underestimation (rejecting a strong password) jumps up
just after 10k guesses, and grows steadily thereafter.
Moreover, the [Yahoo study][bon12] shows that resistance to even 1M
guesses is more than nearly half of users accomplish with a freely
chosen password, and 100k is too much for about 20%. (See Figure 6.)
It doesn't make sense for a Zulip server to try to educate or push so
many users far beyond the security practices they're accustomed to; in
the few environments where users can be expected to work much harder
@@ -49,11 +49,11 @@ auth in Zulip entirely in favor of using that.
Our threshold of 10k guesses provides significant protection against
online attacks, and quite strong protection with appropriate
rate-limiting. On the other hand, it stays within the range where
zxcvbn rarely underestimates the strength of a password too severely,
and only about 10% of users do worse than this without prompting.
[zxcvbn]: https://github.com/dropbox/zxcvbn
[bhos15]: https://www.cl.cam.ac.uk/~fms27/papers/2015-BonneauHerOorSta-passwords.pdf
[zxcvbn-paper]: https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_wheeler.pdf
[bon12]: https://ieeexplore.ieee.org/document/6234435


@@ -1,13 +1,12 @@
# PostgreSQL database details
Starting with Zulip 3.0, Zulip supports a range of PostgreSQL
versions. PostgreSQL 13 is the current default for new installations;
PostgreSQL 10, 11, and 12 are all supported.
Previous versions of Zulip used whatever version of PostgreSQL was
included with the base operating system (e.g. PostgreSQL 12 on Ubuntu
Focal, 10 on Ubuntu Bionic, and 9.6 on Ubuntu Xenial). We recommend
that installations currently using older PostgreSQL releases [upgrade
to PostgreSQL 13][upgrade-postgresql], as we may drop support for
older PostgreSQL in a future release.
@@ -32,34 +31,34 @@ called "zulip" in your database server. You can configure these
options in `/etc/zulip/settings.py` (the below descriptions are from the
PostgreSQL documentation):
- `REMOTE_POSTGRES_HOST`: Name or IP address of the remote host
- `REMOTE_POSTGRES_SSLMODE`: SSL mode used to connect to the server;
  the different options you can use are:
  - disable: I don't care about security, and I don't want to pay the
    overhead of encryption.
  - allow: I don't care about security, but I will pay the overhead of
    encryption if the server insists on it.
  - prefer: I don't care about encryption, but I wish to pay the
    overhead of encryption if the server supports it.
  - require: I want my data to be encrypted, and I accept the
    overhead. I trust that the network will make sure I always connect
    to the server I want.
  - verify-ca: I want my data encrypted, and I accept the overhead. I
    want to be sure that I connect to a server that I trust.
  - verify-full: I want my data encrypted, and I accept the
    overhead. I want to be sure that I connect to a server I trust,
    and that it's the one I specify.
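For example, a hypothetical remote-database configuration (the hostname is a
placeholder) might append the following to `/etc/zulip/settings.py`:

```bash
sudo tee -a /etc/zulip/settings.py <<'EOF'
REMOTE_POSTGRES_HOST = "postgres.example.com"
REMOTE_POSTGRES_SSLMODE = "verify-full"
EOF
```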
Then you should specify the password of the `zulip` database user in
`/etc/zulip/zulip-secrets.conf`:
```ini
postgres_password = xxxx
```
Finally, you can stop the local PostgreSQL service on the Zulip server via:
```bash
sudo service postgresql stop
sudo update-rc.d postgresql disable
```
@@ -76,7 +75,7 @@ can give you some tips.
When debugging PostgreSQL issues, in addition to the standard `pg_top`
tool, often it can be useful to use this query:
```postgresql
SELECT procpid,waiting,query_start,current_query FROM pg_stat_activity ORDER BY procpid;
```
@@ -84,20 +83,21 @@ which shows the currently running backends and their activity. This is
similar to the pg_top output, with the added advantage of showing the
complete query, which can be valuable in debugging.
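Note that `procpid`, `waiting`, and `current_query` are column names from
PostgreSQL releases older than 9.2; on the versions supported by current
Zulip, a roughly equivalent query (run here via `psql`) would be:

```bash
sudo -u postgres psql -c \
  "SELECT pid, state, query_start, query FROM pg_stat_activity ORDER BY pid;"
```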
To stop a runaway query, you can run
`SELECT pg_cancel_backend(pid int)` or
`SELECT pg_terminate_backend(pid int)` as the 'postgres' user. The
former cancels the backend's current query and the latter terminates
the backend process. They are implemented by sending SIGINT and
SIGTERM to the processes, respectively. We recommend against sending
a PostgreSQL process SIGKILL. Doing so will cause the database to kill
all current connections, roll back any pending transactions, and enter
recovery mode.
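For example, using a placeholder backend pid taken from the
`pg_stat_activity` output above:

```bash
# Politely cancel the current query of backend 12345...
sudo -u postgres psql -c "SELECT pg_cancel_backend(12345);"
# ...or terminate that backend entirely if cancelling is not enough.
sudo -u postgres psql -c "SELECT pg_terminate_backend(12345);"
```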
#### Stopping the Zulip PostgreSQL database
To start or stop PostgreSQL manually, use the pg_ctlcluster command:
```bash
pg_ctlcluster 9.1 [--force] main {start|stop|restart|reload}
```
@@ -107,15 +107,15 @@ prohibitively long. If you use the --force option with stop,
pg_ctlcluster will try to use the "fast" mode for shutting
down. "Fast" mode is described by the manpage thusly:
> With the --force option the "fast" mode is used which rolls back all
> active transactions, disconnects clients immediately and thus shuts
> down cleanly. If that does not work, shutdown is attempted again in
> "immediate" mode, which can leave the cluster in an inconsistent state
> and thus will lead to a recovery run at the next start. If this still
> does not help, the postmaster process is killed. Exits with 0 on
> success, with 2 if the server is not running, and with 1 on other
> failure conditions. This mode should only be used when the machine is
> about to be shut down.
Many database parameters can be adjusted while the database is
running. Just modify /etc/postgresql/9.1/main/postgresql.conf and
@@ -128,7 +128,7 @@ database failed to start. It may tell you to check the logs, but you
won't find any information there. pg_ctlcluster runs the following
command underneath when it actually goes to start PostgreSQL:
```bash
/usr/lib/postgresql/9.1/bin/pg_ctl start -D /var/lib/postgresql/9.1/main -s -o \
'-c config_file="/etc/postgresql/9.1/main/postgresql.conf"'
```
@@ -139,13 +139,12 @@ stop PostgreSQL and restart it using pg_ctlcluster after you've debugged
with this approach, since it does bypass some of the work that
pg_ctlcluster does.
#### PostgreSQL vacuuming alerts
The `autovac_freeze` PostgreSQL alert from `check_postgres` is
particularly important. This alert indicates that the age (in terms
of number of transactions) of the oldest transaction id (XID) is
getting close to the `autovacuum_freeze_max_age` setting. When the
oldest XID hits that age, PostgreSQL will force a VACUUM operation,
which can often lead to sudden downtime until the operation finishes.
If it did not do this and the age of the oldest XID reached 2 billion,


@@ -1,65 +1,70 @@
# Requirements and scalability
To run a Zulip server, you will need:
- A dedicated machine or VM
- A supported OS:
  - Ubuntu 20.04 Focal
  - Ubuntu 18.04 Bionic
  - Debian 11 Bullseye
  - Debian 10 Buster
- At least 2GB RAM, and 10GB disk space
  - If you expect 100+ users: 4GB RAM, and 2 CPUs
  - If you intend to [upgrade from Git][upgrade-from-git]: 3GB RAM, or
    2G and at least 1G of swap configured.
- A hostname in DNS
- Credentials for sending email
For details on each of these requirements, see below.
[upgrade-from-git]: ../production/upgrade-or-modify.html#upgrading-from-a-git-repository
## Server
#### General
The installer expects Zulip to be the **only thing** running on the
system; it will install system packages with `apt` (like Nginx,
PostgreSQL, and Redis) and configure them for its own use. We
strongly recommend using either a fresh machine instance in a cloud
provider, a fresh VM, or a dedicated machine. If you decide to
disregard our advice and use a server that hosts other services, we
can't support you, but
[we do have some notes on issues you'll encounter](install-existing-server.md).
#### Operating system
Ubuntu 20.04 Focal, 18.04 Bionic, Debian 11 Bullseye, and Debian 10 Buster
are supported for running Zulip in production. 64-bit is recommended.
We recommend installing on the newest supported OS release you're
comfortable with, to save a bit of future work [upgrading the operating
system][upgrade-os].
If you're using Ubuntu, the
[Ubuntu universe repository][ubuntu-repositories] must be
[enabled][enable-universe], which is usually just:
```bash
sudo add-apt-repository universe
sudo apt update
```
[upgrade-os]: ../production/upgrade-or-modify.html#upgrading-the-operating-system
[ubuntu-repositories]: https://help.ubuntu.com/community/Repositories/Ubuntu
[enable-universe]: https://help.ubuntu.com/community/Repositories/CommandLine#Adding_the_Universe_and_Multiverse_Repositories
#### Hardware specifications
- CPU and memory: For installations with 100+ users you'll need a
minimum of **2 CPUs** and **4GB RAM**. For installations with fewer
users, 1 CPU and 2GB RAM is sufficient. We strongly recommend against
installing with less than 2GB of RAM, as you will likely experience
out of memory issues installing dependencies. We recommend against
using highly CPU-limited servers like the AWS `t2` style instances
for organizations with hundreds of users (active or no).
- Disk space: You'll need at least 10GB of free disk space for a
server with dozens of users. We recommend using an SSD and avoiding
cloud storage backends that limit the available IOPS, since the
disk is primarily used for the Zulip database.
@@ -68,45 +73,51 @@ on hardware requirements for larger organizations.
#### Network and security specifications
- Incoming HTTPS access (usually port 443, though this is
[configurable](../production/deployment.html#using-an-alternate-port))
from the networks where your users are (usually, the public
Internet).
- Incoming port 80 access (optional). Zulip only serves content over
HTTPS, and will redirect HTTP requests to HTTPS.
- Incoming port 25 if you plan to enable Zulip's [incoming email
integration](../production/email-gateway.md).
- Incoming port 4369 should be protected by a firewall to prevent
exposing `epmd`, an Erlang service which does not support binding
only to localhost. Leaving this exposed will allow unauthenticated
remote users to determine that the server is running RabbitMQ, and
on which port, though no further information is leaked.
- Outgoing HTTP(S) access (ports 80 and 443) to the public Internet so
that Zulip can properly manage image and website previews and mobile
push notifications. Outgoing Internet access is not required if you
[disable those
features](https://zulip.com/help/allow-image-link-previews).
- Outgoing SMTP access (usually port 587) to your [SMTP
server](../production/email.md) so that Zulip can send emails.
- A domain name (e.g. `zulip.example.com`) that your users will use to
access the Zulip server. In order to generate valid SSL
certificates [with Certbot][doc-certbot], and to enable other
services such as Google authentication, a public DNS name is simpler,
but Zulip can be configured to use a non-public domain or even an IP
address as its external hostname (though we don't recommend that
configuration).
- Zulip supports [running behind a reverse proxy][reverse-proxy].
- Zulip configures [Smokescreen, an outgoing HTTP
proxy][smokescreen-proxy], to protect against [SSRF attacks][ssrf],
which prevents users from using the Zulip server to make requests to
private resources. If your network has its own outgoing HTTP proxy,
Zulip supports using that instead.
[ssrf]: https://owasp.org/www-community/attacks/Server_Side_Request_Forgery
[smokescreen-proxy]: ../production/deployment.html#customizing-the-outgoing-http-proxy
[reverse-proxy]: ../production/deployment.html#putting-the-zulip-application-behind-a-reverse-proxy
[email-mirror-code]: https://github.com/zulip/zulip/blob/main/zerver/management/commands/email_mirror.py
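To illustrate the port requirements above, here is a minimal firewall sketch;
it assumes `ufw` (which is not installed or managed by Zulip) and omits the
optional ports (80 for redirects, 25 for the incoming email gateway):

```bash
sudo ufw allow OpenSSH    # don't lock yourself out of the machine
sudo ufw allow 443/tcp    # incoming HTTPS from your users
sudo ufw deny 4369/tcp    # keep epmd (RabbitMQ's port mapper) off the network
sudo ufw enable
```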
## Credentials needed
#### SSL certificate
Your Zulip server will need an SSL certificate for the domain name it
uses. For most Zulip servers, the recommended (and simplest) way to
get this is to just [use the `--certbot` option][doc-certbot] in the
Zulip installer, which will automatically get a certificate for you
and keep it renewed.
@@ -123,10 +134,10 @@ certificate documentation](ssl-certificates.md).
#### Outgoing email
- Outgoing email (SMTP) credentials that Zulip can use to send
outgoing emails to users (e.g. email address confirmation emails
during the signup process, message notification emails, password
reset, etc.). If you don't have an existing outgoing SMTP solution,
read about
[free outgoing SMTP options and options for prototyping](email.html#free-outgoing-email-services).
@@ -139,80 +150,82 @@ Zulip in production](../production/install.md).
This section details some basic guidelines for running a Zulip server
for larger organizations (especially >1000 users or 500+ daily active
users). Zulip's resource needs depend mainly on 3 parameters:
- daily active users (e.g. number of employees if everyone's an
  employee)
- total user accounts (can be much larger)
- message volume.
In the following, we discuss a configuration with at most two types of
servers: application servers (running Django, Tornado, RabbitMQ,
Redis, Memcached, etc.) and database servers. Of the application
server services, Django dominates the resource requirements. One can
run every service on its own system (as
[docker-zulip](https://github.com/zulip/docker-zulip) does) but for
most use cases, there's little scalability benefit to doing so. See
[deployment options](../production/deployment.md) for details on
installing Zulip with a dedicated database server.
- **Dedicated database**. For installations with hundreds of daily
active users, we recommend using a [remote PostgreSQL
database](postgresql.md), but it's not required.
- **RAM:** We recommend more RAM for larger installations:
  - With 25+ daily active users, 4GB of RAM.
  - With 100+ daily active users, 8GB of RAM.
  - With 400+ daily active users, 16GB of RAM for the Zulip
    application server, plus 16GB for the database.
  - With 2000+ daily active users, 32GB of RAM, plus 32GB for the
    database.
  - Roughly linear scaling beyond that.
- **CPU:** The Zulip application server's CPU usage is heavily
optimized due to extensive work on optimizing the performance of
requests for latency reasons. Because most servers with sufficient
RAM have sufficient CPU resources, CPU requirements are rarely an
issue. For larger installations with a dedicated database, we
recommend high-CPU instances for the application server and a
database-optimized (usually low CPU, high memory) instance for the
database.
- **Disk for application server:** We recommend using [the S3 file
uploads backend][s3-uploads] to store uploaded files at scale. With
the S3 backend configuration, we recommend 50GB of disk for the OS,
Zulip software, logs and scratch/free space. Disk needs when
storing uploads locally
- **Disk for database:** SSD disk is highly recommended. For
installations where most messages have <100 recipients, 10GB per 1M
messages of history plus 1GB per 1000 users is
sufficient. If most messages are to public streams with 10K+ users
subscribed (like on chat.zulip.org), add 20GB per (1000 user
accounts) per (1M messages to public streams).
- **Example:** When the
[chat.zulip.org](../contributing/chat-zulip-org.md) community server
had 12K user accounts (~300 daily actives) and 800K messages of
history (400K to public streams), it was a default configuration
single-server installation with 16GB of RAM, 4 cores (essentially
always idle), and its database was using about 100GB of disk.
- **Disaster recovery:** One can easily run a hot spare application
server and a hot spare database (using [PostgreSQL streaming
replication][streaming-replication]). Make sure the hot spare
application server has copies of `/etc/zulip` and you're either
syncing `LOCAL_UPLOADS_DIR` or using the [S3 file uploads
backend][s3-uploads].
- **Sharding:** Zulip releases do not fully support dividing Tornado
traffic for a single Zulip realm/organization between multiple
application servers, which is why we recommend a hot spare over
load-balancing. We don't have an easily deployed configuration for
load-balancing Tornado within a single organization, and as a result
can't currently offer this model outside of enterprise support
contracts.
  - Zulip 2.0 and later supports running multiple Tornado servers
sharded by realm/organization, which is how we scale Zulip Cloud.
[Contact us][contact-support] for help implementing the sharding policy.


@@ -1,6 +1,6 @@
# Security model
This section attempts to document the Zulip security model. It likely
does not cover every issue; if there are details you're curious about,
please feel free to ask questions in [#production
help](https://chat.zulip.org/#narrow/stream/31-production-help) on the
@@ -11,7 +11,7 @@ announcement).
## Secure your Zulip server like your email server
- It's reasonable to think about security for a Zulip server like you
do security for a team email server -- only trusted individuals
within an organization should have shell access to the server.
@@ -19,7 +19,7 @@ announcement).
or Zulip database server, or with access to the `zulip` user on a
Zulip application server, has complete control over the Zulip
installation and all of its data (so they can read messages, modify
history, etc.). It would be difficult or impossible to avoid this,
because the server needs access to the data to support features
expected of a group chat system like the ability to search the
entire message history, and thus someone with control over the
@@ -27,17 +27,17 @@ announcement).
## Encryption and authentication
- Traffic between clients (web, desktop and mobile) and the Zulip
server is encrypted using HTTPS. By default, all Zulip services
talk to each other either via a localhost connection or using an
encrypted SSL connection.
- Zulip requires CSRF tokens in all interactions with the web API to
prevent CSRF attacks.
- The preferred way to log in to Zulip is using an SSO solution like
Google auth, LDAP, or similar, but Zulip also supports password
authentication. See
[the authentication methods documentation](../production/authentication-methods.md)
for details on Zulip's available authentication methods.
@@ -46,16 +46,16 @@ announcement).
Zulip stores user passwords using the standard PBKDF2 algorithm.
When the user is choosing a password, Zulip checks the password's
strength using the popular [zxcvbn][zxcvbn] library. Weak passwords
are rejected, and strong passwords encouraged. The minimum password
strength allowed is controlled by two settings in
`/etc/zulip/settings.py`:
- `PASSWORD_MIN_LENGTH`: The minimum acceptable length, in characters.
Shorter passwords are rejected even if they pass the `zxcvbn` test
controlled by `PASSWORD_MIN_GUESSES`.
- `PASSWORD_MIN_GUESSES`: The minimum acceptable strength of the
password, in terms of the estimated number of passwords an attacker
is likely to guess before trying this one. If the user attempts to
set a password that `zxcvbn` estimates to be guessable in less than
@@ -70,10 +70,10 @@ strength allowed is controlled by two settings in
Estimating the guessability of a password is a complex problem and
impossible to efficiently do perfectly. For background or when
considering an alternate value for this setting, the article
["Passwords and the Evolution of Imperfect Authentication"][bhos15]
is recommended. The [2016 zxcvbn paper][zxcvbn-paper] adds useful
information about the performance of zxcvbn, and [a large 2012 study
of Yahoo users][bon12] is informative about the strength of the
passwords users choose.
<!---
@@ -86,59 +86,60 @@ strength allowed is controlled by two settings in
-->
[zxcvbn]: https://github.com/dropbox/zxcvbn
[bhos15]: http://www.cl.cam.ac.uk/~fms27/papers/2015-BonneauHerOorSta-passwords.pdf
[zxcvbn-paper]: https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_wheeler.pdf
[bon12]: http://ieeexplore.ieee.org/document/6234435/
## Messages and history
- Zulip message content is rendered using a specialized Markdown
parser which escapes content to protect against cross-site scripting
attacks.
- Zulip supports both public streams and private streams.
- Any non-guest user can join any public stream in the organization,
and can view the complete message history of any public stream
without joining the stream. Guests can only access streams that
another user adds them to.
- Organization owners and administrators can see and modify most
aspects of a private stream, including the membership and
estimated traffic. Owners and administrators generally cannot see
messages sent to private streams or do things that would
indirectly give them access to those messages, like adding members
or changing the stream privacy settings.
- Non-admins cannot easily see which private streams exist, or interact
with them in any way until they are added. Given a stream name, they can
figure out whether a stream with that name exists, but cannot see any
other details about the stream.
- See [Stream permissions](https://zulip.com/help/stream-permissions) for more details.
- Zulip supports editing the content and topics of messages that have
already been sent. As a general philosophy, our policies provide
hard limits on the ways in which message content can be changed or
undone. In contrast, our policies around message topics favor
usefulness (e.g. for conversational organization) over faithfulness
to the original. In all configurations:
  - Message content can only ever be modified by the original author.
  - Any message visible to an organization owner or administrator can
be deleted at any time by that administrator.
  - See
[Configuring message editing and deletion](https://zulip.com/help/configure-message-editing-and-deletion)
for more details.
## Users and bots
- There are several types of users in a Zulip organization: organization
owners, organization administrators, members (normal users), guests,
and bots.
- Owners and administrators have the ability to deactivate and
reactivate other human and bot users, archive streams, add/remove
administrator privileges, as well as change configuration for the
organization.
@@ -148,47 +149,47 @@ strength allowed is controlled by two settings in
streams to which the administrator is not subscribed. There are two
exceptions:
  - Organization owners may get access to private messages via some types of
[data export](https://zulip.com/help/export-your-organization).
  - Administrators can change the ownership of a bot. If a bot is subscribed
to a private stream, then an administrator can indirectly get access to
stream messages by taking control of the bot, though the access will be
limited to what the bot can do. (E.g. incoming webhook bots cannot read
messages.)
- Every Zulip user has an API key, available on the settings page.
This API key can be used to do essentially everything the user can
do; for that reason, users should keep their API key safe. Users
can rotate their own API key if it is accidentally compromised.
- To properly remove a user's access to a Zulip team, it does not
suffice to change their password or deactivate their account in a
SSO system, since neither of those prevents authenticating with the
user's API key or those of bots the user has created. Instead, you
should
[deactivate the user's account](https://zulip.com/help/deactivate-or-reactivate-a-user)
via Zulip's "Organization settings" interface.
- The Zulip mobile apps authenticate to the server by sending the
user's password and retrieving the user's API key; the apps then use
the API key to authenticate all future interactions with the site.
Thus, if a user's phone is lost, in addition to changing passwords,
you should rotate the user's Zulip API key.
- Guest users are like Members, but they do not have automatic access
to public streams.
- Zulip supports several kinds of bots with different capabilities.
  - Incoming webhook bots can only send messages into Zulip.
  - Outgoing webhook bots and Generic bots can essentially do anything a
non-administrator user can, with a few exceptions (e.g. a bot cannot
log in to the web application, register for mobile push
notifications, or create other bots).
  - Bots with the `can_forge_sender` permission can send messages that appear to have been sent by
another user. They also have the ability to see the names of all
streams, including private streams. This is important for implementing
integrations like the Jabber, IRC, and Zephyr mirrors.
These bots cannot be created by Zulip users, including
@@ -197,14 +198,14 @@ strength allowed is controlled by two settings in
## User-uploaded content and user-generated requests
- Zulip supports user-uploaded files. Ideally they should be hosted
from a separate domain from the main Zulip server to protect against
various same-domain attacks (e.g. zulip-user-content.example.com).
We support two ways of hosting them: the basic `LOCAL_UPLOADS_DIR`
file storage backend, where they are stored in a directory on the
Zulip server's filesystem, and the S3 backend, where the files are
stored in Amazon S3. It would not be difficult to add additional
supported backends should there be a need; see
`zerver/lib/upload.py` for the full interface.
@@ -221,11 +222,11 @@ strength allowed is controlled by two settings in
provide additional layers of protection in both backends as well.
In the Zulip S3 backend, the random URLs to access files that are
presented to users don't actually host the content. Instead, the S3
backend verifies that the user has a valid Zulip session in the
relevant organization (and that has access to a Zulip message linking to
the file), and if so, then redirects the browser to a temporary S3
URL for the file that expires a short time later. In this way,
possessing a URL to a secret file in Zulip does not provide
unauthorized users with access to that file.
@@ -235,35 +236,41 @@ strength allowed is controlled by two settings in
browser is logged into a Zulip account that has received the
uploaded file in question).
- Zulip supports using the [go-camo][go-camo] image proxy to proxy content like
inline image previews, that can be inserted into the Zulip message feed by
other users. This ensures that clients do not make requests to external
servers to fetch images, improving privacy.
- By default, Zulip will provide image previews inline in the body of
messages when a message contains a link to an image. You can
control this using the `INLINE_IMAGE_PREVIEW` setting.
- Zulip may make outgoing HTTP connections to other servers in a
number of cases:
  - Outgoing webhook bots (creation of which can be restricted)
  - Inline image previews in messages (enabled by default, but can be disabled)
  - Inline webpage previews and embeds (must be configured to be enabled)
  - Twitter message previews (must be configured to be enabled)
  - BigBlueButton and Zoom API requests (must be configured to be enabled)
  - Mobile push notifications (must be configured to be enabled)
- Notably, these first 3 features give end users (limited) control to cause
the Zulip server to make HTTP requests on their behalf. Because of this,
Zulip routes all outgoing HTTP requests [through
Smokescreen][smokescreen-setup] to ensure that Zulip cannot be
used to execute [SSRF attacks][ssrf] against other systems on an
internal corporate network. The default Smokescreen configuration
denies access to all non-public IP addresses, including 127.0.0.1.
The Camo image server does not, by default, route its traffic
through Smokescreen, since Camo includes logic to deny access to
private subnets; this can be [overridden][proxy.enable_for_camo].
[go-camo]: https://github.com/cactus/go-camo
[ssrf]: https://owasp.org/www-community/attacks/Server_Side_Request_Forgery
[smokescreen-setup]: ../production/deployment.html#customizing-the-outgoing-http-proxy
[proxy.enable_for_camo]: ../production/deployment.html#enable-for-camo
## Final notes and security response


@@ -12,10 +12,11 @@ administrators][realm-admin-docs].
[realm-admin-docs]: https://zulip.com/help/getting-your-organization-started-with-zulip
This page discusses additional configuration that a system
administrator can do. To change any of the following settings, edit
the `/etc/zulip/settings.py` file on your Zulip server, and then
restart the server with the following command:
```bash
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```
@@ -28,7 +29,7 @@ comment documentation for new configuration settings after upgrading
to each new major release.
[update-settings-docs]: ../production/upgrade-or-modify.html#updating-settings-py-inline-documentation
[settings-py-template]: https://github.com/zulip/zulip/blob/main/zproject/prod_settings_template.py
Since Zulip's settings file is a Python script, there are a number of
other things that one can configure that are not documented; ask on
@@ -42,12 +43,12 @@ if there's something you'd like to do but can't figure out how to.
`EXTERNAL_HOST`: the user-accessible domain name for your Zulip
installation (i.e., what users will type in their web browser). This
should of course match the DNS name you configured to point to your
server and for which you configured SSL certificates. If you passed
`--hostname` to the installer, this will be prefilled with that value.
`ZULIP_ADMINISTRATOR`: the email address of the person or team
maintaining this installation and who will get support and error
emails. If you passed `--email` to the installer, this will be
prefilled with that value.
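To double-check what is currently configured, you can grep the settings file;
the values shown in the comments are hypothetical examples:

```bash
grep -E '^(EXTERNAL_HOST|ZULIP_ADMINISTRATOR)' /etc/zulip/settings.py
# EXTERNAL_HOST = 'zulip.example.com'
# ZULIP_ADMINISTRATOR = 'zulip-admin@example.com'
```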
### Authentication backends
@@ -68,14 +69,14 @@ them.
The Zulip apps expect to be talking to servers with a properly
signed SSL certificate and, in most cases, will not accept a
self-signed certificate. You should get a proper SSL certificate
before testing the apps.
Because of how Google and Apple have architected the security model of
their push notification protocols, the Zulip mobile apps for
[iOS](https://itunes.apple.com/us/app/zulip/id1203036395) and
[Android](https://play.google.com/store/apps/details?id=com.zulipmobile)
can only receive push notifications from a single Zulip server. We
have configured that server to be `push.zulipchat.com`, and offer a
[push notification forwarding service](mobile-push-notifications.md) that
forwards push notifications through our servers to mobile devices.
@@ -85,21 +86,22 @@ and configure this service.
### Terms of Service and Privacy policy
Zulip allows you to configure your server's Terms of Service and
Privacy Policy pages (`/terms` and `/privacy`, respectively). You can
use the `TERMS_OF_SERVICE` and `PRIVACY_POLICY` settings to configure
the path to your server's policies. The syntax is Markdown (with
support for included HTML). A good approach is to use paths like
`/etc/zulip/terms.md`, so that it's easy to back up your policy
configuration along with your other Zulip server configuration.
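For example, assuming you keep the policy files under `/etc/zulip/` as
suggested above (the privacy file name is a hypothetical analog of the terms
file):

```bash
sudo tee -a /etc/zulip/settings.py <<'EOF'
TERMS_OF_SERVICE = "/etc/zulip/terms.md"
PRIVACY_POLICY = "/etc/zulip/privacy.md"
EOF
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```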
### Miscellaneous server settings
Some popular settings in `/etc/zulip/settings.py` include:
- The Twitter integration, which provides pretty inline previews of
tweets.
- The [email gateway](../production/email-gateway.md), which lets
users send emails into Zulip.
- The [Video call integrations](../production/video-calls.md).
## Zulip announcement list


@@ -11,13 +11,14 @@ chore (nor expense) that it used to be.
If you already have an SSL certificate, just install (or symlink) its
files into place at the following paths:
- `/etc/ssl/private/zulip.key` for the private key
- `/etc/ssl/certs/zulip.combined-chain.crt` for the certificate.
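For example, if your CA issued `example.key` and a full-chain certificate
`example-fullchain.crt` (file names are placeholders), a sketch of installing
them at the paths above:

```bash
sudo cp example.key /etc/ssl/private/zulip.key
sudo cp example-fullchain.crt /etc/ssl/certs/zulip.combined-chain.crt
sudo chmod 640 /etc/ssl/private/zulip.key    # keep the private key unreadable to other users
# Reload nginx so it starts serving the new certificate.
sudo service nginx reload
```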
Your certificate file should contain not only your own certificate but
its **full chain, including any intermediate certificates** used by
your certificate authority (CA). See the [nginx
documentation][nginx-chains] for details on what this means. If
you're missing part of the chain, your server may work with some
browsers, but not others and not the Zulip mobile and desktop apps.
The desktop apps support [configuring a custom CA][desktop-certs] to
@@ -32,15 +33,16 @@ browsers ignore errors that others don't.
Two good tests include:
- If your server is accessible from the public Internet, use the [SSL
Labs tester][ssllabs-tester]. Be sure to check for "Chain issues";
if any, your certificate file is missing intermediate certificates.
* Alternatively, run a command like `curl -SsI https://zulip.example.com`
- Alternatively, run a command like `curl -SsI https://zulip.example.com`
(using your server's URL) from a machine that can reach your server.
Make sure that on the same machine, `curl -SsI
https://incomplete-chain.badssl.com` gives an error; `curl` on some
machines, including Macs, will accept incomplete chains.
Make sure that on the same machine,
`curl -SsI https://incomplete-chain.badssl.com` gives an error;
`curl` on some machines, including Macs, will accept incomplete
chains.
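For example, the pair of checks described in the second item looks like this (substitute your own server's URL):
```bash
# Should succeed and print the HTTP response headers:
curl -SsI https://zulip.example.com
# Should fail with a certificate error; if it succeeds, this machine's curl
# accepts incomplete chains and is not a reliable test:
curl -SsI https://incomplete-chain.badssl.com
```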
[ssllabs-tester]: https://www.ssllabs.com/ssltest/analyze.html
@@ -48,17 +50,18 @@ Two good tests include:
[Let's Encrypt](https://letsencrypt.org/) is a free, completely
automated CA launched in 2016 to help make HTTPS routine for the
entire Web. Zulip offers a simple automation for
[Certbot](https://certbot.eff.org/), a Let's Encrypt client, to get
SSL certificates from Let's Encrypt and renew them automatically.
We recommend most Zulip servers use Certbot. You'll want something
else if:
- you have an existing workflow for managing SSL certificates
  that you prefer;
- you need wildcard certificates (support from Let's Encrypt released
  in [March 2018][letsencrypt-wildcard]); or
- your Zulip server is not on the public Internet. (In this case you
  can [still use Certbot][certbot-manual-mode], but it's less
  convenient; and you'll want to ignore Zulip's automation.)
@@ -71,7 +74,7 @@ To enable the Certbot automation when first installing Zulip, just
pass the `--certbot` flag when [running the install script][doc-install-script].
The `--hostname` and `--email` options are required when using
`--certbot`. You'll need the hostname to be a real DNS name, and the
Zulip server machine to be reachable by that name from the public
Internet.
@@ -84,10 +87,12 @@ one as described in the section below after installing Zulip.
To enable the Certbot automation on an already-installed Zulip
server, run the following commands:
```bash
sudo -s # If not already root
/home/zulip/deployments/current/scripts/setup/setup-certbot --email=EMAIL HOSTNAME [HOSTNAME2...]
```
where HOSTNAME is the domain name users see in their browser when
using the server (e.g., `zulip.example.com`), and EMAIL is a contact
address for the server admins. Additional hostnames can also be
@@ -99,23 +104,40 @@ When the Certbot automation in Zulip is first enabled, by either
method, it creates an account for the server at the Let's Encrypt CA;
requests a certificate for the given hostname; proves to the CA that
the server controls the website at that hostname; and is then given a
certificate. (For details, refer to
[Let's Encrypt](https://letsencrypt.org/how-it-works/).)
Then it records a flag in `/etc/zulip/zulip.conf` saying Certbot is in
use and should be auto-renewed. A cron job checks that flag, then
checks whether any certificates are due for renewal, and if they are (so
approximately once every 60 days), repeats the request-and-prove
process to get a fresh certificate.
### Renewal
Let's Encrypt certificates expire after 90 days. Short expiration
periods are good for security, but they also mean that it's important
to automatically renew them to avoid regular maintenance work.
Zulip configures automatic renewal for you. As a result, a Zulip
server configured with Certbot does not require any ongoing work to
maintain a current valid SSL certificate.
The `certbot` package configures a systemd timer (similar to a cron
job) that will renew any Certbot certificates that are due for
renewal. The renewal process repeats the Certbot proof-of-control
process, receives the new certificate from Certbot, installs the new
certificate, and then reloads `nginx`.
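As a quick check, you can confirm the timer is scheduled and exercise the renewal logic without saving anything (a sketch; `certbot.timer` is the unit name the Debian/Ubuntu `certbot` package typically ships):
```bash
# Confirm the renewal timer installed by the certbot package is scheduled:
systemctl list-timers certbot.timer
# Dry-run the renewal process end to end without replacing any certificates:
sudo certbot renew --dry-run
```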
#### Troubleshooting
If your Certbot certificate expires, it is usually because of firewall
rules preventing the Certbot renewal process (which is essentially
identical to the initial certificate request process) from
working. You can debug interactively by running the command from the
cron job, `/usr/bin/certbot renew`, as `root`.
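For example, a minimal interactive debugging session looks like:
```bash
sudo -s                 # If not already root
/usr/bin/certbot renew  # Watch the output for errors; a common cause of
                        # failure is a firewall blocking the CA's requests
```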
## Self-signed certificate
If you aren't able to use Certbot, you can generate a self-signed SSL
certificate. This can be convenient for testing, but isn't
recommended for production, as it is insecure. The Zulip desktop and
mobile apps will not connect to a server if they cannot validate its
SSL certificate. The desktop apps support [configuring a custom
certificate authority][desktop-certs] to allow validation of an
internal certificate.
@@ -125,39 +147,40 @@ just pass the `--self-signed-cert` flag when
To generate a self-signed certificate for an already-installed Zulip
server, run the following commands:
```bash
sudo -s # If not already root
/home/zulip/deployments/current/scripts/setup/generate-self-signed-cert HOSTNAME
```
where HOSTNAME is the domain name (or IP address) to use on the
generated certificate.
After replacing the certificates, you need to reload `nginx` by
running the following as `root`:
```bash
service nginx reload
```
[desktop-certs]: https://zulip.com/help/custom-certificates
## Troubleshooting
### The Android app can't connect to the server
This is most often caused by an incomplete certificate chain. See
discussion in the [Manual install](#manual-install) section above.
### The iOS app can't connect to the server
This can be caused by a server set up to support only TLS 1.1 or
older (including TLS 1.0, SSL 3, or SSL 2).
TLS 1.2 has been a standard for over 10 years, and all modern web
server software supports it. Starting in early 2020, all major
browsers [will _require_ TLS 1.2 or later][tls12-required-news], and
will refuse to connect over TLS 1.1 or older. And on iOS, Apple [has
since iOS 9][apple-ats] required TLS 1.2 for all connections made by
apps, unless the app specifically opts into lower security.
@@ -169,12 +192,11 @@ to check what TLS versions it supports is the [SSL Labs
tester][ssllabs-tester].
To resolve this issue, update your server to support TLS 1.2,
and preferably also TLS 1.3. For nginx, see [the `ssl_protocols`
directive][nginx-doc-protocols] in your configuration.
[nginx-doc-protocols]: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols
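As a rough sketch (the exact file containing the directive varies; check your own configuration), you can find the current setting and what a modern value looks like:
```bash
# Find where ssl_protocols is currently set in the nginx configuration:
grep -rn "ssl_protocols" /etc/nginx/
# A modern value for that directive would be, for example:
#   ssl_protocols TLSv1.2 TLSv1.3;
# After changing it, reload nginx:
sudo service nginx reload
```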
### The Android app connects to the server on some devices but not others
An issue on Android 7.0 ([report][android7.0-tls-issue],
@@ -187,14 +209,14 @@ configuration.
The issue is that Android 7.0 supports only the curve `secp256r1` when
doing elliptic-curve cryptography for TLS, and not other curves like
`secp384r1` or `secp512r1`. If your server's TLS/SSL configuration
offers only other curves, then Android 7.0 clients will be unable to
connect.
By default `nginx` (and therefore a Zulip server) offers the
`secp256r1` curve among others, and so everything works. You can
control the offered curves with `ssl_ecdh_curve` in the `nginx`
configuration on your server. See [nginx docs][nginx-doc-curve] for
details.
[nginx-doc-curve]: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve
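As an illustrative sketch only (verify against your own configuration), you can check whether a curve list is being overridden and make sure `secp256r1` stays in it:
```bash
# See whether ssl_ecdh_curve is set anywhere in the nginx configuration:
grep -rn "ssl_ecdh_curve" /etc/nginx/
# If you do override it, keep secp256r1 in the list so Android 7.0 clients
# can still connect, for example:
#   ssl_ecdh_curve secp384r1:secp256r1;
sudo service nginx reload
```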
@@ -202,13 +224,13 @@ details.
Two signs for diagnosing this issue in contrast to some other root
cause:
- This issue affects only Android 7.0; it's fixed in Android 7.1.1 and
  later.
- If your server is reachable from the public Internet, use the [SSL
  Labs tester][ssllabs-tester]. Under "Cipher Suites" you may see
  lines beginning with `TLS_ECDHE`, for cipher suites which use
  elliptic-curve cryptography. These lines will have further text
  like `ECDH secp256r1` or `ECDH secp384r1`, which identifies specific
  elliptic curves your server offers to use. This issue applies if
your server does not offer `secp256r1`.

View File

@@ -11,7 +11,7 @@ overview](../overview/architecture-overview.md), particularly the
understand the many services Zulip uses.
If you encounter issues while running Zulip, take a look at Zulip's logs, which
are located in `/var/log/zulip/`. That directory contains one log file for
each service, plus `errors.log` (has all errors), `server.log` (has logs from
the Django and Tornado servers), and `workers.log` (has combined logs from the
queue workers).
@@ -21,7 +21,7 @@ on this page includes details about how to fix common issues with Zulip services
If you run into additional problems, [please report
them](https://github.com/zulip/zulip/issues) so that we can update
this page! The Zulip installation script logs its full output to
`/var/log/zulip/install.log`, so please include the context for any
tracebacks from that log.
@@ -37,13 +37,14 @@ and restart various services.
### Checking status with `supervisorctl status`
You can check if the Zulip application is running using:
```bash
supervisorctl status
```
When everything is running as expected, you will see something like this:
```console
process-fts-updates RUNNING pid 2194, uptime 1:13:11
zulip-django RUNNING pid 2192, uptime 1:13:11
zulip-tornado RUNNING pid 2193, uptime 1:13:11
@@ -63,10 +64,10 @@ zulip-workers:zulip-events-user-presence RUNNING pid 21
If you see any services showing a status other than `RUNNING`, or you
see an uptime under 5 seconds (which indicates it's crashing
immediately after startup and repeatedly restarting), that service
isn't running. If you don't see relevant logs in
`/var/log/zulip/errors.log`, check the log file declared via
`stdout_logfile` for that service's entry in
`/etc/supervisor/conf.d/zulip.conf` for details. Logs only make it to
`/var/log/zulip/errors.log` once a service has started fully.
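For example, to find and follow the log for a crashing service (a sketch; `zulip-tornado` is just an illustration, use the name shown by `supervisorctl status`):
```bash
# Find the stdout_logfile declared for that service's entry:
grep -A 5 "zulip-tornado" /etc/supervisor/conf.d/zulip.conf | grep stdout_logfile
# Then follow whatever log file that reports, e.g.:
tail -F /var/log/zulip/tornado.log
```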
### Restarting services with `supervisorctl restart all`
@@ -75,7 +76,7 @@ After you change configuration in `/etc/zulip/settings.py` or fix a
misconfiguration, you will often want to restart the Zulip application.
You can restart Zulip using:
```bash
supervisorctl restart all
```
@@ -83,7 +84,7 @@ supervisorctl restart all
Similarly, you can stop Zulip using:
```bash
supervisorctl stop all
```
@@ -96,39 +97,42 @@ The Zulip application uses several major open source services to store
and cache data, queue messages, and otherwise support the Zulip
application:
- PostgreSQL
- RabbitMQ
- Nginx
- Redis
- memcached
If one of these services is not installed or functioning correctly,
Zulip will not work. Below we detail some common configuration
problems and how to resolve them:
- If your browser reports no webserver is running, that is likely
  because nginx is not configured properly and thus failed to start.
  nginx will fail to start if you configured SSL incorrectly or did
  not provide SSL certificates. To fix this, configure them properly
and then run:
```bash
service nginx restart
```
- If your host is being port scanned by unauthorized users, you may see
messages in `/var/log/zulip/server.log` like
```text
2017-02-22 14:11:33,537 ERROR Invalid HTTP_HOST header: '10.2.3.4'. You may need to add u'10.2.3.4' to ALLOWED_HOSTS.
```
Django uses the hostnames configured in `ALLOWED_HOSTS` to identify
legitimate requests and block others. When an incoming request does
not have the correct HTTP Host header, Django rejects it and logs the
attempt. For more on this issue, see the [Django release notes on Host header
poisoning](https://www.djangoproject.com/weblog/2013/feb/19/security/#s-issue-host-header-poisoning).
- An AMQPConnectionError traceback or error running rabbitmqctl
usually means that RabbitMQ is not running; to fix this, try:
```bash
service rabbitmq-server restart
```
If RabbitMQ fails to start, the problem is often that you are using
@@ -137,16 +141,14 @@ problems and how to resolve them:
### Restrict unattended upgrades
:::{important}
We recommend that you disable or limit Ubuntu's unattended-upgrades
to skip some server packages. With unattended upgrades enabled but
not limited, the moment a new PostgreSQL release is published, your
Zulip server will have its PostgreSQL server upgraded (and thus
restarted). If you do disable unattended-upgrades, do not forget to
regularly install apt upgrades manually!
:::
Restarting one of the system services that Zulip uses (PostgreSQL,
memcached, Redis, or RabbitMQ) will drop the connections that
@@ -155,7 +157,7 @@ those connections throwing errors.
Zulip is designed to recover from system service downtime by creating
new connections once the system service is back up, so the Zulip
outage will end once the system service finishes restarting. But
you'll get a bunch of error emails during the system service outage
whenever one of the Zulip server's ~20 workers attempts to access the
system service.
@@ -177,7 +179,7 @@ You can ensure that the `unattended-upgrades` package never upgrades
PostgreSQL, memcached, Redis, or RabbitMQ, by configuring in
`/etc/apt/apt.conf.d/50unattended-upgrades`:
```text
// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
"libc\d+";
@@ -192,30 +194,30 @@ Unattended-Upgrade::Package-Blacklist {
## Monitoring
Chat is mission-critical to many organizations. This section contains
advice on monitoring your Zulip server to minimize downtime.
First, we should highlight that Zulip sends Django error emails to
`ZULIP_ADMINISTRATOR` for any backend exceptions. A properly
functioning Zulip server shouldn't send any such emails, so it's worth
reporting/investigating any that you do see.
Beyond that, the most important monitoring for a Zulip server is
standard stuff:
- Basic host health monitoring for issues running out of disk space,
  especially for the database and where uploads are stored.
- Service uptime and standard monitoring for the [services Zulip
  depends on](#troubleshooting-services). Most monitoring software
  has standard plugins for Nginx, PostgreSQL, Redis, RabbitMQ,
  and memcached, and those will work well with Zulip.
- `supervisorctl status` showing all services `RUNNING`.
- Checking for processes being OOM killed.
Beyond that, Zulip ships a few application-specific end-to-end health
checks. The Nagios plugins `check_send_receive_time`,
`check_rabbitmq_queues`, and `check_rabbitmq_consumers` are generally
sufficient to point to the cause of any Zulip production issue. See
the next section for details.
### Nagios configuration
@@ -227,36 +229,36 @@ tarballs).
The Nagios plugins used by that configuration are installed
automatically by the Zulip installation process in subdirectories
under `/usr/lib/nagios/plugins/`. The following is a summary of the
useful Nagios plugins included with Zulip and what they check:
Application server and queue worker monitoring:
- `check_send_receive_time`: Sends a test message through the system
  between two bot users to check that end-to-end message sending
  works. An effective end-to-end check for Zulip's Django and Tornado
  systems being healthy.
- `check_rabbitmq_consumers` and `check_rabbitmq_queues`: Effective
  checks for Zulip's RabbitMQ-based queuing systems being healthy.
- `check_worker_memory`: Monitors for memory leaks in queue workers.
- `check_email_deliverer_backlog` and `check_email_deliverer_process`:
  Monitors for whether scheduled outgoing emails (e.g. invitation
  reminders) are being sent properly.
Database monitoring:
- `check_fts_update_log`: Checks whether full-text search updates are
  being processed properly or getting backlogged.
- `check_postgres`: General checks for database health.
- `check_postgresql_backup`: Checks status of PostgreSQL backups.
- `check_postgresql_replication_lag`: Checks whether PostgreSQL streaming
  replication is up to date.
Standard server monitoring:
- `check_website_response.sh`: Basic HTTP check.
- `check_debian_packages`: Checks whether the system is behind on
  `apt upgrade`.
If you're using these plugins, bug reports and pull requests to make
it easier to monitor Zulip and maintain it in production are
@@ -266,5 +268,5 @@ encouraged!
As a measure to mitigate the potential impact of any future memory
leak bugs in one of the Zulip daemons, the Zulip service automatically
restarts itself every Sunday early morning. See
`/etc/cron.d/restart-zulip` for the precise configuration.

View File

@@ -10,53 +10,55 @@ This page explains how to upgrade, patch, or modify Zulip, including:
- [Upgrading the operating system](#upgrading-the-operating-system)
- [Upgrading PostgreSQL](#upgrading-postgresql)
- [Modifying Zulip](#modifying-zulip)
- [Applying changes from `main`](#applying-changes-from-main)
## Upgrading to a release
Note that there are additional instructions if you're [using
docker-zulip][docker-upgrade], have [patched Zulip](#modifying-zulip),
or have [modified Zulip-managed configuration
files](#preserving-local-changes-to-service-configuration-files). To upgrade
to a new Zulip release:
1. Read the [upgrade notes](../overview/changelog.html#upgrade-notes)
for all releases newer than what is currently installed.
1. Download the appropriate release tarball from
<https://download.zulip.com/server/>. You can get the latest
release (**Zulip Server {{ LATEST_RELEASE_VERSION }}**) with the
following command:
```bash
wget https://download.zulip.com/server/zulip-server-latest.tar.gz
```
You also have the option of upgrading Zulip [to a version in a Git
repository directly](#upgrading-from-a-git-repository) or creating
your own release tarballs from a copy of the [zulip.git
repository](https://github.com/zulip/zulip) using
`tools/build-release-tarball`.
1. Log in to your Zulip and run as root:
```bash
/home/zulip/deployments/current/scripts/upgrade-zulip zulip-server-latest.tar.gz
```
The upgrade process will:
- Run `apt-get upgrade`
- Install new versions of Zulip's dependencies (mainly Python packages).
- (`upgrade-zulip-from-git` only) Build Zulip's frontend assets using `webpack`.
- Shut down the Zulip service
- Run a `puppet apply`
- Run any database migrations
- Bring the Zulip service back up on the new version.
Upgrading will result in brief downtime for the service, which should
be under 30 seconds unless there is an expensive database migration
involved (these will be documented in the [release
notes](../overview/changelog.md), and usually can be avoided with
some care). If downtime is problematic for your organization,
consider testing the upgrade on a
[backup](../production/export-and-import.html#backups) in advance,
doing the final upgrade at off hours, or buying a support contract.
@@ -68,15 +70,15 @@ run into any issues or need to roll back the upgrade.
Zulip supports upgrading a production installation to any commit in a
Git repository, which is great for [running pre-release changes from
`main`](#applying-changes-from-main) or [maintaining a
fork](#making-changes). The process is simple:
```bash
# Upgrade to an official release
/home/zulip/deployments/current/scripts/upgrade-zulip-from-git 1.8.1
# Upgrade to a branch (or other Git ref)
/home/zulip/deployments/current/scripts/upgrade-zulip-from-git 2.1.x
/home/zulip/deployments/current/scripts/upgrade-zulip-from-git main
```
Zulip will automatically fetch the relevant Git commit and upgrade to
@@ -87,15 +89,15 @@ containing the changes planned for the next minor release
(E.g. 2.1.5); we support these stable release branches as though they
were a published release.
The `main` branch contains changes planned for the next major
release (E.g. 3.0); see our documentation on [running
`main`](#upgrading-to-main) before upgrading to it.
By default, this uses the main upstream Zulip server repository, but
you can configure any other Git repository by adding a section like
this to `/etc/zulip/zulip.conf`:
```ini
[deployment]
git_repo_url = https://github.com/zulip/zulip.git
```
@@ -123,7 +125,7 @@ suggest using that updated template to update
do not have a recent [complete backup][backups]), and make a copy
of the current template:
```bash
cp -a /etc/zulip/settings.py ~/zulip-settings-backup.py
cp -a /home/zulip/deployments/current/zproject/prod_settings_template.py /etc/zulip/settings-new.py
```
@@ -137,7 +139,7 @@ suggest using that updated template to update
the template that your `/etc/zulip/settings.py` was installed
using, and the differences that your file has from that:
```bash
/home/zulip/deployments/current/scripts/setup/compare-settings-to-template
```
@@ -149,7 +151,7 @@ suggest using that updated template to update
the server to pick up the new file; this should be a no-op, but it
is much better to discover immediately if it is not:
```bash
cp -a /etc/zulip/settings-new.py /etc/zulip/settings.py
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```
@@ -163,22 +165,23 @@ See also the general Zulip server [troubleshooting
guide](../production/troubleshooting.md).
The upgrade scripts are idempotent, so there's no harm in trying again
after resolving an issue. The most common causes of errors are:
- Networking issues (e.g. your Zulip server doesn't have reliable
  Internet access or needs a proxy set up). Fix the networking issue
  and try again.
- Especially when using `upgrade-zulip-from-git`, systems with the
  minimal RAM for running Zulip can run into out-of-memory issues
  during the upgrade process (generally `tools/webpack` is the step
  that fails). You can get past this by shutting down the Zulip
server with `supervisorctl stop all` to free up RAM before running
the upgrade process.
Useful logs are available in a few places:
- The Zulip upgrade scripts log all output to
  `/var/log/zulip/upgrade.log`.
- The Zulip server logs all Internal Server Errors to
  `/var/log/zulip/errors.log`.
If you need help and don't have a support contract, you can visit
@@ -206,18 +209,17 @@ This means that if the new version isn't working,
you can quickly downgrade to the old version by running
`/home/zulip/deployments/last/scripts/restart-server`, or to an
earlier previous version by running
`/home/zulip/deployments/DATE/scripts/restart-server`. The
`restart-server` script stops any running Zulip server, and starts
the version corresponding to the `restart-server` path you call.
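For example, to see which versions are available on disk and roll back to the immediately previous one:
```bash
# Deployments live in timestamped directories here, with `current` and `last`
# typically pointing at the active and previous versions:
ls -l /home/zulip/deployments/
# Roll back by starting the previous deployment:
/home/zulip/deployments/last/scripts/restart-server
```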
## Preserving local changes to service configuration files
:::{warning}
If you have modified service configuration files installed by
Zulip (e.g. the nginx configuration), the Zulip upgrade process will
overwrite your configuration when it does the `puppet apply`.
:::
You can test whether this will happen assuming no upstream changes to
the configuration using `scripts/zulip-puppet-apply` (without the
@@ -228,9 +230,9 @@ configuration.
That said, Zulip's configuration files are designed to be flexible
enough for a wide range of installations, from a small self-hosted
system to Zulip Cloud. Before making local changes to a configuration
file, first check whether there's an option supported by
`/etc/zulip/zulip.conf` for the customization you need. And if you
need to make local modifications, please report the issue so that we
can make the Zulip Puppet configuration flexible enough to handle your
setup.
@@ -261,48 +263,57 @@ instructions for other supported platforms.
2. As the Zulip user, stop the Zulip server and run the following
to back up the system:
```bash
supervisorctl stop all
/home/zulip/deployments/current/manage.py backup --output=/home/zulip/release-upgrade.backup.tar.gz
```
3. Switch to the root user and upgrade the operating system using the
OS's standard tooling. E.g. for Ubuntu, this means running
`do-release-upgrade` and following the prompts until it completes
successfully:
```bash
sudo -i # Or otherwise get a root shell
do-release-upgrade -d
```
The `-d` option to `do-release-upgrade` is required because Ubuntu
20.04 is new; it will stop being necessary once the first point
release update of Ubuntu 20.04 LTS is released.
When `do-release-upgrade` asks you how to upgrade configuration
files for services that Zulip manages like Redis, PostgreSQL,
Nginx, and memcached, the best choice is `N` to keep the
currently installed version. But it's not important; the next
step will re-install Zulip's configuration in any case.
4. As root, upgrade the database to the latest version of PostgreSQL:
```bash
/home/zulip/deployments/current/scripts/setup/upgrade-postgresql
```
5. Ubuntu 20.04 has a different version of the low-level glibc
library, which affects how PostgreSQL orders text data (known as
"collations"); this corrupts database indexes that rely on
collations. Regenerate the affected indexes by running:
```bash
/home/zulip/deployments/current/scripts/setup/reindex-textual-data --force
```
6. Finally, we need to reinstall the current version of Zulip, which
among other things will recompile Zulip's Python module
dependencies for your new version of Python and rewrite Zulip's
full-text search indexes to work with the upgraded dictionary
packages:
```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets --audit-fts-indexes
```
This will finish by restarting your Zulip server; you should now be
able to navigate to its URL and confirm everything is working
@@ -310,7 +321,7 @@ instructions for other supported platforms.
### Upgrading from Ubuntu 16.04 Xenial to 18.04 Bionic
1. Upgrade your server to the latest Zulip `2.1.x` release. You can
only upgrade to Zulip 3.0 and newer after completing this process,
since newer releases don't support Ubuntu 16.04 Xenial.
@@ -321,27 +332,27 @@ instructions for other supported platforms.
4. As root, upgrade the database installation and OS configuration to
match the new OS version:
```bash
touch /usr/share/postgresql/10/pgroonga_setup.sql.applied
/home/zulip/deployments/current/scripts/zulip-puppet-apply -f
pg_dropcluster 10 main --stop
systemctl stop postgresql
pg_upgradecluster 9.5 main
pg_dropcluster 9.5 main
apt remove postgresql-9.5
systemctl start postgresql
systemctl restart memcached
```
5. Finally, we need to reinstall the current version of Zulip, which
among other things will recompile Zulip's Python module
dependencies for your new version of Python:
```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets
```
This will finish by restarting your Zulip server; you should now
be able to navigate to its URL and confirm everything is working
@@ -352,13 +363,13 @@ instructions for other supported platforms.
7. As root, finish by verifying the contents of the full-text indexes:
```bash
/home/zulip/deployments/current/manage.py audit_fts_indexes
```
### Upgrading from Ubuntu 14.04 Trusty to 16.04 Xenial
1. Upgrade your server to the latest Zulip `2.0.x` release. You can
only upgrade to Zulip `2.1.x` and newer after completing this
process, since newer releases don't support Ubuntu 14.04 Trusty.
@@ -369,27 +380,27 @@ instructions for other supported platforms.
4. As root, upgrade the database installation and OS configuration to
match the new OS version:
```bash
apt remove upstart -y
/home/zulip/deployments/current/scripts/zulip-puppet-apply -f
pg_dropcluster 9.5 main --stop
systemctl stop postgresql
pg_upgradecluster -m upgrade 9.3 main
pg_dropcluster 9.3 main
apt remove postgresql-9.3
systemctl start postgresql
service memcached restart
```
5. Finally, we need to reinstall the current version of Zulip, which
among other things will recompile Zulip's Python module
dependencies for your new version of Python:
```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets
```
This will finish by restarting your Zulip server; you should now be
able to navigate to its URL and confirm everything is working
@@ -399,13 +410,80 @@ instructions for other supported platforms.
Bionic](#upgrading-from-ubuntu-16-04-xenial-to-18-04-bionic), so
that you are running a supported operating system.
### Upgrading from Debian Buster to Debian Bullseye
1. Upgrade your server to the latest Zulip `4.x` release.
2. As the Zulip user, stop the Zulip server and run the following
to back up the system:
```bash
supervisorctl stop all
/home/zulip/deployments/current/manage.py backup --output=/home/zulip/release-upgrade.backup.tar.gz
```
3. Follow [Debian's instructions to upgrade the OS][bullseye-upgrade].
[bullseye-upgrade]: https://www.debian.org/releases/bullseye/amd64/release-notes/ch-upgrading.html
When prompted for how to upgrade configuration
files for services that Zulip manages like Redis, PostgreSQL,
Nginx, and memcached, the best choice is `N` to keep the
currently installed version. But it's not important; the next
step will re-install Zulip's configuration in any case.
4. As root, run the following steps to regenerate configurations
for services used by Zulip:
```bash
apt remove upstart -y
/home/zulip/deployments/current/scripts/zulip-puppet-apply -f
```
5. Reinstall the current version of Zulip, which among other things
will recompile Zulip's Python module dependencies for your new
version of Python:
```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets
```
This will finish by restarting your Zulip server; you should now
be able to navigate to its URL and confirm everything is working
correctly.
6. Debian Bullseye has a different version of the low-level glibc
library, which affects how PostgreSQL orders text data (known as
"collations"); this corrupts database indexes that rely on
collations. Regenerate the affected indexes by running:
```bash
/home/zulip/deployments/current/scripts/setup/reindex-textual-data --force
```
7. As root, finish by verifying the contents of the full-text indexes:
```bash
/home/zulip/deployments/current/manage.py audit_fts_indexes
```
8. As an additional step, you can also [upgrade the PostgreSQL version](#upgrading-postgresql).
### Upgrading from Debian Stretch to Debian Buster
1. Upgrade your server to the latest Zulip `2.1.x` release. You can
only upgrade to Zulip 3.0 and newer after completing this process,
since newer releases don't support Debian Stretch.
2. As the Zulip user, stop the Zulip server and run the following
to back up the system:
```bash
supervisorctl stop all
/home/zulip/deployments/current/manage.py backup --output=/home/zulip/release-upgrade.backup.tar.gz
```
3. Follow [Debian's instructions to upgrade the OS][debian-upgrade-os].
@@ -414,33 +492,33 @@ instructions for other supported platforms.
When prompted for how to upgrade configuration
files for services that Zulip manages like Redis, PostgreSQL,
Nginx, and memcached, the best choice is `N` to keep the
currently installed version. But it's not important; the next
step will re-install Zulip's configuration in any case.
4. As root, upgrade the database installation and OS configuration to
match the new OS version:
```bash
apt remove upstart -y
/home/zulip/deployments/current/scripts/zulip-puppet-apply -f
pg_dropcluster 11 main --stop
systemctl stop postgresql
pg_upgradecluster -m upgrade 9.6 main
pg_dropcluster 9.6 main
apt remove postgresql-9.6
systemctl start postgresql
service memcached restart
```
5. Finally, we need to reinstall the current version of Zulip, which
among other things will recompile Zulip's Python module
dependencies for your new version of Python:
```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets
```
This will finish by restarting your Zulip server; you should now
be able to navigate to its URL and confirm everything is working
@@ -449,16 +527,25 @@ instructions for other supported platforms.
6. [Upgrade to the latest Zulip release](#upgrading-to-a-release), now
that your server is running a supported operating system.
7. Debian Buster has a different version of the low-level glibc
library, which affects how PostgreSQL orders text data (known as
"collations"); this corrupts database indexes that rely on
collations. Regenerate the affected indexes by running:
```bash
/home/zulip/deployments/current/scripts/setup/reindex-textual-data --force
```
8. As root, finish by verifying the contents of the full-text indexes:
```bash
/home/zulip/deployments/current/manage.py audit_fts_indexes
```
## Upgrading PostgreSQL
Starting with Zulip 3.0, we use the latest available version of
PostgreSQL at installation time (currently version 13). Upgrades to
the version of PostgreSQL are no longer linked to upgrades of the
distribution; that is, you may opt to upgrade to PostgreSQL 13 while
running Ubuntu 18.04 Bionic.
@@ -467,88 +554,99 @@ To upgrade the version of PostgreSQL on the Zulip server:
1. Upgrade your server to the latest Zulip release (at least 3.0).
1. Stop the server, as the `zulip` user:
```bash
# On Zulip before 4.0, use `supervisorctl stop all` instead
/home/zulip/deployments/current/scripts/stop-server
```
1. Take a backup, in case of any problems:
```bash
/home/zulip/deployments/current/manage.py backup --output=/home/zulip/postgresql-upgrade.backup.tar.gz
```
1. As root, run the database upgrade tool:
```bash
/home/zulip/deployments/current/scripts/setup/upgrade-postgresql
```
1. As the `zulip` user, start the server again:
```bash
# On Zulip before 4.0, use `restart-server` instead of `start-server`
/home/zulip/deployments/current/scripts/start-server
```
You should now be able to navigate to the Zulip server's URL and
confirm everything is working correctly.
## Modifying Zulip
Zulip is 100% free and open source software, and you're welcome to
modify it! This section explains how to make and maintain
modifications in a safe and convenient fashion.
If you do modify Zulip and then report an issue you see in your
modified version of Zulip, please be responsible about communicating
that fact:
- Ideally, you'd reproduce the issue in an unmodified version (e.g. on
  [chat.zulip.org](../contributing/chat-zulip-org.md) or
  [zulip.com](https://zulip.com)).
- Where that is difficult or you think it's very unlikely your changes
  are related to the issue, just mention your changes in the issue report.
If you're looking to modify Zulip by applying changes developed by the
Zulip core team and merged into `main`, skip to [this
section](#applying-changes-from-main).
## Making changes
One way to modify Zulip is to just edit files under
`/home/zulip/deployments/current` and then restart the server. This
can work OK for testing small changes to Python code or shell scripts.
But we don't recommend this approach for maintaining changes because:
- You cannot modify JavaScript, CSS, or other frontend files this way,
  because we don't include them in editable form in our production
  release tarballs (doing so would make our release tarballs much
  larger without any runtime benefit).
- You will need to redo your changes after you next upgrade your Zulip
  server (or they will be lost).
- You need to remember to restart the server or your changes won't
  take effect.
- Your changes aren't tracked, so mistakes can be hard to debug.
Instead, we recommend the following GitHub-based workflow (see [our
Git guide][git-guide] if you need a primer):
- Decide where you're going to edit Zulip's code. We recommend [using
  the Zulip development environment](../development/overview.md) on
  a desktop or laptop as it will make it extremely convenient for you
  to test your changes without deploying them in production. But if
  your changes are small or you're OK with risking downtime, you don't
  strictly need it; you just need an environment with Git installed.
- **Important**. Determine what Zulip version you're running on your
  server. You can check by inspecting `ZULIP_VERSION` in
  `/home/zulip/deployments/current/version.py` (we'll use `2.0.4`
  below). If you apply your changes to the wrong version of Zulip,
  it's likely to fail and potentially cause downtime.
- [Fork and clone][fork-clone] the [zulip/zulip][] repository on
  [GitHub](https://github.com).
- Create a branch (named `acme-branch` below) containing your changes:
```bash
cd zulip
git checkout -b acme-branch 2.0.4
```
- Use your favorite code editor to modify Zulip.
- Commit your changes and push them to GitHub:
```bash
git commit -a
# Use `git diff` to verify your changes are what you expect
@@ -558,10 +656,10 @@ git diff 2.0.4 acme-branch
git push origin +acme-branch
```
- Log in to your Zulip server and configure and use
  [upgrade-zulip-from-git][] to install the changes; remember to
  configure `git_repo_url` to point to your fork on GitHub and run it as
  `upgrade-zulip-from-git acme-branch`.
This workflow solves all of the problems described above: your change
will be compiled and installed correctly (restarting the server), and
@@ -570,21 +668,21 @@ across future Zulip releases.
### Upgrading to future releases
Eventually, you'll want to upgrade to a new Zulip release. If your
Eventually, you'll want to upgrade to a new Zulip release. If your
longer needed, you can just [upgrade as
usual](#upgrading-to-a-release). If you [upgraded to
master](#upgrading-to-master); review that section again; new
usual](#upgrading-to-a-release). If you [upgraded to
`main`](#upgrading-to-main); review that section again; new
maintenance releases are likely "older" than your current installation
and you might need to upgrade to the master again rather than to the
and you might need to upgrade to `main` again rather than to the
new maintenance release.
Otherwise, you'll need to update your branch by rebasing your changes
(starting from a [clone][fork-clone] of the [zulip/zulip][]
repository). The example below assumes you have a branch off of 2.0.4
and want to upgrade to 2.1.0.
```bash
cd zulip
git fetch --tags upstream
git checkout acme-branch
@@ -605,29 +703,29 @@ branch, as before.
If you are using [docker-zulip][], there are two things that are
different from the above:
- Because of how container images work, editing files directly is even
  more precarious, because Docker is designed for working with
  container images and may lose your changes.
- Instead of running `upgrade-zulip-from-git`, you will need to use
  the [docker upgrade workflow][docker-zulip-upgrade] to build a
  container image based on your modified version of Zulip.
[docker-zulip]: https://github.com/zulip/docker-zulip
[docker-zulip-upgrade]: https://github.com/zulip/docker-zulip#upgrading-from-a-git-repository
## Applying changes from `main`
If you are experiencing an issue that has already been fixed by the
Zulip development community, and you'd like to get the fix now, you
have a few options. There are two possible ways you might get those
fixes on your local Zulip server without waiting for an official release.
### Applying a small change
Many bugs have small/simple fixes. In this case, you can use the Git
workflow [described above](#making-changes), using:
```bash
git fetch upstream
git cherry-pick abcd1234
```
@@ -637,35 +735,35 @@ of the change you'd like).
In general, we can't provide unpaid support for issues caused by
cherry-picking arbitrary commits if the issues don't also affect
`main` or an official release.
The exception to this rule is when we ask or encourage a user to apply
a change to their production system to help verify the fix resolves
the issue for them. You can expect the Zulip community to be
responsive in debugging any problems caused by a patch we asked
you to apply.
Also, consider asking whether a small fix that is important to you can
be added to the current stable release branch (E.g. `2.1.x`). In
addition to scheduling that change for Zulip's next bug fix release,
we support changes in stable release branches as though they were
released.
### Upgrading to `main`
Many Zulip servers (including chat.zulip.org and zulip.com) upgrade to
`main` on a regular basis to get the latest features. Before doing
so, it's important to understand how to happily run a server based on
`main`.
For background, backporting arbitrary patches from `main` to an
older version requires some care. Common issues include:
- Changes containing database migrations (new files under
  `*/migrations/`), which includes most new features. We
  don't support applying database migrations out of order.
- Changes that are stacked on top of other changes to the same system.
- Essentially any patch with hundreds of lines of changes will have
merge conflicts and require extra work to apply.
While it's possible to backport these sorts of changes, you're
@@ -673,52 +771,52 @@ unlikely to succeed without help from the core team via a support
contract.
If you need an unreleased feature, the best path is usually to
upgrade to Zulip `main` using [upgrade-zulip-from-git][]. Before
upgrading to `main`, make sure you understand:
- In Zulip's version numbering scheme, `main` will always be "newer"
  than the latest maintenance release (E.g. `3.1` or `2.1.6`) and
  "older" than the next major release (E.g. `3.0` or `4.0`).
- The `main` branch is under very active development; dozens of new
  changes are integrated into it on most days. The `main` branch
  can have thousands of changes not present in the latest release (all
  of which will be included in our next major release). On average
  `main` usually has fewer total bugs than the latest release
  (because we fix hundreds of bugs in every major release) but it
  might have some bugs that are more severe than we would consider
  acceptable for a release.
- We deploy `main` to chat.zulip.org and zulip.com on a regular
  basis (often daily), so it's very important to the project that it
  be stable. Most regressions will be minor UX issues or be fixed
  quickly, because we need them to be fixed for Zulip Cloud.
- The development community is very interested in helping debug issues
  that arise when upgrading from the latest release to `main`, since
  they provide us an opportunity to fix that category of issue before
  our next major release. (Much more so than we are in helping folks
  debug other custom changes). That said, we cannot make any
  guarantees about how quickly we'll resolve an issue to folks without
  a formal support contract.
- We do not support downgrading from `main` to earlier versions, so
  if downtime for your Zulip server is unacceptable, make sure you
  have a current
  [backup](../production/export-and-import.html#backups) in case the
  upgrade fails.
- Our changelog contains [draft release
  notes](../overview/changelog.md) available listing major changes
  since the last release. The **Upgrade notes** section will always
  be current, even if some new features aren't documented.
- Whenever we push a security or maintenance release, the changes in
  that release will always be merged to `main`; so you can get the
  security fixes by upgrading to `main`.
- You can always upgrade from `main` to the next major release when it
  comes out, using either [upgrade-zulip-from-git][] or the release
  tarball. So there's no risk of upgrading to `main` resulting in
a system that's not upgradeable back to a normal release.
## Contributing patches
Zulip contains thousands of changes submitted by volunteer
contributors like you. If your changes are likely to be useful to
other organizations, consider [contributing
them](../overview/contributing.md).

View File

@@ -18,13 +18,13 @@ provider supported by the `boto` library).
## S3 backend configuration
Here, we document the process for configuring Zulip's S3 file upload
backend. To enable this backend, you need to do the following:
1. In the AWS management console, create a new IAM account (aka API
user) for your Zulip server, and two buckets in S3, one for uploaded
files included in messages, and another for user avatars. You need
two buckets because the "user avatars" bucket is generally configured
as world-readable, whereas the "uploaded files" one is not.
1. Set `s3_key` and `s3_secret_key` in /etc/zulip/zulip-secrets.conf
to be the S3 access and secret keys for the IAM account.
@@ -44,31 +44,31 @@ as world-readable, whereas the "uploaded files" one is not.
setting to your default AWS region's code (e.g. `"eu-central-1"`).
1. You will need to configure `nginx` to direct requests for uploaded
files to the Zulip server (which will then serve a redirect to the
appropriate place in S3), rather than serving them directly.
With Zulip 1.9.0 and newer, you can do this automatically with the
following commands run as root:
```bash
crudini --set /etc/zulip/zulip.conf application_server no_serve_uploads true
/home/zulip/deployments/current/scripts/zulip-puppet-apply
```
(The first line will update your `/etc/zulip/zulip.conf`).
With older Zulip, you need to edit
`/etc/nginx/sites-available/zulip-enterprise` to comment out the
`nginx` configuration block for `/user_avatars` and the
`include /etc/nginx/zulip-include/uploads.route` line and then
reload the `nginx` service (`service nginx reload`).
1. Finally, restart the Zulip server so that your settings changes
take effect
(`/home/zulip/deployments/current/scripts/restart-server`).
It's simplest to just do this configuration when setting up your Zulip
server for production usage. Note that if you had any existing
uploaded files, this process does not upload them to Amazon S3; see
[migration instructions](#migrating-from-local-uploads-to-amazon-s3-backend)
below for those steps.
@@ -78,12 +78,12 @@ below for those steps.
## S3 bucket policy
The best way to do the S3 integration with Amazon is to create a new
IAM user just for your Zulip server with limited permissions. For
each of the two buckets, you'll want to
[add an S3 bucket policy](https://awspolicygen.s3.amazonaws.com/policygen.html)
entry that looks something like this:
```json
{
"Version": "2012-10-17",
"Id": "Policy1468991802321",
@@ -117,7 +117,7 @@ entry that looks something like this:
The avatars bucket is intended to be world-readable, so you'll also
need a block like this:
```json
{
"Sid": "Stmt1468991795389",
"Effect": "Allow",
@@ -127,32 +127,31 @@ need a block like this:
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*"
}
```
The file-uploads bucket should not be world-readable. See the
[documentation on the Zulip security model](security-model.md) for
details on the security model for uploaded files.
## Migrating from local uploads to Amazon S3 backend
As you scale your server, you might want to migrate the uploads from
your local backend to Amazon S3. Follow these instructions, step by
step, to do the migration.
1. First, [set up the S3 backend](#s3-backend-configuration) in the settings
(all the auth stuff), but leave `LOCAL_UPLOADS_DIR` set -- the
migration tool will need that value to know where to find your uploads.
2. Run `./manage.py transfer_uploads_to_s3`. This will upload all the
files from the local uploads directory to Amazon S3. By default,
this command runs on 6 parallel processes, since uploading is a
latency-sensitive operation. You can control this parameter using
the `--processes` option.
3. Once the transfer script completes, disable `LOCAL_UPLOADS_DIR`, and
restart your server (continuing the last few steps of the S3
backend setup instructions).
Congratulations! Your uploaded files are now migrated to S3.
**Caveat**: The current version of this tool does not migrate an
uploaded organization avatar or logo.

View File

@@ -15,18 +15,18 @@ installation, you'll need to register a custom Zoom app as follows:
1. Create an app with the **OAuth** type.
- Choose an app name such as "ExampleCorp Zulip".
- Select **User-managed app**.
- Disable the option to publish the app on the Marketplace.
- Click **Create**.
1. Inside of the Zoom app management page:
- On the **App Credentials** tab, set both the **Redirect URL for
OAuth** and the **Whitelist URL** to
`https://zulip.example.com/calls/zoom/complete` (replacing
`zulip.example.com` by your main Zulip hostname).
- On the **Scopes** tab, add the `meeting:write` scope.
You can then configure your Zulip server to use that Zoom app as
follows:
@@ -40,7 +40,7 @@ follows:
1. Restart the Zulip server with
`/home/zulip/deployments/current/scripts/restart-server`.
This enables Zoom support in your Zulip server. Finally, [configure
Zoom as the video call
provider](https://zulip.com/help/start-a-call) in the Zulip
organization(s) where you want to use it.
@@ -71,7 +71,7 @@ Server as follows:
3. Restart the Zulip server with
`/home/zulip/deployments/current/scripts/restart-server`.
This enables Big Blue Button support in your Zulip server. Finally, [configure
Big Blue Button as the video call
provider](https://zulip.com/help/start-a-call) in the Zulip
organization(s) where you want to use it.

View File

@@ -29,7 +29,7 @@ There are three main components:
The next several sections will dive into the details of these components.
## The \*Count database tables
The Zulip analytics system is built around collecting time series data in a
set of database tables. Each of these tables has the following fields:
@@ -76,7 +76,7 @@ by the system and with what data.
## The FillState table
The default Zulip production configuration runs a cron job once an hour that
updates the \*Count tables for each of the CountStats in the COUNT_STATS
dictionary. The FillState table simply keeps track of the last end_time that
we successfully updated each stat. It also enables the analytics system to
recover from errors (by retrying) and to monitor that the cron job is
@@ -103,23 +103,23 @@ There are a few important principles that we use to make the system
efficient:
- Not repeating work to keep things up to date (via FillState)
- Storing data in the \*Count tables to avoid our endpoints hitting the core
Message/UserMessage tables is key, because some queries could take minutes
to calculate. This allows any expensive operations to run offline, and
then the endpoints to serve data to users can be fast.
- Doing expensive operations inside the database, rather than fetching data
to Python and then sending it back to the database (which can be far
slower if there's a lot of data involved). The Django ORM currently
doesn't support the "insert into .. select" type SQL query that's needed
for this, which is why we use raw database queries (which we usually avoid
in Zulip) rather than the ORM; see the sketch after this list.
- Aggregating where possible to avoid unnecessary queries against the
Message and UserMessage tables. E.g. rather than querying the Message
table both to generate sent message counts for each realm and again for
each user, we just query for each user, and then add up the numbers for
the users to get the totals for the realm.
- Not storing rows when the value is 0. An hourly user stat would otherwise
collect 24 \* 365 \* roughly .5MB per db row = 4GB of data per user per
year, most of whose values are 0. A related note is to be cautious about
adding queries that are typically non-0 instead of being typically 0.
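To make the "do the work inside the database" principle above concrete, here is a hedged sketch of an aggregation written as a single `INSERT INTO ... SELECT` raw query; the table and column names are simplified stand-ins rather than Zulip's actual analytics schema or the query it runs.
```python
# Hedged sketch of the "insert into .. select" aggregation style
# described above; the table and column names are simplified stand-ins,
# not Zulip's actual analytics schema or the real CountStat query.
from datetime import datetime

from django.db import connection


def insert_hourly_realm_counts(stat_property: str, start: datetime, end: datetime) -> None:
    # One round trip: aggregate in SQL and write the result directly,
    # instead of pulling every message into Python and saving rows back.
    with connection.cursor() as cursor:
        cursor.execute(
            """
            INSERT INTO analytics_realmcount (realm_id, property, end_time, value)
            SELECT realm_id, %(property)s, %(end)s, COUNT(*)
            FROM zerver_message
            WHERE date_sent >= %(start)s AND date_sent < %(end)s
            GROUP BY realm_id
            """,
            {"property": stat_property, "start": start, "end": end},
        )
```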
@@ -129,25 +129,25 @@ There are a few types of automated tests that are important for this sort of
system:
- Most important: Tests for the code path that actually populates data into
the analytics tables. These are most important, because it can be very
expensive to fix bugs in the logic that generates these tables (one
basically needs to regenerate all of history for those tables), and these
bugs are hard to discover. It's worth taking the time to think about
interesting corner cases and add them to the test suite.
- Tests for the backend views code logic for extracting data from the
database and serving it to clients.
For manual backend testing, it sometimes can be valuable to use
`./manage.py dbshell` to inspect the tables manually to check that
things look right; but usually anything you feel the need to check
manually, you should add some sort of assertion for to the backend
analytics tests, to make sure it stays that way as we refactor.
## LoggingCountStats
The system discussed above is designed primarily around the technical
problem of showing useful analytics about things where the raw data is
already stored in the database (e.g. Message, UserMessage). This is great
because we can always backfill that data to the beginning of time, but of
course sometimes one wants to do analytics on things that aren't worth
storing every data point for (e.g. activity data, request performance
@@ -161,10 +161,10 @@ statistics, etc.). There is currently a reference implementation of a
The main testing approach for the /stats page UI is manual testing.
For most UI testing, you can visit `/stats/realm/analytics` while
logged in as Iago (this is the server administrator view of stats for
a given realm). The only piece that you can't test here is the "Me"
buttons, which won't have any data. For those, you can instead log in
as the `shylock@analytics.ds` in the `analytics` realm and visit
`/stats` there (which is only a bit more work). Note that the
`analytics` realm is a shell with no streams, so you'll only want to
use it for testing the graphs.
@@ -221,9 +221,9 @@ Tips and tricks:
### /activity page
- There's a somewhat less developed /activity page, for server
administrators, showing data on all the realms on a server. To
access it, you need to have the `is_staff` bit set on your
UserProfile object. You can set it using `manage.py shell` and
editing the UserProfile object directly. A great future project is
to clean up that page's data sources, and make this a documented
interface.

View File

@@ -12,7 +12,7 @@ The steps below assume that you are familiar with the material
presented [here](https://packaging.python.org/tutorials/installing-packages/).
1. [Reconfigure the package][2], if need be (upgrade version
number, development status, and so on).
2. Create a [source distribution][3].

View File

@@ -4,8 +4,9 @@ Zulip uses a third party (Stripe) for billing, so working on the billing
system requires a little bit of setup.
To set up the development environment to work on the billing code:
- Create a Stripe account
- Go to <https://dashboard.stripe.com/account/apikeys>, and add the
publishable key and secret key as `stripe_publishable_key` and
`stripe_secret_key` to `zproject/dev-secrets.conf`.
@@ -15,13 +16,14 @@ Nearly all the billing-relevant code lives in `corporate/`.
Stripe makes pretty regular updates to their API. The process for upgrading
our code is:
- Go to <https://dashboard.stripe.com/developers> in your Stripe account.
- Upgrade the API version.
- Run `tools/test-backend --generate-stripe-fixtures`
- Fix any failing tests, and manually look through `git diff` to understand
the changes.
- If there are no material changes, commit the diff, and open a PR.
- Ask Rishi or Tim to go to <https://dashboard.stripe.com/developers> in the
zulipchat Stripe account, and upgrade the API version there.
We currently aren't set up to do version upgrades where there are breaking

View File

@@ -1,7 +1,7 @@
# Caching in Zulip
Like any product with good performance characteristics, Zulip makes
extensive use of caching. This article talks about our caching
strategy, focusing on how we use `memcached` (since it's the thing
people generally think about when they ask about how a server does
caching).
@@ -9,7 +9,7 @@ caching).
## Backend caching with memcached
On the backend, Zulip uses `memcached`, a popular key-value store, for
caching. Our `memcached` caching helps let us optimize Zulip's
performance and scalability, since most requests don't need to talk to
the database (which, even for a trivial query with everything on the
same machine, usually takes 3-10x as long as a memcached fetch).
@@ -37,11 +37,11 @@ Zulip's Django codebase, all one needs to do is call the standard
accessor functions for data (like `get_user` or `get_stream` to fetch
user and stream objects, or for view code, functions like
`access_stream_by_id`, which checks permissions), and everything will
work great. The data fetches automatically benefit from `memcached`
caching, since those accessor methods have already been written to
transparently use Zulip's memcached caching system, and the developer
doesn't need to worry about whether the data returned is up-to-date:
it is. In the following sections, we'll talk about how we make this
work.
As a sidenote, the policy of using these accessor functions wherever
@@ -51,7 +51,7 @@ also generally take care of details you might not think about
It's amazing how slightly tricky logic that's duplicated in several
places invariably ends up buggy in some of those places, and in
aggregate we call these accessor functions hundreds of times in
Zulip. But the caching is certainly a nice bonus.
### The core implementation
@@ -59,7 +59,7 @@ The `get_user` function is a pretty typical piece of code using this
framework; as you can see, it's very little code on top of our
`cache_with_key` decorator:
```python
def user_profile_cache_key_id(email: str, realm_id: int) -> str:
return u"user_profile:%s:%s" % (make_safe_digest(email.strip()), realm_id,)
@@ -73,22 +73,23 @@ def get_user(email: str, realm: Realm) -> UserProfile:
```
This decorator implements a pretty classic caching paradigm:
- The `user_profile_cache_key` function defines a unique map from a
canonical form of its arguments to a string. These strings are
namespaced (the `user_profile:` part) so that they won't overlap
with other caches, and encode the arguments so that two uses of this
cache won't overlap. In this case, a hash of the email address and
realm ID are those canonicalized arguments. (The `make_safe_digest`
is important to ensure we don't send special characters to
memcached). And we have two versions, depending on whether the caller
has access to a `Realm` or just a `realm_id`.
- When `get_user` is called, `cache_with_key` will compute the key,
and do a Django `cache_get` query for the key (which goes to
memcached). If the key is in the cache, it just returns the value.
Otherwise, it fetches the value from the database (using the actual
code in the body of `get_user`), and then stores the value back to
that memcached key before returning the result to the caller.
- Cache entries expire after the timeout; in this case, a week.
Though in frequently deployed environments like chat.zulip.org,
often cache entries will stop being used long before that, because
`KEY_PREFIX` is rotated every time we deploy to production; see
@@ -101,12 +102,12 @@ huge amount of otherwise very self-similar caching code.
The one thing to be really careful with in using `cache_with_key` is
that if an item is in the cache, the body of `get_user` (above) is
never called. This means some things that might seem like clever code
reuse are actually a really bad idea. For example:
- Don't add a `get_active_user` function that uses the same cache key
function as `get_user` (but with a different query that filters out
deactivated users). If one called `get_active_user` to access a
deactivated user, the right thing would happen, but if you called
`get_user` to access that user first, then the `get_active_user`
function would happily return the user from the cache, without ever
@@ -118,13 +119,13 @@ even if they feature the same objects.
### Cache invalidation after writes
The caching strategy described above works pretty well for anything
where the state it's storing is immutable (i.e. never changes). With
mutable state, one needs to do something to ensure that the Python
processes don't end up fetching stale data from the cache after a
write to the database.
We handle this using Django's longstanding
[post_save signals][post-save-signals] feature. Django signals let
you configure some code to run every time Django does something (for
`post_save`, right after any write to the database using Django's
`.save()`).
@@ -132,18 +133,18 @@ you configure some code to run every time Django does something (for
There's a handful of lines in `zerver/models.py` like these that
configure this:
```python
post_save.connect(flush_realm, sender=Realm)
post_save.connect(flush_user_profile, sender=UserProfile)
```
Once this `post_save` hook is registered, whenever one calls
`user_profile.save(...)` with a UserProfile object in our Django
project, Django will call the `flush_user_profile` function. Zulip is
systematic about using the standard Django `.save()` function for
modifying `user_profile` objects (and passing the `update_fields`
argument to `.save()` consistently, which encodes which fields on an
object changed). This means that all we have to do is write those
cache-flushing functions correctly, and people writing Zulip code
won't need to think about (or even know about!) the caching.
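As a rough illustration, a cache-flushing handler in this style can be as small as the minimal sketch below, assuming the key function shown earlier and Django's plain cache API; Zulip's real `flush_user_profile` covers many more keys and consults `update_fields`.
```python
# A minimal sketch, not Zulip's actual flush_user_profile: on every
# save, clear any cache key that embeds the saved object's data. The
# key function mirrors the user_profile_cache_key_id pattern above,
# and Django's plain cache API stands in for Zulip's cache helpers.
import hashlib
from typing import Any

from django.core.cache import cache


def user_profile_cache_key_id(email: str, realm_id: int) -> str:
    digest = hashlib.sha1(email.strip().encode()).hexdigest()
    return f"user_profile:{digest}:{realm_id}"


def flush_user_profile(sender: Any, **kwargs: Any) -> None:
    user_profile = kwargs["instance"]
    # If this object's data appears in a cache key, that key must go.
    cache.delete(user_profile_cache_key_id(user_profile.email, user_profile.realm_id))


# Registered once at import time, as in the post_save.connect() calls above.
```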
@@ -156,7 +157,7 @@ those keys from the cache (if present).
Maintaining these flush functions requires some care (every time we
add a new cache, we need to look through them), but overall it's a
pretty simple algorithm: If the changed data appears in any form in a
given cache key, that cache key needs to be cleared. E.g. the
`active_user_ids_cache_key` cache for a realm needs to be flushed
whenever a new user is created in that realm, or user is
deactivated/reactivated, even though it's just a list of IDs and thus
@@ -175,8 +176,8 @@ code in Zulip just needs to modify Django model objects and call
When upgrading a Zulip server, it's important to avoid having one
version of the code interact with cached objects from another version
that has a different data layout. In Zulip, we avoid this through
some clever caching strategies. Each "deployment directory" for Zulip
in production has inside it a `var/remote_cache_prefix` file,
containing a cache prefix (`KEY_PREFIX` in the code) that is
automatically appended to the start of any cache keys accessed by that
@@ -188,19 +189,19 @@ from inconsistent versions of the source code / data formats in the cache.
### Automated testing and memcached
For Zulip's `test-backend` unit tests, we use the same strategy. In
particular, we just edit `KEY_PREFIX` before each unit test; this
means each of the thousands of test cases in Zulip has its own
independent memcached key namespace on each run of the unit tests. As
a result, we never have to worry about memcached caching causing
problems across multiple tests.
This is a really important detail. It makes it possible for us to do
assertions in our tests on the number of database queries or memcached
queries that are done as part of a particular function/route, and have
those checks consistently get the same result (those tests are great
for catching bugs where we accidentally do database queries in a
loop). And it means one can debug failures in the test suite without
having to consider the possibility that memcached is somehow confusing
the situation.
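The core of the idea fits in a few lines; this is an illustrative sketch only (the names are assumptions, and Zulip's real logic lives in its cache module and test runner), not the actual implementation.
```python
# Illustrative sketch: rotate a global KEY_PREFIX before each test so
# every test case gets its own memcached namespace. The names here are
# assumptions; Zulip's real implementation differs.
import uuid

KEY_PREFIX: str = ""


def bump_key_prefix() -> None:
    """Called before each test case runs."""
    global KEY_PREFIX
    KEY_PREFIX = f"test:{uuid.uuid4().hex}:"


def full_cache_key(key: str) -> str:
    # Every cache read and write goes through this, so rotating
    # KEY_PREFIX effectively empties the cache from the test's view.
    return KEY_PREFIX + key
```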
@@ -225,8 +226,8 @@ You can run the server with that behavior disabled using
### Performance
One thing to be careful about with memcached queries is to avoid doing
them in loops (the same applies for database queries!). Instead, one
should use a bulk query. We have a fancy function,
`generate_bulk_cached_fetch`, which is super magical and handles this
for us, with support for a bunch of fancy features like marshalling
data before/after going into the cache (e.g. to compress `message`
@@ -236,12 +237,13 @@ objects to minimize data transfer between Django and memcached).
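To illustrate the loop-versus-bulk point from the performance discussion above, here is a hedged sketch using Django's plain cache API; Zulip's `generate_bulk_cached_fetch` layers key generation, database fallback, and marshalling on top of the same idea.
```python
# Hedged sketch of a bulk cached fetch using Django's plain cache API;
# not Zulip's generate_bulk_cached_fetch, which adds key generation,
# database fallback, and marshalling of values.
from typing import Callable, Dict, List, TypeVar

from django.core.cache import cache

T = TypeVar("T")


def bulk_cached_fetch(
    keys: List[str],
    fetch_missing: Callable[[List[str]], Dict[str, T]],
) -> Dict[str, T]:
    # One round trip for all cache hits, instead of one query per key.
    results: Dict[str, T] = cache.get_many(keys)
    missing = [key for key in keys if key not in results]
    if missing:
        # One database query (or similar) for everything not cached...
        fetched = fetch_missing(missing)
        # ...and one round trip to write it all back to memcached.
        cache.set_many(fetched)
        results.update(fetched)
    return results
```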
We generally try to avoid in-process backend caching in Zulip's Django
codebase, because every Zulip production installation involves
multiple servers. We do have a few, however:
- `per_request_display_recipient_cache`: A cache flushed at the start
of every request; this simplifies correctly implementing our goal of
not repeatedly fetching the "display recipient" (e.g. stream name)
for each message in the `GET /messages` codebase.
- Caches of various data, like the `SourceMap` object, that are
expensive to construct, not needed for most requests, and don't
change once a Zulip server has been deployed in production.
@@ -252,13 +254,12 @@ apps; details like which users exist, with metadata like names and
avatars, similar details for streams, recent message history, etc.
This data is fetched in the `/register` endpoint (or `page_params`
for the webapp), and kept correct over time. The key to keeping this
state up to date is Zulip's
[real-time events system](../subsystems/events-system.md), which
allows the server to notify clients whenever state that might be
cached by clients is changed. Clients are responsible for handling
the events, updating their state, and rerendering any UI components
that might display the modified state.
[post-save-signals]: https://docs.djangoproject.com/en/2.0/ref/signals/#post-save

View File

@@ -1,7 +1,7 @@
# Clients in Zulip
`zerver.models.Client` is Zulip's analogue of the HTTP User-Agent
header (and is populated from User-Agent). It exists for use in
analytics and other places to provide human-readable summary data
about "which Zulip client" was used for an operation (e.g. was it the
Android app, the desktop app, or a bot?).
@@ -19,7 +19,7 @@ A `Client` is used to sort messages into client categories such as
Generally, integrations in Zulip should declare a unique User-Agent,
so that it's easy to figure out which integration is involved when
debugging an issue. For incoming webhook integrations, we do that
conveniently via the auth decorators (as we will describe shortly);
other integrations generally should set the first User-Agent element
on their HTTP requests to something of the form

View File

@@ -1,22 +1,22 @@
# Provisioning and third-party dependencies
Zulip is a large project, with well over 100 third-party dependencies,
and managing them well is essential to the quality of the project. In
this document, we discuss the various classes of dependencies that
Zulip has, and how we manage them. Zulip's dependency management has
some really nice properties:
- **Fast provisioning**. When switching to a different commit in the
Zulip project with the same dependencies, it takes under 5 seconds
to re-provision a working Zulip development environment after
switching. If there are new dependencies, one only needs to wait to
download the new ones, not all the pre-existing dependencies.
- **Consistent provisioning**. Every time a Zulip development or
production environment is provisioned/installed, it should end up
using the exactly correct versions of all major dependencies.
- **Low maintenance burden**. To the extent possible, we want to
avoid manual work and keeping track of things that could be
automated. This makes it easy to keep running the latest versions
of our various dependencies.
The purpose of this document is to detail all of Zulip's third-party
@@ -25,11 +25,11 @@ dependencies and how we manage their versions.
## Provisioning
We refer to "provisioning" as the process of installing and
configuring the dependencies of a Zulip development environment. It's
done using `tools/provision`, and the output is conveniently logged by
`var/log/provision.log` to help with debugging. Provisioning makes
use of a lot of caching. Some of those caches are not immune to being
corrupted if you mess around with files in your repository a lot. We
have `tools/provision --force` to (still fairly quickly) rerun most
steps that would otherwise have been skipped due to caching.
@@ -42,13 +42,13 @@ also run an initial provision the first time only.
In `version.py`, we have a special parameter, `PROVISION_VERSION`,
which is used to help ensure developers don't spend time debugging
test/linter/etc. failures that actually were caused by the developer
rebasing and forgetting to provision. `PROVISION_VERSION` has a
format of `x.y`; when `x` doesn't match the value from the last time
the user provisioned, or `y` is higher than the value from last
time, most Zulip tools will crash early and ask the user to provision.
This has empirically made a huge impact on how often developers spend
time debugging a "weird failure" after rebasing that had an easy
solution. (Of course, the other key part of achieving this is all the
work that goes into making sure that `provision` reliably leaves the
development environment in a good state.)
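The comparison itself is simple; below is a hedged sketch of the rule described above, illustrative only and not the code Zulip's tools actually run.
```python
# Illustrative sketch of the PROVISION_VERSION check described above,
# not the code Zulip's tooling actually runs.
def provision_is_current(repo_version: str, provisioned_version: str) -> bool:
    # PROVISION_VERSION has the form "x.y": a different x, or a y newer
    # than what was last provisioned, means "please re-provision".
    repo_x, repo_y = (int(part) for part in repo_version.split("."))
    last_x, last_y = (int(part) for part in provisioned_version.split("."))
    return repo_x == last_x and repo_y <= last_y


assert provision_is_current("102.3", "102.5")       # already provisioned past this
assert not provision_is_current("102.6", "102.5")   # y is newer: provision
assert not provision_is_current("103.0", "102.9")   # x changed: provision
```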
@@ -58,29 +58,29 @@ require re-running provision, so don't forget about it!
## Philosophy on adding third-party dependencies
In the Zulip project, we take a pragmatic approach to third-party
dependencies. Overall, if a third-party project does something well
that Zulip needs to do (and has an appropriate license), we'd love to
use it rather than reinventing the wheel. If the third-party project
needs some small changes to work, we prefer to make those changes and
contribute them upstream. When the upstream maintainer is slow to
respond, we may use a fork of the dependency until the code is merged
upstream; as a result, we usually have a few packages in
`requirements.txt` that are installed from a GitHub URL.
What we look for in choosing dependencies is whether the project is
well-maintained. Usually one can tell fairly quickly from looking at
a project's issue tracker how well-managed it is: a quick look at how
the issue tracker is managed (or not) and the test suite is usually
enough to decide if a project is going to be a high-maintenance
dependency or not. That said, we do still take on some smaller
dependencies that don't have a well-managed project, if we feel that
using the project will still be a better investment than writing our
own implementation of that project's functionality. We've adopted a
few projects in the past that had a good codebase but whose maintainer
no longer had time for them.
One case where we apply added scrutiny to third-party dependencies is
JS libraries. They are a particularly important concern because we
want to keep the Zulip webapp's JS bundle small, so that Zulip
continues to load quickly on systems with low network bandwidth.
We'll look at large JS libraries with much greater scrutiny for
@@ -94,20 +94,21 @@ For the third-party services like PostgreSQL, Redis, Nginx, and RabbitMQ
that are documented in the
[architecture overview](../overview/architecture-overview.md), we rely on the
versions of those packages provided alongside the Linux distribution
on which Zulip is deployed. Because Zulip
[only supports Ubuntu in production](../production/requirements.md), this
usually means `apt`, though we do support
[other platforms in development](../development/setup-advanced.md). Since
we don't control the versions of these dependencies, we avoid relying
on specific versions of these packages wherever possible.
The exact lists of `apt` packages needed by Zulip are maintained in a
few places:
- For production, in our Puppet configuration, `puppet/zulip/`, using
the `Package` and `SafePackage` directives.
- For development, in `SYSTEM_DEPENDENCIES` in `tools/lib/provision.py`.
- The packages needed to build a Zulip virtualenv, in
`VENV_DEPENDENCIES` in `scripts/lib/setup_venv.py`. These are
separate from the rest because (1) we may need to install a
virtualenv before running the more complex scripts that, in turn,
install other dependencies, and (2) because that list is shared
@@ -121,79 +122,79 @@ extension, used by our [full-text search](full-text-search.md).
We manage Python packages via the Python-standard `requirements.txt`
system and virtualenvs, but there's a number of interesting details
about how Zulip makes this system work well for us that are worth
highlighting. The system is largely managed by the code in
`scripts/lib/setup_venv.py`
- **Using `pip` to manage dependencies**. This is standard in the
Python ecosystem, and means we only need to record a list of
versions in a `requirements.txt` file to declare what we're using.
Since we have a few different installation targets, we maintain
several `requirements.txt` format files in the `requirements/`
directory (e.g. `dev.in` for development, `prod.in` for
production, `docs.in` for ReadTheDocs, `common.in` for the vast
majority of packages common to prod and development, etc.). We use
`pip install --no-deps` to ensure we only install the packages we
explicitly declare as dependencies.
- **virtualenv with pinned versions**. For a large application like
Zulip, it is important to ensure that we're always using consistent,
predictable versions of all of our Python dependencies. To ensure
this, we install our dependencies in a [virtualenv][] that contains
only the packages and versions that Zulip needs, and we always pin
exact versions of our dependencies in our `requirements.txt` files.
We pin exact versions, not minimum versions, so that installing
Zulip won't break if a dependency makes a buggy release. A side
effect is that it's easy to debug problems caused by dependency
upgrades, since we're always doing those upgrades with an explicit
commit updating the `requirements/` directory.
- **Pinning versions of indirect dependencies**. We "pin" or "lock"
the versions of our indirect dependencies files with
`tools/update-locked-requirements` (powered by `pip-compile`). What
this means is that we have some "source" requirements files, like
`requirements/common.in`, that declare the packages that Zulip
depends on directly. Those packages have their own recursive
dependencies. When adding or removing a dependency from Zulip, one
simply edits the appropriate "source" requirements files, and then
runs `tools/update-locked-requirements`. That tool will use
`pip-compile` to generate the locked requirements files like
`prod.txt`, `dev.txt` etc files that explicitly declare versions of
all of Zulip's recursive dependencies. For indirect dependencies
(i.e. dependencies not explicitly declared in the source
requirements files), it provides helpful comments explaining which
direct dependency (or dependencies) needed that indirect dependency.
The process for using this system is documented in more detail in
`requirements/README.md`.
- **Caching of virtualenvs and packages**. To make updating the
dependencies of a Zulip installation efficient, we maintain a cache
of virtualenvs named by the hash of the relevant `requirements.txt`
file (`scripts/lib/hash_reqs.py`; see the sketch after this list).
These caches live under
`/srv/zulip-venv-cache/<hash>`. That way, when re-provisioning a
development environment or deploying a new production version with
the same Python dependencies, no downloading or installation is
required: we just use the same virtualenv. When the only changes
are upgraded versions, we'll use [virtualenv-clone][] to clone the
most similar existing virtualenv and then just upgrade the packages
needed, making small version upgrades extremely efficient. And
finally, we use `pip`'s built-in caching to ensure that a specific
version of a specific package is only downloaded once.
- **Garbage-collecting caches**. We have a tool,
`scripts/lib/clean_venv_cache.py`, which will clean old cached
virtualenvs that are no longer in use. In production, the algorithm
preserves recent virtualenvs as well as those in use by any current
production deployment directory under `/home/zulip/deployments/`.
This helps ensure that a Zulip installation doesn't leak large
amounts of disk over time.
- **Scripts**. Often, we want a script running in production to use
the Zulip virtualenv. To make that work without a lot of duplicated
code, we have a helpful function,
`scripts.lib.setup_path.setup_path`, which on import will put the
currently running Python script into the Zulip virtualenv. This is
called by `./manage.py` to ensure that our Django code always uses
the correct virtualenv as well.
- **Mypy type checker**. Because we're using mypy in a strict mode,
when you add use of a new Python dependency, you usually need to
either add stubs to the `stubs/` directory for the library, or edit
`mypy.ini` in the root of the Zulip project to configure
`ignore_missing_imports` for the new library. See
[our mypy docs][mypy-docs] for more details.
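The bullet on caching virtualenvs above promises a sketch; here is an illustrative one, where the paths and hash choice are assumptions and the real logic lives in `scripts/lib/hash_reqs.py`.
```python
# Illustrative sketch, not the actual scripts/lib/hash_reqs.py: name a
# cached virtualenv after a hash of the locked requirements file, so
# identical pinned dependencies map to the same cache entry.
import hashlib
from pathlib import Path

VENV_CACHE_ROOT = Path("/srv/zulip-venv-cache")


def requirements_hash(requirements_file: Path) -> str:
    # Any change to the pinned versions changes the hash, and therefore
    # selects (or creates) a different cached virtualenv.
    return hashlib.sha1(requirements_file.read_bytes()).hexdigest()


def cached_venv_path(requirements_file: Path) -> Path:
    return VENV_CACHE_ROOT / requirements_hash(requirements_file)
```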
### Upgrading packages
@@ -202,7 +203,7 @@ See the [README][requirements-readme] file in `requirements/` directory
to learn how to upgrade a single Python package.
[mypy-docs]: ../testing/mypy.md
[requirements-readme]: https://github.com/zulip/zulip/blob/main/requirements/README.md#requirements
[stack-overflow]: https://askubuntu.com/questions/8653/how-to-keep-processes-running-after-ending-ssh-session
[caching]: https://help.github.com/en/articles/caching-your-github-password-in-git
@@ -212,31 +213,31 @@ We use the same set of strategies described for Python dependencies
for most of our JavaScript dependencies, so we won't repeat the
reasoning here.
- In a fashion very analogous to the Python codebase,
`scripts/lib/node_cache.py` manages cached `node_modules`
directories in `/srv/zulip-npm-cache`. Each is named by its hash,
computed by the `generate_sha1sum_node_modules` function.
`scripts/lib/clean_node_cache.py` handles garbage-collection.
- We use [yarn][], a `pip`-like tool for JavaScript, to download most
JavaScript dependencies. Yarn talks to the standard [npm][]
repository. We use the standard `package.json` file to declare our
direct dependencies, with sections for development and
production. Yarn takes care of pinning the versions of indirect
dependencies in the `yarn.lock` file; `yarn install` updates the
`yarn.lock` files.
- `tools/update-prod-static`. This process is discussed in detail in
the [static asset pipeline](../subsystems/html-css.html#static-asset-pipeline)
article, but we don't use the `node_modules` directories directly in
production. Instead, static assets are compiled using our static
asset pipeline and it is the compiled assets that are served
directly to users. As a result, we don't ship the `node_modules`
directory in a Zulip production release tarball, which is a good
thing, because doing so would more than double the size of a Zulip
release tarball.
- **Checked-in packages**. In contrast with Python, we have a few
JavaScript dependencies that we have copied into the main Zulip
repository under `static/third`, often with patches. These date
from an era before `npm` existed. It is a project goal to eliminate
these checked-in versions of dependencies and instead use versions
managed by the npm repositories.
@@ -248,13 +249,13 @@ its version) and `scripts/lib/third/install-yarn.sh` (the standard
installer for `yarn`, modified to support installing to a path that is
not the current user's home directory).
- `nvm` has its own system for installing each version of `node` at
its own path, which we use, though we install a `/usr/local/bin/node`
wrapper to access the desired version conveniently and efficiently
(`nvm` has a lot of startup overhead).
- `install-yarn.sh` is configured to install `yarn` at
`/srv/zulip-yarn`. We don't do anything special to try to manage
multiple versions of `yarn`.
## Other third-party and generated files
@@ -266,9 +267,9 @@ maintain them.
### Emoji
Zulip uses the [iamcal emoji data package][iamcal] for its emoji data
and sprite sheets. We download this dependency using `npm`, and then
have a tool, `tools/setup/build_emoji`, which reformats the emoji data
into the files under `static/generated/emoji`. Those files are in
turn used by our [Markdown processor](../subsystems/markdown.md) and
`tools/update-prod-static` to make Zulip's emoji work in the various
environments where they need to be displayed.
@@ -277,9 +278,9 @@ Since processing emoji is a relatively expensive operation, as part of
optimizing provisioning, we use the same caching strategy for the
compiled emoji data as we use for virtualenvs and `node_modules`
directories, with `scripts/lib/clean_emoji_cache.py` responsible for
garbage-collection. This caching and garbage-collection is required
because a correct emoji implementation involves over 1000 small image
files and a few large ones. There is a more extended article on our
[emoji infrastructure](emoji.md).
### Translations data
@@ -287,7 +288,7 @@ files and a few large ones. There is a more extended article on our
Zulip's [translations infrastructure](../translating/translating.md) generates
several files from the source data, which we manage similar to our
emoji, but without the caching (and thus without the
garbage-collection). New translations data is downloaded from
Transifex and then compiled to generate both the production locale
files and also language data in `locale/language*.json` using
`manage.py compilemessages`, which extends the default Django
@@ -296,7 +297,7 @@ implementation of that tool.
### Pygments data
The list of languages supported by our Markdown syntax highlighting
comes from the [pygments][] package. `tools/setup/build_pygments_data` is
responsible for generating `static/generated/pygments_data.json` so that
our JavaScript Markdown processor has access to the supported list.
@@ -305,16 +306,16 @@ our JavaScript Markdown processor has access to the supported list.
When making changes to Zulip's provisioning process or dependencies,
usually one needs to think about making changes in 3 places:
- `tools/lib/provision.py`. This is the main provisioning script,
used by most developers to maintain their development environment.
- `docs/development/dev-setup-non-vagrant.md`. This is our "manual installation"
documentation. Strategically, we'd like to move the support for more
versions of Linux from here into `tools/lib/provision.py`.
- Production. Our tools for compiling/generating static assets need
to be called from `tools/update-prod-static`, which is called by
`tools/build-release-tarball` (for doing Zulip releases) as well as
`tools/upgrade-zulip-from-git` (for deploying a Zulip server off of
`main`).
[virtualenv]: https://virtualenv.pypa.io/en/stable/
[virtualenv-clone]: https://github.com/edwardgeorge/virtualenv-clone/

View File

@@ -1,32 +1,32 @@
# Upgrading Django
This article documents notes on the process for upgrading Zulip to
new major versions of Django. Here are the steps:
- Carefully read the Django upstream changelog, and `git grep` to
check if we're using anything deprecated or significantly modified
and put them in an issue (and then start working through them).
Also, note any new features we might want to use after the upgrade,
and open an issue listing them;
[example](https://github.com/zulip/zulip/issues/2564).
- Start submitting PRs to do any deprecation-type migrations that work
on both the old and new version of Django. The goal here is to have
the actual cutover commit be as small as possible, and to test as
much of the changes for the migration as we can independently from
the big cutover.
- Check the version support of the third-party Django packages we use
(`git grep django requirements/` to see a list), upgrade any as
needed and file bugs upstream for any that lack support. Look into
fixing said bugs.
- Look at the pieces of Django code that we've copied and then
adapted, and confirm whether Django has any updates to the modified
code we should apply. Partial list:
- `CursorDebugWrapper`, which we have a modified version of in
`zerver/lib/db.py`. See
[the issue for contributing this upstream](https://github.com/zulip/zulip/issues/974)
- `PasswordResetForm` and any other forms we import from
`django.contrib.auth.forms` in `zerver/forms.py` (which has all of
our Django forms).
- Our AsyncDjangoHandler class has some code copied from the core
Django handlers code; look at whether that code was changed in
Django upstream.

View File

@@ -11,32 +11,33 @@ our instructions for
On to the documentation. Zulip's email system is fairly straightforward,
with only a few things you need to know to get started.
- All email templates are in `templates/zerver/emails/`. Each email has three
template files: `<template_prefix>.subject.txt`, `<template_prefix>.txt`, and
`<template_prefix>.source.html`. Email templates, along with all other templates
in the `templates/` directory, are Jinja2 templates.
- Most of the CSS and HTML layout for emails is in `email_base.html`. Note
that email has to ship with all of its CSS and HTML, so nothing in
`static/` is useful for an email. If you're adding new CSS or HTML for an
email, there's a decent chance it should go in `email_base.html`.
- All email is eventually sent by `zerver.lib.send_email.send_email`. There
are several other functions in `zerver.lib.send_email`, but all of them
eventually call the `send_email` function. The most interesting one is
`send_future_email`. The `ScheduledEmail` entries are eventually processed
by a supervisor job that runs `zerver/management/commands/deliver_scheduled_emails.py`.
- Always use `user_profile.delivery_email`, not `user_profile.email`,
when passing data into the `send_email` library. The
`user_profile.email` field may not always be valid.
- A good way to find a bunch of example email pathways is to `git grep` for
`zerver/emails` in the `zerver/` directory.
One slightly complicated decision you may have to make when adding an email
is figuring out how to schedule it. There are 3 ways to schedule email.
- Send it immediately, in the current Django process, e.g. by calling
`send_email` directly. An example of this is the `confirm_registration`
email.
- Add it to a queue. An example is the `invitation` email.
- Send it (approximately) at a specified time in the future, using
`send_future_email`. An example is the `followup_day2` email.
Email takes about a quarter second per email to process and send. Generally
@@ -48,15 +49,15 @@ from a queue. Documentation on our queueing system is available
## Development and testing
All the emails sent in the development environment can be accessed by
visiting `/emails` in the browser. The way that this works is that
we've set the email backend (aka what happens when you call the email
`.send()` method in Django) in the development environment to be our
custom backend, `EmailLogBackEnd`. It does the following:
- Logs any sent emails to `var/log/email_content.log`. This log is
displayed by the `/emails` endpoint
(e.g. http://zulip.zulipdev.com:9991/emails).
- Prints a friendly message on the console advertising `/emails` to make
this nice and discoverable.
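For orientation, here is a minimal sketch of a custom development email backend in this spirit; it is not the real `EmailLogBackEnd` (which lives in `zproject/email_backends.py` and does more), just an illustration of how a Django backend can log instead of sending.
```python
# A minimal sketch, not Zulip's actual EmailLogBackEnd: a Django email
# backend that appends messages to a log file instead of sending them,
# and points the developer at /emails. The log path is an assumption.
from typing import List

from django.core.mail.backends.base import BaseEmailBackend
from django.core.mail.message import EmailMessage


class LoggingEmailBackend(BaseEmailBackend):
    log_path = "var/log/email_content.log"

    def send_messages(self, email_messages: List[EmailMessage]) -> int:
        with open(self.log_path, "a") as log:
            for message in email_messages:
                log.write(f"To: {message.to}\nSubject: {message.subject}\n\n{message.body}\n\n")
        print("Emails logged; view them at /emails in the development environment.")
        return len(email_messages)
```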
### Testing in a real email client
@@ -65,7 +66,7 @@ You can also forward all the emails sent in the development
environment to an email account of your choice by clicking on
**Forward emails to an email account** on the `/emails` page. This
feature can be used for testing how the emails get rendered by
actual email clients. This is important because web email clients
have limited CSS functionality, autolinkify things, and otherwise
mutate the HTML email one can see previewed on `/emails`.
@@ -81,16 +82,16 @@ Once you have the login credentials of the SMTP provider, since there
is no `/etc/zulip/settings.py` in development, configure it using the
following keys in `zproject/dev-secrets.conf`:
- `email_host` - SMTP hostname.
- `email_port` - SMTP port.
- `email_host_user` - Username of the SMTP user.
- `email_password` - Password of the SMTP user.
- `email_use_tls` - Set to `true` for most providers. Else, don't set any value.
Here is an example of how `zproject/dev-secrets.conf` might look if
you are using Gmail.
```ini
email_host = smtp.gmail.com
email_port = 587
email_host_user = username@gmail.com
@@ -103,18 +104,18 @@ email_password = gmail_password
### Notes
- After changing any HTML email or `email_base.html`, you need to run
`scripts/setup/inline_email_css.py` for the changes to be reflected
in the development environment. The script generates files like
`templates/zerver/emails/compiled/<template_prefix>.html`.
- Images won't be displayed in a real email client unless you change
the `base_image_uri` used for emails to a public URL such as
`https://chat.zulip.org/static/images/emails` (image links to
`localhost:9991` aren't allowed by modern email providers). See
`zproject/email_backends.py` for more details.
- While running the backend test suite, we use
`django.core.mail.backends.locmem.EmailBackend` as the email
backend. The `locmem` backend stores messages in a special attribute
of the `django.core.mail` module, "outbox". The outbox attribute is
@@ -123,18 +124,18 @@ email_password = gmail_password
## Email templates
Zulip's email templates live under `templates/zerver/emails`. Email
templates are a messy problem, because on the one hand, you want nice,
readable markup and styling, but on the other, email clients have very
limited CSS support and generally require us to inject any CSS we're
using in the emails into the email as inline styles. And then you
also need both plain-text and HTML emails. We solve these problems
using a combination of the
[premailer](https://github.com/peterbe/premailer) library and having
two copies of each email (plain-text and HTML).
So for each email, there are two source templates: the `.txt` version
(for plain-text format) as well as a `.source.html` template. The
`.txt` version is used directly; while the `.source.html` template is
processed by `scripts/setup/inline_email_css.py` (generating a `.html` template
under `templates/zerver/emails/compiled`); that tool (powered by
@@ -143,19 +144,19 @@ under `templates/zerver/emails/compiled`); that tool (powered by
What this means is that when you're editing emails, **you need to run
`scripts/setup/inline_email_css.py`** after making changes to see the changes
take effect. Our tooling automatically runs this as part of
`tools/provision` and production deployments; but you should bump
`PROVISION_VERSION` when making changes to emails that change test
behavior, or other developers will get test failures until they
provision.
While this model is great for the markup side, it isn't ideal for
[translations](../translating/translating.md). The Django
translation system works with exact strings, and having different new
markup can require translators to re-translate strings, which can
result in problems like needing 2 copies of each string (one for
plain-text, one for HTML) and/or needing to re-translate a bunch of
strings after making a CSS tweak. Re-translating these strings is
relatively easy in Transifex, but annoying.
So when writing email templates, we try to translate individual
@@ -1,28 +1,28 @@
# Emoji
Emoji seem like a simple idea, but there's actually a ton of
complexity that goes into an effective emoji implementation. This
document discusses a number of these issues.
Currently, Zulip supports these four display formats for emoji:
- Google modern
- Google classic
- Twitter
- Plain text
## Emoji codes
The Unicode standard has various ranges of characters set aside for
emoji. So you can put emoji in your terminal using actual Unicode
characters like 😀 and 👍. If you paste those into Zulip, Zulip will
render them as the corresponding emoji image.
However, the Unicode committee did not standardize on a set of
human-readable names for emoji. So, for example, when using the
popular `:` based style for entering emoji from the keyboard, we have
to decide whether to use `:angry:` or `:angry_face:` to represent an
angry face. Different products use different approaches, but for
purposes like emoji pickers or autocomplete, you definitely want to
pick exactly one of these names, since otherwise users will always be
seeing duplicates of a given emoji next to each other.
@@ -32,9 +32,9 @@ section on [picking emoji names](#picking-emoji-names) below.
### Custom emoji
Zulip supports custom user-uploaded emoji. We manage those by having
the name of the emoji be its "emoji code", and using an emoji_type
field to keep track of it. We are in the process of migrating Zulip
to refer to these emoji only by ID, which is a requirement for being
able to support deprecating old realm emoji in a sensible way.
@@ -42,17 +42,17 @@ able to support deprecating old realm emoji in a sensible way.
We use the [iamcal emoji data package][iamcal] to provide sprite
sheets and individual images for our emoji, as well as a data set of
emoji categories, code points, etc. The sprite sheets are used
by the Zulip webapp to display emoji in messages, emoji reactions,
etc. However, we can't use the sprite sheets in some contexts, such
as missed-message and digest emails, that need to have self-contained
assets. For those, we use individual emoji files under
`static/generated/emoji`. The structure of that repository contains
both files named after the Unicode representation of emoji (as actual
image files) as well as symlinks pointing to those emoji.
We need to maintain those both for the names used in the iamcal emoji
data set as well as our old emoji data set (`emoji_map.json`). Zulip
has a tool, `tools/setup/emoji/build_emoji`, that combines the
`emoji.json` file from iamcal with the old `emoji_map.json` data set
to construct the various symlink farms and output files described
@@ -64,30 +64,31 @@ The `build_emoji` tool generates the set of files under
`static/generated/emoji` is a symlink to that tree; we do this in
order to cache old versions to make provisioning and production
deployments super fast in the common case that we haven't changed the
emoji tooling). See [our dependencies document](../subsystems/dependencies.md)
for more details on this strategy.
The emoji tree generated by this process contains several important elements:
- `emoji_codes.json`: A set of mappings used by the Zulip frontend to
understand what Unicode emoji exist and what their shortnames are,
used for autocomplete, emoji pickers, etc. This has been
deduplicated using the logic in
`tools/setup/emoji/emoji_setup_utils.py` to generally only have
`:angry:` and not also `:angry_face:`, since having both is ugly and
pointless for purposes like autocomplete and emoji pickers.
- `images/emoji/unicode/*.png`: A farm of emoji
- `images/emoji/*.png`: A farm of symlinks from emoji names to the
`images/emoji/unicode/` tree. This is used to serve individual emoji
images, as well as for the
[backend Markdown processor](../subsystems/markdown.md) to know which emoji
names exist and what Unicode emoji / images they map to. In this
tree, we currently include all of the emoji in `emoji-map.json`;
this means that if you send `:angry_face:`, it won't autocomplete,
but will still work (but not in previews).
- Some CSS and PNGs for the emoji spritesheets, used in Zulip for
emoji pickers where we would otherwise need to download over 1000
individual emoji images (which would cause a browser performance
problem). We have multiple spritesheets: one for each emoji
provider that we support (Google, Twitter, EmojiOne, and Apple).
[iamcal]: https://github.com/iamcal/emoji-data
@@ -102,36 +103,36 @@ The following set of considerations is not comprehensive, but has a few
principles that were applied to the current set of names. We use (strong),
(medium), and (weak) to denote how strong a consideration it is.
- Even with over 1000 symbols, emoji feels surprisingly sparse as a language,
and more often than not, if you search for something, you don't find an
appropriate emoji for it. So a primary goal for our set of names is to
maximize the number of situations in which the user finds an emoji that
feels appropriate. (strong)
- Conversely, we remove generic words that will gum up the typeahead. So
`:outbox:` instead of `:outbox_tray:`. Each word should count. (medium)
- We aim for the set of names to be as widely culturally applicable as
possible, even if the glyphs are not. So `:statue:` instead of
`:new_york:` for the statue of liberty, and `:tower:` instead of
`:tokyo_tower:`. (strong)
- We remove unnecessary gender descriptions. So `:ok_signal:` instead of
`:ok_woman:`. (strong)
- We don't add names that could be inappropriate in school or work
environments, even if the use is common on the internet. For example, we
have not added `:butt:` for `:peach:`, or `:cheers:` for
`:beers:`. (strong)
- Names should be compatible with the four emoji sets we support, but don't
have to be compatible with any other emoji set. (medium)
- We try not to use a creative canonical_name for emoji that are likely to
be familiar to a large subset of users. This largely applies to certain
faces. (medium)
- The set of names should be compatible with the iamcal, gemoji, and Unicode
names. Compatible here means that if there is an emoji name a user knows
from one of those sets, and the user searches for the key word of that
name, they will get an emoji in our set. It is okay if this emoji has a
@@ -142,26 +143,26 @@ Much of the work of picking names went into the first bullet above: making
the emoji language less sparse. Some tricks and heuristics that were used
for that:
- There are many near duplicates, like `:dog:` and `:dog_face:`, or
`:mailbox:`, `:mailbox_with_mail:`, and `:mailbox_with_no_mail:`. In these
cases we repurpose the duplicates to be as useful as we can, like `:dog:`
and `:puppy:`, and `:mailbox:`, `:unread_mail:`, `:inbox_zero:` for the
ones above. There isn't a ton of flexibility, since we can't change the
glyphs. But in most cases we have been able to come up with something.
- Many emoji have commonly understood meanings among people that use emoji a
lot, and there are websites and articles that document some of these
meanings. A commonly understood meaning can be a great thing to add as an
alternate name, since often it is a sign that the meaning is addressing a
real gap in the emoji system.
- Many emoji names are unnecessarily specific in iamcal/etc, like
`:flower_playing_cards:`, `:izakaya_lantern:`, or `:amphora:`. Renaming
them to `:playing_cards:`, `:lantern:`, and `:vase:` makes them more
widely usable. In such cases we often keep the specific name as an
alternate.
- If there are natural things someone might type, like `:happy:`, we try to
find an emoji to match. This extends to things that someone might not
think to type, but as soon as someone in the organization discovers it, it
could get wide use, like `:working_on_it:`. Good future work would be to
@@ -171,7 +172,7 @@ for that:
Other notes
- Occasionally there are near duplicates where we don't have ideas for
useful names for the second one. In that case we sometimes remove the
emoji rather than have two nearly identical glyphs in the emoji picker and
typeahead. For instance, we kept `:spiral_notepad:` and dropped
@@ -179,7 +180,7 @@ Other notes
of glyphs look very different, we'll find two names that allow them both
to stay.
- We removed many of the moons and clocks, to make the typeahead experience
better when searching for something that catches all the moons or all the
clocks. We kept all the squares and diamonds and other shapes, even though
they have the same problem, since they are commonly used to make emoji art
@@ -1,7 +1,7 @@
# Real-time push and events
Zulip's "events system" is the server-to-client push system that
powers our real-time sync. This document explains how it works; to
read an example of how a complete feature using this system works,
check out the
[new application feature tutorial](../tutorials/new-feature-tutorial.md).
@@ -9,23 +9,23 @@ check out the
Any single-page web application like Zulip needs a story for how
changes made by one client are synced to other clients, though having
a good architecture for this is particularly important for a chat tool
like Zulip, since the state is constantly changing. When we talk
about clients, think a browser tab, mobile app, or API bot that needs
to receive updates to the Zulip data. The simplest example is a new
message being sent by one client; other clients must be notified in
order to display the message. But a complete application like Zulip
has dozens of different types of data that need to be synced to other
clients, whether it be new streams, changes in a user's name or
avatar, settings changes, etc. In Zulip, we call these updates that
need to be sent to other clients **events**.
An important thing to understand when designing such a system is that
events need to be synced to every client that has a copy of the old
data if one wants to avoid clients displaying inaccurate data to
users. So if a user has two browser windows open and sends a message,
every client controlled by that user as well as any recipients of the
message, including both of those two browser windows, will receive
that event. (Technically, we don't need to send events to the client
that triggered the change, but this approach saves a bunch of
unnecessary duplicate UI update code, since the client making the
change can just use the same code as every other client, maybe plus a
@@ -34,11 +34,11 @@ little notification that the operation succeeded).
Architecturally, there are a few things needed to make a successful
real-time sync system work:
- **Generation**. Generating events when changes happen to data, and
determining which users should receive each event.
- **Delivery**. Efficiently delivering those events to interested
clients, ideally in an exactly-once fashion.
- **UI updates**. Updating the UI in the client once it has received
events from the server.
Reactive JavaScript libraries like React and Vue can help simplify the
@@ -51,30 +51,30 @@ problems in a scalable, correct, and predictable way.
## Generation system
Zulip's generation system is built around a Python function,
`send_event(realm, event, users)`. It accepts the realm (used for
sharding), the event data structure (just a Python dictionary with
some keys and values; `type` is always one of the keys but the rest
depends on the specific event) and a list of user IDs for the users
whose clients should receive the event. In special cases such as
message delivery, the list of users will instead be a list of dicts
mapping user IDs to user-specific data like whether that user was
mentioned in that message. The data passed to `send_event` are simply
marshalled as JSON and placed in the `notify_tornado` RabbitMQ queue
to be consumed by the delivery system.
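For example, a single-user notification might look roughly like this; the event `type` and its other keys are made up for illustration, since each event type defines its own payload:

```python
# Hypothetical example: tell one user's clients that a display setting
# changed.  send_event only prescribes the general shape: a realm, an
# event dict containing a "type" key, and a list of user IDs.
event = {
    "type": "update_display_settings",
    "setting_name": "twenty_four_hour_time",
    "setting": True,
}
send_event(user_profile.realm, event, [user_profile.id])
```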
Usually, this list of users is one of 3 things:
- A single user (e.g. for user-level settings changes).
- Everyone in the realm (e.g. for organization-level settings changes,
like new realm emoji).
- Everyone who would receive a given message (for messages, emoji
reactions, message editing, etc.); i.e. the subscribers to a stream
or the people on a private message thread.
It is the responsibility of the caller of `send_event` to choose the
list of user IDs correctly. There can be security problems if e.g. an
event containing private message content is sent to the entire
organization. However, if an event isn't sent to enough clients,
there will likely be user-visible real-time sync bugs.
Most of the hard work in event generation is about defining consistent
@@ -84,7 +84,7 @@ wide range of possible clients, and make it easy for developers.
## Delivery system
Zulip's event delivery (real-time push) system is based on Tornado,
which is ideal for handling a large number of open requests. Details
on Tornado are available in the
[architecture overview](../overview/architecture-overview.md), but in short it
is good at holding open a large number of connections for a long time.
@@ -94,7 +94,7 @@ primarily `zerver/tornado/event_queue.py`.
Zulip's event delivery system is based on "long-polling"; basically
clients make `GET /json/events` calls to the server, and the server
doesn't respond to the request until it has an event to deliver to the
client. This approach is reasonably efficient and works everywhere
(unlike websockets, which have a decreasing but nonzero level of
client compatibility problems).
@@ -103,16 +103,16 @@ For each connected client, the **event queue server** maintains an
that client which have not yet been acknowledged by that client.
Ignoring the subtle details around error handling, the protocol is
pretty simple; when a client does a `GET /json/events` call, the
server checks if there are any events in the queue. If there are, it
returns the events immediately. If there aren't, it records that
queue as having a waiting client (often called a `handler` in the
code).
When it pulls an event off the `notify_tornado` RabbitMQ queue, it
simply delivers the event to each queue associated with one of the
target users. If the queue has a waiting client, it breaks the
long-poll connection by returning an HTTP response to the waiting
client request. If there is no waiting client, it simply pushes the
event onto the queue.
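Ignoring error handling, sharding, and queue garbage collection, the delivery step can be pictured with a toy in-memory model; this is a simplified illustration, not the actual Tornado code in `zerver/tornado/event_queue.py`:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class ClientQueue:
    # Undelivered events, plus a parked long-poll handler if a client
    # is currently waiting on GET /json/events.
    events: List[dict] = field(default_factory=list)
    waiting_handler: Optional[Callable[[List[dict]], None]] = None

queues: Dict[str, ClientQueue] = {}

def deliver(event: dict, target_queue_ids: List[str]) -> None:
    for queue_id in target_queue_ids:
        queue = queues[queue_id]
        if queue.waiting_handler is not None:
            # A long-poll request is parked on this queue: respond now.
            queue.waiting_handler([event])
            queue.waiting_handler = None
        else:
            # No client is waiting: hold the event until the next poll.
            queue.events.append(event)
```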
When starting up, each client makes a `POST /json/register` to the
@@ -120,25 +120,25 @@ server, which creates a new event queue for that client and returns the
`queue_id` as well as an initial `last_event_id` to the client (it can
also, optionally, fetch the initial data to save an RTT and avoid
races; see the below section on initial data fetches for details on
why this is useful). Once the event queue is registered, the client
can just do an infinite loop calling `GET /json/events` with those
parameters, updating `last_event_id` each time to acknowledge any
events it has received (see `call_on_each_event` in the
[Zulip Python API bindings][api-bindings-code] for a complete example
implementation). When handling each `GET /json/events` request, the
queue server can safely delete any events that have an event ID less
than or equal to the client's `last_event_id` (event IDs are just a
counter for the events a given queue has received).
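On the client side, the Zulip Python bindings wrap this register/poll/acknowledge loop for you; a typical use looks roughly like this (credentials and server URL are placeholders):

```python
import zulip

client = zulip.Client(
    email="events-bot@example.com",
    api_key="0123456789abcdef",       # placeholder
    site="https://chat.example.com",  # placeholder
)

def handle_event(event: dict) -> None:
    print("Got event of type", event["type"])

# call_on_each_event registers an event queue, long-polls /events, and
# advances last_event_id for us as events are acknowledged.
client.call_on_each_event(handle_event, event_types=["message"])
```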
If network failures were impossible, the `last_event_id` parameter in
the protocol would not be required, but it is important for enabling
exactly-once delivery in the presence of potential failures. (Without
it, the queue server would have to delete events from the queue as
soon as it attempted to send them to the client; if that specific HTTP
response didn't reach the client due to a network TCP failure, then
those events could be lost).
[api-bindings-code]: https://github.com/zulip/python-zulip-api/blob/main/zulip/zulip/__init__.py
The queue servers are a very high-traffic system, processing at a
minimum one request for every message delivered to every Zulip client.
@@ -150,13 +150,13 @@ every 45s or so (if no other events have arrived in the meantime).
To avoid leaking memory and other resources, the queues are
garbage collected after (by default) 10 minutes of inactivity from a
client, under the theory that the client has likely gone off the
Internet (or no longer exists); this happens constantly. If
the client returns, it will receive a "queue not found" error when
requesting events; its handler for this case should just restart the
client / reload the browser so that it refetches initial data the same
way it would on startup. Since clients have to implement their
startup process anyway, this approach adds minimal technical
complexity to clients. A nice side effect is that if the event queue
server (which stores queues in memory) were to crash and lose
its data, clients would recover, just as if they had lost Internet
access briefly (there is some DoS risk to manage, though).
@@ -175,10 +175,10 @@ anyway).
When a client starts up, it usually wants to get 2 things from the
server:
* The "current state" of various pieces of data, e.g. the current
- The "current state" of various pieces of data, e.g. the current
settings, set of users in the organization (for typeahead), stream,
messages, etc. (aka the "initial state").
- A subscription to receive updates to those data when they are
changed by a client (aka an event queue).
Ideally, one would get those two things atomically, i.e. if some other
@@ -197,31 +197,31 @@ This is quite challenging to do technically, because fetching the
initial state for a complex web application like Zulip might involve
dozens of queries to the database, caches, etc. over the course of
100ms or more, and it is thus nearly impossible to do all of those
things together atomically. So instead, we use a more complicated
algorithm that can produce the atomic result from non-atomic
subroutines. Here's how it works when you make a `register` API
request; the logic is in `zerver/views/events_register.py` and
`zerver/lib/events.py`. The request is directly handled by Django:
- Django makes an HTTP request to Tornado, requesting that a new event
queue be created, and records its queue ID.
- Django does all the various database/cache/etc. queries to fetch the
data, non-atomically, from the various data sources (see
the `fetch_initial_state_data` function).
- Django makes a second HTTP request to Tornado, requesting any events
that had been added to the Tornado event queue since it
was created.
* Finally, Django "applies" the events (see the `apply_events`
function) to the initial state that it fetched. E.g. for a name
- Finally, Django "applies" the events (see the `apply_events`
function) to the initial state that it fetched. E.g. for a name
change event, it finds the user data in the `realm_user` data
structure, and updates it to have the new name.
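In outline, those four steps amount to something like the following sketch; the two Tornado round-trips are given invented helper names here, and the real logic in `zerver/lib/events.py` handles many more details:

```python
# Simplified sketch of the register flow.  create_tornado_event_queue and
# get_events_since_queue_creation are invented names for the two HTTP
# round-trips to Tornado; fetch_initial_state_data and apply_events are
# the real helpers discussed above.
def register_sketch(user_profile):
    queue_id = create_tornado_event_queue(user_profile)   # step 1
    state = fetch_initial_state_data(user_profile)        # step 2 (non-atomic)
    events = get_events_since_queue_creation(queue_id)    # step 3
    apply_events(state, events)                           # step 4: reconcile
    state["queue_id"] = queue_id
    return state
```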
### Testing
The design above achieves everything we desire, at the cost that we need to
write a correct `apply_events` function. This is a difficult function to
implement correctly, because the situations that it handles almost never
happen (being race conditions) during manual testing. Fortunately, we have
a protocol for testing `apply_events` in our automated backend tests.
#### Overview
@@ -233,19 +233,21 @@ ready to write a test in `test_events.py`.
The actual code for a `test_events` test can be quite concise:
```python
def test_default_streams_events(self) -> None:
stream = get_stream("Scotland", self.user_profile.realm)
events = self.verify_action(lambda: do_add_default_stream(stream))
check_default_streams("events[0]", events[0])
# (some details omitted)
```
The real trick is debugging these tests.
The test example above has three things going on:
- Set up some data (`get_stream`)
- Call `verify_action` with an action function (`do_add_default_stream`)
- Use a schema checker to validate data (`check_default_streams`)
#### verify_action
@@ -255,7 +257,7 @@ within `test_events.py`.
The `verify_action` function simulates the possible race condition in
order to verify that the `apply_events` logic works correctly in the
context of some action function. To use our concrete example above,
we are verifying that applying the events from the
`do_add_default_stream` action inside of `apply_events` to a stale
copy of your state results in the same state dictionary as doing the
@@ -263,15 +265,15 @@ action and then fetching a fresh copy of the state.
In particular, `verify_action` does the following:
- Call `fetch_initial_state_data` to get the current state.
- Call the action function (e.g. `do_add_default_stream`).
- Capture the events generated by the action function.
- Check the events generated are documented in the [OpenAPI
schema](../documentation/api.md) defined in
`zerver/openapi/zulip.yaml`.
- Call `apply_events(state, events)`, to get the resulting "hybrid state".
- Call `fetch_initial_state_data` again to get the "normal state".
- Compare the two results.
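Put together, the steps above boil down to something like this heavily simplified pseudocode; the real helper in `test_events.py` is considerably more involved, and `capture_events` and `validate_against_openapi_schema` are stand-in names here:

```python
# Heavily simplified pseudocode for the core of verify_action.
def verify_action_sketch(self, action):
    hybrid_state = fetch_initial_state_data(self.user_profile)
    events = capture_events(action)           # run the action, record its events
    validate_against_openapi_schema(events)   # zerver/openapi/zulip.yaml
    apply_events(hybrid_state, events)
    normal_state = fetch_initial_state_data(self.user_profile)
    assert hybrid_state == normal_state       # the two states must match
    return events
```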
In the event that you wrote the `apply_events` logic correctly the
first time, then the two states will be identical, and the
@@ -288,75 +290,81 @@ behind the `apply_events` function. It may also be helpful to read the code
for `verify_action` itself. Finally, you may want to ask for help on chat.
Before we move on to the next step, it's worth noting that `verify_action`
only has one required parameter, which is the action function. We
typically express the action function as a lambda, so that we
can pass in arguments:
```python
events = self.verify_action(lambda: do_add_default_stream(stream))
```
There are some notable optional parameters for `verify_action`:
- `state_change_expected` must be set to `False` if your action
doesn't actually require state changes for some reason; otherwise,
`verify_action` will complain that your test doesn't really
exercise any `apply_events` logic. Typing notifications (which
are ephemeral) are a common place where we use this.
- `num_events` will tell `verify_action` how many events the
`hamlet` user will receive after the action (the default is 1).
- parameters such as `client_gravatar` and `slim_presence` get
passed along to `fetch_initial_state_data` (and it's important
to test both boolean values of these parameters for relevant
actions).
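For instance, a hypothetical test for an ephemeral action might combine these options like this (the action function name is made up for illustration):

```python
# Typing-style notifications change no fetchable state, so we tell
# verify_action not to expect any apply_events effect.
events = self.verify_action(
    lambda: do_send_example_ephemeral_notification(sender, recipients),
    state_change_expected=False,
    num_events=1,
)
```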
For advanced use cases of `verify_action`, we highly recommend reading
the code itself in `BaseAction` (in `test_events.py`).
#### Schema checking
The `test_events.py` system has two forms of schema checking. The
first is verifying that you've updated the [GET /events API
documentation](https://zulip.com/api/get-events) to document your new
event's format for the benefit of the developers of Zulip's mobile app,
terminal app, and other API clients. See the [API documentation
docs](../documentation/api.md) for details on the OpenAPI
documentation.
The second is a higher-detail check inside `test_events` that this
specific test generated the expected series of events. Let's look at
the last line of our example test snippet:
```python
# ...
events = self.verify_action(lambda: do_add_default_stream(stream))
check_default_streams("events[0]", events[0])
```
We have discussed `verify_action` in some detail, and you will
note that it returns the actual events generated by the action
function. It is part of our test discipline in `test_events` to
verify that the events are formatted in a predictable way.
Ideally, we would test that events match the exact data that we
expect, but it can be difficult to do this due to unpredictable
things like database ids. So instead, we just verify the "schema"
of the event(s). We use a schema checker like `check_default_streams`
to validate the types of the data.
If you are creating a new event format, then you will have to
write your own schema checker in `event_schema.py`. Here is
the schema checker relevant to our example:
```python
default_streams_event = event_dict_type(
required_keys=[
("type", Equals("default_streams")),
("default_streams", ListType(DictType(basic_stream_fields))),
]
)
check_default_streams = make_checker(default_streams_event)
```
Note that `basic_stream_fields` is not shown in these docs. The
best way to understand how to write schema checkers is to read
`event_schema.py`. There is a large block comment at the top of
the file, and then you can skim the rest of the file to see the
patterns.
@@ -378,9 +386,9 @@ against the two versions of the schema that you declared above using
The final detail we need to ensure that `apply_events` always works
correctly is to make sure that we have relevant tests for
every event type that can be generated by Zulip. This can be tested
manually using `test-backend --coverage BaseAction` and then
checking that all the calls to `send_event` are covered. Someday
we'll add automation that verifies this directly by inspecting the
coverage data.
@@ -392,11 +400,11 @@ available via the `page_params` parameter.
### Messages
One exception to the protocol described in the last section is the
actual messages. Because Zulip clients usually fetch them in a
separate AJAX call after the rest of the site is loaded, we don't need
them to be included in the initial state data. To handle those
correctly, clients are responsible for discarding events related to
messages that the client has not yet fetched.
Additionally, see
[the main documentation on sending messages](../subsystems/sending-messages.md)
@@ -1,7 +1,7 @@
# Full-text search
Zulip supports full-text search, which can be combined arbitrarily
with Zulip's full suite of narrowing operators. By default, it only
supports English text, but there is an experimental
[PGroonga](https://pgroonga.github.io/) integration that provides
full-text search for all languages.
@@ -19,7 +19,7 @@ search results.
In order to optimize the performance of delivering messages, the
full-text search index is updated for newly sent messages in the
background, after the message has been delivered. This background
updating is done by
`puppet/zulip/files/postgresql/process_fts_updates`, which is usually
deployed on the database server, but could be deployed on an
@@ -31,7 +31,7 @@ Zulip also supports using [PGroonga](https://pgroonga.github.io/) for
full-text search. While PostgreSQL's built-in full-text search feature
supports only one language at a time (in Zulip's case, English), the
PGroonga full-text search engine supports all languages
simultaneously, including Japanese and Chinese. Once we have tested
this new backend sufficiently, we expect to switch Zulip deployments
to always use PGroonga.
@@ -41,46 +41,63 @@ All steps in this section should be run as the `root` user; on most installs, th
1. Alter the deployment setting:
```bash
crudini --set /etc/zulip/zulip.conf machine pgroonga enabled
```
1. Update the deployment to respect that new setting:
```bash
/home/zulip/deployments/current/scripts/zulip-puppet-apply
```
1. Edit `/etc/zulip/settings.py`, to add:
```python
USING_PGROONGA = True
```
1. Apply the PGroonga migrations:
```bash
su zulip -c '/home/zulip/deployments/current/manage.py migrate pgroonga'
```
Note that the migration may take a long time, and users will be
unable to send new messages until the migration finishes.
1. Once the migrations are complete, restart Zulip:
```bash
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```
### Disabling PGroonga
1. Remove the PGroonga migration:
```bash
su zulip -c '/home/zulip/deployments/current/manage.py migrate pgroonga zero'
```
If you intend to re-enable PGroonga later, you can skip this step,
at the cost of your Message table being slightly larger than it would
be otherwise.
1. Edit `/etc/zulip/settings.py`, editing the line containing `USING_PGROONGA` to read:
```python
USING_PGROONGA = False
```
1. Restart Zulip:
```bash
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```
1. Finally, remove the deployment setting:
```bash
crudini --del /etc/zulip/zulip.conf machine pgroonga
```
@@ -7,42 +7,42 @@ be used to deep-link into the application and allow the browser's
"back" functionality to let the user navigate between parts of the UI.
Some examples are:
- `/#settings/your-bots`: Bots section of the settings overlay.
- `/#streams`: Streams overlay, where the user manages streams
(subscription etc.)
- `/#streams/11/announce`: Streams overlay with stream ID 11 (called
"announce") selected.
- `/#narrow/stream/42-android/topic/fun`: Message feed showing stream
"android" and topic "fun". (The `42` represents the id of the
stream.)
The main module in the frontend that manages this all is
`static/js/hashchange.js` (plus `hash_util.js` for all the parsing
code), which is unfortunately one of our thorniest modules. Part of
the reason that it's thorny is that it needs to support a lot of
different flows:
- The user clicking on an in-app link, which in turn opens an overlay.
For example the streams overlay opens when the user clicks the small
cog symbol on the left sidebar, which is in fact a link to
`/#streams`. This makes it easy to have simple links around the app
without custom click handlers for each one.
* The user uses the "back" button in their browser (basically
equivalent to the previous one, as a *link* out of the browser history
- The user uses the "back" button in their browser (basically
equivalent to the previous one, as a _link_ out of the browser history
will be visited).
- The user clicking some in-app click handler (e.g. "Stream settings"
for an individual stream), that potentially does
several UI-manipulating things including e.g. loading the streams
overlay, and needs to update the hash without re-triggering the open
animation (etc.).
- Within an overlay like the streams overlay, the user clicks to
another part of the overlay, which should update the hash but not
re-trigger loading the overlay (which would result in a confusing
animation experience).
- The user is in a part of the webapp, and reloads their browser window.
Ideally the reloaded browser window should return them to their
original state.
- A server-initiated browser reload (done after a new version is
deployed, or when a user comes back after being idle for a while,
see [notes below][self-server-reloads]), where we try to preserve
extra state (e.g. content of compose box, scroll position within a
@@ -56,21 +56,21 @@ that it's easy to accidentally break something.
The main external API lives in `static/js/browser_history.js`:
- `browser_history.update` is used to update the browser
history, and it should be called when the app code is taking care
of updating the UI directly
- `browser_history.go_to_location` is used when you want the `hashchange`
module to actually dispatch building the next page
Internally you have these functions:
- `hashchange.hashchanged` is the function used to handle the hash,
whether it's changed by the browser (e.g. by clicking on a link to
a hash or using the back button) or triggered internally.
- `hashchange.do_hashchange_normal` handles most cases, like loading the main
page (but maybe with a specific URL if you are narrowed to a
stream or topic or PMs, etc.).
- `hashchange.do_hashchange_overlay` handles overlay cases. Overlays have
some minor complexity related to remembering the page from
which the overlay was launched, as well as optimizing in-page
transitions (i.e. don't close/re-open the overlay if you can
@@ -81,22 +81,22 @@ Internally you have these functions:
There are a few circumstances when the Zulip browser window needs to
reload itself:
- If the browser has been offline for more than 10 minutes, the
browser's [event queue][events-system] will have been
garbage-collected by the server, meaning the browser can no longer
get real-time updates at all. In this case, the browser
auto-reloads immediately in order to reconnect. We have coded an
unsuspend callback (based on some clever time logic) that ensures we
check immediately when a client unsuspends; grep for `watchdog` to
see the code.
- If a new version of the server has been deployed, we want to reload
the browser so that it will start running the latest code. However,
we don't want server deploys to be disruptive. So, the backend
preserves user-side event queues (etc.) and just pushes a special
`restart` event to all clients. That event causes the browser to
start looking for a good time to reload, based on when the user is
idle (ideally, we'd reload when they're not looking and restore
state so that the user never knew it happened!). The logic for
doing this is in `static/js/reload.js`; but regardless we'll reload
within 30 minutes unconditionally.
@@ -106,10 +106,10 @@ reload itself:
Here are some key functions in the reload system:
- `reload.preserve_state` is called when a server-initiated browser
reload happens, and encodes a bunch of data like the current scroll
position into the hash.
- `reload.initialize` handles restoring the preserved state after a
reload where the hash starts with `/#reload`.
## All reloads
@@ -17,7 +17,7 @@ In `zerver/lib/hotspots.py`, add your content to the `ALL_HOTSPOTS` dictionary.
Each key-value pair in `ALL_HOTSPOTS` associates the name of the hotspot with the
content displayed to the user.
```python
ALL_HOTSPOTS = {
...
'new_hotspot_name': {
@@ -32,8 +32,8 @@ ALL_HOTSPOTS = {
The target element and visual orientation of each hotspot is specified in
`HOTSPOT_LOCATIONS` of `static/js/hotspots.js`.
The `icon_offset` property specifies where the pulsing icon is placed _relative to
the width and height of the target element_.
By default, `popovers.compute_placement` is used to responsively
determine whether a popover is best displayed above (TOP), below (BOTTOM),
@@ -48,11 +48,12 @@ However, if you would like to fix the orientation of a hotspot popover, a
To test your hotspot in the development environment, set
`ALWAYS_SEND_ALL_HOTSPOTS = True` in `zproject/dev_settings.py`, and
invoke `hotspots.initialize()` in your browser console. Every hotspot
should be displayed. Note that this setting has a bug that can result
in multiple copies of hotspots appearing; you can clear that by
reloading the browser.
Here are some visual characteristics to confirm:
- popover content is readable
- icons reposition themselves on resize
- icons are hidden and shown along with their associated elements
@@ -67,8 +68,9 @@ a target element on a sidebar or overlay, the icon's z-index may need to
be increased to 101, 102, or 103.
This adjustment can be made at the bottom of `static/styles/hotspots.css`:
```css
#hotspot_new_hotspot_name_icon {
z-index: 103;
}
```
@@ -3,12 +3,12 @@
## Zulip CSS organization
The Zulip application's CSS can be found in the `static/styles/`
directory. Zulip uses [Bootstrap](https://getbootstrap.com/) as its
main third-party CSS library.
Zulip uses PostCSS for its CSS files. There are two high-level sections
of CSS: the "portico" (logged-out pages like /help/, /login/, etc.),
and the app. The portico CSS lives under the `static/styles/portico`
subdirectory.
## Editing Zulip CSS
@@ -33,8 +33,8 @@ browser window (following backend changes).
Without care, it's easy for a web application to end up with thousands
of lines of duplicated CSS code, which can make it very difficult to
understand the current styling or modify it. We would very much like
to avoid such a fate. So please make an effort to reuse existing
styling, clean up now-unused CSS, etc., to keep things maintainable.
### Be consistent with existing similar UI
@@ -62,16 +62,16 @@ browsers to make sure things look the same.
### Behavior
- Templates are automatically recompiled in development when the file
is saved; a refresh of the page should be enough to display the latest
version. You might need to do a hard refresh, as some browsers cache
webpages.
- Variables can be used in templates. The variables available to the
template are called the **context**. Passing the context to the HTML
template sets the values of those variables to the value they were
given in the context. The sections below contain specifics on how the
context is defined and where it can be found.
### Backend templates
@@ -84,24 +84,24 @@ found [here][jconditionals].
The context for Jinja2 templates is assembled from a few places:
- `zulip_default_context` in `zerver/context_processors.py`. This is
the default context available to all Jinja2 templates.
- As an argument in the `render` call in the relevant function that
renders the template. For example, if you want to find the context
passed to `index.html`, you can do:
```console
$ git grep zerver/app/index.html '*.py'
zerver/views/home.py: response = render(request, 'zerver/app/index.html',
```
The next line in the code is the context definition.
- `zproject/urls.py` for some fairly static pages that are rendered
using `TemplateView`, for example:
```python
path('config-error/google', TemplateView.as_view(
template_name='zerver/config_error.html',),
{'google_error': True},),
@@ -151,10 +151,10 @@ relevant background as well.
### Primary build process
Zulip's frontend is primarily JavaScript in the `static/js` directory;
we are working on migrating these to TypeScript modules. Stylesheets
are written in CSS extended by various PostCSS plugins; they are
converted from plain CSS, and we have yet to take full advantage of
the features PostCSS offers. We use Webpack to transpile and build JS
and CSS bundles that the browser can understand, one for each entry
point specified in `tools/webpack.*assets.json`; source maps are
generated in the process for a better debugging experience.
@@ -188,9 +188,9 @@ first add it to the appropriate place under `static/`.
version of third-party libraries.
- Third-party files that we have patched should all go in
`static/third/`. Tag the commit with "[third]" when adding or
modifying a third-party package. Our goal is to the extent possible
to eliminate patched third-party code from the project.
- Our own JavaScript and TypeScript files live under `static/js`. Ideally,
new modules should be written in TypeScript (details on this policy below).
- CSS files live under `static/styles`.
- Portico JavaScript ("portico" means for logged-out pages) lives under
@@ -203,14 +203,14 @@ For your asset to be included in a development/production bundle, it
needs to be accessible from one of the entry points defined either in
`tools/webpack.assets.json` or `tools/webpack.dev-assets.json`.
- If you plan to only use the file within the app proper, and not on the login
page or other standalone pages, put it in the `app` bundle by importing it
in `static/js/bundles/app.js`.
- If it needs to be available both in the app and all
logged-out/portico pages, import it to
`static/js/bundles/common.js` which itself is imported to the
`app` and `common` bundles.
- If it's just used on a single standalone page which is only used in
a development environment (e.g. `/devlogin`), create a new entry
point in `tools/webpack.dev-assets.json`; if it's used in both
production and development (e.g. `/stats`), create a new entry point
@@ -224,22 +224,23 @@ If you want to test minified files in development, look for the
### How it works in production
A few useful notes are:
- Zulip installs static assets in production in
`/home/zulip/prod-static`. When a new version is deployed, before the
server is restarted, files are copied into that directory.
- We use the VFL (versioned file layout) strategy, where each file in
the codebase (e.g. `favicon.ico`) gets a new name
(e.g. `favicon.c55d45ae8c58.ico`) that contains a hash in it. Each
deployment has a manifest file
(e.g. `/home/zulip/deployments/current/staticfiles.json`) that maps
codebase filenames to serving filenames for that deployment. The
benefit of this VFL approach is that all the static files for past
deployments can coexist, which in turn eliminates most classes of
race condition bugs where browser windows opened just before a
deployment can't find their static assets. It also is necessary for
any incremental rollout strategy where different clients get
different versions of the site.
- Some paths for files (e.g. emoji) are stored in the
`rendered_content` of past messages, and thus cannot be removed
without breaking the rendering of old messages (or doing a
mass-rerender of old messages).
@@ -261,28 +262,28 @@ where one is moving code from an existing JavaScript module, the new
commit should just move the code, not translate it to TypeScript).
TypeScript provides more accurate information to development tools,
allowing for better refactoring, auto-completion and static analysis.
TypeScript also uses the ES6 module system. See our documentation on
[TypeScript static types](../testing/typescript).
Webpack does not ordinarily allow modules to be accessed directly from
the browser console, but for debugging convenience, we have a custom
webpack plugin (`tools/debug-require-webpack-plugin.ts`) that exposes
a version of the `require()` function to the development environment
browser console for this purpose. For example, you can access our
`people` module by evaluating
`people = require("./static/js/people")`, or the third-party `lodash`
module with `_ = require("lodash")`. This mechanism is **not** a
stable API and should not be used for any purpose other than
interactive debugging.
We have one module, `zulip_test`, that's exposed as a global variable
using `expose-loader` for direct use in Puppeteer tests and in the
production browser console. If you need to access a variable or
function in those scenarios, add it to `zulip_test`. This is also
**not** a stable API.
[jinja2]: http://jinja.pocoo.org/
[handlebars]: https://handlebarsjs.com/
[trans]: http://jinja.pocoo.org/docs/dev/templates/#i18n
[jconditionals]: http://jinja.pocoo.org/docs/2.9/templates/#list-of-control-structures
[hconditionals]: https://handlebarsjs.com/guide/#block_helpers.html


@@ -27,17 +27,17 @@ var pills = input_pill.create({
```
You can look at `static/js/user_pill.js` to see how the above
methods are implemented. Essentially you just need to convert
from raw data (like an email) to structured data (like an object
with display_value, email, and user_id for a user), and vice
versa. The most important field to supply is `display_value`.
For user pills `pill_item.display_value === user.full_name`.
## Typeahead
Pills almost always work in conjunction with typeahead, and
you will want to provide a `source` function to typeahead
that can exclude items from the prior pills. Here is an
example snippet from our user group settings code.
```js
@@ -66,7 +66,6 @@ export function filter_taken_users(items, pill_widget) {
You can get notifications from the pill code that pills have been
created/removed.
```js
pills.onPillCreate(function () {
update_save_state();


@@ -1,12 +1,12 @@
# Logging and error reporting
Having a good system for logging and error reporting is essential to
making a large project like Zulip successful. Without reliable error
reporting, one has to rely solely on bug reports from users in order
to produce a working product.
Our goal as a project is to have zero known 500 errors on the backend
and zero known JavaScript exceptions on the frontend. While there
will always be new bugs being introduced, that goal is impossible
without an efficient and effective error reporting framework.
@@ -20,19 +20,19 @@ is great for small installations.
The [Django][django-errors] framework provides much of the
infrastructure needed by our error reporting system:
- The ability to send emails to the server's administrators with any
500 errors, using the `mail_admins` function. We enhance these data
with extra details (like what user was involved in the error) in
`zerver/logging_handlers.py`, and then send them to the
administrator in `zerver/lib/error_notify.py` (which also supports
sending Zulips to a stream about production errors).
- The ability to rate-limit certain errors to avoid sending hundreds
of emails in an outage (see `_RateLimitFilter` in
`zerver/lib/logging_util.py`)
- A nice framework for filtering passwords and other important user
data from the exception details, which we use in
`zerver/filters.py`.
- Middleware for handling `JsonableError`, our system for allowing
code anywhere in Django to report an API-facing `json_error` from
anywhere in a view code path.
@@ -48,10 +48,10 @@ exception, and the full request headers which triggered it.
### Backend logging
[Django's logging system][django-logging] uses the standard
[Python logging infrastructure][python-logging]. We have configured
them so that `logging.exception` and `logging.error` get emailed to
the server maintainer, while `logging.warning` will just appear in
`/var/log/zulip/errors.log`. Lower log levels just appear in the main
server log (as well as in the log for the corresponding process, be it
`django.log` for the main Django processes or the appropriate
`events_*` log file for a queue worker).
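To make the distinction between levels concrete, here is a minimal sketch (not code from the Zulip tree) of how a hypothetical helper might use them:

```python
import logging

logger = logging.getLogger(__name__)

def handle_payload(payload: dict) -> None:
    if "deprecated_field" in payload:
        # logging.warning only shows up in /var/log/zulip/errors.log (and
        # the main server log); it does not email the server maintainer.
        logger.warning("Client sent deprecated_field; ignoring it")
    try:
        if "required_field" not in payload:
            raise ValueError("missing required_field")
    except ValueError:
        # logging.exception logs at ERROR level with the traceback attached,
        # so in production it is emailed to the server maintainer.
        logger.exception("Failed to process payload")
        raise
```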
@@ -60,7 +60,7 @@ server log (as well as in the log for corresponding process, be it
The main Zulip server log contains a line for each backend request.
It also contains warnings, errors, and the full tracebacks for any
Python exceptions. In production, it goes to
`/var/log/zulip/server.log`; in development, it goes to the terminal
where you run `run-dev.py`.
@@ -72,7 +72,7 @@ In production, one usually wants to look at `errors.log` for errors
since the main server log can be very verbose, but the main server log
can be extremely valuable for investigating performance problems.
```text
2016-05-20 14:50:22.056 INFO [zr] 127.0.0.1 GET 302 528ms (db: 1ms/1q) (+start: 123ms) / (unauth@zulip via ?)
[20/May/2016 14:50:22]"GET / HTTP/1.0" 302 0
2016-05-20 14:50:22.272 INFO [zr] 127.0.0.1 GET 200 124ms (db: 3ms/2q) /login/ (unauth@zulip via ?)
@@ -84,19 +84,20 @@ can be extremely valuable for investigating performance problems.
```
The format of this output is:
- Timestamp
- Log level
- Logger name, abbreviated as "zr" for these Zulip request logs
- IP address
- HTTP method
- HTTP status code
- Time to process
- (Optional perf data details, e.g. database time/queries, memcached
time/queries, Django process startup time, Markdown processing time,
etc.)
- Endpoint/URL from zproject/urls.py
- "email via client" showing user account involved (if logged in) and
the type of client they used ("web", "Android", etc.).
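For instance, the first request line in the sample output above breaks down into those fields roughly as follows (a hand-annotated mapping, not the output of any Zulip tool):

```python
# Hand-annotated breakdown of the first sample request log line above.
log_line_fields = {
    "timestamp": "2016-05-20 14:50:22.056",
    "log_level": "INFO",
    "logger_name": "zr",                      # the Zulip request logger
    "ip_address": "127.0.0.1",
    "http_method": "GET",
    "http_status": 302,
    "time_to_process": "528ms",
    "perf_details": "(db: 1ms/1q) (+start: 123ms)",
    "endpoint": "/",
    "user_and_client": "unauth@zulip via ?",  # not logged in, unknown client
}
```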
The performance data details are particularly useful for investigating
performance problems, since one can see at a glance whether a slow
@@ -105,7 +106,7 @@ processor, in memcached, or in other Python code.
One useful thing to note, however, is that the database time is only
the time spent connecting to and receiving a response from the
database. Especially when responses are large, there can often be a
great deal of Python processing overhead to marshal the data from the
database into Django objects that is not accounted for in these
numbers.
@@ -114,43 +115,44 @@ numbers.
We have a custom library, called `blueslip` (named after the form used
at MIT to report problems with the facilities), that takes care of
reporting JavaScript errors. In production, this means emailing the
server administrators (though the setting controlling this,
`BROWSER_ERROR_REPORTING`, is disabled by default, since most problems
are unlikely to be addressable by a system administrator, and it's
very hard to make JavaScript errors not at least somewhat spammy due
to the variety of browser versions and sets of extensions that someone
might use). In development, this means displaying a highly visible
overlay over the message view area, to make exceptions in testing a
new feature hard to miss.
- Blueslip is implemented in `static/js/blueslip.js`.
- In order to capture essentially any error occurring in the browser,
Blueslip listens for the `error` event on `window`, and has methods
for being manually triggered by Zulip JavaScript code for warnings
and assertion failures.
- Blueslip keeps a log of all the notices it has received during a
browser session, and includes them in reports to the server, so that
one can see cases where exceptions chained together. You can print
this log from the browser console using
`blueslip = require("./static/js/blueslip"); blueslip.get_log()`.
Blueslip supports several error levels:
- `throw new Error(…)`: For fatal errors that cannot be easily
recovered from. We try to avoid using it, since it kills the
current JS thread, rather than returning execution to the caller.
- `blueslip.error`: For logging of events that are definitely caused
by a bug and thus sufficiently important to be reported, but where
we can handle the error without creating major user-facing problems
(e.g. an exception when handling a presence update).
- `blueslip.warn`: For logging of events that are a problem but not
important enough to send an email about in production. They are,
however, highlighted in the JS console in development.
- `blueslip.log` (and `blueslip.info`): Logged to the JS console in
development and also in the blueslip log in production. Useful for
data that might help discern what state the browser was in during an
error (e.g. whether the user was in a narrow).
- `blueslip.debug`: Similar to `blueslip.log`, but are not printed to
the JS console in development.
## Frontend performance reporting
@@ -159,12 +161,12 @@ In order to make it easier to debug potential performance problems in
the critically latency-sensitive message sending code pathway, we log
and report to the server the following whenever a message is sent:
- The time the user triggered the message (aka the start time).
- The time the `send_message` response returned from the server.
- The time the message was received by the browser from the
`get_events` protocol (these last two race with each other).
- Whether the message was locally echoed.
- If so, whether there was a disparity between the echoed content and
the server-rendered content, which can be used for statistics on how
effective our [local echo system](../subsystems/markdown.md) is.
@@ -173,9 +175,9 @@ The code is all in `zerver/lib/report.py` and `static/js/sent_messages.js`.
We have similar reporting for the time it takes to narrow / switch to
a new view:
- The time the action was initiated
- The time when the updated message feed was visible to the user
- The time when the browser was idle again after switching views
(intended to catch issues where we generate a lot of deferred work).
[django-errors]: https://docs.djangoproject.com/en/2.2/howto/error-reporting/


@@ -5,7 +5,7 @@ live under `{zerver,zilencer,analytics}/management/commands/`.
If you need some Python code to run with a Zulip context (access to
the database, etc.) in a script, it should probably go in a management
command. The key thing distinguishing these from production scripts
(`scripts/`) and development scripts (`tools/`) is that management
commands can access the database.
@@ -13,43 +13,43 @@ While Zulip takes advantage of built-in Django management commands for
things like managing Django migrations, we also have dozens that we've
written for a range of purposes:
- Cron jobs to do regular updates, e.g. `update_analytics_counts.py`,
`sync_ldap_user_data`, etc.
- Useful parts of provisioning or upgrading a Zulip development
environment or server, e.g. `makemessages`, `compilemessages`,
`populate_db`, `fill_memcached_caches`, etc.
- The actual scripts run by supervisord to run the persistent
processes in a Zulip server, e.g. `runtornado` and `process_queue`.
- For a sysadmin to verify a Zulip server's configuration during
installation, e.g. `checkconfig`, `send_test_email`.
- As the interface for doing those rare operations that don't have a
UI yet, e.g. `deactivate_realm`, `reactivate_realm`,
`change_user_email` (for the case where the user doesn't control the
old email address).
- For a sysadmin to easily interact with and script common possible
changes they might want to make to the database on a Zulip server.
E.g. `send_password_reset_email`, `export`, `purge_queue`.
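Most of these commands share the same basic shape. The following hypothetical sketch (not an actual Zulip command) illustrates it, assuming the `ZulipBaseCommand` helper and its realm-lookup methods described under "Writing management commands" below:

```python
from argparse import ArgumentParser
from typing import Any

from zerver.lib.management import ZulipBaseCommand
from zerver.models import UserProfile

class Command(ZulipBaseCommand):
    help = "Print how many users a realm has (illustrative example only)."

    def add_arguments(self, parser: ArgumentParser) -> None:
        # Adds the standard realm-selection option handled by ZulipBaseCommand.
        self.add_realm_args(parser)

    def handle(self, *args: Any, **options: Any) -> None:
        realm = self.get_realm(options)
        if realm is None:
            self.stderr.write("Please pass a realm via -r/--realm.")
            return
        # Keep interesting logic in zerver/lib/; the command stays a thin wrapper.
        count = UserProfile.objects.filter(realm=realm).count()
        self.stdout.write(f"{realm.string_id}: {count} users")
```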
## Writing management commands
It's generally pretty easy to template off an existing management
command to write a new one. Some good examples are
`change_user_email` and `deactivate_realm`. The Django documentation
is good, but we have a few pieces of advice specific to the Zulip
project.
- If you need to access a realm or user, use the `ZulipBaseCommand`
class in `zerver/lib/management.py` so you don't need to write the
tedious code of looking those objects up. This is especially
important for users, since the library handles the issues around
looking up users by email well (if there's a unique user with that
email, just modify it without requiring the user to specify the
realm as well, but if there's a collision, throw a nice error).
- Avoid writing a lot of code in management commands; management
commands are annoying to unit test, and thus easier to maintain if
all the interesting logic is in a nice function that is unit tested
(and ideally, also used in Zulip's existing code). Look for code in
`zerver/lib/` that already does what you need. For most actions,
you can just call a `do_change_foo` type function from
`zerver/lib/actions.py` to do all the work; this is usually far
better than manipulating the database directly, since the library


@@ -1,13 +1,13 @@
# Markdown implementation
Zulip uses a special flavor of Markdown/CommonMark for its message
formatting. Our Markdown flavor is unique primarily to add important
extensions, such as quote blocks and math blocks, and also to do
previews and correct issues specific to the chat context. Beyond
that, it has a number of minor historical variations resulting from
its history predating CommonMark (and thus Zulip choosing different
solutions to some problems) and based in part on Python-Markdown,
which is proudly a classic Markdown implementation. We reduce these
variations with every major Zulip release.
Zulip has two implementations of Markdown. The backend implementation
@@ -15,11 +15,11 @@ at `zerver/lib/markdown/` is based on
[Python-Markdown](https://pypi.python.org/pypi/Markdown) and is used to
authoritatively render messages to HTML (and implements
slow/expensive/complex features like querying the Twitter API to
render tweets nicely). The frontend implementation is in JavaScript,
based on [marked.js](https://github.com/chjj/marked)
(`static/js/echo.js`), and is used to preview and locally echo
messages the moment the sender hits Enter, without waiting for a round
trip from the server. Those frontend renderings are only shown to the
sender of a message, and they are (ideally) identical to the backend
rendering.
@@ -28,12 +28,12 @@ The JavaScript Markdown implementation has a function,
contains any syntax that needs to be rendered to HTML on the backend.
If `markdown.contains_backend_only_syntax` returns true, the frontend simply won't
echo the message for the sender until it receives the rendered HTML
from the backend. If there is a bug where `markdown.contains_backend_only_syntax`
returns false incorrectly, the frontend will discover this when the
backend returns the newly sent message, and will update the HTML based
on the authoritative backend rendering (which would cause a change in
the rendering that is visible only to the sender shortly after a
message is sent). As a result, we try to make sure that
`markdown.contains_backend_only_syntax` is always correct.
## Testing
@@ -46,24 +46,24 @@ The Python-Markdown implementation is tested by
A shared set of fixed test data ("test fixtures") is present in
`zerver/tests/fixtures/markdown_test_cases.json`, and is automatically used
by both test suites; as a result, it is the preferred place to add new
tests for Zulip's Markdown system. Some important notes on reading
this file:
- `expected_output` is the expected output for the backend Markdown
processor.
- When the frontend processor doesn't support a feature and it should
just be rendered on the backend, we set `backend_only_rendering` to
`true` in the fixtures; this will automatically verify that
`markdown.contains_backend_only_syntax` rejects the syntax, ensuring
it will be rendered only by the backend processor.
- When the two processors disagree, we set `marked_expected_output` in
the fixtures; this will ensure that the syntax stays that way. If
the differences are important (i.e. not just whitespace), we should
also open an issue on GitHub to track the problem.
- For mobile push notifications, we need a text version of the
rendered content, since the APNS and GCM push notification systems
don't support richer markup. Mostly, this involves stripping HTML,
but there's some syntax we take special care with. Tests for what
this plain-text version of content should be are stored in the
`text_content` field.
@@ -72,10 +72,10 @@ implementation, the easiest way to do this is as follows:
1. Log in to your development server.
2. Stop your Zulip server with Ctrl-C, leaving the browser open.
3. Compose and send the messages you'd like to test. They will be
locally echoed using the frontend rendering.
This procedure prevents any server-side rendering. If you don't do
this, the backend will likely render the Markdown you're testing and swap
it in before you can see the frontend's rendering.
@@ -91,51 +91,51 @@ tests with `tools/test-js-with-node markdown` and backend tests with
First, you will likely find these third-party resources helpful:
- **[Python-Markdown](https://pypi.python.org/pypi/Markdown)** is the Markdown
library used by Zulip as a base to build our custom Markdown syntax upon.
- **[Python's XML ElementTree](https://docs.python.org/3/library/xml.etree.elementtree.html)**
is the part of the Python standard library used by Python Markdown
and any custom extensions to generate and modify the output HTML.
When changing Zulip's Markdown syntax, you need to update several
places:
- The backend Markdown processor (`zerver/lib/markdown/__init__.py`).
- The frontend Markdown processor (`static/js/markdown.js` and sometimes
`static/third/marked/lib/marked.js`), or `markdown.contains_backend_only_syntax` if
your changes won't be supported in the frontend processor.
- If desired, the typeahead logic in `static/js/composebox_typeahead.js`.
- The test suite, probably via adding entries to `zerver/tests/fixtures/markdown_test_cases.json`.
- The in-app Markdown documentation (`markdown_help_rows` in `static/js/info_overlay.js`).
- The list of changes to Markdown at the end of this document.
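On the backend side (the first item above), extensions follow the standard Python-Markdown extension API. A purely illustrative sketch, not an existing Zulip extension, of registering a new inline syntax looks roughly like this:

```python
import markdown
from markdown.extensions import Extension
from markdown.inlinepatterns import InlineProcessor
from xml.etree.ElementTree import Element

# Hypothetical inline syntax: !!text!! renders as <mark>text</mark>.
HIGHLIGHT_RE = r"!!(?P<content>[^!]+)!!"

class HighlightProcessor(InlineProcessor):
    def handleMatch(self, m, data):
        el = Element("mark")
        el.text = m.group("content")
        return el, m.start(0), m.end(0)

class HighlightExtension(Extension):
    def extendMarkdown(self, md):
        md.inlinePatterns.register(HighlightProcessor(HIGHLIGHT_RE, md), "highlight", 75)

print(markdown.markdown("some !!important!! text", extensions=[HighlightExtension()]))
```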
Important considerations for any changes are:
- Security: A bug in the Markdown processor can lead to XSS issues.
For example, we should not insert unsanitized HTML from a
third-party web application into a Zulip message.
- Uniqueness: We want to avoid users having a bad experience due to
accidentally triggering Markdown syntax or typeahead that isn't
related to what they are trying to express.
- Performance: Zulip can render a lot of messages very quickly, and
we'd like to keep it that way. New regular expressions similar to
the ones already present are unlikely to be a problem, but we need
to be thoughtful about expensive computations or third-party API
requests.
- Database: The backend Markdown processor runs inside a Python thread
(as part of how we implement timeouts for third-party API queries),
and for that reason we currently should avoid making database
queries inside the Markdown processor. This is a technical
implementation detail that could be changed with a few days of work,
but is an important detail to know about until we do that work.
- Testing: Every new feature should have both positive and negative
tests; they're easy to write and give us the flexibility to refactor
frequently.
## Per-realm features
Zulip's Markdown processor's rendering supports a number of features
that depend on realm-specific or user-specific data. For example, the
realm could have
[linkifiers](https://zulip.com/help/add-a-custom-linkifier)
or [custom emoji](https://zulip.com/help/add-custom-emoji)
@@ -144,7 +144,7 @@ groups (which depend on data like users' names, IDs, etc.).
At a backend code level, these are controlled by the `message_realm`
object and other arguments passed into `do_convert` (`sent_by_bot`,
`translate_emoticons`, `mention_data`, etc.). Because
Python-Markdown doesn't support directly passing arguments into the
Markdown processor, our logic attaches these data to the Markdown
processor object via e.g. `_md_engine.zulip_db_data`, and then
@@ -154,7 +154,7 @@ For non-message contexts (e.g. an organization's profile (aka the
thing on the right-hand side of the login page), stream descriptions,
or rendering custom profile fields), one needs to just pass in a
`message_realm` (see, for example, `zulip_default_context` for the
organization profile code for this). But for messages, we need to
pass in attributes like `sent_by_bot` and `translate_emoticons` that
indicate details about how the user sending the message is configured.
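The general pattern -- stash the request-specific data on the engine before converting, and read it back from inside the extensions -- is roughly the following (a simplified sketch, not the actual `do_convert` code; only the `zulip_db_data` attribute name comes from the text above):

```python
from typing import Any, Dict

import markdown

# A single module-level engine, mirroring the role of Zulip's _md_engine.
_md_engine = markdown.Markdown()

def convert_with_request_data(content: str, zulip_db_data: Dict[str, Any]) -> str:
    # Python-Markdown's convert() takes no extra arguments, so per-request data
    # (custom emoji, linkifiers, mention data, ...) is attached to the engine
    # object before conversion; extensions read it back while rendering.
    _md_engine.zulip_db_data = zulip_db_data
    try:
        return _md_engine.convert(content)
    finally:
        # Clear the data and reset parser state so nothing leaks between messages.
        _md_engine.zulip_db_data = None
        _md_engine.reset()

rendered = convert_with_request_data("hello **world**", {"translate_emoticons": False})
```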
@@ -171,21 +171,21 @@ plain text (e.g. emails) that it helps more than getting in the way.
The main issue for using Markdown in instant messaging is that the
Markdown standard syntax used in a lot of wikis/blogs has nontrivial
error rates, where the author needs to go back and edit the post to
fix the formatting after typing it the first time. While that's
basically fine when writing a blog, it gets annoying very fast in a
chat product; even though you can edit messages to fix formatting
mistakes, you don't want to be doing that often. There are basically
2 types of error rates that are important for a product like Zulip:
- What fraction of the time, if you pasted a short technical email
that you wrote to your team and passed it through your Markdown
implementation, would you need to change the text of your email for it
to render in a reasonable way? This is the "accidental Markdown
syntax" problem, common with Markdown syntax like the italics syntax
interacting with talking about `char *`s.
- What fraction of the time do users attempting to use a particular
Markdown syntax actually succeed at doing so correctly? Syntax like
requiring a blank line between text and the start of a bulleted list
raises this figure substantially.
@@ -207,71 +207,70 @@ accurate.
### Basic syntax
- Enable `nl2br` extension: this means one newline creates a line
break (not paragraph break).
- Allow only `*` syntax for italics, not `_`. This resolves an issue where
people were using `_` and hitting it by mistake too often. Asterisks
surrounded by spaces won't trigger italics, either (e.g. with stock Markdown
`You should use char * instead of void * there` would produce undesired
results).
- Allow only `**` syntax for bold, not `__` (easy to hit by mistake if
discussing Python `__init__` or something).
- Add `~~` syntax for strikethrough.
- Disable special use of `\` to escape other syntax. Rendering `\\` as
`\` was hugely controversial, but having no escape syntax is also
controversial. We may revisit this. For now you can always put
things in code blocks.
### Lists
- Allow tacking a bulleted list or block quote onto the end of a
paragraph, i.e. without a blank line before it.
- Allow only `*` for bulleted lists, not `+` or `-` (previously
created confusion with diff-style text sloppily not included in a
code block).
- Disable ordered list syntax: stock Markdown automatically renumbers, which
can be really confusing when sending a numbered list across multiple
messages.
### Links
- Enable auto-linkification, both for `http://...` and guessing at
things like `t.co/foo`.
- Force links to be absolute. `[foo](google.com)` will go to
`http://google.com`, and not `https://zulip.com/google.com` which
is the default behavior.
- Set `title=`(the URL) on every link tag.
- Disable link-by-reference syntax,
`[foo][bar]` ... `[bar]: https://google.com`.
- Enable linking to other streams using `#**streamName**`.
### Code
- Enable fenced code block extension, with syntax highlighting.
- Disable line-numbering within fenced code blocks -- the `<table>`
output confused our web client code.
### Other
- Disable headings, both `# foo` and `== foo ==` syntax: they don't
make much sense for chat messages.
- Disable images with `![]()` (images from links are shown as an inline
preview).
- Allow embedding any avatar as a tiny (list bullet size) image. This
is used primarily by version control integrations.
- We added the `~~~ quote` block quote syntax.


@@ -11,10 +11,10 @@ the details of the email/mobile push notifications code path.
Here we name a few corner cases worth understanding in designing this
sort of notifications system:
- The **idle desktop problem**: We don't want the presence of a
desktop computer at the office to eat all notifications because the
user has an "online" client that they may not have used in 3 days.
- The **hard disconnect problem**: A client can lose its connection to
the Internet (or be suspended, or whatever) at any time, and this
happens routinely. We want to ensure that races where a user closes
their laptop shortly after a notifiable message is sent do not
@@ -25,25 +25,26 @@ sort of notifications system:
As a reminder, the relevant part of the flow for sending messages is
as follows:
- `do_send_messages` is the synchronous message-sending code path,
passing the following data in its `send_event` call:
- Data about the message's content (e.g. mentions, wildcard
mentions, and alert words), encoded into the `UserMessage`
table's `flags` structure, which is in turn passed into
`send_event` for each user receiving the message.
- Data about user configuration relevant to the message, such as
`push_notify_user_ids` and `stream_notify_user_ids`, are included
alongside `flags` in the per-user data structure.
- The `presence_idle_user_ids` set, containing the subset of
recipient users who are mentioned, are PM recipients, have alert
words, or otherwise would normally get a notification, but have not
interacted with a Zulip client in the last few minutes. (Users who
have generally will not receive a notification unless the
`enable_online_push_notifications` flag is enabled). This data
structure ignores users for whom the message is not notifiable,
which is important to avoid this being thousands of `user_ids` for
messages to large streams with few currently active users.
- The Tornado [event queue system](../subsystems/events-system.md)
processes that data, as well as data about each user's active event
queues, to (1) push an event to each queue needing that message and
(2) for notifiable messages, pushing an event onto the
@@ -51,107 +52,107 @@ as follows:
queues. This important message-processing logic has notable extra
logic not present when processing normal events, such as splicing
`flags` to customize event payloads per-user.
- The Tornado system determines whether the user is "offline/idle".
Zulip's email notifications are designed to not fire when the user
is actively using Zulip to avoid spam, and this is where those
checks are implemented.
- Users in `presence_idle_user_ids` are always considered idle:
the variable name means "users who are idle because of
presence". This is how we solve the idle desktop problem; users
with an idle desktop are treated the same as users who aren't
logged in for this check.
- However, that check does not handle the hard disconnect problem:
if a user was present 1 minute before a message was sent, and then
closed their laptop, the user will not be in
`presence_idle_user_ids`, and so without an additional mechanism,
messages sent shortly after a user leaves would never trigger a
notification (!).
- We solve that problem by also notifying if
`receiver_is_off_zulip` returns `True`, which checks whether the user has any
current events system clients registered to receive `message`
events. This check is done immediately (handling soft disconnects,
where, e.g., the user closes their last Zulip tab and we get the
`DELETE /events/{queue_id}` request).
- The `receiver_is_off_zulip` check is effectively repeated when
event queues are garbage-collected (in `missedmessage_hook`) by
looking for whether the queue being garbage-collected was the only
one; this second check solves the hard disconnect problem, resulting in
notifications for these hard-disconnect cases usually coming 10
minutes late.
- The message-edit code path has parallel logic in
`maybe_enqueue_notifications_for_message_update` for triggering
notifications in cases like a mention added during message
editing.
- The business logic for all these notification decisions made
inside Tornado has extensive automated test suites; e.g.
`test_message_edit_notifications.py` covers all the cases around
editing a message to add/remove a mention.
- We may in the future want to add some sort of system for letting
users see past notifications, to help with explaining and
debugging this system, since it has so much complexity.
- Desktop notifications are the simplest; they are implemented
client-side by the web/desktop app's logic
(`static/js/notifications.js`) inspecting the `flags` fields that
were spliced into `message` events by the Tornado system, as well as
the user's notification settings.
- The queue processors for those queues make the final determination
for whether to send a notification, and do the work to generate an
email (`zerver/lib/email_notifications.py`) or mobile
(`zerver/lib/push_notifications.py`) notification. We'll describe
this process in more detail for each system below, but it's
important to know that it's normal for a message to sit in these
queues for minutes (and in the future, possibly hours).
- Both queue processor code paths do additional filtering before
sending a notification:
- Messages that have already been marked as read by the user before
the queue processor runs never trigger a notification.
- Messages that were already deleted never trigger a notification.
- The user-level settings for whether email/mobile notifications are
disabled are rechecked, as the user may have disabled one of these
settings during the queuing period.
- The **Email notifications queue processor**, `MissedMessageWorker`,
takes care to wait for 2 minutes (hopefully in the future this will be a
configuration setting) and starts a thread to batch together multiple
messages into a single email. These features are unnecessary
for mobile push notifications, because we can live-update those
details with a future notification, whereas emails cannot be readily
updated once sent. Zulip's email notifications are styled similarly
to GitHub's email notifications, with a clean, simple design that
makes replying from an email client possible (using the [incoming
email integration](../production/email-gateway.md)).
- The **Push notifications queue processor**,
`PushNotificationsWorker`, is a simple wrapper around the
`push_notifications.py` code that actually sends the
notification. This logic is somewhat complicated by having to track
the number of unread push notifications to display on the mobile
apps' badges, as well as using the [mobile push notifications
service](../production/mobile-push-notifications.md) for self-hosted
systems.
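As a rough illustration of that rechecking (purely a sketch with made-up field names; the real logic lives in the queue processors and the libraries named above):

```python
from dataclasses import dataclass

@dataclass
class QueuedNotification:
    # Illustrative fields only; real queue events carry much more data.
    message_deleted: bool
    already_read: bool
    user_wants_email: bool

def should_send_email_notification(event: QueuedNotification) -> bool:
    # Minutes may have passed since the event was enqueued, so the original
    # decision is rechecked against current state before sending anything.
    if event.already_read or event.message_deleted:
        return False
    # User-level settings are rechecked too, since the user may have disabled
    # email notifications during the queuing period.
    return event.user_wants_email
```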
The following important constraints are worth understanding about the
structure of the system, when thinking about changes to it:
- **Bulk database queries** are much more efficient for checking
details from the database like "which users receiving this message
are online".
- **Thousands of users**. Zulip supports thousands of users, and we
want to avoid `send_event()` pushing large amounts of per-user data
to Tornado via RabbitMQ for scalability reasons.
- **Tornado doesn't do database queries**. Because the Tornado system
is an asynchronous event-driven framework, and our Django database
library is synchronous, database queries are very expensive. So
these queries need to be done in either `do_send_messages` or the
queue processor logic. (For example, this means `presence` data
should be checked in either `do_send_messages` or the queue
processors, not in Tornado).
- **Future configuration**. Notification settings are an area that we
expect to only expand with time, with upcoming features like
following a topic (to get notifications for messages only within
that topic in a stream). There are a lot of different workflows
possible with Zulip's threading, and it's important to make it easy
for users to set up Zulip's notifications to fit as many of those
workflows as possible.
- **Message editing**. Zulip supports editing messages, and that
interacts with notifications in ways that require careful handling:
Notifications should have
the latest edited content (users often fix typos 30 seconds after


@@ -1,22 +1,22 @@
# Performance and scalability
This page aims to give some background to help prioritize work on the
Zulip server's performance and scalability. By scalability, we mean
the ability of the Zulip server on given hardware to handle a certain
workload of usage without performance materially degrading.
First, a few notes on philosophy.
- We consider it an important technical goal for Zulip to be fast,
because that's an important part of user experience for a real-time
collaboration tool like Zulip. Many UI features in the Zulip webapp
are designed to load instantly, because all the data required for
them is present in the initial HTTP response, and both the Zulip
API and webapp are architected around that strategy.
- The Zulip database model and server implementation are carefully
designed to ensure that every common operation is efficient, with
automated tests designed to prevent the accidental introduction of
inefficient or excessive database queries. We much prefer doing
design/implementation work to make requests fast over the operational
work of running 2-5x as much hardware to handle the same load.
@@ -29,10 +29,11 @@ important to understand the load profiles for production uses.
Zulip servers typically involve a mixture of two very different types
of load profiles:
- Open communities like open source projects, online classes,
etc. have large numbers of users, many of whom are idle. (Many of
the others likely stopped by to ask a question, got it answered, and
then didn't need the community again for the next year). Our own
[chat.zulip.org](../contributing/chat-zulip-org.md) is a good
example for this, with more than 15K total user accounts, of which
only several hundred have logged in during the last few weeks.
@@ -40,9 +41,9 @@ of load profiles:
deactivation](../subsystems/sending-messages.html#soft-deactivation)
to ensure idle users have minimal impact on both server-side
scalability and request latency.
- Fulltime teams, like your typical corporate Zulip installation,
have users who are mostly active for multiple hours a day and sending a
high volume of messages each. This load profile is most important
for self-hosted servers, since many of those are used exclusively by
the employees of the organization running the server.
@@ -53,7 +54,7 @@ organizations from each of those two load profiles.
It's important to understand that Zulip has a handful of endpoints
that result in the vast majority of all server load, and essentially
every other endpoint is not important for scalability. We still put
effort into making sure those other endpoints are fast for latency
reasons, but were they to be 10x faster (a huge optimization!), it
wouldn't materially improve Zulip's scalability.
@@ -63,7 +64,7 @@ around the several specific endpoints that have a combination of
request volume and cost that makes them important.
That said, it is important to distinguish the load associated with an
API endpoint from the load associated with a feature. Almost any
significant new feature is likely to result in its data being sent to
the client in `page_params` or `GET /messages`, i.e. one of the
endpoints important to scalability here. As a result, it is important
@@ -77,30 +78,26 @@ optimizations which save a few milliseconds that would be invisible to the end u
if they carry any cost in code readability.
In Zulip's documentation, our general rule is to primarily write facts
that are likely to remain true for a long time. While the numbers
presented here vary with hardware, usage patterns, and time (there's
substantial oscillation within a 24 hour period), we expect the rough
sense of them (as well as the list of important endpoints) is not
likely to vary dramatically over time.
| Endpoint | Average time | Request volume | Average impact |
| ----------------------- | ------------ | -------------- | -------------- |
| POST /users/me/presence | 25ms | 36% | 9000 |
| GET /messages | 70ms | 3% | 2100 |
| GET / | 300ms | 0.3% | 900 |
| GET /events | 2ms | 44% | 880 |
| GET /user_uploads/\* | 12ms | 5% | 600 |
| POST /messages/flags | 25ms | 1.5% | 375 |
| POST /messages | 40ms | 0.5% | 200 |
| POST /users/me/\* | 50ms | 0.04% | 20 |
The "Average impact" above is computed by multiplying request volume
by average time; this tells you roughly that endpoint's **relative**
contribution to the steady-state total CPU load of the system. It's
not precise -- waiting for a network request is counted the same as
active CPU time, but it's extremely useful for providing intuition for
what code paths are most important to optimize, especially since
@@ -110,16 +107,16 @@ memcached to do work.
As one can see, there are two categories of endpoints that are
important for scalability: those with extremely high request volumes,
and those with moderately high request volumes that are also
expensive. It doesn't matter how expensive, for example,
`POST /users/me/subscriptions` is for scalability, because the volume
is negligible.
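To make the arithmetic concrete, here is the computation behind a few rows of the table above (plain Python reproducing the multiplication described in the text; the constant factor of 10 is inferred from the table's numbers):

```python
# "Average impact" is proportional to average time times request volume; only
# the relative values matter.
rows = [
    ("POST /users/me/presence", 25, 36.0),
    ("GET /messages", 70, 3.0),
    ("GET /", 300, 0.3),
    ("GET /events", 2, 44.0),
]
for endpoint, avg_ms, volume_pct in rows:
    print(f"{endpoint}: {avg_ms * volume_pct * 10:g}")
# Prints 9000, 2100, 900, and 880, matching the table.
```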
### Tornado
Zulip's Tornado-based [real-time push
system](../subsystems/events-system.md), and in particular
`GET /events`, accounts for something like 50% of all HTTP requests to
a production Zulip server. Despite `GET /events` being extremely
high-volume, the typical request takes 1-3ms to process, and doesn't
use the database at all (though it will access `memcached` and
`redis`), so they aren't a huge contributor to the overall CPU usage
@@ -132,16 +129,16 @@ usage of a Zulip installation.
It's worth noting that most (~80%) Tornado requests end the
longpolling via a `heartbeat` event, which are issued to idle
connections after about a minute. These `heartbeat` events are
useless aside from avoiding problems with networks/proxies/NATs that
are configured poorly and might kill HTTP connections that have been
idle for a minute. It's likely that with some strategy for detecting
such situations, we could reduce their volume (and thus overall
Tornado load) dramatically.
Currently, Tornado is sharded by realm, which is sufficient for
arbitrary scaling of the number of organizations on a multi-tenant
system like zulip.com. With a somewhat straightforward set of work,
one could change this to sharding by `user_id` instead, which will
eventually be important for individual large organizations with many
thousands of concurrent users.
@@ -151,9 +148,9 @@ thousands of concurrent users.
`POST /users/me/presence` requests, which submit the current user's
presence information and return the information for all other active
users in the organization, account for about 36% of all HTTP requests
on production Zulip servers. See
[presence](../subsystems/presence.md) for details on this system and
how it's optimized. For this article, it's important to know that
presence is one of the most important scalability concerns for any
chat system, because it cannot be cached long, and is structurally a
quadratic problem.
@@ -162,7 +159,7 @@ Because typical presence requests consume 10-50ms of server-side
processing time (to fetch and send back live data on all other active
users in the organization), and are such a high volume, presence is
the single most important source of steady-state load for a Zulip
server. This is true for most other chat server implementations as
well.
There is an ongoing [effort to rewrite the data model for
@@ -181,8 +178,8 @@ Zulip is somewhat unusual among webapps in sending essentially all of the
data required for the entire Zulip webapp in this single request,
which is part of why the Zulip webapp loads very quickly -- one only
needs a single round trip aside from cacheable assets (avatars, images, JS,
CSS). Data on other users in the organization, streams, supported
emoji, custom profile fields, etc., is all included. The nice thing
about this model is that essentially every UI element in the Zulip
client can be rendered immediately without paying latency to the
server; this is critical to Zulip feeling performant even for users
@@ -191,13 +188,13 @@ who have a lot of latency to the server.
There are only a few exceptions where we fetch data in a separate AJAX
request after page load:
- Message history is managed separately; this is why the Zulip webapp will
first render the entire site except for the middle panel, and then a
moment later render the middle panel (showing the message history).
- A few very rarely accessed data sets like [message edit
history](https://zulip.com/help/view-a-messages-edit-history) are
only fetched on demand.
- A few data sets that are only required for administrative settings
pages are fetched only when loading those parts of the UI.
Requests to `GET /` and `/api/v1/register` that fetch `page_params`
@@ -210,7 +207,7 @@ history](#fetching-message-history).
The cost for fetching `page_params` varies dramatically based
primarily on the organization's size, varying from 90ms-300ms for a
typical organization but potentially multiple seconds for large open
organizations with 10,000s of users. There is also smaller
variability based on an individual user's personal data state,
primarily in that having 10,000s of unread messages results in a
somewhat expensive query to find which streams/topics those are in.
@@ -221,7 +218,7 @@ greater than a second to be a bug, and there is ongoing work to fix that.
It can help when thinking about this to imagine `page_params` as what
in another webapp would have been 25 or so HTTP GET requests, each
fetching data of a given type (users, streams, custom emoji, etc.); in
Zulip, we just do all of those in a single API request. In the
future, we will likely move to a design that does much of the database
fetching work for different features in parallel to improve latency.
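For a concrete sense of what this looks like from a client's point of
view, here is a rough sketch using the python-zulip-api bindings; the
`~/.zuliprc` path and the chosen `event_types` are placeholders, and
the webapp itself fetches this state as part of loading `/` rather
than via these bindings.

```python
# Rough sketch: fetch initial state in one request, the way a client
# would via POST /api/v1/register.  Assumes python-zulip-api is
# installed and ~/.zuliprc contains valid credentials.
import zulip

client = zulip.Client(config_file="~/.zuliprc")

# One request returns the data that would otherwise take many separate
# GETs: users, streams, custom emoji, and so on.
state = client.register(
    event_types=["message", "realm_user", "stream", "realm_emoji"],
)
print(sorted(state.keys()))  # inspect which data sets came back
```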
@@ -232,32 +229,32 @@ of active optimization work.
### Fetching message history
Bulk requests for message content and metadata
([`GET /messages`](https://zulip.com/api/get-messages)) account for
~3% of total HTTP requests. The Zulip webapp has a few major reasons
it does a large number of these requests (a sketch of such a request
follows the list below):
- Most of these requests are from users clicking into different views
-- to avoid certain subtle bugs, Zulip's webapp currently fetches
content from the server even when it has the history for the
relevant stream/topic cached locally.
- When a browser opens the Zulip webapp, it will eventually fetch and
cache in the browser all messages newer than the oldest unread
message in a non-muted context. This can be in total extremely
expensive for users with 10,000s of unread messages, resulting in a
single browser doing 100 of these requests.
- When a new version of the Zulip server is deployed, every browser
will reload within 30 minutes to ensure they are running the latest
code. For installations that deploy often like chat.zulip.org and
zulip.com, this can result in a thundering herd effect for both `/`
and `GET /messages`. A great deal of care has been taken in
designing this [auto-reload
system](../subsystems/hashchange-system.html#server-initiated-reloads)
to spread most of that herd over several minutes.
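As a point of reference, the following is a sketch (using the
python-zulip-api bindings; the stream and topic names are made up) of
the kind of narrowed history fetch the webapp performs when you click
into a view:

```python
# Sketch of a GET /messages request for the newest 100 messages in a
# stream/topic narrow; "design" and "presence" are placeholder names.
import zulip

client = zulip.Client(config_file="~/.zuliprc")

result = client.get_messages({
    "anchor": "newest",
    "num_before": 100,
    "num_after": 0,
    "narrow": [
        {"operator": "stream", "operand": "design"},
        {"operator": "topic", "operand": "presence"},
    ],
})
for message in result["messages"]:
    print(message["id"], message["sender_full_name"])
```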
Typical requests consume 20-100ms to process, much of which is waiting
to fetch message IDs from the database and then their content from
memcached. While not large in an absolute sense, these requests are
expensive relative to most other Zulip endpoints.
Some requests, like full-text search for commonly used words, can be
@@ -265,16 +262,16 @@ more expensive, but they are sufficiently rare in an absolute sense so
as to be immaterial to the overall scalability of the system.
This server-side code path is already heavily optimized on a
per-request basis. However, we have technical designs for optimizing
the overall frequency with which clients need to make these requests
in two major ways:
- Improving [client-side
caching](https://github.com/zulip/zulip/issues/15131) to allow
caching of narrows that the user has viewed in the current session,
avoiding repeat fetches of message content during a given session.
- Adjusting the behavior for clients with 10,000s of unread messages
to not fetch as much old message history into the cache. See [this
issue](https://github.com/zulip/zulip/issues/16697) for relevant
design work.
@@ -284,7 +281,7 @@ scalability cost of fetching message history dramatically.
### User uploads
Requests to fetch uploaded files (including user avatars) account for
about 5% of total HTTP requests. Zulip spends consistently ~10-15ms
processing one of these requests (mostly authorization logic), before
handing off delivery of the file to `nginx` or S3 (depending on the
configured [upload backend](../production/upload-backends.md)).
@@ -306,7 +303,7 @@ sent to 50 users triggers ~50 `GET /events` requests.
A typical message-send request takes 20-70ms, with more expensive
requests typically resulting from [Markdown
rendering](../subsystems/markdown.md) of more complex syntax. As a
result, these requests are not material to Zulip's scalability.
Editing messages and adding emoji reactions are very similar to
sending them for the purposes of performance and scalability, since
@@ -339,11 +336,11 @@ The above doesn't cover all of the work that a production Zulip server
does; various tasks like sending outgoing emails or recording the data
that powers [/stats](https://zulip.com/help/analytics) are run by
[queue processors](../subsystems/queuing.md) and cron jobs, not in
response to incoming HTTP requests. In practice, all of these have
been written such that they are immaterial to total load and thus
architectural scalability, though we do from time to time need to do
operational work to add additional queue processors for particularly
high-traffic queues. For all of our queue processors, any
serialization requirements are at most per-user, and thus it would be
straightforward to shard by `user_id` or `realm_id` if required.
@@ -355,7 +352,7 @@ services (memcached, redis, rabbitmq, and most importantly postgres),
as well as queue processors (which might get backlogged).
In practice, efforts to make an individual endpoint faster will very
likely reduce the load on these services as well. But it is worth
considering that database time is a more precious resource than
Python/CPU time (being harder to scale horizontally).

View File

@@ -17,18 +17,18 @@ make to the model.
First a bit of terminology:
* "Narrowing" is the process of filtering to a particular subset of
- "Narrowing" is the process of filtering to a particular subset of
the messages the user has access to.
- The message that the blue cursor box (the "pointer") surrounds is called
the "selected" message. Zulip ensures that the currently selected
message is always in-view.
## Pointer logic
### Recipient bar: message you clicked
If you enter a narrow by clicking on a message group's _recipient bar_
(stream/topic or private message recipient list at the top of a group
of messages), Zulip will select the message you clicked on. This
provides a nice user experience where you get to see the stuff near
@@ -53,7 +53,7 @@ streams.)
### Unnarrow: previous sequence
When you unnarrow using e.g. the `a` key, you will automatically be
taken to the same message that was selected in the All messages view before
you narrowed, unless in the narrow you read new messages, in which
case you will be jumped forward to the first unread and non-muted
@@ -76,13 +76,13 @@ see [the architectural overview](../overview/architecture-overview.md).
How does Zulip decide whether a message has been read by the user?
The algorithm needs to correctly handle a range of ways people might
use the product. The algorithm is as follows (a small sketch in Python
follows the list):
- Any message which is selected or above a message which is selected
is marked as read. So messages are marked as read as you scroll
down with the keyboard, as the pointer passes over them.
- If the whitespace at the very bottom of the feed is in view, all
messages in view are marked as read.
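As a rough illustration only (the real webapp logic handles many more
cases), the two rules can be sketched like this:

```python
# Simplified sketch of the two rules above; message_ids are assumed to
# be ordered oldest-to-newest, matching their position in the feed.
from typing import List, Set


def messages_to_mark_read(
    message_ids: List[int],       # all messages in the current view
    selected_id: int,             # the currently selected message
    ids_in_viewport: List[int],   # messages currently visible on screen
    bottom_whitespace_in_view: bool,
) -> List[int]:
    read: Set[int] = set()
    # Rule 1: the selected message, and everything above it, is read.
    read.update(m for m in message_ids if m <= selected_id)
    # Rule 2: if the whitespace below the feed is visible, everything
    # currently in view is read.
    if bottom_whitespace_in_view:
        read.update(ids_in_viewport)
    return sorted(read)
```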
These two simple rules, combined with the pointer logic above, end up
@@ -95,10 +95,10 @@ thread; search views will never mark messages as read.
## Testing and development
In a Zulip development environment, you can use
`manage.py mark_all_messages_unread` to set every user's pointer to 0
and all messages as unread, for convenience in testing unread count
related logic.
It can be useful to combine this with `manage.py populate_db -n 3000`
(which rebuilds the database with 3000 initial messages) to ensure a

View File

@@ -5,53 +5,53 @@ This document explains the model for Zulip's presence.
In a chat tool like Zulip, users expect to see the “presence” status
of other users: is the person I want to talk to currently online? If
not, were they last online 5 minutes ago, or more like an hour ago, or
a week? Presence helps set expectations for whether someone is likely
to respond soon. To a user, this feature can seem like a simple thing
that should be easy. But presence is actually one of the hardest
scalability problems for a team chat tool like Zulip.
There are a lot of performance-related details in the backend and
network protocol design that we won't get into here. The focus of
this is what one needs to know to correctly implement a Zulip client's
presence implementation (e.g. webapp, mobile app, terminal client, or
other tool that's intended to represent whether a user is online and
using Zulip).
A client should send the server a `POST` request to
`/users/me/presence` every minute, containing the current user's
status (a sketch of such a request follows the list below). The
request contains a few parameters. The most important is "status",
which has two valid values:
* "active" -- this means the user has interacted with the client
recently. We use this for the "green" state in the webapp.
* "idle" -- the user has not interacted with the client recently.
- "active" -- this means the user has interacted with the client
recently. We use this for the "green" state in the webapp.
- "idle" -- the user has not interacted with the client recently.
This is important for the case where a user left a Zulip tab open on
their desktop at work and went home for the weekend. We use this
for the "orange" state in the webapp.
The client receives in the response to that request a data set that,
for each user, contains their status and timestamp that we last heard
from that client. There are a few important details to understand
about that data structure:
- It's really important that the timestamp is the last time we heard
from the client. A client can only interpret the status to display
about another user by doing a simple computation using the (status,
timestamp) pair. E.g. a user who last used Zulip 1 week ago will
have a timestamp of 1 week ago and a status of "active". Why?
Because this correctly handles the race conditions. For example, if
the threshold for displaying a user as "offline" was 5 minutes
since the user was last online, the client can at any time
accurately compute whether that user is offline (even if the last
data from the server was 45 seconds ago, and the user was last
online 4:30 before the client received that server data).
- Users can disable their own presence updates in user settings
(`UserProfile.presence_enabled` is the flag storing [this user
preference](https://zulip.com/help/status-and-availability#disable-updating-availability)).
- The `status_from_timestamp` function in `static/js/presence.js` is
useful sample code; the `OFFLINE_THRESHOLD_SECS` check is critical
to correct output (a Python sketch of the same computation follows
this list).
- We provide the data for e.g. whether the user was online on their
desktop or the mobile app, but for a basic client, you will likely
only want to parse the "aggregated" key, which shows the summary
answer for "is this user online".

View File

@@ -1,26 +1,26 @@
# Queue processors
Zulip uses RabbitMQ to manage a system of internal queues. These are
used for a variety of purposes:
- Asynchronously doing expensive operations like sending email
notifications which can take seconds per email and thus would
otherwise time out when 100s are triggered at once (E.g. inviting a
lot of new users to a realm).
- Asynchronously doing non-time-critical somewhat expensive operations
like updating analytics tables (e.g. UserActivityInternal) which
don't have any immediate runtime effect.
- Communicating events to push to clients (browsers, etc.) from the
main Zulip Django application process to the Tornado-based events
system. Example events might be that a new message was sent, a user
has changed their subscriptions, etc.
- Processing mobile push notifications and email mirroring system
messages.
- Processing various errors, frontend tracebacks, and slow database
queries in a batched fashion.
Needless to say, the RabbitMQ-based queuing system is an important
@@ -35,23 +35,23 @@ custom integration defined in `zerver/lib/queue.py`.
To add a new queue processor (a skeleton worker is sketched after this list):
- Define the processor in `zerver/worker/queue_processors.py` using
the `@assign_queue` decorator; it's pretty easy to get the template
for an existing similar queue processor. This suffices to test your
queue worker in the Zulip development environment
(`tools/run-dev.py` will automatically restart the queue processors
and start running your new queue processor code). You can also run
a single queue processor manually using e.g.
`./manage.py process_queue --queue=user_activity`.
- So that supervisord will know to run the queue processor in
production, you will need to add to the `queues` variable in
`puppet/zulip/manifests/app_frontend_base.pp`; the list there is
used to generate `/etc/supervisor/conf.d/zulip.conf`.
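A skeleton worker, following the pattern of the existing workers in
that file (the `example_work` queue name and the event fields are
hypothetical), might look like:

```python
# Added to zerver/worker/queue_processors.py, where assign_queue and
# QueueProcessingWorker are defined; "example_work" is a hypothetical queue.
import logging
from typing import Any, Dict


@assign_queue("example_work")
class ExampleWorkWorker(QueueProcessingWorker):
    # consume() is called once for each event published to the queue.
    def consume(self, event: Dict[str, Any]) -> None:
        logging.info("Doing example work for user %s", event["user_profile_id"])
```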
The queue will automatically be added to the list of queues tracked by
`scripts/nagios/check-rabbitmq-consumers`, so Nagios can properly
check whether a queue processor is running for your queue. You still
need to update the sample Nagios configuration in `puppet/zulip_ops`
manually.
@@ -61,12 +61,12 @@ You can publish events to a RabbitMQ queue using the
`queue_json_publish` function defined in `zerver/lib/queue.py`.
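For example, something like the following (reusing the hypothetical
`example_work` queue sketched above) would enqueue an event from
Django code:

```python
from zerver.lib.queue import queue_json_publish

# user_profile is assumed to be a UserProfile already in scope.
queue_json_publish(
    "example_work",
    {"user_profile_id": user_profile.id, "action": "do_example_work"},
)
```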
An interesting challenge with queue processors is what should happen
when events are queued in Zulip's backend tests. Our current solution is
that in the tests, `queue_json_publish` will (by default) simply call
the `consume` method for the relevant queue processor. However,
`queue_json_publish` also supports being passed a function that should
be called in the tests instead of the queue processor's `consume`
method. Where possible, we prefer the model of calling `consume` in
tests since that's more predictable and automatically covers the queue
processor's code path, but it isn't always possible.
@@ -75,14 +75,14 @@ processor's code path, but it isn't always possible.
If you need to clear a queue (delete all the events in it), run
`./manage.py purge_queue <queue_name>`, for example:
```bash
./manage.py purge_queue user_activity
```
You can also use the amqp tools directly. Install `amqp-tools` from
apt and then run:
```bash
amqp-delete-queue --username=zulip --password='...' --server=localhost \
--queue=user_presence
```

View File

@@ -1,6 +1,6 @@
# Realms in Zulip
Zulip allows multiple _realms_ to be hosted on a single instance.
Realms are the Zulip codebase's internal name for what we refer to in
user documentation as an organization (the name "realm" comes from
[Kerberos](https://web.mit.edu/kerberos/)).
@@ -8,7 +8,7 @@ user documentation as an organization (the name "realm" comes from
Wherever possible, we avoid using the term `realm` in any user-facing
string or documentation; "Organization" is the equivalent term used in
those contexts (and we have linters that attempt to enforce this rule
in translatable strings). We may in the future modify Zulip's
internals to use `organization` instead.
The
@@ -19,27 +19,27 @@ are also relevant reading.
There are two main methods for creating realms.
- Using unique link generator
- Enabling open realm creation
### Using unique link generator
```bash
./manage.py generate_realm_creation_link
```
The above command will output a URL which can be used for creating a
new realm and an administrator user for that realm. The link expires
after the creation of the realm. The link also expires if not used
within 7 days. The expiration period can be changed by modifying
`REALM_CREATION_LINK_VALIDITY_DAYS` in settings.py.
### Enabling open realm creation
If you want anyone to be able to create new realms on your server, you
can enable open realm creation. This will add a **Create new
organization** link to your Zulip homepage footer, and anyone can
create a new realm by visiting this link (**/new**). This
feature is disabled by default in production instances, and can be
enabled by setting `OPEN_REALM_CREATION = True` in settings.py.
@@ -57,30 +57,30 @@ records so that the subdomains point to your Zulip installation IP. An
job.
We also recommend upgrading to at least Zulip 1.7, since older Zulip
releases had much less nice handling for subdomains. See our
[docs on using subdomains](../production/multiple-organizations.md) for
user-facing documentation on this.
### Working with subdomains in development environment
By default, Linux does not provide a convenient way to use subdomains
in your local development environment. To solve this problem, we use
the **zulipdev.com** domain, which has a wildcard A record pointing to
127.0.0.1. You can use zulipdev.com to connect to your Zulip
development server instead of localhost. The default realm with the
Shakespeare users has the subdomain `zulip` and can be accessed by
visiting **zulip.zulipdev.com**.
If you are behind a **proxy server**, this method won't work. When you
make a request to load zulipdev.com in your browser, the proxy server
will try to get the page on your behalf. Since zulipdev.com points
to 127.0.0.1, the proxy server is likely to give you a 503 error. The
workaround is to disable your proxy for `*.zulipdev.com`. The DNS
lookup should still work even if you disable proxy for
\*.zulipdev.com. If it doesn't, you can add zulipdev.com records in
the `/etc/hosts` file. The file should look something like this:
```text
127.0.0.1 localhost
127.0.0.1 zulipdev.com

Some files were not shown because too many files have changed in this diff.