diff --git a/docs/contributing/accessibility.md b/docs/contributing/accessibility.md
index dec4b7baab..e960852d0c 100644
--- a/docs/contributing/accessibility.md
+++ b/docs/contributing/accessibility.md
@@ -60,7 +60,7 @@ Problems with Zulip's accessibility should be reported as
label. This label can be added by entering the following text in a separate
comment on the issue:
- @zulipbot add "area: accessibility"
+> @zulipbot add "area: accessibility"
If you want to help make Zulip more accessible, here is a list of the
[currently open accessibility issues][accessibility-issues].
diff --git a/docs/contributing/code-style.md b/docs/contributing/code-style.md
index fb2a655f7b..304725fa68 100644
--- a/docs/contributing/code-style.md
+++ b/docs/contributing/code-style.md
@@ -34,11 +34,15 @@ When in doubt, ask in [chat.zulip.org](https://chat.zulip.org).
You can run them all at once with
- ./tools/lint
+```bash
+./tools/lint
+```
You can set this up as a local Git commit hook with
- tools/setup-git-repo
+```bash
+tools/setup-git-repo
+```
The Vagrant setup process runs this for you.
@@ -66,17 +70,21 @@ to read secrets from `/etc/zulip/secrets.conf`.
Look out for Django code like this:
- bars = Bar.objects.filter(...)
- for bar in bars:
- foo = bar.foo
- # Make use of foo
+```python
+bars = Bar.objects.filter(...)
+for bar in bars:
+ foo = bar.foo
+ # Make use of foo
+```
...because it equates to:
- bars = Bar.objects.filter(...)
- for bar in bars:
- foo = Foo.objects.get(id=bar.foo.id)
- # Make use of foo
+```python
+bars = Bar.objects.filter(...)
+for bar in bars:
+ foo = Foo.objects.get(id=bar.foo.id)
+ # Make use of foo
+```
...which makes a database query for every Bar. While this may be fast
locally in development, it may be quite slow in production! Instead,
@@ -84,10 +92,12 @@ tell Django's [QuerySet
API](https://docs.djangoproject.com/en/dev/ref/models/querysets/) to
_prefetch_ the data in the initial query:
- bars = Bar.objects.filter(...).select_related()
- for bar in bars:
- foo = bar.foo # This doesn't take another query, now!
- # Make use of foo
+```python
+bars = Bar.objects.filter(...).select_related()
+for bar in bars:
+ foo = bar.foo # This doesn't take another query, now!
+ # Make use of foo
+```
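For instance, here is a minimal sketch (using the hypothetical `Bar`/`Foo` models
from the example above, not actual Zulip code) of how a test could confirm that
the prefetching version issues only a single query:

```python
from django.test import TestCase

from myapp.models import Bar  # hypothetical import for the example models above


class BarPrefetchTest(TestCase):
    def test_select_related_avoids_extra_queries(self) -> None:
        # With select_related(), the related Foo rows are fetched in the same
        # query via a JOIN, so the loop below issues no additional queries.
        with self.assertNumQueries(1):
            for bar in Bar.objects.all().select_related("foo"):
                bar.foo  # Already loaded; does not hit the database again.
```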
If you can't rewrite it as a single query, that's a sign that something
is wrong with the database schema. So don't defer this optimization when
@@ -118,7 +128,7 @@ different database queries:
For example, the following will, surprisingly, fail:
-```
+```python
# Bad example -- will raise!
obj: UserProfile = get_user_profile_by_id(17)
some_objs = UserProfile.objects.get(id=17)
@@ -127,7 +137,7 @@ assert obj in set([some_objs])
You should work with the IDs instead:
-```
+```python
obj: UserProfile = get_user_profile_by_id(17)
some_objs = UserProfile.objects.get(id=17)
assert obj.id in set([o.id for o in some_objs])
@@ -266,18 +276,24 @@ The best way to build complicated DOM elements is a Mustache template
like `static/templates/message_reactions.hbs`. For simpler things
you can use jQuery DOM building APIs like so:
- var new_tr = $('<tr />').attr('id', object.id);
+```js
+var new_tr = $('<tr />').attr('id', object.id);
+```
Passing an HTML string to jQuery is fine for simple hardcoded things
that don't need internationalization:
- foo.append('<p>foo</p>');
+```js
+foo.append('<p>foo</p>');
+```
but avoid programmatically building complicated strings.
We used to favor attaching behaviors in templates like so:
- <p onclick="select_zerver({{id}})">
+```html
+<p onclick="select_zerver({{id}})">
+```
but there are some reasons to prefer attaching events using jQuery code:
@@ -328,8 +344,10 @@ type changes in the future.
reason to do otherwise.
- Unpacking sequences doesn't require list brackets:
- [x, y] = xs # unnecessary
- x, y = xs # better
+ ```python
+ [x, y] = xs # unnecessary
+ x, y = xs # better
+ ```
- For string formatting, use `x % (y,)` rather than `x % y`, to avoid
ambiguity if `y` happens to be a tuple.
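  As a quick illustration (a hypothetical snippet, not taken from the Zulip
  codebase), the unparenthesized form breaks as soon as `y` happens to be a
  tuple:

  ```python
  # %-formatting pitfall: a bare tuple is unpacked into multiple format arguments.
  y = ("foo", "bar")

  print("value: %s" % (y,))  # OK: value: ('foo', 'bar')
  print("value: %s" % y)     # TypeError: not all arguments converted during string formatting
  ```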
diff --git a/docs/development/remote.md b/docs/development/remote.md
index c392e389d0..7b7eb55124 100644
--- a/docs/development/remote.md
+++ b/docs/development/remote.md
@@ -21,7 +21,7 @@ The best way to connect to your server is using the command line tool `ssh`.
Open *Terminal* or *Bash for Git*, and connect with the following:
-```
+```console
$ ssh username@host
```
@@ -75,7 +75,7 @@ Once you have set up the development environment, you can start up the
development server with the following command in the directory where
you cloned Zulip:
-```
+```bash
./tools/run-dev.py --interface=''
```
@@ -98,7 +98,7 @@ such as a DigitalOcean Droplet or an AWS EC2 instance, you can set up
port-forwarding to access Zulip by running the following command in
your terminal:
-```
+```bash
ssh -L 3000:127.0.0.1:9991 <username>@<host> -N
```
@@ -202,7 +202,7 @@ developing locally.
1. Install the extension
[Remote VSCode](https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode).
2. On your remote machine, run:
- ```
+ ```console
$ mkdir -p ~/bin
$ curl -Lo ~/bin/rmate https://raw.githubusercontent.com/textmate/rmate/master/bin/rmate
$ chmod a+x ~/bin/rmate
@@ -210,11 +210,11 @@ developing locally.
3. Make sure the remote server is running in VS Code (you can
force-start through the Command Palette).
4. SSH to your remote machine using
- ```
+ ```console
$ ssh -R 52698:localhost:52698 user@example.org
```
5. On your remote machine, run
- ```
+ ```console
$ rmate [options] file
```
and the file should open up in VS Code. Any changes you make now will be saved remotely.
@@ -292,7 +292,7 @@ different.
1. First, get an SSL certificate; you can use
[our certbot wrapper script used for production](../production/ssl-certificates.html#certbot-recommended)
by running the following commands as root:
- ```
+ ```bash
# apt install -y crudini
mkdir -p /var/lib/zulip/certbot-webroot/
# if nginx running this will fail and you need to run `service nginx stop`
@@ -303,7 +303,7 @@ different.
1. Install nginx configuration:
- ```
+ ```bash
apt install -y nginx-full
cp -a /home/zulipdev/zulip/tools/droplets/zulipdev /etc/nginx/sites-available/
ln -nsf /etc/nginx/sites-available/zulipdev /etc/nginx/sites-enabled/
@@ -316,6 +316,6 @@ different.
will be HTTPS.
1. Start the Zulip development environment with the following command:
- ```
+ ```bash
env EXTERNAL_HOST="hostname.example.com" ./tools/run-dev.py --interface=''
```
diff --git a/docs/development/setup-advanced.md b/docs/development/setup-advanced.md
index ef58d80928..3d13bd7a28 100644
--- a/docs/development/setup-advanced.md
+++ b/docs/development/setup-advanced.md
@@ -36,13 +36,13 @@ the
Start by [cloning your fork of the Zulip repository][zulip-rtd-git-cloning]
and [connecting the Zulip upstream repository][zulip-rtd-git-connect]:
-```
+```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
```
-```
+```bash
# On CentOS/RHEL, you must first install epel-release, and then python36,
# and finally you must run `sudo ln -nsf /usr/bin/python36 /usr/bin/python3`
# On Fedora, you must first install python3
@@ -71,20 +71,20 @@ installation method described here.
1. Launch the `Ubuntu 18.04` shell and run the following commands:
- ```
+ ```bash
sudo apt update && sudo apt upgrade
sudo apt install rabbitmq-server memcached redis-server postgresql
```
1. Open `/etc/rabbitmq/rabbitmq-env.conf` using e.g.:
- ```
+ ```bash
sudo vim /etc/rabbitmq/rabbitmq-env.conf
```
Add the following lines at the end of your file and save:
- ```
+ ```ini
NODE_IP_ADDRESS=127.0.0.1
NODE_PORT=5672
```
@@ -92,14 +92,14 @@ installation method described here.
1. Make sure you are inside the WSL disk and not in a Windows mounted disk.
You will run into permission issues if you run `provision` from `zulip`
in a Windows mounted disk.
- ```
+ ```bash
cd ~ # or cd /home/USERNAME
```
1. [Clone your fork of the Zulip repository][zulip-rtd-git-cloning]
and [connecting the Zulip upstream repository][zulip-rtd-git-connect]:
- ```
+ ```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git ~/zulip
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
@@ -109,7 +109,7 @@ installation method described here.
start it (click `Allow access` if you get popups for Windows Firewall
blocking some services)
- ```
+ ```bash
# Start database, cache, and other services
./tools/wsl/start_services
# Install/update the Zulip development environment
@@ -154,7 +154,7 @@ expected.
1. Start by [cloning your fork of the Zulip repository][zulip-rtd-git-cloning]
and [connecting the Zulip upstream repository][zulip-rtd-git-connect]:
- ```
+ ```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
@@ -169,7 +169,7 @@ expected.
You should get output like this:
- ```text
+ ```console
Bringing machine 'default' up with 'hyperv' provider...
==> default: Verifying Hyper-V is enabled...
==> default: Verifying Hyper-V is accessible...
@@ -203,14 +203,14 @@ expected.
1. Set the `EXTERNAL_HOST` environment variable.
- ```bash
+ ```console
(zulip-py3-venv) vagrant@ubuntu-18:/srv/zulip$ export EXTERNAL_HOST="$(hostname -I | xargs):9991"
(zulip-py3-venv) vagrant@ubuntu-18:/srv/zulip$ echo $EXTERNAL_HOST
```
The output will be like:
- ```text
+ ```console
172.28.122.156:9991
```
@@ -226,13 +226,13 @@ expected.
1. You should now be able to start the Zulip development server.
- ```bash
+ ```console
(zulip-py3-venv) vagrant@ubuntu-18:/srv/zulip$ ./tools/run-dev.py
```
The output will look like:
- ```text
+ ```console
Starting Zulip on:
http://172.30.24.235:9991/
diff --git a/docs/development/setup-vagrant.md b/docs/development/setup-vagrant.md
index 3596fb3817..c4e7b9de5f 100644
--- a/docs/development/setup-vagrant.md
+++ b/docs/development/setup-vagrant.md
@@ -94,14 +94,14 @@ Now you are ready for [Step 2: Get Zulip code](#step-2-get-zulip-code).
##### 1. Install Vagrant, Docker, and Git
-```
+```console
christie@ubuntu-desktop:~
$ sudo apt install vagrant docker.io git
```
##### 2. Add yourself to the `docker` group:
-```
+```console
christie@ubuntu-desktop:~
$ sudo adduser $USER docker
Adding user `christie' to group `docker' ...
@@ -112,7 +112,7 @@ Done.
You will need to reboot for this change to take effect. If it worked,
you will see `docker` in your list of groups:
-```
+```console
christie@ubuntu-desktop:~
$ groups | grep docker
christie adm cdrom sudo dip plugdev lpadmin sambashare docker
@@ -126,7 +126,7 @@ bug](https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1844894)
may prevent Docker from being automatically enabled and started after
installation. You can check using the following:
-```
+```console
$ systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
@@ -137,7 +137,7 @@ If the service is not running, you'll see `Active: inactive (dead)` on
the second line, and will need to enable and start the Docker service
using the following:
-```
+```bash
sudo systemctl unmask docker
sudo systemctl enable docker
sudo systemctl start docker
@@ -189,13 +189,13 @@ In **Git for BASH**:
Open **Git BASH as an administrator** and run:
-```
+```console
$ git config --global core.symlinks true
```
Now confirm the setting:
-```
+```console
$ git config core.symlinks
true
```
@@ -210,7 +210,7 @@ In **Cygwin**:
Open a Cygwin window **as an administrator** and do this:
-```
+```console
christie@win10 ~
$ echo 'export "CYGWIN=$CYGWIN winsymlinks:native"' >> ~/.bash_profile
```
@@ -218,7 +218,7 @@ $ echo 'export "CYGWIN=$CYGWIN winsymlinks:native"' >> ~/.bash_profile
Next, close that Cygwin window and open another. If you `echo` $CYGWIN you
should see:
-```
+```console
christie@win10 ~
$ echo $CYGWIN
winsymlinks:native
@@ -244,7 +244,7 @@ projects and to instead follow these instructions exactly.)
[clone your fork of the Zulip repository](../git/cloning.html#step-1b-clone-to-your-machine) and
[connect the Zulip upstream repository](../git/cloning.html#step-1c-connect-your-fork-to-zulip-upstream):
-```
+```bash
git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
cd zulip
git remote add -f upstream https://github.com/zulip/zulip.git
@@ -255,7 +255,7 @@ This will create a 'zulip' directory and download the Zulip code into it.
Don't forget to replace YOURUSERNAME with your Git username. You will see
something like:
-```
+```console
christie@win10 ~
$ git clone --config pull.rebase git@github.com:YOURUSERNAME/zulip.git
Cloning into 'zulip'...
@@ -276,7 +276,7 @@ environment](#step-3-start-the-development-environment).
Change into the zulip directory and tell vagrant to start the Zulip
development environment with `vagrant up`:
-```
+```bash
# On Windows or macOS:
cd zulip
vagrant plugin install vagrant-vbguest
@@ -320,14 +320,14 @@ specified.` several times. This is normal and is not a problem.
Once `vagrant up` has completed, connect to the development
environment with `vagrant ssh`:
-```
+```console
christie@win10 ~/zulip
$ vagrant ssh
```
You should see output that starts like this:
-```
+```console
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-54-generic x86_64)
```
@@ -340,14 +340,14 @@ provisioning failed and you should look at the
Next, start the Zulip server:
-```
+```console
(zulip-py3-venv) vagrant@ubuntu-bionic:/srv/zulip
$ ./tools/run-dev.py
```
You will see several lines of output starting with something like:
-```
+```console
2016-05-04 22:20:33,895 INFO: process_fts_updates starting
Recompiling templates
2016-05-04 18:20:34,804 INFO: Not in recovery; listening for FTS updates
@@ -364,7 +364,7 @@ Performing system checks...
```
And ending with something similar to:
-```
+```console
http://localhost:9994/webpack-dev-server/
webpack result is served from http://localhost:9991/webpack/
content is served from /srv/zulip
@@ -385,7 +385,7 @@ The Zulip server will continue to run and send output to the terminal window.
When you navigate to Zulip in your browser, check your terminal and you
should see something like:
-```
+```console
2016-05-04 18:21:57,547 INFO 127.0.0.1 GET 302 582ms (+start: 417ms) / (unauth@zulip via ?)
[04/May/2016 18:21:57]"GET / HTTP/1.0" 302 0
2016-05-04 18:21:57,568 INFO 127.0.0.1 GET 301 4ms /login (unauth@zulip via ?)
@@ -484,7 +484,7 @@ can halt vagrant from another Terminal/Git BASH window.
From the window where run-dev.py is running:
-```
+```console
2016-05-04 18:33:13,330 INFO 127.0.0.1 GET 200 92ms /register/ (unauth@zulip via ?)
^C
KeyboardInterrupt
@@ -495,7 +495,7 @@ christie@win10 ~/zulip
```
Now you can suspend the development environment:
-```
+```console
christie@win10 ~/zulip
$ vagrant suspend
==> default: Saving VM state and suspending execution...
@@ -503,7 +503,7 @@ $ vagrant suspend
If `vagrant suspend` doesn't work, try `vagrant halt`:
-```
+```console
christie@win10 ~/zulip
$ vagrant halt
==> default: Attempting graceful shutdown of VM...
@@ -520,7 +520,7 @@ pass the `--provider` option required above). You will also need to
connect to the virtual machine with `vagrant ssh` and re-start the
Zulip server:
-```
+```console
christie@win10 ~/zulip
$ vagrant up
$ vagrant ssh
@@ -572,7 +572,7 @@ This is caused by provisioning failing to complete successfully. You
can see the errors in `var/log/provision.log`; it should end with
something like this:
-```
+```text
ESC[94mZulip development environment setup succeeded!ESC[0m
```
@@ -589,7 +589,7 @@ shell and run `vagrant ssh` again to get the virtualenv setup properly.
#### Vagrant was unable to mount VirtualBox shared folders
For the following error:
-```
+```console
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel
@@ -603,7 +603,7 @@ was:
If this error starts happening unexpectedly, then just run:
-```
+```bash
vagrant halt
vagrant up
```
@@ -615,7 +615,7 @@ to reboot the guest. After this, you can do `vagrant provision` and
If you receive the following error while running `vagrant up`:
-```
+```console
SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
```
@@ -627,14 +627,14 @@ better network connection).
When running `vagrant up` or `provision`, if you see the following error:
-```
+```console
==> default: E:unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
```
It means that your local apt repository has been corrupted, which can
usually be resolved by executing the command:
-```
+```bash
apt-get -f install
```
@@ -642,7 +642,7 @@ apt-get -f install
On running `vagrant ssh`, if you see the following error:
-```
+```console
ssh_exchange_identification: Connection closed by remote host
```
@@ -655,7 +655,7 @@ for more details.
If you receive the following error while running `vagrant up`:
-```
+```console
==> default: Traceback (most recent call last):
==> default: File "./emoji_dump.py", line 75, in <module>
==> default:
@@ -697,7 +697,7 @@ Get the name of your virtual machine by running `vboxmanage list vms` and
then print out the custom settings for this virtual machine with
`vboxmanage getextradata YOURVMNAME enumerate`:
-```
+```console
christie@win10 ~/zulip
$ vboxmanage list vms
"zulip_default_1462498139595_55484" {5a65199d-8afa-4265-b2f6-6b1f162f157d}
@@ -716,7 +716,7 @@ If `vboxmanage enumerate` prints nothing, or shows a value of 0 for
VBoxInternal2/SharedFoldersEnableSymlinksCreate/srv_zulip, then enable
symbolic links by running this command in Terminal/Git BASH/Cygwin:
-```
+```bash
vboxmanage setextradata YOURVMNAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/srv_zulip 1
```
@@ -730,7 +730,7 @@ Windows is incorrectly attempting to use Hyper-V rather than
Virtualbox as the virtualization provider. You can fix this by
explicitly passing the virtualbox provider to `vagrant up`:
-```
+```console
christie@win10 ~/zulip
$ vagrant up --provider=virtualbox
```
@@ -739,7 +739,7 @@ $ vagrant up --provide=virtualbox
If you see the following error after running `vagrant up`:
-```
+```console
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
@@ -762,7 +762,7 @@ this post](https://stackoverflow.com/questions/22575261/vagrant-stuck-connection
If you see the following error when you run `vagrant up`:
-```
+```console
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
@@ -809,7 +809,7 @@ proxy to access the Internet and haven't [configured the development
environment to use it](#specifying-a-proxy).
Once you've provisioned successfully, you'll get output like this:
-```
+```console
Zulip development environment setup succeeded!
(zulip-py3-venv) vagrant@vagrant-base-trusty-amd64:~/zulip$
```
@@ -836,7 +836,7 @@ the VM.
##### yarn install warnings
-```
+```console
$ yarn install
yarn install v0.24.5
[1/4] Resolving packages...
@@ -853,7 +853,7 @@ It is okay to proceed and start the Zulip server.
#### VBoxManage errors related to VT-x or WHvSetupPartition
-```
+```console
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
@@ -866,7 +866,7 @@ VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap,
or
-```
+```console
Stderr: VBoxManage.exe: error: Call to WHvSetupPartition failed: ERROR_SUCCESS (Last=0xc000000d/87) (VERR_NEM_VM_CREATE_FAILED)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap, interface IConsole
```
@@ -882,7 +882,7 @@ later, run `bcdedit /deletevalue hypervisorlaunchtype`, and reboot.
#### OSError: [Errno 26] Text file busy
-```
+```console
default: Traceback (most recent call last):
…
default: File "/srv/zulip-py3-venv/lib/python3.6/shutil.py", line 426, in _rmtree_safe_fd
@@ -896,7 +896,7 @@ the VirtualBox Guest Additions for Linux on Windows hosts. You can
check the running version of VirtualBox Guest Additions with this
command:
-```
+```bash
vagrant ssh -- 'modinfo -F version vboxsf'
```
@@ -905,13 +905,13 @@ able to work around it by downgrading VirtualBox Guest Additions to
6.0.4. To do this, create a `~/.zulip-vagrant-config` file and add
this line:
-```
+```text
VBOXADD_VERSION 6.0.4
```
Then run these commands (yes, reload is needed twice):
-```
+```bash
vagrant plugin install vagrant-vbguest
vagrant reload
vagrant reload --provision
@@ -927,7 +927,7 @@ a local mirror closer to your location. To do this, create
`~/.zulip-vagrant-config` and add a line like this, replacing the URL
as appropriate:
-```
+```text
UBUNTU_MIRROR http://us.archive.ubuntu.com/ubuntu/
```
@@ -937,14 +937,14 @@ If you need to use a proxy server to access the Internet, you will
need to specify the proxy settings before running `vagrant up`.
First, install the Vagrant plugin `vagrant-proxyconf`:
-```
+```bash
vagrant plugin install vagrant-proxyconf
```
Then create `~/.zulip-vagrant-config` and add the following lines to
it (with the appropriate values in it for your proxy):
-```
+```text
HTTP_PROXY http://proxy_host:port
HTTPS_PROXY http://proxy_host:port
NO_PROXY localhost,127.0.0.1,.example.com,.zulipdev.com
@@ -953,7 +953,7 @@ NO_PROXY localhost,127.0.0.1,.example.com,.zulipdev.com
For proxies that require authentication, the config will be a bit more
complex, e.g.:
-```
+```text
HTTP_PROXY http://userName:userPassword@192.168.1.1:8080
HTTPS_PROXY http://userName:userPassword@192.168.1.1:8080
NO_PROXY localhost,127.0.0.1,.example.com,.zulipdev.com
@@ -978,7 +978,7 @@ then do a `vagrant reload`.
You can also change the port on the host machine that Vagrant uses by
adding to your `~/.zulip-vagrant-config` file. E.g. if you set:
-```
+```text
HOST_PORT 9971
```
@@ -989,7 +989,7 @@ If you'd like to be able to connect to your development environment from other
machines than the VM host, you can manually set the host IP address in the
'~/.zulip-vagrant-config' file as well. For example, if you set:
-```
+```text
HOST_IP_ADDR 0.0.0.0
```
@@ -1015,14 +1015,14 @@ more resources.
To do so, create a `~/.zulip-vagrant-config` file containing the
following lines:
-```
+```text
GUEST_CPUS <number of CPUs>
GUEST_MEMORY_MB <memory in MB>
```
For example:
-```
+```text
GUEST_CPUS 4
GUEST_MEMORY_MB 8192
```
diff --git a/docs/development/test-install.md b/docs/development/test-install.md
index 27bd782e98..8c904f1ca6 100644
--- a/docs/development/test-install.md
+++ b/docs/development/test-install.md
@@ -17,7 +17,7 @@ RAM, in order to accommodate the VMs and the steps which build the
release assets.
To begin, install the LXC toolchain:
-```
+```bash
sudo apt-get install lxc lxc-utils
```
@@ -32,7 +32,7 @@ You only need to do this step once per time you work on a set of
changes, to refresh the package that the installer uses. The installer
doesn't work cleanly out of a source checkout; it wants a release
checkout, so we build a tarball of one of those first:
-```
+```bash
./tools/build-release-tarball test-installer
```
@@ -46,7 +46,7 @@ directory. The test installer needs the release directory to be named
`zulip-server`, so we rename it and move it appropriately. In the
first line, you'll need to substitute the actual path that you got for
the tarball, above:
-```
+```bash
tar xzf /tmp/tmp.fepqqNBWxp/zulip-server-test-installer.tar.gz
mkdir zulip-test-installer
mv zulip-server-test-installer zulip-test-installer/zulip-server
@@ -65,7 +65,7 @@ into the installer.
For example, to test an install onto Ubuntu 20.04 "Focal", we might
call:
-```
+```bash
sudo ./tools/test-install/install \
-r focal \
./zulip-test-installer/ \
@@ -82,7 +82,7 @@ take a while.
Regardless of if the install succeeds or fails, it will stay running
so you can inspect it. You can see all of the containers which are
running, and their randomly-generated names, by running:
-```
+```bash
sudo lxc-ls -f
```
@@ -90,7 +90,7 @@ sudo lxc-ls -f
After using `lxc-ls` to list containers, you can choose one of them
and connect to its terminal:
-```
+```bash
sudo lxc-attach --clear-env -n zulip-install-focal-PUvff
```
@@ -98,12 +98,12 @@ sudo lxc-attach --clear-env -n zulip-install-focal-PUvff
To destroy all containers (but leave the base containers, which speed
up the initial install):
-```
+```bash
sudo ./tools/test-install/destroy-all -f
```
To destroy just one container:
-```
+```bash
sudo lxc-destroy -f -n zulip-install-focal-PUvff
```
@@ -115,7 +115,7 @@ Iterate on the installer by making changes to your source tree,
copying them into the release directory, and re-running the installer,
which will start up a new container. Here, we update just the
`scripts` and `puppet` directories of the release directory:
-```
+```bash
rsync -az scripts puppet zulip-test-installer/zulip-server/
sudo ./tools/test-install/install \
@@ -124,4 +124,3 @@ sudo ./tools/test-install/install \
--hostname=zulip.example.net \
--email=username@example.net
```
-
diff --git a/docs/documentation/api.md b/docs/documentation/api.md
index f156bf66dc..09f73ca056 100644
--- a/docs/documentation/api.md
+++ b/docs/documentation/api.md
@@ -101,7 +101,7 @@ defined using a special Markdown extension
(`zerver/openapi/markdown_extension.py`). To use this extension, one
writes a Markdown file block that looks something like this:
-```
+```md
{start_tabs}
{tab|python}
@@ -169,7 +169,7 @@ an API endpoint supports. You'll see this in files like
directive (implemented in
`zerver/lib/markdown/api_arguments_table_generator.py`):
-```
+```md
{generate_api_arguments_table|zulip.yaml|/messages/render:post}
```
@@ -186,7 +186,7 @@ You can use the following Markdown directive to render the fixtures
defined in the OpenAPI `zulip.yaml` for a given endpoint and status
code:
-```
+```md
{generate_code_example|/messages/render:post|fixture(200)}
```
diff --git a/docs/documentation/integrations.md b/docs/documentation/integrations.md
index db1d693f4b..d9f72bcbd3 100644
--- a/docs/documentation/integrations.md
+++ b/docs/documentation/integrations.md
@@ -46,7 +46,7 @@ Usually, this involves a few steps:
If your new integration is an incoming webhook integration, you can generate
the screenshot using `tools/generate-integration-docs-screenshot`:
- ```sh
+ ```bash
./tools/generate-integration-docs-screenshot --integration integrationname
```
@@ -136,7 +136,7 @@ Here are a few common macros used to document Zulip's integrations:
* `{!webhook-url-with-bot-email.md!}` - Used in certain non-webhook integrations
to generate URLs of the form:
- ```
+ ```text
https://bot_email:bot_api_key@yourZulipDomain.zulipchat.com/api/v1/external/beanstalk
```
diff --git a/docs/documentation/openapi.md b/docs/documentation/openapi.md
index a50ffa3a4a..701b1e15de 100644
--- a/docs/documentation/openapi.md
+++ b/docs/documentation/openapi.md
@@ -40,7 +40,7 @@ types of authentication, and configure other settings. Once defined,
information in this section rarely changes.
For example, the `swagger` and `info` objects look like this:
-```
+```yaml
# Basic Swagger UI info
openapi: 3.0.1
info:
@@ -79,7 +79,7 @@ expects a GET request with one
Basic authentication, and returns a JSON response containing `msg`,
`result`, and `presence` values.
-```
+```yaml
/users/{user}/presence:
get:
description: Get presence data for another user.
@@ -119,7 +119,7 @@ contains schemas referenced by other objects. For example,
contains three required parameters. Two are strings, and one is an
integer.
-```
+```yaml
MessageResponse:
type: object
required:
@@ -183,7 +183,7 @@ correct.
### Examples:
-```
+```yaml
Description: |
This description has multiple lines.
Sometimes descriptions can go on for
diff --git a/docs/documentation/overview.md b/docs/documentation/overview.md
index 39bc3b7ed8..9fd89ae9ae 100644
--- a/docs/documentation/overview.md
+++ b/docs/documentation/overview.md
@@ -43,7 +43,7 @@ your changes), the dependencies are automatically installed as part of
Zulip development environment provisioning, and you can build the
documentation using:
-```
+```bash
./tools/build-docs
```
diff --git a/docs/documentation/user.md b/docs/documentation/user.md
index 21471e6b20..d021cf3afd 100644
--- a/docs/documentation/user.md
+++ b/docs/documentation/user.md
@@ -210,7 +210,7 @@ instructions. For instance, it may address a common problem users may
encounter while following the instructions, or point to an option for power
users.
-```
+```md
!!! tip ""
If you've forgotten your password, see the
[Change your password](/help/change-your-password) page for
@@ -220,7 +220,7 @@ users.
A **warning** is a note on what happens when there is some kind of problem.
Tips are more common than warnings.
-```
+```md
!!! warn ""
**Note:** If you attempt to input a nonexistent stream name, an error
message will appear.
@@ -237,14 +237,16 @@ design to easily show the instructions for different
[platforms](https://zulip.com/help/logging-out) in user docs,
languages in API docs, etc. To create a tab switcher, write:
- {start_tabs}
- {tab|desktop-web}
- # First tab's content
- {tab|ios}
- # Second tab's content
- {tab|android}
- # Third tab's content
- {end_tabs}
+```md
+{start_tabs}
+{tab|desktop-web}
+# First tab's content
+{tab|ios}
+# Second tab's content
+{tab|android}
+# Third tab's content
+{end_tabs}
+```
The tab identifiers (e.g. `desktop-web` above) and their mappings to
the tabs' labels are declared in
diff --git a/docs/git/cloning.md b/docs/git/cloning.md
index 0dd1687fd5..b464c45e67 100644
--- a/docs/git/cloning.md
+++ b/docs/git/cloning.md
@@ -20,7 +20,7 @@ the main server app, this is [zulip/zulip][github-zulip-zulip].
Next, clone your fork to your local machine:
-```
+```console
$ git clone --config pull.rebase https://github.com/YOUR_USERNAME/zulip.git
Cloning into 'zulip'
remote: Counting objects: 86768, done.
@@ -56,7 +56,7 @@ your fork.
First, show the currently configured remote repository:
-```
+```console
$ git remote -v
origin git@github.com:YOUR_USERNAME/zulip.git (fetch)
origin git@github.com:YOUR_USERNAME/zulip.git (push)
@@ -68,7 +68,7 @@ have the upstream remote repository configured. For example, when you clone
the remote repository `zulip` and you see the following output from `git remote
-v`:
-```
+```console
origin git@github.com:YOUR_USERNAME/zulip.git (fetch)
origin git@github.com:YOUR_USERNAME/zulip.git (push)
zulip https://github.com/zulip/zulip.git (fetch)
@@ -78,13 +78,13 @@ zulip https://github.com/zulip/zulip.git (push)
If your client hasn't automatically configured a remote for zulip/zulip, you'll
need to with:
-```
+```console
$ git remote add -f upstream https://github.com/zulip/zulip.git
```
Finally, confirm that the new remote repository, upstream, has been configured:
-```
+```console
$ git remote -v
origin git@github.com:YOUR_USERNAME/zulip.git (fetch)
origin git@github.com:YOUR_USERNAME/zulip.git (push)
diff --git a/docs/git/collaborate.md b/docs/git/collaborate.md
index 4c00ec1d88..4d2ec0a100 100644
--- a/docs/git/collaborate.md
+++ b/docs/git/collaborate.md
@@ -6,7 +6,7 @@ What happens when you would like to collaborate with another contributor and
they have work-in-progress on their own fork of Zulip? No problem! Just add
their fork as a remote and pull their changes.
-```
+```console
$ git remote add <username> https://github.com/<username>/zulip.git
$ git fetch <username>
```
@@ -15,12 +15,12 @@ Now you can check out their branch just like you would any other. You can name
the branch anything you want, but using both the username and branch name will
help you keep things organized.
-```
+```console
$ git checkout -b <username>/<branchname>
```
You can choose to rename the branch if you prefer:
-```
+```bash
git checkout -b <new-branch-name> <username>/<branchname>
```
@@ -34,13 +34,13 @@ specific to GitHub rather than Git.
First, fetch and create a branch for the pull request, replacing *ID* and
*BRANCHNAME* with the ID of the pull request and your desired branch name:
-```
+```console
$ git fetch upstream pull/ID/head:BRANCHNAME
```
Now switch to the branch:
-```
+```console
$ git checkout BRANCHNAME
```
@@ -48,7 +48,7 @@ Now you work on this branch as you would any other.
Note: you can use the scripts provided in the tools/ directory to fetch pull
requests. You can read more about what they do [here][tools-PR].
-```
+```bash
tools/fetch-rebase-pull-request
tools/fetch-pull-request
```
diff --git a/docs/git/pull-requests.md b/docs/git/pull-requests.md
index d50b6cb9f4..2c04dfe47a 100644
--- a/docs/git/pull-requests.md
+++ b/docs/git/pull-requests.md
@@ -46,7 +46,7 @@ fork up to date][keep-up-to-date] for details.
Here's an example (you would replace *issue-123* with the name of your feature branch):
-```
+```console
$ git checkout issue-123
Switched to branch 'issue-123'
@@ -68,7 +68,7 @@ Applying: troubleshooting tip about provisioning
Once you've updated your local feature branch, push the changes to GitHub:
-```
+```console
$ git push origin issue-123
Counting objects: 6, done.
Delta compression using up to 4 threads.
@@ -83,7 +83,7 @@ To git@github.com:christi3k/zulip.git
If your push is rejected with error **failed to push some refs** then you need
to prefix the name of your branch with a `+`:
-```
+```console
$ git push origin +issue-123
Counting objects: 6, done.
Delta compression using up to 4 threads.
diff --git a/docs/git/reviewing.md b/docs/git/reviewing.md
index 6c102e24e7..def8033c6b 100644
--- a/docs/git/reviewing.md
+++ b/docs/git/reviewing.md
@@ -8,20 +8,20 @@ on reviewing changes by other contributors.
Display changes between index and working tree (what is not yet staged for commit):
-```
+```console
$ git diff
```
Display changes between index and last commit (what you have staged for commit):
-```
+```console
$ git diff --cached
```
Display changes in working tree since last commit (changes that are staged as
well as ones that are not):
-```
+```console
$ git diff HEAD
```
@@ -31,13 +31,13 @@ Use any git-ref to compare changes between two commits on the current branch.
Display changes between commit before last and last commit:
-```
+```console
$ git diff HEAD^ HEAD
```
Display changes between two commits using their hashes:
-```
+```console
$ git diff e2f404c 7977169
```
@@ -45,19 +45,19 @@ $ git diff e2f404c 7977169
Display changes between tip of topic branch and tip of master branch:
-```
+```console
$ git diff topic master
```
Display changes that have occurred on master branch since topic branch was created:
-```
+```console
$ git diff topic...master
```
Display changes you've committed so far since creating a branch from upstream/master:
-```
+```console
$ git diff upstream/master...HEAD
```
diff --git a/docs/git/troubleshooting.md b/docs/git/troubleshooting.md
index 8757c24b94..c477471cbf 100644
--- a/docs/git/troubleshooting.md
+++ b/docs/git/troubleshooting.md
@@ -26,7 +26,7 @@ A merge commit is usually created when you've run `git pull` or `git merge`.
You'll know you're creating a merge commit if you're prompted for a commit
message and the default is something like this:
-```
+```text
Merge branch 'master' of https://github.com/zulip/zulip
# Please enter a commit message to explain why this merge is necessary,
@@ -38,7 +38,7 @@ Merge branch 'master' of https://github.com/zulip/zulip
And the first entry for `git log` will show something like:
-```
+```console
commit e5f8211a565a5a5448b93e98ed56415255546f94
Merge: 13bea0e e0c10ed
Author: Christie Koehler
@@ -52,7 +52,7 @@ Some graphical Git clients may also create merge commits.
To undo a merge commit, first run `git reflog` to identify the commit you want
to roll back to:
-```
+```console
$ git reflog
e5f8211 HEAD@{0}: pull upstream master: Merge made by the 'recursive' strategy.
@@ -67,7 +67,7 @@ by `git pull` and `13bea0e HEAD@{1}:` is the last commit I made before running
Once you'd identified the ref you want to revert to, you can do so with [git
reset][gitbook-reset]:
-```
+```console
$ git reset --hard 13bea0e
HEAD is now at 13bea0e test commit for docs.
```
@@ -87,7 +87,7 @@ just keep in mind that this changes as you run git commands.
Now when you look at the output of `git reflog`, you should see that the tip of your branch points to your
last commit `13bea0e` before the merge:
-```
+```console
$ git reflog
13bea0e HEAD@{2}: reset: moving to HEAD@{1}
@@ -97,7 +97,7 @@ e5f8211 HEAD@{3}: pull upstream master: Merge made by the 'recursive' strategy.
And the first entry `git log` shows is this:
-```
+```console
commit 13bea0e40197b1670e927a9eb05aaf50df9e8277
Author: Christie Koehler
Date: Mon Oct 10 13:25:38 2016 -0700
@@ -115,14 +115,14 @@ with `git cherry-pick` ([docs][gitbook-git-cherry-pick]).
For example, let's say you just committed "some work" and your `git log` looks
like this:
-```
+```console
* 67aea58 (HEAD -> master) some work
* 13bea0e test commit for docs.
```
You then mistakenly run `git reset --hard 13bea0e`:
-```
+```console
$ git reset --hard 13bea0e
HEAD is now at 13bea0e test commit for docs.
@@ -134,7 +134,7 @@ And then realize you actually needed to keep commit 67aea58. First, use `git
reflog` to confirm the commit you want to restore, and then run `git
cherry-pick <commit>`:
-```
+```console
$ git reflog
13bea0e HEAD@{0}: reset: moving to 13bea0e
67aea58 HEAD@{1}: commit: some work
@@ -160,7 +160,7 @@ change to a part of the file I also want to change. When I try to bring my
branch up to date with `git fetch` and then `git rebase upstream/master`, I see
the following:
-```
+```console
First, rewinding head to replay your work on top of it...
Applying: test change for docs
Using index info to reconstruct a base tree...
@@ -182,7 +182,7 @@ after bringing in the new commits from upstream/master.
Running `git status` also gives me some information:
-```
+```console
rebase in progress; onto 5ae56e6
You are currently rebasing branch 'docs-test' on '5ae56e6'.
(fix conflicts and then run "git rebase --continue")
@@ -204,7 +204,7 @@ and `>>>>>>>`) markers to indicate where in files there are conflicts.
Tip: You can see recent changes made to a file by running the following
commands:
-```
+```bash
git fetch upstream
git log -p upstream/master -- /path/to/file
```
@@ -215,7 +215,7 @@ you are rebasing.
Once you've done that, save the file(s), stage them with `git add` and then
continue the rebase with `git rebase --continue`:
-```
+```console
$ git add README.md
$ git rebase --continue
@@ -241,7 +241,7 @@ where you committed them.
So, before you stop working for the day, or before you switch computers, push
all of your commits to GitHub with `git push`:
-```
+```console
$ git push origin
```
@@ -254,7 +254,7 @@ But if you're switching to another computer on which you have already cloned
Zulip, you need to update your local Git database with new refs from your
GitHub fork. You do this with `git fetch`:
-```
+```console
$ git fetch
```
@@ -262,7 +262,7 @@ Ideally you should do this before you have made any commits on the same branch
on the second computer. Then you can `git merge` on whichever branch you need
to update:
-```
+```console
$ git checkout <branchname>
Switched to branch '<branchname>'
diff --git a/docs/git/using.md b/docs/git/using.md
index 9696b85303..f9b2fc56af 100644
--- a/docs/git/using.md
+++ b/docs/git/using.md
@@ -8,7 +8,7 @@ determine the currently checked out branch several ways.
One way is with [git status][gitbook-git-status]:
-```
+```console
$ git status
On branch issue-demo
nothing to commit, working directory clean
@@ -17,7 +17,7 @@ nothing to commit, working directory clean
Another is with [git branch][gitbook-git-branch] which will display all local
branches, with a star next to the current branch:
-```
+```console
$ git branch
* issue-demo
master
@@ -26,7 +26,7 @@ $ git branch
To see even more information about your branches, including remote branches,
use `git branch -vva`:
-```
+```console
$ git branch -vva
* issue-123 517468b troubleshooting tip about provisioning
master f0eaee6 [origin/master] bug: Fix traceback in get_missed_message_token_from_address().
@@ -53,14 +53,14 @@ and then `git rebase`.
First, [fetch][gitbook-fetch] changes from Zulip's upstream repository you
configured in the step above:
-```
+```console
$ git fetch upstream
```
Next, check out your `master` branch and [rebase][gitbook-git-rebase] it on top
of `upstream/master`:
-```
+```console
$ git checkout master
Switched to branch 'master'
@@ -74,7 +74,7 @@ history clean and readable.
When you're ready, [push your changes][github-help-push] to your remote fork.
Make sure you're in branch `master` and then run `git push`:
-```
+```console
$ git checkout master
$ git push origin master
```
@@ -83,7 +83,7 @@ You can keep any branch up to date using this method. If you're working on a
feature branch (see next section), which we recommend, you would change the
command slightly, using the name of your `feature-branch` rather than `master`:
-```
+```console
$ git checkout feature-branch
Switched to branch 'feature-branch'
@@ -105,7 +105,7 @@ how][zulip-git-guide-up-to-date]).
Next, from your master branch, create a new tracking branch, providing a
descriptive name for your feature branch:
-```
+```console
$ git checkout master
Switched to branch 'master'
@@ -116,7 +116,7 @@ Switched to a new branch 'issue-1755-fail2ban'
Alternatively, you can create a new branch explicitly based off
`upstream/master`:
-```
+```console
$ git checkout -b issue-1755-fail2ban upstream/master
Switched to a new branch 'issue-1755-fail2ban'
```
@@ -146,7 +146,7 @@ staged, use `git status`.
If you have no changes in the working directory, you'll see something like
this:
-```
+```console
$ git status
On branch issue-123
nothing to commit, working directory clean
@@ -154,7 +154,7 @@ nothing to commit, working directory clean
If you have unstaged changes, you'll see something like this:
-```
+```console
On branch issue-123
Untracked files:
(use "git add ..." to include in what will be committed)
@@ -173,7 +173,7 @@ add` is all about staging the changes you want to commit, you use it to add
Continuing our example from above, after we run `git add newfile.py`, we'll see
the following from `git status`:
-```
+```console
On branch issue-123
Changes to be committed:
(use "git reset HEAD ..." to unstage)
@@ -193,7 +193,7 @@ You can also stage changes using your graphical Git client.
If you stage a file, you can undo it with `git reset HEAD <filename>`. Here's
an example where we stage a file `test3.txt` and then unstage it:
-```
+```console
$ git add test3.txt
On branch issue-1234
Changes to be committed:
@@ -222,7 +222,7 @@ stage the file for deletion and leave it in your working directory.
To stage a file for deletion and **remove** it from your working directory, use
`git rm <filename>`:
-```
+```console
$ git rm test.txt
rm 'test.txt'
@@ -240,7 +240,7 @@ ls: No such file or directory
To stage a file for deletion and **keep** it in your working directory, use
`git rm --cached <filename>`:
-```
+```console
$ git rm --cached test2.txt
rm 'test2.txt'
@@ -258,7 +258,7 @@ test2.txt
If you stage a file for deletion with the `--cached` option, and haven't yet
run `git commit`, you can undo it with `git reset HEAD <filename>`:
-```
+```console
$ git reset HEAD test2.txt
```
@@ -273,7 +273,7 @@ with `git commit -m "My commit message."` to include a commit message.
Here's an example of committing with the `-m` for a one-line commit message:
-```
+```console
$ git commit -m "Add a test commit for docs."
[issue-123 173e17a] Add a test commit for docs.
1 file changed, 1 insertion(+)
@@ -295,7 +295,7 @@ messages][zulip-rtd-commit-messages] for details.
Here's an example of a longer commit message that will be used for a pull request:
-```
+```text
Integrate Fail2Ban.
Updates Zulip logging to put an unambiguous entry into the logs such
@@ -337,7 +337,7 @@ machine and allows others to follow your progress. It also allows you to
Pushing to a feature branch is just like pushing to master:
-```
+```console
$ git push origin
Counting objects: 6, done.
Delta compression using up to 4 threads.
@@ -367,7 +367,7 @@ your commit history be able to clearly understand your progression of work?
On the command line, you can use the `git log` command to display an easy to
read list of your commits:
-```
+```console
$ git log --all --graph --oneline --decorate
* 4f8d75d (HEAD -> 1754-docs-add-git-workflow) docs: Add details about configuring Travis CI.
@@ -404,7 +404,7 @@ Any time you alter history for commits you have already pushed to GitHub,
you'll need to prefix the name of your branch with a `+`. Without this, your
updates will be rejected with a message such as:
-```
+```console
$ git push origin 1754-docs-add-git-workflow
To git@github.com:christi3k/zulip.git
! [rejected] 1754-docs-add-git-workflow -> 1754-docs-add-git-workflow (non-fast-forward)
@@ -418,7 +418,7 @@ hint: See the 'Note about fast-forwards' in 'git push --help' for details.
Re-running the command with `+` allows the push to continue by
re-writing the history for the remote repository:
-```
+```console
$ git push origin +1754-docs-add-git-workflow
Counting objects: 12, done.
Delta compression using up to 4 threads.
diff --git a/docs/git/zulip-tools.md b/docs/git/zulip-tools.md
index da3a156895..dcf6ca58c6 100644
--- a/docs/git/zulip-tools.md
+++ b/docs/git/zulip-tools.md
@@ -16,7 +16,7 @@ notices or warnings it displays.
It's simple to use. Make sure you're in the clone of zulip and run the following:
-```
+```console
$ ./tools/setup-git-repo
```
@@ -24,7 +24,7 @@ The script doesn't produce any output if successful. To check that the hook has
been installed, print a directory listing for `.git/hooks` and you should see
something similar to:
-```
+```console
$ ls -l .git/hooks
pre-commit -> ../../tools/pre-commit
```
@@ -47,7 +47,7 @@ First, make sure you are working in a branch you want to move (in this
example, we'll use the local `master` branch). Then run the script
with the ID number of the pull request as the first argument.
-```
+```console
$ git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
@@ -74,7 +74,7 @@ changes from upstream/master with `git rebase`.
Run the script with the ID number of the pull request as the first argument.
-```
+```console
$ tools/fetch-rebase-pull-request 1913
+ request_id=1913
+ git fetch upstream pull/1913/head
@@ -101,7 +101,7 @@ exactly the same repository state as the commit author had.
Run the script with the ID number of the pull request as the first argument.
-```
+```console
$ tools/fetch-pull-request 5156
+ git diff-index --quiet HEAD
+ request_id=5156
@@ -155,7 +155,7 @@ arguments for default behavior. Since removing review branches can inadvertently
feature branches whose names are like `review-*`, it is not done by default. To
use it, run `tools/clean-branches --reviews`.
-```
+```console
$ tools/clean-branches --reviews
Deleting local branch review-original-5156 (was 5a1e982)
```
regenerate the file. *Important*: don't delete the `yarn.lock` file. Check out the
latest one from origin/master so that yarn knows the previous asset versions.
Run the following commands:
-```
+```bash
git checkout origin/master -- yarn.lock
yarn install
git add yarn.lock
diff --git a/docs/overview/architecture-overview.md b/docs/overview/architecture-overview.md
index 534e9e1296..a209f8c9a6 100644
--- a/docs/overview/architecture-overview.md
+++ b/docs/overview/architecture-overview.md
@@ -181,8 +181,10 @@ Redis is configured in `zulip/puppet/zulip/files/redis` and it's a
pretty standard configuration except for the last line, which turns off
persistence:
- # Zulip-specific configuration: disable saving to disk.
- save ""
+```text
+# Zulip-specific configuration: disable saving to disk.
+save ""
+```
People often wonder if we could replace memcached with Redis (or
replace RabbitMQ with Redis, with some loss of functionality).
diff --git a/docs/overview/changelog.md b/docs/overview/changelog.md
index 08e8f2d144..d43a5038b2 100644
--- a/docs/overview/changelog.md
+++ b/docs/overview/changelog.md
@@ -418,7 +418,7 @@ up-to-date list of raw changes.
that will fix this bug. The new migration will fail if any such
duplicate accounts already exist; you can check whether this will
happen by running the following in a [management shell][manage-shell]:
- ```
+ ```python
from django.db.models import Count
from django.db.models.functions import Lower
UserProfile.objects.all().annotate(email_lower=Lower("delivery_email")) \
    .values('realm_id', 'email_lower').annotate(Count('id')).filter(id__count__gte=2)
diff --git a/docs/production/authentication-methods.md b/docs/production/authentication-methods.md
index 5f80ef70b9..9b50b66452 100644
--- a/docs/production/authentication-methods.md
+++ b/docs/production/authentication-methods.md
@@ -98,7 +98,7 @@ In either configuration, you will need to do the following:
* Set `AUTH_LDAP_REVERSE_EMAIL_SEARCH` to a query that will find
an LDAP user given their email address (i.e. a search by
`LDAP_EMAIL_ATTR`). For example:
- ```
+ ```python
AUTH_LDAP_REVERSE_EMAIL_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
ldap.SCOPE_SUBTREE, "(mail=%(email)s)")
```
@@ -107,7 +107,7 @@ In either configuration, you will need to do the following:
You can quickly test whether your configuration works by running:
-```
+```bash
/home/zulip/deployments/current/manage.py query_ldap username
```
@@ -119,7 +119,7 @@ email address, if it isn't the same as the "Zulip username").
of the following configurations:
* To access by Active Directory username:
- ```
+ ```python
AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)")
AUTH_LDAP_REVERSE_EMAIL_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
@@ -128,7 +128,7 @@ of the following configurations:
```
* To access by Active Directory email address:
- ```
+ ```python
AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
ldap.SCOPE_SUBTREE, "(mail=%(user)s)")
AUTH_LDAP_REVERSE_EMAIL_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
@@ -157,7 +157,7 @@ Zulip can automatically synchronize data declared in
`AUTH_LDAP_USER_ATTR_MAP` from LDAP into Zulip, via the following
management command:
-```
+```bash
/home/zulip/deployments/current/manage.py sync_ldap_user_data
```
@@ -270,7 +270,7 @@ the fields that would be useful to sync from your LDAP databases.
### Multiple LDAP searches
To do the union of multiple LDAP searches, use `LDAPSearchUnion`. For example:
-```
+```python
AUTH_LDAP_USER_SEARCH = LDAPSearchUnion(
LDAPSearch("ou=users,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=%(user)s)"),
LDAPSearch("ou=otherusers,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=%(user)s)"),
@@ -300,7 +300,7 @@ For the root subdomain, `www` in the list will work, or any other of
For example, with `org_membership` set to `department`, a user with
the following attributes will have access to the root and `engineering` subdomains:
-```
+```text
...
department: engineering
department: www
@@ -428,7 +428,7 @@ it as follows:
trust, which consists of multiple certificates.
4. Set the proper permissions on these files and directories:
- ```
+ ```bash
chown -R zulip.zulip /etc/zulip/saml/
find /etc/zulip/saml/ -type f -exec chmod 644 -- {} +
chmod 640 /etc/zulip/saml/zulip-private-key.key
@@ -492,7 +492,7 @@ For example, with `attr_org_membership` set to `member`, a user with
the following attribute in their `AttributeStatement` will have access
to the root and `engineering` subdomains:
-```
+```xml
www
@@ -525,7 +525,7 @@ straightforward way to deploy that SSO solution with Zulip.
2. Edit `/etc/zulip/zulip.conf` and change the `puppet_classes` line to read:
- ```
+ ```ini
puppet_classes = zulip::profile::standalone, zulip::apache_sso
```
@@ -543,7 +543,7 @@ straightforward way to deploy that SSO solution with Zulip.
using the `htpasswd` example configuration and demonstrate that
working end-to-end, before returning later to configure your SSO
solution. You can do that with the following steps:
- ```
+ ```bash
/home/zulip/deployments/current/scripts/restart-server
cd /etc/apache2/sites-available/
cp zulip-sso.example zulip-sso.conf
@@ -637,7 +637,7 @@ domain for your server).
`/etc/zulip/apple-auth-key.p8`. Be sure to set
permissions correctly:
- ```
+ ```bash
chown zulip:zulip /etc/zulip/apple-auth-key.p8
chmod 640 /etc/zulip/apple-auth-key.p8
```
diff --git a/docs/production/deployment.md b/docs/production/deployment.md
index 6f6cba0e48..c07b61a58c 100644
--- a/docs/production/deployment.md
+++ b/docs/production/deployment.md
@@ -11,7 +11,7 @@ something more complicated. This page documents the options for doing so.
To install a development version of Zulip from Git, just clone the Git
repository from GitHub:
-```
+```bash
# First, install Git if you don't have it installed already
sudo apt install git
git clone https://github.com/zulip/zulip.git zulip-server-git
@@ -86,7 +86,7 @@ configuration to be completely modular.
For example, to install a Zulip Redis server on a machine, you can run
the following after unpacking a Zulip production release tarball:
-```
+```bash
env PUPPET_CLASSES=zulip::profile::redis ./scripts/setup/install
```
@@ -119,7 +119,7 @@ Follow the [standard instructions](../production/install.md), with one
change. When running the installer, pass the `--no-init-db`
flag, e.g.:
-```
+```bash
sudo -s # If not already root
./zulip-server-*/scripts/setup/install --certbot \
--email=YOUR_EMAIL --hostname=YOUR_HOSTNAME \
@@ -130,7 +130,7 @@ The script also installs and starts PostgreSQL on the server by
default. We don't need it, so run the following command to
stop and disable the local PostgreSQL server.
-```
+```bash
sudo service postgresql stop
sudo update-rc.d postgresql disable
```
@@ -167,13 +167,13 @@ If you're using password authentication, you should specify the
password of the `zulip` user in /etc/zulip/zulip-secrets.conf as
follows:
-```
+```ini
postgres_password = abcd1234
```
Now complete the installation by running the following commands.
-```
+```bash
# Ask Zulip installer to initialize the PostgreSQL database.
su zulip -c '/home/zulip/deployments/current/scripts/setup/initialize-database'
@@ -191,7 +191,7 @@ configure that as follows:
with `/home/zulip/deployments/current/scripts/restart-server`.
1. Add the following block to `/etc/zulip/zulip.conf`:
- ```
+ ```ini
[application_server]
nginx_listen_port = 12345
```
@@ -219,7 +219,7 @@ To use Smokescreen:
1. Add `, zulip::profile::smokescreen` to the list of `puppet_classes`
in `/etc/zulip/zulip.conf`. A typical value after this change is:
- ```
+ ```ini
puppet_classes = zulip::profile::standalone, zulip::profile::smokescreen
```
@@ -231,7 +231,7 @@ To use Smokescreen:
1. Add the following block to `/etc/zulip/zulip.conf`, substituting in
your proxy's hostname/IP and port:
- ```
+ ```ini
[http_proxy]
host = 127.0.0.1
port = 4750
@@ -280,7 +280,7 @@ HTTP as follows:
1. Add the following block to `/etc/zulip/zulip.conf`:
- ```
+ ```ini
[application_server]
http_only = true
```
For `nginx` configuration, there are two things you need to set up:
`/etc/nginx/sites-available`) for the Zulip app. The following
example is a good starting point:
-```
+```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
@@ -341,7 +341,7 @@ make the following changes in two configuration files.
1. Follow the instructions for [Configure Zulip to allow HTTP](#configuring-zulip-to-allow-http).
2. Add the following to `/etc/zulip/settings.py`:
- ```
+ ```python
EXTERNAL_HOST = 'zulip.example.com'
ALLOWED_HOSTS = ['zulip.example.com', '127.0.0.1']
USE_X_FORWARDED_HOST = True
@@ -357,7 +357,7 @@ make the following changes in two configuration files.
and then run `a2ensite zulip.example.com && systemctl reload
apache2`):
- ```
+ ```apache
ServerName zulip.example.com
RewriteEngine On
@@ -398,7 +398,7 @@ make the following changes in two configuration files.
If you want to use HAProxy with Zulip, this `backend` config is a good
place to start.
-```
+```text
backend zulip
mode http
balance leastconn
diff --git a/docs/production/email-gateway.md b/docs/production/email-gateway.md
index 893a3479ae..007d8c2adf 100644
--- a/docs/production/email-gateway.md
+++ b/docs/production/email-gateway.md
@@ -57,7 +57,7 @@ using an [HTTP reverse proxy][reverse-proxy]).
configuring email for `emaildomain.example.com` to be processed by
`hostname.example.com`. You can check your work using this command:
- ```
+ ```console
$ dig +short emaildomain.example.com -t MX
1 hostname.example.com
```
@@ -66,7 +66,7 @@ using an [HTTP reverse proxy][reverse-proxy]).
1. Add `, zulip::postfix_localmail` to `puppet_classes` in
`/etc/zulip/zulip.conf`. A typical value after this change is:
- ```
+ ```ini
puppet_classes = zulip::profile::standalone, zulip::postfix_localmail
```
@@ -74,7 +74,7 @@ using an [HTTP reverse proxy][reverse-proxy]).
`emaildomain.example.com`, add a section to `/etc/zulip/zulip.conf`
on your Zulip server like this:
- ```
+ ```ini
[postfix]
mailname = emaildomain.example.com
```
@@ -117,7 +117,7 @@ Congratulations! The integration should be fully operational.
* Password in `/etc/zulip/zulip-secrets.conf` as `email_gateway_password`.
1. Install a cron job to poll the inbox every minute for new messages:
- ```
+ ```bash
cd /home/zulip/deployments/current/
sudo cp puppet/zulip/files/cron.d/email-mirror /etc/cron.d/
```
diff --git a/docs/production/email.md b/docs/production/email.md
index 19b57343a2..f919882906 100644
--- a/docs/production/email.md
+++ b/docs/production/email.md
@@ -78,7 +78,7 @@ configuration on the system that forwards email sent locally into your
corporate email system), you will likely need to use something like
these setting values:
-```
+```python
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
EMAIL_USE_TLS = False
@@ -122,7 +122,7 @@ can log them to a file instead.
To do so, add these lines to `/etc/zulip/settings.py`:
-```
+```python
EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
EMAIL_FILE_PATH = '/var/log/zulip/emails'
```
@@ -137,7 +137,7 @@ later set up a real SMTP provider!
You can quickly test your outgoing email configuration using:
-```
+```bash
su zulip -c '/home/zulip/deployments/current/manage.py send_test_email user@example.com'
```
diff --git a/docs/production/expensive-migrations.md b/docs/production/expensive-migrations.md
index 71e86f85c1..eb0f81b0fb 100644
--- a/docs/production/expensive-migrations.md
+++ b/docs/production/expensive-migrations.md
@@ -19,42 +19,46 @@ can run them manually before starting the upgrade:
PostgreSQL database.
3. In the PostgreSQL shell, run the following commands:
- CREATE INDEX CONCURRENTLY
- zerver_usermessage_is_private_message_id
- ON zerver_usermessage (user_profile_id, message_id)
- WHERE (flags & 2048) != 0;
+ ```postgresql
+ CREATE INDEX CONCURRENTLY
+ zerver_usermessage_is_private_message_id
+ ON zerver_usermessage (user_profile_id, message_id)
+ WHERE (flags & 2048) != 0;
- CREATE INDEX CONCURRENTLY
- zerver_usermessage_active_mobile_push_notification_id
- ON zerver_usermessage (user_profile_id, message_id)
- WHERE (flags & 4096) != 0;
+ CREATE INDEX CONCURRENTLY
+ zerver_usermessage_active_mobile_push_notification_id
+ ON zerver_usermessage (user_profile_id, message_id)
+ WHERE (flags & 4096) != 0;
+ ```
-(These first migrations are the only new ones in Zulip 1.9).
+ (These first migrations are the only new ones in Zulip 1.9).
- CREATE INDEX CONCURRENTLY
- zerver_usermessage_mentioned_message_id
- ON zerver_usermessage (user_profile_id, message_id)
- WHERE (flags & 8) != 0;
+ ```postgresql
+ CREATE INDEX CONCURRENTLY
+ zerver_usermessage_mentioned_message_id
+ ON zerver_usermessage (user_profile_id, message_id)
+ WHERE (flags & 8) != 0;
- CREATE INDEX CONCURRENTLY
- zerver_usermessage_starred_message_id
- ON zerver_usermessage (user_profile_id, message_id)
- WHERE (flags & 2) != 0;
+ CREATE INDEX CONCURRENTLY
+ zerver_usermessage_starred_message_id
+ ON zerver_usermessage (user_profile_id, message_id)
+ WHERE (flags & 2) != 0;
- CREATE INDEX CONCURRENTLY
- zerver_usermessage_has_alert_word_message_id
- ON zerver_usermessage (user_profile_id, message_id)
- WHERE (flags & 512) != 0;
+ CREATE INDEX CONCURRENTLY
+ zerver_usermessage_has_alert_word_message_id
+ ON zerver_usermessage (user_profile_id, message_id)
+ WHERE (flags & 512) != 0;
- CREATE INDEX CONCURRENTLY
- zerver_usermessage_wildcard_mentioned_message_id
- ON zerver_usermessage (user_profile_id, message_id)
- WHERE (flags & 8) != 0 OR (flags & 16) != 0;
+ CREATE INDEX CONCURRENTLY
+ zerver_usermessage_wildcard_mentioned_message_id
+ ON zerver_usermessage (user_profile_id, message_id)
+ WHERE (flags & 8) != 0 OR (flags & 16) != 0;
- CREATE INDEX CONCURRENTLY
- zerver_usermessage_unread_message_id
- ON zerver_usermessage (user_profile_id, message_id)
- WHERE (flags & 1) = 0;
+ CREATE INDEX CONCURRENTLY
+ zerver_usermessage_unread_message_id
+ ON zerver_usermessage (user_profile_id, message_id)
+ WHERE (flags & 1) = 0;
+ ```
These will take some time to run, during which the server will
continue to serve user traffic as usual with no disruption. Once they
diff --git a/docs/production/export-and-import.md b/docs/production/export-and-import.md
index 436e53ef71..4c31e6bc29 100644
--- a/docs/production/export-and-import.md
+++ b/docs/production/export-and-import.md
@@ -57,7 +57,7 @@ service (or back):
The Zulip server has a built-in backup tool:
-```
+```bash
# As the zulip user
/home/zulip/deployments/current/manage.py backup
# Or as root
@@ -84,7 +84,7 @@ First, [install a new Zulip server through Step 3][install-server]
with the same version of both the base OS and Zulip from your previous
installation. Then, run as root:
-```
+```bash
/home/zulip/deployments/current/scripts/setup/restore-backup /path/to/backup
```
@@ -113,7 +113,7 @@ tarball: `postgres-version`, `os-version`, and `zulip-version`. The
following command may be useful for viewing these files without
extracting the entire archive.
-```
+```bash
tar -Oaxf /path/to/archive/zulip-backup-rest.tar.gz zulip-backup/zulip-version
```
@@ -167,7 +167,7 @@ daily incremental backups using
storing the backups, edit `/etc/zulip/zulip-secrets.conf` on the
PostgreSQL server to add:
- ```
+ ```ini
s3_backups_key = # aws public key
s3_backups_secret_key = # aws secret key
s3_backups_bucket = # name of S3 backup
@@ -280,7 +280,7 @@ of the lines for the appropriate option.
Log in to a shell on your Zulip server as the `zulip` user. Run the
following commands:
-```
+```bash
cd /home/zulip/deployments/current
# ./scripts/stop-server
# export DEACTIVATE_FLAG="--deactivate" # Deactivates the organization
@@ -307,7 +307,7 @@ archive of all the organization's uploaded files.
master][upgrade-zulip-from-git], since we run master on
Zulip Cloud:
- ```
+ ```bash
/home/zulip/deployments/current/scripts/upgrade-zulip-from-git master
```
@@ -364,7 +364,7 @@ about doing so:
following commands, replacing the filename with the path to your data
export tarball:
-```
+```bash
cd ~
tar -xf /path/to/export/file/zulip-export-zcmpxfm6.tar.gz
cd /home/zulip/deployments/current
@@ -386,7 +386,7 @@ custom subdomain, e.g. if you already have an existing organization on the
root domain. Replace the last three lines above with the following, after replacing
`` with the desired subdomain.
-```
+```bash
./manage.py import ~/zulip-export-zcmpxfm6
./manage.py reactivate_realm -r # Reactivates the organization
```
@@ -403,12 +403,12 @@ You can use the `./manage.py send_password_reset_email` command to
send password reset emails to your users. We
recommend starting with sending one to yourself for testing:
-```
+```bash
./manage.py send_password_reset_email -u username@example.com
```
and then once you're ready, you can email them to everyone using e.g.
-```
+```bash
./manage.py send_password_reset_email -r '' --all-users
```
@@ -426,7 +426,7 @@ organization using the following procedure:
with the subdomain if [you are hosting the organization on a
subdomain](../production/multiple-organizations.md):
-```
+```python
realm = Realm.objects.get(string_id="")
realm.delete()
```
@@ -434,7 +434,7 @@ realm.delete()
The output contains details on the objects deleted from the database.
Now, exit the management shell and run this to clear Zulip's cache:
-```
+```bash
/home/zulip/deployments/current/scripts/setup/flush-memcached
```
@@ -444,7 +444,7 @@ can additionally delete all file uploads, avatars, and custom emoji on
a Zulip server (across **all organizations**) with the following
command:
-```
+```bash
rm -rf /home/zulip/uploads/*/*
```
@@ -454,7 +454,7 @@ in the management shell before deleting the organization from the
database (this will be `2` for the first organization created on a
Zulip server, shown in the example below), e.g.:
-```
+```bash
rm -rf /home/zulip/uploads/*/2/
```
diff --git a/docs/production/install-existing-server.md b/docs/production/install-existing-server.md
index 143f165848..5ae9ee38de 100644
--- a/docs/production/install-existing-server.md
+++ b/docs/production/install-existing-server.md
@@ -26,7 +26,7 @@ zulip.com](https://zulip.com)).
Copy your existing nginx configuration to a backup and then merge the
one created by Zulip into it:
-```shell
+```bash
sudo cp /etc/nginx/nginx.conf /etc/nginx.conf.before-zulip-install
sudo wget -O /etc/nginx/nginx.conf.zulip \
https://raw.githubusercontent.com/zulip/zulip/master/puppet/zulip/templates/nginx.conf.template.erb
@@ -43,7 +43,7 @@ installs.
After the Zulip installation completes, then you can overwrite (or
merge) your new nginx.conf with the installed one:
-```shell
+```console
$ sudo meld /etc/nginx/nginx.conf.zulip /etc/nginx/nginx.conf # be sure to merge to the right
$ sudo service nginx restart
```
@@ -58,13 +58,13 @@ If you have a Puppet server running on your server, you will get an
error message about not being able to connect to the client during the
install process:
-```shell
+```console
puppet-agent[29873]: Could not request certificate: Failed to open TCP connection to puppet:8140
```
So you'll need to shut down any Puppet servers.
-```shell
+```console
$ sudo service puppet-agent stop
$ sudo service puppet stop
```
diff --git a/docs/production/install.md b/docs/production/install.md
index 3f15269b64..d1545dd7a4 100644
--- a/docs/production/install.md
+++ b/docs/production/install.md
@@ -18,7 +18,7 @@ Download and unpack [the latest built server
tarball](https://www.zulip.org/dist/releases/zulip-server-latest.tar.gz)
with the following commands:
-```
+```bash
cd $(mktemp -d)
wget https://www.zulip.org/dist/releases/zulip-server-latest.tar.gz
tar -xf zulip-server-latest.tar.gz
@@ -35,7 +35,7 @@ using code from our [repository on GitHub](https://github.com/zulip/zulip/).
To set up Zulip with the most common configuration, you can run the
installer as follows:
-```
+```bash
sudo -s # If not already root
./zulip-server-*/scripts/setup/install --certbot \
--email=YOUR_EMAIL --hostname=YOUR_HOSTNAME
diff --git a/docs/production/management-commands.md b/docs/production/management-commands.md
index fb83d1dade..641fe3b39a 100644
--- a/docs/production/management-commands.md
+++ b/docs/production/management-commands.md
@@ -10,7 +10,7 @@ framework][django-management].
Start by logging in as the `zulip` user on the Zulip server. Then run
them as follows:
-```
+```bash
cd /home/zulip/deployments/current
# Start by reading the help
@@ -39,7 +39,7 @@ string ID (usually the subdomain).
You can see all the organizations on your Zulip server using
`./manage.py list_realms`.
-```
+```console
zulip@zulip:~$ /home/zulip/deployments/current/manage.py list_realms
id string_id name
-- --------- ----
@@ -56,7 +56,7 @@ Unless you are
your single Zulip organization on the root domain will have the empty
string (`''`) as its `string_id`. So you can run e.g.:
-```
+```console
zulip@zulip:~$ /home/zulip/deployments/current/manage.py show_admins -r ''
```
@@ -73,7 +73,7 @@ You can get an IPython shell with full access to code within the Zulip
project using `manage.py shell`, e.g., you can do the following to
change a user's email address:
-```
+```console
$ cd /home/zulip/deployments/current/
$ ./manage.py shell
In [1]: user_profile = get_user_profile_by_email("email@example.com")
diff --git a/docs/production/mobile-push-notifications.md b/docs/production/mobile-push-notifications.md
index bdcce8d2e7..62b7710e80 100644
--- a/docs/production/mobile-push-notifications.md
+++ b/docs/production/mobile-push-notifications.md
@@ -34,7 +34,7 @@ You can enable this for your Zulip server as follows:
1. If you're running Zulip 1.8.1 or newer, you can run the
registration command:
- ```
+ ```bash
# As root:
su zulip -c '/home/zulip/deployments/current/manage.py register_server'
# Or as the zulip user, you can skip the `su zulip -c`:
diff --git a/docs/production/multiple-organizations.md b/docs/production/multiple-organizations.md
index 9bf58328e7..bed3062f28 100644
--- a/docs/production/multiple-organizations.md
+++ b/docs/production/multiple-organizations.md
@@ -71,7 +71,7 @@ If you'd like to use hostnames that are not subdomains of each other,
you can set the `REALM_HOSTS` setting in `/etc/zulip/settings.py` to a
Python dictionary, like this:
-```
+```python
REALM_HOSTS = {
'mysubdomain': 'hostname.example.com',
}
diff --git a/docs/production/postgresql.md b/docs/production/postgresql.md
index 3460dd2006..40e61fa810 100644
--- a/docs/production/postgresql.md
+++ b/docs/production/postgresql.md
@@ -53,13 +53,13 @@ PostgreSQL documentation):
Then you should specify the password of the user zulip for the
database in /etc/zulip/zulip-secrets.conf:
-```
+```ini
postgres_password = xxxx
```
Finally, you can stop your database on the Zulip server via:
-```
+```bash
sudo service postgresql stop
sudo update-rc.d postgresql disable
```
@@ -76,7 +76,7 @@ can give you some tips.
When debugging PostgreSQL issues, in addition to the standard `pg_top`
tool, often it can be useful to use this query:
-```
+```postgresql
SELECT procpid,waiting,query_start,current_query FROM pg_stat_activity ORDER BY procpid;
```
@@ -97,7 +97,7 @@ and enter recovery mode.
To start or stop PostgreSQL manually, use the pg_ctlcluster command:
-```
+```bash
pg_ctlcluster 9.1 [--force] main {start|stop|restart|reload}
```
@@ -128,7 +128,7 @@ database failed to start. It may tell you to check the logs, but you
won't find any information there. pg_ctlcluster runs the following
command underneath when it actually goes to start PostgreSQL:
-```
+```bash
/usr/lib/postgresql/9.1/bin/pg_ctl start -D /var/lib/postgresql/9.1/main -s -o \
'-c config_file="/etc/postgresql/9.1/main/postgresql.conf"'
```
diff --git a/docs/production/requirements.md b/docs/production/requirements.md
index e5a7b535f1..9f28b50e49 100644
--- a/docs/production/requirements.md
+++ b/docs/production/requirements.md
@@ -38,7 +38,7 @@ If you're using Ubuntu, the
[Ubuntu universe repository][ubuntu-repositories] must be
[enabled][enable-universe], which is usually just:
-```
+```bash
sudo add-apt-repository universe
sudo apt update
```
diff --git a/docs/production/settings.md b/docs/production/settings.md
index 7c1f6c9fe4..5d23223e32 100644
--- a/docs/production/settings.md
+++ b/docs/production/settings.md
@@ -15,7 +15,7 @@ This page discusses additional configuration that a system
administrator can do. To change any of the following settings, edit
the `/etc/zulip/settings.py` file on your Zulip server, and then
restart the server with the following command:
-```
+```bash
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```
diff --git a/docs/production/ssl-certificates.md b/docs/production/ssl-certificates.md
index 43953ac718..4d2403b95a 100644
--- a/docs/production/ssl-certificates.md
+++ b/docs/production/ssl-certificates.md
@@ -84,7 +84,7 @@ one as described in the section below after installing Zulip.
To enable the Certbot automation on an already-installed Zulip
server, run the following commands:
-```
+```bash
sudo -s # If not already root
/home/zulip/deployments/current/scripts/setup/setup-certbot --email=EMAIL HOSTNAME [HOSTNAME2...]
```
@@ -125,7 +125,7 @@ just pass the `--self-signed-cert` flag when
To generate a self-signed certificate for an already-installed Zulip
server, run the following commands:
-```
+```bash
sudo -s # If not already root
/home/zulip/deployments/current/scripts/setup/generate-self-signed-cert HOSTNAME
```
@@ -134,7 +134,7 @@ generated certificate.
After replacing the certificates, you need to reload `nginx` by
running the following as `root`:
-```
+```bash
service nginx reload
```
diff --git a/docs/production/troubleshooting.md b/docs/production/troubleshooting.md
index 0ec1d24118..0a7fecb31a 100644
--- a/docs/production/troubleshooting.md
+++ b/docs/production/troubleshooting.md
@@ -37,13 +37,13 @@ and restart various services.
### Checking status with `supervisorctl status`
You can check if the Zulip application is running using:
-```
+```bash
supervisorctl status
```
When everything is running as expected, you will see something like this:
-```
+```console
process-fts-updates RUNNING pid 2194, uptime 1:13:11
zulip-django RUNNING pid 2192, uptime 1:13:11
zulip-tornado RUNNING pid 2193, uptime 1:13:11
@@ -75,7 +75,7 @@ After you change configuration in `/etc/zulip/settings.py` or fix a
misconfiguration, you will often want to restart the Zulip application.
You can restart Zulip using:
-```
+```bash
supervisorctl restart all
```
@@ -83,7 +83,7 @@ supervisorctl restart all
Similarly, you can stop Zulip using:
-```
+```bash
supervisorctl stop all
```
@@ -111,13 +111,13 @@ problems and how to resolve them:
nginx will fail to start if you configured SSL incorrectly or did
not provide SSL certificates. To fix this, configure them properly
and then run:
- ```
+ ```bash
service nginx restart
```
* If your host is being port scanned by unauthorized users, you may see
messages in `/var/log/zulip/server.log` like
- ```
+ ```text
2017-02-22 14:11:33,537 ERROR Invalid HTTP_HOST header: '10.2.3.4'. You may need to add u'10.2.3.4' to ALLOWED_HOSTS.
```
Django uses the hostnames configured in `ALLOWED_HOSTS` to identify
@@ -128,7 +128,7 @@ problems and how to resolve them:
* An AMQPConnectionError traceback or error running rabbitmqctl
usually means that RabbitMQ is not running; to fix this, try:
- ```
+ ```bash
service rabbitmq-server restart
```
If RabbitMQ fails to start, the problem is often that you are using
@@ -176,7 +176,7 @@ You can ensure that the `unattended-upgrades` package never upgrades
PostgreSQL, memcached, Redis, or RabbitMQ, by configuring in
`/etc/apt/apt.conf.d/50unattended-upgrades`:
-```
+```text
// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
"libc\d+";
diff --git a/docs/production/upgrade-or-modify.md b/docs/production/upgrade-or-modify.md
index 28ecfe4b30..c684c05a0b 100644
--- a/docs/production/upgrade-or-modify.md
+++ b/docs/production/upgrade-or-modify.md
@@ -27,7 +27,7 @@ to a new Zulip release:
You can download the latest
release with:
- ```
+ ```bash
wget https://www.zulip.org/dist/releases/zulip-server-latest.tar.gz
```
@@ -39,7 +39,7 @@ to a new Zulip release:
1. Log in to your Zulip and run as root:
- ```
+ ```bash
/home/zulip/deployments/current/scripts/upgrade-zulip zulip-server-VERSION.tar.gz
```
@@ -71,7 +71,7 @@ Git repository, which is great for [running pre-release changes from
master](#applying-changes-from-master) or [maintaining a
fork](#making-changes). The process is simple:
-```
+```bash
# Upgrade to an official release
/home/zulip/deployments/current/scripts/upgrade-zulip-from-git 1.8.1
# Upgrade to a branch (or other Git ref)
@@ -95,7 +95,7 @@ By default, this uses the main upstream Zulip server repository, but
you can configure any other Git repository by adding a section like
this to `/etc/zulip/zulip.conf`:
-```
+```ini
[deployment]
git_repo_url = https://github.com/zulip/zulip.git
```
@@ -123,7 +123,7 @@ suggest using that updated template to update
do not have a recent [complete backup][backups]), and make a copy
of the current template:
- ```
+ ```bash
cp -a /etc/zulip/settings.py ~/zulip-settings-backup.py
cp -a /home/zulip/deployments/current/zproject/prod_settings_template.py /etc/zulip/settings-new.py
```
@@ -137,7 +137,7 @@ suggest using that updated template to update
the template that your `/etc/zulip/settings.py` was installed
using, and the differences that your file has from that:
- ```
+ ```bash
/home/zulip/deployments/current/scripts/setup/compare-settings-to-template
```
@@ -149,7 +149,7 @@ suggest using that updated template to update
the server to pick up the new file; this should be a no-op, but it
is much better to discover immediately if it is not:
- ```
+ ```bash
cp -a /etc/zulip/settings-new.py /etc/zulip/settings.py
su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
```
@@ -261,7 +261,7 @@ instructions for other supported platforms.
2. As the Zulip user, stop the Zulip server and run the following
to back up the system:
- ```
+ ```bash
supervisorctl stop all
/home/zulip/deployments/current/manage.py backup --output=/home/zulip/release-upgrade.backup.tar.gz
```
@@ -271,7 +271,7 @@ instructions for other supported platforms.
`do-release-upgrade` and following the prompts until it completes
successfully:
- ```
+ ```bash
sudo -i # Or otherwise get a root shell
do-release-upgrade -d
```
@@ -288,7 +288,7 @@ instructions for other supported platforms.
4. As root, upgrade the database to the latest version of PostgreSQL:
- ```
+ ```bash
/home/zulip/deployments/current/scripts/setup/upgrade-postgresql
```
@@ -297,7 +297,7 @@ instructions for other supported platforms.
"collations"); this corrupts database indexes that rely on
collations. Regenerate the affected indexes by running:
- ```
+ ```bash
/home/zulip/deployments/current/scripts/setup/reindex-textual-data --force
```
@@ -307,7 +307,7 @@ instructions for other supported platforms.
full-text search indexes to work with the upgraded dictionary
packages:
- ```
+ ```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets --audit-fts-indexes
@@ -330,7 +330,7 @@ instructions for other supported platforms.
4. As root, upgrade the database installation and OS configuration to
match the new OS version:
- ```
+ ```bash
touch /usr/share/postgresql/10/pgroonga_setup.sql.applied
/home/zulip/deployments/current/scripts/zulip-puppet-apply -f
pg_dropcluster 10 main --stop
@@ -346,7 +346,7 @@ instructions for other supported platforms.
among other things will recompile Zulip's Python module
dependencies for your new version of Python:
- ```
+ ```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets
@@ -361,7 +361,7 @@ instructions for other supported platforms.
7. As root, finish by verifying the contents of the full-text indexes:
- ```
+ ```bash
/home/zulip/deployments/current/manage.py audit_fts_indexes
```
@@ -378,7 +378,7 @@ instructions for other supported platforms.
4. As root, upgrade the database installation and OS configuration to
match the new OS version:
- ```
+ ```bash
apt remove upstart -y
/home/zulip/deployments/current/scripts/zulip-puppet-apply -f
pg_dropcluster 9.5 main --stop
@@ -394,7 +394,7 @@ instructions for other supported platforms.
among other things will recompile Zulip's Python module
dependencies for your new version of Python:
- ```
+ ```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets
@@ -429,7 +429,7 @@ instructions for other supported platforms.
4. As root, upgrade the database installation and OS configuration to
match the new OS version:
- ```
+ ```bash
apt remove upstart -y
/home/zulip/deployments/current/scripts/zulip-puppet-apply -f
pg_dropcluster 11 main --stop
@@ -445,7 +445,7 @@ instructions for other supported platforms.
among other things will recompile Zulip's Python module
dependencies for your new version of Python:
- ```
+ ```bash
rm -rf /srv/zulip-venv-cache/*
/home/zulip/deployments/current/scripts/lib/upgrade-zulip-stage-2 \
/home/zulip/deployments/current/ --ignore-static-assets
@@ -463,13 +463,13 @@ instructions for other supported platforms.
"collations"); this corrupts database indexes that rely on
collations. Regenerate the affected indexes by running:
- ```
+ ```bash
/home/zulip/deployments/current/scripts/setup/reindex-textual-data --force
```
8. As root, finish by verifying the contents of the full-text indexes:
- ```
+ ```bash
/home/zulip/deployments/current/manage.py audit_fts_indexes
```
@@ -570,7 +570,7 @@ Git guide][git-guide] if you need a primer):
[GitHub](https://github.com).
* Create a branch (named `acme-branch` below) containing your changes:
-```
+```bash
cd zulip
git checkout -b acme-branch 2.0.4
```
@@ -578,7 +578,7 @@ git checkout -b acme-branch 2.0.4
* Use your favorite code editor to modify Zulip.
* Commit your changes and push them to GitHub:
-```
+```bash
git commit -a
# Use `git diff` to verify your changes are what you expect
@@ -614,7 +614,7 @@ Otherwise, you'll need to update your branch by rebasing your changes
repository). The example below assumes you have a branch off of 2.0.4
and want to upgrade to 2.1.0.
-```
+```bash
cd zulip
git fetch --tags upstream
git checkout acme-branch
@@ -657,7 +657,7 @@ fixes on your local Zulip server without waiting for an official release.
Many bugs have small/simple fixes. In this case, you can use the Git
workflow [described above](#making-changes), using:
-```
+```bash
git fetch upstream
git cherry-pick abcd1234
```
diff --git a/docs/production/upload-backends.md b/docs/production/upload-backends.md
index 9ab225fa68..426a185401 100644
--- a/docs/production/upload-backends.md
+++ b/docs/production/upload-backends.md
@@ -50,7 +50,7 @@ as world-readable, whereas the "uploaded files" one is not.
With Zulip 1.9.0 and newer, you can do this automatically with the
following commands run as root:
- ```
+ ```bash
crudini --set /etc/zulip/zulip.conf application_server no_serve_uploads true
/home/zulip/deployments/current/scripts/zulip-puppet-apply
```
@@ -83,7 +83,7 @@ each of the two buckets, you'll want to
[add an S3 bucket policy](https://awspolicygen.s3.amazonaws.com/policygen.html)
entry that looks something like this:
-```
+```json
{
"Version": "2012-10-17",
"Id": "Policy1468991802321",
@@ -117,7 +117,7 @@ entry that looks something like this:
The avatars bucket is intended to be world-readable, so you'll also
need a block like this:
-```
+```json
{
"Sid": "Stmt1468991795389",
"Effect": "Allow",
diff --git a/docs/subsystems/caching.md b/docs/subsystems/caching.md
index 9a54855974..caf45ab15e 100644
--- a/docs/subsystems/caching.md
+++ b/docs/subsystems/caching.md
@@ -132,7 +132,7 @@ you configure some code to run every time Django does something (for
There's a handful of lines in `zerver/models.py` like these that
configure this:
-```
+```python
post_save.connect(flush_realm, sender=Realm)
post_save.connect(flush_user_profile, sender=UserProfile)
```
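+
+A new cached model would be wired up the same way; here is a hypothetical
+sketch, where `Widget`, `widget_cache_key`, and `flush_widget` are made-up
+names for illustration:
+
+```python
+from django.db.models.signals import post_save
+
+from zerver.lib.cache import cache_delete
+
+def flush_widget(sender, instance, **kwargs):
+    # Invalidate the memcached entry that depends on this Widget row.
+    cache_delete(widget_cache_key(instance.id))
+
+post_save.connect(flush_widget, sender=Widget)
+```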
diff --git a/docs/subsystems/email.md b/docs/subsystems/email.md
index bc615c842f..0d7fd1754c 100644
--- a/docs/subsystems/email.md
+++ b/docs/subsystems/email.md
@@ -90,7 +90,7 @@ following keys in `zproject/dev-secrets.conf`
Here is an example of how `zproject/dev-secrets.conf` might look if
you are using Gmail.
-```
+```ini
email_host = smtp.gmail.com
email_port = 587
email_host_user = username@gmail.com
diff --git a/docs/subsystems/events-system.md b/docs/subsystems/events-system.md
index 6f4c1b62f7..25bd249a4d 100644
--- a/docs/subsystems/events-system.md
+++ b/docs/subsystems/events-system.md
@@ -233,11 +233,13 @@ ready to write a test in `test_events.py`.
The actual code for a `test_events` test can be quite concise:
- def test_default_streams_events(self) -> None:
- stream = get_stream("Scotland", self.user_profile.realm)
- events = self.verify_action(lambda: do_add_default_stream(stream))
- check_default_streams("events[0]", events[0])
- # (some details omitted)
+```python
+def test_default_streams_events(self) -> None:
+ stream = get_stream("Scotland", self.user_profile.realm)
+ events = self.verify_action(lambda: do_add_default_stream(stream))
+ check_default_streams("events[0]", events[0])
+ # (some details omitted)
+```
The real trick is debugging these tests.
@@ -292,23 +294,25 @@ only has one required parameter, which is the action function. We
typically express the action function as a lambda, so that we
can pass in arguments:
- events = self.verify_action(lambda: do_add_default_stream(stream))
+```python
+events = self.verify_action(lambda: do_add_default_stream(stream))
+```
There are some notable optional parameters for `verify_action`:
- * `state_change_expected` must be set to `False` if your action
- doesn't actually require state changes for some reason; otherwise,
- `verify_action` will complain that your test doesn't really
- exercise any `apply_events` logic. Typing notifications (which
- are ephemereal) are a common place where we use this.
+* `state_change_expected` must be set to `False` if your action
+ doesn't actually require state changes for some reason; otherwise,
+ `verify_action` will complain that your test doesn't really
+ exercise any `apply_events` logic. Typing notifications (which
+ are ephemeral) are a common place where we use this.
- * `num_events` will tell `verify_action` how many events the
- `hamlet` user will receive after the action (the default is 1).
+* `num_events` will tell `verify_action` how many events the
+ `hamlet` user will receive after the action (the default is 1).
- * parameters such as `client_gravatar` and `slim_presence` get
- passed along to `fetch_initial_state_data` (and it's important
- to test both boolean values of these parameters for relevant
- actions).
+* parameters such as `client_gravatar` and `slim_presence` get
+ passed along to `fetch_initial_state_data` (and it's important
+ to test both boolean values of these parameters for relevant
+ actions).
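+
+For instance, one might combine these options when testing an ephemeral
+action; `do_some_ephemeral_action` below is a made-up stand-in, not a real
+Zulip function:
+
+```python
+# Sketch only: an action that emits two events but does not change the state
+# returned by `fetch_initial_state_data`, so no `apply_events` logic runs.
+events = self.verify_action(
+    lambda: do_some_ephemeral_action(self.user_profile),
+    state_change_expected=False,
+    num_events=2,
+)
+```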
For advanced use cases of `verify_action`, we highly recommend reading
the code itself in `BaseAction` (in `test_events.py`).
@@ -327,9 +331,11 @@ The second is higher-detail check inside `test_events` that this
specific test generated the expected series of events. Let's look at
the last line of our example test snippet:
- # ...
- events = self.verify_action(lambda: do_add_default_stream(stream))
- check_default_streams("events[0]", events[0])
+```python
+# ...
+events = self.verify_action(lambda: do_add_default_stream(stream))
+check_default_streams("events[0]", events[0])
+```
We have discussed `verify_action` in some detail, and you will
note that it returns the actual events generated by the action
@@ -346,13 +352,15 @@ If you are creating a new event format, then you will have to
write your own schema checker in `event_schema.py`. Here is
the example relevant to our example:
- default_streams_event = event_dict_type(
- required_keys=[
- ("type", Equals("default_streams")),
- ("default_streams", ListType(DictType(basic_stream_fields))),
- ]
- )
- check_default_streams = make_checker(default_streams_event)
+```python
+default_streams_event = event_dict_type(
+ required_keys=[
+ ("type", Equals("default_streams")),
+ ("default_streams", ListType(DictType(basic_stream_fields))),
+ ]
+)
+check_default_streams = make_checker(default_streams_event)
+```
Note that `basic_stream_fields` is not shown in these docs. The
best way to understand how to write schema checkers is to read
diff --git a/docs/subsystems/full-text-search.md b/docs/subsystems/full-text-search.md
index 514c71f2b5..96d9e7adac 100644
--- a/docs/subsystems/full-text-search.md
+++ b/docs/subsystems/full-text-search.md
@@ -41,33 +41,44 @@ All steps in this section should be run as the `root` user; on most installs, th
1. Alter the deployment setting:
- crudini --set /etc/zulip/zulip.conf machine pgroonga enabled
+ ```bash
+ crudini --set /etc/zulip/zulip.conf machine pgroonga enabled
+ ```
1. Update the deployment to respect that new setting:
- /home/zulip/deployments/current/scripts/zulip-puppet-apply
+ ```bash
+ /home/zulip/deployments/current/scripts/zulip-puppet-apply
+ ```
1. Edit `/etc/zulip/settings.py`, to add:
- USING_PGROONGA = True
+ ```python
+ USING_PGROONGA = True
+ ```
1. Apply the PGroonga migrations:
- su zulip -c '/home/zulip/deployments/current/manage.py migrate pgroonga'
+ ```bash
+ su zulip -c '/home/zulip/deployments/current/manage.py migrate pgroonga'
+ ```
Note that the migration may take a long time, and users will be
unable to send new messages until the migration finishes.
1. Once the migrations are complete, restart Zulip:
- su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
-
+ ```bash
+ su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
+ ```
### Disabling PGroonga
1. Remove the PGroonga migration:
- su zulip -c '/home/zulip/deployments/current/manage.py migrate pgroonga zero'
+ ```bash
+ su zulip -c '/home/zulip/deployments/current/manage.py migrate pgroonga zero'
+ ```
If you intend to re-enable PGroonga later, you can skip this step,
at the cost of your Message table being slightly larger than it would
@@ -75,12 +86,18 @@ All steps in this section should be run as the `root` user; on most installs, th
1. Edit `/etc/zulip/settings.py`, editing the line containing `USING_PGROONGA` to read:
- USING_PGROONGA = False
+ ```python
+ USING_PGROONGA = False
+ ```
1. Restart Zulip:
- su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
+ ```bash
+ su zulip -c '/home/zulip/deployments/current/scripts/restart-server'
+ ```
1. Finally, remove the deployment setting:
- crudini --del /etc/zulip/zulip.conf machine pgroonga
+ ```bash
+ crudini --del /etc/zulip/zulip.conf machine pgroonga
+ ```
diff --git a/docs/subsystems/hotspots.md b/docs/subsystems/hotspots.md
index f0dbdbf07d..4f01d64555 100644
--- a/docs/subsystems/hotspots.md
+++ b/docs/subsystems/hotspots.md
@@ -17,7 +17,7 @@ In `zerver/lib/hotspots.py`, add your content to the `ALL_HOTSPOTS` dictionary.
Each key-value pair in `ALL_HOTSPOTS` associates the name of the hotspot with the
content displayed to the user.
-```
+```python
ALL_HOTSPOTS = {
...
'new_hotspot_name': {
@@ -67,8 +67,8 @@ a target element on a sidebar or overlay, the icon's z-index may need to
be increased to 101, 102, or 103.
This adjustment can be made at the bottom of `static/styles/hotspots.css`:
-```
-\#hotspot_new_hotspot_name_icon {
+```css
+#hotspot_new_hotspot_name_icon {
z-index: 103;
}
```
diff --git a/docs/subsystems/html-css.md b/docs/subsystems/html-css.md
index 98f20e5f75..d2b97a4f4f 100644
--- a/docs/subsystems/html-css.md
+++ b/docs/subsystems/html-css.md
@@ -91,7 +91,7 @@ the default context available to all Jinja2 templates.
renders the template. For example, if you want to find the context
passed to `index.html`, you can do:
-```
+```console
$ git grep zerver/app/index.html '*.py'
zerver/views/home.py: response = render(request, 'zerver/app/index.html',
```
@@ -101,7 +101,7 @@ The next line in the code being the context definition.
* `zproject/urls.py` for some fairly static pages that are rendered
using `TemplateView`, for example:
-```
+```python
path('config-error/google', TemplateView.as_view(
template_name='zerver/config_error.html',),
{'google_error': True},),
diff --git a/docs/subsystems/logging.md b/docs/subsystems/logging.md
index 1ab45e9f67..7e0166bd0e 100644
--- a/docs/subsystems/logging.md
+++ b/docs/subsystems/logging.md
@@ -72,7 +72,7 @@ In production, one usually wants to look at `errors.log` for errors
since the main server log can be very verbose, but the main server log
can be extremely valuable for investigating performance problems.
-```
+```text
2016-05-20 14:50:22.056 INFO [zr] 127.0.0.1 GET 302 528ms (db: 1ms/1q) (+start: 123ms) / (unauth@zulip via ?)
[20/May/2016 14:50:22]"GET / HTTP/1.0" 302 0
2016-05-20 14:50:22.272 INFO [zr] 127.0.0.1 GET 200 124ms (db: 3ms/2q) /login/ (unauth@zulip via ?)
diff --git a/docs/subsystems/queuing.md b/docs/subsystems/queuing.md
index f90494de53..9485a61c5f 100644
--- a/docs/subsystems/queuing.md
+++ b/docs/subsystems/queuing.md
@@ -75,14 +75,14 @@ processor's code path, but it isn't always possible.
If you need to clear a queue (delete all the events in it), run
`./manage.py purge_queue `, for example:
-```
+```bash
./manage.py purge_queue user_activity
```
You can also use the amqp tools directly. Install `amqp-tools` from
apt and then run:
-```
+```bash
amqp-delete-queue --username=zulip --password='...' --server=localhost \
--queue=user_presence
```
diff --git a/docs/subsystems/realms.md b/docs/subsystems/realms.md
index 70d1243d5e..9dd688a561 100644
--- a/docs/subsystems/realms.md
+++ b/docs/subsystems/realms.md
@@ -25,7 +25,7 @@ There are two main methods for creating realms.
#### Using unique link generator
```bash
- ./manage.py generate_realm_creation_link
+./manage.py generate_realm_creation_link
```
The above command will output a URL which can be used for creating a
@@ -80,7 +80,7 @@ lookup should still work even if you disable proxy for
*.zulipdev.com. If it doesn't, you can add zulipdev.com records in
the `/etc/hosts` file. The file should look something like this:
- ```
+```text
127.0.0.1 localhost
127.0.0.1 zulipdev.com
diff --git a/docs/subsystems/schema-migrations.md b/docs/subsystems/schema-migrations.md
index 925b2c393c..287e544c9f 100644
--- a/docs/subsystems/schema-migrations.md
+++ b/docs/subsystems/schema-migrations.md
@@ -93,7 +93,7 @@ migrations.
Another important note is that making changes to the data in a table
via `RunPython` code and `ALTER TABLE` operations within a single,
atomic migration don't mix well. If you encounter an error such as
- ```
+ ```text
django.db.utils.OperationalError: cannot ALTER TABLE "table_name" because it has pending trigger events
```
when testing the migration, the reason is often that these operations
diff --git a/docs/subsystems/settings.md b/docs/subsystems/settings.md
index 01fb6335f1..3604b63ee1 100644
--- a/docs/subsystems/settings.md
+++ b/docs/subsystems/settings.md
@@ -30,7 +30,7 @@ means that the settings files are Python programs that set a lot of
variables with all-capital names like `EMAIL_GATEWAY_PATTERN`. You can
access these anywhere in the Zulip Django code using e.g.:
-```
+```python
from django.conf import settings
print(settings.EMAIL_GATEWAY_PATTERN)
```
@@ -38,7 +38,7 @@ print(settings.EMAIL_GATEWAY_PATTERN)
Additionally, if you need to access a Django setting in a shell
script (or just on the command line for debugging), you can use e.g.:
-```
+```console
$ ./scripts/get-django-setting EMAIL_GATEWAY_PATTERN
%s@localhost:9991
```
diff --git a/docs/subsystems/unread_messages.md b/docs/subsystems/unread_messages.md
index 2f5720117d..027cc24ccb 100644
--- a/docs/subsystems/unread_messages.md
+++ b/docs/subsystems/unread_messages.md
@@ -12,7 +12,7 @@ state grouped by relevant conversation keys. This data is included in the
`unread_msgs` key if both `update_message_flags` and `message` are required
in the register call.
-```
+```json
{
"count": 4,
"huddles": [
diff --git a/docs/testing/linters.md b/docs/testing/linters.md
index 47440e9339..0babc505fa 100644
--- a/docs/testing/linters.md
+++ b/docs/testing/linters.md
@@ -37,9 +37,11 @@ and exempting legacy files from lint checks.
If you run `./tools/test-all`, it will automatically run the linters.
You can also run them individually or pass specific files:
- ./tools/lint
- ./tools/lint static/js/compose.js
- ./tools/lint static/js/
+```bash
+./tools/lint
+./tools/lint static/js/compose.js
+./tools/lint static/js/
+```
`./tools/lint` has many useful options; you can read about them in its
internal documentation using `./tools/lint --help`. Of particular
diff --git a/docs/testing/mypy.md b/docs/testing/mypy.md
index b056374f53..52cf9edcf1 100644
--- a/docs/testing/mypy.md
+++ b/docs/testing/mypy.md
@@ -48,7 +48,9 @@ requirements/mypy.txt`.
To run mypy on Zulip's python code, you can run the command:
- tools/run-mypy
+```bash
+tools/run-mypy
+```
Mypy outputs errors in the same style as a compiler would. For
example, if your code has a type error like this:
@@ -60,7 +62,7 @@ foo = '1'
you'll get an error like this:
-```
+```console
test.py: note: In function "test":
test.py:200: error: Incompatible types in assignment (expression has type "str", variable has type "int")
```
@@ -507,7 +509,7 @@ have untracked files in your Zulip checkout safely). So if you get a
`mypy` error like this after adding a new file that is referenced by
the existing codebase:
-```
+```console
mypy | zerver/models.py:1234: note: Import of 'zerver.lib.markdown_wrappers' ignored
mypy | zerver/models.py:1234: note: (Using --follow-imports=error, module not passed on command line)
```
diff --git a/docs/testing/testing-with-django.md b/docs/testing/testing-with-django.md
index 8395dc7ecd..3cb57d5ea0 100644
--- a/docs/testing/testing-with-django.md
+++ b/docs/testing/testing-with-django.md
@@ -30,14 +30,18 @@ on a fast machine. When you are in iterative mode, you can run
individual tests or individual modules, following the dotted.test.name
convention below:
- cd /srv/zulip
- ./tools/test-backend zerver.tests.test_queue_worker.WorkerTest
+```bash
+cd /srv/zulip
+./tools/test-backend zerver.tests.test_queue_worker.WorkerTest
+```
There are many command line options for running Zulip tests, such
as a `--verbose` option. The
best way to learn the options is to use the online help:
- ./tools/test-backend --help
+```bash
+./tools/test-backend --help
+```
We also have ways to instrument our tests for finding code coverage,
URL coverage, and slow tests. Use the `-h` option to discover these
@@ -172,24 +176,28 @@ analyzed.
Say you have a module `greetings` defining the following functions:
- def fetch_database(key: str) -> str:
- # ...
- # Do some look-ups in a database
- return data
+```python
+def fetch_database(key: str) -> str:
+ # ...
+ # Do some look-ups in a database
+ return data
- def greet(name_key: str) -> str:
- name = fetch_database(name_key)
- return "Hello" + name
+def greet(name_key: str) -> str:
+ name = fetch_database(name_key)
+ return "Hello" + name
+```
* You want to test `greet()`.
* In your test, you want to call `greet("Mario")` and verify that it returns the correct greeting:
- from greetings import greet
+ ```python
+ from greetings import greet
- def test_greet() -> str:
- greeting = greet("Mario")
- assert greeting == "Hello Mr. Mario Mario"
+ def test_greet() -> str:
+ greeting = greet("Mario")
+ assert greeting == "Hello Mr. Mario Mario"
+ ```
-> **You have a problem**: `greet()` calls `fetch_database()`. `fetch_database()` does some look-ups in
a database. *You haven't created that database for your tests, so your test would fail, even though
@@ -202,15 +210,17 @@ Say you have a module `greetings` defining the following functions:
-> **Solution**: You mock `fetch_database()`. This is also referred to as "mocking out" `fetch_database()`.
- from unittest.mock import patch
+```python
+from unittest.mock import patch
- def test_greet() -> None:
- # Mock `fetch_database()` with an object that acts like a shell: It still accepts calls like `fetch_database()`,
- # but doesn't do any database lookup. We "fill" the shell with a return value; This value will be returned on every
- # call to `fetch_database()`.
- with patch("greetings.fetch_database", return_value="Mr. Mario Mario"):
- greeting = greetings.greet("Mario")
- assert greeting == "Hello Mr. Mario Mario"
+def test_greet() -> None:
+ # Mock `fetch_database()` with an object that acts like a shell: It still accepts calls like `fetch_database()`,
+ # but doesn't do any database lookup. We "fill" the shell with a return value; This value will be returned on every
+ # call to `fetch_database()`.
+ with patch("greetings.fetch_database", return_value="Mr. Mario Mario"):
+ greeting = greetings.greet("Mario")
+ assert greeting == "Hello Mr. Mario Mario"
+```
That's all. Note that **this mock is suitable for testing `greet()`, but not for testing `fetch_database()`**.
More generally, you should only mock those functions you explicitly don't want to test.
@@ -226,17 +236,21 @@ those are the ones starting with with a dunder `__`). From the docs:
`Mock` itself is a class that principally accepts and records any and all calls. A piece of code like
- from unittest import mock
+```python
+from unittest import mock
- foo = mock.Mock()
- foo.bar('quux')
- foo.baz
- foo.qux = 42
+foo = mock.Mock()
+foo.bar('quux')
+foo.baz
+foo.qux = 42
+```
is *not* going to throw any errors. Our mock silently accepts all these calls and records them.
`Mock` also implements methods for us to access and assert its records, e.g.
- foo.bar.assert_called_with('quux')
+```python
+foo.bar.assert_called_with('quux')
+```
Finally, `unittest.mock` also provides a method to mock objects only within a scope: `patch()`. We can use `patch()` either
as a decorator or as a context manager. In both cases, the mock created by `patch()` will apply for the scope of the decorator /
@@ -244,13 +258,15 @@ context manager. `patch()` takes only one required argument `target`. `target` i
the name of the object you want to mock*. It will then assign a `MagicMock()` to that object.
As an example, look at the following code:
- from unittest import mock
- from os import urandom
+```python
+from unittest import mock
+from os import urandom
- with mock.patch('__main__.urandom', return_value=42):
- print(urandom(1))
- print(urandom(1)) # No matter what value we plug in for urandom, it will always return 42.
- print(urandom(1)) # We exited the context manager, so the mock doesn't apply anymore. Will return a random byte.
+with mock.patch('__main__.urandom', return_value=42):
+ print(urandom(1))
+ print(urandom(1)) # No matter what value we plug in for urandom, it will always return 42.
+print(urandom(1)) # We exited the context manager, so the mock doesn't apply anymore. Will return a random byte.
+```
*Note that calling `mock.patch('os.urandom', return_value=42)` wouldn't work here*: `os.urandom` would be the name of our patched
object. However, we imported `urandom` with `from os import urandom`; hence, we bound the `urandom` name to our current module
@@ -262,27 +278,35 @@ On the other hand, if we had used `import os.urandom`, we would need to call `mo
* Including the Python mocking library:
- from unittest import mock
+ ```python
+ from unittest import mock
+ ```
* Mocking a class with a context manager:
- with mock.patch('module.ClassName', foo=42, return_value='I am a mock') as my_mock:
- # In here, 'module.ClassName' is mocked with a MagicMock() object my_mock.
- # my_mock has an attribute named foo with the value 42.
- # var = module.ClassName() will assign 'I am a mock' to var.
+ ```python
+ with mock.patch('module.ClassName', foo=42, return_value='I am a mock') as my_mock:
+ # In here, 'module.ClassName' is mocked with a MagicMock() object my_mock.
+ # my_mock has an attribute named foo with the value 42.
+ # var = module.ClassName() will assign 'I am a mock' to var.
+ ```
* Mocking a class with a decorator:
- @mock.patch('module.ClassName', foo=42, return_value='I am a mock')
- def my_function(my_mock):
- # ...
- # In here, 'module.ClassName' will behave as in the previous example.
+ ```python
+ @mock.patch('module.ClassName', foo=42, return_value='I am a mock')
+ def my_function(my_mock):
+ # ...
+ # In here, 'module.ClassName' will behave as in the previous example.
+ ```
* Mocking a class attribute:
- with mock.patch.object(module.ClassName, 'class_method', return_value=42)
- # In here, 'module.ClassName' has the same properties as before, except for 'class_method'
- # Calling module.ClassName.class_method() will now return 42.
+ ```python
+ with mock.patch.object(module.ClassName, 'class_method', return_value=42):
+ # In here, 'module.ClassName' has the same properties as before, except for 'class_method'
+ # Calling module.ClassName.class_method() will now return 42.
+ ```
Note the missing quotes around module.ClassName in the patch.object() call.
@@ -292,11 +316,13 @@ For mocking we generally use the "mock" library and use `mock.patch` as
a context manager or decorator. We also take advantage of some context managers
from Django as well as our own custom helpers. Here is an example:
- with self.settings(RATE_LIMITING=True):
- with mock.patch('zerver.decorator.rate_limit_user') as rate_limit_mock:
- api_result = my_webhook(request)
+```python
+with self.settings(RATE_LIMITING=True):
+ with mock.patch('zerver.decorator.rate_limit_user') as rate_limit_mock:
+ api_result = my_webhook(request)
- self.assertTrue(rate_limit_mock.called)
+self.assertTrue(rate_limit_mock.called)
+```
Follow [this link](../subsystems/settings.html#testing-non-default-settings) for more
information on the "settings" context manager.
diff --git a/docs/testing/testing-with-node.md b/docs/testing/testing-with-node.md
index ff6d3cf6cc..cc3c305474 100644
--- a/docs/testing/testing-with-node.md
+++ b/docs/testing/testing-with-node.md
@@ -7,8 +7,8 @@ system since it is much (>100x) faster and also easier to do correctly
than the Puppeteer system.
You can run this test suite as follows:
-```
- tools/test-js-with-node
+```bash
+tools/test-js-with-node
```
See `test-js-with-node --help` for useful options; even though the
@@ -19,7 +19,7 @@ The JS unit tests are written to work with node. You can find them
in `frontend_tests/node_tests`. Here is an example test from
`frontend_tests/node_tests/stream_data.js`:
-```
+```js
(function test_get_by_id() {
stream_data.clear_subscriptions();
var id = 42;
@@ -109,19 +109,23 @@ different types of declarations depending on whether we want to:
For all the modules where you want to run actual code, add statements
like the following toward the top of your test file:
-> zrequire('util');
-> zrequire('stream_data');
-> zrequire('Filter', 'js/filter');
+```js
+zrequire('util');
+zrequire('stream_data');
+zrequire('Filter', 'js/filter');
+```
For modules that you want to completely stub out, use a pattern like
this:
-> const reminder = mock_esm("../../static/js/reminder", {
-> is_deferred_delivery: noop,
-> });
->
-> // then maybe further down
-> reminder.is_deferred_delivery = () => true;
+```js
+const reminder = mock_esm("../../static/js/reminder", {
+ is_deferred_delivery: noop,
+});
+
+// then maybe further down
+reminder.is_deferred_delivery = () => true;
+```
One can similarly stub out functions in a module's exported interface
with either `noop` functions or actual code.
@@ -132,13 +136,15 @@ this is a pretty strong code smell that the other module might be
lacking in cohesion, but sometimes it's not worth going down the
rabbit hole of trying to improve that. The pattern here is this:
-> // Import real code.
-> zrequire('narrow_state');
->
-> // And later...
-> narrow_state.stream = function () {
-> return 'office';
-> };
+```js
+// Import real code.
+zrequire('narrow_state');
+
+// And later...
+narrow_state.stream = function () {
+ return 'office';
+};
+```
## Creating new test modules
@@ -151,8 +157,8 @@ in that directory to create a new test.
You can automatically generate coverage reports for the JavaScript unit
tests like this:
-```
- tools/test-js-with-node --coverage
+```bash
+tools/test-js-with-node --coverage
```
If tests pass, you will get instructions to view coverage reports
diff --git a/docs/testing/testing-with-puppeteer.md b/docs/testing/testing-with-puppeteer.md
index 8cdaa8d2ab..03ac150696 100644
--- a/docs/testing/testing-with-puppeteer.md
+++ b/docs/testing/testing-with-puppeteer.md
@@ -10,8 +10,8 @@ keyboard shortcuts, etc.).
## Running tests
You can run this test suite as follows:
-```
- tools/test-js-with-puppeteer
+```bash
+tools/test-js-with-puppeteer
```
See `tools/test-js-with-puppeteer --help` for useful options,
@@ -34,7 +34,7 @@ appears/disappears", or "Click on this HTML element".
For example, this function might test the `x` keyboard shortcut to
open the compose box for a new private message:
-```
+```js
async function test_private_message_compose_shortcut(page) {
await page.keyboard.press("KeyX");
await page.waitForSelector("#private_message_recipient", {visible: true});
diff --git a/docs/testing/testing.md b/docs/testing/testing.md
index 5c1da100f5..77d2504f98 100644
--- a/docs/testing/testing.md
+++ b/docs/testing/testing.md
@@ -23,7 +23,7 @@ you're using Vagrant, you may need to enter it with `vagrant ssh`.
You can run all of the test suites (similar to our continuous integration)
as follows:
-```
+```bash
./tools/test-all
```
@@ -31,7 +31,7 @@ However, you will rarely want to do this while actively developing,
because it takes a long time. Instead, your edit/refresh cycle will
typically involve running subsets of the tests with commands like these:
-```
+```bash
./tools/lint zerver/lib/actions.py # Lint the file you just changed
./tools/test-backend zerver.tests.test_markdown.MarkdownTest.test_inline_youtube
./tools/test-backend MarkdownTest # Run `test-backend --help` for more options
@@ -140,7 +140,7 @@ depending on Internet access.
This enforcement code results in the following exception:
- ```
+ ```pytb
File "tools/test-backend", line 120, in internet_guard
raise Exception("Outgoing network requests are not allowed in the Zulip tests."
Exception: Outgoing network requests are not allowed in the Zulip tests.
diff --git a/docs/translating/internationalization.md b/docs/translating/internationalization.md
index 08c7990225..16ef4fcc71 100644
--- a/docs/translating/internationalization.md
+++ b/docs/translating/internationalization.md
@@ -139,7 +139,7 @@ template so that it can be translated.
To mark a string for translation in a Jinja2 template, you
can use the `_()` function in the templates like this:
-```
+```jinja
{{ _("English text") }}
```
@@ -149,14 +149,14 @@ help translators to translate an entire sentence. To translate a
block, Jinja2 uses the [trans][] tag. So rather than writing
something ugly and confusing for translators like this:
-```
+```jinja
# Don't do this!
{{ _("This string will have") }} {{ value }} {{ _("inside") }}
```
You can instead use:
-```
+```jinja
{% trans %}This string will have {{ value }} inside.{% endtrans %}
```
@@ -165,7 +165,7 @@ You can instead use:
A string in Python can be marked for translation using the `_()` function,
which can be imported as follows:
-```
+```python
from django.utils.translation import gettext as _
```
@@ -174,7 +174,7 @@ ensure this, the error message passed to `json_error` and
`JsonableError` should always be a literal string enclosed by `_()`
function, e.g.:
-```
+```python
json_error(_('English text'))
JsonableError(_('English text'))
```
@@ -257,13 +257,13 @@ $t_html(
For translations in Handlebars templates we also use FormatJS, through two
Handlebars [helpers][] that Zulip registers. The syntax for simple strings is:
-```
+```html+handlebars
{{t 'English text' }}
```
If you are passing a translated string to a Handlebars partial, you can use:
-```
+```html+handlebars
{{> template_name
variable_name=(t 'English text')
}}
@@ -271,7 +271,8 @@ If you are passing a translated string to a Handlebars partial, you can use:
The syntax for block strings or strings containing variables is:
-```
+
+```text
{{#tr}}
Block of English text.
{{/tr}}
@@ -298,7 +299,7 @@ Restrictions on including HTML tags in translated strings are the same
as in JavaScript. You can insert more complex markup using a local
custom HTML tag like this:
-```
+```html+handlebars
{{#tr}}
HTML linking to the login page
{{#*inline "z-link"}}{{> @partial-block}}{{/inline}}
@@ -319,7 +320,7 @@ file, located at `~/.transifexrc`.
You can find details on how to set it up [here][transifexrc], but it should
look similar to this (with your credentials):
-```
+```ini
[https://www.transifex.com]
username = user
token =
diff --git a/docs/translating/translating.md b/docs/translating/translating.md
index b49eb41f3a..6946732574 100644
--- a/docs/translating/translating.md
+++ b/docs/translating/translating.md
@@ -143,7 +143,7 @@ There are a few ways to see your translations in the Zulip UI:
you can pass the `Accept-Language` header; here is some sample code to
test `Accept-Language` header using Python and `requests`:
- ```
+ ```python
import requests
headers = {"Accept-Language": "de"}
response = requests.get("http://localhost:9991/login/", headers=headers)
diff --git a/docs/tutorials/life-of-a-request.md b/docs/tutorials/life-of-a-request.md
index f09c88d7ca..e4ab124786 100644
--- a/docs/tutorials/life-of-a-request.md
+++ b/docs/tutorials/life-of-a-request.md
@@ -29,7 +29,7 @@ File not found errors (404) are served using a Django URL, so that we
can use configuration variables (like whether the user is logged in)
in the 404 error page.
-```
+```nginx
location /static/ {
alias /home/zulip/prod-static/;
# Set a nonexistent path, so we just serve the nice Django 404 page.
@@ -61,10 +61,7 @@ in
[the directory structure doc](../overview/directory-structure.md).
The main Zulip Django app is `zerver`. The routes are found in
-```
-zproject/urls.py
-zproject/legacy_urls.py
-```
+`zproject/urls.py` and `zproject/legacy_urls.py`.
There are HTML-serving, REST API, legacy, and webhook url patterns. We
will look at how each of these types of requests is handled, and focus
@@ -140,9 +137,11 @@ yields a response with this HTTP header:
We can see this reflected in [zproject/urls.py](https://github.com/zulip/zulip/blob/master/zproject/urls.py):
- rest_path('users',
- GET=get_members_backend,
- PUT=create_user_backend),
+```python
+rest_path('users',
+ GET=get_members_backend,
+ PUT=create_user_backend),
+```
In this way, the API is partially self-documenting.
@@ -175,7 +174,7 @@ the request, and then figure out which view to show from that.
In our example,
-```
+```python
GET=get_members_backend,
PUT=create_user_backend
```
@@ -195,7 +194,9 @@ This is covered in good detail in the [writing views doc](writing-views.md).
Our API works on JSON requests and responses. Every API endpoint should
return `json_error` in the case of an error, which gives a JSON string:
-`{'result': 'error', 'msg': }`
+```json
+{"result": "error", "msg": ""}
+```
in a
[HTTP response](https://docs.djangoproject.com/en/1.8/ref/request-response/)
@@ -203,11 +204,13 @@ with a content type of 'application/json'.
To pass back data from the server to the calling client, in the event of
a successfully handled request, we use
-`json_success(data=`.
+`json_success(data=)`.
This will result in a JSON string:
-`{'result': 'success', 'msg': '', 'data'='{'var_name1': 'var_value1', 'var_name2': 'var_value2'...}`
+```json
+{"result": "success", "msg": "", "data": {"var_name1": "var_value1", "var_name2": "var_value2"}}
+```
with a HTTP 200 status and a content type of 'application/json'.
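
For example, a client consuming this API can branch on the `result` field. Here is a minimal sketch using `requests`; the endpoint URL and credentials are placeholders:

```python
# Illustrative client-side handling of the response format described above.
# The URL, bot email, and API key below are placeholders, not real values.
import requests

response = requests.get(
    "https://chat.example.com/api/v1/users",
    auth=("bot@example.com", "BOT_API_KEY"),
)
payload = response.json()
if payload["result"] == "success":
    print("Request succeeded")
else:
    # On failure, "msg" carries a human-readable explanation of the error.
    print("Request failed:", payload["msg"])
```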
diff --git a/docs/tutorials/new-feature-tutorial.md b/docs/tutorials/new-feature-tutorial.md
index 07f234d864..19c2378634 100644
--- a/docs/tutorials/new-feature-tutorial.md
+++ b/docs/tutorials/new-feature-tutorial.md
@@ -68,7 +68,7 @@ organization in Zulip). The following files are involved in the process:
**Create and run the migration:** To create and apply a migration, run the
following commands:
-```
+```bash
./manage.py makemigrations
./manage.py migrate
```
@@ -168,14 +168,13 @@ boolean field, `mandatory_topics`, to the Realm model in
`zerver/models.py`.
``` diff
+ # zerver/models.py
-# zerver/models.py
-
-class Realm(models.Model):
- # ...
- emails_restricted_to_domains: bool = models.BooleanField(default=True)
- invite_required: bool = models.BooleanField(default=False)
-+ mandatory_topics: bool = models.BooleanField(default=False)
+ class Realm(models.Model):
+ # ...
+ emails_restricted_to_domains: bool = models.BooleanField(default=True)
+ invite_required: bool = models.BooleanField(default=False)
++ mandatory_topics: bool = models.BooleanField(default=False)
```
The Realm model also contains an attribute, `property_types`, which
@@ -186,18 +185,17 @@ is the field's type. Add the new field to the `property_types`
dictionary.
``` diff
+ # zerver/models.py
-# zerver/models.py
-
-class Realm(models.Model)
- # ...
- # Define the types of the various automatically managed properties
- property_types = dict(
- add_emoji_by_admins_only=bool,
- allow_edit_history=bool,
- # ...
-+ mandatory_topics=bool,
- # ...
+ class Realm(models.Model):
+ # ...
+ # Define the types of the various automatically managed properties
+ property_types = dict(
+ add_emoji_by_admins_only=bool,
+ allow_edit_history=bool,
+ # ...
++ mandatory_topics=bool,
+ # ...
```
**The majority of realm settings can be included in
@@ -231,7 +229,7 @@ is helpful.
Apply the migration using Django's `migrate` command: `./manage.py migrate`.
Output:
-```
+```console
shell $ ./manage.py migrate
Operations to perform:
Synchronize unmigrated apps: staticfiles, analytics, pipeline
@@ -291,50 +289,54 @@ argument can be a single user (if the setting is a personal one, like
time display format), members in a particular stream only or all
active users in a realm.
- # zerver/lib/actions.py
+```python
+# zerver/lib/actions.py
- def do_set_realm_property(
- realm: Realm, name: str, value: Any, *, acting_user: Optional[UserProfile]
- ) -> None:
- """Takes in a realm object, the name of an attribute to update, the
- value to update and and the user who initiated the update.
- """
- property_type = Realm.property_types[name]
- assert isinstance(value, property_type), (
- 'Cannot update %s: %s is not an instance of %s' % (
- name, value, property_type,))
+def do_set_realm_property(
+ realm: Realm, name: str, value: Any, *, acting_user: Optional[UserProfile]
+) -> None:
+ """Takes in a realm object, the name of an attribute to update, the
+    value to update and the user who initiated the update.
+ """
+ property_type = Realm.property_types[name]
+ assert isinstance(value, property_type), (
+ 'Cannot update %s: %s is not an instance of %s' % (
+ name, value, property_type,))
- setattr(realm, name, value)
- realm.save(update_fields=[name])
- event = dict(
- type='realm',
- op='update',
- property=name,
- value=value,
- )
- send_event(realm, event, active_user_ids(realm))
+ setattr(realm, name, value)
+ realm.save(update_fields=[name])
+ event = dict(
+ type='realm',
+ op='update',
+ property=name,
+ value=value,
+ )
+ send_event(realm, event, active_user_ids(realm))
+```
If the new realm property being added does not fit into the
`property_types` framework (such as the `authentication_methods`
field), you'll need to create a new function to explicitly update this
field and send an event. For example:
- # zerver/lib/actions.py
+```python
+# zerver/lib/actions.py
- def do_set_realm_authentication_methods(
- realm: Realm, authentication_methods: Dict[str, bool], *, acting_user: Optional[UserProfile]
- ) -> None:
- for key, value in list(authentication_methods.items()):
- index = getattr(realm.authentication_methods, key).number
- realm.authentication_methods.set_bit(index, int(value))
- realm.save(update_fields=['authentication_methods'])
- event = dict(
- type="realm",
- op="update_dict",
- property='default',
- data=dict(authentication_methods=realm.authentication_methods_dict())
- )
- send_event(realm, event, active_user_ids(realm))
+def do_set_realm_authentication_methods(
+ realm: Realm, authentication_methods: Dict[str, bool], *, acting_user: Optional[UserProfile]
+) -> None:
+ for key, value in list(authentication_methods.items()):
+ index = getattr(realm.authentication_methods, key).number
+ realm.authentication_methods.set_bit(index, int(value))
+ realm.save(update_fields=['authentication_methods'])
+ event = dict(
+ type="realm",
+ op="update_dict",
+ property='default',
+ data=dict(authentication_methods=realm.authentication_methods_dict())
+ )
+ send_event(realm, event, active_user_ids(realm))
+```
### Update application state
@@ -351,27 +353,29 @@ apps). The `apply_event` function in `zerver/lib/events.py` is important for
making sure the `state` is always correct, even in the event of rare
race conditions.
- # zerver/lib/events.py
+```python
+# zerver/lib/events.py
- def fetch_initial_state_data(user_profile, event_types, queue_id, include_subscribers=True):
- # ...
- if want('realm'):
+def fetch_initial_state_data(user_profile, event_types, queue_id, include_subscribers=True):
+ # ...
+ if want('realm'):
for property_name in Realm.property_types:
state['realm_' + property_name] = getattr(user_profile.realm, property_name)
state['realm_authentication_methods'] = user_profile.realm.authentication_methods_dict()
state['realm_allow_message_editing'] = user_profile.realm.allow_message_editing
# ...
- def apply_event
- user_profile: UserProfile,
- # ...
- ) -> None:
- for event in events:
+def apply_event(
+ user_profile: UserProfile,
+ # ...
+) -> None:
+ for event in events:
# ...
elif event['type'] == 'realm':
field = 'realm_' + event['property']
state[field] = event['value']
# ...
+```
If your new realm property fits the `property_types`
framework, you don't need to change `fetch_initial_state_data` or
@@ -380,14 +384,16 @@ property that is handled separately, you will need to explicitly add
the property to the `state` dictionary in the `fetch_initial_state_data`
function. E.g., for `authentication_methods`:
- # zerver/lib/events.py
+```python
+# zerver/lib/events.py
- def fetch_initial_state_data(user_profile, event_types, queue_id, include_subscribers=True):
- # ...
- if want('realm'):
- # ...
- state['realm_authentication_methods'] = user_profile.realm.authentication_methods_dict()
- # ...
+def fetch_initial_state_data(user_profile, event_types, queue_id, include_subscribers=True):
+ # ...
+ if want('realm'):
+ # ...
+ state['realm_authentication_methods'] = user_profile.realm.authentication_methods_dict()
+ # ...
+```
For this setting, one won't need to change `apply_event` since its
default code for `realm` event types handles this case correctly, but
@@ -409,8 +415,7 @@ function in `zerver/views/realm.py` (and add the appropriate mypy type
annotation).
``` diff
-
-# zerver/views/realm.py
+ # zerver/views/realm.py
def update_realm(
request: HttpRequest,
@@ -430,15 +435,17 @@ to `zerver/views/realm.py`.
Text fields or other realm properties that need additional validation
can be handled at the beginning of `update_realm`.
- # zerver/views/realm.py
+```python
+# zerver/views/realm.py
- # Additional validation/error checking beyond types go here, so
- # the entire request can succeed or fail atomically.
- if default_language is not None and default_language not in get_available_language_codes():
- raise JsonableError(_("Invalid language '%s'" % (default_language,)))
- if description is not None and len(description) > 100:
- return json_error(_("Realm description cannot exceed 100 characters."))
- # ...
+# Additional validation/error checking beyond types go here, so
+# the entire request can succeed or fail atomically.
+if default_language is not None and default_language not in get_available_language_codes():
+ raise JsonableError(_("Invalid language '%s'" % (default_language,)))
+if description is not None and len(description) > 100:
+ return json_error(_("Realm description cannot exceed 100 characters."))
+# ...
+```
The code in `update_realm` loops through the `property_types` dictionary
and calls `do_set_realm_property` on any property to be updated from
@@ -449,20 +456,22 @@ to call the function you wrote in `actions.py` that updates the database
with the new value. E.g., for `authentication_methods`, we created
`do_set_realm_authentication_methods`, which we will call here:
- # zerver/views/realm.py
+```python
+# zerver/views/realm.py
- # import do_set_realm_authentication_methods from actions.py
- from zerver.lib.actions import (
- do_set_realm_message_editing,
- do_set_realm_authentication_methods,
- # ...
- )
- # ...
- # ...
- if authentication_methods is not None and realm.authentication_methods_dict() != authentication_methods:
- do_set_realm_authentication_methods(realm, authentication_methods, acting_user=user_profile)
- data['authentication_methods'] = authentication_methods
+# import do_set_realm_authentication_methods from actions.py
+from zerver.lib.actions import (
+ do_set_realm_message_editing,
+ do_set_realm_authentication_methods,
# ...
+)
+# ...
+# ...
+if authentication_methods is not None and realm.authentication_methods_dict() != authentication_methods:
+ do_set_realm_authentication_methods(realm, authentication_methods, acting_user=user_profile)
+ data['authentication_methods'] = authentication_methods
+# ...
+```
This completes the backend implementation. A great next step is to
write automated backend tests for your new feature.
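
As a rough sketch of such a test for the `mandatory_topics` setting (helper names like `ZulipTestCase`, `client_patch`, and `assert_json_success` follow existing realm-settings tests; the exact helpers and fixtures may differ):

```python
# Hypothetical backend test for the new realm setting; adjust helper and
# fixture names to match the current test suite.
import orjson

from zerver.lib.test_classes import ZulipTestCase
from zerver.models import get_realm


class RealmMandatoryTopicsTest(ZulipTestCase):
    def test_change_mandatory_topics(self) -> None:
        # Only organization administrators may change realm settings.
        self.login("iago")
        realm = get_realm("zulip")
        self.assertFalse(realm.mandatory_topics)

        # Boolean settings are passed as JSON-encoded values.
        result = self.client_patch(
            "/json/realm",
            {"mandatory_topics": orjson.dumps(True).decode()},
        )
        self.assert_json_success(result)

        # Re-fetch the realm to confirm the value was persisted.
        realm = get_realm("zulip")
        self.assertTrue(realm.mandatory_topics)
```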
@@ -508,18 +517,17 @@ template.
Then add the new form control in `static/js/admin.js`.
``` diff
+ // static/js/admin.js
-// static/js/admin.js
-
-function _setup_page() {
- var options = {
- realm_name: page_params.realm_name,
- realm_description: page_params.realm_description,
- realm_emails_restricted_to_domains: page_params.realm_emails_restricted_to_domains,
- realm_invite_required: page_params.realm_invite_required,
- // ...
-+ realm_mandatory_topics: page_params.mandatory_topics,
- // ...
+ function _setup_page() {
+ var options = {
+ realm_name: page_params.realm_name,
+ realm_description: page_params.realm_description,
+ realm_emails_restricted_to_domains: page_params.realm_emails_restricted_to_domains,
+ realm_invite_required: page_params.realm_invite_required,
+ // ...
++ realm_mandatory_topics: page_params.realm_mandatory_topics,
+ // ...
```
The JavaScript code for organization settings and permissions can be found in
@@ -582,20 +590,19 @@ setting has changed, your function should be referenced in the
`settings_emoji.update_custom_emoji_ui`.
``` diff
+ // static/js/server_events_dispatch.js
-// static/js/server_events_dispatch.js
-
-function dispatch_normal_event(event) {
- switch (event.type) {
- // ...
- case 'realm':
- var realm_settings = {
- add_emoji_by_admins_only: settings_emoji.update_custom_emoji_ui,
- allow_edit_history: noop,
- // ...
-+ mandatory_topics: noop,
- // ...
- };
+ function dispatch_normal_event(event) {
+ switch (event.type) {
+ // ...
+ case 'realm':
+ var realm_settings = {
+ add_emoji_by_admins_only: settings_emoji.update_custom_emoji_ui,
+ allow_edit_history: noop,
+ // ...
++ mandatory_topics: noop,
+ // ...
+ };
```
Checkboxes and other common input elements handle the UI updates
@@ -638,12 +645,14 @@ At the minimum, if you created a new function to update UI in
`frontend_tests/node_tests/dispatch.js`. Add the name of the UI
function you created to the following object with `noop` as the value:
- # frontend_tests/node_tests/dispatch.js
+```js
+// frontend_tests/node_tests/dispatch.js
- set_global('settings_org', {
- update_email_change_display: noop,
- update_name_change_display: noop,
- });
+set_global('settings_org', {
+ update_email_change_display: noop,
+ update_name_change_display: noop,
+});
+```
Beyond that, you should add any applicable tests that verify the
behavior of the setting you just created.
diff --git a/docs/tutorials/shell-tips.md b/docs/tutorials/shell-tips.md
index 4c905cea02..fe69ff83a9 100644
--- a/docs/tutorials/shell-tips.md
+++ b/docs/tutorials/shell-tips.md
@@ -44,7 +44,7 @@ abbreviation for your home directory (`/home/YOUR_USERNAME` most of the times).
That's why the following is exactly the same, if the user running it is
`john`:
-```
+```console
$ cd ~
$ cd /home/john
```
@@ -58,7 +58,7 @@ directory, instead of writing the whole path.
Imagine you have a file called `ideas.txt` inside `/home/john/notes/`, and
you want to edit it using `nano`. You could use:
-```
+```console
$ nano /home/john/notes/ideas.txt
```
@@ -69,7 +69,7 @@ That's why it's very useful to change the path where you are currently
located (usually known as **working directory**). To do that, you use `cd`
(**c**hange **d**irectory):
-```
+```console
$ cd /home/john/notes/
~/notes$ nano ideas.txt
```
@@ -99,7 +99,7 @@ In case you were wondering, the name `sudo` comes from **s**uper **u**ser
Some characters cannot be used directly in the shell, because they have a
special meaning. Consider the following example:
-```
+```console
$ echo "He said hello"
He said hello
```
@@ -118,7 +118,7 @@ before it.
Returning to our example:
-```
+```console
$ echo "He said \"hello\""
He said "hello"
```
@@ -138,7 +138,7 @@ the shell provides two different separators:
- **Semicolon `;`**: runs a command, and once it has finished, runs the next
one:
- ```
+ ```console
$ echo "Hello"; echo "World!"
Hello
World!
@@ -147,7 +147,7 @@ the shell provides two different separators:
- **Double ampersand `&&`**: runs a command, and **only if** it finished
without errors, it proceeds with the next one:
- ```
+ ```console
$ qwfvijwe && echo "Hello"
qwfvijwe: command not found
```
@@ -158,7 +158,7 @@ the shell provides two different separators:
When using an incorrect command with a semicolon, the `Hello` will still
be printed:
- ```
+ ```console
$ qwfvijwe; echo "Hello"
qwfvijwe: command not found
Hello
@@ -176,7 +176,7 @@ shell "wait, there's more on the next line".
This is an example, taken from the docs on how to install the Zulip development
environment:
-```
+```bash
sudo apt-get -y purge vagrant && \
wget https://releases.hashicorp.com/vagrant/2.0.2/vagrant_2.0.2_x86_64.deb && \
sudo dpkg -i vagrant*.deb && \
@@ -204,7 +204,7 @@ Most commands need additional data to work, like a path or a file. That extra
information is called an **argument**, and it's specified after the name of the
command, like this:
-```
+```console
$ cd /home/john/notes
```
@@ -221,7 +221,7 @@ different meanings.
Sometimes, a command can accept arguments indicated with dashes. Here's another
example of arguments usage:
-```
+```console
$ nano -C /home/john/backups --mouse todo.txt
```
@@ -245,7 +245,7 @@ Note that the `todo.txt` is the file we want to open! It has nothing to do with
the previous argument. This will probably clarify it (taken from `nano`'s
help):
-```
+```console
Usage: nano [OPTIONS] [FILE]...
```
@@ -263,13 +263,13 @@ them.
That's why you may have seen cases, in the Zulip codebase or
elsewhere, when some Python scripts are called with `python`:
-```
+```console
$ python my_program.py
```
While other times, `python` isn't used:
-```
+```console
$ ./my_program.py
```
@@ -281,7 +281,7 @@ The note telling the OS how to interpret the file goes on the file's
very first line, and it's called a **shebang**. In our Python scripts,
it looks like this:
-```
+```python
#!/usr/bin/env python3
```
@@ -294,7 +294,7 @@ added as a command-line argument. So, returning to our example with
`my_program.py`, when you run `./my_program.py`, what happens under
the hood is equivalent to:
-```
+```console
$ /usr/bin/env python3 ./my_program.py
```
diff --git a/tools/linter_lib/custom_check.py b/tools/linter_lib/custom_check.py
index 7f0faeeedd..740ee29d69 100644
--- a/tools/linter_lib/custom_check.py
+++ b/tools/linter_lib/custom_check.py
@@ -98,6 +98,9 @@ markdown_whitespace_rules: List["Rule"] = [
"pattern": "^#+[A-Za-z0-9]",
"strip": "\n",
"description": "Missing space after # in heading",
+ "exclude_line": {
+ ("docs/subsystems/hotspots.md", "#hotspot_new_hotspot_name_icon {"),
+ },
"good_lines": ["### some heading", "# another heading"],
"bad_lines": ["###some heading", "#another heading"],
},