docs: Apply sentence single-spacing from Prettier.

Signed-off-by: Anders Kaseorg <anders@zulip.com>
(cherry picked from commit 35c1c8d41b)
Author: Anders Kaseorg <anders@zulip.com>
Date: 2021-08-20 12:53:28 -07:00
Committed by: Tim Abbott
parent 0147c6adce
commit aa6e70382d
104 changed files with 1528 additions and 1528 deletions

View File

@@ -8,7 +8,7 @@ GitHub Actions and debugging issues with it.
## Goals
The overall goal of our CI is to avoid regressions and minimize the
total time spent debugging Zulip. We do that by trying to catch as
many possible future bugs as possible, while minimizing both latency
and false positives, both of which can waste a lot of developer time.
There are a few implications of this overall goal:
@@ -28,14 +28,14 @@ run to iteratively debug something.
- GitHub Actions stores timestamps for every line in the logs. They
are hidden by default; you can see them by toggling the
`Show timestamps` option in the menu on any job's log page. (You can
get this sort of timestamp in a development environment by piping
output to `ts`).
- GitHub Actions runs on every branch you push on your Zulip fork.
This is helpful when debugging something complicated.
- You can also ssh into a container to debug failures. SSHing into
the containers can be helpful, especially in rare cases where the
tests are passing on your computer but failing in the CI. There are
various
@@ -108,7 +108,7 @@ generated Dockerfiles.
An important element of making GitHub Actions perform effectively is caching
between jobs the various caches that live under `/srv/` in a Zulip
development or production environment. In particular, we cache the
following:
- Python virtualenvs
@@ -122,7 +122,7 @@ We have designed these caches carefully (they are also used in
production and the Zulip development environment) to ensure that each
is named by a hash of its dependencies and ubuntu distribution name,
so Zulip should always be using the same version of dependencies it
would have used had the cache not existed. In practice, bugs are
always possible, so be mindful of this possibility.
A consequence of this caching is that test jobs for branches which

View File

@@ -11,7 +11,7 @@ but for other files, we are quite thorough about checking semantic
correctness.
Obviously, a large reason for linting code is to enforce the [Zulip
coding standards](../contributing/code-style.md). But we also use the linters to
prevent common coding errors.
We borrow some open source tools for much of our linting, and the links
@@ -44,7 +44,7 @@ You can also run them individually or pass specific files:
```
`./tools/lint` has many useful options; you can read about them in its
internal documentation using `./tools/lint --help`. Of particular
note are:
- `--fix`: Several of our linters support automatically fixing basic
issues; this option will ask `tools/lint` to run those.
@@ -82,9 +82,9 @@ And, of course, for minor oversights, `lint` is your friend, not your foe.
Occasionally, our linters will complain about things that are more of
an artifact of the linter limitations than any actual problem with your
code. There is usually a mechanism where you can bypass the linter in
extreme cases, but often it can be a simple matter of writing your code
in a slightly different style to appease the linter. If you have
problems getting something to lint, you can submit an unfinished PR
and ask the reviewer to help you work through the lint problem, or you
can find other people in the [Zulip Community](../contributing/chat-zulip-org.md)
@@ -95,12 +95,12 @@ find limitations in either the Zulip home-grown stuff or our third party
tools, feedback will be highly appreciated.
Finally, one way to clean up your code is to thoroughly exercise it
with tests. The [Zulip test documentation](../testing/testing.md)
describes our test system in detail.
## Lint checks
Most of our lint checks get performed by `./tools/lint`. These include the
following checks:
- Check Python code with pyflakes.
@@ -113,10 +113,10 @@ following checks:
- Check HTML templates for matching tags and indentations.
- Check CSS for parsability and formatting.
- Check JavaScript code for addClass calls.
- Running `mypy` to check static types in Python code. Our
[documentation on using mypy](../testing/mypy.md) covers mypy in
more detail.
- Running `tsc` to compile TypeScript code. Our [documentation on
TypeScript](typescript.md) covers TypeScript in more detail.
The rest of this document pertains to the checks that occur in `./tools/lint`.
@@ -134,14 +134,14 @@ In order for our entire lint suite to run in a timely fashion, the `lint`
script performs several lint checks in parallel by forking out subprocesses.
Note that our project does custom regex-based checks on the code, and we
also customize how we call pyflakes and pycodestyle (pep8). The code for these
types of checks mostly lives [here](https://github.com/zulip/zulip/tree/main/tools/linter_lib).
### Special options
You can use the `-h` option for `lint` to see its usage. One particular
flag to take note of is the `--modified` flag, which enables you to only run
lint checks against files that are modified in your Git repo. Most of the
"sub-linters" respect this flag, but some will continue to process all the files.
Generally, a good workflow is to run with `--modified` when you are iterating on
the code, and then run without that option right before committing new code.
@@ -157,7 +157,7 @@ various file types.
#### Generic source code checks
We check almost our entire codebase for trailing whitespace. Also, we
disallow tab (\t) characters in all but two files.
We also have custom regex-based checks that apply to specific file types.
@@ -165,7 +165,7 @@ For relatively minor files like Markdown files and JSON fixtures, this
is the extent of our checking.
Finally, we're checking line length in Python code (and hope to extend
this to other parts of the codebase soon). You can use
`#ignorelinelength` for special cases where a very long line makes
sense (e.g. a link in a comment to an extremely long URL).
@@ -173,19 +173,19 @@ sense (e.g. a link in a comment to an extremely long URL).
Our Python code is formatted using Black (using the options in the
`[tool.black]` section of `pyproject.toml`) and isort (using the
options in `.isort.cfg`). The `lint` script enforces this by running
Black and isort in check mode, or in write mode with `--fix`.
The bulk of our Python linting gets outsourced to the "pyflakes" tool. We
call "pyflakes" in a fairly vanilla fashion, and then we post-process its
output to exclude certain specific errors that Zulip is comfortable
ignoring.
Zulip also has custom regex-based rules that it applies to Python code.
Look for `python_rules` in the source code for `lint`. Note that we
provide a mechanism to exclude certain lines of codes from these checks.
Often, it is simply the case that our regex approach is too crude to
correctly exonerate certain valid constructs. In other cases, the code
that we exempt may be deemed not worthwhile to fix.
#### JavaScript code
@@ -198,7 +198,7 @@ We check our JavaScript code in a few different ways:
#### Puppet manifests
We use Puppet as our tool to manage configuration files, using
Puppet "manifests." To lint Puppet manifests, we use the "parser validate"
Puppet "manifests." To lint Puppet manifests, we use the "parser validate"
option of Puppet.
#### HTML templates
@@ -209,7 +209,7 @@ Zulip uses two HTML templating systems:
- [handlebars](https://handlebarsjs.com/)
Zulip has an internal tool that validates both types of templates for
correct indentation and matching tags. You can find the code here:
- driver: [check-templates](https://github.com/zulip/zulip/blob/main/tools/check-templates)
- engine: [lib/template_parser.py](https://github.com/zulip/zulip/blob/main/tools/lib/template_parser.py)
@@ -227,7 +227,7 @@ for the rules we currently enforce.
#### Shell scripts
Zulip uses [shellcheck](https://github.com/koalaman/shellcheck) to
lint our shell scripts. We recommend the
[shellcheck gallery of bad code][shellcheck-bad-code] as a resource on
how to not write bad shell.

View File

@@ -1,14 +1,14 @@
# Manual testing #
As a general rule, we like to have automated tests for everything that
can be practically tested. However, there are certain types of bugs
that are best caught with old fashioned manual testing (also called
manual QA). Manual testing not only catches bugs, but it also helps
developers learn more about the system and think about the existing
semantics of a feature they're working on.
This doc assumes you know how to set up a local development server
and open the Zulip app in the browser. It also assumes a basic
knowledge of how to use Zulip.
## Basic stuff ##
@@ -16,24 +16,24 @@ knowledge of how to use Zulip.
When testing Zulip manually, here are things to focus on:
- The best bugs to catch are security/permissions bugs.
- Don't rush manual testing. Look for small details like
display glitches.
- Always test with multiple users (you can use incognito windows
to facilitate this).
- Always keep the inspector console open and watch for warnings
or errors.
- Be methodical about collecting information on bugs. (You will
eventually want to create tickets, but you may want to consolidate
your own notes before filing tickets.)
You generally want to test with Cordelia as the primary user,
and use Hamlet as her primary conversation partner. Use Iago
when you need to test administrative functions. Send messages
to Othello or Prospero if you want to verify things such as
Cordelia not being able to receive messages not intended for her.
The rest of this document groups tasks into basic areas of
functionality of the system. If you have multiple people testing
at once, you can divvy up QA tasks by these sections in the doc.
### Message view ###
@@ -150,9 +150,9 @@ Here are some tasks:
Zulip uses the term "narrowing" to refer to opening different views
of your messages, whether by clicking on sidebar options, recipient
bars, or by using search. The main focus of these tasks should
be watching unread counts. Of course, you also want to see messages
show up in the message pane. And, finally, you should make sure
that no messages outside the narrow show up in Cordelia's view.
@@ -191,14 +191,14 @@ messages after each narrow):
- Go to Denmark view.
- Go to Denmark/foo view.
There are 56 things to test here. If you can get into a rhythm
where you can test each case in about 30 seconds, then the whole
exercise is about 30 minutes, assuming no bugs.
### Composing messages ###
We have pretty good automated tests for our Markdown processor, so
manual testing is targeted more to other interactions. For composing
a message, pay attention to details like what is automatically
populated and where the focus is placed.
@@ -258,9 +258,9 @@ populated and where the focus is placed.
### Popover menus ###
For this task you just want to go through all of our popover menus
and exercise them. The main nuance here is that you occasionally want
to click somewhere on the UI outside of an existing popover to see if
the popover menu is "too sticky." Also, occasionally actions will be
somewhat jarring; for example, if you mute a message in the current view,
then the message will disappear from the view.
@@ -307,7 +307,7 @@ Here are the things to test:
### Sidebar filtering ###
This is a fairly quick task where we test the search filters on the left sidebar
and the buddy list. If Cordelia is not subscribed to Denmark, subscribe her to
that stream.
- Streams filtering
@@ -350,7 +350,7 @@ First, we start off with "positive" tests.
users.
For negative tests, we want to dig a little deeper to find back
doors for Cordelia to access the stream. Here are some techniques
to try:
- Try to have her compose a message to the stream by
@@ -360,7 +360,7 @@ to try:
- Go to stream settings and see if the stream shows up.
For public streams, it's ok for Cordelia to know the stream exists,
and she can subsequently subscribe. For private streams, she should
not even know they exist (until she's invited, of course).
- Negative tests
@@ -381,9 +381,9 @@ not even know they exist (until she's invited, of course).
### Search ###
The main task for testing search is to play around with search
suggestions (autocomplete). Once you select an option, verify the
message view is consistent with the search and that the left sidebar
reflects the current narrow. If a search comes up legitimately
empty, have Hamlet send a message that matches the search.
Here are searches you should be able to do with autocomplete:
@@ -531,7 +531,7 @@ Here are the tasks:
### To be continued... ###
This document does not cover settings/admin options yet. The main
things to do when testing the settings system are:
- Verify that changes are synced to other users.
- Verify error messages appear if you do something wrong and look right.

View File

@@ -1,9 +1,9 @@
# Python static type checker (mypy)
[mypy](http://mypy-lang.org/) is a compile-time static type checker
for Python, allowing optional, gradual typing of Python code. Zulip
was fully annotated with mypy's Python 2 syntax in 2016, before our
migration to Python 3 in late 2017. In 2018 and 2020, we migrated
essentially the entire codebase to the nice PEP 484 (Python 3 only)
and PEP 526 (Python 3.6) syntax for static types:
@@ -19,7 +19,7 @@ You can learn more about it at:
- The
[mypy cheat sheet for Python 3](https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html)
is the best resource for quickly understanding how to write the PEP
484 type annotations used by mypy correctly. The
[Python 2 cheat sheet](https://mypy.readthedocs.io/en/latest/cheat_sheet.html)
is useful for understanding the type comment syntax needed for our
few modules that need to support both Python 2 and 3.
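To make that syntax concrete, here is a small self-contained illustration of the PEP 484/526 annotation style the cheat sheets cover (not code from the Zulip codebase):
```python
from typing import Dict, List, Optional

# PEP 526 variable annotation
user_ages: Dict[str, int] = {}

# PEP 484 function annotations, including an Optional parameter with a default
def greeting(name: str, titles: Optional[List[str]] = None) -> str:
    prefix = " ".join(titles) + " " if titles else ""
    return f"Hello, {prefix}{name}!"
```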
@@ -38,7 +38,7 @@ CI testing process in the `backend` build.
## Installing mypy
mypy is installed by default in the Zulip development environment. If
you'd like to install just the version of `mypy` that we're using
(useful if e.g. you want `mypy` installed on your laptop outside the
Vagrant guest), you can do that with
@@ -52,7 +52,7 @@ To run mypy on Zulip's python code, you can run the command:
tools/run-mypy
```
Mypy outputs errors in the same style as a compiler would. For
example, if your code has a type error like this:
```python
@@ -70,7 +70,7 @@ test.py:200: error: Incompatible types in assignment (expression has type "str",
## Mypy is there to find bugs in Zulip before they impact users
For the purposes of Zulip development, you can treat `mypy` like a
much more powerful linter that can catch a wide range of bugs. If,
after running `tools/run-mypy` on your Zulip branch, you get mypy
errors, it's important to get to the bottom of the issue, not just do
something quick to silence the warnings, before we merge the changes.
@@ -86,7 +86,7 @@ Possible explanations include:
Each explanation has its own solution, but in every case the result
should be solving the mypy warning in a way that makes the Zulip
codebase better. If you're having trouble, silence the warning with
an `Any` or `# type: ignore[code]` so you're not blocked waiting for help,
add a `# TODO: ` comment so it doesn't get forgotten in code review,
and ask for help in chat.zulip.org.
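A hypothetical illustration of that stopgap pattern (none of these names come from the Zulip codebase):
```python
from typing import Dict

def payload_from_wire() -> Dict[str, object]:
    return {"stream": "Verona", "count": 3}

# mypy would flag this assignment; silence just that error code and leave a TODO
# so the shortcut is revisited in code review, as recommended above.
# TODO: Validate the payload instead of overriding the type checker.
counts: Dict[str, int] = payload_from_wire()  # type: ignore[assignment]
```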
@@ -105,7 +105,7 @@ root of the project, letting `mypy` know that it's third-party code,
or add type stubs to the `stubs/` directory, which has type stubs that
mypy can use to type-check calls into that third-party module.
It's easy to add new stubs! Just read the docs, look at some of
existing examples to see how they work, and remember to remove the
`ignore_missing_imports` entry in `mypy.ini` when you add them.
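A stub is a `.pyi` file containing annotated signatures with `...` bodies. A minimal hypothetical sketch (for an imaginary `somelib` package, not one of Zulip's real stubs):
```python
# stubs/somelib/__init__.pyi (hypothetical)
from typing import Optional

def frobnicate(data: bytes, retries: int = ...) -> str: ...

class Client:
    def __init__(self, api_key: str, timeout: Optional[float] = ...) -> None: ...
    def send(self, payload: str) -> bool: ...
```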
@@ -116,9 +116,9 @@ to use `mypy`!), but means the code can't be fully type-checked.
## `type_debug.py`
`zerver/lib/type_debug.py` has a useful decorator `print_types`. It
prints the types of the parameters of the decorated function and the
return type whenever that function is called. This can help find out
what parameter types a function is supposed to accept, or if
parameters with the wrong types are being passed to a function.
@@ -145,8 +145,8 @@ func([int, ...], [int, ...]) -> [int, ...]
[1, 2, 3, 4, 5, 6, 7]
```
`print_all` prints the type of the first item of lists. So `[int, ...]` represents
a list whose first element's type is `int`. Types of all items are not printed
because a list can have many elements, which would make the output too large.
Similarly in dicts, one key's type and the corresponding value's type are printed.
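Applying the decorator might look like the sketch below (illustrative only; the exact output format is described above and may differ in detail):
```python
from zerver.lib.type_debug import print_types

@print_types
def pad_lengths(words, bonus):
    return [len(w) + bonus for w in words]

pad_lengths(["apple", "fig"], 2)
# Prints something like: pad_lengths([str, ...], int) -> [int, ...]
```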
@@ -156,13 +156,13 @@ So `{1: 'a', 2: 'b', 3: 'c'}` will be printed as `{int: str, ...}`.
Sometimes, a function's type is most precisely expressed as a few
possibilities, and which possibility can be determined by looking at
the arguments. You can express that idea in a way mypy understands
using `@overload`. For example, `check_list` returns a `Validator`
function that verifies that an object is a list, raising an exception
if it isn't.
It supports being passed a `sub_validator`, which will verify that
each element in the list has a given type as well. One can express
the idea "If `sub_validator` validates that something is a `ResultT`,
`check_list(sub_validator)` validates that something is a
`List[ResultT]`" as follows:
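In outline, such overloads could be sketched as below. This is illustrative only; the real `check_list` and `Validator` definitions live in `zerver/lib/validator.py` and differ in their details.
```python
from typing import Callable, List, Optional, TypeVar, overload

ResultT = TypeVar("ResultT")
# Simplified stand-in for Zulip's Validator type: a callable that checks and returns a value.
Validator = Callable[[str, object], ResultT]

@overload
def check_list(sub_validator: None = ...) -> Validator[List[object]]: ...
@overload
def check_list(sub_validator: Validator[ResultT]) -> Validator[List[ResultT]]: ...
def check_list(sub_validator: Optional[Validator[ResultT]] = None) -> Validator[List[ResultT]]:
    def f(var_name: str, val: object) -> List[ResultT]:
        if not isinstance(val, list):
            raise TypeError(f"{var_name} is not a list")
        if sub_validator is not None:
            for i, item in enumerate(val):
                sub_validator(f"{var_name}[{i}]", item)
        return val
    return f
```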
@@ -186,7 +186,7 @@ type logic for the case where we are passed a `sub_validator`.
**Warning:** Mypy only checks the body of an overloaded function
against the final signature and not against the more restrictive
`@overload` signatures. This allows some type errors to evade
detection by mypy:
```python
@@ -201,7 +201,7 @@ x: int = f("three!!")
```
Due to this potential for unsafety, we discourage overloading unless
it's absolutely necessary. Consider writing multiple functions with
different names instead.
See the [mypy overloading documentation][mypy-overloads] for more details.
@@ -213,12 +213,12 @@ See the [mypy overloading documentation][mypy-overloads] for more details.
### When is a type annotation justified?
Usually in fully typed code, mypy will protect you from writing a type
annotation that isn't justified by the surrounding code. But when you
need to write annotations at the border between untyped and typed
code, keep in mind that **a type annotation should always represent a
guarantee,** not an aspiration. If you have validated that some value
is an `int`, it can go in an `int` annotated variable. If you are
going to validate it later, it should not. When in doubt, an `object`
annotation is always safe.
Mypy understands many Python constructs like `assert`, `if`,
@@ -238,7 +238,7 @@ def f(x: object, y: Optional[str]) -> None:
It won't be able to do this narrowing if the validation is hidden behind
a function call, so sometimes it's helpful for a validation function
to return the type-narrowed value back to the caller even though the
caller already has it. (The validators in `zerver/lib/validator.py`
are examples of this pattern.)
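A minimal sketch of that pattern (not the actual `zerver/lib/validator.py` code):
```python
def check_string(var_name: str, val: object) -> str:
    # Returning the value, rather than just checking it, lets the caller keep the narrowed type.
    if not isinstance(val, str):
        raise TypeError(f"{var_name} is not a string")
    return val

def handle(payload: object) -> None:
    content = check_string("content", payload)
    print(content.strip())  # mypy knows content is a str here
```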
### Avoid the `Any` type
@@ -248,7 +248,7 @@ type](https://mypy.readthedocs.io/en/stable/dynamic_typing.html) for
interoperability with untyped code, but it is completely unchecked.
You can put a value of an arbitrary type into an expression of type
`Any`, and get a value of an arbitrary type out, and mypy will make
no effort to check that the input and output types match. So using
`Any` defeats the type safety that mypy would otherwise provide.
```python
@@ -281,8 +281,8 @@ alternatives first:
`isinstance(value, str)` test.
- If you really have no information about the type of a value, use the
**`object` type**. Since every type is a subtype of `object`, you
can correctly annotate any value as `object`. The [difference
between `Any` and
`object`](https://mypy.readthedocs.io/en/stable/dynamic_typing.html#any-vs-object)
is that mypy will check that you safely validate an `object` with
@@ -290,17 +290,17 @@ alternatives first:
type.
- A common way for `Any` annotations to sneak into your code is the
interaction with untyped third-party libraries. Mypy treats any
value imported from an untyped library as annotated with `Any`, and
treats any type imported from an untyped library as equivalent to
`Any`. Consider providing real type annotations for the library by
[**writing a stub file**](#mypy-stubs-for-third-party-modules).
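A small example (hypothetical code) of the `object`-versus-`Any` distinction from the list above:
```python
from typing import Any

def shout_any(x: Any) -> str:
    return x.upper()  # unchecked: mypy accepts this even if x has no .upper()

def shout_object(x: object) -> str:
    assert isinstance(x, str)  # mypy requires narrowing before .upper() may be called
    return x.upper()
```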
### Avoid `cast()`
The [`cast`
function](https://mypy.readthedocs.io/en/stable/casts.html) lets you
provide an annotation that Mypy will not verify. Obviously, this is
completely unsafe in general.
```python
@@ -329,7 +329,7 @@ Mypy allows you to ignore any type checking error with a
[`# type: ignore`
comment](https://mypy.readthedocs.io/en/stable/common_issues.html#spurious-errors-and-locally-silencing-the-checker),
but you should avoid this in the absence of a very good reason, such
as a bug in mypy itself. If there are no safe options for dealing
with the error, prefer an unchecked `cast`, since its unsafety is
somewhat more localized.
@@ -347,7 +347,7 @@ issue.
not checked against the `@overload` signatures.
- **Avoid `Callable[..., T]`** (with literal ellipsis `...`), since
mypy cannot check the types of arguments passed to it. Provide the
specific argument types (`Callable[[int, str], T]`) in simple cases,
or use [callback
protocols](https://mypy.readthedocs.io/en/stable/protocols.html#callback-protocols)
@@ -357,11 +357,11 @@ issue.
The [`Optional`
type](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html#built-in-types)
is for optional values, which are values that could be `None`. For
example, `Optional[int]` is equivalent to `Union[int, None]`.
The `Optional` type is **not for optional parameters** (unless they
are also optional values as above). This signature does not use the
`Optional` type:
```python
@@ -371,7 +371,7 @@ def func(flag: bool = False) -> str:
A collection such as `List` should only be `Optional` if `None` would
have a different meaning than the natural meaning of an empty
collection. For example:
- An include list where the default is to include everything should be
`Optional` with default `None`.
@@ -397,7 +397,7 @@ The basic Python collections
[`Dict`](https://docs.python.org/3/library/typing.html#typing.Dict),
and [`Set`](https://docs.python.org/3/library/typing.html#typing.Set)
are mutable, but it's confusing for a function to mutate a collection
that was passed to it as an argument, especially by accident. To
avoid this, prefer annotating function parameters with read-only
types:
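For instance (a generic sketch, not the documentation's own example):
```python
from typing import List, Mapping, Sequence

def tag_lengths(tags: Sequence[str], weights: Mapping[str, int]) -> List[int]:
    # Sequence and Mapping expose no mutating methods, so mypy rejects
    # accidental writes such as tags.append(...) or weights.pop(...).
    return [len(tag) * weights.get(tag, 1) for tag in tags]
```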
@@ -422,14 +422,14 @@ In some cases the more general
[`Collection`](https://docs.python.org/3/library/typing.html#typing.Collection)
or
[`Iterable`](https://docs.python.org/3/library/typing.html#typing.Iterable)
types might be appropriate. (But don't use `Iterable` for a value
that might be iterated multiple times, since a one-use iterator is
`Iterable` too.)
A function's return type can be mutable if the return value is always
a freshly created collection, since the caller ends up with the only
reference to the value and can freely mutate it without risk of
confusion. But a read-only return type might be more appropriate for
a function that returns a reference to an existing collection.
Read-only types have the additional advantage of being [covariant
@@ -497,7 +497,7 @@ def f(s: str) -> str:
But Mypy doesn't yet support the advanced type annotations that would
be needed to correctly type generic signature-changing decorators,
such as `zerver.decorator.authenticated_json_view`, which passes an
extra argument to the inner function. For these decorators we must
unfortunately give up some type safety by falling back to
`Callable[..., T]`.
@@ -505,7 +505,7 @@ unfortunately give up some type safety by falling back to
All of our linters, including mypy, are designed to only check files
that have been added in git (this is by design, since it means you
have untracked files in your Zulip checkout safely). So if you get a
`mypy` error like this after adding a new file that is referenced by
the existing codebase:

View File

@@ -1,13 +1,13 @@
# Testing philosophy
Zulip's automated tests are a huge part of what makes the project able
to make progress. This page records some of the key principles behind
how we have designed our automated test suites.
## Effective testing allows us to move quickly
Zulip's engineering strategy can be summarized as "move quickly
without breaking things". Despite reviewing many code submissions
from new contributors without deep expertise in the code they are
changing, Zulip's maintainers spend most of the time they spend
integrating changes on product decisions and code
@@ -17,9 +17,9 @@ lower-level issues.
This is possible because we have spent years systematically investing
in testing, tooling, code structure, documentation, and development
practices to help ensure that our contributors write code that needs
relatively few changes before it can be merged. The testing element
of this is to have reliable, extensive, easily extended test suites
that cover most classes of bugs. Our testing systems have been
designed to minimize the time spent manually testing or otherwise
investigating whether changes are correct.
@@ -34,7 +34,7 @@ While not every part of Zulip has a great test suite, many components
do, and for those components, the tests mean that new contributors can
often make substantive changes and have them be
more or less correct by the time they share the
changes for code review. More importantly, it means that maintainers
save most of the time that would otherwise be spent verifying that the
changes are simply correct, and instead focus on making sure that the
codebase remains readable, well-structured, and well-tested.
@@ -43,12 +43,12 @@ codebase remains readable, well-structured, and well-tested.
When automated test suites are slow or unreliable, developers will
avoid running them, and furthermore, avoid working on improving them
(both the system and individual tests). Because changes that make
tests slow or unreliable are often unintentional side effects of other
development, problems in this area tend to accumulate as a codebase
grows. As a result, barring focused effort to prevent this outcome,
any large software project will eventually have its test suite rot
into one that is slow, unreliable, untrustworthy, and hated. We aim
for Zulip to avoid that fate.
So we consider it essential to maintaining every automated test suite
@@ -62,7 +62,7 @@ suite (`test-js-with-node`) to run in under 10 seconds.
It'd be a long blog post to summarize everything we do to help achieve
these goals, but a few techniques are worth highlighting:
- Our test suites are designed to not access the Internet, since the
Internet might be down or unreliable in the test environment. Where
outgoing HTTP requests are required to test something, we mock the
responses with libraries like `responses`.
- We carefully avoid the potential for contamination of data inside
@@ -70,7 +70,7 @@ these goals, but a few techniques are worth highlighting:
- Every test case prepends a unique random prefix to all keys it
uses when accessing Redis and memcached.
- Every test case runs inside a database transaction, which is
aborted after the test completes. Each test process interacts
only with a fresh copy of a special template database used for
server tests that is destroyed after the process completes.
- We rigorously investigate non-deterministically failing tests as though
@@ -79,9 +79,9 @@ these goals, but a few techniques are worth highlighting:
## Integration testing or unit testing?
Developers frequently ask whether they should write "integration
tests" or "unit tests". Our view is that tests should be written
tests" or "unit tests". Our view is that tests should be written
against interfaces that you're already counting on keeping stable, or
already promising people you'll keep stable. In other words,
interfaces that you or other people are already counting on mostly not
changing except in compatible ways.
@@ -104,26 +104,26 @@ tiny internal functions, then any time you refactor something and
change the internal interfaces -- even though you just made them up,
and they're completely internal to that codebase so there's nothing
that will break if you change them at will -- you have to go deal with
editing a bunch of tests to match the new interfaces. That's
especially a lot of work if you try to take the tests seriously,
because you have to think through whether the tests breaking are
telling you something you should actually listen to.
In some big codebases, this can lead to tests feeling a lot like
busywork... and it's because a lot of those tests really are
busywork. And that leads to developers not being committed to
maintaining and expanding the test suite in a thoughtful way.
But if your tests are written against an external API, and you make
some refactoring change and a bunch of tests break... now that's
telling you something very real! You can always edit the tests... but
the tests are stand-ins for real users and real code out there beyond
your reach, which will break the same way.
So you can still make the change... but you have to deal with figuring
out an appropriate migration or backwards-compatibility strategy for
all those real users out there. Updating the tests is one of the easy
parts. And those changes to the tests are a nice reminder to code
reviewers that you've changed an interface, and the reviewer should
think carefully about whether those interface changes will be a
problem for any existing clients and whether they're properly reflected
@@ -167,7 +167,7 @@ So, to summarize our approach to integration vs. unit testing:
## Avoid duplicating code with security impact
Developing secure software with few security bugs is extremely
difficult. An important part of our strategy for avoiding security
logic bugs is to design patterns for how all of our code that
processes untrusted user input can be well tested without either
writing (and reviewing!) endless tests or requiring every developer to
@@ -177,7 +177,7 @@ Our strategy for this is to write a small number of carefully-designed
functions like `access_stream_by_id` that we test carefully, and then
use linting and other coding conventions to require that all access to
data from code paths that might share that data with users be mediated
through those functions. So rather than having each view function do
it own security checks for whether the user can access a given stream,
and needing to test each of those copies of the logic, we only need to
do that work once for each major type of data structure and level of
@@ -211,7 +211,7 @@ test conditions.
The benefit of this strategy is that you guarantee that the test setup
only differs as intended: Done well, it helps avoid the otherwise
extremely common failure mode where a `test_foo_failure` test passes
for the wrong reason. (E.g. the action fails not because of the
permission check, but because a required HTTP parameter was only added
to an adjacent `test_foo_success`).
@@ -228,7 +228,7 @@ designed to be robust to future refactoring.
But for some things, like documentation and CSS, the only way to test
is to view the element in a browser and try things that might not
work. What to test will vary with what is likely to break. For
example, after a significant change to Zulip's Markdown documentation,
if you haven't verified every special bit of formatting visually and
clicked every new link, there's a good chance that you've introduced a

View File

@@ -2,21 +2,21 @@
## Overview
Zulip uses the Django framework for its Python backend. We
use the testing framework from
[django.test](https://docs.djangoproject.com/en/1.10/topics/testing/)
to test our code. We have over a thousand automated tests that verify that
our backend works as expected.
All changes to the Zulip backend code should be supported by tests. We
enforce our testing culture during code review, and we also use
coverage tools to measure how well we test our code. We mostly use
tests to prevent regressions in our code, but the tests can have
ancillary benefits such as documenting interfaces and influencing
the design of our software.
If you have worked on other Django projects that use unit testing, you
will probably find familiar patterns in Zulip's code. This document
describes how to write tests for the Zulip backend, with a particular
emphasis on areas where we have either wrapped Django's test framework
or just done things that are kind of unique in Zulip.
@@ -26,7 +26,7 @@ or just done things that are kind of unique in Zulip.
Our tests live in `zerver/tests/`. You can run them with
`./tools/test-backend`. The tests run in parallel using multiple
threads in your development environment, and can finish in under 30s
on a fast machine. When you are in iterative mode, you can run
individual tests or individual modules, following the dotted.test.name
convention below:
@@ -36,7 +36,7 @@ cd /srv/zulip
```
There are many command line options for running Zulip tests, such
as a `--verbose` option. The
best way to learn the options is to use the online help:
```bash
@@ -44,23 +44,23 @@ best way to learn the options is to use the online help:
```
We also have ways to instrument our tests for finding code coverage,
URL coverage, and slow tests. Use the `-h` option to discover these
features. We also have a `--profile` option to facilitate profiling
tests.
Another thing to note is that our tests generally "fail fast," i.e. they
stop at the first sign of trouble. This is generally a good thing for
iterative development, but you can override this behavior with the
`--nonfatal-errors` option. A useful option to combine with that is
the `--rerun` option, which will rerun just the tests that failed in
the last test run.
**Webhook integrations**. For performance, `test-backend` with no
arguments will not run webhook integration tests (`zerver/webhooks/`),
which would otherwise account for about 25% of the total runtime.
When working on webhooks, we recommend instead running
`test-backend zerver/webhooks` manually (or better, the directory for
the specific webhooks you're working on). And of course our CI is
configured to always use `test-backend --include-webhooks` and run all
of the tests.
@@ -71,10 +71,10 @@ the rest of this document, and you can also read some of the existing tests
in `zerver/tests` to get a feel for the patterns we use.
A good practice is to get a "failing test" before you start to implement
your feature. First, it is a useful exercise to understand what needs to happen
in your tests before you write the code, as it can help drive out simple
design or help you make incremental progress on a large feature. Second,
you want to avoid introducing tests that give false positives. Ensuring
that a test fails before you implement the feature ensures that if somebody
accidentally regresses the feature in the future, the test will catch
the regression.
@@ -87,7 +87,7 @@ which contains our `ZulipTestCase` and `WebhookTestCase` classes.
### Setting up data for tests
All tests start with the same fixture data. (The tests themselves
update the database, but they do so inside a transaction that gets
rolled back after each of the tests complete. For more details on how the
fixture data gets set up, refer to `tools/setup/generate-fixtures`.)
@@ -118,7 +118,7 @@ Here are some example action methods that tests may use for data setup:
### Testing code that accesses the filesystem
Some tests need to access the filesystem (e.g. `test_upload.py` tests
for `LocalUploadBackend` and the data import tests). Doing
this correctly requires care to avoid problems like:
- Leaking files after every test (which are clutter and can eventually
run the development environment out of disk) or
@@ -138,7 +138,7 @@ To avoid these problems, you can do the following:
Our common testing infrastructure handles some of this for you,
e.g. it replaces `settings.LOCAL_UPLOADS_DIR` for each test process
with a unique path under `/var/<uuid>/test-backend`. And
`UploadSerializeMixin` manages some of the cleanup work for
`test_upload.py`.
@@ -164,7 +164,7 @@ Additionally, you can observe any calls made to your mocked object.
When writing tests, it often occurs that you make calls to functions
taking complex arguments. Creating a real instance of such an argument
would require the use of various different libraries, a lot of
boilerplate code, etc. Another scenario is that the tested code
accesses files or objects that don't exist at testing time. Finally,
it is good practice to keep tests independent from others. Mocks help
you to isolate test cases by simulating objects and methods irrelevant
@@ -314,8 +314,8 @@ On the other hand, if we had used `import os.urandom`, we would need to call `mo
#### Zulip mocking practices
For mocking we generally use the "mock" library and use `mock.patch` as
a context manager or decorator. We also take advantage of some context managers
from Django as well as our own custom helpers. Here is an example:
```python
with self.settings(RATE_LIMITING=True):
@@ -335,7 +335,7 @@ find several examples of doing this.
## Zulip testing philosophy
If there is one word to describe Zulip's philosophy for writing tests,
it is probably "flexible." (Hopefully "thorough" goes without saying.)
When in doubt, unless speed concerns are prohibitive,
you usually want your tests to be somewhat end-to-end, particularly
@@ -346,8 +346,8 @@ test suite...
### Endpoint tests
We strive to test all of our URL endpoints. The vast majority of Zulip
endpoints support a JSON interface. Regardless of the interface, an
endpoint test generally follows this pattern:
- Set up the data.
@@ -384,7 +384,7 @@ via Django.
### Fixture-driven tests
Particularly for testing Zulip's integrations with third party systems,
we strive to have a highly data-driven approach to testing. To give a
specific example, when we test our GitHub integration, the test code
reads a bunch of sample inputs from a JSON fixture file, feeds them
to our GitHub integration code, and then verifies the output against
@@ -410,20 +410,20 @@ A detailed description of mocks, along with useful coded snippets, can be found
In [zerver/tests/test_templates.py](https://github.com/zulip/zulip/blob/main/zerver/tests/test_templates.py)
we have a test that renders all of our backend templates with
a "dummy" context, to make sure the templates don't have obvious
errors. (These tests won't catch all types of errors; they are
just a first line of defense.)
### SQL performance tests
A common class of bug with Django systems is to handle bulk data in
an inefficient way, where the backend populates objects for join tables
with a series of individual queries that give O(N) latency. (The
remedy is often just to call `select_related()`, but sometimes it
requires a more subtle restructuring of the code.)
We try to prevent these bugs in our tests by using a context manager
called `queries_captured()` that captures the SQL queries used by
the backend during a particular operation. We make assertions about
those queries, often simply asserting that the number of queries is
below some threshold.
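A sketch of that pattern, assuming the import paths and `ZulipTestCase` helpers described in this document (details may differ):
```python
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import queries_captured

class BulkQueryCountTest(ZulipTestCase):
    def test_bulk_operation_query_count(self) -> None:
        with queries_captured() as queries:
            do_some_bulk_operation()  # hypothetical stand-in for the code under test
        # Keep the operation within a fixed query budget so an accidental
        # O(N) query pattern introduced later fails loudly.
        self.assert_length(queries, 8)
```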
@@ -432,21 +432,21 @@ below some threshold.
The Zulip backend has a mechanism where it will fetch initial data
for a client from the database, and then it will subsequently apply
some queued up events to that data structure before notifying
the client. The `BaseAction.do_test()` helper helps tests
verify that the application of those events via apply_events() produces
the same data structure as performing an action that generates said event.
This is a bit esoteric, but if you read the tests, you will see some of
the patterns. You can also learn more about our event system in the
[new feature tutorial](../tutorials/new-feature-tutorial.html#handle-database-interactions).
### Negative tests
It is important to verify error handling paths for endpoints, particularly
situations where we need to ensure that we don't return results to clients
with improper authentication or with limited authorization. A typical test
will call the endpoint with either a non-logged in client, an invalid API
key, or missing input fields. Then the test will call `assert_json_error()`
to verify that the endpoint is properly failing.
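A sketch of such a negative test, using a hypothetical endpoint and error message with the `ZulipTestCase` helpers named above:
```python
from zerver.lib.test_classes import ZulipTestCase

class ExampleNegativeTest(ZulipTestCase):
    def test_endpoint_requires_login(self) -> None:
        # No login is performed, so the endpoint should reject the request.
        result = self.client_get("/json/example-endpoint")
        # The exact error text depends on the endpoint being tested.
        self.assert_json_error(result, "Not logged in", status_code=401)
```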
## Testing considerations
@@ -458,26 +458,26 @@ If you have several tests repeating the same type of test setup,
consider making a setUp() method or a test helper.
- **Network independence** Our tests should still work if you don't
have an internet connection. For third party clients, you can simulate
their behavior using fixture data. For third party servers, you can
typically simulate their behavior using mocks.
- **Coverage** We have 100% line coverage on several of our backend
modules. You can use the `--coverage` option to generate coverage
reports, and new code should have 100% coverage, which generally
requires testing not only the "happy path" but also error handling
code and edge cases. It will generate a nice HTML report that you can
view right from your browser (the tool prints the URL where the report
is exposed in your development environment).
- **Console output** A properly written test should print nothing to
the console; use `with self.assertLogs` to capture and verify any
logging output. Note that we reconfigure various loggers in
`zproject/test_extra_settings.py` where the output is unlikely to be
interesting when running our test suite.
`test-backend --ban-console-output` checks for stray print statements.
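Inside a `ZulipTestCase` test method, that looks roughly like this (the logger name and triggering call are hypothetical):
```python
with self.assertLogs("zerver.lib.example", level="WARNING") as captured:
    trigger_the_warning()  # hypothetical call that is expected to log a warning
self.assertEqual(len(captured.output), 1)
self.assertIn("example warning text", captured.output[0])
```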
Note that `test-backend --coverage` will assert that
various specific files in the project have 100% test coverage and
throw an error if their coverage has fallen. One of our project goals
is to expand that checking to ever-larger parts of the codebase.

View File

@@ -1,7 +1,7 @@
# JavaScript/TypeScript unit tests
Our node-based unit tests system is the preferred way to test
JavaScript/TypeScript code in Zulip. We prefer it over the [Puppeteer
black-box whole-app testing](../testing/testing-with-puppeteer.md)
system, since it is much (>100x) faster and also easier to do correctly
than the Puppeteer system.
@@ -15,8 +15,8 @@ See `test-js-with-node --help` for useful options; even though the
whole suite is quite fast, it still saves time to run a single test by
name when debugging something.
The JS unit tests are written to work with node. You can find them
in `frontend_tests/node_tests`. Here is an example test from
`frontend_tests/node_tests/stream_data.js`:
```js
@@ -38,8 +38,8 @@ in `frontend_tests/node_tests`. Here is an example test from
```
The names of the node tests generally align with the names of the
modules they test. If you modify a JS module in `static/js` you should
see if there are corresponding tests in `frontend_tests/node_tests`. If
there are, you should strive to follow the patterns of the existing tests
and add your own tests.
@@ -69,7 +69,7 @@ working on or debugging the Zulip node tests.
Conceptually, the `zjquery` library provides minimal versions of most
`jQuery` DOM manipulation functions, and has a convenient system for
letting you set up return values for more complex functions. For
example, if the code you'd like to test calls `$obj.find()`, you can
use `$obj.set_find_results(selector, $value)` to set up `zjquery` so
that calls to `$obj.find(selector)` will return `$value`. See the unit
@@ -97,7 +97,7 @@ based on how other functions have been stubbed in the same file.
The other big challenge with doing unit tests for a JavaScript project
is that often one wants to limit the scope the production code being
run, just to avoid doing extra setup work that isn't relevant to the
code you're trying to test. For that reason, each unit test file
explicitly declares all of the modules it depends on, with a few
different types of declarations depending on whether we want to:
@@ -178,8 +178,8 @@ data/logic modules (UI modules are lower priority for unit testing).
Our node test system is pretty simple, and it's possible to configure
the native debugger features of popular editors to allow stepping
through the code. Below we document the editors where someone has put
together detailed instructions for how to do so. Contributions of
notes for other editors are welcome!
## Webstorm integration setup
@@ -201,7 +201,7 @@ These instructions assume you're using the Vagrant development environment.
- `Vagrant executable` should already be correctly set to `vagrant`.
- `Environment Variables` is not needed.
3. You'll now need to set up a WebStorm "Debug Configuration". Open
the `Run/Debug Configuration` menu and create a new `Node.js` config:
1. Under `Node interpreter:` click the 3 dots to the right side and
click on the little plus in the bottom left of the
@@ -217,7 +217,7 @@ These instructions assume you're using the Vagrant development environment.
1. Under `JavaScript file`, enter `frontend_tests/zjsunit/index.js`
-- this is the root script for Zulip's node unit tests.
Congratulations! You've now set up the integration.
## Running tests with the debugger

View File

@@ -26,7 +26,7 @@ of various useful helper functions defined in
The Puppeteer tests use a real Chromium browser (powered by
[puppeteer](https://github.com/puppeteer/puppeteer)), connected to a
real Zulip development server. These are black-box tests: Steps in a
Puppeteer test are largely things one might do as a user of the Zulip
webapp, like "Type this key", "Wait until this HTML element
appears/disappears", or "Click on this HTML element".
@@ -45,7 +45,7 @@ async function test_private_message_compose_shortcut(page) {
The test function presses the `x` key, waits for the
`#private_message_recipient` input element to appear, verifies its
content is empty, and then closes the compose box. The
`waitForSelector` step here (and in most tests) is critical; tests
that don't wait properly often fail nonderministically, because the
test will work or not depending on whether the browser updates the UI
@@ -70,14 +70,14 @@ integration](../testing/continuous-integration.md):
interactive to debug an issue in the normal Zulip development
environment than in the Puppeteer test suite.
- Does the change being tested adjust the HTML structure in a way that
affects any of the selectors used in the tests? If so, the test may
just need to be updated for your changes.
- Does the test fail deterministically when you run it locally using
E.g. `./tools/test-js-with-puppeteer compose.ts`? If so, you can
iteratively debug to see the failure.
- Does the test fail nondeterministically? If so, the problem is
likely that a `waitForSelector` statement is either missing or not
waiting for the right thing. Tests fail nondeterministically much
waiting for the right thing. Tests fail nondeterministically much
more often on very slow systems like those used for Continuous
Integration (CI) services because small races are amplified in those
environments; this often explains failures in CI that cannot be
@@ -115,7 +115,7 @@ These tools/features are often useful when debugging:
extra [Django settings](../subsystems/settings.md) from
`zproject/test_extra_settings.py` to configure an isolated database
so that the tests will not interfere/interact with a normal
development environment. The console output while running the tests
development environment. The console output while running the tests
includes the console output for the server; any Python exceptions
are likely actual bugs in the changes being tested.
@@ -143,22 +143,22 @@ notes above:
assumes the last step was processed by the browser (E.g. after you
click on a user's avatar, you need an explicit wait for the profile
popover to appear before you can try to click on a menu item in that
popover). This means that before essentially every action in your
popover). This means that before essentially every action in your
Puppeteer tests, you'll want to use `waitForSelector` or a similar
wait function to make sure the page or element is ready before you
interact with it. The [puppeteer docs site](https://pptr.dev/) is a
interact with it. The [puppeteer docs site](https://pptr.dev/) is a
useful reference for the available wait functions.
- When using `waitForSelector`, you always want to use the
`{visible: true}` option; otherwise the test will stop waiting as
soon as the target selector is present in the DOM even if it's
hidden. For the common UI pattern of having an element always be
hidden. For the common UI pattern of having an element always be
present in the DOM whose presence is managed via show/hide rather
than adding/removing it from the DOM, `waitForSelector` without
`visible: true` won't wait at all (see the short sketch after this list).
- The test suite uses a smaller set of default user accounts and other
data initialized in the database than the normal development
environment; specifically, it uses the same setup as the [backend
tests](../testing/testing-with-django.md). To see what differs from
tests](../testing/testing-with-django.md). To see what differs from
the development environment, check out the conditions on
`options["test_suite"]` in
`zilencer/management/commands/populate_db.py`.
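As a short sketch of the `{visible: true}` point above (the selector
here is hypothetical, chosen only to illustrate the show/hide pattern):

```ts
import type {Page} from "puppeteer";

// Hypothetical selector for an element that is always present in the
// DOM but only shown (e.g. by toggling `display: none`) after some
// user action.
const popover_selector = "#example-profile-popover";

async function wait_for_popover(page: Page): Promise<void> {
    // Without `visible: true`, this resolves as soon as the element
    // exists in the DOM -- even while it is still hidden -- so it
    // usually doesn't wait at all for show/hide-style elements.
    await page.waitForSelector(popover_selector);

    // With `visible: true`, this waits until the element is actually
    // shown, which is almost always what the test intends.
    await page.waitForSelector(popover_selector, {visible: true});
}
```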

View File

@@ -28,7 +28,7 @@ as follows:
```
However, you will rarely want to do this while actively developing,
because it takes a long time. Instead, your edit/refresh cycle will
because it takes a long time. Instead, your edit/refresh cycle will
typically involve running subsets of the tests with commands like these:
```bash
@@ -39,7 +39,7 @@ typically involve running subsets of the tests with commands like these:
./tools/test-js-with-node utils.js
```
The commands above will all run in just a few seconds. Many more
The commands above will all run in just a few seconds. Many more
useful options are discussed in each tool's documentation (e.g.
`./tools/test-backend --help`).
@@ -60,7 +60,7 @@ eventually work with, each with its own page detailing how it works:
Additionally, Zulip also has about a dozen smaller test suites:
- `tools/test-migrations`: Checks whether the `zerver/migrations`
migration content matches the models defined in `zerver/models.py`. See our
migration content matches the models defined in `zerver/models.py`. See our
[schema migration documentation](../subsystems/schema-migrations.md)
for details on how to do database migrations correctly.
- `tools/test-documentation`: Checks for broken links in this
@@ -72,12 +72,12 @@ Additionally, Zulip also has about a dozen smaller tests suites:
`zerver/openapi/python_examples.py`.
- `test-locked-requirements`: Verifies that developers didn't forget
to run `tools/update-locked-requirements` after modifying
`requirements/*.in`. See
`requirements/*.in`. See
[our dependency documentation](../subsystems/dependencies.md) for
details on the system this is verifying.
- `tools/check-capitalization`: Checks whether translated strings (aka
user-facing strings) correctly follow Zulip's capitalization
conventions. This requires some maintenance of an exclude list
conventions. This requires some maintenance of an exclude list
(`tools.lib.capitalization.IGNORED_PHRASES`) of proper nouns
mentioned in the Zulip project, but helps a lot in avoiding new
strings being added that don't match our style.
@@ -104,7 +104,7 @@ something valuable to helping keep Zulip bug-free.
## Internet access inside test suites
As a policy matter, the Zulip test suites should never make outgoing
HTTP or other network requests. This is important for 2 major
HTTP or other network requests. This is important for 2 major
reasons:
- Tests that make outgoing Internet requests will fail when the user
@@ -116,7 +116,7 @@ reasons:
developer time, and we try to avoid them wherever possible.
As a result, Zulip's major test suites should never access the
Internet directly. Since code in Zulip does need to access the
Internet directly. Since code in Zulip does need to access the
Internet (e.g. to access various third-party APIs), this means that
the Zulip tests use mocking to basically hardcode (for the purposes of
the test) what responses should be used for any outgoing Internet
@@ -125,7 +125,7 @@ requests that Zulip would make in the code path being tested.
This is easy to do using test fixtures (a fancy word for fixed data
used in tests) and the `mock.patch` function to specify what HTTP
response should be used by the tests for every outgoing HTTP (or other
network) request. Consult
network) request. Consult
[our guide on mocking](../testing/testing-with-django.html#zulip-mocking-practices) to
learn how to mock network requests easily; there are also a number of
examples throughout the codebase.
@@ -133,7 +133,7 @@ examples throughout the codebase.
We partially enforce this policy in the main Django/backend test suite
by overriding certain library functions that are used in outgoing HTTP
code paths (`httplib2.Http().request`, `requests.request`, etc.) to
throw an exception in the backend tests. While this enforcement is
throw an exception in the backend tests. While this enforcement is
not complete (there are a lot of other ways to use the Internet from
Python), it is easy to do and catches most common cases of new code
depending on Internet access.
@@ -151,9 +151,9 @@ This enforcement code results in the following exception:
The one exception to this policy is our documentation tests, which
will attempt to verify that the links included in our documentation
aren't broken. Those tests end up failing nondeterministically fairly
aren't broken. Those tests end up failing nondeterministically fairly
often, which is unfortunate, but there's simply no correct way
to verify links other than attempting to access them. The compromise
to verify links other than attempting to access them. The compromise
we've implemented is that in CI, these tests only verify links to
websites controlled by the Zulip project (zulip.com, our GitHub,
our ReadTheDocs), and not links to third-party websites.

View File

@@ -2,7 +2,7 @@
Zulip is early in the process of migrating our codebase to use
[TypeScript](https://www.typescriptlang.org/), the leading static type
system for JavaScript. It works as an extension of the ES6 JavaScript
system for JavaScript. It works as an extension of the ES6 JavaScript
standard, and provides similar benefits to our use of
[the mypy static type system for Python](../testing/mypy.md).
@@ -33,7 +33,7 @@ The following resources are valuable for learning TypeScript:
## Type checking
TypeScript types are checked by the TypeScript compiler, `tsc`, which
is run as part of our [lint checks](linters.md). You can run the
is run as part of our [lint checks](linters.md). You can run the
compiler yourself with `tools/run-tsc`, which will check all the
TypeScript files once, or `tools/run-tsc --watch`, which will
continually recheck the files as you edit them.
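As a generic example (not Zulip-specific code) of the kind of mistake
this catches, `tsc` reports roughly the following for a mismatched
argument type:

```ts
function add(a: number, b: number): number {
    return a + b;
}

// tsc reports something like:
// error TS2345: Argument of type 'string' is not assignable to
// parameter of type 'number'.
add("1", 2);
```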
@@ -41,13 +41,13 @@ continually recheck the files as you edit them.
## Linting and style
We use the ESLint plugin for TypeScript to lint TypeScript code, just
like we do for JavaScript. Our long-term goal is to use an idiomatic
like we do for JavaScript. Our long-term goal is to use an idiomatic
TypeScript style for our TypeScript codebase.
However, because we are migrating an established JavaScript codebase,
we plan to start with a style that is closer to the existing
JavaScript code, so that we can easily migrate individual modules
without too much code churn. A few examples:
without too much code churn. A few examples:
- TypeScript generally prefers explicit `return undefined;`, whereas
our existing JavaScript style uses just `return;` (see the brief illustration below).
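A schematic illustration of that difference (the function here is made
up for the example, not taken from the codebase):

```ts
// Long-term TypeScript style: be explicit about the returned value.
function get_nickname(name: string): string | undefined {
    if (name === "") {
        return undefined;
    }
    return name.toLowerCase();
}

// Existing JavaScript-style early return, kept during the migration:
function get_nickname_legacy(name: string): string | undefined {
    if (name === "") {
        return; // bare return; implicitly yields undefined
    }
    return name.toLowerCase();
}
```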
@@ -72,22 +72,22 @@ Our plan is to order which modules we migrate carefully, starting with
those that:
- Appear frequently as reverse dependencies of other modules
(e.g. `people.js`). These are most valuable to do first because
(e.g. `people.js`). These are most valuable to do first because
then we have types on the data being interacted with by other
modules when we migrate those.
- Don't have large open pull requests (to avoid merge conflicts); one
can scan for these using [TinglingGit](https://github.com/zulip/TinglingGit).
- Have good unit test coverage, which limits the risk of breaking
correctness through refactoring. Use
correctness through refactoring. Use
`tools/test-js-with-node --coverage` to get a coverage report.
When migrating a module, we want to be especially thoughtful about
putting together a commit structure that makes mistakes unlikely and
the changes easy to verify. E.g.:
the changes easy to verify. E.g.:
- First a commit that just converts the language to TypeScript, adding
types. The result may potentially have some violations of the
long-term style we want (e.g. not using `const`). Depending on how
types. The result may potentially have some violations of the
long-term style we want (e.g. not using `const`). Depending on how
we're handling linting, we set some override eslint rules at the top
of the module at this stage so CI still passes.
- Then a commit just migrating use of `var` to `const/let` without